# Asset Requirements (/docs/builderkit/asset-requirements)

---
title: Asset Requirements
description: "Required assets and file structure for chain and token logos."
---

# Asset Requirements

BuilderKit requires specific asset files for displaying chain and token logos. These assets should follow a standardized file structure and naming convention.

## Chain Logos

Chain logos are used by components like `ChainIcon`, `ChainDropdown`, and `TokenIconWithChain`.

### File Structure

Chain logos should be placed at:

```
/chains/logo/{chain_id}.png
```

### Examples

```
/chains/logo/43114.png   // Avalanche C-Chain
/chains/logo/43113.png   // Fuji Testnet
/chains/logo/173750.png  // Echo L1
```

### Requirements

- Format: PNG with transparency
- Dimensions: 32x32px (minimum)
- Background: Transparent
- Shape: Circular or square with rounded corners
- File size: < 100KB

## Token Logos

Token logos are used by components like `TokenIcon`, `TokenChip`, and `TokenRow`.

### File Structure

Token logos should be placed at:

```
/tokens/logo/{chain_id}/{address}.png
```

### Examples

```
/tokens/logo/43114/0x1234567890123456789012345678901234567890.png  // Token on C-Chain
/tokens/logo/43113/0x5678901234567890123456789012345678901234.png  // Token on Fuji
```

### Requirements

- Format: PNG with transparency
- Dimensions: 32x32px (minimum)
- Background: Transparent
- Shape: Circular or square with rounded corners
- File size: < 100KB

## Directory Structure

Your public assets directory should look like this:

```
public/
├── chains/
│   └── logo/
│       ├── 43114.png
│       ├── 43113.png
│       └── 173750.png
└── tokens/
    └── logo/
        ├── 43114/
        │   ├── 0x1234....png
        │   └── 0x5678....png
        └── 43113/
            ├── 0x9012....png
            └── 0xabcd....png
```

# Custom Chain Setup (/docs/builderkit/chains)

---
title: Custom Chain Setup
description: "Configure custom Avalanche L1 chains in your application."
---

# Custom Chain Setup

Learn how to configure custom Avalanche L1 chains in your BuilderKit application.
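The asset naming convention described above also determines the logo URLs you reference when defining chains. As a quick sketch (these helper functions are illustrative and not part of BuilderKit), the paths can be derived from a chain ID and token address:

```typescript
// Hypothetical helpers (not part of BuilderKit) that build logo URLs
// following the documented /chains/logo and /tokens/logo conventions.
function chainLogoPath(chainId: number): string {
  return `/chains/logo/${chainId}.png`;
}

function tokenLogoPath(chainId: number, address: string): string {
  // Store token addresses lowercase to avoid case-sensitivity issues
  // on static file hosts.
  return `/tokens/logo/${chainId}/${address.toLowerCase()}.png`;
}

console.log(chainLogoPath(173750)); // "/chains/logo/173750.png"
```

Deriving paths this way keeps asset references consistent with the directory structure shown above.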
## Chain Definition

Define your custom L1 chain using `viem`'s `defineChain`:

```tsx
import { defineChain } from "viem";

export const myL1 = defineChain({
  id: 173750,            // Your L1 chain ID
  name: 'My L1',         // Display name
  network: 'my-l1',      // Network identifier
  nativeCurrency: {
    decimals: 18,
    name: 'Token',
    symbol: 'TKN',
  },
  rpcUrls: {
    default: { http: ['https://api.avax.network/ext/L1/rpc'] },
  },
  blockExplorers: {
    default: { name: 'Explorer', url: 'https://explorer.avax.network/my-l1' },
  },
  // Optional: Custom metadata
  iconUrl: "/chains/logo/173750.png",  // Follows the /chains/logo/{chain_id}.png convention
  icm_registry: "0x..."                // ICM registry contract
});
```

## Provider Configuration

Add your custom L1 chain to the Web3Provider:

```tsx
import { Web3Provider } from '@avalabs/builderkit';
import { avalanche } from '@wagmi/core/chains';
import { myL1 } from './chains/definitions/my-l1';

const chains = [avalanche, myL1];

function App() {
  return (
    <Web3Provider chains={chains}>
      {/* Your application components */}
    </Web3Provider>
  );
}
```

## Required Properties

| Property | Type | Description |
|----------|------|-------------|
| `id` | `number` | Unique L1 chain identifier |
| `name` | `string` | Human-readable chain name |
| `network` | `string` | Network identifier |
| `nativeCurrency` | `object` | Chain's native token details |
| `rpcUrls` | `object` | RPC endpoint configuration |
| `blockExplorers` | `object` | Block explorer URLs |

## Optional Properties

| Property | Type | Description |
|----------|------|-------------|
| `iconUrl` | `string` | Chain logo URL |
| `icm_registry` | `string` | ICM registry contract address |
| `testnet` | `boolean` | Whether the chain is a testnet |

## Example: Echo L1

Here's a complete example using the Echo L1:

```tsx
import { defineChain } from "viem";

export const echo = defineChain({
  id: 173750,
  name: 'Echo L1',
  network: 'echo',
  nativeCurrency: {
    decimals: 18,
    name: 'Ech',
    symbol: 'ECH',
  },
  rpcUrls: {
    default: { http: ['https://subnets.avax.network/echo/testnet/rpc'] },
  },
  blockExplorers: {
    default: { name: 'Explorer', url: 'https://subnets-test.avax.network/echo' },
  },
  iconUrl: "/chains/logo/173750.png",
  icm_registry: "0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228"
});
```

# Contribute (/docs/builderkit/contribute)

---
title: Contribute
description: "Guide for contributing to BuilderKit by building hooks, components, and flows."
---

# Contributing to BuilderKit

We welcome contributions to BuilderKit! Whether you're fixing bugs, adding new features, or improving documentation, your help makes BuilderKit better for everyone.

## What You Can Contribute

### Hooks

Build reusable hooks that handle common Web3 functionality:

- Chain data management
- Token interactions
- Contract integrations
- State management
- API integrations

### Components

Create new UI components or enhance existing ones:

- Form elements
- Display components
- Interactive elements
- Layout components
- Utility components

### Flows

Design complete user journeys by combining components:

- Token swaps
- NFT minting
- Governance voting
- Staking interfaces
- Custom protocols

# Getting Started (/docs/builderkit/getting-started)

---
title: Getting Started
description: "Quick setup guide for BuilderKit in your React application."
---

Get started with BuilderKit in your React application.
## Installation

```bash
npm install @avalabs/builderkit
# or
yarn add @avalabs/builderkit
```

## Provider Setup

Wrap your application with the Web3Provider to enable wallet connections and chain management:

```tsx
import { Web3Provider } from '@avalabs/builderkit';
import { avalanche, avalancheFuji } from '@wagmi/core/chains';
import { echo } from './chains/definitions/echo';
import { dispatch } from './chains/definitions/dispatch';

// Configure chains
const chains = [avalanche, avalancheFuji, echo, dispatch];

function App() {
  return (
    <Web3Provider chains={chains}>
      {/* Your application components */}
    </Web3Provider>
  );
}
```

## Next Steps

- Learn about [Token Configuration](/docs/builderkit/tokens)
- Explore [Core Components](/docs/builderkit/components/control)
- Check out [Pre-built Flows](/docs/builderkit/flows/ictt)

# Introduction (/docs/builderkit)

---
title: Introduction
description: "A comprehensive React component library for building Web3 applications on Avalanche."
---

BuilderKit is a powerful collection of React components and hooks designed specifically for building Web3 applications on Avalanche. It provides everything you need to create modern, user-friendly blockchain applications with minimal effort.
## Ready to Use Components

BuilderKit offers a comprehensive set of components that handle common Web3 functionalities:

- **Control Components**: Buttons, forms, and wallet connection interfaces
- **Identity Components**: Address displays and domain name resolution
- **Token Components**: Balance displays, inputs, and price conversions
- **Input Components**: Specialized form inputs for Web3 data types
- **Chain Components**: Network selection and chain information displays
- **Transaction Components**: Transaction submission and status tracking
- **Collectibles Components**: NFT displays and collection management

## Powerful Hooks

BuilderKit provides hooks for seamless integration with Avalanche's ecosystem:

### Blockchain Interaction

Access and manage blockchain data, tokens, and cross-chain operations with hooks for chains, tokens, DEX interactions, and inter-chain transfers.

### Precompile Integration

Easily integrate with Avalanche's precompiled contracts for access control, fee management, native minting, rewards, and cross-chain messaging.

## Getting Started

Get started quickly by installing BuilderKit in your React application:

```bash
npm install @avalabs/builderkit
# or
yarn add @avalabs/builderkit
```

Check out our [Getting Started](/docs/builderkit/getting-started) guide to begin building your Web3 application.

# Token Configuration (/docs/builderkit/tokens)

---
title: Token Configuration
description: "Guide for configuring tokens in BuilderKit flows."
---

# Token Configuration

BuilderKit flows require proper token configuration to function correctly. This guide explains the required fields for different token configurations.
## Basic Token Structure

All tokens in BuilderKit share a common base structure with these required fields:

```typescript
interface BaseToken {
  // Contract address of the token, use "native" for native chain token
  address: string;
  // Human-readable name of the token
  name: string;
  // Token symbol/ticker
  symbol: string;
  // Number of decimal places the token uses
  decimals: number;
  // ID of the chain where this token exists
  chain_id: number;
}
```

## ICTT Token Fields

ICTT tokens extend the base structure with additional fields for cross-chain functionality:

```typescript
interface ICTTToken extends BaseToken {
  // Whether this token can be used with ICTT
  supports_ictt: boolean;
  // Address of the contract that handles transfers
  transferer?: string;
  // Whether this token instance is a transferer
  is_transferer?: boolean;
  // Information about corresponding tokens on other chains
  mirrors: {
    // Contract address of the mirrored token
    address: string;
    // Transferer contract on the mirror chain
    transferer: string;
    // Chain ID where the mirror exists
    chain_id: number;
    // Decimal places of the mirrored token
    decimals: number;
    // Whether this is the home/original chain
    home?: boolean;
  }[];
}
```

## Field Requirements

### Base Token Fields

- `address`: Must be a valid contract address or "native"
- `name`: Should be human-readable
- `symbol`: Should match the token's trading symbol
- `decimals`: Must match the token's contract configuration
- `chain_id`: Must be a valid chain ID

### ICTT-Specific Fields

- `supports_ictt`: Required for ICTT functionality
- `transferer`: Required if token supports ICTT
- `is_transferer`: Optional, indicates if token is a transferer
- `mirrors`: Required for ICTT, must contain at least one mirror configuration

### Mirror Configuration Fields

- `address`: Required, contract address on mirror chain
- `transferer`: Required, transferer contract on mirror chain
- `chain_id`: Required, must be different from token's chain_id
- `decimals`: Required, must match the token contract
- `home`: Optional, indicates original/home chain

# Getting Started (/docs/avalanche-l1s)

---
title: Getting Started
description: As you begin your Avalanche L1 journey, it's useful to look at the lifecycle of taking an Avalanche L1 from idea to production.
---

## Figure Out Your Needs

The first step of planning your Avalanche L1 is determining your application's needs. What features do you need that the Avalanche C-Chain doesn't provide?

### When to Choose an Avalanche L1

Building your own Avalanche L1 is a great choice when your project demands capabilities beyond those offered by the C-Chain. For instance, if you need the flexibility to use a custom gas token, require strict access control (for example, by only permitting users who are KYC-verified), or wish to implement a unique transaction fee model, then an Avalanche L1 can provide the necessary options. In addition, if having a completely sovereign network with its own governance and consensus mechanisms is central to your vision, an Avalanche L1 is likely the best path forward.

### Decide What Type of Avalanche L1 You Want

After confirming that an Avalanche L1 suits your project's requirements, the next step is to select the type of virtual machine (VM) that will power your blockchain.

#### EVM-Based Avalanche L1s

The majority of Avalanche L1s use the Ethereum Virtual Machine. They support Solidity smart contracts and standard [Ethereum APIs](/docs/api-reference/c-chain/api#ethereum-apis). Ava Labs' implementation, [Subnet-EVM](https://github.com/ava-labs/subnet-evm), is the most mature option available. It is recognized for its robust developer tooling and regular updates, making it the safest and most popular choice for building your blockchain.

#### Custom Avalanche L1s

Custom Avalanche L1s offer an open-ended interface that enables you to build any virtual machine you envision. Whether you fork an existing VM such as Subnet-EVM, integrate a non-Avalanche-native VM like Solana's, or build a completely new VM in any programming language you prefer, the choice is yours. For guidance on how to get started with VM development, see [Introduction to VMs](/docs/virtual-machines).

### Determine Tokenomics

Avalanche L1s are powered by gas tokens, and building your own blockchain gives you the flexibility to determine which token to use and how to distribute it. Whether you decide to leverage AVAX, adapt an existing C-Chain token, or launch a new token entirely, you'll need to plan the allocation of tokens for validator rewards, establish an emission schedule, and decide whether transaction fees should be burned or redistributed as block rewards.

### Decide How to Customize Your Avalanche L1

Once you have selected your virtual machine, further customization may be necessary to align the blockchain with your specific needs. This might involve configuring the token allocation in the genesis block, setting gas fee rates, or making changes to the VM's behavior through precompiles. Such customizations often require careful iterative testing to perfect. For detailed instructions, refer to [Customize Your EVM-Powered Avalanche L1](/docs/avalanche-l1s/upgrade/customize-avalanche-l1).
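To make the genesis-level customization above concrete, here is a minimal sketch of a Subnet-EVM genesis fragment that pre-allocates the native token and sets fee parameters. The address, balance, and fee values are illustrative placeholders, not recommendations; consult the customization guide linked above for the full schema.

```json
{
  "config": {
    "chainId": 173750,
    "feeConfig": {
      "gasLimit": 20000000,
      "targetBlockRate": 2,
      "minBaseFee": 1000000000,
      "targetGas": 100000000,
      "baseFeeChangeDenominator": 36,
      "minBlockGasCost": 0,
      "maxBlockGasCost": 1000000,
      "blockGasCostStep": 200000
    }
  },
  "alloc": {
    "0x1234567890123456789012345678901234567890": {
      "balance": "0x295BE96E64066972000000"
    }
  },
  "gasLimit": "0x1312D00",
  "difficulty": "0x0",
  "timestamp": "0x0"
}
```

The `alloc` section mints the initial native-token supply to the listed addresses at genesis, while `feeConfig` fixes the initial fee parameters (which can later be changed via the FeeManager precompile).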
### Available Subnet-EVM Precompiles

The Subnet-EVM provides several precompiled contracts that you can use in your Avalanche L1 blockchain:

- [AllowList Interface](/docs/avalanche-l1s/evm-configuration/allowlist) - A reusable interface for permission management
- [Permissions](/docs/avalanche-l1s/evm-configuration/permissions) - Control contract deployment and transaction submission
- [Tokenomics](/docs/avalanche-l1s/evm-configuration/tokenomics) - Manage native token supply and minting
- [Transaction Fees & Validator Rewards](/docs/avalanche-l1s/evm-configuration/transaction-fees) - Configure fee parameters and reward mechanisms
- [Warp Messenger](/docs/avalanche-l1s/evm-configuration/warpmessenger) - Perform cross-chain operations

# WAGMI Avalanche L1 (/docs/avalanche-l1s/wagmi-avalanche-l1)

---
title: WAGMI Avalanche L1
description: Learn about the WAGMI Avalanche L1 in this detailed case study.
---

This is one of the first cases of using Avalanche L1s as a proving ground for changes in a production VM (Coreth). Many underestimate how useful the isolation of Avalanche L1s is for performing complex VM testing on a live network (without impacting the stability of the primary network).

We created a basic WAGMI Explorer [https://subnets-test.avax.network/wagmi](https://subnets-test.avax.network/wagmi) that surfaces aggregated usage statistics about the Avalanche L1.
- SubnetID: [28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY](https://explorer-xp.avax-test.network/avalanche-l1/28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY?tab=validators)
- ChainID: [2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt](https://testnet.avascan.info/blockchain/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt)

### Network Parameters

- NetworkID: 11111
- ChainID: 11111
- Block Gas Limit: 20,000,000 (2.5x C-Chain)
- 10s Gas Target: 100,000,000 (~6.67x C-Chain)
- Min Fee: 1 Gwei (4% of C-Chain)
- Target Block Rate: 2s (same as C-Chain)

The genesis file of WAGMI can be found [here](https://github.com/ava-labs/public-chain-assets/blob/1951594346dcc91682bdd8929bcf8c1bf6a04c33/chains/11111/genesis.json).

### Adding WAGMI to Core

- Network Name: WAGMI
- RPC URL: https://subnets.avax.network/wagmi/wagmi-chain-testnet/rpc
- WS URL: wss://subnets.avax.network/wagmi/wagmi-chain-testnet/ws
- Chain ID: 11111
- Symbol: WGM
- Explorer: https://subnets.avax.network/wagmi/wagmi-chain-testnet/explorer

These settings can be used with other wallets too, such as MetaMask.

## Case Study: WAGMI Upgrades

This case study uses a [WAGMI](https://subnets-test.avax.network/wagmi) Avalanche L1 upgrade to show how a network upgrade on an EVM-based (Ethereum Virtual Machine) Avalanche L1 can be done simply, and how the resulting upgrade can be used to dynamically control the fee structure on the Avalanche L1.

### Introduction

[Subnet-EVM](https://github.com/ava-labs/subnet-evm) aims to provide an easy-to-use toolbox to customize the EVM for your blockchain. It is meant to run out of the box for many Avalanche L1s without any modification. But what happens when you want to add a new feature updating the rules of your EVM?

Instead of hard-coding the timing of network upgrades in client code like most EVM chains, which requires coordinated deployments of new code, [Subnet-EVM v0.2.8](https://github.com/ava-labs/subnet-evm/releases/tag/v0.2.8) introduced the long-awaited ability to perform network upgrades with just a few lines of JSON in a configuration file.

### Network Upgrades: Enable/Disable Precompiles

A detailed description of how to do this can be found in the [Customize an Avalanche L1](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#network-upgrades-enabledisable-precompiles) tutorial. Here's a summary:

1. Network upgrades utilize existing precompiles on the Subnet-EVM:
   - ContractDeployerAllowList, for restricting smart contract deployers
   - TransactionAllowList, for restricting who can submit transactions
   - NativeMinter, for minting native coins
   - FeeManager, for configuring dynamic fees
   - RewardManager, for enabling block rewards
2. Each of these precompiles can be individually enabled or disabled at a given timestamp as a network upgrade, or any of the parameters governing its behavior can be changed.
3. These upgrades must be specified in a file named `upgrade.json` placed in the same directory where [`config.json`](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#avalanchego-chain-configs) resides: `{chain-config-dir}/{blockchainID}/upgrade.json`.

### Preparation

To prepare for the first WAGMI network upgrade, we made an announcement on [X](https://x.com/AaronBuchwald/status/1559249414102720512) on August 15, 2022, and shared it on other social media such as Discord. For the second upgrade, on February 24, 2024, we had another announcement on [X](https://x.com/jceyonur/status/1760777031858745701?s=20).
### Deploying upgrade.json

The content of the `upgrade.json` is:

```json
{
  "precompileUpgrades": [
    {
      "feeManagerConfig": {
        "adminAddresses": ["0x6f0f6DA1852857d7789f68a28bba866671f3880D"],
        "blockTimestamp": 1660658400
      }
    },
    {
      "contractNativeMinterConfig": {
        "blockTimestamp": 1708696800,
        "adminAddresses": ["0x6f0f6DA1852857d7789f68a28bba866671f3880D"],
        "managerAddresses": ["0xadFA2910DC148674910c07d18DF966A28CD21331"]
      }
    }
  ]
}
```

With the above `upgrade.json`, we intend to perform two network upgrades:

1. The first upgrade activates the FeeManager precompile:
   - `0x6f0f6DA1852857d7789f68a28bba866671f3880D` is named as the new admin of the FeeManager precompile.
   - `1660658400` is the [Unix timestamp](https://www.unixtimestamp.com/) for Tue Aug 16 2022 14:00:00 GMT+0000 (a time in the future when we made the announcement) when the FeeManager change would take effect.
2. The second upgrade activates the NativeMinter precompile:
   - `0x6f0f6DA1852857d7789f68a28bba866671f3880D` is named as the new admin of the NativeMinter precompile.
   - `0xadFA2910DC148674910c07d18DF966A28CD21331` is named as the new manager of the NativeMinter precompile. Manager addresses are enabled after the Durango upgrade, which occurred on February 13, 2024.
   - `1708696800` is the [Unix timestamp](https://www.unixtimestamp.com/) for Fri Feb 23 2024 14:00:00 GMT+0000 (a time in the future when we made the announcement) when the NativeMinter change would take effect.

Detailed explanations of `feeManagerConfig` can be found [here](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#configuring-dynamic-fees), and of `contractNativeMinterConfig` [here](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#minting-native-coins).

We place the `upgrade.json` file in the chain config directory, which in our case is `~/.avalanchego/configs/chains/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/`. After that, we restart the node so the upgrade file is loaded.
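Since `blockTimestamp` values are raw Unix timestamps, it's worth sanity-checking them against the announced schedule before restarting the node. A quick sketch:

```typescript
// Convert the upgrade activation timestamps (seconds) to UTC strings
// to confirm they match the announced activation times.
const upgrades = [
  { name: "feeManagerConfig", blockTimestamp: 1660658400 },
  { name: "contractNativeMinterConfig", blockTimestamp: 1708696800 },
];

for (const u of upgrades) {
  // Date expects milliseconds, so multiply by 1000.
  console.log(u.name, new Date(u.blockTimestamp * 1000).toISOString());
}
// feeManagerConfig 2022-08-16T14:00:00.000Z
// contractNativeMinterConfig 2024-02-23T14:00:00.000Z
```

A timestamp that is already in the past when the node restarts would activate the precompile immediately, so this check is cheap insurance.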
When the node restarts, AvalancheGo reads the contents of the JSON file and passes it into Subnet-EVM. We see a log of the chain configuration that includes the updated precompile upgrade. It looks like this:

```bash
INFO [02-22|18:27:06.473] <2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt Chain> github.com/ava-labs/subnet-evm/core/blockchain.go:335: Upgrade Config: {"precompileUpgrades":[{"feeManagerConfig":{"adminAddresses":["0x6f0f6da1852857d7789f68a28bba866671f3880d"],"blockTimestamp":1660658400}},{"contractNativeMinterConfig":{"adminAddresses":["0x6f0f6da1852857d7789f68a28bba866671f3880d"],"managerAddresses":["0xadfa2910dc148674910c07d18df966a28cd21331"],"blockTimestamp":1708696800}}]}
```

We note that `precompileUpgrades` correctly shows the upcoming precompile upgrades. The upgrade is locked in and ready.

### Activations

When the time passed 10:00 AM EDT August 16, 2022 (Unix timestamp 1660658400), the upgrade executed as planned and the new FeeManager admin address was activated. From then on, we didn't need to ship any new code or deploy anything on the WAGMI nodes to change the fee structure. Let's see how it works in practice!

For the second upgrade, on February 23, 2024, the same process was followed. The upgrade executed after Durango, as planned, and the new NativeMinter admin and manager addresses were activated.

### Using Fee Manager

The owner `0x6f0f6DA1852857d7789f68a28bba866671f3880D` can now configure the fees on the Avalanche L1 as they see fit. To do that, all that's needed is access to the network, the private key for the newly set admin address, and making calls on the precompiled contract.

We will use the [Remix](https://remix.ethereum.org/) online Solidity IDE and the [Core Browser Extension](https://support.avax.network/en/articles/6066879-core-extension-how-do-i-add-the-core-extension).
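Core ships with WAGMI built in; for wallets where you must add the network manually, the parameters from the "Adding WAGMI to Core" section above map onto an EIP-3085 `wallet_addEthereumChain` payload. A sketch (the object shape follows EIP-3085; URLs and values are taken from the network parameters listed earlier):

```typescript
// EIP-3085 AddEthereumChainParameter for the WAGMI L1.
// chainId must be a 0x-prefixed hex string: 11111 === 0x2b67.
const wagmiChainParams = {
  chainId: "0x" + (11111).toString(16),
  chainName: "WAGMI",
  nativeCurrency: { name: "WGM", symbol: "WGM", decimals: 18 },
  rpcUrls: ["https://subnets.avax.network/wagmi/wagmi-chain-testnet/rpc"],
  blockExplorerUrls: ["https://subnets.avax.network/wagmi/wagmi-chain-testnet/explorer"],
};

// In a browser with an injected provider, this would be submitted as:
// await window.ethereum.request({ method: "wallet_addEthereumChain", params: [wagmiChainParams] });
console.log(wagmiChainParams.chainId); // "0x2b67"
```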
Core comes with the WAGMI network built in. MetaMask will do as well, but you will need to [add WAGMI](/docs/avalanche-l1s/wagmi-avalanche-l1) yourself.

First, using Core, we open the account of the owner `0x6f0f6DA1852857d7789f68a28bba866671f3880D`. Then we connect Core to WAGMI: switch on `Testnet Mode` on the `Advanced` page in the hamburger menu:

![Core Testnet mode](/images/wagmi1.png)

Then open the `Manage Networks` menu in the networks dropdown. Select WAGMI there by clicking the star icon:

![Core network selection](/images/wagmi2.png)

We then switch to WAGMI in the networks dropdown. We are ready to move on to Remix now, so we open it in the browser.

First, we check that Remix sees the extension and correctly talks to it. We select the `Deploy & run transactions` icon on the left edge, and in the Environment dropdown, select `Injected Provider`. We need to approve the Remix network access in the Core browser extension. When that is done, `Custom (11111) network` is shown:

![Injected provider](/images/wagmi3.png)

Good, we're talking to the WAGMI Avalanche L1. Next we need to load the contracts into Remix. Using the 'load from GitHub' option from the Remix home screen, we load two contracts:

- [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IAllowList.sol)
- [IFeeManager.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IFeeManager.sol)

IFeeManager is our precompile, but it references IAllowList, so we need that one as well. We compile IFeeManager.sol and use the deployed contract at the precompile address `0x0200000000000000000000000000000000000003` used on the [Avalanche L1](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/feemanager/module.go#L21).

![Deployed contract](/images/wagmi4.png)

Now we can interact with the FeeManager precompile from within Remix via Core. For example, we can use the `getFeeConfig` method to check the current fee configuration. This action can be performed by anyone, as it is just a read operation.

Once we have the new desired fee configuration for the Avalanche L1, we can use `setFeeConfig` to change the parameters. This action can **only** be performed by the owner `0x6f0f6DA1852857d7789f68a28bba866671f3880D`, the `adminAddress` specified in the [`upgrade.json` above](#deploying-upgradejson).

![setFeeConfig](/images/wagmi5.png)

When we call that method by pressing the `transact` button, a new transaction is posted to the Avalanche L1, and we can see it on [the explorer](https://subnets-test.avax.network/wagmi/block/0xad95ccf04f6a8e018ece7912939860553363cc23151a0a31ea429ba6e60ad5a3):

![transaction](/images/wagmi6.png)

Immediately after the transaction is accepted, the new fee config takes effect. We can check with `getFeeConfig` that the values are reflected in the active fee config (again, this action can be performed by anyone):

![getFeeConfig](/images/wagmi7.png)

That's it, fees changed! No network upgrades, no complex and risky deployments, just a simple contract call and the new fee configuration is in place!

### Using NativeMinter

For the NativeMinter, we can use the same process to connect to the Avalanche L1 and interact with the precompile. We can load the INativeMinter interface using the 'load from GitHub' option from the Remix home screen with the following contracts:

- [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IAllowList.sol)
- [INativeMinter.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/INativeMinter.sol)

We can compile them and interact with the deployed contract at the precompile address `0x0200000000000000000000000000000000000001` used on the [Avalanche L1](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/nativeminter/module.go#L22).
![Deployed contract](/images/wagmi8.png)

The native minter precompile is used to mint native coins to specified addresses. The minted coins are added to the current supply and can be used by the recipients to pay for gas fees. For more information about the native minter precompile, see [here](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#minting-native-coins).

The `mintNativeCoin` method can only be called by enabled, manager, and admin addresses. For this upgrade we added both an admin and a manager address in the [`upgrade.json` above](#deploying-upgradejson). Manager addresses became available after the Durango upgrade, which occurred on February 13, 2024. We will use the manager address `0xadfa2910dc148674910c07d18df966a28cd21331` to mint native coins.

![mintNativeCoin](/images/wagmi9.png)

When we call that method by pressing the `transact` button, a new transaction is posted to the Avalanche L1, and we can see it on [the explorer](https://subnets-test.avax.network/wagmi/tx/0xc4aaba7b5863c1b8f6664ac1d483e2d7d392ab58d1a8feb0b6c318cbae7f1e93):

![tx](/images/wagmi10.png)

As a result of this transaction, the native minter precompile minted a new native coin (1 WGM) to the recipient address `0xB78cbAa319ffBD899951AA30D4320f5818938310`. The address page on the explorer [here](https://subnets-test.avax.network/wagmi/address/0xB78cbAa319ffBD899951AA30D4320f5818938310) shows no incoming transaction; this is because the 1 WGM was minted directly by the EVM itself, without any sender.

### Conclusion

Network upgrades can be complex and perilous procedures to carry out safely. Our continuing effort with Avalanche L1s is to make upgrades as painless and simple as possible. With the powerful combination of stateful precompiles and network upgrades via upgrade configuration files, we have managed to greatly simplify both network upgrades and network parameter changes. This in turn enables much safer experimentation and many new use cases that were too risky and complex to carry out under the high-coordination requirements of traditional network upgrade mechanisms.

We hope this case study helps spark ideas for new things you may try on your own. We're looking forward to seeing what you have built and how easy upgrades help you manage your Avalanche L1s! If you have any questions or issues, feel free to contact us on our [Discord](https://chat.avalabs.org/). Or just reach out to tell us what exciting new things you have built!

# Why Build Avalanche L1s (/docs/avalanche-l1s/when-to-build-avalanche-l1)

---
title: Why Build Avalanche L1s
description: Learn key concepts to decide when to build your own Avalanche L1.
---

## Why Build Your Own Avalanche L1

There are many advantages to running your own Avalanche L1. If you find one or more of these a good match for your project, then an Avalanche L1 might be a good solution for you.

### We Want Our Own Gas Token

C-Chain is an Ethereum Virtual Machine (EVM) chain; it requires gas fees to be paid in its native token. That is, an application may create its own utility tokens (ERC-20) on the C-Chain, but the gas must be paid in AVAX. [Subnet-EVM](https://github.com/ava-labs/subnet-evm), on the other hand, effectively creates an application-specific EVM chain with full control over native (gas) coins. The operator can pre-allocate the native tokens in the chain genesis and mint more using the [Subnet-EVM](https://github.com/ava-labs/subnet-evm) precompile contract. These fees can either be burned (as AVAX is burned on C-Chain) or configured to be sent to an address, which can be a smart contract.

Note that the Avalanche L1 gas token is specific to the application on the chain, and thus unknown to external parties. Moving assets to other chains requires trusted bridge contracts (or the upcoming cross-Avalanche L1 communication feature).
### We Want Higher Throughput The primary goal of the gas limit on C-Chain is to restrict the block size and therefore prevent network saturation. If a block can be arbitrarily large, it takes longer to propagate, potentially degrading the network performance. The C-Chain gas limit acts as a deterrent against any system abuse but can be quite limiting for high throughput applications. Unlike C-Chain, Avalanche L1 can be single-tenant, dedicated to the specific application, and thus host its own set of validators with higher bandwidth requirements, which allows for a higher gas limit thus higher transaction throughput. Plus, [Subnet-EVM](https://github.com/ava-labs/subnet-evm) supports fee configuration upgrades that can be adaptive to the surge in application traffic. Avalanche L1 workloads are isolated from the Primary Network; which means, the noisy neighbor effect of one workload (for example NFT mint on C-Chain) cannot destabilize the Avalanche L1 or surge its gas price. This failure isolation model in the Avalanche L1 can provide higher application reliability. ### We Want Strict Access Control The C-Chain is open and permissionless where anyone can deploy and interact with contracts. However, for regulatory reasons, some applications may need a consistent access control mechanism for all on-chain transactions. With [Subnet-EVM](https://github.com/ava-labs/subnet-evm), an application can require that “only authorized users may deploy contracts or make transactions.” Allow-lists are only updated by the administrators, and the allow list itself is implemented within the precompile contract, thus more transparent and auditable for compliance matters. ### We Need EVM Customization If your project is deployed on the C-Chain then your execution environment is dictated by the setup of the C-Chain. Changing any of the execution parameters means that the configuration of the C-Chain would need to change, and that is expensive, complex and difficult to change. 
So if your project needs other capabilities, different execution parameters, or precompiles that the C-Chain does not provide, then an Avalanche L1 is the solution you need. You can configure the EVM in an Avalanche L1 to run however you want, adding precompiles and setting runtime parameters to whatever your project needs.

### We Need Custom Validator Management

With the Etna upgrade, L1s can implement their own validator management logic through a _ValidatorManager_ smart contract. This gives you complete control over your validator set, allowing you to define custom staking rules, implement permissionless proof-of-stake with your own token, or create permissioned proof-of-authority networks. The validator management can be handled directly through smart contracts, giving you programmatic control over validator selection and rewards distribution.

### We Want to Build a Sovereign Network

L1s on Avalanche are truly sovereign networks that operate independently without relying on other systems. You have complete control over your network's consensus mechanisms, transaction processing, and security protocols. This independence allows you to scale horizontally without dependencies on other networks while maintaining full control over your network parameters and upgrades. This sovereignty is particularly important for projects that need complete autonomy over their blockchain's operation and evolution.

## Conclusion

Here we presented some considerations in favor of running your own Avalanche L1 vs. deploying on the C-Chain. If an application has a relatively low transaction rate and no special circumstances that would make the C-Chain a non-starter, you can begin with a C-Chain deployment to leverage existing technical infrastructure, and later expand to an Avalanche L1.
That way you can focus on working on the core of your project, and once you have a solid product/market fit and have gained enough traction that the C-Chain is constricting you, plan a move to your own Avalanche L1. Of course, we're happy to talk to you about your architecture and help you choose the best path forward. Feel free to reach out to us on [Discord](https://chat.avalabs.org/) or other [community channels](https://www.avax.network/community) we run.

# When to Build on C-Chain (/docs/dapps/c-chain-or-avalanche-l1)

---
title: When to Build on C-Chain
description: Learn key concepts to decide when to build on the Avalanche C-Chain.
---

Here are some advantages of the Avalanche C-Chain that you should take into account.

## High Composability with C-Chain Assets

C-Chain is a better option for seamless integration with existing C-Chain assets and contracts. It is easier to build a DeFi application on C-Chain, as it provides larger liquidity pools and thus allows for efficient exchange between popular assets.

## Low Initial Cost

C-Chain has the economic advantages of low-cost deployment and cheap transactions. The recent Etna upgrade trimmed the base fee of the Avalanche C-Chain by 25x, which results in much lower transaction costs.

## Low Operational Costs

C-Chain is run and operated by thousands of nodes; it is highly decentralized and reliable, and all the infrastructure (explorers, indexers, exchanges, bridges) has already been built out by dedicated teams that maintain it for you at no extra charge. Projects deployed on the C-Chain can leverage all of that essentially for free.

## High Security

The security of the Avalanche Primary Network is a function of the security of the underlying validators and stake delegators.
You can choose C-Chain in order to achieve maximum security by utilizing thousands of Avalanche Primary Network validators.

## Conclusion

If an application has a relatively low transaction rate and no special circumstances that would make the C-Chain a non-starter, you can begin with a C-Chain deployment to leverage existing technical infrastructure, and later expand to an Avalanche L1. That way you can focus on working on the core of your project, and once you have a solid product/market fit and have gained enough traction that the C-Chain is constricting you, plan a move to your own Avalanche L1. Of course, we're happy to talk to you about your architecture and help you choose the best path forward. Feel free to reach out to us on [Discord](https://chat.avalabs.org/) or other [community channels](https://www.avax.network/community) we run.

# Introduction (/docs/dapps)

---
title: Introduction
description: Learn about the Avalanche C-Chain.
---

Avalanche is a [network of networks](/docs/quick-start/primary-network). One of the chains running on the Avalanche Primary Network is an EVM fork called the C-Chain (contract chain). C-Chain runs a fork of [`go-ethereum`](https://geth.ethereum.org/docs/rpc/server) called [`coreth`](https://github.com/ava-labs/coreth) that has the networking and consensus portions replaced with Avalanche equivalents. What's left is the Ethereum VM, which runs Solidity smart contracts and manages data structures and blocks on the chain. As a result, you get a blockchain that can run all the Solidity smart contracts from Ethereum, but with the much greater transaction bandwidth and instant finality that [Avalanche's revolutionary consensus](/docs/quick-start/avalanche-consensus) enables. Coreth is loaded as a plugin into [AvalancheGo](https://github.com/ava-labs/avalanchego), the client node application used to run the Avalanche network. Any dApp deployed to the Avalanche C-Chain will function the same as on Ethereum, but much faster and cheaper.
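Because the C-Chain speaks standard Ethereum JSON-RPC, existing tooling works unchanged. As a minimal illustration, the snippet below builds a plain `eth_chainId` request for the public Mainnet endpoint (a sketch; `chain_id_request` and `parse_chain_id` are hypothetical helpers, and actually sending the request requires network access):

```python
import json

# Public C-Chain JSON-RPC endpoint (Mainnet).
C_CHAIN_RPC = "https://api.avax.network/ext/bc/C/rpc"

def chain_id_request() -> str:
    """Build a standard eth_chainId JSON-RPC request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_chainId",
        "params": [],
    })

def parse_chain_id(response_body: str) -> int:
    """Decode the hex chain ID from a JSON-RPC response body."""
    return int(json.loads(response_body)["result"], 16)
```

POSTing `chain_id_request()` to `C_CHAIN_RPC` with a `Content-Type: application/json` header should return `"0xa86a"`, which decodes to `43114`, the C-Chain Mainnet chain ID.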
## Add C-Chain to Wallet ### Avalanche C-Chain Mainnet - **Network Name**: Avalanche Mainnet C-Chain - **RPC URL**: https://api.avax.network/ext/bc/C/rpc - **WebSocket URL**: wss://api.avax.network/ext/bc/C/ws - **ChainID**: `43114` - **Symbol**: `AVAX` - **Explorer**: https://subnets.avax.network/c-chain ### Avalanche Fuji Testnet - **Network Name**: Avalanche Fuji C-Chain - **RPC URL**: https://api.avax-test.network/ext/bc/C/rpc - **WebSocket URL**: wss://api.avax-test.network/ext/bc/C/ws - **ChainID**: `43113` - **Symbol**: `AVAX` - **Explorer**: https://subnets-test.avax.network/c-chain ### Via Block Explorers Head to either explorer linked above and select "Add Avalanche C-Chain to Wallet" under "Chain Info" to automatically add the network. Alternatively, visit [chainlist.org](https://chainlist.org/?search=Avalanche&testnets=true) and connect your wallet. # Introduction (/docs/cross-chain) --- title: Introduction description: Learn about different interoperability protocols in the Avalanche ecosystem. --- # Introduction (/docs/nodes) --- title: Introduction description: A brief introduction to the concepts of nodes and validators within the Avalanche ecosystem. --- The Avalanche network is a decentralized platform designed for high throughput and low latency, enabling a wide range of applications. At the core of the network are nodes and validators, which play vital roles in maintaining the network's security, reliability, and performance. ## What is a Node? A node in the Avalanche network is any computer that participates in the network by maintaining a copy of the blockchain, relaying information, and validating transactions. Nodes can be of different types depending on their role and level of participation in the network’s operations. ### Types of Nodes - **Full Node**: Stores the entire blockchain data and helps propagate transactions and blocks across the network. 
It does not participate directly in consensus but is crucial for the network's health and decentralization. **Archival full nodes** store the entire blockchain ledger, including all transactions from the beginning to the most recent. **Pruned full nodes** download the blockchain ledger, then delete blocks starting with the oldest to save memory. - **Validator Node**: A specialized type of full node that actively participates in the consensus process by validating transactions, producing blocks, and securing the network. Validator nodes are required to stake AVAX tokens as collateral to participate in the consensus mechanism. - **RPC (Remote Procedure Call) Node**: These nodes act as an interface, enabling third-party applications to query and interact with the blockchain. ## More About Validator Nodes A validator node participates in the network's consensus protocol by validating transactions and creating new blocks. Validators play a critical role in ensuring the integrity, security, and decentralization of the network. #### Key Functions of Validators: - **Transaction Validation**: Validators verify the legitimacy of transactions before they are added to the blockchain. - **Block Production**: Validators produce and propose new blocks to the network. This involves reaching consensus with other validators to agree on which transactions should be included in the next block. - **Security and Consensus**: Validators work together to secure the network and ensure that only valid transactions are confirmed. This is done through the Avalanche Consensus protocol, which allows validators to achieve agreement quickly and with high security. ### Primary Network Validators To become a validator on the Primary Network, you must stake **2,000 AVAX**. This will grant you the ability to validate transactions across all three chains in the Primary Network: the P-Chain, C-Chain, and X-Chain. 
### Avalanche L1 Validator

To become a validator on an Avalanche L1, you must meet the specific validator management criteria for that network. If the L1 operates on a Proof-of-Stake (PoS) model, you will need to stake the required amount of tokens to be eligible. In addition to meeting these criteria, there is a monthly fee of **1.33 AVAX** per validator.

# System Requirements (/docs/nodes/system-requirements)

---
title: System Requirements
description: This document provides information about the system and networking requirements for running an AvalancheGo node.
---

## Hardware and Operating Systems

Avalanche is an incredibly lightweight protocol, so nodes can run on commodity hardware. Note that as network usage increases, hardware requirements may change.

- **CPU**: Equivalent of 8 AWS vCPU
- **RAM**: 8 GiB (16 GiB recommended)
- **Storage**: 1 TiB SSD
- **OS**: Ubuntu 22.04 or macOS >= 12

Nodes that use an HDD may experience poor and inconsistent read/write latencies, reducing performance and reliability. An SSD is strongly suggested.

## Networking

To run successfully, AvalancheGo needs to accept connections from the Internet on the network port `9651`. Before you proceed with the installation, you need to determine the networking environment your node will run in.

### On a Cloud Provider

If your node is running on a cloud provider computer instance, it will have a static IP. Find out what that static IP is, or set one up if you haven't already.

### On a Home Connection

If you're running a node on a computer that is on a residential internet connection, you have a dynamic IP; that is, your IP will change periodically. You will need to set up inbound port forwarding of port `9651` from the internet to the computer the node is installed on.
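Once forwarding is configured, you can verify from outside your network that the staking port accepts connections. A minimal TCP probe looks like this (a sketch using only Python's standard library; `203.0.113.10` is a placeholder for your node's public IP):

```python
import socket

def is_port_reachable(host: str, port: int = 9651, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, unreachable hosts
        return False

# Example (placeholder address): is_port_reachable("203.0.113.10")
```

Run the probe from a machine outside your home network; testing from inside can succeed even when the forwarding rule is wrong.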
As there are too many router models and configurations, we cannot provide instructions on what exactly to do, but there are online guides to be found (like [this](https://www.noip.com/support/knowledgebase/general-port-forwarding-guide/) or [this](https://www.howtogeek.com/66214/how-to-forward-ports-on-your-router/)), and your service provider's support might help too. Please note that a fully connected Avalanche node maintains and communicates over a couple of thousand live TCP connections. For some under-powered or older home routers, that might be too much to handle. If that is the case, you may experience lag on other computers connected to the same router, the node getting benched, failing to sync, and similar issues.

# Avalanche Community Proposals (/docs/quick-start/avalanche-community-proposals)

---
title: Avalanche Community Proposals
description: Learn about community proposals and how to create them.
---

An Avalanche Community Proposal is a concise document that introduces a change or best practice for adoption on the [Avalanche Network](https://www.avax.network/). ACPs should provide clear technical specifications of any proposals and a compelling rationale for their adoption.

ACPs are an open framework for proposing improvements and gathering consensus around changes to the Avalanche Network. ACPs can be proposed by anyone and will be merged into this repository as long as they are well-formatted and coherent. Once an overwhelming majority of the Avalanche Network/Community have [signaled their support for an ACP](/docs/nodes/configure/configs-flags#avalanche-community-proposals), it may be scheduled for activation on the Avalanche Network by Avalanche Network Clients (ANCs). It is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible ANC, such as [AvalancheGo](https://github.com/ava-labs/avalanchego).
## ACP Tracks

There are four kinds of ACP:

- A `Standards Track` ACP describes a change to the design or function of the Avalanche Network, such as a change to the P2P networking protocol, P-Chain design, Avalanche L1 architecture, or any change/addition that affects the interoperability of Avalanche Network Clients (ANCs).
- A `Best Practices Track` ACP describes a design pattern or common interface that should be used across the Avalanche Network to make it easier to integrate with Avalanche or for Avalanche L1s to interoperate with each other. This would include things like proposing a smart contract interface, not proposing a change to how smart contracts are executed.
- A `Meta Track` ACP describes a change to the ACP process or suggests a new way for the Avalanche Community to collaborate.
- A `Subnet Track` ACP describes a change to a particular Avalanche L1. This would include things like configuration changes or coordinated Layer 1 upgrades.

## ACP Statuses

There are four statuses of an ACP:

- A `Proposed` ACP has been merged into the main branch of the ACP repository. It is actively being discussed by the Avalanche Community and may be modified based on feedback.
- An `Implementable` ACP is considered "ready for implementation" by the author and will no longer change meaningfully from its current form (which would require a new ACP).
- An `Activated` ACP has been activated on the Avalanche Network via a coordinated upgrade by the Avalanche Community. Once an ACP is `Activated`, it is locked.
- A `Stale` ACP has been abandoned by its author because it is not supported by the Avalanche Community or has been replaced with another ACP.

## ACP Workflow

### Step 0: Think of a Novel Improvement to Avalanche

The ACP process begins with a new idea for Avalanche. Each potential ACP must have an author: someone who writes the ACP using the style and format described below, shepherds the associated GitHub Discussion, and attempts to build consensus around the idea.
Note that ideas and any resulting ACP are public. Authors should not post any ideas, or anything in an ACP, that they want to keep confidential or retain ownership rights in (such as intellectual property rights).

### Step 1: Post Your Idea to [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/ideas)

The author should first attempt to ascertain whether there is support for their idea by posting in the "Ideas" category of GitHub Discussions. Vetting an idea publicly before going as far as writing an ACP is meant to save both the potential author and the wider Avalanche Community time. Asking the Avalanche Community first if an idea is original helps prevent too much time being spent on something that is guaranteed to be rejected based on prior discussions (searching the Internet does not always do the trick). It also helps to make sure the idea is applicable to the entire community and not just the author. Small enhancements or patches often don't need standardization between multiple projects; these don't need an ACP and should be injected into the relevant development workflow with a patch submission to the applicable ANC issue tracker.

### Step 2: Propose an ACP via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls)

Once the author feels confident that an idea has a decent chance of acceptance, an ACP should be drafted and submitted as a pull request (PR). This draft must be written in ACP style as described below. It is highly recommended that a single ACP contain a single key proposal or new idea. The more focused the ACP, the more successful it tends to be. If in doubt, split your ACP into several well-focused ones. The PR number of the ACP will become its assigned number.
### Step 3: Build Consensus on [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/discussion) and Provide an Implementation (if Applicable)

ACPs will be merged by ACP maintainers if the proposal is generally well-formatted and coherent. ACP editors will attempt to merge anything worthy of discussion, regardless of feasibility or complexity, that is not a duplicate or incomplete. After an ACP is merged, an official GitHub Discussion will be opened for the ACP and linked to the proposal for community discussion. It is recommended that the author or supportive Avalanche Community members post an accompanying non-technical overview of their ACP for general consumption in this GitHub Discussion. The ACP should be reviewed and broadly supported before a reference implementation is started, again to avoid wasting the author's and the Avalanche Community's time, unless a reference implementation will aid people in studying the ACP.

### Step 4: Mark ACP as `Implementable` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls)

Once an ACP is considered complete by the author, it should be marked as `Implementable`. At this point, all open questions should be addressed and an associated reference implementation should be provided (if applicable). As mentioned earlier, the Avalanche Foundation meets periodically to recommend the ratification of specific ACPs, but it is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible Avalanche Network Client (ANC), such as [AvalancheGo](https://github.com/ava-labs/avalanchego).

### [Optional] Step 5: Mark ACP as `Stale` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls)

An ACP can be superseded by a different ACP, rendering the original obsolete. If this occurs, the original ACP will be marked as `Stale`.
ACPs may also be marked as `Stale` if the author abandons work on them for a prolonged period of time (12+ months). ACPs may be reopened and moved back to `Proposed` if the author restarts work.

## What Belongs in a Successful ACP?

Each ACP must have the following parts:

- `Preamble`: Markdown table containing metadata about the ACP, including the ACP number, a short descriptive title, the author, and optionally the contact info for each author, etc.
- `Abstract`: Concise (~200 word) description of the ACP
- `Motivation`: Rationale for adopting the ACP and the specific issue/challenge/opportunity it addresses
- `Specification`: Complete description of the semantics of any change; it should allow any ANC/Avalanche Community member to implement the ACP
- `Security Considerations`: Security implications of the proposed ACP

Each ACP can have the following parts:

- `Open Questions`: Questions that should be resolved before implementation

Each `Standards Track` ACP must have the following parts:

- `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community
- `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change

Each `Best Practices Track` ACP can have the following parts:

- `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community
- `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change

### ACP Formats and Templates

Each ACP is allocated a unique subdirectory in the `ACPs` directory. The name of this subdirectory must be of the form `N-T` where `N` is the ACP number and `T` is the ACP title with any spaces replaced by hyphens. ACPs must be written in [markdown](https://daringfireball.net/projects/markdown/syntax) format and stored at `ACPs/N-T/README.md`.
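The naming rule can be captured in a couple of lines (a hypothetical helper for illustration; the number and title below are made up, not a real ACP):

```python
def acp_readme_path(number: int, title: str) -> str:
    """Return the ACPs/N-T/README.md path for a proposal.

    Per the repository convention, N is the ACP number and T is
    the title with any spaces replaced by hyphens."""
    return f"ACPs/{number}-{title.replace(' ', '-')}/README.md"

# e.g. acp_readme_path(123, "My Great Idea") -> "ACPs/123-My-Great-Idea/README.md"
```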
Please see the [ACP template](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/TEMPLATE.md) for an example of the correct layout. ### Auxiliary Files ACPs may include auxiliary files such as diagrams or code snippets. Such files should be stored in the ACP's subdirectory (`ACPs/N-T/*`). There is no required naming convention for auxiliary files. ### Waived Copyright ACP authors must waive any copyright claims before an ACP will be merged into the repository. This can be done by including the following text in an ACP: ``` ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). ``` ## Contributing Before contributing to ACPs, please read the [ACP Terms of Contribution](https://github.com/avalanche-foundation/ACPs/blob/main/CONTRIBUTING.md). # Avalanche Consensus (/docs/quick-start/avalanche-consensus) --- title: Avalanche Consensus description: Learn about the groundbreaking Avalanche Consensus algorithms. --- Consensus is the task of getting a group of computers (a.k.a. nodes) to come to an agreement on a decision. In blockchain, this means that all the participants in a network have to agree on the changes made to the shared ledger. This agreement is reached through a specific process, a consensus protocol, that ensures that everyone sees the same information and that the information is accurate and trustworthy. ## Avalanche Consensus Avalanche Consensus is a consensus protocol that is scalable, robust, and decentralized. It combines features of both classical and Nakamoto consensus mechanisms to achieve high throughput, fast finality, and energy efficiency. For the whitepaper, see [here](https://www.avalabs.org/whitepapers). Key Features Include: - Speed: Avalanche consensus provides sub-second, immutable finality, ensuring that transactions are quickly confirmed and irreversible. - Scalability: Avalanche consensus enables high network throughput while ensuring low latency. 
- Energy Efficiency: Unlike other popular consensus protocols, participation in Avalanche consensus is neither computationally intensive nor expensive. - Adaptive Security: Avalanche consensus is designed to resist various attacks, including sybil attacks, distributed denial-of-service (DDoS) attacks, and collusion attacks. Its probabilistic nature ensures that the consensus outcome converges to the desired state, even when the network is under attack. ## Conceptual Overview Consensus protocols in the Avalanche family operate through repeated sub-sampled voting. When a node is determining whether a [transaction](http://support.avalabs.org/en/articles/4587384-what-is-a-transaction) should be accepted, it asks a small, random subset of [validator nodes](http://support.avalabs.org/en/articles/4064704-what-is-a-blockchain-validator) for their preference. Each queried validator replies with the transaction that it prefers, or thinks should be accepted. Consensus will never include a transaction that is determined to be **invalid**. For example, if you were to submit a transaction to send 100 AVAX to a friend, but your wallet only has 2 AVAX, this transaction is considered **invalid** and will not participate in consensus. If a sufficient majority of the validators sampled reply with the same preferred transaction, this becomes the preferred choice of the validator that inquired. In the future, this node will reply with the transaction preferred by the majority. The node repeats this sampling process until the validators queried reply with the same answer for a sufficient number of consecutive rounds. - The number of validators required to be considered a "sufficient majority" is referred to as "α" (_alpha_). - The number of consecutive rounds required to reach consensus, a.k.a. the "Confidence Threshold," is referred to as "β" (_beta_). - Both α and β are configurable. When a transaction has no conflicts, finalization happens very quickly. 
When conflicts exist, honest validators quickly cluster around conflicting transactions, entering a positive feedback loop until all correct validators prefer that transaction. This leads to the acceptance of non-conflicting transactions and the rejection of conflicting transactions. ![How Avalanche Consensus Works](/images/avalanche-consensus1.png) Avalanche Consensus guarantees that if any honest validator accepts a transaction, all honest validators will come to the same conclusion. For a great visualization, check out [this demo](https://tedyin.com/archive/snow-bft-demo/#/snow). ## Deep Dive Into Avalanche Consensus ### Intuition First, let's develop some intuition about the protocol. Imagine a room full of people trying to agree on what to get for lunch. Suppose it's a binary choice between pizza and barbecue. Some people might initially prefer pizza while others initially prefer barbecue. Ultimately, though, everyone's goal is to achieve **consensus**. Everyone asks a random subset of the people in the room what their lunch preference is. If more than half say pizza, the person thinks, "OK, looks like things are leaning toward pizza. I prefer pizza now." That is, they adopt the _preference_ of the majority. Similarly, if a majority say barbecue, the person adopts barbecue as their preference. Everyone repeats this process. Each round, more and more people have the same preference. This is because the more people that prefer an option, the more likely someone is to receive a majority reply and adopt that option as their preference. After enough rounds, they reach consensus and decide on one option, which everyone prefers. ### Snowball The intuition above outlines the Snowball Algorithm, which is a building block of Avalanche consensus. Let's review the Snowball algorithm. 
#### Parameters

- _n_: number of participants
- _k_ (sample size): between 1 and _n_
- α (quorum size): between 1 and _k_
- β (decision threshold): >= 1

#### Algorithm

```
preference := pizza
consecutiveSuccesses := 0
while not decided:
  ask k random people their preference
  if >= α give the same response:
    preference := response with >= α
    if preference == old preference:
      consecutiveSuccesses++
    else:
      consecutiveSuccesses = 1
  else:
    consecutiveSuccesses = 0
  if consecutiveSuccesses >= β:
    decide(preference)
```

#### Algorithm Explained

Everyone has an initial preference for pizza or barbecue. Until someone has _decided_, they query _k_ people (the sample size) and ask them what they prefer. If α or more people give the same response, that response is adopted as the new preference. α is called the _quorum size_. If the new preference is the same as the old preference, the `consecutiveSuccesses` counter is incremented. If the new preference is different from the old preference, the `consecutiveSuccesses` counter is set to `1`. If no response gets a quorum (an α majority of the same response), then the `consecutiveSuccesses` counter is set to `0`.

Everyone repeats this until they get a quorum for the same response β times in a row. If one person decides pizza, then every other person following the protocol will eventually also decide on pizza. Random changes in preference, caused by random sampling, cause a network preference for one choice, which begets more network preference for that choice until it becomes irreversible, and then the nodes can decide.

In our example, there is a binary choice between pizza and barbecue, but Snowball can be adapted to achieve consensus on decisions with many possible choices. The liveness and safety thresholds are parameterizable. As the quorum size, α, increases, the safety threshold increases and the liveness threshold decreases.
This means the network can tolerate more byzantine (deliberately incorrect, malicious) nodes and remain safe, meaning all nodes will eventually agree whether something is accepted or rejected. The liveness threshold is the number of malicious participants that can be tolerated before the protocol is unable to make progress.

These values, which are constants, are quite small on the Avalanche Network. The sample size, _k_, is `20`. So when a node asks a group of nodes their opinion, it queries only `20` nodes out of the whole network. The quorum size, α, is `14`. So if `14` or more nodes give the same response, that response is adopted as the querying node's preference. The decision threshold, β, is `20`. A node decides on a choice after receiving `20` consecutive quorum (α majority) responses.

Snowball is very scalable as the number of nodes on the network, _n_, increases. Regardless of the number of participants in the network, the number of consensus messages sent remains the same because in a given query, a node queries only `20` nodes, even if there are thousands of nodes in the network.

Everything discussed to this point is how Avalanche is described in [the Avalanche white-paper](https://assets-global.website-files.com/5d80307810123f5ffbb34d6e/6009805681b416f34dcae012_Avalanche%20Consensus%20Whitepaper.pdf). The implementation of the Avalanche consensus protocol by Ava Labs (namely in AvalancheGo) has some optimizations for latency and throughput.

### Blocks

A block is a fundamental component that forms the structure of a blockchain. It serves as a container or data structure that holds a collection of transactions or other relevant information. Each block is cryptographically linked to the previous block, creating a chain of blocks, hence the term "blockchain." In addition to storing a reference to its parent, a block contains a set of transactions.
These transactions can represent various types of information, such as financial transactions, smart contract operations, or data storage requests. If a node receives a vote for a block, it also counts as a vote for all of the block's ancestors (its parent, the parent's parent, etc.).

### Finality

Avalanche consensus is probabilistically safe up to a safety threshold. That is, the probability that a correct node accepts a transaction that another correct node rejects can be made arbitrarily low by adjusting system parameters. In Nakamoto consensus protocols (as used in Bitcoin and Ethereum, for example), a block may be included in the chain but then be removed and not end up in the canonical chain. This can mean waiting an hour for transaction settlement. In Avalanche, acceptance and rejection are **final and irreversible** and take only a few seconds.

### Optimizations

It's not safe for nodes to just ask, "Do you prefer this block?" when they query validators. In Ava Labs' implementation, during a query a node asks, "Given that this block exists, which block do you prefer?" Instead of getting back a binary yes/no, the node receives the other node's preferred block.

Nodes don't only query upon hearing of a new block; they repeatedly query other nodes until there are no blocks processing. Nodes may not need to wait until they get all _k_ query responses before registering the outcome of a poll. If a block has already received _alpha_ votes, then there's no need to wait for the rest of the responses.

### Validators

If it were free to become a validator on the Avalanche network, that would be problematic because a malicious actor could start many, many nodes which would get queried very frequently. The malicious actor could make those nodes act badly and cause a safety or liveness failure. The validators, the nodes which are queried as part of consensus, have influence over the network.
They have to pay for that influence with real-world value in order to prevent this kind of ballot stuffing. This idea of using real-world value to buy influence over the network is called Proof of Stake. To become a validator, a node must **bond** (stake) something valuable (**AVAX**). The more AVAX a node bonds, the more often that node is queried by other nodes. When a node samples the network, it's not uniformly random. Rather, it's weighted by stake amount.

Nodes are incentivized to be validators because they get a reward if, while they validate, they're sufficiently correct and responsive. Avalanche doesn't have slashing. If a node doesn't behave well while validating, such as giving incorrect responses or perhaps not responding at all, its stake is still returned in whole, but with no reward. As long as a sufficient portion of the bonded AVAX is held by correct nodes, the network is safe, and is live for virtuous transactions.

### Big Ideas

Two big ideas in Avalanche are **subsampling** and **transitive voting**. Subsampling has low message overhead. It doesn't matter if there are twenty validators or two thousand validators; the number of consensus messages a node sends during a query remains constant. Transitive voting, where a vote for a block is a vote for all its ancestors, helps with transaction throughput. Each vote is actually many votes in one.

### Loose Ends

Transactions are created by users who call an API on an [AvalancheGo](https://github.com/ava-labs/avalanchego) full node or create them using a library such as [AvalancheJS](https://github.com/ava-labs/avalanchejs).

### Other Observations

Conflicting transactions are not guaranteed to be live. That's not really a problem: if you want your transaction to be live, don't issue a conflicting transaction.

Snowman is the name of Ava Labs' implementation of the Avalanche consensus protocol for linear chains.
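The query loop described earlier — sample _k_ peers, adopt an α-majority response, and decide after β consecutive quorums — can be sketched as a toy simulation. This is an illustrative model only, with uniform sampling over a static set of preferences; it is not AvalancheGo's stake-weighted implementation:

```python
import random

K, ALPHA, BETA = 20, 14, 20  # sample size, quorum size, decision threshold

def snowball(network_prefs, my_pref, rng):
    """Toy Snowball loop: repeatedly sample K peers, adopt the alpha-majority
    response, and decide after BETA consecutive successful quorums."""
    consecutive = 0
    while consecutive < BETA:
        sample = rng.sample(network_prefs, K)   # query only K of the whole network
        counts = {}
        for pref in sample:
            counts[pref] = counts.get(pref, 0) + 1
        winner, votes = max(counts.items(), key=lambda kv: kv[1])
        if votes < ALPHA:
            consecutive = 0                     # no alpha-majority: reset the count
        elif winner == my_pref:
            consecutive += 1                    # another consecutive quorum for our choice
        else:
            my_pref, consecutive = winner, 1    # flip preference, restart the count
    return my_pref

rng = random.Random(7)
network = ["blue"] * 900 + ["red"] * 100        # 1,000 nodes, 90% prefer "blue"
print(snowball(network, "red", rng))            # converges to the majority: "blue"
```

Note how the message cost per round is fixed at `K` samples regardless of network size — the subsampling property described above.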
If there are no undecided transactions, the Avalanche consensus protocol _quiesces_. That is, it does nothing if there is no work to be done. This makes Avalanche more sustainable than Proof-of-Work, where nodes need to constantly do work.

Avalanche has no leader. Any node can propose a transaction and any node that has staked AVAX can vote on every transaction, which makes the network more robust and decentralized.

## Why Do We Care?

Avalanche is a general consensus engine. It doesn't matter what type of application is put on top of it. The protocol allows the decoupling of the application layer from the consensus layer. If you're building a dapp on Avalanche, then you just need to define a few things, like how conflicts are defined and what is in a transaction. You don't need to worry about how nodes come to an agreement. The consensus protocol is a black box: you put something into it, and it comes back as accepted or rejected.

Avalanche can be used for all kinds of applications, not just P2P payment networks. Avalanche's Primary Network has an instance of the Ethereum Virtual Machine, which is backward compatible with existing Ethereum dapps and dev tooling. The Ethereum consensus protocol has been replaced with Avalanche consensus to enable lower block latency and higher throughput.

Avalanche is very performant. It can process thousands of transactions per second with one to two second acceptance latency.

## Summary

Avalanche consensus is a radical breakthrough in distributed systems. It represents as large a leap forward as the classical and Nakamoto consensus protocols that came before it. Now that you have a better understanding of how it works, check out the rest of the documentation for building game-changing dapps and financial instruments on Avalanche.

# Avalanche L1s (/docs/quick-start/avalanche-l1s)

---
title: Avalanche L1s
description: Explore the multi-chain architecture of the Avalanche ecosystem.
---

An Avalanche L1 is a sovereign network which defines its own rules regarding its membership and token economics. It is composed of a dynamic subset of Avalanche validators working together to achieve consensus on the state of one or more blockchains. Each blockchain is validated by exactly one Avalanche L1, while an Avalanche L1 can validate many blockchains.

Avalanche's [Primary Network](/docs/quick-start/primary-network) is a special Avalanche L1 running three blockchains:

- The Platform Chain [(P-Chain)](/docs/quick-start/primary-network#p-chain)
- The Contract Chain [(C-Chain)](/docs/quick-start/primary-network#c-chain)
- The Exchange Chain [(X-Chain)](/docs/quick-start/primary-network#x-chain)

![image](/images/subnet1.png)

Every validator of an Avalanche L1 **must** sync the P-Chain of the Primary Network for interoperability.

Node operators that validate an Avalanche L1 with multiple chains do not need to run multiple machines for validation. For example, the Primary Network is an Avalanche L1 with three coexisting chains, all of which can be validated by a single node on a single machine.

## Advantages

### Independent Networks

- Avalanche L1s use virtual machines to specify their own execution logic, determine their own fee regime, maintain their own state, facilitate their own networking, and provide their own security.
- Each Avalanche L1's performance is isolated from other Avalanche L1s in the ecosystem, so increased usage on one Avalanche L1 won't affect another.
- Avalanche L1s can have their own token economics with their own native tokens, fee markets, and incentives determined by the Avalanche L1 deployer.
- One Avalanche L1 can host multiple blockchains with customized [virtual machines](/docs/quick-start/virtual-machines).

### Native Interoperability

Avalanche Warp Messaging enables native cross-Avalanche L1 communication and allows Virtual Machine (VM) developers to implement arbitrary communication protocols between any two Avalanche L1s.
### Accommodate App-Specific Requirements

Different blockchain-based applications may require validators to have certain properties, such as large amounts of RAM or CPU power. An Avalanche L1 could require that validators meet certain [hardware requirements](/docs/nodes/system-requirements#hardware-and-operating-systems) so that the application doesn't suffer from low performance due to slow validators.

### Launch Networks Designed With Compliance

Avalanche's L1 architecture makes regulatory compliance manageable. As mentioned above, an Avalanche L1 may require validators to meet a set of requirements. Some examples of requirements the creators of an Avalanche L1 may choose include:

- Validators must be located in a given country.
- Validators must pass KYC/AML checks.
- Validators must hold a certain license.

### Control Privacy of On-Chain Data

Avalanche L1s are ideal for organizations interested in keeping their information private. Institutions conscious of their stakeholders' privacy can create a private Avalanche L1 where the contents of the blockchains would be visible only to a set of pre-approved validators. Define this at creation with a [single parameter](/docs/nodes/configure/avalanche-l1-configs#private-avalanche-l1).

### Validator Sovereignty

In a heterogeneous network of blockchains, some validators will not want to validate certain blockchains because they simply have no interest in those blockchains. The Avalanche L1 model enables validators to concern themselves only with the blockchain networks they choose to participate in. This greatly reduces the computational burden on validators.

## Develop Your Own Avalanche L1

Avalanche L1s are deployed by default with [Subnet-EVM](https://github.com/ava-labs/subnet-evm#subnet-evm), a fork of go-ethereum. It implements the Ethereum Virtual Machine and supports Solidity smart contracts as well as most other Ethereum client functionality.
To get started, check out our [L1 Toolbox](/tools/l1-toolbox) or the tutorials in the [Avalanche CLI](/docs/tooling/create-avalanche-l1) section.

# AVAX Token (/docs/quick-start/avax-token)

---
title: AVAX Token
description: Learn about the native token of the Avalanche Primary Network.
---

AVAX is the native utility token of Avalanche. It's a hard-capped, scarce asset that is used to pay for fees, secure the platform through staking, and provide a basic unit of account between the multiple Avalanche L1s created on Avalanche. `1 nAVAX` is equal to `0.000000001 AVAX`.

## Utility

AVAX is a capped-supply (up to 720M) resource in the Avalanche ecosystem that's used to power the network. AVAX is used to secure the ecosystem through staking and for day-to-day operations like issuing transactions.

AVAX represents the weight that each node has in network decisions. No single actor owns the Avalanche Network, so each validator in the network is given a proportional weight in the network's decisions corresponding to the proportion of total stake that they own through proof of stake (PoS).

Any entity trying to execute a transaction on Avalanche pays a corresponding fee (commonly known as "gas") to run it on the network. The fees used to execute a transaction on Avalanche are burned, or permanently removed from circulating supply.

## Tokenomics

A fixed amount of 360M AVAX was minted at genesis, but a small amount of AVAX is constantly minted as a reward to validators. The protocol rewards validators for good behavior by minting them AVAX rewards at the end of their staking period. The minting process offsets the AVAX burned by transaction fees. While AVAX is still far away from its supply cap, it will almost always remain an inflationary asset.
Avalanche does not take away any portion of a validator's already staked tokens (commonly known as "slashing") for negligent/malicious staking periods; however, this behavior is disincentivized, as validators who attempt to do harm to the network would expend their node's computing resources for no reward.

AVAX is minted according to the following formula, where $R_j$ is the total number of tokens at year $j$, with $R_1 = 360M$, and $R_l$ representing the last year that the values of $\gamma,\lambda \in \mathbb{R}$ were changed; $c_j$ is the yet un-minted supply of coins to reach $720M$ at year $j$ such that $c_j \leq 360M$; $u$ represents a staker, with $u.s_{amount}$ representing the total amount of stake that $u$ possesses, and $u.s_{time}$ the length of staking for $u$:

$$
R_j = R_l + \sum_{\forall u} \rho(u.s_{amount}, u.s_{time}) \times \frac{c_j}{L} \times \left( \sum_{i=0}^{j}\frac{1}{\left(\gamma + \frac{1}{1 + i^\lambda}\right)^i} \right)
$$

where,

$$
L = \left(\sum_{i=0}^{\infty} \frac{1}{\left(\gamma + \frac{1}{1 + i^\lambda} \right)^i} \right)
$$

At genesis, $c_1 = 360M$. The values of $\gamma$ and $\lambda$ are governable, and if changed, the function is recomputed with the new value of $c_*$. We have that $\sum_{*}\rho(*) \le 1$. $\rho(*)$ is a linear function that can be computed as follows ($u.s_{time}$ is measured in weeks, and $u.s_{amount}$ is measured in AVAX tokens):

$$
\rho(u.s_{amount}, u.s_{time}) = (0.002 \times u.s_{time} + 0.896) \times \frac{u.s_{amount}}{R_j}
$$

If the entire supply of tokens at year $j$ is staked for the maximum amount of staking time (one year, or 52 weeks), then $\sum_{\forall u}\rho(u.s_{amount}, u.s_{time}) = 1$. If, instead, every token is staked continuously for the minimal stake duration of two weeks, then $\sum_{\forall u}\rho(u.s_{amount}, u.s_{time}) = 0.9$.
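These two endpoint values follow directly from the time-dependent factor of $\rho$, since when the whole supply is staked the stake amounts sum to $R_j$. A quick numeric sanity check (not protocol code):

```python
def rho_factor(weeks: float) -> float:
    """Time-dependent factor of rho: 0.002 * u.s_time + 0.896.
    When the entire supply is staked, the sum of rho over all stakers
    reduces to this factor, because the stake amounts sum to R_j."""
    return 0.002 * weeks + 0.896

print(round(rho_factor(52), 4))  # 1.0 -> entire supply staked for one year
print(round(rho_factor(2), 4))   # 0.9 -> entire supply staked for two weeks

# Maximum-length staking mints 1/0.9 - 1, i.e. about 11.11% more tokens
# than minimum-length staking.
print(round((rho_factor(52) / rho_factor(2) - 1) * 100, 2))  # 11.11
```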
Therefore, staking for the maximum amount of time mints an additional ~11.11% of tokens compared to staking for the minimum duration, incentivizing stakers to stake for longer periods.

Due to the capped supply, the above function guarantees that AVAX will never exceed a total of $720M$ tokens, or $\lim_{j \to \infty} R_j = 720M$.

# Disclaimer (/docs/quick-start/disclaimer)

---
title: Disclaimer
---

The Knowledge Base, including all the Help articles on this site, is provided for technical support purposes only, without representation, warranty or guarantee of any kind. Not an offer to sell or solicitation of an offer to buy any security or other regulated financial instrument. Not technical, investment, financial, accounting, tax, legal or other advice; please consult your own professionals. Please conduct your own research before connecting to or interacting with any dapp or third party or making any investment or financial decisions. MoonPay, ParaSwap and any other third party services or dapps you access are offered by third parties unaffiliated with us. Please review this [Notice](https://assets.website-files.com/602e8e4411398ca20cfcafd3/60ec9607c853cd466383f1ad_Important%20Notice%20-%20avalabs.org.pdf) and the [Terms of Use](https://core.app/terms/core).

# Introduction (/docs/quick-start)

---
title: Introduction
description: Learn about Avalanche Protocol and its unique features.
---

Avalanche is an open-source platform for building decentralized applications in one interoperable, decentralized, and highly scalable ecosystem. Powered by a uniquely powerful [consensus mechanism](/docs/quick-start/avalanche-consensus), Avalanche is the first ecosystem designed to accommodate the scale of global finance, with near-instant transaction finality.

## Blazingly Fast

Avalanche employs the fastest consensus mechanism of any Layer 1 blockchain. The unique consensus mechanism enables quick finality and low latency: in less than 2 seconds, your transaction is effectively processed and verified.
## Built to Scale

Developers who build on Avalanche can build application-specific blockchains with complex rulesets, or build on existing private or public Avalanche L1s in any language.

Avalanche is incredibly energy-efficient and can run easily on consumer-grade hardware. The entire Avalanche network consumes the same amount of energy as 46 US households, equivalent to 0.0005% of the amount of energy consumed by Bitcoin.

Solidity developers can build on Avalanche's implementation of the EVM straight out of the box, or build their own custom Virtual Machine (VM) for advanced use cases.

## Advanced Security

Avalanche consensus scales to thousands of concurrent validators without suffering performance degradation, making it one of the most secure protocols for internet-scale systems.

Permissionless and permissioned custom blockchains deployed as Avalanche L1s can include custom rulesets designed to be compliant with legal and jurisdictional considerations.

# Primary Network (/docs/quick-start/primary-network)

---
title: Primary Network
description: Learn about the Avalanche Primary Network and its three blockchains.
---

Avalanche is a heterogeneous network of blockchains. As opposed to homogeneous networks, where all applications reside on the same chain, heterogeneous networks allow separate chains to be created for different applications.

The Primary Network is a special [Avalanche L1](/docs/quick-start/avalanche-l1s) that runs three blockchains:

- The Platform Chain [(P-Chain)](/docs/quick-start/primary-network#p-chain)
- The Contract Chain [(C-Chain)](/docs/quick-start/primary-network#c-chain)
- The Exchange Chain [(X-Chain)](/docs/quick-start/primary-network#x-chain)

[Avalanche Mainnet](/docs/quick-start/networks/mainnet) comprises the Primary Network and all deployed Avalanche L1s.

A node can become a validator for the Primary Network by staking at least **2,000 AVAX**.
![Primary network](/images/primary-network1.png)

## The Chains

All validators of the Primary Network are required to validate and secure the following:

### C-Chain

The **C-Chain** is an implementation of the Ethereum Virtual Machine (EVM). The [C-Chain's API](/docs/api-reference/c-chain/api) supports Geth's API and supports the deployment and execution of smart contracts written in Solidity.

The C-Chain is an instance of the [Coreth](https://github.com/ava-labs/coreth) Virtual Machine.

### P-Chain

The **P-Chain** is responsible for all validator and Avalanche L1-level operations. The [P-Chain API](/docs/api-reference/p-chain/api) supports the creation of new blockchains and Avalanche L1s, the addition of validators to Avalanche L1s, staking operations, and other platform-level operations.

The P-Chain is an instance of the Platform Virtual Machine.

### X-Chain

The **X-Chain** is responsible for operations on digital smart assets known as **Avalanche Native Tokens**. A smart asset is a representation of a real-world resource (for example, equity or a bond) with sets of rules that govern its behavior, like "can't be traded until tomorrow." The [X-Chain API](/docs/api-reference/x-chain/api) supports the creation and trade of Avalanche Native Tokens.

One asset traded on the X-Chain is AVAX. When you issue a transaction to a blockchain on Avalanche, you pay a fee denominated in AVAX.

The X-Chain is an instance of the Avalanche Virtual Machine (AVM).

# Rewards Formula (/docs/quick-start/rewards-formula)

---
title: Rewards Formula
description: Learn about the rewards formula for Avalanche Primary Network validators.
---

## Primary Network Validator Rewards

Consider a Primary Network validator which stakes a $Stake$ amount of `AVAX` for $StakingPeriod$ seconds. The potential reward is calculated **at the beginning of the staking period**. At the beginning of the staking period there is a $Supply$ amount of `AVAX` in the network.
The maximum amount of `AVAX` is $MaximumSupply$. At the end of its staking period, a responsive Primary Network validator receives a reward.

$$
PotentialReward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{StakingPeriod}{MintingPeriod} \times EffectiveConsumptionRate
$$

where,

$$
MaximumSupply - Supply = \text{the number of AVAX tokens left to emit in the network}
$$

$$
\frac{Stake}{Supply} = \text{the individual's stake as a percentage of all available AVAX tokens in the network}
$$

$$
\frac{StakingPeriod}{MintingPeriod} = \text{time tokens are locked up divided by the $MintingPeriod$}
$$

$$
\text{($MintingPeriod$ is one year, as configured by the network)}
$$

$$
EffectiveConsumptionRate =
$$

$$
\frac{MinConsumptionRate}{PercentDenominator} \times \left(1- \frac{StakingPeriod}{MintingPeriod}\right) + \frac{MaxConsumptionRate}{PercentDenominator} \times \frac{StakingPeriod}{MintingPeriod}
$$

Note that $StakingPeriod$ is the staker's entire staking period, not just the staker's uptime, that is, the aggregated time during which the staker has been responsive. The uptime comes into play only to decide whether a staker should be rewarded; to calculate the actual reward, only the staking period duration is taken into account.

$EffectiveConsumptionRate$ is the rate at which the Primary Network validator is rewarded based on the selected $StakingPeriod$. $MinConsumptionRate$ and $MaxConsumptionRate$ bound $EffectiveConsumptionRate$:

$$
MinConsumptionRate \leq EffectiveConsumptionRate \leq MaxConsumptionRate
$$

The larger $StakingPeriod$ is, the closer $EffectiveConsumptionRate$ is to $MaxConsumptionRate$. The smaller $StakingPeriod$ is, the closer $EffectiveConsumptionRate$ is to $MinConsumptionRate$. A staker achieves the maximum reward for its stake if $StakingPeriod = MintingPeriod$.
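As a numeric illustration of the formula above — the $Supply$ figure here is a hypothetical example value, not live network state, while the consumption rates are the Mainnet parameters listed later on this page (`0.10` and `0.12`):

```python
# Sketch of the potential-reward formula, with PercentDenominator already
# folded into the consumption rates (so they are plain fractions here).
MINTING_PERIOD = 365 * 24 * 3600            # one year, in seconds

def potential_reward(stake, supply, maximum_supply, staking_period,
                     min_rate=0.10, max_rate=0.12):
    frac = staking_period / MINTING_PERIOD
    effective_rate = min_rate * (1 - frac) + max_rate * frac
    return (maximum_supply - supply) * (stake / supply) * frac * effective_rate

SUPPLY = 450_000_000                        # hypothetical current supply (AVAX)
MAX_SUPPLY = 720_000_000

full_year = potential_reward(2_000, SUPPLY, MAX_SUPPLY, MINTING_PERIOD)
half_year = potential_reward(2_000, SUPPLY, MAX_SUPPLY, MINTING_PERIOD // 2)
print(round(full_year, 2), round(half_year, 2))  # 144.0 66.0
```

Note that the full-year reward (144 AVAX) is more than double the half-year reward (66 AVAX): a longer staking period earns a higher effective rate, not just proportionally more time.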
The reward is:

$$
MaxReward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{MaxConsumptionRate}{PercentDenominator}
$$

Note that this formula is the same as the reward formula at the top of this section because $EffectiveConsumptionRate = MaxConsumptionRate$.

For reference, you can find all the Primary Network parameters in [the section below](#primary-network-parameters-on-mainnet).

## Delegators Weight Checks

There are bounds set on the maximum amount of delegators' stake that a validator can receive. The maximum weight $MaxWeight$ a validator $Validator$ can have is:

$$
MaxWeight = \min(Validator.Weight \times MaxValidatorWeightFactor, MaxValidatorStake)
$$

where $MaxValidatorWeightFactor$ and $MaxValidatorStake$ are the Primary Network parameters listed below.

A delegator won't be added to a validator if the combination of its weight, the validator's weight, and the weights of all the validator's other delegators is larger than $MaxWeight$. Note that this must be true at any point in time.

Note that setting $MaxValidatorWeightFactor$ to 1 disables delegation, since then $MaxWeight = Validator.Weight$.

## Notes on Percentages

`PercentDenominator = 1_000_000` is the denominator used to calculate percentages. It allows you to specify percentages up to 4 decimal places. To denominate your percentage in `PercentDenominator` units, just multiply it by `10_000`. For example:

- `100%` corresponds to `100 * 10_000 = 1_000_000`
- `1%` corresponds to `1 * 10_000 = 10_000`
- `0.02%` corresponds to `0.02 * 10_000 = 200`
- `0.0007%` corresponds to `0.0007 * 10_000 = 7`

## Primary Network Parameters on Mainnet

For reference, we list below the Primary Network parameters on Mainnet:

- `AssetID = Avax`
- `InitialSupply = 240_000_000 Avax`
- `MaximumSupply = 720_000_000 Avax`
- `MinConsumptionRate = 0.10 * reward.PercentDenominator`
- `MaxConsumptionRate = 0.12 * reward.PercentDenominator`
- `Minting Period = 365 * 24 * time.Hour`
- `MinValidatorStake = 2_000 Avax`
- `MaxValidatorStake = 3_000_000 Avax`
- `MinStakeDuration = 2 * 7 * 24 * time.Hour`
- `MaxStakeDuration = 365 * 24 * time.Hour`
- `MinDelegationFee = 20000`, that is, `2%`
- `MinDelegatorStake = 25 Avax`
- `MaxValidatorWeightFactor = 5`. This is a platformVM parameter rather than a genesis one, so it's shared across networks.
- `UptimeRequirement = 0.8`, that is, `80%`

### Interactive Graph

The graph below demonstrates the reward as a function of the length of time staked. The x-axis depicts $\frac{StakingPeriod}{MintingPeriod}$ as a percentage, while the y-axis depicts $Reward$ as a percentage of $MaximumSupply - Supply$, the amount of tokens left to be emitted.

Graph variables correspond to those defined above:

- `h` (high) = $MaxConsumptionRate$
- `l` (low) = $MinConsumptionRate$
- `s` = $\frac{Stake}{Supply}$

# Validator Management (/docs/quick-start/validator-manager)

---
title: Validator Management
description: Learn about the Validator Manager contract suite for Avalanche L1s.
---

The Validator Manager contract suite allows Avalanche Layer 1s (L1s) to manage and enforce custom logic for validator sets through smart contracts.

### Choosing Between Proof of Authority and Proof of Stake Chains

Organizations may opt to run a Proof of Authority (PoA) or a Proof of Stake (PoS) chain based on their specific needs and objectives.

#### Proof of Authority

In a PoA chain, a limited number of validators are pre-approved and recognized entities. This model is ideal for organizations that require:

- **Control and Compliance**: Regulatory compliance or the need for trusted validators.
- **Simplified Governance**: Easier coordination among validators.

PoA is often used by private enterprises, consortiums, or government agencies where validator identity is crucial and a controlled environment is preferred.

#### Proof of Stake

In a PoS chain, validators are selected based on the amount of stake (tokens) they hold and are willing to lock up.
This model is suitable for organizations aiming for:

- **Decentralization**: Encouraging a wide distribution of validators.
- **Security**: Economic incentives align validator behavior with network health.
- **Community Participation**: Allowing token holders to participate in network validation.

PoS chains are ideal for public networks or organizations that wish to build an open ecosystem with active community involvement.

### Enforcing Custom Validation Logic via Smart Contracts

Avalanche L1s have the unique capability to enforce any validation logic that can be encoded via smart contracts. This flexibility allows developers and organizations to define custom rules and conditions for validator participation in their networks. By leveraging smart contracts, L1s can implement complex validation mechanisms, such as dynamic validator sets, customized staking requirements, or hybrid consensus models.

Smart contracts act as the governing code that dictates how validators are selected, how they behave, and under what conditions they can participate in the network. This programmable approach ensures that the validation logic is transparent, auditable, and can be updated or modified as needed to adapt to changing requirements or threats.

---

[Learn more about the Validator Manager contract suite](/docs/avalanche-l1s/validator-manager/contract)

[Build your first Avalanche L1](/docs/tooling/create-avalanche-l1)

# Virtual Machines (/docs/quick-start/virtual-machines)

---
title: Virtual Machines
description: Learn about blockchain VMs and how you can build a custom VM-enabled blockchain in Avalanche.
---

A **Virtual Machine** (VM) is the blueprint for a blockchain, meaning it defines a blockchain's complete application logic by specifying the blockchain's state, state transitions, transaction rules, and API interface.

Developers can use the same VM to create multiple blockchains, each of which follows identical rules but is independent of all others.
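As a mental model of the "VM as blueprint" idea — one VM definition, many independent chain instances — consider this deliberately trivial sketch. The class and method names are invented for illustration and are not AvalancheGo's VM interface:

```python
from dataclasses import dataclass

@dataclass
class CounterVM:
    """Toy 'VM': the state is a counter, transactions are positive increments.
    A real VM would also define block formats and an API interface."""
    state: int = 0

    def valid_tx(self, amount: int) -> bool:
        return amount > 0                # transaction rule

    def apply_tx(self, amount: int) -> None:
        if self.valid_tx(amount):
            self.state += amount         # state transition

# The same blueprint instantiates independent blockchains with identical
# rules but separate state:
chain_a, chain_b = CounterVM(), CounterVM()
chain_a.apply_tx(5)
chain_b.apply_tx(2)
print(chain_a.state, chain_b.state)  # 5 2
```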
All Avalanche validators of the **Avalanche Primary Network** are required to run three VMs:

- **Coreth**: Defines the Contract Chain (C-Chain); supports smart contract functionality and is EVM-compatible.
- **Platform VM**: Defines the Platform Chain (P-Chain); supports operations on staking and Avalanche L1s.
- **Avalanche VM**: Defines the Exchange Chain (X-Chain); supports operations on Avalanche Native Tokens.

All three can easily be run on any computer with [AvalancheGo](/docs/nodes).

## Custom VMs on Avalanche

Developers with advanced use cases for distributed ledger technology are often forced to build everything from scratch - networking, consensus, and core infrastructure - before even starting on the actual application. Avalanche eliminates this complexity by:

- Providing VMs as simple blueprints for defining blockchain behavior
- Supporting development in any programming language with familiar tools
- Handling all low-level infrastructure automatically

This lets developers focus purely on building their dApps, ecosystems, and communities, rather than wrestling with blockchain fundamentals.

### How Custom VMs Work

Customized VMs can communicate with Avalanche over a language-agnostic request-response protocol known as [RPC](https://en.wikipedia.org/wiki/Remote_procedure_call). This opens a world of possibilities, as developers can implement their dApps using the languages, frameworks, and libraries of their choice.

Validators can install additional VMs on their node to validate additional [Avalanche L1s](/docs/quick-start/avalanche-l1s) in the Avalanche ecosystem. In exchange, validators receive staking rewards in the form of a reward token determined by the Avalanche L1s.

## Building a Custom VM

You can start building your first custom virtual machine in two ways:

1. Use the ready-to-deploy Subnet-EVM for Solidity-based development
2. Create a custom VM in Golang, Rust, or your preferred language

The choice depends on your needs. Subnet-EVM provides a quick start with Ethereum compatibility, while custom VMs offer maximum flexibility.

### Golang Examples

See here for a tutorial on [How to Build a Simple Golang VM](/docs/virtual-machines/golang-vms/simple-golang-vm).

### Rust Examples

See here for a tutorial on [How to Build a Simple Rust VM](/docs/virtual-machines/rust-vms/setting-up-environment).

# ACP-103: Dynamic Fees (/docs/acps/103-dynamic-fees)

---
title: "ACP-103: Dynamic Fees"
description: "Details for Avalanche Community Proposal 103: Dynamic Fees"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/103-dynamic-fees/README.md
---

| ACP | 103 |
| :--- | :--- |
| **Title** | Add Dynamic Fees to the P-Chain |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)), Alberto Benegiamo ([@abi87](https://github.com/abi87)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/104)) |
| **Track** | Standards |

## Abstract

Introduce a dynamic fee mechanism to the P-Chain. Preview a future transition to a multidimensional fee mechanism.

## Motivation

Blockchains are resource-constrained environments. Users are charged for the execution and inclusion of their transactions based on the blockchain's transaction fee mechanism. The mechanism should fluctuate based on the supply of and demand for said resources to serve as a deterrent against spam and denial-of-service attacks.

With a fixed fee mechanism, users are provided with simplicity and predictability, but network congestion and resource constraints are not taken into account. There is no incentive for users to withhold transactions, since the cost is fixed regardless of the demand. The fee mechanism does not adjust the execution and inclusion fee of transactions to the market clearing price.
The C-Chain, in [Apricot Phase 3](https://medium.com/avalancheavax/apricot-phase-three-c-chain-dynamic-fees-432d32d67b60), employs a dynamic fee mechanism to raise the price during periods of high demand and lower the price during periods of low demand. As the price gets too expensive, network utilization decreases, which drops the price. This ensures the execution and inclusion fee of transactions closely matches the market clearing price.

The P-Chain currently operates under a fixed fee mechanism. To more robustly handle spikes in load expected from introducing the improvements in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md), it should be migrated to a dynamic fee mechanism.

The X-Chain also currently operates under a fixed fee mechanism. However, due to its current lower usage and lack of new feature introduction, the migration of the X-Chain to a dynamic fee mechanism is deferred to a later ACP to reduce unnecessary additional technical complexity.

## Specification

### Dimensions

There are four dimensions that will be used to approximate the computational cost of, or "gas" consumed in, a transaction:

1. Bandwidth $B$ is the amount of network bandwidth used for transaction broadcast. This is set to the size of the transaction in bytes.
2. Reads $R$ is the number of state/database reads used in transaction execution.
3. Writes $W$ is the number of state/database writes used in transaction execution.
4. Compute $C$ is the total amount of compute used to verify and execute a transaction, measured in microseconds.

The gas consumed $G$ in a transaction is:

$$G = B + 1000R + 1000W + 4C$$

A future ACP could remove the merging of these dimensions to granularly meter usage of each resource in a multidimensional scheme.
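The weighting can be illustrated directly. The weights below are those given in the formula above; the example transaction's resource usage is made up for illustration:

```python
def gas_consumed(bandwidth_bytes: int, reads: int, writes: int,
                 compute_us: int) -> int:
    """G = B + 1000R + 1000W + 4C"""
    return bandwidth_bytes + 1000 * reads + 1000 * writes + 4 * compute_us

# A hypothetical 300-byte transaction doing 2 reads, 1 write,
# and 50 microseconds of compute:
print(gas_consumed(300, 2, 1, 50))  # 300 + 2000 + 1000 + 200 = 3500
```

Note how state reads and writes dominate: one read costs as much gas as 1,000 bytes of bandwidth or 250 µs of compute.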
### Mechanism

This mechanism aims to maintain a target gas consumption $T$ per second and adjusts the fee based on the excess gas consumption $x$, defined as the difference between the current gas consumption and $T$.

Prior to the activation of this mechanism, $x$ is initialized:

$$x = 0$$

At the start of building/executing block $b$, $x$ is updated:

$$x = \max(x - T \cdot \Delta{t}, 0)$$

Where $\Delta{t}$ is the number of seconds between $b$'s block timestamp and $b$'s parent's block timestamp.

The gas price for block $b$ is:

$$M \cdot \exp\left(\frac{x}{K}\right)$$

Where:

- $M$ is the minimum gas price
- $\exp\left(x\right)$ is an approximation of $e^x$ following the EIP-4844 specification:

```python
# Approximates factor * e ** (numerator / denominator) using Taylor expansion
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator
```

- $K$ is a constant to control the rate of change of the gas price

After processing block $b$, $x$ is updated with the total gas consumed in the block, $G$:

$$x = x + G$$

Whenever $x$ increases by $K$, the gas price increases by a factor of `~2.7`. If the gas price gets too expensive, average gas consumption drops, and $x$ starts decreasing, dropping the price. The gas price constantly adjusts to make sure that, on average, the blockchain consumes $T$ gas per second.

A [token bucket](https://en.wikipedia.org/wiki/Token_bucket) is employed to meter the maximum rate of gas consumption. Define $C$ as the capacity of the bucket, $R$ as the amount of gas to add to the bucket per second, and $r$ as the amount of gas currently in the bucket.
Prior to the activation of this mechanism, $r$ is initialized:

$$r = 0$$

At the beginning of processing block $b$, $r$ is set:

$$r = \min\left(r + R \cdot \Delta{t}, C\right)$$

Where $\Delta{t}$ is the number of seconds between $b$'s block timestamp and $b$'s parent's block timestamp.

The maximum gas consumed in a given $\Delta{t}$ is $r + R \cdot \Delta{t}$. The upper bound across all $\Delta{t}$ is $C + R \cdot \Delta{t}$.

After processing block $b$, the total gas consumed in $b$, or $G$, will be known. If $G \gt r$, $b$ is considered an invalid block. If $b$ is a valid block, $r$ is updated:

$$r = r - G$$

A block gas limit does not need to be set as it is implicitly derived from $r$.

The parameters at activation are:

| Parameter | P-Chain Configuration |
| - | - |
| $T$ - target gas consumed per second | 50,000 |
| $M$ - minimum gas price | 1 nAVAX |
| $K$ - gas price update constant | 2_164_043 |
| $C$ - maximum gas capacity | 1,000,000 |
| $R$ - gas capacity added per second | 100,000 |

$K$ was chosen such that at sustained maximum capacity ($R=100,000$ gas/second), the fee rate will double every ~30 seconds. As the network gains capacity to handle additional load, this algorithm can be tuned to increase the gas consumption rate.

#### A note on $e^x$

There is a subtle reason why an exponential adjustment function was chosen: The adjustment function should be _equally_ reactive irrespective of the actual fee.

Define $b_n$ as the current block's gas fee, $b_{n+1}$ as the next block's gas fee, and $x$ as the excess gas consumption.

Let's use a linear adjustment function:

$$b_{n+1} = b_n + 10x$$

Assume $b_n = 100$ and the current block is 1 unit above target utilization, or $x = 1$. Then, $b_{n+1} = 100 + 10 \cdot 1 = 110$, an increase of `10%`. If instead $b_n = 10,000$, $b_{n+1} = 10,000 + 10 \cdot 1 = 10,010$, an increase of `0.1%`.

The fee is _less_ reactive as the fee increases. This is because the rate of change _does not scale_ with $x$.
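The percentages in the linear example can be checked with a few lines of Python (illustrative only):

```python
def linear_next_fee(fee: int, excess: int) -> int:
    """Toy linear rule from the example above: b_{n+1} = b_n + 10x."""
    return fee + 10 * excess

# The same absolute bump is a shrinking relative change as the fee grows:
for b in (100, 10_000):
    nxt = linear_next_fee(b, 1)
    print(b, "->", nxt, f"(+{(nxt - b) / b:.1%})")
```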
Now, let's use an exponential adjustment function:

$$b_{n+1} = b_n \cdot e^x$$

Assume $b_n = 100$ and the current block is 1 unit above target utilization, or $x = 1$. Then, $b_{n+1} = 100 \cdot e^1 \approx 271.828$, an increase of `171%`. If instead $b_n = 10,000$, $b_{n+1} = 10,000 \cdot e^1 \approx 27,182.8$, an increase of `171%` again.

The fee is _equally_ reactive as the fee increases. This is because the rate of change _scales_ with $x$.

### Block Building Procedure

When a transaction is constructed on the P-Chain, the amount of $AVAX burned is given by `sum($AVAX outputs) - sum($AVAX inputs)`. The amount of gas consumed by the transaction can be deterministically calculated after construction. Dividing the amount of $AVAX burned by the amount of gas consumed yields the maximum gas price that the transaction can pay.

Instead of using a FIFO queue for the mempool (like the P-Chain does now), the mempool should use a priority queue ordered by the maximum gas price of each transaction. This ensures that higher paying transactions are included first.

## Backwards Compatibility

Modification of a fee mechanism is an execution change and requires a mandatory upgrade for activation. Implementers must take care to not alter the execution behavior prior to activation.

After this ACP is activated, any transaction issued on the P-Chain must account for the fee mechanism defined above. Users are responsible for reconstructing their transactions to include a larger fee for quicker inclusion when the fee increases.

## Reference Implementation

ACP-103 was implemented into AvalancheGo behind the `Etna` upgrade flag. The full body of work can be found tagged with the `acp103` label [here](https://github.com/ava-labs/avalanchego/pulls?q=is%3Apr+label%3Aacp103).

## Security Considerations

The current fixed fee mechanism on the X-Chain and P-Chain does not robustly handle spikes in load.
Migrating the P-Chain to a dynamic fee mechanism will ensure that any additional load caused by demand for new P-Chain features (such as those introduced in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md)) is properly priced given allotted processing capacity.

The X-Chain, in comparison, currently has significantly lower usage, making it less likely that demand for its blockspace will exceed the current static fee rates. If necessary or desired, a future ACP can reuse the mechanism introduced here to add dynamic fee rates to the X-Chain.

## Acknowledgements

Thank you to [@aaronbuchwald](https://github.com/aaronbuchwald) and [@patrick-ogrady](https://github.com/patrick-ogrady) for providing feedback prior to publication. Thank you to the authors of [EIP-4844](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4844.md) for creating the fee design that inspired the above mechanism.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-108: Evm Event Importing (/docs/acps/108-evm-event-importing)

---
title: "ACP-108: Evm Event Importing"
description: "Details for Avalanche Community Proposal 108: Evm Event Importing"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/108-evm-event-importing/README.md
---

| ACP | 108 |
| :--- | :--- |
| **Title** | EVM Event Importing Standard |
| **Author(s)** | Michael Kaplan ([@mkaplan13](https://github.com/mkaplan13)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/114)) |
| **Track** | Best Practices Track |

## Abstract

Defines a standard smart contract interface and abstract implementation for importing EVM events from any blockchain within Avalanche using [Avalanche Warp Messaging](https://docs.avax.network/build/cross-chain/awm/overview).
## Motivation

The implementation of Avalanche Warp Messaging within `coreth` and `subnet-evm` exposes a [mechanism for getting authenticated hashes of blocks](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IWarpMessenger.sol#L43) that have been accepted on blockchains within Avalanche. Proofs of acceptance of blocks, such as those introduced in [ACP-75](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/75-acceptance-proofs), can be used to prove arbitrary events and state changes that occurred in those blocks. However, there is currently no clear standard for using authenticated block hashes in smart contracts within Avalanche, making it difficult to build applications that leverage this mechanism.

In order to make effective use of authenticated block hashes, contracts must be provided encoded block headers that match the authenticated block hashes, along with Merkle proofs that are verified against the state or receipts root contained in the block header. With a standard interface and abstract contract implementation that handles the authentication of block hashes and verification of Merkle proofs, smart contract developers on Avalanche will be able to much more easily create applications that leverage data from other Avalanche blockchains. These types of cross-chain applications do not require any direct interaction on the source chain.

## Specification

### Event Importing Interface

We propose that smart contracts importing EVM events emitted by other blockchains within Avalanche implement the following interface.

#### Methods

Imports the EVM event uniquely identified by the source blockchain ID, block header, transaction index, and log index. The `blockHeader` must be validated to match the authenticated block hash from the `sourceBlockchainID`. The specification for EVM block headers can be found [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/block.go#L73).
The `txIndex` identifies the key of the receipts trie of the given block header that the `receiptProof` must prove inclusion of. The value obtained by verifying the `receiptProof` for that key is the encoded transaction receipt. The specification for EVM transaction receipts can be found [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/receipt.go#L62).

The `logIndex` identifies which event log from the given transaction receipt is to be imported.

Must emit an `EventImported` event upon success.

```solidity
function importEvent(
    bytes32 sourceBlockchainID,
    bytes calldata blockHeader,
    uint256 txIndex,
    bytes[] calldata receiptProof,
    uint256 logIndex
) external;
```

This interface does not require that the Warp precompile is used to authenticate block hashes. Implementations could:

- Use the Warp precompile to authenticate block hashes provided directly in the transaction calling `importEvent`.
- Check previously authenticated block hashes using an external contract.
  - Allows for a block hash to be authenticated once and used in arbitrarily many transactions afterwards.
  - Allows for alternative authentication mechanisms to be used, such as trusted oracles.

#### Events

Must trigger when an EVM event is imported.

```solidity
event EventImported(
    bytes32 indexed sourceBlockchainID,
    bytes32 indexed sourceBlockHash,
    address indexed loggerAddress,
    uint256 txIndex,
    uint256 logIndex
);
```

### Event Importing Abstract Contract

Applications importing EVM events emitted by other blockchains within Avalanche should be able to use a standard abstract implementation of the `importEvent` interface. This abstract implementation must handle:

- Authenticating block hashes from other chains.
- Verifying that the encoded `blockHeader` matches the imported block hash.
- Verifying the Merkle `receiptProof` for the given `txIndex` against the receipt root of the provided `blockHeader`.
- Decoding the event log identified by `logIndex` from the receipt obtained from verifying the `receiptProof`.

As noted above, implementations could directly use the Warp precompile's `getVerifiedWarpBlockHash` interface method for authenticating block hashes, as is done in the reference implementation [here](https://github.com/ava-labs/event-importer-poc/blob/main/contracts/src/EventImporter.sol#L51). Alternatively, implementations could use the `sourceBlockchainID` and `blockHeader` provided in the parameters to check with an external contract that the block has been accepted on the given chain. The specifics of such an external contract are outside the scope of this ACP, but for illustrative purposes, this could look along the lines of:

```solidity
bool valid = blockHashRegistry.checkAuthenticatedBlockHash(
    sourceBlockchainID,
    keccak256(blockHeader)
);
require(valid, "Invalid block header");
```

Inheriting contracts should only need to define the logic to be executed when an event is imported. This is done by providing an implementation of the following internal function, called by `importEvent`.

```solidity
function _onEventImport(EVMEventInfo memory eventInfo) internal virtual;
```

Where the `EVMEventInfo` struct is defined as:

```solidity
struct EVMLog {
    address loggerAddress;
    bytes32[] topics;
    bytes data;
}

struct EVMEventInfo {
    bytes32 blockchainID;
    uint256 blockNumber;
    uint256 txIndex;
    uint256 logIndex;
    EVMLog log;
}
```

The `EVMLog` struct is meant to match the `Log` type definition in the EVM [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/log.go#L39).

## Reference Implementation

See reference implementation on [Github here](https://github.com/ava-labs/event-importer-poc). In addition to implementing the interface and abstract contract described above, the reference implementation shows how transactions can be constructed to import events using Warp block hash signatures.
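For intuition, the checks performed by such an abstract contract can be sketched off-chain in Python. This is a simplified stand-in, not the reference implementation: sha256 replaces keccak256, a toy binary Merkle proof replaces the EVM's Merkle-Patricia receipt proof, and the receipts root is passed separately rather than decoded from the header:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Hash helper; sha256 stands in for the EVM's keccak256."""
    return hashlib.sha256(b"".join(parts)).digest()

def verify_receipt_proof(encoded_receipt: bytes, proof, root: bytes) -> bool:
    """Toy binary-Merkle check; the real receipts trie is a
    Merkle-Patricia trie keyed by the RLP-encoded transaction index."""
    node = h(encoded_receipt)
    for node_is_right, sibling in proof:
        node = h(sibling, node) if node_is_right else h(node, sibling)
    return node == root

def import_event(verified_block_hash, block_header, receipts_root,
                 encoded_receipt, logs, receipt_proof, log_index):
    # 1. The provided header must match the authenticated block hash
    #    (on-chain, obtained via getVerifiedWarpBlockHash or a registry).
    if h(block_header) != verified_block_hash:
        raise ValueError("block header does not match authenticated hash")
    # 2. The receipt must be proven against the header's receipts root.
    if not verify_receipt_proof(encoded_receipt, receipt_proof, receipts_root):
        raise ValueError("invalid receipt proof")
    # 3. Select the log to hand to the _onEventImport-style callback.
    return logs[log_index]
```

The on-chain version performs the same three steps with real keccak256 hashing and Merkle-Patricia proof verification.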
## Open Questions

See [here](https://github.com/ava-labs/event-importer-poc?tab=readme-ov-file#open-questions-and-considerations).

## Security Considerations

The correctness of a contract using block hashes to prove that a specific event was emitted within that block depends on the correctness of:

1. The mechanism for authenticating that a block hash was finalized on another blockchain.
2. The Merkle proof validation library used to prove that a specific transaction receipt was included in the given block.

For considerations on using Avalanche Warp Messaging to authenticate block hashes, see [here](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/30-avalanche-warp-x-evm#security-considerations). To improve confidence in the correctness of the Merkle proof validation used in implementations, well-audited and widely used libraries should be used.

## Acknowledgements

Using Merkle proofs to verify events/state against root hashes is not a new idea. Protocols such as [IBC](https://ibc.cosmos.network/v8/), [Rainbow Bridge](https://github.com/Near-One/rainbow-bridge), and [LayerZero](https://layerzero.network/publications/LayerZero_Whitepaper_V1.1.0.pdf), among others, have previously suggested using Merkle proofs in a similar manner.

Thanks to [@aaronbuchwald](https://github.com/aaronbuchwald) for proposing the `getVerifiedWarpBlockHash` interface be included in the AWM implementation within Avalanche EVMs, which enables this type of use case.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-113: Provable Randomness (/docs/acps/113-provable-randomness)

---
title: "ACP-113: Provable Randomness"
description: "Details for Avalanche Community Proposal 113: Provable Randomness"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/113-provable-randomness/README.md
---

| ACP | 113 |
| :--- | :--- |
| **Title** | Provable Virtual Machine Randomness |
| **Author(s)** | Tsachi Herman [http://github.com/tsachiherman](http://github.com/tsachiherman) |
| **Status** | Stale ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/142)) |
| **Track** | Standards |

## Future Work

This ACP was marked as stale due to its documented security concerns. In order to safely utilize randomness produced by this mechanism, the consumer of the randomness must:

1. Define a security threshold `x` which is the maximum number of consecutive blocks which can be proposed by a malicious entity.
2. After committing to a request for randomness, the consumer must wait for `x` blocks.
3. After waiting for `x` blocks, the consumer must verify that the randomness was not biased during the `x` blocks.
4. If the randomness was biased, it would be insufficient to request randomness again, as this would allow the malicious block producer to discard any randomness that it did not like.

If using the randomness mechanism proposed in this ACP, the consumer of the randomness must be able to terminate the request for randomness in such a way that no participant would desire the outcome. Griefing attacks would likely result from such a construction.

### Alternative Mechanisms

There are alternative mechanisms that would not result in such security concerns, such as:

- Utilizing a deterministic threshold signature scheme to finalize a block in consensus would allow the threshold signature to be used during the execution of the block.
- Utilizing threshold commit-reveal schemes that guarantee that committed values will always be revealed in a timely manner.

However, these mechanisms are likely too costly to be introduced into the Avalanche Primary Network due to its validator set size. It is left to a future ACP to specify the implementation of one of these alternative schemes for L1 networks with smaller validator sets.

## Abstract

Avalanche offers developers flexibility through subnets and EVM-compatible smart contracts. However, the platform's deterministic block execution limits the use of traditional random number generators within these contracts. To address this, a mechanism is proposed to generate verifiable, non-cryptographic random number seeds on the Avalanche platform. This method ensures uniformity while allowing developers to build more versatile applications.

## Motivation

Reliable randomness is essential for building exciting applications on Avalanche. Games, participant selection, dynamic content, supply chain management, and decentralized services all rely on unpredictable outcomes to function fairly. Randomness also fuels functionalities like unique identifiers and simulations.

Without a secure way to generate random numbers within smart contracts, Avalanche applications become limited. Avalanche's traditional reliance on external oracles for randomness creates complexity and bottlenecks. These oracles inflate costs, hinder transaction speed, and are cumbersome to integrate. As Avalanche scales to more Subnets, this dependence on external systems becomes increasingly unsustainable.

A solution for verifiable random number generation within Avalanche solves these problems. It provides fair randomness functionality across the chains, at no additional cost. This paves the way for a more efficient Avalanche ecosystem.

## Specification

### Changes Summary

The existing Avalanche protocol breaks block building into two parts: external and internal.
The external block is the Snowman++ block, whereas the internal block is the actual virtual machine block. To support randomness, a BLS-based VRF implementation is used that recursively signs its own signatures as its message. Since BLS signatures are deterministic, they provide a great way to construct a reliable VRF.

For proposers that do not have a BLS key associated with their node, the hash of the signature from the previous round is used in place of their signature. In order to bootstrap the signature chain, a missing signature would be replaced with a byte slice that is the hash product of a verifiable and trustable seed.

The changes proposed here would affect the way blocks are validated. Therefore, when this change gets implemented, it needs to be deployed as a mandatory upgrade.

```
+-----------------------+            +-----------------------+
| Block n               | <--------  | Block n+1             |
+-----------------------+            +-----------------------+
| VRF-Sig(n)            |            | VRF-Sig(n+1)          |
| ...                   |            | ...                   |
+-----------------------+            +-----------------------+

+-----------------------+            +-----------------------+
| VM n                  |            | VM n+1                |
+-----------------------+            +-----------------------+
| VRF-Out(n)            |            | VRF-Out(n+1)          |
+-----------------------+            +-----------------------+

VRF-Sig(n+1) = Sign(VRF-Sig(n), Block n+1 proposer's BLS key)
VRF-Out(n)   = Hash(VRF-Sig(n))
```

### Changes Details

#### Step 1. Adding BLS signature to proposed blocks

```go
type statelessUnsignedBlock struct {
	…
	vrfSig []byte `serialize:"true"`
}
```

#### Step 2. Populate signature

When a block proposer attempts to build a new block, it would need to use the parent block as a reference. The `vrfSig` field within each block is going to be daisy-chained to the `vrfSig` field from its parent block.

Populating the `vrfSig` would follow this logic:

1. The current proposer has a BLS key
   a. If the parent block has an empty `vrfSig` signature, the proposer would sign the bootStrappingBlockSignature with its BLS key.
See the bootStrappingBlockSignature details below. This is the base case.
   b. If the parent block does not have an empty `vrfSig` signature, that signature would be signed using the proposer's BLS key.
2. The current proposer does not have a BLS key
   a. If the parent block has a non-empty `vrfSig` signature, the proposer would set the proposed block's `vrfSig` to the 32-byte hash result of the following preimage:

```
+-------------------------+----------+------------+
| prefix :                | [8]byte  | "rng-derv" |
+-------------------------+----------+------------+
| vrfSig :                | [96]byte | 96 bytes   |
+-------------------------+----------+------------+
```

   b. If the parent block has an empty `vrfSig` signature, the proposer would leave the `vrfSig` on the new block empty.

The bootStrappingBlockSignature that would be used above is the hash of the following preimage:

```
+-----------------------+----------+------------+
| prefix :              | [8]byte  | "rng-root" |
+-----------------------+----------+------------+
| networkID:            | uint32   | 4 bytes    |
+-----------------------+----------+------------+
| chainID :             | [32]byte | 32 bytes   |
+-----------------------+----------+------------+
```

#### Step 3. Signature Verification

This signature verification would perform the exact opposite of what was done in step 2, and would verify the cryptographic correctness of the operation.

Validating the `vrfSig` would follow this logic:

1. The proposer has a BLS key
   a. If the parent block's `vrfSig` was non-empty, then the `vrfSig` in the proposed block is verified to be a valid BLS signature of the parent block's `vrfSig` value for the proposer's BLS public key.
   b. If the parent block's `vrfSig` was empty, then a BLS signature verification of the proposed block's `vrfSig` against the proposer's BLS public key and bootStrappingBlockSignature would take place.
2. The proposer does not have a BLS key
   a.
If the parent block had a non-empty `vrfSig`, then the hash of the preimage (as described above) would be compared against the proposed `vrfSig`.
   b. If the parent block has an empty `vrfSig`, then the proposer's `vrfSig` would be validated to be empty.

#### Step 4. Extract the VRF Out and pass to block builders

Calculating the VRF Out would be done by hashing the preimage of the following struct:

```
+-----------------------+----------+------------+
| prefix :              | [8]byte  | "vrfout "  |
+-----------------------+----------+------------+
| vrfout:               | [96]byte | 96 bytes   |
+-----------------------+----------+------------+
```

Before calculating the VRF Out, the method needs to explicitly check the case where the `vrfSig` is empty. In that case, the output of the VRF Out needs to be empty as well.

## Backwards Compatibility

The above design takes backward compatibility into consideration. The chain would keep working as before, and at some point, would have the newly added `vrfSig` populated.

From a usage perspective, each VM would need to make its own decision on whether it should use the newly provided random seed. Initially, this random seed would be all zeros, and it would get populated once the feature has rolled out to a sufficient number of nodes.

Also, as mentioned in the summary, these changes would necessitate a network upgrade.

## Reference Implementation

A full reference implementation has not been provided yet. It will be provided once this ACP is considered `Implementable`.

## Security Considerations

Virtual machine random seeds, while appearing to offer a source of randomness within smart contracts, fall short when it comes to cryptographic security. Here's a breakdown of the critical issues:

- Limited Permutation Space: The number of possible random values is derived from the number of validators.
While no validator, nor a validator set, would be able to manipulate the randomness into any single value, nefarious actors might be able to exclude specific numbers.
- Predictability Window: The seed value might be accessible to other parties before the smart contract can benefit from its uniqueness. This predictability window creates a vulnerability. An attacker could potentially observe the seed generation process and predict the sequence of "random" numbers it will produce, compromising the entire cryptographic foundation of the smart contract.

Despite these limitations appearing severe, attackers face significant hurdles to exploit them. First, the attacker can't control the random number, limiting the attack's effectiveness to how that number is used. Second, a substantial amount of AVAX is needed. And last, such an attack would likely decrease AVAX's value, hurting the attacker financially.

One potential attack vector involves collusion among multiple proposers to manipulate the random number selection. These attackers could strategically choose to propose or abstain from proposing blocks, effectively introducing a bias into the system. By working together, they could potentially increase their chances of generating a random number favorable to their goals. However, the effectiveness of this attack is significantly limited for the following reasons:

- Limited options: While colluding attackers expand their potential random number choices, the overall pool remains immense (2^256 possibilities). This drastically reduces their ability to target a specific value.
- Protocol's countermeasure: The protocol automatically eliminates any bias introduced by previous proposals once an honest proposer submits their block.
- Detectability: Exploitation of this attack vector is readily identifiable.
A successful attack necessitates coordinated collusion among multiple nodes to synchronize their proposer slots for a specific block height (the proposer slot order is known in advance). Subsequent to this alignment, a designated node constructs the block proposal. The network maintains a record of the proposer slot utilized for each block. A value of zero for the proposer slot unequivocally indicates the absence of an exploit. Increasing values correlate with a heightened risk of exploitation. It is important to note that non-zero slot numbers may also arise from transient network disturbances.

While this attack is theoretically possible, its practical impact is negligible due to the vast number of potential outcomes and the protocol's inherent safeguards.

## Open Questions

### How would the proposed changes impact the proposer selection and their inherent bias?

The proposed modifications will not influence the selection process for block proposers. Proposers retain the ability to determine which transactions are included in a block. This inherent proposer bias remains unchanged and is unaffected by the proposed changes.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-118: Warp Signature Request (/docs/acps/118-warp-signature-request)

---
title: "ACP-118: Warp Signature Request"
description: "Details for Avalanche Community Proposal 118: Warp Signature Request"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/118-warp-signature-request/README.md
---

| ACP | 118 |
| :--- | :--- |
| **Title** | Warp Signature Interface Standard |
| **Author(s)** | Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/123)) |
| **Track** | Best Practices Track |

## Abstract

Proposes a standard [AppRequest](https://github.com/ava-labs/avalanchego/blob/master/proto/p2p/p2p.proto#L385) payload format type for requesting Warp signatures for the provided bytes, such that signatures may be requested in a VM-agnostic manner. To make this concrete, this standard type should be defined in AvalancheGo such that VMs can import it at the source code level. This will simplify signature aggregator implementations by allowing them to depend only on AvalancheGo for message construction, rather than individual VM codecs.

## Motivation

Warp message signatures consist of an aggregate BLS signature composed of the individual signatures of a subnet's validators. Individual signatures need to be retrievable by the party that wishes to construct an aggregate signature. At present, this is left to VMs to implement, as is the case with [Subnet EVM](https://github.com/ava-labs/subnet-evm/blob/v0.6.7/plugin/evm/message/signature_request.go#20) and [Coreth](https://github.com/ava-labs/coreth/blob/v0.13.6-rc.0/plugin/evm/message/signature_request.go#L20). This creates friction in applications that are intended to operate across many VMs (or distinct implementations of the same VM).
As an example, the reference Warp message relayer implementation, [awm-relayer](https://github.com/ava-labs/awm-relayer), fetches individual signatures from validators and aggregates them before sending the Warp message to its destination chain for verification. However, Subnet EVM and Coreth have distinct codecs, requiring the relayer to [switch](https://github.com/ava-labs/awm-relayer/blob/v1.4.0-rc.0/relayer/application_relayer.go#L372) according to the target codebase.

Another example is ACP-75, which aims to implement acceptance proofs using Warp. The signature aggregation mechanism is not [specified](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/75-acceptance-proofs/README.md#signature-aggregation), which is a blocker for that ACP to be marked implementable. Standardizing the Warp Signature Request interface by defining it as a format for `AppRequest` message payloads in AvalancheGo would simplify the implementation of ACP-75, and streamline signature aggregation for out-of-protocol services such as Warp message relayers.

## Specification

We propose the following types, implemented as Protobuf types that may be decoded from the `AppRequest`/`AppResponse` `app_bytes` field. By way of example, this approach is currently used to [implement](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/proto/sdk/sdk.proto#7) and [parse](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/network/p2p/gossip/message.go#22) gossip `AppRequest` types.

- `SignatureRequest` includes two fields. `message` specifies the payload that the returned signature should correspond to, namely a serialized unsigned Warp message. `justification` specifies arbitrary data that the queried node may use to decide whether or not it is willing to sign `message`. `justification` may not be required by every VM implementation, but `message` should always contain the bytes to be signed.
It is up to the VM to define the validity requirements for the `message` and `justification` payloads.

```protobuf
message SignatureRequest {
  bytes message = 1;
  bytes justification = 2;
}
```

- `SignatureResponse` is the corresponding `AppResponse` type that returns the requested signature.

```protobuf
message SignatureResponse {
  bytes signature = 1;
}
```

### Handlers

For each of the above types, VMs must implement corresponding `AppRequest` and `AppResponse` handlers. The `AppRequest` handler should be [registered](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/network/p2p/network.go#L173) using the canonical handler ID, defined as `2`.

## Use Cases

Generally speaking, `SignatureRequest` can be used to request a signature over a Warp message by serializing the unsigned Warp message into `message`, and populating `justification` as needed.

### Sign a known Warp Message

Subnet EVM and Coreth store messages that have been seen (i.e. on-chain messages sent through the [Warp Precompile](https://github.com/ava-labs/subnet-evm/tree/v0.6.7/precompile/contracts/warp) and [off-chain](https://github.com/ava-labs/subnet-evm/blob/v0.6.7/plugin/evm/config.go#L226) Warp messages) such that a signature over that message can be provided on request. `SignatureRequest` can be used for this case by specifying the Warp message in `message`. The queried node may then look up the Warp message in its database and return the signature. In this case, `justification` is not needed.

### Attest to an on-chain event

Subnet EVM and Coreth also support attesting to block hashes via Warp, by serving signature requests made using the following `AppRequest` type:

```
type BlockSignatureRequest struct {
	BlockID ids.ID
}
```

`SignatureRequest` can achieve this by specifying an unsigned Warp message with the `BlockID` as the payload, and serializing that message into `message`.
`justification` may optionally be used to provide additional context, such as the block height of the given block ID.

### Confirm that an event did not occur

With [ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets), Subnets will have the ability to manage their own validator sets. The Warp message payload contained in a `RegisterSubnetValidatorTx` includes an `expiry`, after which the specified validation ID (i.e. a unique hash over the Subnet ID, node ID, stake weight, and expiry) becomes invalid. The Subnet needs to know that this validation ID is expired so that it can keep its locally tracked validator set in sync with the P-Chain. We also assume that the P-Chain will not persist expired or invalid validation IDs.

We can use `SignatureRequest` to construct a Warp message attesting that the validation ID expired. We do so by serializing an unsigned Warp message containing the validation ID into `message`, and providing the validation ID hash preimage in `justification` for the P-Chain to reconstruct the expired validation ID.

## Security Considerations

VMs have full latitude when implementing `SignatureRequest` handlers, and should take careful consideration of what `message` payloads their implementation should be willing to sign, given a `justification`. Some considerations include, but are not limited to:

- Input validation. Handlers should validate `message` and `justification` payloads to ensure that they decode to coherent types, and that they contain only expected data.
- Signature DoS. AvalancheGo's peer-to-peer networking stack implements message rate limiting to mitigate the risk of DoS, but VMs should also consider the cost of parsing and signing a `message` payload.
- Payload collision. `message` payloads should be implemented as distinct types that do not overlap with one another within the context of signed Warp messages from the VM.
For instance, a `message` payload specifying a 32-byte hash may be interpreted as a transaction hash, a block hash, or a blockchain ID.

## Backwards Compatibility

This change is backwards compatible for VMs, as nodes running older versions that do not support the new message types will simply drop incoming messages.

## Reference Implementation

A reference implementation containing the Protobuf types and the canonical handler ID can be found [here](https://github.com/ava-labs/avalanchego/pull/3218).

## Acknowledgements

Thanks to @joshua-kim, @iansuvak, @aaronbuchwald, @michaelkaplan13, and @StephenButtolph for discussion and feedback on this ACP.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-125: Basefee Reduction (/docs/acps/125-basefee-reduction)

---
title: "ACP-125: Basefee Reduction"
description: "Details for Avalanche Community Proposal 125: Basefee Reduction"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/125-basefee-reduction/README.md
---

| ACP | 125 |
| :--- | :--- |
| **Title** | Reduce C-Chain minimum base fee from 25 nAVAX to 1 nAVAX |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Darioush Jalali ([@darioush](https://github.com/darioush)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/127)) |
| **Track** | Standards |

## Abstract

Reduce the minimum base fee on the Avalanche C-Chain from 25 nAVAX to 1 nAVAX.

## Motivation

With dynamic fees, the gas price is intended to be the result of a continuous auction in which the consumed gas per second converges to the target gas usage per second. When dynamic fees were first introduced, safeguards were added to ensure the mechanism worked as intended, such as a relatively high minimum gas price and a maximum gas price. The maximum gas price has since been entirely removed, and the minimum gas price has been reduced significantly.
However, the base fee is often observed to be pinned at this minimum. This shows that it is higher than what the market demands, and therefore it is artificially reducing network usage.

## Specification

The dynamic fee calculation currently must enforce a minimum base fee of 25 nAVAX. This change proposes reducing the minimum base fee to 1 nAVAX upon the next network upgrade activation.

## Backwards Compatibility

This change modifies the consensus rules for the C-Chain and therefore requires a network upgrade.

## Reference Implementation

A draft implementation of this ACP for the coreth VM can be found [here](https://github.com/ava-labs/coreth/pull/604/files).

## Security Considerations

Lower gas costs may increase state bloat. However, we note that the dynamic fee algorithm responded appropriately during periods of high use (such as Dec. 2023), which gives reasonable confidence that enforcing a 25 nAVAX minimum fee is no longer necessary.

## Open Questions

N/A

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-13: Subnet Only Validators (/docs/acps/13-subnet-only-validators)

---
title: "ACP-13: Subnet Only Validators"
description: "Details for Avalanche Community Proposal 13: Subnet Only Validators"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/13-subnet-only-validators/README.md
---

| ACP | 13 |
| :--- | :--- |
| **Title** | Subnet-Only Validators (SOVs) |
| **Author(s)** | Patrick O'Grady ([contact@patrickogrady.xyz](mailto:contact@patrickogrady.xyz)) |
| **Status** | Stale |
| **Track** | Standards |
| **Superseded-By** | [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) |

## Abstract

Introduce a new type of staker, Subnet-Only Validators (SOVs), that can validate an Avalanche Subnet and participate in Avalanche Warp Messaging (AWM) without syncing or becoming a Validator on the Primary Network.
Require SOVs to pay a refundable fee of 500 $AVAX on the P-Chain to register as a Subnet Validator instead of staking at least 2000 $AVAX, the minimum requirement to become a Primary Network Validator. Preview a future transition to Pay-As-You-Go Subnet Validation and $AVAX-Augmented Subnet Security. _This ACP does not modify/deprecate the existing Subnet Validation semantics for Primary Network Validators._ ## Motivation Each node operator must stake at least 2000 $AVAX ($20k at the time of writing) to first become a Primary Network Validator before they qualify to become a Subnet Validator. Most Subnets aim to launch with at least 8 Subnet Validators, which requires staking 16000 $AVAX ($160k at time of writing). All Subnet Validators, to satisfy their role as Primary Network Validators, must also [allocate 8 AWS vCPU, 16 GB RAM, and 1 TB storage](https://github.com/ava-labs/avalanchego/blob/master/README.md#installation) to sync the entire Primary Network (X-Chain, P-Chain, and C-Chain) and participate in its consensus, in addition to whatever resources are required for each Subnet they are validating. Avalanche Warp Messaging (AWM), the native interoperability mechanism for the Avalanche Network, provides a way for Subnets to communicate with each other/C-Chain without a trusted intermediary. Any Subnet Validator must be able to register a BLS key and participate in AWM, otherwise a Subnet may not be able to generate a BLS Multi-Signature with sufficient participating stake. Regulated entities that are prohibited from validating permissionless, smart contract-enabled blockchains (like the C-Chain) can’t launch a Subnet because they can’t opt-out of Primary Network Validation. This deployment blocker prevents a large cohort of Real World Asset (RWA) issuers from bringing unique, valuable tokens to the Avalanche Ecosystem (that could move between C-Chain <-> Subnets using AWM/Teleporter). 
A widely validated Subnet that is not properly metered could destabilize the Primary Network if usage spikes unexpectedly. Underprovisioned Primary Network Validators running such a Subnet may exit with an OOM exception, see degraded disk performance, or find it difficult to allocate CPU time to P/X/C-Chain validation. The inverse also holds for Subnets with respect to the Primary Network (where some undefined behavior could bring a Subnet offline). Although the fee paid to the Primary Network to operate a Subnet does not go up with the amount of activity on the Subnet, the fixed, upfront cost of setting up a Subnet Validator on the Primary Network deters new projects that prefer smaller, even variable, costs until demand is observed.

_Unlike L2s that pay some increasing fee (usually denominated in units per transaction byte) to an external chain for data availability and security as activity scales, Subnets provide their own security/data availability, and the only cost operators must pay for processing more activity is the hardware cost of supporting the additional load._

Elastic Subnets allow any community to weight Subnet Validation based on some staking token and reward Subnet Validators with high uptime with said staking token. However, there is no way for $AVAX holders on the Primary Network to augment the security of such Subnets.

## Specification

### Required Changes

1) Introduce a new type of staker, Subnet-Only Validators (SOVs), that can validate an Avalanche Subnet and participate in Avalanche Warp Messaging (AWM) without syncing or becoming a Validator on the Primary Network
2) Introduce a refundable fee (called a "lock") of 500 $AVAX that nodes must pay to become an SOV
3) Introduce a non-refundable fee of 0.1 $AVAX that SOVs must pay to become an SOV
4) Introduce a new transaction type on the P-Chain to register as an SOV (i.e.
`AddSubnetOnlyValidatorTx`)
5) Add a mode to ANCs that allows SOVs to optionally disable full Primary Network verification (only the P-Chain needs to be verified)
6) ANCs track IPs for SOVs to ensure Subnet Validators can find peers whether or not they are Primary Network Validators
7) Provide a guaranteed rate-limiting allowance for SOVs, like Primary Network Validators

Because SOVs do not validate the Primary Network, they will not be rewarded with $AVAX for "locking" the 500 $AVAX required to become an SOV. This enables people interested in validating Subnets to opt for a lower upfront $AVAX commitment and lower infrastructure costs instead of $AVAX rewards. Additionally, SOVs will only be required to sync the P-Chain (not the X/C-Chain) to track any validator set changes in their Subnet and to support Cross-Subnet communication via AWM (see the "Primary Network Partial Sync" mode introduced in [Cortina 8](https://github.com/ava-labs/avalanchego/releases/tag/v1.10.8)). The lower resource requirement in this "minimal mode" will provide Subnets with greater flexibility in validation hardware requirements, as operators are not required to reserve any resources for C-Chain/X-Chain operation. If an SOV wishes to sync the entire Primary Network, it still can.

### Future Work

The previously described specification is a minimal, additive change to Subnet Validation semantics that prepares the Avalanche Network for a more flexible Subnet model. On its own, however, it neither communicates this flexibility nor provides an alternative use for the $AVAX that would have otherwise been used to create Subnet Validators. Below are two high-level ideas (Pay-As-You-Go Subnet Validation Registration Fees and $AVAX-Augmented Security) that highlight how this initial change could be extended in the future. If the Avalanche Community is interested in their adoption, they should each be proposed as a unique ACP where they can be properly specified.
**These ideas are only suggestions for how the Avalanche Network could be modified in the future if this ACP is adopted. Supporting this ACP does not require supporting these ideas or committing to their rollout.** #### Pay-As-You-Go Subnet Validation Registration Fees _Transition Subnet Validator registration to a dynamically priced, continuously charged fee (that doesn't require locking large amounts of $AVAX upfront)._ While it would be possible to just transition to a lower required "lock" amount, many think that it would be more competitive to transition to a dynamically priced, continuous payment mechanism to register as a Subnet Validator. This new mechanism would target some $Y nAVAX fee that would be paid by each Subnet Validator per Subnet per second (pulling from a "Subnet Validator's Account") instead of requiring a large upfront lockup of $AVAX. The rate of nAVAX/second should be set by the demand for validating Subnets on Avalanche compared to some usage target per Subnet and across all Subnets. This rate should be locked for each Subnet Validation period to ensure operators are not subject to surprise costs if demand rises significantly over time. The optimization work outlined in [BLS Multi-Signature Voting](https://hackmd.io/@patrickogrady/100k-subnets#How-will-BLS-Multi-Signature-uptime-voting-work) should allow the min rate to be set as low as ~512-4096 nAVAX/second (or 1.3-10.6 $AVAX/month). Fees paid to the Avalanche Network for PAYG could be burned, like all other P-Chain, X-Chain, and C-Chain transactions, or they could be partially rewarded to Primary Network Validators as a "boost" over the existing staking rewards. The nice byproduct of the latter approach is that it better aligns Primary Network Validators with the growth of Subnets. #### $AVAX-Augmented Subnet Security _Allow pledging unstaked $AVAX to Subnet Validators on Elastic Subnets that can be slashed if said Subnet Validator commits an attributable fault (i.e. 
proposes/signs conflicting blocks/AWM payloads). Reward locked $AVAX associated with Subnet Validators that were not slashed with Elastic Subnet staking rewards._

Currently, the only way to secure an Elastic Subnet is to stake its custom staking token (defined in the `TransformSubnetTx`). Many have requested the option to use $AVAX for this token; however, this could easily allow an adversary to take over small Elastic Subnets (where the amount of $AVAX staked may be much less than the circulating supply). $AVAX-Augmented Subnet Security would allow anyone holding $AVAX to lock it to specific Subnet Validators and earn Elastic Subnet reward tokens for supporting honest participants.

Recall that all stake management on the Avalanche Network (even for Subnets) occurs on the P-Chain. Thus, staked tokens ($AVAX and/or custom staking tokens used in Elastic Subnets) and stake weights (used for AWM verification) are secured by the full $AVAX stake of the Primary Network. $AVAX-Augmented Subnet Security, like staking, would be implemented on the P-Chain and enjoy the full security of the Primary Network. This approach means locking $AVAX occurs on the Primary Network (no need to transfer $AVAX to a Subnet, which may not be secured by meaningful value yet) and proofs of malicious behavior are processed on the Primary Network (a colluding Subnet could otherwise choose not to process a proof that would lead to their "lockers" being slashed).

_This native approach is comparable to the idea of using $ETH to secure DA on [EigenLayer](https://www.eigenlayer.xyz/) (without reusing stake) or $BTC to secure Cosmos Zones on [Babylon](https://babylonchain.io/) (but not using an external ecosystem)._

## Backwards Compatibility

* Existing Subnet Validation semantics for Primary Network Validators are not modified by this ACP. This means that all existing Subnet Validators can continue validating both the Primary Network and whatever Subnets they are validating.
This change would just provide a new option for Subnet Validators that allows them to sacrifice their staking rewards for a smaller upfront $AVAX commitment and lower infrastructure costs.
* Support for this ACP would require adding a new transaction type to the P-Chain (i.e. `AddSubnetOnlyValidatorTx`). This new transaction is an execution-breaking change that would require a mandatory Avalanche Network upgrade to activate.

## Reference Implementation

A full implementation will be provided once this ACP is considered `Implementable`. However, some initial ideas are presented below.

### `AddSubnetOnlyValidatorTx`

```go
type AddSubnetOnlyValidatorTx struct {
    // Metadata, inputs and outputs
    BaseTx `serialize:"true"`
    // Describes the validator
    // The NodeID included in [Validator] must be the Ed25519 public key.
    Validator `serialize:"true" json:"validator"`
    // ID of the subnet this validator is validating
    Subnet ids.ID `serialize:"true" json:"subnetID"`
    // [Signer] is the BLS key for this validator.
    // Note: We do not enforce that the BLS key is unique across all validators.
    // This means that validators can share a key if they so choose.
    // However, a NodeID does uniquely map to a BLS key
    Signer signer.Signer `serialize:"true" json:"signer"`
    // Where to send locked tokens when done validating
    LockOuts []*avax.TransferableOutput `serialize:"true" json:"lock"`
    // Where to send validation rewards when done validating
    ValidatorRewardsOwner fx.Owner `serialize:"true" json:"validationRewardsOwner"`
    // Where to send delegation rewards when done validating
    DelegatorRewardsOwner fx.Owner `serialize:"true" json:"delegationRewardsOwner"`
    // Fee this validator charges delegators as a percentage, times 10,000
    // For example, if this validator has DelegationShares=300,000 then they
    // take 30% of rewards from delegators
    DelegationShares uint32 `serialize:"true" json:"shares"`
}
```

_`AddSubnetOnlyValidatorTx` is almost the same as [`AddPermissionlessValidatorTx`](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/vms/platformvm/txs/add_permissionless_validator_tx.go#L33-L58), the only exception being that `StakeOuts` are now `LockOuts`._

### `GetSubnetPeers`

To support tracking SOV IPs, a new message should be added to the P2P specification that allows Subnet Validators to request the IPs of all peers a node knows about on a Subnet (these Signed IPs won't be gossiped like they are for Primary Network Validators because they don't need to be known by the entire Avalanche Network):

```protobuf
message GetSubnetPeers {
    bytes subnet_id = 1;
}
```

_It would be a nice addition if a bloom filter could also be provided here so that an ANC only sends IPs of peers that the original sender does not know._

ANCs should respond to this incoming message with a [`PeerList` message](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/proto/p2p/p2p.proto#L135-L148).
## Security Considerations

* Any Subnet Validator running in "Partial Sync Mode" will not be able to verify Atomic Imports on the P-Chain and will rely entirely on Primary Network consensus to only accept valid P-Chain blocks.
* High-throughput Subnets will be better isolated from the Primary Network, which should improve its resilience (i.e. surges of traffic on some Subnet cannot destabilize a Primary Network Validator).
* Avalanche Network Clients (ANCs) must track IPs and provide allocated bandwidth for SOVs even though they are not Primary Network Validators.

## Open Questions

* To help orient the Avalanche Community around this wide-ranging, and likely long-running, conversation about the relationship between the Primary Network and Subnets, should we come up with a project name to describe the effort? I've been casually referring to all of these things as the _Astra Upgrade Track_ but am definitely open to discussion (it may be more confusing than it is worth to do this).

## Appendix

A draft of this ACP was posted in the ["Ideas" Discussion Board](https://github.com/avalanche-foundation/ACPs/discussions/10#discussioncomment-7373486), as suggested by the [ACP README](https://github.com/avalanche-foundation/ACPs#step-1-post-your-idea-to-github-discussions). Feedback on this draft was collected and addressed on both the "Ideas" Discussion Board and on [HackMD](https://hackmd.io/@patrickogrady/100k-subnets#Feedback-to-Draft-Proposal).

## Acknowledgements

Thanks to @luigidemeo1, @stephenbuttolph, @aaronbuchwald, @dhrubabasu, and @abi87 for their feedback on these ideas.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-131: Cancun Eips (/docs/acps/131-cancun-eips) --- title: "ACP-131: Cancun Eips" description: "Details for Avalanche Community Proposal 131: Cancun Eips" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/131-cancun-eips/README.md --- | ACP | 131 | | :--- | :--- | | **Title** | Activate Cancun EIPs on C-Chain and Subnet-EVM chains | | **Author(s)** | Darioush Jalali ([@darioush](https://github.com/darioush)), Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/139)) | | **Track** | Standards, Subnet | ## Abstract Enable new EVM opcodes and opcode changes in accordance with the following EIPs on the Avalanche C-Chain and Subnet-EVM chains: - [EIP-4844: BLOBHASH opcode](https://eips.ethereum.org/EIPS/eip-4844) - [EIP-7516: BLOBBASEFEE opcode](https://eips.ethereum.org/EIPS/eip-7516) - [EIP-1153: Transient storage](https://eips.ethereum.org/EIPS/eip-1153) - [EIP-5656: MCOPY opcode](https://eips.ethereum.org/EIPS/eip-5656) - [EIP-6780: SELFDESTRUCT only in same transaction](https://eips.ethereum.org/EIPS/eip-6780) Note blob transactions from EIP-4844 are excluded and blocks containing them will still be considered invalid. ## Motivation The listed EIPs were activated on Ethereum mainnet as part of the [Cancun upgrade](https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/cancun.md#included-eips). This proposal is to activate them on the Avalanche C-Chain in the next network upgrade, to maintain compatibility with upstream EVM tooling, infrastructure, and developer experience (e.g., Solidity compiler defaults >= [0.8.25](https://github.com/ethereum/solidity/releases/tag/v0.8.25)). Additionally, it recommends the activation of the same EIPs on Subnet-EVM chains. 
## Specification & Reference Implementation

The opcodes (EVM execution modifications) and block header modifications should be adopted as specified in the EIPs themselves. Other changes, such as enabling new transaction types or mempool modifications, are not in scope (specifically, blob transactions from EIP-4844 are excluded and blocks containing them are considered invalid). ANCs (Avalanche Network Clients) can adopt the implementation as specified in the [coreth](https://github.com/ava-labs/coreth) repository, which was adopted from the [go-ethereum v1.13.8](https://github.com/ethereum/go-ethereum/releases/tag/v1.13.8) release in this [PR](https://github.com/ava-labs/coreth/pull/550). In particular, note the following code:

- [Activation of new opcodes](https://github.com/ava-labs/coreth/blob/7b875dc21772c1bb9e9de5bc2b31e88c53055e26/core/vm/jump_table.go#L93)
- Activation of Cancun in the next Avalanche upgrade:
  - [C-Chain](https://github.com/ava-labs/coreth/pull/610)
  - [Subnet-EVM chains](https://github.com/ava-labs/subnet-evm/blob/fa909031ed148484c5072d949c5ed73d915ce1ed/params/config_extra.go#L186)
- `ParentBeaconRoot` is enforced to be included and to be the zero value [here](https://github.com/ava-labs/coreth/blob/7b875dc21772c1bb9e9de5bc2b31e88c53055e26/plugin/evm/block_verification.go#L287-L288). This field is retained for future use and compatibility with upstream tooling.
- Blob transactions are forbidden by enforcing `BlobGasUsed` to be 0 [here](https://github.com/ava-labs/coreth/pull/611/files#diff-532a2c6a5365d863807de5b435d8d6475552904679fd611b1b4b10d3bf4f5010R267).

_Note:_ Subnets are sovereign with regard to their validator set and state transition rules, and can choose to opt out of this proposal by making a code change in their respective Subnet-EVM client.

## Backwards Compatibility

The original EIP authors highlighted the following considerations.
For full details, refer to the original EIPs: - [EIP-4844](https://eips.ethereum.org/EIPS/eip-4844#backwards-compatibility): Blob transactions are not proposed to be enabled on Avalanche, so concerns related to mempool or transaction data availability are not applicable. - [EIP-6780](https://eips.ethereum.org/EIPS/eip-6780#backwards-compatibility) "Contracts that depended on re-deploying contracts at the same address using CREATE2 (after a SELFDESTRUCT) will no longer function properly if the created contract does not call SELFDESTRUCT within the same transaction." Adoption of this ACP modifies consensus rules for the C-Chain, therefore it requires a network upgrade. It is recommended that Subnet-EVM chains also adopt this ACP and follow the same upgrade time as Avalanche's next network upgrade. ## Security Considerations Refer to the original EIPs for security considerations: - [EIP 1153](https://eips.ethereum.org/EIPS/eip-1153#security-considerations) - [EIP 4788](https://eips.ethereum.org/EIPS/eip-4788#security-considerations) - [EIP 4844](https://eips.ethereum.org/EIPS/eip-4844#security-considerations) - [EIP 5656](https://eips.ethereum.org/EIPS/eip-5656#security-considerations) - [EIP 6780](https://eips.ethereum.org/EIPS/eip-6780#security-considerations) - [EIP 7516](https://eips.ethereum.org/EIPS/eip-7516#security-considerations) ## Open Questions No open questions. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-151: Use Current Block Pchain Height As Context (/docs/acps/151-use-current-block-pchain-height-as-context)

---
title: "ACP-151: Use Current Block Pchain Height As Context"
description: "Details for Avalanche Community Proposal 151: Use Current Block Pchain Height As Context"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/151-use-current-block-pchain-height-as-context/README.md
---

| ACP | 151 |
| :--- | :--- |
| **Title** | Use current block P-Chain height as context for state verification |
| **Author(s)** | Ian Suvak ([@iansuvak](https://github.com/iansuvak)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/152)) |
| **Track** | Standards |

## Abstract

Proposes that the ProposerVM pass inner VMs the P-Chain block height of the current block being built rather than the P-Chain block height of the parent block. Inner VMs use this P-Chain height for verifying aggregated signatures of Avalanche Interchain Messages (ICM). This will allow for a more reliable way to determine which validators should participate in signing the message, and remove unnecessary waiting periods.

## Motivation

Currently the ProposerVM passes the P-Chain height of the parent block to inner VMs, which use the value to verify ICM messages in the current block. Using the parent block's P-Chain height is necessary for verifying the proposer and reaching consensus on the current block, but it is not necessary for verifying ICM messages within the block. Using the P-Chain height of the current block being built would make operations that use ICM messages to modify the validator set, such as those specified in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md), verifiable sooner and more reliably.
Currently, at least two new P-Chain blocks need to be produced after the relevant state change for it to be reflected for the purposes of ICM aggregate signature verification.

## Specification

The [block context](https://github.com/ava-labs/avalanchego/blob/d2e9d12ed2a1b6581b8fd414cbfb89a6cfa64551/snow/engine/snowman/block/block_context_vm.go#L14) contains a `PChainHeight` field that is passed from the ProposerVM to the inner VMs building the block. It is later used by the inner VMs to fetch the canonical validator set for verification of ICM aggregated signatures. The `PChainHeight` currently passed in by the ProposerVM is the P-Chain height of the parent block. The proposed change is to instead have the ProposerVM pass in the P-Chain height of the current block.

## Backwards Compatibility

This change requires an upgrade to make sure that all validators verifying the validity of ICM messages use the same P-Chain height and therefore the same validator set. Prior to activation, nodes should continue to use the P-Chain height of the parent block.

## Reference Implementation

An implementation of this ACP for avalanchego can be found [here](https://github.com/ava-labs/avalanchego/pull/3459).

## Security Considerations

The ProposerVM needs to use the parent block's P-Chain height to verify proposers for security reasons, but no such restriction exists for verifying ICM message validity in the current block being built. Therefore, this should be a safe change.

## Acknowledgments

Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@michaelkaplan13](https://github.com/michaelkaplan13) for discussion and feedback on this ACP.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-176: Dynamic Evm Gas Limit And Price Discovery Updates (/docs/acps/176-dynamic-evm-gas-limit-and-price-discovery-updates) --- title: "ACP-176: Dynamic Evm Gas Limit And Price Discovery Updates" description: "Details for Avalanche Community Proposal 176: Dynamic Evm Gas Limit And Price Discovery Updates" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md --- | ACP | 176 | | :- | :- | | **Title** | Dynamic EVM Gas Limits and Price Discovery Updates | | **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/178)) | | **Track** | Standards | ## Abstract Proposes that the C-Chain and Subnet-EVM chains adopt a dynamic fee mechanism similar to the one [introduced on the P-Chain as part of ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md), with modifications to allow for block proposers (i.e. validators) to dynamically adjust the target gas consumption per unit time. ## Motivation Currently, the C-Chain has a static gas target of [15,000,000 gas](https://github.com/ava-labs/coreth/blob/39ec874505b42a44e452b8809a2cc6d09098e84e/params/avalanche_params.go#L32) per [10 second rolling window](https://github.com/ava-labs/coreth/blob/39ec874505b42a44e452b8809a2cc6d09098e84e/params/avalanche_params.go#L36), and uses a modified version of the [EIP-1559](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1559.md) dynamic fee mechanism to adjust the base fee of blocks based on the gas consumed in the previous 10 second window. This has two notable drawbacks: 1. The windower mechanism used to determine the base fee of blocks can lead to outsized spikes in the gas price when there is a large block. 
This is because after a large block that uses all of its gas limit, blocks that follow in the same window continue to result in increased gas prices even if they are relatively small blocks that are under the target gas consumption. 2. The static gas target necessitates a required network upgrade in order to modify. This is cumbersome and makes it difficult for the network to adjust its capacity in response to performance optimizations or hardware requirement increases. To better position Avalanche EVM chains, including the C-Chain, to be able to handle future increases in load, we propose replacing the above mechanism with one that better handles blocks that consume a large amount of gas, and that allows for validators to dynamically adjust the target rate of consumption. ## Specification ### Gas Price Determination The mechanism to determine the base fee of a block is the same as the one used in [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) to determine the gas price of a block on the P-Chain. This mechanism calculates the gas price for a given block $b$ based on the following parameters:
|   |   |
|---|---|
| $T$ | the target gas consumed per second |
| $M$ | minimum gas price |
| $K$ | gas price update constant |
| $C$ | maximum gas capacity |
| $R$ | gas capacity added per second |
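Per ACP-103, which this mechanism inherits, these parameters combine into an exponential price function of the accumulated gas excess $x$: the price is $M \cdot e^{\frac{x}{K}}$. A simplified floating-point sketch for illustration (production implementations use a fixed-point approximation of the exponential rather than `math.exp`):

```python
import math

def gas_price(excess: float, min_price: float, update_constant: float) -> float:
    """ACP-103-style price: M * e^(x / K).

    excess (x): accumulated gas consumed above the target
    min_price (M): price floor, reached when the excess is 0
    update_constant (K): controls how quickly the price reacts to excess
    """
    return min_price * math.exp(excess / update_constant)
```

With zero excess the price sits at the minimum $M$, and every additional $K \cdot ln(2)$ of excess doubles it.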
### Making $T$ Dynamic

As noted above, the gas price determination mechanism relies on a target gas consumption per second, $T$, in order to calculate the gas price for a given block. $T$ will be adjusted dynamically according to the following specification.

Let $q$ be a non-negative integer that is initialized to 0 upon activation of this mechanism. Let the target gas consumption per second be expressed as:

$$T = P \cdot e^{\frac{q}{D}}$$

where $P$ is the global minimum allowed target gas consumption rate for the network, and $D$ is a constant that helps control the rate of change of the target gas consumption.

After the execution of transactions in block $b$, the value of $q$ can be increased or decreased by up to $Q$. It must be the case that $\left|\Delta q\right| \leq Q$, or block $b$ is considered invalid. The amount by which $q$ changes after executing block $b$ is specified by the block builder.

Block builders (i.e. validators) may set their desired value for $T$ (i.e. their desired gas consumption rate) in their configuration, and their desired value for $q$ can then be calculated as:

$$q_{desired} = D \cdot ln\left(\frac{T_{desired}}{P}\right)$$

Note that since $q_{desired}$ is only used locally and can be different for each node, it is safe for implementations to approximate the value of $ln\left(\frac{T_{desired}}{P}\right)$ and round the resulting value to the nearest integer.
When building a block, builders can calculate their next preferred value for $q$ based on the network's current value (`q_current`) according to:

```python
# Calculates a node's new desired value for q for a given block
def calc_next_q(q_current: int, q_desired: int, max_change: int) -> int:
    if q_desired > q_current:
        return q_current + min(q_desired - q_current, max_change)
    else:
        return q_current - min(q_current - q_desired, max_change)
```

As $q$ is updated after the execution of transactions within the block, $T$ is also updated such that $T = P \cdot e^{\frac{q}{D}}$ at all times. As the value of $T$ adjusts, the value of $R$ (capacity added per second) is also updated such that:

$$R = 2 \cdot T$$

This ensures that the gas price can increase and decrease at the same rate. The value of $C$ must also adjust proportionately, so we set:

$$C = 10 \cdot T$$

This means that the maximum stored gas capacity would be reached after 5 seconds in which no blocks have been accepted.

In order to keep the time it takes for the gas price to double at sustained maximum network capacity usage roughly constant, the value of $K$ used in the gas price determination mechanism must be updated proportionally to $T$ such that:

$$K = 87 \cdot T$$

In order to have the gas price not be directly impacted by the change in $K$, we also update $x$ (excess gas consumption) proportionally. When updating $x$ after executing a block, instead of setting $x = x + G$ as specified in ACP-103, we set:

$$x_{n+1} = (x + G) \cdot \frac{K_{n+1}}{K_{n}}$$

Note that the values of $q$ (and thus also $T$, $R$, $C$, $K$, and $x$) are updated **after** the execution of block $b$, which means they only take effect in determining the gas price of block $b+1$. The change to each of these values in block $b$ does not affect the gas price for transactions included in block $b$ itself.
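The update rules above can be collected into a small helper that recomputes $T$ and the values derived from it whenever $q$ changes; a sketch using the C-Chain's configured $P$ and $D$:

```python
import math

P = 1_000_000   # minimum target gas consumption per second (C-Chain configuration)
D = 2 ** 25     # target gas consumption rate update constant

def derived_params(q: int) -> dict:
    """Recompute T and the values derived from it after a block updates q."""
    T = P * math.exp(q / D)
    return {
        "T": T,        # target gas consumed per second
        "R": 2 * T,    # gas capacity added per second
        "C": 10 * T,   # maximum gas capacity
        "K": 87 * T,   # gas price update constant
    }
```

At $q = 0$ this reproduces the activation parameters, and any change to $q$ scales all four values by the same factor $e^{\frac{\Delta q}{D}}$.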
Allowing block builders to adjust the target gas consumption rate in blocks that they produce makes it such that the effective target gas consumption rate should converge over time to the point where 50% of the voting stake weight wants it increased and 50% of the voting stake weight wants it decreased. This is because the number of blocks each validator produces is proportional to their stake weight.

As noted in ACP-103, the maximum gas consumed in a given period of time $\Delta{t}$ is $r + R \cdot \Delta{t}$, where $r$ is the remaining gas capacity at the end of the previous block's execution. Since $r \leq C$, the upper bound across all $\Delta{t}$ is $C + R \cdot \Delta{t}$. Phrased differently, the maximum amount of gas that can be consumed by any given block $b$ is:

$$gasLimit_{b} = min(r + R \cdot \Delta{t}, C)$$

### Configuration Parameters

As noted above, the gas price determination mechanism depends on the values of $T$, $M$, $K$, $C$, and $R$ being set as parameters. $T$ is adjusted dynamically from its initial value based on $D$ and $P$, and the values of $R$ and $C$ are derived from $T$. Parameters at activation on the C-Chain are:
| Parameter | Description | C-Chain Configuration |
| - | - | - |
| $P$ | minimum target gas consumption per second | $1,000,000$ |
| $D$ | target gas consumption rate update constant | $2^{25}$ |
| $Q$ | target gas consumption rate update factor change limit | $2^{15}$ |
| $M$ | minimum gas price | $1 \cdot 10^{-18}$ AVAX |
| $K$ | initial gas price update factor | $87,000,000$ |
$P$ was chosen as a safe bound on the minimum target gas usage on the C-Chain. The current gas target of the C-Chain is $1,500,000$ per second. The target gas consumption rate will only stay at $P$ if the majority of the network's stake weight specifies $P$ as their desired gas consumption rate target.

$D$ and $Q$ were chosen to give each block builder the ability to adjust the value of $T$ by roughly $\frac{1}{1024}$ of its current value, which matches the [gas limit bound divisor that Ethereum currently uses](https://github.com/ethereum/go-ethereum/blob/52766bedb9316cd6cddacbb282809e3bdfba143e/params/protocol_params.go#L26) to limit the amount that validators can change the execution layer gas limit in a single block. $D$ and $Q$ were scaled up by a factor of $2^{15}$ to provide block builders more granularity in the adjustments to $T$ that they can make.

$M$ was chosen as the minimum possible denomination of the native EVM asset, such that the gas price is more likely to consistently sit in a range of price discovery. The price discovery mechanism has already been battle tested on the P-Chain (and prior to that on Ethereum for blob gas prices, as defined by EIP-4844), giving confidence that it will correctly react to any increase in network usage in order to prevent a DOS attack.

$K$ was chosen such that at sustained maximum capacity ($2 \cdot T$ gas/second), the fee rate will double every ~60.3 seconds. For comparison, EIP-1559 allows fees to double about every ~70 seconds, and the C-Chain's current implementation allows fees to double about every ~50 seconds, depending on the time between blocks.
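The ~60.3 second figure follows directly from the parameter choices: at sustained maximum capacity, gas is consumed at $2 \cdot T$ per second while the target is $T$, so the excess $x$ grows at $T$ per second, and the price $M \cdot e^{\frac{x}{K}}$ doubles once $x$ has grown by $K \cdot \ln 2 = 87 \cdot T \cdot \ln 2$. A quick check:

```python
import math

K_over_T = 87  # K = 87 * T

# Excess grows at T gas/second at sustained maximum capacity, so the
# doubling time in seconds is (K * ln 2) / T = 87 * ln 2.
doubling_time = K_over_T * math.log(2)  # ~60.3 seconds
```

Note that the $T$ terms cancel, which is exactly why scaling $K$ proportionally to $T$ keeps the doubling time constant as the target changes.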
The maximum instantaneous price multiplier is:

$$e^\frac{C}{K} = e^\frac{10 \cdot T}{87 \cdot T} = e^\frac{10}{87} \simeq 1.12$$

### Choosing $T_{desired}$

As mentioned above, this new mechanism allows validators to specify their desired target gas consumption rate ($T_{desired}$) in their configuration, and the value that they set impacts the effective target gas consumption rate of the network over time. The higher the value of $T$, the more resources (storage, compute, etc.) that can be used by the network. When choosing what value makes sense for them, validators should consider the resources required to properly support that level of gas consumption, the utility the network provides by having higher transaction-per-second throughput, and the stability of the network should it reach that level of utilization. While Avalanche Network Clients can set default configuration values for the desired target gas consumption rate, each validator can choose to set this value independently based on their own considerations.

## Backwards Compatibility

The changes proposed in this ACP require a network upgrade in order to take effect. Prior to its activation, the current gas limit and price discovery mechanisms will continue to be used. Its activation should have relatively minor compatibility effects on developer tooling. Notably, transaction formats, and thus wallets, are not impacted.

After its activation, given that the value of $C$ is dynamically adjusted, the maximum possible gas consumed by an individual block, and thus the maximum possible gas consumed by an individual transaction, will also dynamically adjust. Because the upper bound on the amount of gas consumed by a single transaction fluctuates, transactions that are considered invalid at one time may be considered valid at a different point in time, and vice versa.
While potentially unintuitive, as long as the minimum gas consumption rate is set sufficiently high, this should not have significant practical impact, and it is also currently the case on Ethereum mainnet.

> [!NOTE]
> After the activation of this ACP, concerns were raised around the latency of inclusion for large transactions when the fee is increasing. To address these concerns, block producers SHOULD only produce blocks when there is sufficient capacity to include large transactions. Prior to this ACP, the maximum size of a transaction was $15$ million gas. Therefore, the recommended heuristic is to only produce blocks when there is at least $\min(8 \cdot T, 15 \text{ million})$ capacity. _At the time of writing, this ensures transactions with up to 12.8 million gas will be able to bid for block space._

## Reference Implementation

This ACP was implemented and merged into Coreth behind the `Fortuna` upgrade flag. The full implementation can be found in [coreth@v0.14.1-acp-176.1](https://github.com/ava-labs/coreth/releases/tag/v0.14.1-acp-176.1).

## Security Considerations

This ACP changes the mechanism for determining the gas price on Avalanche EVM chains. The gas price is meant to adapt dynamically to respond to changes in demand for using the chain. If it does not react as expected, the chain could be at risk of a DOS attack (if the usage price is too low), or could overcharge users during periods of low activity. This price discovery mechanism has already been employed on the P-Chain, but it should again be thoroughly tested for use on the C-Chain prior to activation on the Avalanche Mainnet.

Further, this ACP also introduces a mechanism for validators to change the gas limit of the C-Chain. If this limit is set too high, it is possible that validator nodes will not be able to keep up with the processing of blocks.
An upper bound on the maximum possible gas limit could be considered to mitigate this risk, though further network upgrades would then be required to scale the network past that limit.

## Acknowledgments

Thanks to the following non-exhaustive list of individuals for input, discussion, and feedback on this ACP.

- [Emin Gün Sirer](https://x.com/el33th4xor)
- [Luigi D'Onorio DeMeo](https://x.com/luigidemeo)
- [Darioush Jalali](https://github.com/darioush)
- [Aaron Buchwald](https://github.com/aaronbuchwald)
- [Geoff Stuart](https://github.com/geoff-vball)
- [Meag FitzGerald](https://github.com/meaghanfitzgerald)
- [Austin Larson](https://github.com/alarso16)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-181: P Chain Epoched Views (/docs/acps/181-p-chain-epoched-views)

---
title: "ACP-181: P Chain Epoched Views"
description: "Details for Avalanche Community Proposal 181: P Chain Epoched Views"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/181-p-chain-epoched-views/README.md
---

| ACP | 181 |
| :------------ | :----------------------------------------------------------------------------------------- |
| **Title** | P-Chain Epoched Views |
| **Author(s)** | Cam Schultz [@cam-schultz](https://github.com/cam-schultz) |
| **Status** | Implementable ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/211)) |
| **Track** | Standards |

## Abstract

Proposes a standard P-Chain epoching scheme such that any VM that implements it uses a P-Chain block height known prior to the generation of its next block. This would enable VMs to optimize validator set retrievals, which currently must be done during block execution. This standard does *not* introduce epochs to the P-Chain's VM directly. Instead, it provides a standard that may be implemented by layers that inject P-Chain state into VMs, such as the ProposerVM.
## Motivation

The P-Chain maintains a registry of L1 and Subnet validators (including Primary Network validators). Validators are added, removed, or their weights changed by issuing P-Chain transactions that are included in P-Chain blocks. When describing an L1 or Subnet's validator set, what is really being described are the weights, BLS keys, and Node IDs of the active validators at a particular P-Chain height. Use cases that require on-demand views of L1 or Subnet validator sets need to fetch validator sets at arbitrary P-Chain heights, while use cases that require up-to-date views need to fetch them as often as every P-Chain block. Epochs during which the P-Chain height is fixed would widen this window to a predictable epoch duration, allowing these use cases to implement optimizations such as pre-fetching validator sets once per epoch, or allowing more efficient backwards traversal of the P-Chain to fetch historical validator sets.

## Specification

### Assumptions

In the following specification, we assume that a block $b_m$ has timestamp $t_m$ and P-Chain height $p_m$.

### Epoch Definition

An epoch is defined as a contiguous range of blocks that share the same three values:

- An Epoch Number
- An Epoch P-Chain Height
- An Epoch Start Time

Let $E_N$ denote an epoch with epoch number $N$. $E_N$'s start time is denoted as $T_{start}^N$, and its P-Chain height as $P_N$.

Let block $b_a$ be the block that activates this ACP. The first epoch ($E_0$) has $T_{start}^0 = t_{a-1}$ and $P_0 = p_{a-1}$. In other words, the first epoch's start time is the timestamp of the last block prior to the activation of this ACP, and similarly, the first epoch's P-Chain height is the P-Chain height of the last block prior to the activation of this ACP.

### Epoch Sealing

An epoch $E_N$ is *sealed* by the first block with a timestamp greater than or equal to $T_{start}^N + D$, where $D$ is a constant defined in the network upgrade that activates this ACP.
Let $B_{S_N}$ denote the block that sealed $E_N$. The sealing block is defined to be a member of the epoch it seals. This guarantees that every epoch will contain at least one block.

### Advancing an Epoch

We advance from the current epoch $E_N$ to the next epoch $E_{N+1}$ when the next block after $B_{S_N}$ is produced. This block will be a member of $E_{N+1}$, and will have the values:

- $P_{N+1}$ equal to the P-Chain height of $B_{S_N}$
- $T_{start}^{N+1}$ equal to $B_{S_N}$'s timestamp
- An epoch number of $N+1$, incrementing the previous epoch's epoch number by exactly $1$

## Properties

### Epoch Duration Bounds

Since an epoch's start time is set to the [timestamp of the sealing block of the previous epoch](#advancing-an-epoch), all epochs are guaranteed to have a duration of at least $D$, as measured from the epoch's start time to the timestamp of the epoch's sealing block. However, since a sealing block is [defined](#epoch-sealing) to be a member of the epoch it seals, there is no upper bound on an epoch's duration, since that sealing block may be produced at any point in the future beyond $T_{start}^N + D$.

### Fixing the P-Chain Height

When building a block, Avalanche blockchains use the P-Chain height [embedded in the block](#assumptions) to determine the validator set. If instead the epoch P-Chain height is used, then we can ensure that when a block is built, the validator set to be used for the next block is known. To see this, suppose block $b_m$ seals epoch $E_N$. Then the next block, $b_{m+1}$, will begin a new epoch, $E_{N+1}$, with $P_{N+1}$ equal to $b_m$'s P-Chain height, $p_m$. If instead $b_m$ does not seal $E_N$, then $b_{m+1}$ will continue to use $P_{N}$. Both candidates for $b_{m+1}$'s P-Chain height ($p_m$ and $P_N$) are known at $b_m$ build time.
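This property reduces to a simple selection rule for the next block's epoch P-Chain height. A minimal sketch, with illustrative names:

```python
def next_block_epoch_height(parent_pchain_height: int,
                            parent_sealed_epoch: bool,
                            epoch_pchain_height: int) -> int:
    """Epoch P-Chain height for the child block: the parent's own P-Chain
    height if the parent sealed the epoch, otherwise the unchanged epoch
    height. Both candidates are known when the parent block is built."""
    if parent_sealed_epoch:
        return parent_pchain_height
    return epoch_pchain_height
```

Either way, the height used by the child block is fixed before the child is generated, which is the key property enabling the optimizations below.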
## Use Cases

### ICM Verification Optimization

For a validator to verify an ICM message, the signing L1/Subnet's validator set must be retrieved during block verification by traversing backward from the current P-Chain height to the P-Chain height provided by the ProposerVM. The traversal depth is highly variable, so to account for the worst case, VM implementations charge a large amount of gas to perform this verification.

With epochs, validator set retrieval occurs at fixed P-Chain heights that increment at regular intervals, which provides opportunities to optimize this retrieval. For instance, validator set retrieval may be done asynchronously from block verification as soon as an epoch has been sealed. Further, validator sets at a given height can be more effectively cached or otherwise kept in memory, because the same height will be used to verify all ICM messages for the remainder of an epoch. Each of these VM optimizations allows ICM verification costs to be safely reduced by a significant amount within VM implementations.

### Improved Relayer Reliability

Current ICM VM implementations verify ICM messages against the local P-Chain state, as determined by the P-Chain height set by the ProposerVM. Off-chain relayers perform the following steps to deliver ICM messages:

1. Fetch the sending chain's validator set at the verifying chain's current proposed height
1. Collect BLS signatures from that validator set to construct the signed ICM message
1. Submit the transaction containing the signed message to the verifying chain

If the validator set changes between steps 1 and 3, the ICM message will fail verification. Epochs improve upon this by fixing the P-Chain height used to verify ICM messages for a duration of time that is predictable to off-chain relayers. A relayer should be able to derive the epoch boundaries based on the specification above, or it could retrieve that information via a node API.
Relayers could use that information to decide which validator set to query, knowing that it will be stable for the duration of the epoch. Further, VMs could relax the verification rules to allow ICM messages to be verified against the previous epoch as a fallback, eliminating edge cases around the epoch boundary.

## EVM ICM Verification Gas Cost Updates

Since the activation of [ACP-30](https://github.com/avalanche-foundation/ACPs/tree/60cbfc32e7ee2cffed33d8daee980d7a85dded48/ACPs/30-avalanche-warp-x-evm#gas-costs), the cost to verify ICM messages in the Avalanche EVM implementations (i.e. `coreth` and `subnet-evm`) using the `WarpPrecompile` has been based on the worst-case verification flow, including the relatively expensive lookup of the source chain's validator set at an arbitrary P-Chain height used by each new block. This ACP allows for optimizing this verification, as described above.

Prior to this ACP, the gas costs of the relevant `WarpPrecompile` functions were:

```
const (
	GetVerifiedWarpMessageBaseCost  = 2
	GetBlockchainIDGasCost          = 2
	GasCostPerWarpSigner            = 500
	GasCostPerWarpMessageChunk      = 3_200
	GasCostPerSignatureVerification = 200_000
)
```

With optimizations implemented, based on the results of [new benchmarks](https://github.com/ava-labs/coreth/pull/1331) of the `WarpPrecompile` and roughly targeting processing of 150 million gas per second, Avalanche EVM chains with this ACP activated use the following gas costs for the `WarpPrecompile`:

```
const (
	GetVerifiedWarpMessageBaseCost  = 750
	GetBlockchainIDGasCost          = 200
	GasCostPerWarpSigner            = 250
	GasCostPerWarpMessageChunk      = 512
	GasCostPerSignatureVerification = 125_000
)
```

While the performance of `GetVerifiedWarpMessageBaseCost`, `GetBlockchainIDGasCost`, and `GasCostPerWarpMessageChunk` is not directly impacted by this ACP, updated benchmark numbers show the new gas costs to be better aligned with the relative time that the operations take to perform.
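To illustrate the magnitude of the reduction, the sketch below compares old and new costs for a hypothetical message with 10 signers and 4 chunks. The additive composition of the constants is an assumption made for comparison purposes only, not the precompile's exact charging formula:

```python
OLD = {"base": 2, "per_signer": 500, "per_chunk": 3_200, "verify": 200_000}
NEW = {"base": 750, "per_signer": 250, "per_chunk": 512, "verify": 125_000}

def approx_verify_cost(signers: int, chunks: int, c: dict) -> int:
    # Assumed additive composition, for rough comparison only.
    return c["base"] + signers * c["per_signer"] + chunks * c["per_chunk"] + c["verify"]

old_cost = approx_verify_cost(10, 4, OLD)  # 217,802
new_cost = approx_verify_cost(10, 4, NEW)  # 130,298
```

Under this rough model, verification cost for such a message drops by roughly 40%, dominated by the halved signature verification charge.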
## Backwards Compatibility

This change requires a network upgrade and is therefore not backwards compatible. Any downstream entities that depend on a VM's view of the P-Chain will also need to account for epoched P-Chain views. For instance, ICM messages are signed by an L1's validator set at a specific P-Chain height. Currently, the constructor of the signed message can in practice use the validator set at the P-Chain tip, since all deployed Avalanche VMs are at most behind the P-Chain by a fixed number of blocks. With epoching, however, the ICM message constructor must take into account the epoch P-Chain height of the verifying chain, which may be arbitrarily far behind the P-Chain tip.

## Reference Implementation

The following pseudocode illustrates how an epoch may be calculated for a block:

```go
// Epoch Duration
const D time.Duration

type Epoch struct {
	PChainHeight uint64
	Number       uint64
	StartTime    time.Time
}

type Block interface {
	Timestamp() time.Time
	PChainHeight() uint64
	Epoch() Epoch
}

func GetPChainEpoch(parent Block) Epoch {
	parentTimestamp := parent.Timestamp()
	parentEpoch := parent.Epoch()
	epochEndTime := parentEpoch.StartTime.Add(D)
	if parentTimestamp.Before(epochEndTime) {
		// If the parent was issued before the end of its epoch, then it did not
		// seal the epoch.
		return parentEpoch
	}
	// The parent sealed the epoch, so the child is the first block of the new
	// epoch.
	return Epoch{
		PChainHeight: parent.PChainHeight(),
		Number:       parentEpoch.Number + 1,
		StartTime:    parentTimestamp,
	}
}
```

- If the parent sealed its epoch, the current block [advances the epoch](#advancing-an-epoch), refreshing the epoch height, incrementing the epoch number, and setting the epoch start time.
- Otherwise, the current block uses the current epoch height, number, and start time, regardless of whether it seals the epoch.

A full reference implementation of this ACP for avalanchego can be found [here](https://github.com/ava-labs/avalanchego/pull/4238).
### Setting the Epoch Duration

The epoch duration $D$ is set on a network-wide level. For both Fuji (network ID 5) and Mainnet (network ID 1), $D$ will be set to 5 minutes upon activation of this ACP. Any changes to $D$ in the future would require another network upgrade.

#### Changing the Epoch Duration

Future network upgrades may change the value of $D$ to some new duration $D'$. $D'$ should not take effect until the end of the current epoch, rather than at the activation time of the network upgrade that defines $D'$. This ensures that an in-progress epoch at the upgrade activation time cannot have a realized duration less than both $D$ and $D'$.

## Security Considerations

### Epoch P-Chain Height Skew

Because epochs may have [unbounded duration](#epoch-duration-bounds), it is possible for a block's `PChainEpochHeight` to be arbitrarily far behind the tip of the P-Chain. This does not affect the *validity* of ICM verification within a VM that implements P-Chain epoched views, since the validator set at `PChainEpochHeight` is always known. However, the following considerations should be made under this scenario:

1. As validators exit the validator set, their physical nodes may become unavailable to serve BLS signature requests, making it more difficult to construct a valid ICM message
1. A valid ICM message may represent an attestation by a stale validator set. Signatures from validators that have exited the validator set between `PChainEpochHeight` and the current P-Chain tip will not represent active stake.

Both of these scenarios may be mitigated by having shorter epoch lengths, which limit the delay between when the P-Chain is updated and when those updates are taken into account for ICM verification on a given L1, and by ensuring consistent block production, so that epochs always advance soon after $D$ time has passed.
### Excessive Validator Churn

If an epoched view of the P-Chain is used by the consensus engine, then validator set changes over an epoch's duration will be concentrated into a single block at the epoch's boundary. Excessive validator churn can cause consensus failures and other dangerous behavior, so it is imperative that the amount of validator weight change at the epoch boundary be limited. One strategy to accomplish this is to queue validator set changes and spread them out over multiple epochs. Another is to batch updates to the same validator together such that increases and decreases to that validator's weight cancel each other out. Given that the primary use case is ICM verification improvements, which occur at the VM level, mechanisms to mitigate this are omitted from this ACP.

## Open Questions

- What should the epoch duration $D$ be set to?
- Is it safe for `PChainEpochHeight` and `PChainHeight` to differ significantly within a block, due to [unbounded epoch duration](#epoch-duration-bounds)?

## Acknowledgements

Thanks to [@iansuvak](https://github.com/iansuvak), [@geoff-vball](https://github.com/geoff-vball), [@yacovm](https://github.com/yacovm), [@michaelkaplan13](https://github.com/michaelkaplan13), [@StephenButtolph](https://github.com/StephenButtolph), and [@aaronbuchwald](https://github.com/aaronbuchwald) for discussion and feedback on this ACP.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-191: Seamless L1 Creation (/docs/acps/191-seamless-l1-creation)

---
title: "ACP-191: Seamless L1 Creation"
description: "Details for Avalanche Community Proposal 191: Seamless L1 Creation"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/191-seamless-l1-creation/README.md
---

| ACP | 191 |
| :- | :- |
| **Title** | Seamless L1 Creations (CreateL1Tx) |
| **Author(s)** | Martin Eckardt ([@martineckardt](https://github.com/martineckardt)), Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)), Meaghan FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/197)) |
| **Track** | Standards |

## Abstract

This ACP introduces a new P-Chain transaction type called `CreateL1Tx` that simplifies the creation of Avalanche L1s. It consolidates three existing transaction types (`CreateSubnetTx`, `CreateChainTx`, and `ConvertSubnetToL1Tx`) into a single atomic operation. This streamlines the L1 creation process, removes the need for the intermediary Subnet creation step, and eliminates the management of temporary `SubnetAuth` credentials.

## Motivation

[ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) introduced Avalanche L1s, providing greater sovereignty and flexibility compared to Subnets. However, creating an L1 currently requires a three-step process:

1. `CreateSubnetTx`: Create the Subnet record on the P-Chain and specify the `SubnetAuth`
2. `CreateChainTx`: Add a blockchain to the Subnet (can be called multiple times)
3. `ConvertSubnetToL1Tx`: Convert the Subnet to an L1, specifying the initial validator set and the validator manager location

This process has several drawbacks:

* It requires orchestrating three separate transactions that could be handled in one.
* The `SubnetAuth` must be managed during creation but becomes irrelevant after conversion.
* The multi-step process increases complexity and the potential for errors.
* It introduces unnecessary state transitions and storage overhead on the P-Chain.

By introducing a single `CreateL1Tx` transaction, we can simplify the process, reduce overhead, and improve the developer experience for creating L1s.

## Specification

### New Transaction Type

The following new transaction type is introduced:

```go
// ChainConfig represents the configuration for a chain to be created
type ChainConfig struct {
	// A human readable name for the chain; need not be unique
	ChainName string `serialize:"true" json:"chainName"`
	// ID of the VM running on the chain
	VMID ids.ID `serialize:"true" json:"vmID"`
	// IDs of the feature extensions running on the chain
	FxIDs []ids.ID `serialize:"true" json:"fxIDs"`
	// Byte representation of genesis state of the chain
	GenesisData []byte `serialize:"true" json:"genesisData"`
}

// CreateL1Tx is an unsigned transaction to create a new L1 with one or more chains
type CreateL1Tx struct {
	// Metadata, inputs and outputs
	BaseTx `serialize:"true"`
	// Chain configurations for the L1 (can be multiple)
	Chains []ChainConfig `serialize:"true" json:"chains"`
	// Chain where the L1 validator manager lives
	ManagerChainID ids.ID `serialize:"true" json:"managerChainID"`
	// Address of the L1 validator manager
	ManagerAddress types.JSONByteSlice `serialize:"true" json:"managerAddress"`
	// Initial pay-as-you-go validators for the L1
	Validators []*L1Validator `serialize:"true" json:"validators"`
}
```

The `L1Validator` structure follows the same definition as in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md#convertsubnettol1tx).

### Transaction Processing

When a `CreateL1Tx` transaction is processed, the P-Chain performs the following operations atomically:

1. Create a new L1.
2. Create chain records for each chain configuration in the `Chains` array.
3. Set up the L1 validator manager with the specified `ManagerChainID` and `ManagerAddress`.
4. Register the initial validators specified in the `Validators` array.

### IDs

* `subnetID`: The `subnetID` of the L1 is the transaction hash.
* `blockchainID`: The `blockchainID` for each blockchain is defined as the SHA256 hash of the 37 bytes resulting from concatenating the 32 byte `subnetID` with the `0x00` byte and the 4 byte `chainIndex` (index in the `Chains` array within the transaction).
* `validationID`: The `validationID` for the initial validators added through `CreateL1Tx` is defined as the SHA256 hash of the 36 bytes resulting from concatenating the 32 byte `subnetID` with the 4 byte `validatorIndex` (index in the `Validators` array within the transaction).

Note: Even with this updated definition of the `blockchainID`s for chains created using this new flow, the `validationID`s of the L1's initial set of validators are still compatible with the existing reference validator manager contracts, as defined [here](https://github.com/ava-labs/icm-contracts/blob/4a897ba913958def3f09504338a1b9cd48fe5b2d/contracts/validator-manager/ValidatorManager.sol#L247).

### Restrictions and Validation

The `CreateL1Tx` transaction has the following restrictions and validation criteria:

1. The `Chains` array must contain at least one chain configuration
2. The `ManagerChainID` must be a valid blockchain ID, but cannot be the P-Chain blockchain ID
3. Validator nodes must have unique NodeIDs within the transaction
4. Each validator must have a non-zero weight and a non-zero balance
5. The transaction inputs must provide sufficient AVAX to cover the transaction fee and all validator balances

### Warp Message

After the transaction is accepted, the P-Chain must be willing to sign a `SubnetToL1ConversionMessage` with a `conversionID` corresponding to the new L1, similar to what would happen after a `ConvertSubnetToL1Tx`. This ensures compatibility with existing systems that expect this message, such as the validator manager contracts.

## Backwards Compatibility

This ACP introduces a new transaction type and does not modify the behavior of existing transaction types. Existing Subnets and L1s created through the three-step process will continue to function as before. This change is purely additive and does not require any changes to existing L1s or Subnets.

The existing transactions `CreateSubnetTx`, `CreateChainTx`, and `ConvertSubnetToL1Tx` remain unchanged for now, but may be removed in a future ACP to ensure systems have sufficient time to update to the new process.

## Reference Implementation

A reference implementation must be provided in order for this ACP to be considered implementable.

## Security Considerations

The `CreateL1Tx` transaction follows the same security model as the existing three-step process. By making L1 creation atomic, it reduces the risk of partial state transitions that could occur if one of the transactions in the three-step process fails.

The same continuous fee mechanism introduced in ACP-77 applies to L1s created through this new transaction type, ensuring proper metering of validator resources.

The transaction verification process must ensure that all validator properties are properly validated, including unique NodeIDs, valid BLS signatures, and sufficient balances.

## Rationale and Alternatives

The primary alternative is to maintain the status quo: requiring three separate transactions to create an L1.
However, this approach has clear disadvantages in terms of complexity, transaction overhead, and user experience.

Another alternative would be to modify the existing `ConvertSubnetToL1Tx` to allow specifying chain configurations directly. However, this would complicate the conversion process for existing Subnets and would not fully address the desire to eliminate the Subnet intermediary step for new L1 creation.

The chosen approach of introducing a new transaction type provides a clean solution that addresses all identified issues while maintaining backward compatibility.

## Acknowledgements

The idea for this ACP was originally formulated by Aaron Buchwald in our discussion about the creation of L1s. Special thanks to the authors of ACP-77 for their groundbreaking work on Avalanche L1s, and to the projects that have shared their experiences and challenges with the current validator manager framework.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-194: Streaming Asynchronous Execution (/docs/acps/194-streaming-asynchronous-execution)

---
title: "ACP-194: Streaming Asynchronous Execution"
description: "Details for Avalanche Community Proposal 194: Streaming Asynchronous Execution"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/194-streaming-asynchronous-execution/README.md
---

| ACP | 194 |
| :--- | :--- |
| **Title** | Streaming Asynchronous Execution |
| **Author(s)** | Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/196)) |
| **Track** | Standards |

## Abstract

Streaming Asynchronous Execution (SAE) decouples consensus and execution by introducing a queue upon which consensus is performed.
A concurrent execution stream is responsible for clearing the queue and reporting a delayed state root for recording by later rounds of consensus. Validation of transactions to be pushed to the queue is lightweight but guarantees eventual execution. ## Motivation ### Performance improvements 1. Concurrent consensus and execution streams eliminate node context switching, reducing latency caused by each waiting on the other. In particular, "VM time" (akin to CPU time) more closely aligns with wall time since it is no longer eroded by consensus. This increases gas per wall-second even without an increase in gas per VM-second. 2. Lean, execution-only clients can rapidly execute the queue agreed upon by consensus, providing accelerated receipt issuance and state computation. Without the need to compute state _roots_, such clients can eschew expensive Merkle data structures. End users see expedited but identical transaction results. 3. Irregular stop-the-world events like database compaction are amortised over multiple blocks. 4. Introduces additional bursty throughput by eagerly accepting transactions, without a reduction in security guarantees. 5. Third-party accounting of non-data-dependent transactions, such as EOA-to-EOA transfers of value, can be performed prior to execution. ### Future features Performing transaction execution after consensus sequencing allows the usage of consensus artifacts in execution. This unblocks some additional future improvements: 1. Exposing a real-time VRF during transaction execution. 2. Using an encrypted mempool to reduce front-running. This ACP does not introduce these, but some form of asynchronous execution is required to correctly implement them. ### User stories 1. A sophisticated DeFi trader runs a highly optimised execution client, locally clearing the transaction queue well in advance of the network—setting the stage for HFT DeFi. 2. 
A custodial platform filters the queue for only those transactions sent to one of their EOAs, immediately crediting user balances. ## Description In all execution models, a block is _proposed_ and then verified by validators before being _accepted_. To assess a block's validity in _synchronous_ execution, its transactions are first _executed_ and only then _accepted_ by consensus. This immediately and implicitly _settles_ all of the block's transactions by including their execution results at the time of _acceptance_.

```mermaid
graph LR
    E[Executed] --> A[Accepted/Settled]
```

Under SAE, a block is considered valid if all of its transactions can be paid for when eventually _executed_, after which the block is _accepted_ by consensus. The act of _acceptance_ enqueues the block to be _executed_ asynchronously. In the future, some as-yet-unknown later block will reference the execution results and _settle_ all transactions from the _executed_ block.

```mermaid
graph LR
    A[Accepted] -->|variable delay| E[Executed]
    E -->|τ seconds| S[Settled]
    A -. guarantees .-> S
```

### Block lifecycle #### Proposing blocks The validator selection mechanism for block production is unchanged. However, block builders are no longer expected to execute transactions during block building. The block builder is expected to include transactions by building upon the most recently settled state and to apply worst-case bounds on the execution of the ancestor blocks prior to the most recently settled block. The worst-case bounds enforce minimum balances of sender accounts and the maximum required base fee. The worst-case bounds are described [below](#block-validity-and-building). Prior to adding a proposed block to consensus, all validators MUST verify that the block builder correctly enforced the worst-case bounds while building the block. This guarantees that the block can be executed successfully if it is accepted.
> [!NOTE] > The worst-case bounds guarantee does not provide assurance about whether or not a transaction will revert nor whether its computation will run out of gas by reaching the specified limit. The verification only ensures the transaction is capable of paying for the accrued fees. #### Accepting blocks Once a block is marked as accepted by consensus, the block is put in a FIFO execution queue. #### Executing blocks Each client runs a block executor in parallel, which constantly executes the blocks from the FIFO queue. In addition to executing the blocks, the executor provides deterministic timestamps for the beginning and end of each block's execution. Time is measured two ways by the block executor: 1. The timestamp included in the block header. 2. The amount of gas charged during the execution of blocks. > [!NOTE] > Execution timestamps are more granular than block header timestamps to allow sub-second block execution times. As soon as there is a block available in the execution queue, the block executor starts processing the block. If the executor's current timestamp is prior to the current block's timestamp, the executor's timestamp is advanced to match the block's. Advancing the timestamp in this scenario results in unused gas capacity, reducing the gas _excess_ from which the price is determined. The block is then executed on top of the last executed (not settled) state. After executing the block, the executor advances its timestamp based on the gas usage of the block, also increasing the gas _excess_ for the pricing algorithm. The block's execution time is now timestamped and the block is available to be settled. #### Settling blocks Already-executed blocks are settled once a following block that includes the results of the executed block is accepted. The results are included by setting the state root to that of the last executed block and the receipt root to that of a MPT of all receipts since last settlement, possibly from more than one block. 
The following block's timestamp is used to determine which blocks to settle—blocks are settled if said timestamp is greater than or equal to the execution time of the executed block plus a constant delay. The additional delay amortises any sporadic slowdowns the block executor may have encountered. ## Specification ### Background ACP-103 introduced the following variables for calculating the gas price:
| Variable | Description |
|---|---|
| $T$ | the target gas consumed per second |
| $M$ | minimum gas price |
| $K$ | gas price update constant |
| $R$ | gas capacity added per second |
ACP-176 provided a mechanism to make $T$ dynamic and set: $$ \begin{align} R &= 2 \cdot T \\ K &= 87 \cdot T \end{align} $$ The _excess_ actual consumption $x \ge 0$ beyond the target $T$ is tracked via numerical integration and used to calculate the gas price as: $$M \cdot \exp\left(\frac{x}{K}\right)$$ ### Gas charged We introduce $g_L$, $g_U$, and $g_C$ as the gas _limit_, _used_, and _charged_ per transaction, respectively. We define $$ g_C := \max\left(g_U, \frac{g_L}{\lambda}\right) $$ where $\lambda$ enforces a lower bound on the gas charged based on the gas limit. > [!NOTE] > $\dfrac{g_L}{\lambda}$ is rounded up by actually calculating $\dfrac{g_L + \lambda - 1}{\lambda}$ Wherever execution previously referenced gas used, it now references gas charged. For example, the gas excess $x$ will be modified by $g_C$ rather than $g_U$. ### Block size The constant time delay between block execution and settlement is defined as $\tau$ seconds. The maximum allowed size of a block is defined as: $$ \omega_B ~:= R \cdot \tau \cdot \lambda $$ Any block whose sum of transaction gas limits exceeds $\omega_B$ MUST be considered invalid. ### Queue size The maximum allowed size of the execution queue _prior_ to adding a new block is defined as: $$ \omega_Q ~:= 2 \cdot \omega_B $$ Any block that would be enqueued while the current size of the queue is larger than $\omega_Q$ MUST be considered invalid. > [!NOTE] > By restricting the size of the queue _prior_ to enqueueing the new block, $\omega_B$ is guaranteed to be the only limitation on block size. ### Block executor During the activation of SAE, the block executor's timestamp $t_e$ is initialised to the timestamp of the last accepted block.
Prior to executing a block with timestamp $t_b$, the executor's timestamp and excess are updated: $$ \begin{align} \Delta{t} &~:= \max\left(0, t_b - t_e\right) \\ t_e &~:= t_e + \Delta{t} \\ x &~:= \max\left(x - T \cdot \Delta{t}, 0\right) \\ \end{align} $$ The block is then executed with the gas price calculated from the current value of $x$. After executing a block that charged $g_C$ gas in total, the executor's timestamp and excess are updated: $$ \begin{align} \Delta{t} &~:= \frac{g_C}{R} \\ t_e &~:= t_e + \Delta{t} \\ x &~:= x + \Delta{t} \cdot (R - T) \\ \end{align} $$ > [!NOTE] > The update rule here assumes that $t_e$ is a timestamp that tracks the passage of time both by gas and by wall-clock time. $\frac{g_C}{R}$ MUST NOT be simply rounded. Rather, the gas accumulation MUST be left as a fraction. $t_e$ is now this block's execution timestamp. ### Handling gas target changes When a block is produced that modifies $T$, both the consensus thread and the execution thread will update to the modified $T$ after their own handling of the block. For example, restrictions of the queue size MUST be calculated based on the parent block's $T$. Similarly, the time spent executing a block MUST be calculated based on the parent block's $T$. ### Block settlement For a _proposed_ block that includes timestamp $t_b$, all ancestors whose execution timestamp satisfies $t_e \leq t_b - \tau$ are considered settled. Note that $t_e$ is not an integer as it tracks fractional seconds with gas consumption, which is not the case for $t_b$. The _proposed_ block MUST include the `stateRoot` produced by the execution of the most recently settled block. For any _newly_ settled blocks, the _proposed_ block MUST include all execution artifacts: - `receiptsRoot` - `logsBloom` - `gasUsed` The receipts root MUST be computed as defined in [EIP-2718](https://eips.ethereum.org/EIPS/eip-2718) except that the tree MUST be built from the concatenation of receipts from all blocks being settled.
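The executor update rules specified above can be expressed as a short sketch. This is illustrative only: the gas target `T` is an arbitrary example value (the real target is dynamic under ACP-176), and floats stand in for the exact fractional arithmetic the spec requires.

```python
import math

# Illustrative constants only; T is dynamic under ACP-176 and these are
# not normative values. R and K follow the ACP-176 relationships.
T = 1_500_000          # assumed example gas target per second
R = 2 * T              # gas capacity added per second
K = 87 * T             # gas price update constant
M = 1                  # minimum gas price (placeholder)
LAMBDA = 2             # minimum conversion from gas limit to gas charged

def gas_charged(g_used: int, g_limit: int) -> int:
    # g_C := max(g_U, ceil(g_L / lambda)); the ceiling is computed as
    # (g_L + lambda - 1) // lambda per the note in the spec.
    return max(g_used, (g_limit + LAMBDA - 1) // LAMBDA)

class BlockExecutor:
    """Tracks the executor timestamp t_e and the gas excess x.

    Floats are used here purely for illustration; the spec requires the
    fractional gas accumulation to be kept exact, not rounded.
    """

    def __init__(self, last_accepted_timestamp: int) -> None:
        self.t_e = float(last_accepted_timestamp)
        self.x = 0.0

    def begin_block(self, t_b: int) -> float:
        # Advance to the block timestamp; unused capacity drains the excess.
        dt = max(0.0, t_b - self.t_e)
        self.t_e += dt
        self.x = max(self.x - T * dt, 0.0)
        # Gas price used while executing this block.
        return M * math.exp(self.x / K)

    def end_block(self, g_c: int) -> float:
        # Gas charged advances time by g_C / R and grows the excess.
        dt = g_c / R
        self.t_e += dt
        self.x += dt * (R - T)
        return self.t_e  # this block's execution timestamp
```

A real implementation would keep `t_e` and `x` as exact rationals (or scaled integers) so that all validators derive identical execution timestamps.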
> [!NOTE] > If the block executor has fallen behind, the node may not be able to determine precisely which ancestors should be considered settled. If this occurs, validators MUST allow the block executor to catch up prior to deciding the block's validity. ### Block validity and building After determining which blocks to settle, all remaining ancestors of the new block must be inspected to determine the worst-case bounds on $x$ and account balances. Account nonces are known immediately. The worst-case bound on $x$ can be calculated by following the block executor update rules using $g_L$ rather than $g_C$. The worst-case bound on account balances can be calculated by charging the worst-case gas cost to the sender of a transaction along with deducting the value of the transaction from the sender's account balance. The `baseFeePerGas` field MUST be populated with the gas price based on the worst-case bound on $x$ at the start of block execution. ### Configuration Parameters As noted above, SAE requires $\tau$ and $\lambda$ to be set as parameters, while $\omega_B$ and $\omega_Q$ are derived from $T$. Parameters to specify for the C-Chain are:
| Parameter | Description | C-Chain Configuration |
| - | - | - |
| $\tau$ | duration between execution and settlement | $5s$ |
| $\lambda$ | minimum conversion from gas limit to gas charged | $2$ |
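Given these parameters, the derived limits follow mechanically from the definitions above. A minimal sketch (the target `T` below is an arbitrary example, not a normative C-Chain value):

```python
TAU = 5     # seconds between execution and settlement (tau)
LAM = 2     # minimum conversion from gas limit to gas charged (lambda)

def derived_limits(T: int) -> tuple[int, int]:
    # omega_B := R * tau * lambda with R = 2 * T (per ACP-176),
    # and omega_Q := 2 * omega_B.
    R = 2 * T
    omega_B = R * TAU * LAM
    omega_Q = 2 * omega_B
    return omega_B, omega_Q

# Example with an assumed target of 1.5M gas/second.
omega_b, omega_q = derived_limits(1_500_000)
```

Note that because $T$ is dynamic, $\omega_B$ and $\omega_Q$ move with it; as specified above, limits for a block are computed from the parent block's $T$.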
## Backwards Compatibility This ACP modifies the meaning of multiple fields in the block. A comprehensive list of changes will be produced once a reference implementation is available. Likely fields to change include: - `stateRoot` - `receiptsRoot` - `logsBloom` - `gasUsed` - `extraData` ## Reference Implementation A reference implementation is still a work-in-progress. This ACP will be updated to include a reference implementation once one is available. ## Security Considerations ### Worst-case transaction validity To avoid a DoS vulnerability on execution, we require an upper bound on transaction gas cost (i.e. amount $\times$ price) beyond the regular requirements for transaction validity (e.g. nonce, signature, etc.). We therefore introduced "worst-case cost" validity. We can prove that if every transaction were to use its full gas limit this would result in the greatest possible: 1. Consumption of gas units (by definition of the gas limit); and 2. Gas excess $x$ (and therefore gas price) at the time of execution. For a queue of blocks $Q = \\{i\\}_{i \ge 0}$ the gas excess $x_j$ immediately prior to execution of block $j \in Q$ is a monotonic, non-decreasing function of the gas usage of all preceding blocks in the queue; i.e. $x_j~:=~f(\\{g_i\\}_{i < j})$. This follows from the executor update rules: the excess added while executing a block is proportional to its gas, and the idle decay of the excess grows as the gas shrinks, since $R - T > 0$. Hence any decrease of $x$ is $\ge$ predicted. The excess, and hence gas price, for every later block $x_{i>k}$ is therefore reduced: $$ \downarrow g_k \implies \begin{cases} \downarrow \Delta^+x \propto g_k \\ \uparrow \Delta^-x \propto R-g_k \end{cases} \implies \downarrow \Delta x_k \implies \downarrow M \cdot \exp\left(\frac{x_{i>k}}{K}\right) $$ Given maximal gas consumption under (1), the monotonicity of $f$ implies (2). Since we are working with non-negative integers, it follows that multiplying a transaction's gas limit by the hypothetical gas price of (2) results in its worst-case gas cost.
Any sender able to pay for this upper bound (in addition to value transfers) is guaranteed to be able to pay for the actual execution cost. Transaction _acceptance_ under worst-case cost validity is therefore a guarantee of _settlement_. ### Queue DoS protection Worst-case cost validity only protects against DoS at the point of execution but leaves the queue vulnerable to high-limit, low-usage transactions. For example, a malicious user could send a transfer-only transaction (21k gas) with a limit set to consume the block's full gas limit. Although they would have to have sufficient funds to theoretically pay for all the reserved gas, they would never actually be charged this amount. Pushing a sufficient number of such transactions to the queue would artificially inflate the worst-case cost of other users. Therefore, the gas charged was modified from being equal to the gas usage to the above $g_C := \max\left(g_U, \frac{g_L}{\lambda}\right)$. The gas limit is typically set higher than the predicted gas consumption to allow for a buffer should the prediction be imprecise. This precludes setting $\lambda := 1$. Conversely, setting $\lambda := \infty$ would allow users to attack the queue with high-limit, low-consumption transactions. Setting $\lambda ~:= 2$ allows for a 100% buffer on gas-usage estimates without penalising the sender, while still disincentivising falsely high limits. #### Upper bound on queue DoS Recall the rate $R$ (gas capacity per second) and $g_C$ (gas charged) as already defined. The actual gas excess $x_A$ is bounded above by the worst-case excess $x_W$, both of which can be used to calculate respective base fees $f_A$ and $f_W$ (the variable element of gas prices) from the existing exponential function: $$ f := M \cdot \exp\left( \frac{x}{K} \right). $$ Mallory is attempting to maximize the DoS ratio $$ D := \frac{f_W}{f_A} $$ by maximizing $\Sigma_{\forall i} (g_L - g_U)_i$ to maximize $x_W - x_A$.
> [!TIP] > Although $D$ shadows a variable in ACP-176, that one is very different to anything here so there won't be confusion. Recall that the increasing excess occurs such that $$ x := x + g \cdot \frac{(R - T)}{R} $$ Since the largest allowed size of the queue when enqueuing a new block is $\omega_Q$, we can derive an upper bound on the difference in the changes to worst-case and actual gas excess caused by the transactions in the queue before the new block is added: $$ \begin{align} \Delta x_A &\ge \frac{\omega_Q}{\lambda} \cdot \frac{(R - T)}{R} \\ \Delta x_W &= \omega_Q \cdot \frac{(R - T)}{R} \\ \Delta x_W - \Delta x_A &\le \omega_Q \cdot \frac{(R - T)}{R} - \frac{\omega_Q}{\lambda} \cdot \frac{(R - T)}{R} \\ &= \omega_Q \cdot \frac{(R - T)}{R} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \omega_Q \cdot \frac{(2 \cdot T - T)}{2 \cdot T} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \omega_Q \cdot \frac{T}{2 \cdot T} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \frac{\omega_Q}{2} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \frac{2 \cdot \omega_B}{2} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \omega_B \cdot \left(1-\frac{1}{\lambda}\right) \\ &= R \cdot \tau \cdot \lambda \cdot \left(1-\frac{1}{\lambda}\right) \\ &= R \cdot \tau \cdot (\lambda-1) \\ &= 2 \cdot T \cdot \tau \cdot (\lambda-1) \end{align} $$ Note that we can express Mallory's DoS quotient as: $$ \begin{align} D &= \frac{f_W}{f_A} \\ &= \frac{ M \cdot \exp \left( \frac{x_W}{K} \right)}{ M \cdot \exp \left( \frac{x_A}{K} \right)} \\ & = \exp \left( \frac{x_W - x_A}{K} \right). \end{align} $$ When the queue is empty (i.e. the execution stream has caught up with accepted transactions), the worst-case fee estimate $f_W$ is known to be the actual base fee $f_A$; i.e. $Q = \emptyset \implies D=1$. 
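The chain of substitutions above can be checked numerically. This sketch (with an arbitrary example target $T$, not a normative value) confirms that $\omega_Q \cdot \frac{R-T}{R} \cdot \left(1-\frac{1}{\lambda}\right)$ collapses to $2 \cdot T \cdot \tau \cdot (\lambda-1)$:

```python
T = 1_000_000          # arbitrary example gas target (not normative)
R = 2 * T              # ACP-176: capacity is twice the target
TAU, LAM = 5, 2        # tau and lambda as suggested by this ACP

omega_B = R * TAU * LAM
omega_Q = 2 * omega_B

# Long-form bound on the worst-case/actual excess gap...
gap_long = omega_Q * (R - T) / R * (1 - 1 / LAM)
# ...and the simplified closed form it reduces to.
gap_closed = 2 * T * TAU * (LAM - 1)

assert gap_long == gap_closed
```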
The previous bound on $\Delta x_W - \Delta x_A$ also bounds Mallory's ability such that: $$ \begin{align} D &\le \exp \left( \frac{2 \cdot T \cdot \tau \cdot (\lambda-1)}{K} \right)\\ &= \exp \left( \frac{2 \cdot T \cdot \tau \cdot (\lambda-1)}{87 \cdot T} \right)\\ &= \exp \left( \frac{2 \cdot \tau \cdot (\lambda-1)}{87} \right)\\ \end{align} $$ Therefore, for the values suggested by this ACP: $$ \begin{align} D &\le \exp \left( \frac{2 \cdot 5 \cdot (2 - 1)}{87} \right)\\ &= \exp \left( \frac{10}{87} \right)\\ &\simeq 1.12\\ \end{align} $$ In summary, Mallory can require users to increase their gas price by at most ~12%. In practice, the gas price often fluctuates more than 12% on a regular basis. Therefore, this does not appear to be a significant attack vector. However, any deviation that dislodges the gas price bidding mechanism from a true bidding mechanism is of note. ## Appendix ### JSON RPC methods Although asynchronous execution decouples the transactions and receipts recorded by a specific block, APIs MUST NOT alter their behavior to mirror this. In particular, the API method `eth_getBlockReceipts` MUST return the receipts corresponding to the block's transactions, not the receipts settled in the block. #### Named blocks The Ethereum Mainnet APIs allow for retrieving blocks by named parameters that the API server resolves based on their consensus mechanism. Other than the _earliest_ (genesis) named block, which MUST be interpreted in the same manner, all other named blocks are mapped to SAE in terms of the _execution_ status of blocks and MUST be interpreted as follows: * _pending_: the most recently _accepted_ block; * _latest_: the block that was most recently _executed_; * _safe_ and _finalized_: the block that was most recently _settled_. > [!NOTE] > The finality guarantees of Snowman consensus remove any distinction between _safe_ and _finalized_. 
> Furthermore, the _latest_ block is not at risk of re-org, only of a negligible risk of data corruption local to the API node. ### Observations around transaction prioritisation As EOA-to-EOA transfers of value are entirely guaranteed upon _acceptance_, block builders MAY choose to prioritise other transactions for earlier execution. A reliable marker of such transactions is a gas limit of 21,000 as this is an indication from the sender that they do not intend to execute bytecode. However, this could delay the ability to issue transactions that depend on these EOA-to-EOA transfers. Block builders are free to make their own decisions around which transactions to include. ## Acknowledgments Thank you to the following non-exhaustive list of individuals for input, discussion, and feedback on this ACP. * [Aaron Buchwald](https://github.com/aaronbuchwald) * [Angharad Thomas](https://x.com/divergenceharri) * [Martin Eckardt](https://github.com/martineckardt) * [Meaghan FitzGerald](https://github.com/meaghanfitzgerald) * [Michael Kaplan](https://github.com/michaelkaplan13) * [Yacov Manevich](https://github.com/yacovm) ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-20: Ed25519 P2p (/docs/acps/20-ed25519-p2p) --- title: "ACP-20: Ed25519 P2p" description: "Details for Avalanche Community Proposal 20: Ed25519 P2p" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/20-ed25519-p2p/README.md --- | ACP | 20 | | :--- | :--- | | **Title** | Ed25519 p2p | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/21))| | **Track** | Standards | ## Abstract Support Ed25519 TLS certificates for p2p communications on the Avalanche network. Permit usage of Ed25519 public keys for Avalanche Network Client (ANC) NodeIDs. Support Ed25519 signatures in the ProposerVM. 
## Motivation Avalanche Network Clients (ANCs) rely on TLS handshakes to facilitate p2p communications. AvalancheGo (and by extension, the Avalanche Network) only supports TLS certificates that use RSA or ECDSA as the signing algorithm and explicitly prohibits any other signing algorithms. If a TLS certificate is not present, AvalancheGo will generate and persist to disk a 4096-bit RSA private key on start-up. This key is subsequently used to generate the TLS certificate which is also persisted to disk. Finally, the TLS certificate is hashed to generate a 20-byte NodeID. Authenticated p2p messaging was required when the network started and it was sufficient to simply use a hash of the TLS certificate. With the introduction of Snowman++, validators were then required to produce shareable message signatures. The Snowman++ block headers (specified [here](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/proposervm/README.md#snowman-block-extension)) were then required to include the full TLS `Certificate` along with the `Signature`. However, TLS certificates support Ed25519 as their signing algorithm. Ed25519 is an IETF recommendation ([RFC8032](https://datatracker.ietf.org/doc/html/rfc8032)) with several attractive properties, a notable one being its small sizes: - 32-byte public key - 64-byte private key - 64-byte signature Because of the small size of the public key, it can be used for the NodeID directly with a marginal hit to size (an additional 12 bytes). Additionally, the brittle reliance on static TLS certificates can be removed. Using the Ed25519 private key, a TLS certificate can be generated in-memory on node startup and used for p2p communications. This reduces the maintenance burden on node operators as they will only need to back up the Ed25519 private key instead of the TLS certificate and the RSA private key. Ed25519 has wide adoption, including in the crypto industry.
A non-exhaustive list of things that use Ed25519 can be found [here](https://ianix.com/pub/ed25519-deployment.html). More information about the Ed25519 protocol itself can be found [here](https://ed25519.cr.yp.to). ## Specification ### Required Changes 1. Support registration of 32-byte NodeIDs on the P-chain 2. Generate an Ed25519 key by default (`staker.key`) on node startup 3. Use the Ed25519 key to generate a TLS certificate on node startup 4. Add support for Ed25519 keys + signatures to the proposervm 5. Remove the TLS certificate embedding in proposervm blocks when an Ed25519 NodeID is the proposer 6. Add support for Ed25519 in `PeerList` messages Changes to the p2p layer will be minimal as TLS handshakes are used to do p2p communication. Ed25519 will need to be added as a supported algorithm. The P-chain will also need to be modified to support registration of 32-byte NodeIDs. During serialization, the length of the NodeID is not serialized and was assumed to always be 20 bytes. Implementers of this ACP must take care to continue parsing old transactions correctly. This ACP could be implemented by adding a new tx type that requires Ed25519 NodeIDs only. If the implementer chooses to do this, a separate follow-up ACP must be submitted detailing the format of that transaction. ### Future Work In the future, usage of non-Ed25519 TLS certificates should be prohibited to remove any dependency on them. This will further secure the Avalanche network by reducing complexity. The path to doing so is not outlined in this ACP. ## Backwards Compatibility An implementation of this proposal should not introduce any backwards compatibility issues. NodeIDs that are 20 bytes should continue to be treated as hashes of TLS certificates. NodeIDs of 32 bytes (size of Ed25519 public key) should be supported following implementation of this proposal. ## Reference Implementation TLS certificate generation using an Ed25519 private key is standard. 
The golang standard library has a reference [implementation](https://github.com/golang/go/blob/go1.20.10/src/crypto/tls/generate_cert.go). Parsing TLS certificates and extracting the public key is also standard. AvalancheGo already contains [code](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/staking/verify.go#L55-L65) to verify the public key from a TLS certificate. ## Security Considerations ### Validation Criteria Although Ed25519 is standardized in [RFC8032](https://datatracker.ietf.org/doc/html/rfc8032), it does not define strict validation criteria. This has led to inconsistencies in the validation criteria across implementations of the signature scheme. This is unacceptable for any protocol that requires participants to reach consensus on signature validity. Henry de Valance highlights the complexity of this issue [here](https://hdevalence.ca/blog/2020-10-04-its-25519am). From [Chalkias et al. 2020](https://eprint.iacr.org/2020/1244.pdf): * The RFC 8032 and the NIST FIPS186-5 draft both require to reject non-canonically encoded points, but not all of the implementations follow those guidelines. * The RFC 8032 allows optionality between using a permissive verification equation and a more strict verification equation. Different implementations use different equations meaning validation results can vary even across implementations that follow RFC 8032. Zcash adopted [ZIP-215](https://zips.z.cash/zip-0215) (proposed by Henry de Valance) to explicitly define the Ed25519 validation criteria. Implementers of this ACP _*must*_ use the ZIP-215 validation criteria. The [`ed25519consensus`](https://github.com/hdevalence/ed25519consensus) golang library is a minimal fork of golang's `crypto/ed25519` package with support for ZIP-215 verification. It is maintained by [Filippo Valsorda](https://github.com/FiloSottile) who also maintains many golang stdlib cryptography packages. 
It is strongly recommended to use this library for golang implementations. ## Open Questions _Can this Ed25519 key be used in alternative communication protocols?_ Yes. Ed25519 can be used for alternative communication protocols like [QUIC](https://datatracker.ietf.org/group/quic/about) or [NOISE](http://www.noiseprotocol.org/noise.html). This ACP removes the reliance on TLS certificates and associates an Ed25519 public key with NodeIDs. This allows for experimentation with different communication protocols that may be better suited for a high-throughput blockchain like Avalanche. _Can this Ed25519 key be used for Verifiable Random Functions?_ Yes. VRFs, as specified in [RFC9381](https://datatracker.ietf.org/doc/html/rfc9381), can be constructed using elliptic curves that are secure in the cryptographic random oracle model. Ed25519 test vectors are provided in the RFC for implementers of an Elliptic Curve VRF (ECVRF). This allows Avalanche validators to generate a VRF per block using their associated Ed25519 keys, including for Subnets. ## Acknowledgements Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-204: Precompile Secp256r1 (/docs/acps/204-precompile-secp256r1) --- title: "ACP-204: Precompile Secp256r1" description: "Details for Avalanche Community Proposal 204: Precompile Secp256r1" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/204-precompile-secp256r1/README.md --- # ACP-204: Precompile for secp256r1 Curve Support | ACP | 204 | | :--- | :--- | | **Title** | Precompile for secp256r1 Curve Support | | **Author(s)** | [Santiago Cammi](https://github.com/scammi), [Arran Schlosberg](https://github.com/ARR4N) | | **Status** | Implementable ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/212)) | | **Track** | Standards | ## Abstract This proposal introduces a precompiled contract that performs signature verifications for the secp256r1 elliptic curve on Avalanche's C-Chain. The precompile will be implemented at address `0x0000000000000000000000000000000000000100` and will enable native verification of P-256 signatures, significantly improving gas efficiency for biometric authentication systems, WebAuthn, and modern device-based signing mechanisms. ## Motivation The secp256r1 (P-256) elliptic curve is the standard cryptographic curve used by modern device security systems, including Apple's Secure Enclave, Android Keystore, WebAuthn, and Passkeys. However, Avalanche currently only supports secp256k1 natively, forcing developers to use expensive Solidity-based verification that costs [200k-330k gas per signature verification](https://hackmd.io/@1ofB8klpQky-YoR5pmPXFQ/SJ0nuzD1T#Smart-Contract-Based-Verifiers). 
This ACP proposes implementing EIP-7951's secp256r1 precompiled contract to unlock significant ecosystem benefits: ### Enterprise & Institutional Adoption - Reduced onboarding friction: Enterprises can leverage existing biometric authentication infrastructure instead of managing seed phrases or hardware wallets - Regulatory compliance: Institutions can utilize their approved device security standards and identity management systems - Cost optimization: ~50x gas reduction (from 200k-330k to 6,900 gas) makes enterprise-scale applications economically viable This roughly 50x gas cost reduction makes these use cases economically viable while maintaining the security properties institutions and users expect from their existing devices. Adding the precompiled contract at the same address as used in [RIP-7212](https://github.com/ethereum/RIPs/blob/master/RIPS/rip-7212.md) provides consistency across ecosystems, and allows any libraries developed to interact with the precompile to be used unmodified across ecosystems. ## Specification This ACP implements [EIP-7951](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7951.md) for secp256r1 signature verification on Avalanche. The specification follows EIP-7951 exactly, with the precompiled contract deployed at address `0x0000000000000000000000000000000000000100`. ### Core Functionality - Input: 160 bytes (message hash + signature components r,s + public key coordinates x,y) - Output: success: 32 bytes `0x...01`; failure: no data returned - Gas Cost: 6,900 gas (based on EIP-7951 benchmarking) - Validation: Full compliance with NIST FIPS 186-3 specification ### Activation This precompile may be activated as part of Avalanche's next network upgrade. Individual Avalanche L1s and subnets could adopt this enhancement independently through their respective client software updates.
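The 160-byte input layout listed above (hash, then r, s, x, y, each 32 bytes) can be illustrated with a short encoding sketch; the signature and key values below are placeholders for illustration, not a valid P-256 signature:

```python
import hashlib

def encode_p256_input(msg_hash: bytes, r: int, s: int, x: int, y: int) -> bytes:
    # EIP-7951 precompile input: message hash || r || s || x || y,
    # each field 32 bytes big-endian, 160 bytes in total.
    if len(msg_hash) != 32:
        raise ValueError("message hash must be 32 bytes")
    return msg_hash + b"".join(v.to_bytes(32, "big") for v in (r, s, x, y))

digest = hashlib.sha256(b"example message").digest()
call_data = encode_p256_input(digest, r=1, s=2, x=3, y=4)  # placeholder values
assert len(call_data) == 160
```

On success the precompile returns 32 bytes ending in `0x01`; on failure it returns no data, so callers must check the returned data length rather than relying on a revert.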
For complete technical specifications, validation requirements, and implementation details, refer to [EIP-7951](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7951.md). ## Backwards Compatibility This ACP introduces a new precompiled contract and does not modify existing functionality. No backwards compatibility issues are expected since: 1. The precompile uses a previously unused address 2. No existing opcodes or consensus rules are modified 3. The change is additive and opt-in for applications Adoption requires a coordinated network upgrade for the C-Chain. Other EVM L1s can adopt this enhancement independently by upgrading their client software. ## Security Considerations ### Cryptographic Security - The secp256r1 curve is standardized by NIST and widely vetted - Security properties are comparable to secp256k1 (used by ECRECOVER) - Implementation follows NIST FIPS 186-3 specification exactly ### Implementation Security - Signature verification (vs public-key recovery) approach maximizes compatibility with existing P-256 ecosystem - No malleability check included to match NIST specification, but wrapper libraries may choose to add this - Input validation prevents invalid curve points and out-of-range signature components ### Network Security - Gas cost prevents potential DoS attacks through expensive computation - No consensus-level security implications beyond standard precompile considerations ## Reference Implementation The implementation builds upon existing work: 1. EIP-7951 Reference: The [Go-Ethereum implementation](https://github.com/ethereum/go-ethereum/pull/31991) of EIP-7951 provides the foundation 2. Coreth Implementation: Integration with Avalanche's C-Chain (Avalanche's fork of go-ethereum) 3.
Cryptographic Library: Implementation utilizes Go's standard library `crypto/ecdsa` and `crypto/elliptic` packages, which implement NIST P-256 per FIPS 186-3 ([Go documentation](https://pkg.go.dev/crypto/elliptic#P256)) The implementation follows established patterns for precompile integration, adding the contract to the precompile registry and implementing the verification logic using established cryptographic libraries. This ACP was implemented and merged into Coreth and Subnet-EVM behind the `Granite` upgrade flag. The full implementation can be found in [coreth@v0.15.4-rc.4](https://github.com/ava-labs/coreth/releases/tag/v0.15.4-rc.4), [subnet-evm@v0.8.0-fuji-rc.2](https://github.com/ava-labs/subnet-evm/releases/tag/v0.8.0-fuji-rc.2) and [libevm@v1.13.14-0.3.0.release](https://github.com/ava-labs/libevm/releases/tag/v1.13.14-0.3.0.release). ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-209: Eip7702 Style Account Abstraction (/docs/acps/209-eip7702-style-account-abstraction) --- title: "ACP-209: Eip7702 Style Account Abstraction" description: "Details for Avalanche Community Proposal 209: Eip7702 Style Account Abstraction" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/209-eip7702-style-account-abstraction/README.md --- | ACP | 209 | | :--- | :--- | | **Title** | EIP-7702-style Set Code for EOAs | | **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/216)) | | **Track** | Standards | ## Abstract [EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md) was activated on the Ethereum mainnet in 
May 2025 as part of the Pectra upgrade, and introduced a new "set code transaction" type that allows Externally Owned Accounts (EOAs) to set the code in their account. This enabled several UX improvements, including batching multiple operations into a single atomic transaction, sponsoring transactions on behalf of another account, and privilege de-escalation for EOAs. This ACP proposes adding a similar transaction type and functionality to Avalanche EVM implementations in order to have them support the same style of UX available on Ethereum. Modifications to the handling of account nonces and balances are required in order for it to be safe when used in conjunction with the streaming asynchronous execution (SAE) mechanism proposed in [ACP-194](https://github.com/avalanche-foundation/ACPs/tree/4a9408346ee408d0ab81050f42b9ac5ccae328bb/ACPs/194-streaming-asynchronous-execution).

## Motivation

The motivation for this ACP is the same as the motivation described in [EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#motivation). However, EIP-7702 as implemented for Ethereum breaks invariants required for EVM chains that use the ACP-194 SAE mechanism. There has been strong community feedback in support of ACP-194 for its potential to:

- Allow for increasing the target gas rate of Avalanche EVM chains, including the C-Chain
- Enable the use of an encrypted mempool to prevent front-running
- Enable the use of real time VRF during transaction execution

Given the strong support for ACP-194, bringing EIP-7702-style functionality to Avalanche EVMs requires modifications to preserve its necessary invariants, described below.

### Invariants needed for ACP-194

There are [two invariants explicitly broken by EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#backwards-compatibility) that are required for SAE. They are:

1.
An account balance can only decrease as a result of a transaction originating from that account.
1. An EOA nonce may not increase after transaction execution has begun.

These invariants are required for SAE in order to be able to statically analyze (i.e. determine without executing the transaction) that a transaction:

- Has the proper nonce
- Will have sufficient balance to pay for its worst case transaction fee plus the balance it sends

As described in ACP-194, this lightweight analysis of transactions in blocks allows blocks to be accepted by consensus with the guarantee that they can be executed successfully. Only after block acceptance are the transactions within the block then put into a queue to be executed asynchronously. If the execution of transactions in the queue can decrease an EOA's account balance or change an EOA's current nonce, then block verification is unable to ensure that transactions in the block will be valid when executed. If transactions accepted into blocks can be invalidated prior to their execution, this poses DOS vulnerabilities because the invalidated transactions use up space in the pending execution queue according to their gas limits, but they do not pay any fees.

Notably, EIP-7702's violation of these invariants already presents challenges for mempool verification on Ethereum. As [noted in the security considerations section](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#transaction-propagation), EIP-7702 makes it "possible to cause transactions from other accounts to become stale" and this "poses some challenges for transaction propagation" because nodes now cannot "statically determine the validity of transactions for that account". In synchronous execution environments such as Ethereum, these issues only pose potential DOS risks to the public transaction mempool.
Under an asynchronous execution scheme, the issues pose DOS risks to the chain itself since the invalidated transactions can be included in blocks prior to their execution.

## Specification

The same [set code transaction as specified in EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#set-code-transaction) will be added to Avalanche EVM implementations. The behavior of the transaction is the same as specified in EIP-7702. However, in order to keep the guarantee of transaction validity upon inclusion in an accepted block, two modifications are made to the transaction verification and execution rules.

1. Delegated accounts must maintain a "reserved balance" to ensure they can always pay for the transaction fees and transferred balance of transactions sent from the account. The reserved balances are managed via a new `ReservedBalanceManager` precompile, as specified below.
1. The handling of account nonces during execution is separated from the verification of nonces during block verification, as specified below.

### Reserved balances

To ensure that all transactions can cover their worst case transaction fees and transferred balances upon inclusion in an accepted block, a "reserved balance" mechanism is introduced for accounts. Reserved balances are required for delegated accounts to guarantee that subsequent transactions they send after setting code for their account can still cover their fees and transfer amounts, even if transactions from other accounts reduce the account's balance prior to their execution.

To allow for managing reserved balances, a new `ReservedBalanceManager` stateful precompile will be added at address `0x0200000000000000000000000000000000000006`. The `ReservedBalanceManager` precompile will have the following interface:

```solidity
interface IReservedBalanceManager {
    /// @dev Emitted whenever an account's reserved balance is modified.
    event ReservedBalanceUpdated(address indexed account, uint256 newBalance);

    /// @dev Called to deposit the native token balance provided into the account's
    /// reserved balance.
    function depositReservedBalance(address account) external payable;

    /// @dev Returns the current reserved balance for the given account.
    function getReservedBalance(address account) external view returns (uint256 balance);
}
```

The precompile will maintain a mapping of accounts to their current reserved balances. The precompile itself intentionally only allows for _increasing_ an account's reserved balance. Reducing an account's reserved balance is only ever done by the EVM when a transaction is sent from the account, as specified below.

During transaction verification, the following rules are applied:

- If the sender EOA account has not set code via an EIP-7702 transaction, no reserved balance is required.
  - The transaction is confirmed to be able to pay for its worst case transaction fee and transferred balance by looking at the sender account's regular balance and accounting for prior transactions it has sent that are still in the pending execution queue, as specified in ACP-194.
- Otherwise, if the sender EOA account has previously been delegated via an EIP-7702 transaction (even if that transaction is still in the pending execution queue), then the account's current "[settled](https://github.com/avalanche-foundation/ACPs/tree/4a9408346ee408d0ab81050f42b9ac5ccae328bb/ACPs/194-streaming-asynchronous-execution#settling-blocks)" reserved balance must be sufficient to cover the sum of the worst case transaction fees and balances sent for all of the transactions in the pending execution queue after the set code transaction.

During transaction execution, the following rules are applied:

- When initially deducting balance from the sender EOA account for the maximum transaction fee and balance sent with the transaction, the account's regular balance is used first.
The account's reserved balance is only reduced if the regular balance is insufficient.
- In the execution of code as part of a transaction, only regular account balances are available. The only possible modification to reserved balances during code execution is increases via calls to the `ReservedBalanceManager` precompile `depositReservedBalance` function.
- If there is a gas refund at the end of the transaction execution, the balance is first credited to the sender account's reserved balance, up to a maximum of the account's reserved balance prior to the transaction. Any remaining refund is credited to the account's regular balance.

### Handling of nonces

To account for EOA account nonces being incremented during contract execution and potentially invalidating transactions from that EOA that have already been accepted, we separate the rules for how nonces are verified during block verification and how they are handled during execution.

During block verification, all transactions must be verified to have a correct nonce value based on the latest "settled" state root, as defined in ACP-194, and the number of transactions from the sender account in the pending execution queue. Specifically, the required nonce is derived from the settled state root and incremented by one for each of the sender's transactions already accepted into the pending execution queue or current block.

During execution, the nonce used must be one greater than the latest nonce used by the account, accounting for both all transactions from the account and all contracts created by the account. This means that the actual nonce used by a transaction may differ from the nonce assigned in the raw transaction itself and used in verification.

Separating the nonce values used for block verification and execution ensures that transactions accepted in blocks cannot be invalidated by the execution of transactions before them in the pending execution queue.
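The two nonce views described above can be sketched as follows. This Go sketch uses hypothetical names (not from any implementation) to show how the verification nonce and the execution nonce diverge once delegated code performs contract creations:

```go
package main

// Account tracks the two nonce views used by the scheme: the nonce implied
// by the settled state plus queued transactions (used for block
// verification) and the live nonce advanced during execution (which also
// counts contract creations performed by delegated code).
type Account struct {
	SettledNonce uint64 // nonce at the latest settled state root
	QueuedTxs    uint64 // account's transactions pending execution
	LiveNonce    uint64 // nonce as of the latest executed transaction
}

// RequiredVerificationNonce is the nonce a new transaction must carry to be
// accepted into a block: the settled nonce incremented once per queued tx.
func (a Account) RequiredVerificationNonce() uint64 {
	return a.SettledNonce + a.QueuedTxs
}

// ExecuteTx consumes one nonce for the transaction itself plus one per
// contract creation performed by the account's delegated code, so the
// nonce actually used at execution may exceed the one in the raw tx.
func (a *Account) ExecuteTx(contractCreations uint64) (nonceUsed uint64) {
	nonceUsed = a.LiveNonce
	a.LiveNonce += 1 + contractCreations
	if a.QueuedTxs > 0 {
		a.QueuedTxs--
	}
	return nonceUsed
}
```

For example, an account settled at nonce 5 with two queued transactions must attach nonce 7 to its next transaction, but if its delegated code creates contracts while the queue drains, the second queued transaction executes with a higher nonce than the one it carries.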
It still provides the same level of replay protection to transactions, as a transaction with a given nonce from an EOA can be accepted at most once.

However, this separation has a subtle potential impact on contract creation. Previously, the resulting address of a contract could be deterministically derived from a contract creation transaction based on its sender address and the nonce set in the transaction. Now, since the nonce used in execution is separate from that set in the transaction, this is no longer guaranteed.

## Backwards Compatibility

The introduction of EIP-7702 transactions will require a network upgrade to be scheduled. Upon activation, a few invariants will be broken:

- (From EIP-7702) `tx.origin == msg.sender` can only be true in the topmost frame of execution.
  - Once an account has been delegated, it can invoke multiple calls per transaction.
- (From EIP-7702) An EOA nonce may not increase after transaction execution has begun.
  - Once an account has been delegated, the account may call a create operation during execution, causing the nonce to increase.
- The contract address of a contract deployed by an EOA (via transaction with an empty "to" address) can be derived from the sender address and the transaction's nonce.
  - If earlier transactions cause the nonce to increase before execution, the actual nonce used in a contract creation transaction may differ from the one in the transaction payload, altering the resulting contract address.
  - Note that this can only occur for accounts that have been delegated, and whose delegated code involves contract creation.

Additionally, at all points after the acceptance of a set code transaction, an EOA must have sufficient reserved balance to cover the sum of the worst case transaction fees and balances sent for all transactions in the pending execution queue after the set code transaction.
Notably, this means that:

- If a delegated account has zero reserved balance at any point, it will be unable to send any further transactions until a different account provides it with reserved balance via the `ReservedBalanceManager` precompile.
- In order to initially "self-fund" its own reserved balance, an account must deposit reserved balance via the `ReservedBalanceManager` precompile prior to sending a set code transaction.
- In order to transfer its full (regular + reserved) account balance, a delegated account must first deposit all of its regular balance into reserved balance.

In order to support wallets as seamlessly as possible, the `eth_getBalance` RPC implementations should be updated to return the sum of an account's regular and reserved balances. Additionally, clients should provide a new `eth_getReservedBalance` RPC method to allow for querying the reserved balance of a given account.

## Reference Implementation

A reference implementation is not yet available and must be provided for this ACP to be considered implementable.

## Security Considerations

All of the [security considerations from the EIP-7702 specification](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#security-considerations) apply here as well, except for the considerations regarding "sponsored transaction relayers" and "transaction propagation". Those two considerations do not apply here, as they are accounted for by the modifications made to introduce reserved balances and separate the handling of nonces in execution from verification.

Additionally, given that an account's reserved balance may need to be updated in state when a transfer is sent from the account, it must be confirmed that 21,000 gas is still a sufficiently high cost for the potentially more expensive operation.
Charging more gas for basic transfer transactions in this case could otherwise be an option, but would likely cause further backwards compatibility issues for smart contracts and off-chain services.

## Open Questions

1. Are the implementation and UX complexities regarding the `ReservedBalanceManager` precompile worth the UX improvements introduced by the new set code transaction type?
   - Except for having a contract spend an account's native token balance, most, if not all, of the UX improvements associated with the new transaction type could theoretically be implemented at the contract layer rather than the protocol layer. However, not all contracts provide support for account abstraction functionality via standards such as [ERC-2771](https://eips.ethereum.org/EIPS/eip-2771).
2. Are the implementation and UX complexities regarding the `ReservedBalanceManager` precompile worth giving delegate contracts the ability to spend native token balances?
   - An alternative may be to disallow delegate contracts from spending native token balances at all, and revert if they attempt to. They could use "wrapped native token" ERC20 implementations (i.e. WAVAX) to achieve the same effect. However, this may be equally or more complex at the implementation level, and would cause incompatibilities in delegate contract implementations for Ethereum.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-224: Dynamic Gas Limit in Subnet-EVM (/docs/acps/224-dynamic-gas-limit-in-subnet-evm)

---
title: "ACP-224: Dynamic Gas Limit in Subnet-EVM"
description: "Details for Avalanche Community Proposal 224: Dynamic Gas Limit in Subnet-EVM"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/224-dynamic-gas-limit-in-subnet-evm/README.md
---

| ACP | 224 |
| :--- | :--- |
| **Title** | Introduce ACP-176-Based Dynamic Gas Limits and Fee Manager Precompile in Subnet-EVM |
| **Author(s)** | Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/230)) |
| **Track** | Standards |

## Abstract

Proposes implementing [ACP-176](https://github.com/avalanche-foundation/ACPs/blob/aa3bea24431b2fdf1c79f35a3fd7cc57eeb33108/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md) in Subnet-EVM, along with the addition of a new optional `ACP224FeeManagerPrecompile` that can be used to configure fee parameters on-chain dynamically after activation, in the same way that the existing `FeeManagerPrecompile` can be used today prior to ACP-176.

## Motivation

ACP-176 updated the EVM dynamic fee mechanism to more accurately achieve the target gas consumption on-chain. It also added a mechanism for the target gas consumption rate to be dynamically updated. Until now, ACP-176 was only added to Coreth (C-Chain), primarily because most L1s prefer to control their fees and gas targets through the `FeeManagerPrecompile` and `FeeConfig` in genesis chain configuration, and the existing `FeeManagerPrecompile` is not compatible with the ACP-176 fee mechanism.

[ACP-194](https://github.com/avalanche-foundation/ACPs/blob/aa3bea24431b2fdf1c79f35a3fd7cc57eeb33108/ACPs/194-streaming-asynchronous-execution/README.md) (SAE) depends on having a gas target and capacity mechanism aligned with ACP-176.
Specifically, there must be a known gas capacity added per second, and a maximum gas capacity. The existing windower fee mechanism employed by Subnet-EVM does not provide these properties because it does not have a fixed capacity rate, making it difficult to calculate worst-case bounds for gas prices. As such, adding ACP-176 into Subnet-EVM is a functional requirement for L1s to be able to use SAE in the future.

Adding ACP-176 fee dynamics to Subnet-EVM also has the added benefit of aligning with Coreth such that only a single mechanism needs to be maintained on a go-forward basis.

While both ACP-176 and ACP-194 will be required upgrades for L1s, this ACP aims to provide similar controls for chains with a new precompile. A new dynamic fee configuration and fee manager precompile that maps well into the ACP-176 mechanism will be added, optionally allowing admins to adjust fee parameters dynamically.

## Specification

### ACP-176 Parameters

This ACP uses the same parameters as in the [ACP-176 specification](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md#configuration-parameters), and allows their values to be configured on a chain-by-chain basis.
The parameters and their current values used by the C-Chain are as follows:

| Parameter | Description | C-Chain Configuration |
| :--- | :--- | :--- |
| $T$ | target gas consumed per second | dynamic |
| $R$ | gas capacity added per second | 2*T |
| $C$ | maximum gas capacity | 10*T |
| $P$ | minimum target gas consumption per second | 1,000,000 |
| $D$ | target gas consumption rate update constant | 2^25 |
| $Q$ | target gas consumption rate update factor change limit | 2^15 |
| $M$ | minimum gas price | 1x10^-18 AVAX |
| $K$ | initial gas price update factor | 87*T |

### Prior Subnet-EVM Fee Configuration Parameters

Prior to this ACP, the Subnet-EVM fee configuration and fee manager precompile used the following parameters to control the fee mechanism:

**GasLimit**: Sets the max amount of gas consumed per block.

**TargetBlockRate**: Sets the target rate of block production in seconds used for fee adjustments. If the actual block rate is faster than this target, the block gas cost will be increased, and vice versa.

**MinBaseFee**: The minimum base fee sets a lower bound on the EIP-1559 base fee of a block. Since the block's base fee sets the minimum gas price for any transaction included in that block, this effectively sets a minimum gas price for any transaction.

**TargetGas**: Specifies the targeted amount of gas (including block gas cost) to consume within a rolling 10s window. When the dynamic fee algorithm observes that network activity is above/below the `TargetGas`, it increases/decreases the base fee proportionally to how far above/below the target actual network activity is.

**BaseFeeChangeDenominator**: Divides the difference between actual and target utilization to determine how much to increase/decrease the base fee. A larger denominator indicates a slower changing, stickier base fee, while a lower denominator allows the base fee to adjust more quickly.

**MinBlockGasCost**: Sets the minimum amount of gas to charge for the production of a block.
**MaxBlockGasCost**: Sets the maximum amount of gas to charge for the production of a block.

**BlockGasCostStep**: Determines how much to increase/decrease the block gas cost depending on the amount of time elapsed since the previous block. If the block is produced at the target rate, the block gas cost will stay the same as the block gas cost for the parent block. If it is produced faster/slower, the block gas cost will be increased/decreased by the step value for each second faster/slower than the target block rate accordingly. Note: if the `BlockGasCostStep` is set to a very large number, it effectively requires block production to go no faster than the `TargetBlockRate`. Ex: if a block is produced two seconds faster than the target block rate, the block gas cost will increase by `2 * BlockGasCostStep`.

### ACP-176 Parameters in Subnet-EVM

ACP-176 will make the `GasLimit` and `BaseFeeChangeDenominator` configurations obsolete in Subnet-EVM. `TargetBlockRate`, `MinBlockGasCost`, `MaxBlockGasCost`, and `BlockGasCostStep` will also be removed by [ACP-226](https://github.com/avalanche-foundation/ACPs/tree/ce51dfab/ACPs/226-dynamic-minimum-block-times).

`MinGasPrice` is equivalent to `M` in ACP-176 and will be used to set the minimum gas price. This is similar to `MinBaseFee` in the old Subnet-EVM fee configuration, and gives roughly the same effect. Currently the default value is `25 * 10^-9` AVAX (25 nAVAX/Gwei). This default will be changed to the minimum possible denomination of the native EVM asset (1 wei), which is aligned with the C-Chain.

`TargetGas` is equivalent to `T` (target gas consumed per second) in ACP-176 and will be used to set the target gas consumed per second.

`MaxCapacityFactor` is equivalent to the factor in `C` in ACP-176 and controls the maximum gas capacity (i.e. the block gas limit). This determines `C` as `C = MaxCapacityFactor * T`. The default value will be 10, which is aligned with the C-Chain.
`TimeToDouble` will be used to control the speed of the fee adjustment (`K`). This determines `K` as `K = (T * TimeToDouble) / ln(2)`: with `R = 2*T`, the excess can grow by at most `R - T = T` per second, so the gas price can at most double every `TimeToDouble` seconds. The default value for `TimeToDouble` will be 60 (seconds), making `K = ~87*T`, which is aligned with the C-Chain.

As a result, the parameters will be set as follows:

| Parameter | Description | Default Value | Is Configurable |
| :--- | :--- | :--- | :--- |
| $T$ | target gas consumed per second | 1,000,000 | :white_check_mark: |
| $R$ | gas capacity added per second | 2*T | :x: |
| $C$ | maximum gas capacity | 10*T | :white_check_mark: Through `MaxCapacityFactor` (default 10) |
| $P$ | minimum target gas consumption per second | 1,000,000 | :x: |
| $D$ | target gas consumption rate update constant | 2^25 | :x: |
| $Q$ | target gas consumption rate update factor change limit | 2^15 | :x: |
| $M$ | minimum gas price | 1 wei | :white_check_mark: |
| $K$ | gas price update constant | ~87*T | :white_check_mark: Through `TimeToDouble` (default 60s) |

The gas capacity added per second (`R`) always being equal to `2*T` keeps it such that the gas price is capable of increasing and decreasing at the same rate.

The values of `Q` and `D` affect the magnitude of change to `T` that each block can have, and the granularity at which the target gas consumption rate can be updated. The proposed values match the C-Chain, allowing each block to modify the current gas target by roughly $\frac{1}{1024}$ of its current value. This has provided sufficient responsiveness and granularity as is, removing the need to make `D` and `Q` dynamic or configurable.

Similarly, 1,000,000 gas/second should be a low enough minimum target gas consumption for any EVM L1. The target gas for a given L1 will be able to be increased from this value dynamically and has no maximum.
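The relationship between `TimeToDouble` and `K` (yielding the stated `~87*T` for the 60-second default) can be checked numerically. A small Go sketch, illustrative only:

```go
package main

import "math"

// GasPriceUpdateConstant returns K for a given target gas rate T and a
// desired time-to-double in seconds. With capacity R = 2*T, the excess can
// grow by at most (R - T) = T per second, and the gas price doubles each
// time the excess grows by K*ln(2), giving K = T * timeToDouble / ln(2).
func GasPriceUpdateConstant(targetGas, timeToDouble float64) float64 {
	return targetGas * timeToDouble / math.Ln2
}
```

With `timeToDouble = 60`, this gives `K ≈ 86.6*T`, matching the `~87*T` used by the C-Chain.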
### Genesis Configuration

There will be a new genesis chain configuration to set the parameters for the chain without requiring the `ACP224FeeManager` precompile to be activated. This will be similar to the existing fee configuration parameters in chain configuration. If there is no genesis configuration for the new fee parameters, the default values for the C-Chain will be used.

This will look like the following:

```json
{
  ...
  "acp224Timestamp": uint64,
  "acp224FeeConfig": {
    "minGasPrice": uint64,
    "maxCapacityFactor": uint64,
    "timeToDouble": uint64
  }
}
```

### Dynamic Gas Target Via Validator Preference

For L1s that want their gas target to be dynamically adjusted based on the preferences of their validator sets, the same mechanism introduced on the C-Chain in ACP-176 will be employed. Validators will be able to set their `gas-target` preference in their node's configuration, and block builders can then adjust the target excess in blocks that they propose based on their preference.

### Dynamic Gas Target & Fee Configuration Via `ACP224FeeManagerPrecompile`

For L1s that want an "admin" account to be able to dynamically configure their gas target and other fee parameters, a new optional `ACP224FeeManagerPrecompile` will be introduced and can be activated. The precompile will offer similar controls as the existing `FeeManagerPrecompile` implemented in Subnet-EVM [here](https://github.com/ava-labs/subnet-evm/tree/53f5305/precompile/contracts/feemanager).
The Solidity interface will be as follows:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import "./IAllowList.sol";

/// @title ACP-224 Fee Manager Interface
/// @notice Interface for managing dynamic gas limit and fee parameters
/// @dev Inherits from IAllowList for access control
interface IACP224FeeManager is IAllowList {
    /// @notice Configuration parameters for the dynamic fee mechanism
    struct FeeConfig {
        uint256 targetGas;         // Target gas consumption per second
        uint256 minGasPrice;       // Minimum gas price in wei
        uint256 maxCapacityFactor; // Maximum capacity factor (C = factor * T)
        uint256 timeToDouble;      // Time in seconds for gas price to double at max capacity
    }

    /// @notice Emitted when fee configuration is updated
    /// @param sender Address that triggered the update
    /// @param oldFeeConfig Previous configuration
    /// @param newFeeConfig New configuration
    event FeeConfigUpdated(address indexed sender, FeeConfig oldFeeConfig, FeeConfig newFeeConfig);

    /// @notice Set the fee configuration
    /// @param config New fee configuration parameters
    function setFeeConfig(FeeConfig calldata config) external;

    /// @notice Get the current fee configuration
    /// @return config Current fee configuration
    function getFeeConfig() external view returns (FeeConfig memory config);

    /// @notice Get the block number when fee config was last changed
    /// @return blockNumber Block number of last configuration change
    function getFeeConfigLastChangedAt() external view returns (uint256 blockNumber);
}
```

For chains with the precompile activated, `setFeeConfig` can be used to dynamically change each of the values in the fee configuration.
Importantly, any updates made via calls to `setFeeConfig` in a transaction will take effect only as of _settlement_ of the transaction, not as of _acceptance_ or _execution_ (for transaction life cycles/statuses, refer to ACP-194 [here](https://github.com/avalanche-foundation/ACPs/tree/61d2a2a/ACPs/194-streaming-asynchronous-execution#description)). This ensures that all nodes apply the same worst-case bounds validation on transactions being accepted into the queue, since the worst-case bounds are affected by changes to the fee configuration.

In addition to storing the latest fee configuration to be returned by `getFeeConfig`, the precompile will also maintain state storing the latest values of $q$ and $K$. These values can be derived from the `targetGas` and `timeToDouble` values given to the precompile, respectively. The value of $q$ can be deterministically calculated using the same method as Coreth currently employs to calculate a node's desired target excess [here](https://github.com/ava-labs/coreth/blob/b4c8300490afb7f234df704fdcc446f227e4ec2f/plugin/evm/upgrade/acp176/acp176.go#L170). Similarly, the value of $K$ could be computed directly according to:

$$
K = \frac{targetGas \cdot timeToDouble}{\ln(2)}
$$

However, floating point math may introduce inaccuracies. Instead, a similar approach will be employed using binary search to determine the closest integer solution for $K$.

Similar to the [desired target excess calculation in Coreth](https://github.com/ava-labs/coreth/blob/0255516f25964cf4a15668946f28b12935a50e0c/plugin/evm/upgrade/acp176/acp176.go#L170), which takes a node's desired gas target and calculates its desired target excess value, the `ACP224FeeManagerPrecompile` will use binary search to determine the resulting dynamic target excess value given the `targetGas` value passed to `setFeeConfig`. All blocks accepted after the settlement of such a call must have the correct target excess value as derived from the binary search result.
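The binary-search approach for the integer solution of $K$ might look like the following Go sketch. It is illustrative only: the predicate here uses floating point for readability, whereas a production implementation would use fixed-point arithmetic as in the linked Coreth code.

```go
package main

import "math"

// ClosestK binary-searches for the largest integer K such that, at the
// maximum sustained excess growth rate of T gas/second, the gas price
// (M * e^(x/K)) at least doubles within timeToDouble seconds, i.e.
// e^(T*timeToDouble/K) >= 2. Larger K means slower price movement, so the
// predicate is monotone in K.
func ClosestK(targetGas, timeToDouble uint64) uint64 {
	doubles := func(k uint64) bool {
		return math.Exp(float64(targetGas*timeToDouble)/float64(k)) >= 2
	}
	// The exact solution is T*timeToDouble/ln(2) < 2*T*timeToDouble,
	// so this range is guaranteed to bracket it.
	lo, hi := uint64(1), 2*targetGas*timeToDouble
	for lo < hi {
		mid := (lo + hi + 1) / 2
		if doubles(mid) {
			lo = mid // price still doubles fast enough; try a larger K
		} else {
			hi = mid - 1
		}
	}
	return lo
}
```

For `targetGas = 1,000,000` and `timeToDouble = 60`, the search converges on `K ≈ 86.56 * 10^6`, i.e. the `~87*T` value used by the C-Chain.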
Block building logic can follow the below diagram for determining the target excess of blocks.

```mermaid
flowchart TD
    B{Is ACP224FeeManager precompile active?}
    B -- Yes --> C[Use targetExcess from precompile storage at latest settled root]
    B -- No --> D{Is gas-target set in node chain config file?}
    D -- Yes --> E[Calculate targetExcess from configured preference and allowed update bounds]
    D -- No --> F{Does parent block have ACP176 fields?}
    F -- Yes --> G[Use parent block ACP176 gas target]
    F -- No --> H[Use MinTargetPerSecond]
```

#### Adjustment to ACP-176 calculations for price discovery

ACP-176 defines the gas price for a block as:

$$
gas\_price = M \cdot e^{\frac{x}{K}}
$$

Now, whenever $M$ (`minGasPrice`) or $K$ (derived from `timeToDouble`) are changed via the `ACP224FeeManagerPrecompile`, $x$ must also be updated. Specifically, when $M$ is updated from $M_0$ to $M_1$, $x$ must also be updated from $x_0$ (the current excess) to $x_1$. $x_1$ theoretically could be calculated directly as:

$$
x_1 = \ln(\frac{M_0}{M_1}) \cdot K + x_0
$$

However, this would introduce floating point inaccuracies. Instead, $x_1$ can be approximated using binary search to find the minimum non-negative integer such that the resulting gas price calculated using $M_1$ is greater than or equal to the current gas price prior to the change in $M$. In effect, this means that both reducing the minimum gas price and increasing the minimum gas price to a value less than the current gas price have no immediate effect on the current gas price. However, increasing the minimum gas price to a value greater than the current gas price will cause the gas price to immediately step up to the new minimum value.

Similarly, when $K$ is updated from $K_0$ to $K_1$, $x$ must also be updated from $x_0$ (the current excess) to $x_1$, where $x_1$ is calculated as:

$$
x_1 = x_0 \cdot \frac{K_1}{K_0}
$$

This makes it such that the current gas price stays the same when $K$ is changed.
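The two excess adjustments described above can be sketched in Go. The names are hypothetical and the first function uses floating point purely for illustration; as the text notes, a real implementation would avoid it:

```go
package main

import "math"

// NewExcessForMinPriceChange binary-searches for the smallest non-negative
// integer x1 such that the gas price under the new minimum m1 (m1*e^(x1/K))
// is at least the current price m0*e^(x0/K).
func NewExcessForMinPriceChange(m0, m1 float64, x0, k uint64) uint64 {
	current := m0 * math.Exp(float64(x0)/float64(k))
	lo, hi := uint64(0), x0+64*k // generous bracket: covers m0/m1 up to e^64
	for lo < hi {
		mid := (lo + hi) / 2
		if m1*math.Exp(float64(mid)/float64(k)) >= current {
			hi = mid // mid already reaches the current price; try smaller
		} else {
			lo = mid + 1
		}
	}
	return lo
}

// NewExcessForKChange rescales the excess so the gas price is unchanged
// when K is updated: x1 = x0 * K1 / K0.
func NewExcessForKChange(x0, k0, k1 uint64) uint64 {
	return x0 * k1 / k0
}
```

For example, halving the minimum price roughly adds `K*ln(2)` to the excess, leaving the current price untouched, while raising the minimum above the current price drives the search to `x1 = 0`, stepping the price up to the new minimum.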
Changes to $K$ only impact how quickly or slowly the gas price can change going forward based on usage.

## Backwards Compatibility

ACP-224 will require a network upgrade in order to activate the new fee mechanism. Another activation will also be required to activate the new fee manager precompile. The activation of the precompile should never occur before the activation of ACP-224 (the fee mechanism), since the precompile depends on ACP-224's fee update logic to function correctly.

Activation of the ACP-224 mechanism will deactivate the prior fee mechanism and the prior fee manager precompile. This ensures that there is no ambiguity or overlap between legacy and new pricing logic.

In order to provide a configuration for existing networks, a network upgrade override for both activation time and ACP-176 configuration parameters will be introduced. These upgrades will be optional at the moment. However, with the introduction of ACP-194 (SAE), it will be required to activate this ACP; otherwise the network will not be able to use ACP-194.

## Reference Implementation

A reference implementation is not yet available and must be provided for this ACP to be considered implementable.

## Security Considerations

Generally, this has the same security considerations as ACP-176. However, due to the dynamic nature of the parameters exposed in the `ACP224FeeManagerPrecompile`, there is an additional risk of misconfiguration. Misconfiguration of parameters could leave the network vulnerable to a DoS attack or result in higher transaction fees than necessary.

## Open Questions

* Should activation of the `ACP224FeeManager` precompile disable the old precompile itself, or should we require it to be manually disabled as a separate upgrade?
* Should we use `targetGas` in the genesis/chain config as an optional field signaling whether the chain config should take precedence over the validator preferences?
* Similarly to the above, should we have a toggle in the `ACP224FeeManager` precompile to give validators control over `targetGas`?

## Acknowledgements

* [Stephen Buttolph](https://github.com/StephenButtolph)
* [Arran Schlosberg](https://github.com/ARR4N)
* [Austin Larson](https://github.com/alarso16)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-226: Dynamic Minimum Block Times (/docs/acps/226-dynamic-minimum-block-times)

---
title: "ACP-226: Dynamic Minimum Block Times"
description: "Details for Avalanche Community Proposal 226: Dynamic Minimum Block Times"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/226-dynamic-minimum-block-times/README.md
---

| ACP | 226 |
| :- | :- |
| **Title** | Dynamic Minimum Block Times |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) |
| **Status** | Implementable ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/228)) |
| **Track** | Standards |

## Abstract

Proposes replacing the current block production rate limiting mechanism on Avalanche EVM chains with a new mechanism where validators collectively and dynamically determine the minimum time between blocks.

## Motivation

Currently, Avalanche EVM chains employ a mechanism to limit the rate of block production by increasing the "block gas cost" that must be burned if blocks are produced more frequently than the target block rate specified for the chain. The block gas cost is paid by summing the "priority fee" amounts that all transactions included in the block collectively burn. This mechanism has a few notable suboptimal aspects:

1. There is no explicit minimum block delay time. Validators are capable of producing blocks as frequently as they would like by paying the additional fee, and too rapid block production could cause network stability issues.
1.
The target block rate can only be changed in a required network upgrade, which makes updates difficult to coordinate and operationalize.
1. The target block rate can only be specified with 1-second granularity, which does not allow for configuring sub-second block times as performance improvements are made to make them feasible.

With the prospect of ACP-194 removing block execution from consensus and allowing for increases to the gas target through the dynamic ACP-176 mechanism, Avalanche EVM chains would be better suited by having a dynamic minimum block delay time denominated in milliseconds. This allows networks to ensure that blocks are never produced more frequently than the minimum block delay, and allows validators to dynamically influence the minimum block delay value by setting their preference.

## Specification

### Block Header Changes

Upon activation of this ACP, the `blockGasCost` field in block headers will be required to be set to 0. This means that no validation that the cumulative priority fee amounts of transactions within the block exceed the block gas cost is required. Additionally, two new fields are added to EVM block headers: `timestampMilliseconds` and `minimumBlockDelayExcess`.

#### `timestampMilliseconds`

The canonical serialization and interpretation of EVM blocks already contains a block timestamp specified in seconds. Altering this would require deep changes to the EVM codebase, as well as cause breaking changes to tooling such as indexers and block explorers. Instead, a new field is added representing the unix timestamp in milliseconds. Header verification should verify that `block.timestamp` (in seconds) is aligned with `block.timestampMilliseconds`; more precisely: `block.timestampMilliseconds / 1000 == block.timestamp` (integer division). Existing tools that do not need millisecond granularity do not need to parse the new field, which limits the amount of breaking changes.

The `timestampMilliseconds` field is represented in block headers as a `uint64`.
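The alignment rule between the two timestamp fields can be expressed as a simple check. A sketch, where integer division mirrors the truncation of milliseconds to whole seconds:

```python
def timestamps_aligned(timestamp_seconds: int, timestamp_milliseconds: int) -> bool:
    """Header verification: the millisecond timestamp, truncated to whole
    seconds, must equal the legacy seconds-denominated timestamp."""
    return timestamp_milliseconds // 1000 == timestamp_seconds

# A header whose millisecond timestamp falls within the stated second is valid;
# one that drifts into a different second is not.
assert timestamps_aligned(1700000000, 1700000000250)
assert not timestamps_aligned(1700000000, 1700000001250)
```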
#### `minimumBlockDelayExcess`

The new `minimumBlockDelayExcess` field in the block header is used to derive the minimum number of milliseconds that must pass before the next block is allowed to be accepted. Specifically, if block $B$ has a `minimumBlockDelayExcess` of $q$, then the effective timestamp of block $B+1$ in milliseconds must be at least $M \cdot e^{\frac{q}{D}}$ greater than the effective timestamp of block $B$ in milliseconds. $M$, $q$, and $D$ are defined below in the mechanism specification.

The `minimumBlockDelayExcess` field is represented in block headers as a `uint64`. The value of `minimumBlockDelayExcess` can be updated in each block, similar to the gas target excess field introduced in ACP-176. The mechanism is specified below.

### Dynamic `minimumBlockDelay` mechanism

The `minimumBlockDelay` can be defined as:

$$m = M \cdot e^{\frac{q}{D}}$$

Where:

- $M$ is the global minimum `minimumBlockDelay` value in milliseconds
- $q$ is a non-negative integer that is initialized upon the activation of this mechanism, referred to as the `minimumBlockDelayExcess`
- $D$ is a constant that helps control the rate of change of `minimumBlockDelay`

After the execution of transactions in block $b$, the value of $q$ can be increased or decreased by up to $Q$. It must be the case that $\left|\Delta q\right| \leq Q$, or block $b$ is considered invalid. The amount by which $q$ changes after executing block $b$ is specified by the block builder.

Block builders (i.e., validators) may set their desired value for $M$ (i.e., their desired `minimumBlockDelay`) in their configuration, and their desired value for $q$ can then be calculated as:

$$q_{desired} = D \cdot \ln\left(\frac{M_{desired}}{M}\right)$$

Note that since $q_{desired}$ is only used locally and can be different for each node, it is safe for implementations to approximate the value of $\ln\left(\frac{M_{desired}}{M}\right)$ and round the resulting value to the nearest integer.
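As a sketch of this approximation (using floating-point `math.log`, which is acceptable here since the value is only used locally), plugging in the C-Chain activation parameters given later in this document ($M = 1$ ms, $D = 2^{20}$, desired delay of 2,000 ms) reproduces the initial excess value:

```python
import math

def desired_excess(m_desired_ms: float, m_min_ms: float, d: int) -> int:
    """q_desired = D * ln(M_desired / M), rounded to the nearest integer."""
    return round(d * math.log(m_desired_ms / m_min_ms))

# With M = 1 ms and D = 2^20, a desired minimumBlockDelay of 2,000 ms
# corresponds to the C-Chain's initial minimumBlockDelayExcess.
assert desired_excess(2000, 1, 2**20) == 7_970_124
```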
Alternatively, client implementations can choose to use binary search to find the closest integer solution, as `coreth` [does to calculate a node's desired target excess](https://github.com/ava-labs/coreth/blob/ebaa8e028a3a8747d11e6822088b4af7863451d8/plugin/evm/upgrade/acp176/acp176.go#L170).

When building a block, builders can calculate their next preferred value for $q$ based on the network's current value (`q_current`) according to:

```python
# Calculates a node's new desired value for q for a given block,
# moving from q_current toward q_desired by at most max_change.
def calc_next_q(q_current: int, q_desired: int, max_change: int) -> int:
    if q_desired > q_current:
        return q_current + min(q_desired - q_current, max_change)
    else:
        return q_current - min(q_current - q_desired, max_change)
```

As $q$ is updated after the execution of transactions within the block, $m$ is also updated such that $m = M \cdot e^{\frac{q}{D}}$ at all times. As noted above, the change to $m$ only takes effect for subsequent block production, and cannot change the time at which block $b$ can be produced itself.

### Gas Accounting Updates

Currently, the amount of gas capacity available is only incremented on a per-second basis, as defined by ACP-176. With this ACP, chains are expected to be able to have sub-second block times. However, when a chain's gas capacity is fully consumed (i.e., during periods of heavy transaction load), blocks would not be able to be produced at sub-second intervals, because at least one second would need to elapse for new gas capacity to be added.

To correct this, upon activation of this ACP, gas capacity is added on a per-millisecond basis. The ACP-176 mechanism for determining the target gas consumption per second remains unchanged, but its result is now used to derive the target gas consumption per millisecond by dividing by 1000, and gas capacity is added at that rate as each block advances time by some number of milliseconds.
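The per-millisecond accrual can be sketched as follows. This is illustrative only; the exact rounding behavior is an implementation detail not fixed by this ACP:

```python
def capacity_added(target_gas_per_second: int, elapsed_milliseconds: int) -> int:
    """Gas capacity accrued while a block advances time by the given number
    of milliseconds, at 1/1000th of the ACP-176 per-second target rate."""
    return (target_gas_per_second * elapsed_milliseconds) // 1000

# A chain targeting 2M gas/s accrues 500k gas of capacity over a 250 ms block,
# rather than having to wait a full second for any new capacity.
assert capacity_added(2_000_000, 250) == 500_000
```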
### Activation Parameters for the C-Chain

Parameters at activation on the C-Chain are:
| Parameter | Description | C-Chain Configuration |
| - | - | - |
| $M$ | minimum `minimumBlockDelay` value | 1 millisecond |
| $q$ | initial `minimumBlockDelayExcess` | 7,970,124 |
| $D$ | `minimumBlockDelay` update constant | $2^{20}$ |
| $Q$ | `minimumBlockDelay` update factor change limit | 200 |
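These values can be sanity-checked numerically against the formulas above (an illustrative Python sketch):

```python
import math

# C-Chain activation parameters from the table above
M = 1            # minimum minimumBlockDelay, in milliseconds
q = 7_970_124    # initial minimumBlockDelayExcess
D = 2 ** 20      # minimumBlockDelay update constant
Q = 200          # maximum per-block change to q

# Effective minimumBlockDelay at activation: m = M * e^(q / D)
m = M * math.exp(q / D)
assert round(m) == 2000  # ~2 seconds, the current C-Chain target block rate

# Consecutive maximal-change blocks needed to halve or double the delay
blocks_to_double = D * math.log(2) / Q
assert 3600 <= blocks_to_double <= 3700  # ~3,600 blocks, as stated below
```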
$M$ was chosen as a lower bound for `minimumBlockDelay` values to allow high-performance Avalanche L1s to realize maximum performance and minimal transaction latency. Based on the 1 millisecond value for $M$, $q$ was chosen such that the effective `minimumBlockDelay` value at the time of activation is as close as possible to the current target block rate of the C-Chain, which is 2 seconds. $D$ and $Q$ were chosen such that it takes approximately 3,600 consecutive blocks of the maximum allowed change in $q$ for the effective `minimumBlockDelay` value to either halve or double.

### ProposerVM `MinBlkDelay`

The ProposerVM currently offers a static, configurable `MinBlkDelay`, in seconds, between consecutive blocks. With this ACP enforcing a dynamic minimum block delay time, any EVM instance adopting this ACP that also leverages the ProposerVM should ensure that the ProposerVM `MinBlkDelay` is set to 0.

### Note on Block Building

While there is no longer a requirement for blocks to burn a minimum block gas cost after the activation of this ACP, block builders should still take priority fees into account when building blocks to allow for transaction prioritization and to maximize the amount of native token (AVAX) burned in the block. From a user (transaction issuer) perspective, this means that a non-zero priority fee would only ever need to be set to ensure inclusion during periods of maximum gas utilization.

## Backwards Compatibility

While this proposal requires a network upgrade and updates the EVM block header format, it does so in a way that tries to maintain as much backwards compatibility as possible. Specifically, applications that currently parse and use the existing timestamp field that is denominated in seconds can continue to do so. The `timestampMilliseconds` header value only needs to be used in cases where more granular timestamps are required.
## Reference Implementation This ACP was implemented and merged into Coreth and Subnet-EVM behind the `Granite` upgrade flag. The full implementation can be found in [coreth@v0.15.4-rc.4](https://github.com/ava-labs/coreth/releases/tag/v0.15.4-rc.4) and [subnet-evm@v0.8.0-fuji-rc.0](https://github.com/ava-labs/subnet-evm/releases/tag/v0.8.0-fuji-rc.0). ## Security Considerations Too rapid block production may cause availability issues if validators of the given blockchain are not able to keep up with blocks being proposed to consensus. This new mechanism allows validators to help influence the maximum frequency at which blocks are allowed to be produced, but potential misconfiguration or overly aggressive settings may cause problems for some validators. The mechanism for the minimum block delay time to adapt based on validator preference has already been used previously to allow for dynamic gas targets based on validator preference on the C-Chain, providing more confidence that it is suitable for controlling this network parameter as well. However, because each block is capable of changing the value of the minimum block delay by a certain amount, the lower the minimum block delay is, the more blocks that can be produced in a given time, and the faster the minimum block delay value will be able to change. This creates a dynamic where the mechanism for controlling `minimumBlockDelay` is more reactive at lower values, and less reactive at higher values. The global minimum `minimumBlockDelay` ($M$) provides a lower bound of how quickly blocks can ever be produced, but it is left to validators to ensure that the effective value does not exceed their collective preference. ## Acknowledgments Thanks to [Luigi D'Onorio DeMeo](https://x.com/luigidemeo) for continually bringing up the idea of reducing block times to provide better UX for users of Avalanche blockchains. 
## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-23: P Chain Native Transfers (/docs/acps/23-p-chain-native-transfers) --- title: "ACP-23: P Chain Native Transfers" description: "Details for Avalanche Community Proposal 23: P Chain Native Transfers" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/23-p-chain-native-transfers/README.md --- | ACP | 23 | | :--- | :--- | | **Title** | P-Chain Native Transfers | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Support native transfers on P-chain. This enables users to transfer P-chain assets without leaving the P-chain or using a transaction type that's not meant for native transfers. ## Motivation Currently, the P-chain has no simple transfer transaction type. The X-chain supports this functionality through a `BaseTx`. Although the P-chain contains transaction types that extend `BaseTx`, the `BaseTx` transaction type itself is not a valid transaction. This leads to abnormal implementations of P-chain native transfers like in the AvalancheGo wallet which abuses [`CreateSubnetTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.15/wallet/chain/p/builder.go#L54-L63) to replicate the functionality contained in `BaseTx`. With the growing number of subnets slated for launch on the Avalanche network, simple transfers will be demanded more by users. While there are work-arounds as mentioned before, the network should support it natively to provide a cheaper option for both validators and end-users. ## Specification To support `BaseTx`, Avalanche Network Clients (like AvalancheGo) must register `BaseTx` with the type ID `0x22` in codec version `0x00`. For the specification of the transaction itself, see [here](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/platformvm/txs/base_tx.go#L29). 
Note that most other P-chain transactions extend this type; the only change in this ACP is to register it as a valid transaction itself.

## Backwards Compatibility

Adding a new transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to reject this transaction prior to activation. This ACP only details the specification of the added `BaseTx` transaction type.

## Reference Implementation

An implementation of `BaseTx` support was created [here](https://github.com/ava-labs/avalanchego/pull/2232) and subsequently merged into AvalancheGo. Since the "D" Upgrade is not activated, this transaction will be rejected by AvalancheGo. If modifications are made to the specification of the transaction as part of the ACP process, the code must be updated prior to activation.

## Security Considerations

The P-chain has fixed fees, which do not place any limits on chain throughput. A potentially popular transaction type like `BaseTx` may cause periods of high usage. The reference implementation in AvalancheGo sets the transaction fee to 0.001 AVAX as a deterrent (equivalent to `ImportTx` and `ExportTx`). This should be sufficient for the time being, but a dynamic fee mechanism will need to be added to the P-chain in the future to mitigate this security concern. This is not addressed in this ACP as it requires a larger change to the fee dynamics of the P-chain as a whole.

## Open Questions

No open questions.

## Acknowledgements

Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on the reference implementation.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-24: Shanghai Eips (/docs/acps/24-shanghai-eips) --- title: "ACP-24: Shanghai Eips" description: "Details for Avalanche Community Proposal 24: Shanghai Eips" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/24-shanghai-eips/README.md --- | ACP | 24 | | :--- | :--- | | **Title** | Activate Shanghai EIPs on C-Chain | | **Author(s)** | Darioush Jalali ([@darioush](https://github.com/darioush)) | | **Status** | Activated | | **Track** | Standards | ## Abstract This ACP proposes the adoption of the following EIPs on the Avalanche C-Chain network: - [EIP-3651: Warm COINBASE](https://eips.ethereum.org/EIPS/eip-3651) - [EIP-3855: PUSH0 instruction](https://eips.ethereum.org/EIPS/eip-3855) - [EIP-3860: Limit and meter initcode](https://eips.ethereum.org/EIPS/eip-3860) - [EIP-6049: Deprecate SELFDESTRUCT](https://eips.ethereum.org/EIPS/eip-6049) ## Motivation The listed EIPs were activated on Ethereum mainnet as part of the [Shanghai upgrade](https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/shanghai.md#included-eips). This ACP proposes their activation on the Avalanche C-Chain in the next network upgrade. This maintains compatibility with upstream EVM tooling, infrastructure, and developer experience (e.g., Solidity compiler >= [0.8.20](https://github.com/ethereum/solidity/releases/tag/v0.8.20)). ## Specification & Reference Implementation This ACP proposes the EIPs be adopted as specified in the EIPs themselves. ANCs (Avalanche Network Clients) can adopt the implementation as specified in the [coreth](https://github.com/ava-labs/coreth) repository, which was adopted from the [go-ethereum v1.12.0](https://github.com/ethereum/go-ethereum/releases/tag/v1.12.0) release in this [PR](https://github.com/ava-labs/coreth/pull/277). 
In particular, note the following code: - [Activation of new opcode and dynamic gas calculations](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/vm/jump_table.go#L92) - [EIP-3860 intrinsic gas calculations](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/state_transition.go#L112-L113) - [EIP-3651 warm coinbase](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/state/statedb.go#L1197-L1199) - Note EIP-6049 marks SELFDESTRUCT as deprecated, but does not remove it. The implementation in coreth is unchanged. ## Backwards Compatibility The following backward compatibility considerations were highlighted by the original EIP authors: - [EIP-3855](https://eips.ethereum.org/EIPS/eip-3855#backwards-compatibility): "... introduces a new opcode which did not exist previously. Already deployed contracts using this opcode could change their behaviour after this EIP". - [EIP-3860](https://eips.ethereum.org/EIPS/eip-3860#backwards-compatibility) "Already deployed contracts should not be effected, but certain transactions (with initcode beyond the proposed limit) would still be includable in a block, but result in an exceptional abort." Adoption of this ACP modifies consensus rules for the C-Chain, therefore it requires a network upgrade. ## Security Considerations Refer to the original EIPs for security considerations: - [EIP 3855](https://eips.ethereum.org/EIPS/eip-3855#security-considerations) - [EIP 3860](https://eips.ethereum.org/EIPS/eip-3860#security-considerations) ## Open Questions No open questions. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-25: Vm Application Errors (/docs/acps/25-vm-application-errors) --- title: "ACP-25: Vm Application Errors" description: "Details for Avalanche Community Proposal 25: Vm Application Errors" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/25-vm-application-errors/README.md --- | ACP | 25 | | :--- | :--- | | **Title** | Virtual Machine Application Errors | | **Author(s)** | Joshua Kim ([@joshua-kim](https://github.com/joshua-kim)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Support a way for a Virtual Machine (VM) to signal application-defined error conditions to another VM. ## Motivation VMs are able to build their own peer-to-peer application protocols using the `AppRequest`, `AppResponse`, and `AppGossip` primitives. `AppRequest` is a message type that requires a corresponding `AppResponse` to indicate a successful response. In the unhappy path where an `AppRequest` is unable to be served, there currently is no native way for a peer to signal an error condition. VMs currently resort to timeouts in failure cases, where a client making a request will fallback to marking its request as failed after some timeout period has expired. Having a native application error type would offer a more powerful abstraction where Avalanche nodes would be able to score peers based on perceived errors. This is not currently possible because Avalanche networking isn't aware of the specific implementation details of the messages being delivered to VMs. A native application error type would also guarantee that all clients can potentially expect an `AppError` message to unblock an unsuccessful `AppRequest` and only rely on a timeout when absolutely necessary, significantly decreasing the latency for a client to unblock its request in the unhappy path. 
## Specification

### Message

This modifies the p2p specification by introducing a new [protobuf](https://protobuf.dev/) message type:

```
message AppError {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint32 error_code = 3;
  string error_message = 4;
}
```

1. `chain_id`: Reserves field 1. Senders **must** use the same chain id from the original `AppRequest` this `AppError` message is being sent in response to.
2. `request_id`: Reserves field 2. Senders **must** use the same request id from the original `AppRequest` this `AppError` message is being sent in response to.
3. `error_code`: Reserves field 3. Application-defined error code. Implementations _should_ use the same error codes for the same conditions to allow clients to match on errors. Negative error codes are reserved for protocol-defined errors. VMs may reserve any error code greater than zero.
4. `error_message`: Reserves field 4. Application-defined human-readable error message that _should not_ be used for error matching. For error matching, use `error_code`.

### Reserved Errors

The following error codes are currently reserved by the Avalanche protocol:

| Error Code | Description |
| ---------- | --------------- |
| 0 | undefined |
| -1 | network timeout |

### Handling

Clients **must** respond to an inbound `AppRequest` message with either a corresponding `AppResponse` to indicate a successful response, or an `AppError` to indicate an error condition, by the requested `deadline` in the original `AppRequest`.

## Backwards Compatibility

This new message type requires a network activation to require either an `AppResponse` or an `AppError` as a required response to an `AppRequest`.

## Reference Implementation

- Message definition: https://github.com/ava-labs/avalanchego/pull/2111
- Handling: https://github.com/ava-labs/avalanchego/pull/2248

## Security Considerations
Clients should be aware that peers can arbitrarily send `AppError` messages to invoke error handling logic in a VM.

## Open Questions

No open questions.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-30: Avalanche Warp X Evm (/docs/acps/30-avalanche-warp-x-evm)

---
title: "ACP-30: Avalanche Warp X Evm"
description: "Details for Avalanche Community Proposal 30: Avalanche Warp X Evm"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/30-avalanche-warp-x-evm/README.md
---

| ACP | 30 |
| :--- | :--- |
| **Title** | Integrate Avalanche Warp Messaging into the EVM |
| **Author(s)** | Aaron Buchwald ([aaron.buchwald56@gmail.com](mailto:aaron.buchwald56@gmail.com)) |
| **Status** | Activated |
| **Track** | Standards |

## Abstract

Integrate Avalanche Warp Messaging into the C-Chain and Subnet-EVM in order to bring Cross-Subnet Communication to the EVM on Avalanche.

## Motivation

Avalanche Subnets enable the creation of independent blockchains within the Avalanche Network. Each Avalanche Subnet registers its validator set on the Avalanche P-Chain, which serves as an effective "membership chain" for the entire Avalanche Ecosystem. By providing read access to the validator set of every Subnet on the Avalanche Network, any Subnet can look up the validator set of any other Subnet within the Avalanche Ecosystem to verify an Avalanche Warp Message, which replaces the need for point-to-point exchange of validator set info between Subnets. This enables a lightweight protocol that allows seamless, on-demand communication between Subnets.
For more information on the Avalanche Warp Messaging message and payload formats see here: - [AWM Message Format](https://github.com/ava-labs/avalanchego/tree/v1.10.15/vms/platformvm/warp/README.md) - [Payload Format](https://github.com/ava-labs/avalanchego/tree/v1.10.15/vms/platformvm/warp/payload/README.md) This ACP proposes to activate Avalanche Warp Messaging on the C-Chain and offer compatible support in Subnet-EVM to provide the first standard implementation of AWM in production on the Avalanche Network. ## Specification The specification will be broken down into the Solidity interface of the Warp Precompile, a Golang example implementation, the predicate verification, and the proposed gas costs for the Warp Precompile. The Warp Precompile address is `0x0200000000000000000000000000000000000005`. ### Precompile Solidity Interface ```solidity // (c) 2022-2023, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; struct WarpMessage { bytes32 sourceChainID; address originSenderAddress; bytes payload; } struct WarpBlockHash { bytes32 sourceChainID; bytes32 blockHash; } interface IWarpMessenger { event SendWarpMessage(address indexed sender, bytes32 indexed messageID, bytes message); // sendWarpMessage emits a request for the subnet to send a warp message from [msg.sender] // with the specified parameters. // This emits a SendWarpMessage log from the precompile. When the corresponding block is accepted // the Accept hook of the Warp precompile is invoked with all accepted logs emitted by the Warp // precompile. // Each validator then adds the UnsignedWarpMessage encoded in the log to the set of messages // it is willing to sign for an off-chain relayer to aggregate Warp signatures. 
function sendWarpMessage(bytes calldata payload) external returns (bytes32 messageID); // getVerifiedWarpMessage parses the pre-verified warp message in the // predicate storage slots as a WarpMessage and returns it to the caller. // If the message exists and passes verification, returns the verified message // and true. // Otherwise, returns false and the empty value for the message. function getVerifiedWarpMessage(uint32 index) external view returns (WarpMessage calldata message, bool valid); // getVerifiedWarpBlockHash parses the pre-verified WarpBlockHash message in the // predicate storage slots as a WarpBlockHash message and returns it to the caller. // If the message exists and passes verification, returns the verified message // and true. // Otherwise, returns false and the empty value for the message. function getVerifiedWarpBlockHash( uint32 index ) external view returns (WarpBlockHash calldata warpBlockHash, bool valid); // getBlockchainID returns the snow.Context BlockchainID of this chain. // This blockchainID is the hash of the transaction that created this blockchain on the P-Chain // and is not related to the Ethereum ChainID. function getBlockchainID() external view returns (bytes32 blockchainID); } ``` ### Warp Predicates and Pre-Verification Signed Avalanche Warp Messages are encoded in the [EIP-2930 Access List](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2930.md) of a transaction, so that they can be pre-verified before executing the transactions in the block. The access list can specify any number of access tuples: a pair of an address and an array of storage slots in EIP-2930. Warp Predicate verification borrows this functionality to encode signed warp messages according to the serialization format defined [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/predicate/Predicate.md). Each Warp specific access tuple included in the access list specifies the Warp Precompile address as the address. 
The first access tuple that specifies the Warp Precompile address is considered to be at index 0. Each subsequent access tuple that specifies the Warp Precompile address increases the Warp Message index by 1. Access tuples that specify any other address are not included in calculating the index for a specific warp message.

Avalanche Warp Messages are pre-verified (prior to block execution), and pre-verification outputs a bitset for each transaction in which a 1 indicates that the Avalanche Warp Message at that index failed verification. Throughout the EVM execution, the Warp Precompile checks the status of the resulting bitset to determine whether pre-verified messages are considered valid. This has the additional benefit of encoding the Warp pre-verification results in the block, so that verifying a historical block can use the encoded results instead of needing to access potentially old P-Chain state. The result bitset is encoded in the block according to the predicate result specification [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/predicate/Results.md).

Each Warp Message in the access list is charged gas to pay for verifying the Warp Message (gas costs are covered below) and is verified with the following steps (see [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/config.go#L218) for the reference implementation):

1. Unpack the predicate bytes
2. Parse the signed Avalanche Warp Message
3. Verify the signature according to the AWM spec in AvalancheGo [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/config.go#L218) (the quorum numerator/denominator for the C-Chain is 67/100 and is configurable in Subnet-EVM)

### Precompile Implementation

All types, events, and function arguments/outputs are encoded using the ABI package according to the official [Solidity ABI Specification](https://docs.soliditylang.org/en/latest/abi-spec.html).
When the precompile is invoked with a given `calldata` argument, the first four bytes (`calldata[0:4]`) are read as the [function selector](https://docs.soliditylang.org/en/latest/abi-spec.html#function-selector). If the function selector matches the function selector of one of the functions defined by the Solidity interface, the contract invokes the corresponding execution function with the remaining calldata, i.e. `calldata[4:]`.

For the full specification of the execution functions defined in the Solidity interface, see the reference implementation here:

- [sendWarpMessage](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L226)
- [getVerifiedWarpMessage](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L187)
- [getVerifiedWarpBlockHash](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L145)
- [getBlockchainID](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L96)

### Gas Costs

The Warp Precompile charges gas during the verification of included Avalanche Warp Messages, which is included in the intrinsic gas cost of the transaction, and during the execution of the precompile.

#### Verification Gas Costs

Pre-verification charges the following costs for each Avalanche Warp Message:

- GasCostPerSignatureVerification: 20000
- GasCostPerWarpMessageBytes: 100
- GasCostPerWarpSigner: 500

These numbers were determined experimentally using the benchmarks available [here](https://github.com/ava-labs/subnet-evm/blob/master/x/warp/predicate_test.go#L687) to target approximately the same mgas/s as existing precompile benchmarks in the EVM, which range from 50 to 200 mgas/s.
In addition to the benchmarks, the following assumptions and goals were taken into account:

- BLS Public Key Aggregation is extremely fast, resulting in charging more for the base cost of a single BLS Multi-Signature Verification than for adding an additional public key
- The cost per byte included in the transaction should be strictly higher for including Avalanche Warp Messages than via transaction calldata, so that the Warp Precompile does not change the worst case maximum block size

#### Execution Gas Costs

The execution gas costs were determined by summing the cost of the EVM operations that are performed throughout the execution of the precompile, with special consideration for added functionality that does not have an existing corollary within the EVM.

##### sendWarpMessage

`sendWarpMessage` charges a base cost of 41,500 gas + 8 gas / payload byte.

This is comprised of charging for the following components:

- 375 gas / log operation
- 3 topics * 375 gas / topic
- 20k gas to produce and serve a BLS Signature
- 20k gas to store the Unsigned Warp Message
- 8 gas / payload byte

This charges 20k gas for storing an Unsigned Warp Message although the message is stored in an independent key-value database instead of the active state. This makes it less expensive to store, so 20k gas is a conservative estimate. Additionally, the cost of serving valid signatures is significantly cheaper than serving state sync and bootstrapping requests, so the cost to validators of serving signatures over time is not considered a significant concern.

`sendWarpMessage` also charges for the log operation it includes, commensurate with the gas cost of a standard log operation in the EVM. A single `SendWarpMessage` log is charged:

- 375 gas base cost
- 375 gas per topic (`eventID`, `sender`, `messageID`)
- 8 gas per payload byte encoded in the `message` field

Topics are indexed fields encoded as 32 byte values to support querying based on specified topic values.
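The component list above sums exactly to the quoted base cost, which can be checked with a few lines of Go (constants taken directly from the breakdown above):

```go
package main

import "fmt"

// Gas constants from the sendWarpMessage cost breakdown above.
const (
	logBaseGas      = 375   // base cost of the log operation
	logTopicGas     = 375   // per topic
	numTopics       = 3     // eventID, sender, messageID
	blsSignatureGas = 20000 // produce and serve a BLS signature
	storeMessageGas = 20000 // store the Unsigned Warp Message
	perPayloadByte  = 8     // per byte of the message payload
)

// sendWarpMessageGas reproduces the quoted formula:
// 41,500 base gas + 8 gas per payload byte.
func sendWarpMessageGas(payloadLen int) int {
	base := logBaseGas + numTopics*logTopicGas + blsSignatureGas + storeMessageGas
	return base + perPayloadByte*payloadLen
}

func main() {
	fmt.Println(sendWarpMessageGas(0))   // base: 375 + 3*375 + 20000 + 20000 = 41500
	fmt.Println(sendWarpMessageGas(100)) // 41500 + 8*100 = 42300
}
```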
##### getBlockchainID

`getBlockchainID` charges 2 gas to serve an already in-memory 32 byte value, commensurate with existing in-memory operations.

##### getVerifiedWarpBlockHash / getVerifiedWarpMessage

`GetVerifiedWarpMessageBaseCost` charges 2 gas for serving a Warp Message (either payload type). Warp Messages are already in memory, so only 2 gas is charged for access. `GasCostPerWarpMessageBytes` charges 100 gas per byte of the Avalanche Warp Message that is unpacked into a Solidity struct.

## Backwards Compatibility

Existing EVM opcodes and precompiles are not modified by activating Avalanche Warp Messaging in the EVM. This is an additive change to activate a Warp Precompile on the Avalanche C-Chain and can be scheduled for activation in any VM running on Avalanche Subnets that are capable of sending / verifying the specified payload types.

## Reference Implementation

A full reference implementation can be found in Subnet-EVM v0.5.9 [here](https://github.com/ava-labs/subnet-evm/tree/v0.5.9/x/warp).

## Security Considerations

Verifying an Avalanche Warp Message requires reading the source subnet's validator set at the P-Chain height specified in the [Snowman++ Block Extension](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/proposervm/README.md#snowman-block-extension). The Avalanche PlatformVM provides the current state of the Avalanche P-Chain and maintains reverse diff-layers in order to compute Subnets' validator sets at historical points in time. As a result, verifying a historical Avalanche Warp Message that references an old P-Chain height requires applying diff-layers from the current state back to the referenced P-Chain height. As Subnets and the P-Chain continue to produce and accept new blocks, verifying the Warp Messages in historical blocks becomes increasingly expensive.
To efficiently handle historical blocks containing Avalanche Warp Messages, the EVM uses the result bitset encoded in the block to determine the validity of Avalanche Warp Messages without requiring a historical P-Chain state lookup. This is considered secure because the network already verified the Avalanche Warp Messages when the containing block was originally verified and accepted.

## Open Questions

_How should validator set lookups in Warp Message verification be effectively charged for gas?_

The verification cost of performing a validator set lookup on the P-Chain is currently excluded from the implementation. The cost of this lookup is variable depending on how old the referenced P-Chain height is from the perspective of each validator. [Ongoing work](https://github.com/ava-labs/avalanchego/pull/1611) can parallelize P-Chain validator set lookups and message verification to reduce the impact on block verification latency to be negligible, and reduce costs to reflect the additional bandwidth of encoding Avalanche Warp Messages in the transaction.

## Acknowledgements

Avalanche Warp Messaging, and this effort to integrate it into the EVM, has been a monumental undertaking. Thanks to all of the contributors who contributed their ideas, feedback, and development to this effort.

@stephenbuttolph @patrick-ogrady @michaelkaplan13 @minghinmatthewlam @cam-schultz @xanderdunn @darioush @ceyonur

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-31: Enable Subnet Ownership Transfer (/docs/acps/31-enable-subnet-ownership-transfer)

---
title: "ACP-31: Enable Subnet Ownership Transfer"
description: "Details for Avalanche Community Proposal 31: Enable Subnet Ownership Transfer"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/31-enable-subnet-ownership-transfer/README.md
---

| ACP | 31 |
| :--- | :--- |
| **Title** | Enable Subnet Ownership Transfer |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) |
| **Status** | Activated |
| **Track** | Standards |

## Abstract

Allow the current owner of a Subnet to transfer ownership to a new owner.

## Motivation

Once a Subnet is created on the P-chain through a [CreateSubnetTx](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/platformvm/txs/create_subnet_tx.go#L14-L19), the `Owner` of the subnet is currently immutable. Subnet operators may want to transition ownership of the Subnet to a new owner for a number of reasons, not least of all being rotating their control key(s) periodically.

## Specification

Implement a new transaction type (`TransferSubnetOwnershipTx`) that:

1. Takes in a `Subnet`
2. Verifies that the `SubnetAuth` has the right to transfer ownership of the subnet by verifying it against the `Owner` field in the `CreateSubnetTx` that created the `Subnet`
3. Takes in a new `Owner` and assigns it as the new owner of `Subnet`

This transaction type should have the following format (code below is presented in Golang):

```go
type TransferSubnetOwnershipTx struct {
	// Metadata, inputs and outputs
	BaseTx `serialize:"true"`
	// ID of the subnet this tx is modifying
	Subnet ids.ID `serialize:"true" json:"subnetID"`
	// Proves that the issuer has the right to transfer ownership of the subnet.
	SubnetAuth verify.Verifiable `serialize:"true" json:"subnetAuthorization"`
	// Who is now authorized to manage this subnet
	Owner fx.Owner `serialize:"true" json:"newOwner"`
}
```

This transaction type should have type ID `0x21` in codec version `0x00`. This transaction type should have a fee of `0.001 AVAX`, equivalent to adding a subnet validator/delegator.

## Backwards Compatibility

Adding a new transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to reject this transaction prior to activation. This ACP only details the specification of the `TransferSubnetOwnershipTx` type.

## Reference Implementation

An implementation of `TransferSubnetOwnershipTx` was created [here](https://github.com/ava-labs/avalanchego/pull/2178) and subsequently merged into AvalancheGo. Since the "D" Upgrade is not activated, this transaction will be rejected by AvalancheGo. If modifications are made to the specification of the transaction as part of the ACP process, the code must be updated prior to activation.

## Security Considerations

No security considerations.

## Open Questions

No open questions.

## Acknowledgements

Thank you [@friskyfoxdk](https://github.com/friskyfoxdk) for filing an [issue](https://github.com/ava-labs/avalanchego/issues/1946) requesting this feature. Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on the reference implementation.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
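The three specification steps can be sketched in Go with placeholder types. The `owner` and `subnetState` types and the `authorized` flag are illustrative stand-ins for AvalancheGo's `fx.Owner`, on-chain subnet state, and `SubnetAuth` verification; this is not the real API.

```go
package main

import "fmt"

// owner is a stand-in for fx.Owner: a threshold multisig of addresses.
type owner struct {
	Threshold uint32
	Addrs     []string
}

// subnetState is a stand-in for the P-chain's record of a subnet.
type subnetState struct {
	Owner owner
}

// transferOwnership verifies the tx's auth against the current Owner
// (step 2 of the specification) and then assigns the new Owner (step 3).
// Signature verification itself is abstracted into the authorized flag.
func transferOwnership(s *subnetState, authorized bool, newOwner owner) error {
	if !authorized {
		return fmt.Errorf("subnet auth does not satisfy current owner")
	}
	s.Owner = newOwner
	return nil
}

func main() {
	s := &subnetState{Owner: owner{Threshold: 1, Addrs: []string{"P-avax1old"}}}
	err := transferOwnership(s, true, owner{Threshold: 2, Addrs: []string{"P-avax1a", "P-avax1b"}})
	fmt.Println(err, s.Owner.Threshold)
}
```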
# ACP-41: Remove Pending Stakers (/docs/acps/41-remove-pending-stakers) --- title: "ACP-41: Remove Pending Stakers" description: "Details for Avalanche Community Proposal 41: Remove Pending Stakers" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/41-remove-pending-stakers/README.md --- | ACP | 41 | | :--- | :--- | | **Title** | Remove Pending Stakers | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Remove user-specified `StartTime` for stakers. Start the staking period for a staker as soon as their staking transaction is accepted. This greatly reduces the computational load on the P-chain, increasing the efficiency of all Avalanche Network validators. ## Motivation Stakers currently set a `StartTime` for their staking period. This means that Avalanche Network Clients, like AvalancheGo, need to maintain a pending set of all stakers that have not yet started. This places a nontrivial amount of work on the P-chain: - When a new delegator transaction is verified, the pending set needs to be checked to ensure that the validator they are delegating to will not exceed `MaxValidatorStake` while they are active - When a new staker transaction is accepted, it gets added to the pending set - When time is advanced on the P-chain, any stakers in the pending set whose `StartTime <= CurrentTime` need to be moved to the current set By immediately starting every staker on acceptance, the validators do not have to do the above work when validating the P-chain. `MaxValidatorStake` will become an `O(1)` operation as only the current stake of the validator needs to be checked. The pending set can be fully removed. ## Specification 1. When adding a new staker, the current on-chain time should be used for the staker's start time. 2. When determining when to remove the staker from the staker set, the `EndTime` specified in the transaction should continue to be used. 
Staking transactions should now be rejected if they do not satisfy `MinStakeDuration <= EndTime - CurrentTime <= MaxStakeDuration`. `StartTime` will no longer be validated.

## Backwards Compatibility

Modifying the state transition of a transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to not alter the execution behavior prior to activation. This ACP only details the new state transition.

Current wallet implementations will continue to work as-is post-activation of this ACP since no transaction formats are modified or added. Wallet implementations may run into issues with their txs being rejected as a result of this ACP if `EndTime >= CurrentChainTime + MaxStakeDuration`. `CurrentChainTime` is guaranteed to be >= the latest block timestamp on the P-chain.

## Reference Implementation

A reference implementation has not been created for this ACP since it deals with state management. Each ANC will need to adjust their execution step to follow the Specification detailed above. For AvalancheGo, this work is tracked in this PR: https://github.com/ava-labs/avalanchego/pull/2175

If modifications are made to the specification of the new execution behavior as part of the ACP process, the code must be updated prior to activation.

## Security Considerations

No security considerations.

## Open Questions

_How will stakers stake for `MaxStakeDuration` if they cannot determine their `StartTime`?_

As mentioned above, the beginning of your staking period is the block acceptance timestamp. Unless you can accurately predict the block timestamp, you will *not* be able to fully stake for `MaxStakeDuration`. This is an explicit trade-off to guarantee that stakers will receive their original stake + any staking rewards at `EndTime`. Delegators can maximize their staking period by setting the same `EndTime` as the Validator they are delegating to.
## Acknowledgements Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on these ideas. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-62: Disable Addvalidatortx And Adddelegatortx (/docs/acps/62-disable-addvalidatortx-and-adddelegatortx) --- title: "ACP-62: Disable Addvalidatortx And Adddelegatortx" description: "Details for Avalanche Community Proposal 62: Disable Addvalidatortx And Adddelegatortx" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/62-disable-addvalidatortx-and-adddelegatortx/README.md --- | ACP | 62 | | :--- | :--- | | **Title** | Disable `AddValidatorTx` and `AddDelegatorTx` | | **Author(s)** | Jacob Everly ([@JacobEv3rly](https://twitter.com/JacobEv3rly)), Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Disable `AddValidatorTx` and `AddDelegatorTx` to push all new stakers to use `AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx`. `AddPermissionlessValidatorTx` requires validators to register a BLS key. Wide adoption of registered BLS keys accelerates the timeline for future P-Chain upgrades. Additionally, this reduces the number of ways to participate in Primary Network validation from two to one. ## Motivation `AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx` were activated on the Avalanche Network in October 2022 with Banff (v1.9.0). This unlocked the ability for Subnet creators to activate Proof-of-Stake validation using their own token on their own Subnet. See more details about Banff [here](https://medium.com/avalancheavax/banff-elastic-subnets-44042f41e34c). These new transaction types can also be used to register a Primary Network validator, leaving two redundant transactions: `AddValidatorTx` and `AddDelegatorTx`. 
[`AddPermissionlessDelegatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_permissionless_delegator_tx.go#L25-L37) contains the same fields as [`AddDelegatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_delegator_tx.go#L29-L39) with an additional `Subnet` field. [`AddPermissionlessValidatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_permissionless_validator_tx.go#L35-L59) contains the same fields as [`AddValidatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_validator_tx.go#L29-L42) with additional `Subnet` and `Signer` fields. `RewardsOwner` was also split into `ValidationRewardsOwner` and `DelegationRewardsOwner` letting validators divert rewards they receive from delegators into a separate rewards owner. By disabling support of `AddValidatorTx`, all new validators on the Primary Network must use `AddPermissionlessValidatorTx` and register a BLS key with their NodeID. As more validators attach BLS keys to their nodes, future upgrades using these BLS keys can be activated through the ACP process. BLS keys can be used to efficiently sign a common message via [Public Key Aggregation](https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html). Applications of this include, but are not limited to: - **Arbitrary Subnet Rewards**: The P-Chain currently restricts Elastic Subnets to follow the reward curve defined in a `TransformSubnetTx`. With sufficient BLS key adoption, Elastic Subnets can define their own reward curve and reward conditions. The P-Chain can be modified to take in a message indicating if a Subnet validator should be rewarded with how many tokens signed with a BLS Multi-Signature. - **Subnet Attestations**: Elastic Subnets can attest to the state of their Subnet with a BLS Multi-Signature. This can enable clients to fetch the current state of the Subnet without syncing the entire Subnet. 
`StateSync` enables clients to download chain state from peers up to a recent block near tip. However, it is up to the client to query these peers and resolve any potential conflicts in the responses. With Subnet Attestations, clients can query an API node to prove information about a Subnet without querying the Subnet's validators. This can especially be useful for [Subnet-Only Validators](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/62-disable-addvalidatortx-and-adddelegatortx/13-subnet-only-validators.md) to prove information about the C-Chain. To accelerate future BLS-powered advancements in the Avalanche Network, this ACP aims to disable `AddValidatorTx` and `AddDelegatorTx` in Durango. ## Specification `AddValidatorTx` and `AddDelegatorTx` should be marked as dropped when added to the mempool after activation. Any blocks including these transactions should be considered invalid. ## Backwards Compatibility Disabling a transaction type is an execution change and requires a mandatory upgrade for activation. Implementers must take care to not alter the execution behavior prior to activation. After this ACP is activated, any new issuance of `AddValidatorTx` or `AddDelegatorTx` will be considered invalid and dropped by the network. Any consumers of these transactions must transition to using `AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx` to participate in Primary Network validation. The [Avalanche Ledger App](https://github.com/LedgerHQ/app-avalanche) supports both of these transaction types. Note that `AddSubnetValidatorTx` and `RemoveSubnetValidatorTx` are unchanged by this ACP. ## Reference Implementation An implementation disabling `AddValidatorTx` and `AddDelegatorTx` was created [here](https://github.com/ava-labs/avalanchego/pull/2662). Until activation, these transactions will continue to be accepted by AvalancheGo. 
If modifications are made to the specification as part of the ACP process, the code must be updated prior to activation.

## Security Considerations

No security considerations.

## Open Questions

No open questions.

## Acknowledgements

Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-75: Acceptance Proofs (/docs/acps/75-acceptance-proofs)

---
title: "ACP-75: Acceptance Proofs"
description: "Details for Avalanche Community Proposal 75: Acceptance Proofs"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/75-acceptance-proofs/README.md
---

| ACP | 75 |
| :--- | :--- |
| **Title** | Acceptance Proofs |
| **Author(s)** | Joshua Kim |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/82)) |
| **Track** | Standards |

## Abstract

Introduces support for a proof of a block's acceptance in consensus.

## Motivation

Subnets are able to prove arbitrary events using warp messaging, but native support for proving block acceptance at the protocol layer enables more utility. Acceptance proofs are introduced to prove that a block has been accepted by a subnet.

One example use case for acceptance proofs is to provide stronger fault isolation guarantees from the primary network to subnets. Subnets use the [ProposerVM](https://github.com/ava-labs/avalanchego/blob/416fbdf1f783c40f21e7009a9f06d192e69ba9b5/vms/proposervm/README.md) to implement soft leader election for block proposal. The ProposerVM determines the block producer schedule from a randomly shuffled validator set at a specified P-Chain block height. Validators are therefore required to have the P-Chain block referenced in a block's header to verify the block producer against the expected block producer schedule.
If a block's header specifies a P-Chain height that has not been accepted yet, the block is treated as invalid. If a block referencing an unknown P-Chain height was produced virtuously, it is expected that the validator will eventually discover the block as its P-Chain height advances and accept the block. If many validators disagree about the current tip of the P-Chain, it can lead to a liveness concern on the subnet where block production entirely stalls.

In practice, this almost never occurs because nodes produce blocks with a lagging P-Chain height, since it's likely that most nodes will have already accepted a sufficiently stale block. This, however, relies on an assumption that validators are constantly making progress in consensus on the P-Chain to prevent the subnet from stalling. This leaves an open concern where the P-Chain stalling on a node would prevent it from verifying any blocks, leading to a subnet unable to produce blocks if many validators stalled at different P-Chain heights.

---

_Figure 1: A Validator that has synced P-Chain blocks `A` and `B` fails verification of a block proposed at block `C`._

---

We introduce "acceptance proofs", so that a peer can verify any block accepted by consensus. In the aforementioned use-case, if a P-Chain block is unknown by a peer, it can request the block and proof at the provided height from a peer. If a block's proof is valid, the block can be executed to advance the local P-Chain and verify the proposed subnet block.

Peers can request blocks from any peer without requiring consensus locally or communication with a validator. This has the added benefit of reducing the number of required connections and p2p message load served by P-Chain validators.
---

_Figure 2: A Validator is verifying a subnet's block `Z` which references an unknown P-Chain block `C` in its block header_

_Figure 3: A Validator requests the blocks and proofs for `B` and `C` from a peer_

_Figure 4: The Validator accepts the P-Chain blocks and is now able to verify `Z`_

---

## Specification

Note: The following is pseudocode.

### P2P

#### Aggregation

```diff
+ message GetAcceptanceSignatureRequest {
+   bytes chain_id = 1;
+   uint32 request_id = 2;
+   bytes block_id = 3;
+ }
```

The `GetAcceptanceSignatureRequest` message is sent to a peer to request their signature for a given block id.

```diff
+ message GetAcceptanceSignatureResponse {
+   bytes chain_id = 1;
+   uint32 request_id = 2;
+   bytes bls_signature = 3;
+ }
```

`GetAcceptanceSignatureResponse` is sent to a peer in response to a `GetAcceptanceSignatureRequest`. `bls_signature` is the peer's signature, using their registered primary network BLS staking key, over the requested `block_id`. An empty `bls_signature` field indicates that the block was not accepted yet.

## Security Considerations

Nodes that bootstrap using state sync may not have the entire history of the P-Chain and therefore will not be able to provide the entire history for a block that is referenced in a block that they propose. This would be needed to unblock a node that is attempting to fast-forward their P-Chain, as they require the entire ancestry between their current accepted tip and the block they are attempting to forward to. It is assumed that nodes will have some minimum amount of recent state, so that the requester can eventually be unblocked by retrying, as only one node with the requested ancestry is required to unblock the requester.

An alternative is to make a churn assumption and validate the proposed block's proof with a stale validator set to avoid complexity, but this introduces more security concerns.
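A minimal sketch of the responder behavior specified above, using stand-in types for the accepted-block index and the BLS signer; neither is the AvalancheGo API, and the fake signer below exists only to make the sketch self-contained.

```go
package main

import "fmt"

// signer stands in for signing with the node's registered primary
// network BLS staking key.
type signer func(msg []byte) []byte

// acceptanceIndex stands in for the node's view of accepted blocks.
type acceptanceIndex struct {
	accepted map[string]bool
	sign     signer
}

// handleRequest answers a GetAcceptanceSignatureRequest: return the
// node's signature over block_id, or an empty bls_signature when the
// block has not been accepted yet, as the response message requires.
func (a *acceptanceIndex) handleRequest(blockID []byte) []byte {
	if !a.accepted[string(blockID)] {
		return nil // empty bls_signature: block not accepted yet
	}
	return a.sign(blockID)
}

func main() {
	idx := &acceptanceIndex{
		accepted: map[string]bool{"blkA": true},
		// Fake signer for illustration; a real node uses BLS.
		sign: func(msg []byte) []byte { return append([]byte("sig:"), msg...) },
	}
	fmt.Println(string(idx.handleRequest([]byte("blkA")))) // signed: block accepted
	fmt.Println(len(idx.handleRequest([]byte("blkB"))))    // 0: empty, not accepted
}
```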
## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-77: Reinventing Subnets (/docs/acps/77-reinventing-subnets) --- title: "ACP-77: Reinventing Subnets" description: "Details for Avalanche Community Proposal 77: Reinventing Subnets" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/77-reinventing-subnets/README.md --- | ACP | 77 | | :------------ | :---------------------------------------------------------------------------------------- | | **Title** | Reinventing Subnets | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/78)) | | **Track** | Standards | | **Replaces** | [ACP-13](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/13-subnet-only-validators/README.md) | ## Abstract Overhaul Subnet creation and management to unlock increased flexibility for Subnet creators by: - Separating Subnet validators from Primary Network validators (Primary Network Partial Sync, Removal of 2000 $AVAX requirement) - Moving ownership of Subnet validator set management from P-Chain to Subnets (ERC-20/ERC-721/Arbitrary Staking, Staking Reward Management) - Introducing a continuous P-Chain fee mechanism for Subnet validators (Continuous Subnet Staking) This ACP supersedes [ACP-13](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/13-subnet-only-validators/README.md) and borrows some of its language. ## Motivation Each node operator must stake at least 2000 $AVAX ($70k at time of writing) to first become a Primary Network validator before they qualify to become a Subnet validator. Most Subnets aim to launch with at least 8 Subnet validators, which requires staking 16000 $AVAX ($560k at time of writing). 
All Subnet validators, to satisfy their role as Primary Network validators, must also [allocate 8 AWS vCPU, 16 GB RAM, and 1 TB storage](https://github.com/ava-labs/avalanchego/blob/master/README.md#installation) to sync the entire Primary Network (X-Chain, P-Chain, and C-Chain) and participate in its consensus, in addition to whatever resources are required for each Subnet they are validating. Regulated entities that are prohibited from validating permissionless, smart contract-enabled blockchains (like the C-Chain) cannot launch a Subnet because they cannot opt-out of Primary Network Validation. This deployment blocker prevents a large cohort of Real World Asset (RWA) issuers from bringing unique, valuable tokens to the Avalanche Ecosystem (that could move between C-Chain <-> Subnets using Avalanche Warp Messaging/Teleporter). A widely validated Subnet that is not properly metered could destabilize the Primary Network if usage spikes unexpectedly. Underprovisioned Primary Network validators running such a Subnet may exit with an OOM exception, see degraded disk performance, or find it difficult to allocate CPU time to P/X/C-Chain validation. The inverse also holds for Subnets with the Primary Network (where some undefined behavior could bring a Subnet offline). Although the fee paid to the Primary Network to operate a Subnet does not go up with the amount of activity on the Subnet, the fixed, upfront cost of setting up a Subnet validator on the Primary Network deters new projects that prefer smaller, even variable, costs until demand is observed. 
_Unlike L2s that pay some increasing fee (usually denominated in units per transaction byte) to an external chain for data availability and security as activity scales, Subnets provide their own security/data availability and the only cost operators must pay from processing more activity is the hardware cost of supporting additional load._ Elastic Subnets, introduced in [Banff](https://medium.com/avalancheavax/banff-elastic-subnets-44042f41e34c), enabled Subnet creators to activate Proof-of-Stake validation and uptime-based rewards using their own token. However, this token is required to be an ANT (created on the X-Chain) and locked on the P-Chain. All staking rewards were distributed on the P-Chain with the reward curve being defined in the `TransformSubnetTx` and, once set, was unable to be modified. With no Elastic Subnets live on Mainnet, it is clear that Permissionless Subnets as they stand today could be more desirable. There are many successful Permissioned Subnets in production but many Subnet creators have raised the above as points of concern. In summary, the Avalanche community could benefit from a more flexible and affordable mechanism to launch Permissionless Subnets. ### A Note on Nomenclature Avalanche Subnets are subnetworks validated by a subset of the Primary Network validator set. The new network creation flow outlined in this ACP does not require any intersection between the new network's validator set and the Primary Network's validator set. Moreover, the new networks have greater functionality and sovereignty than Subnets. To distinguish between these two kinds of networks, the community has been referring to these new networks as _Avalanche Layer 1s_, or L1s for short. All networks created through the old network creation flow will continue to be referred to as Avalanche Subnets. 
## Specification At a high-level, L1s can manage their validator sets externally to the P-Chain by setting the blockchain ID and address of their _validator manager_. The P-Chain will consume Warp messages that modify the L1's validator set. To confirm modification of the L1's validator set, the P-Chain will also produce Warp messages. L1 validators are not required to validate the Primary Network, and do not have the same 2000 $AVAX stake requirement that Subnet validators have. To maintain an active L1 validator, a continuous fee denominated in $AVAX is assessed. L1 validators are only required to sync the P-Chain (not X/C-Chain) in order to track validator set changes and support cross-L1 communication. ### P-Chain Warp Message Payloads To enable management of an L1's validator set externally to the P-Chain, Warp message verification will be added to the [`PlatformVM`](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm). For a Warp message to be considered valid by the P-Chain, at least 67% of the `sourceChainID`'s weight must have participated in the aggregate BLS signature. This is equivalent to the threshold set for the C-Chain. A future ACP may be proposed to support modification of this threshold on a per-L1 basis. The following Warp message payloads are introduced on the P-Chain: - `SubnetToL1ConversionMessage` - `RegisterL1ValidatorMessage` - `L1ValidatorRegistrationMessage` - `L1ValidatorWeightMessage` The method of requesting signatures for these messages is left unspecified. A viable option for supporting this functionality is laid out in [ACP-118](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/118-warp-signature-request/README.md) with the `SignatureRequest` message. All node IDs contained within the message specifications are represented as variable length arrays such that they can support new node IDs types should the P-Chain add support for them in the future. The serialization of each of these messages is as follows. 
#### `SubnetToL1ConversionMessage` The P-Chain can produce a `SubnetToL1ConversionMessage` for consumers (i.e. validator managers) to be aware of the initial validator set. The following serialization is defined as the `ValidatorData`: | Field | Type | Size | | -------------: | ---------: | -----------------------: | | `nodeID` | `[]byte` | 4 + len(`nodeID`) bytes | | `blsPublicKey` | `[48]byte` | 48 bytes | | `weight` | `uint64` | 8 bytes | | | | 60 + len(`nodeID`) bytes | The following serialization is defined as the `ConversionData`: | Field | Type | Size | | ---------------: | ----------------: | ---------------------------------------------------------: | | `codecID` | `uint16` | 2 bytes | | `subnetID` | `[32]byte` | 32 bytes | | `managerChainID` | `[32]byte` | 32 bytes | | `managerAddress` | `[]byte` | 4 + len(`managerAddress`) bytes | | `validators` | `[]ValidatorData` | 4 + sum(`validatorLengths`) bytes | | | | 74 + len(`managerAddress`) + sum(`validatorLengths`) bytes | - `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000` - `sum(validatorLengths)` is the sum of the lengths of `ValidatorData` serializations included in `validators`. - `subnetID` identifies the Subnet that is being converted to an L1 (described further below). - `managerChainID` and `managerAddress` identify the validator manager for the newly created L1. This is the (blockchain ID, address) tuple allowed to send Warp messages to modify the L1's validator set. - `validators` are the initial continuous-fee-paying validators for the given L1. 
The `SubnetToL1ConversionMessage` is specified as an `AddressedCall` with `sourceChainID` set to the P-Chain ID, the `sourceAddress` set to an empty byte array, and a payload of: | Field | Type | Size | | -------------: | ---------: | -------: | | `codecID` | `uint16` | 2 bytes | | `typeID` | `uint32` | 4 bytes | | `conversionID` | `[32]byte` | 32 bytes | | | | 38 bytes | - `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000` - `typeID` is the payload type identifier and is `0x00000000` for this message - `conversionID` is the SHA256 hash of the `ConversionData` from a given `ConvertSubnetToL1Tx` #### `RegisterL1ValidatorMessage` The P-Chain can consume a `RegisterL1ValidatorMessage` from validator managers through a `RegisterL1ValidatorTx` to register an addition to the L1's validator set. The following is the serialization of a `PChainOwner`: | Field | Type | Size | | ----------: | -----------: | -------------------------------: | | `threshold` | `uint32` | 4 bytes | | `addresses` | `[][20]byte` | 4 + len(`addresses`) \\* 20 bytes | | | | 8 + len(`addresses`) \\* 20 bytes | - `threshold` is the number of `addresses` that must provide a signature for the `PChainOwner` to authorize an action. 
- Validation criteria:
  - If `threshold` is `0`, `addresses` must be empty
  - `threshold` <= len(`addresses`)
  - Entries of `addresses` must be unique and sorted in ascending order

The `RegisterL1ValidatorMessage` is specified as an `AddressedCall` with a payload of:

| Field | Type | Size |
| ----------------------: | ------------: | ------------------------------------------------------------------------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `subnetID` | `[32]byte` | 32 bytes |
| `nodeID` | `[]byte` | 4 + len(`nodeID`) bytes |
| `blsPublicKey` | `[48]byte` | 48 bytes |
| `expiry` | `uint64` | 8 bytes |
| `remainingBalanceOwner` | `PChainOwner` | 8 + len(`addresses`) \* 20 bytes |
| `disableOwner` | `PChainOwner` | 8 + len(`addresses`) \* 20 bytes |
| `weight` | `uint64` | 8 bytes |
| | | 122 + len(`nodeID`) + (len(`addresses1`) + len(`addresses2`)) \* 20 bytes |

- `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
- `typeID` is the payload type identifier and is `0x00000001` for this payload
- `subnetID`, `nodeID`, `weight`, and `blsPublicKey` are for the validator being added
- `expiry` is the time at which this message becomes invalid. As of a P-Chain timestamp `>= expiry`, this Avalanche Warp Message can no longer be used to add the `nodeID` to the validator set of `subnetID`
- `remainingBalanceOwner` is the P-Chain owner that leftover $AVAX from the validator's Balance will be issued to when this validator is removed from the validator set.
- `disableOwner` is the only P-Chain owner allowed to disable the validator using `DisableL1ValidatorTx`, specified below.

#### `L1ValidatorRegistrationMessage`

The P-Chain can produce an `L1ValidatorRegistrationMessage` for consumers to verify that a validation period has either begun or has been invalidated.
The `L1ValidatorRegistrationMessage` is specified as an `AddressedCall` with `sourceChainID` set to the P-Chain ID, the `sourceAddress` set to an empty byte array, and a payload of: | Field | Type | Size | | -------------: | ---------: | -------: | | `codecID` | `uint16` | 2 bytes | | `typeID` | `uint32` | 4 bytes | | `validationID` | `[32]byte` | 32 bytes | | `registered` | `bool` | 1 byte | | | | 39 bytes | - `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000` - `typeID` is the payload type identifier and is `0x00000002` for this message - `validationID` identifies the validator for the message - `registered` is a boolean representing the status of the `validationID`. If true, the `validationID` corresponds to a validator in the current validator set. If false, the `validationID` does not correspond to a validator in the current validator set, and never will in the future. #### `L1ValidatorWeightMessage` The P-Chain can consume an `L1ValidatorWeightMessage` through a `SetL1ValidatorWeightTx` to update the weight of an existing validator. The P-Chain can also produce an `L1ValidatorWeightMessage` for consumers to verify that the validator weight update has been effectuated. The `L1ValidatorWeightMessage` is specified as an `AddressedCall` with the following payload. When sent from the P-Chain, the `sourceChainID` is set to the P-Chain ID, and the `sourceAddress` is set to an empty byte array. 
| Field | Type | Size | | -------------: | ---------: | -------: | | `codecID` | `uint16` | 2 bytes | | `typeID` | `uint32` | 4 bytes | | `validationID` | `[32]byte` | 32 bytes | | `nonce` | `uint64` | 8 bytes | | `weight` | `uint64` | 8 bytes | | | | 54 bytes | - `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000` - `typeID` is the payload type identifier and is `0x00000003` for this message - `validationID` identifies the validator for the message - `nonce` is a strictly increasing number that denotes the latest validator weight update and provides replay protection for this transaction - `weight` is the new `weight` of the validator ### New P-Chain Transaction Types Both before and after this ACP, to create a Subnet, a `CreateSubnetTx` must be issued on the P-Chain. This transaction includes an `Owner` field which defines the key that today can be used to authorize any validator set additions (`AddSubnetValidatorTx`) or removals (`RemoveSubnetValidatorTx`). To be considered a permissionless network, or Avalanche Layer 1: - This `Owner` key must no longer have the ability to modify the validator set. - New transaction types must support modification of the validator set via Warp messages. The following new transaction types are introduced on the P-Chain to support this functionality: - `ConvertSubnetToL1Tx` - `RegisterL1ValidatorTx` - `SetL1ValidatorWeightTx` - `DisableL1ValidatorTx` - `IncreaseL1ValidatorBalanceTx` #### `ConvertSubnetToL1Tx` To convert a Subnet into an L1, a `ConvertSubnetToL1Tx` must be issued to set the `(chainID, address)` pair that will manage the L1's validator set. The `Owner` key defined in `CreateSubnetTx` must provide a signature to authorize this conversion. The `ConvertSubnetToL1Tx` specification is: ```go type PChainOwner struct { // The threshold number of `Addresses` that must provide a signature in order for // the `PChainOwner` to be considered valid. 
Threshold uint32 `json:"threshold"` // The 20-byte addresses that are allowed to sign to authenticate a `PChainOwner`. // Note: It is required for: // - len(Addresses) == 0 if `Threshold` is 0. // - len(Addresses) >= `Threshold` // - The values in Addresses to be sorted in ascending order. Addresses []ids.ShortID `json:"addresses"` } type L1Validator struct { // NodeID of this validator NodeID []byte `json:"nodeID"` // Weight of this validator used when sampling Weight uint64 `json:"weight"` // Initial balance for this validator Balance uint64 `json:"balance"` // [Signer] is the BLS public key and proof-of-possession for this validator. // Note: We do not enforce that the BLS key is unique across all validators. // This means that validators can share a key if they so choose. // However, a NodeID + L1 does uniquely map to a BLS key Signer signer.ProofOfPossession `json:"signer"` // Leftover $AVAX from the [Balance] will be issued to this // owner once it is removed from the validator set. RemainingBalanceOwner PChainOwner `json:"remainingBalanceOwner"` // The only owner allowed to disable this validator on the P-Chain. DisableOwner PChainOwner `json:"disableOwner"` } type ConvertSubnetToL1Tx struct { // Metadata, inputs and outputs BaseTx // ID of the Subnet to transform // Restrictions: // - Must not be the Primary Network ID Subnet ids.ID `json:"subnetID"` // BlockchainID where the validator manager lives ChainID ids.ID `json:"chainID"` // Address of the validator manager Address []byte `json:"address"` // Initial continuous-fee-paying validators for the L1 Validators []L1Validator `json:"validators"` // Authorizes this conversion SubnetAuth verify.Verifiable `json:"subnetAuthorization"` } ``` After this transaction is accepted, `CreateChainTx` and `AddSubnetValidatorTx` are disabled on the Subnet. The only action that the `Owner` key is able to take is removing Subnet validators with `RemoveSubnetValidatorTx` that had been added using `AddSubnetValidatorTx`. 
Unless removed by the `Owner` key, any Subnet validators added previously with an `AddSubnetValidatorTx` will continue to validate the Subnet until their [`End`](https://github.com/ava-labs/avalanchego/blob/a1721541754f8ee23502b456af86fea8c766352a/vms/platformvm/txs/validator.go#L27) time is reached. Once all Subnet validators added with `AddSubnetValidatorTx` are no longer in the validator set, the `Owner` key is powerless. `RegisterL1ValidatorTx` and `SetL1ValidatorWeightTx` must be used to manage the L1's validator set.

The `validationID` for validators added through `ConvertSubnetToL1Tx` is defined as the SHA256 hash of the 36 bytes resulting from concatenating the 32 byte `subnetID` with the 4 byte `validatorIndex` (index in the `Validators` array within the transaction).

Once this transaction is accepted, the P-Chain must be willing to sign a `SubnetToL1ConversionMessage` with a `conversionID` corresponding to `ConversionData` populated with the values from this transaction.

#### `RegisterL1ValidatorTx`

After a `ConvertSubnetToL1Tx` has been accepted, new validators can only be added by using a `RegisterL1ValidatorTx`. The specification of this transaction is:

```go
type RegisterL1ValidatorTx struct {
    // Metadata, inputs and outputs
    BaseTx
    // Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee.
    Balance uint64 `json:"balance"`
    // [Signer] is a BLS signature proving ownership of the BLS public key specified
    // below in `Message` for this validator.
    // Note: We do not enforce that the BLS key is unique across all validators.
    // This means that validators can share a key if they so choose.
    // However, a NodeID + L1 does uniquely map to a BLS key
    Signer [96]byte `json:"signer"`
    // A RegisterL1ValidatorMessage payload
    Message warp.Message `json:"message"`
}
```

The `validationID` of validators added via `RegisterL1ValidatorTx` is defined as the SHA256 hash of the `Payload` of the `AddressedCall` in `Message`.
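For validators added through a `ConvertSubnetToL1Tx`, the `validationID` derivation above can be sketched in a few lines of Python. This is illustrative only; it assumes a big-endian encoding for the 4-byte `validatorIndex` (consistent with the AvalancheGo codec), and the canonical derivation is the AvalancheGo implementation.

```python
import hashlib
import struct

def conversion_validation_id(subnet_id: bytes, validator_index: int) -> bytes:
    # SHA256 over the 32-byte subnetID concatenated with the 4-byte
    # index of the validator in the transaction's Validators array.
    assert len(subnet_id) == 32
    preimage = subnet_id + struct.pack(">I", validator_index)
    assert len(preimage) == 36
    return hashlib.sha256(preimage).digest()
```

Because the index is part of the preimage, every validator in the conversion receives a distinct `validationID` even if two entries share the same `nodeID` or BLS key.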
When a `RegisterL1ValidatorTx` is accepted on the P-Chain, the validator is added to the L1's validator set. A `minNonce` field corresponding to the `validationID` will be stored on addition to the validator set (initially set to `0`). This field will be used when validating the `SetL1ValidatorWeightTx` defined below.

This `validationID` will be used for replay protection. Used `validationID`s will be stored on the P-Chain. If a `RegisterL1ValidatorTx`'s `validationID` has already been used, the transaction will be considered invalid. To prevent storing an unbounded number of `validationID`s, the `expiry` of the `RegisterL1ValidatorMessage` is required to be no more than 24 hours after the time at which the transaction is issued on the P-Chain. Any `validationID`s corresponding to an expired timestamp can be flushed from the P-Chain's state.

L1s are responsible for defining the procedure on how to retrieve the above information from prospective validators. An EVM-compatible L1 may choose to implement this step like so:

- Use the number of tokens the user has staked into a smart contract on the L1 to determine the weight of their validator
- Require the user to submit an on-chain transaction with their validator information
- Generate the Warp message

For a `RegisterL1ValidatorTx` to be valid, `Signer` must be a valid proof-of-possession of the `blsPublicKey` defined in the `RegisterL1ValidatorMessage` contained in the transaction.

After a `RegisterL1ValidatorTx` is accepted, the P-Chain must be willing to sign an `L1ValidatorRegistrationMessage` for the given `validationID` with `registered` set to `true`. This remains the case until the time at which the validator is removed from the validator set using a `SetL1ValidatorWeightTx`, as described below.

When it is known that a given `validationID` _is not and never will be_ registered, the P-Chain must be willing to sign an `L1ValidatorRegistrationMessage` for the `validationID` with `registered` set to `false`.
This could be the case if the `expiry` time of the message has passed prior to the message being delivered in a `RegisterL1ValidatorTx`, or if the validator was successfully registered and then later removed. This enables the P-Chain to prove to validator managers that a validator has been removed or never added. The P-Chain must refuse to sign any `L1ValidatorRegistrationMessage` where the `validationID` does not correspond to an active validator and the `expiry` is in the future.

#### `SetL1ValidatorWeightTx`

`SetL1ValidatorWeightTx` is used to modify the voting weight of a validator. The specification of this transaction is:

```go
type SetL1ValidatorWeightTx struct {
    // Metadata, inputs and outputs
    BaseTx
    // An L1ValidatorWeightMessage payload
    Message warp.Message `json:"message"`
}
```

Applications of this transaction could include:

- Increase the voting weight of a validator if a delegation is made on the L1
- Increase the voting weight of a validator if the stake amount is increased (by staking rewards, for example)
- Decrease the voting weight of a misbehaving validator
- Remove an inactive validator

The validation criteria for `L1ValidatorWeightMessage` are:

- `nonce >= minNonce`. Note that `nonce` is not required to be incremented by `1` with each successive validator weight update.
- When `minNonce == MaxUint64`, `nonce` must be `MaxUint64` and `weight` must be `0`. This prevents L1s from being unable to remove `nodeID` in a subsequent transaction.
- If `weight == 0`, the validator being removed must not be the last one in the set. If all validators are removed, there are no valid Warp messages that can be produced to register new validators through `RegisterL1ValidatorMessage`. With no validators, block production will halt and the L1 is unrecoverable. This validation criterion serves as a guardrail against this situation. A future ACP can remove this guardrail as users get more familiar with the new L1 mechanics and tooling matures to fork an L1.
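The criteria above can be condensed into a small validity check. This sketch is illustrative only: `min_nonce` and `active_validators` are stand-ins for P-Chain state keyed by the message's `validationID`, and `MAX_UINT64` is 2**64 - 1.

```python
MAX_UINT64 = 2**64 - 1

def is_valid_weight_update(nonce: int, weight: int,
                           min_nonce: int, active_validators: int) -> bool:
    # The nonce must not be older than the stored minNonce.
    if nonce < min_nonce:
        return False
    # Once minNonce has reached MaxUint64, only a removal (weight == 0)
    # with nonce == MaxUint64 is allowed, so the validator can still be removed.
    if min_nonce == MAX_UINT64 and (nonce != MAX_UINT64 or weight != 0):
        return False
    # Guardrail: never remove the last validator of the L1.
    if weight == 0 and active_validators <= 1:
        return False
    return True
```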
When `weight != 0`, the weight of the validator is updated to `weight` and `minNonce` is updated to `nonce + 1`.

When `weight == 0`, the validator is removed from the validator set. All state related to the validator, including the `minNonce` and `validationID`, is reaped from the P-Chain state. Tracking these post-removal is not required since `validationID` can never be re-initialized due to the replay protection provided by `expiry` in `RegisterL1ValidatorTx`. Any unspent $AVAX in the validator's `Balance` will be issued in a single UTXO to the `RemainingBalanceOwner` for this validator. Recall that `RemainingBalanceOwner` is specified when the validator is first added to the L1's validator set (in either `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`).

Note: There is no explicit `EndTime` for L1 validators added in a `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`. The only time L1 validators are removed from the L1's validator set is through this transaction when `weight == 0`.

#### `DisableL1ValidatorTx`

L1 validators can use `DisableL1ValidatorTx` to mark their validator as inactive. The specification of this transaction is:

```go
type DisableL1ValidatorTx struct {
    // Metadata, inputs and outputs
    BaseTx
    // ID corresponding to the validator
    ValidationID ids.ID `json:"validationID"`
    // Authorizes this validator to be disabled
    DisableAuth verify.Verifiable `json:"disableAuthorization"`
}
```

The `DisableOwner` specified for this validator must sign the transaction. Any unspent $AVAX in the validator's `Balance` will be issued in a single UTXO to the `RemainingBalanceOwner` for this validator. Recall that both `DisableOwner` and `RemainingBalanceOwner` are specified when the validator is first added to the L1's validator set (in either `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`).

For full removal from an L1's validator set, a `SetL1ValidatorWeightTx` must be issued with weight `0`.
To do so, a Warp message is required from the L1's validator manager. However, supporting the ability to claim the unspent `Balance` for a validator without the validator manager's authorization is critical for failed L1s.

Note that this does not modify an L1's total staking weight. This transaction marks the validator as inactive, but does not remove it from the L1's validator set. Inactive validators can re-activate at any time by increasing their balance with an `IncreaseL1ValidatorBalanceTx`.

L1 creators should be aware that there is no notion of `MinStakeDuration` that is enforced by the P-Chain. It is expected that L1s that choose to enforce a `MinStakeDuration` will lock the validator's stake for the L1's desired `MinStakeDuration`.

#### `IncreaseL1ValidatorBalanceTx`

L1 validators are required to maintain a non-zero balance used to pay the continuous fee on the P-Chain in order to be considered active. The `IncreaseL1ValidatorBalanceTx` can be used by anybody to add additional $AVAX to the `Balance` of a validator. The specification of this transaction is:

```go
type IncreaseL1ValidatorBalanceTx struct {
    // Metadata, inputs and outputs
    BaseTx
    // ID corresponding to the validator
    ValidationID ids.ID `json:"validationID"`
    // Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee
    Balance uint64 `json:"balance"`
}
```

If the validator corresponding to `ValidationID` is currently inactive (`Balance` was exhausted or `DisableL1ValidatorTx` was issued), this transaction will move them back to the active validator set.

Note: The $AVAX added to `Balance` can be claimed at any time by the validator using `DisableL1ValidatorTx`.

### Bootstrapping L1 Nodes

Bootstrapping a node/validator is the process of securely recreating the latest state of the blockchain locally. At the end of this process, the local state of a node/validator must be in sync with the local state of other virtuous nodes/validators.
The node/validator can then verify new incoming transactions and reach consensus with other nodes/validators.

To bootstrap a node/validator, a few critical questions must be answered: How does one discover peers in the network? How does one determine that a discovered peer is honestly participating in the network?

For standalone networks like the Avalanche Primary Network, this is done by connecting to a hardcoded [set](https://github.com/ava-labs/avalanchego/blob/master/genesis/bootstrappers.json) of trusted bootstrappers to then discover new peers. Ethereum calls its set [bootnodes](https://ethereum.org/developers/docs/nodes-and-clients/bootnodes).

Since L1 validators are not required to be Primary Network validators, a list of validator IPs to connect to (the functional bootstrappers of the L1) cannot be provided by simply connecting to the Primary Network validators. However, the Primary Network can enable nodes tracking an L1 to seamlessly connect to the validators by tracking and gossiping L1 validator IPs. L1s will not need to operate and maintain a set of bootstrappers and can rely on the Primary Network for peer discovery.

### Sidebar: L1 Sovereignty

After this ACP is activated, the P-Chain will no longer support staking of any assets other than $AVAX for the Primary Network. The P-Chain will not support the distribution of staking rewards for L1s. All staking-related operations for L1 validation must be managed by the L1's validator manager. The P-Chain simply requires a continuous fee per validator. If an L1 would like to manage its validators' balances on the P-Chain, it can cover the cost for all L1 validators by posting the $AVAX balance on the P-Chain. L1s can implement any mechanism they want to pay the continuous fee charged by the P-Chain for its participants.

The L1 has full ownership over its validator set, not the P-Chain. There are no restrictions on what requirements an L1 can have for validators to join.
Any stake that is required to join the L1's validator set is not locked on the P-Chain. If a validator is removed from the L1's validator set via a `SetL1ValidatorWeightTx` with weight `0`, the stake will continue to be locked outside of the P-Chain. How each L1 handles stake associated with the validator is entirely left up to the L1 and can be treated independently of what happens on the P-Chain.

The relationship between the P-Chain and L1s provides a dynamic where L1s can use the P-Chain as an impartial judge to modify parameters (in addition to its existing role of helping to validate incoming Avalanche Warp Messages). If a validator is misbehaving, the L1 validators can collectively generate a BLS multisig to reduce its voting weight. This operation is fully secured by the Avalanche Primary Network (225M $AVAX, or $8.325B at the time of writing).

Follow-up ACPs could extend the P-Chain <-> L1 relationship to include parametrization of the 67% threshold to enable L1s to choose a different threshold based on their security model (e.g. a simple majority of 51%).

### Continuous Fee Mechanism

Every additional validator on the P-Chain adds persistent load to the Avalanche Network. When a validator transaction is issued on the P-Chain, it is charged for the computational cost of the transaction itself but is not charged for the cost of an active validator over the time they are validating on the network (which may be indefinitely). This is a common problem in blockchains, spawning many state rent proposals in the broader blockchain space to address it. The following fee mechanism takes advantage of the fact that each L1 validator uses the same amount of computation and charges each L1 validator the dynamic base fee for every discrete unit of time it is active.

To charge each L1 validator, the notion of a `Balance` is introduced.
The `Balance` of a validator will be continuously charged during the time they are active to cover the cost of storing the associated validator properties (BLS key, weight, nonce) in memory and to track IPs (in addition to other services provided by the Primary Network). This `Balance` is initialized with the `RegisterL1ValidatorTx` that added them to the active validator set. `Balance` can be increased at any time using the `IncreaseL1ValidatorBalanceTx`. When this `Balance` reaches `0`, the validator will be considered "inactive" and will no longer participate in validating the L1. Inactive validators can be moved back to the active validator set at any time using the same `IncreaseL1ValidatorBalanceTx`. Once a validator is considered inactive, the P-Chain will remove these properties from memory and only retain them on disk. All messages from that validator will be considered invalid until it is revived using the `IncreaseL1ValidatorBalanceTx`. L1s can reduce the amount of inactive weight by removing inactive validators with the `SetL1ValidatorWeightTx` (`Weight` = 0).

Since each L1 validator is charged the same amount at each point in time, tracking the fees for the entire validator set is straightforward. The accumulated dynamic base fee for the entire network is tracked in a single uint. This accumulated value should be equal to the fee charged if a validator was active from the time the accumulator was instantiated. The validator set is maintained in a priority queue. A pseudocode implementation of the continuous fee mechanism is provided below.

```python
# Pseudocode
class ValidatorQueue:
    def __init__(self, fee_getter):
        self.acc = 0
        self.queue = PriorityQueue()
        self.fee_getter = fee_getter

    # At each time period, increment the accumulator and
    # pop all validators from the top of the queue that
    # ran out of funds.
    # Note: The amount of work done in a single block
    # should be bounded to prevent a large number of
    # validator operations from happening at the same
    # time.
    def time_elapse(self, t):
        self.acc = self.acc + self.fee_getter(t)
        while True:
            vdr = self.queue.peek()
            if vdr.balance < self.acc:
                self.queue.pop()
                continue
            return

    # Validator was added
    def validator_enter(self, vdr):
        vdr.balance = vdr.balance + self.acc
        self.queue.add(vdr)

    # Validator was removed
    def validator_remove(self, vdrNodeID):
        # find_and_remove pops the validator from the queue
        vdr = find_and_remove(self.queue, vdrNodeID)
        vdr.balance = vdr.balance - self.acc
        vdr.refund() # Refund [vdr.balance] to [RemainingBalanceOwner]

    # Validator's balance was topped up
    def validator_increase(self, vdrNodeID, balance):
        vdr = find_and_remove(self.queue, vdrNodeID)
        vdr.balance = vdr.balance + balance
        self.queue.add(vdr)
```

#### Fee Algorithm

[ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) proposes a dynamic fee mechanism for transactions on the P-Chain. This mechanism is repurposed, with minor modifications, for the active L1 validator continuous fee.

At activation, the number of excess active L1 validators $x$ is set to `0`.
The fee rate per second for an active L1 validator is:

$$M \cdot \exp\left(\frac{x}{K}\right)$$

Where:

- $M$ is the minimum price for an active L1 validator
- $\exp\left(x\right)$ is an approximation of $e^x$ following the EIP-4844 specification

```python
# Approximates factor * e ** (numerator / denominator) using Taylor expansion
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator
```

- $K$ is a constant to control the rate of change for the L1 validator price

After every second, $x$ will be updated:

$$x = \max(x + (V - T), 0)$$

Where:

- $V$ is the number of active L1 validators
- $T$ is the target number of active L1 validators

Whenever $x$ increases by $K$, the price per active L1 validator increases by a factor of `~2.7`. If the price per active L1 validator gets too expensive, some active L1 validators will exit the active validator set, decreasing $x$ and dropping the price. The price per active L1 validator constantly adjusts to make sure that, on average, the P-Chain has no more than $T$ active L1 validators.

#### Block Processing

Before processing the transactions inside a block, all validators that no longer have a sufficient (non-zero) balance are deactivated. After processing the transactions inside a block, all validators that do not have a sufficient balance for the next second are deactivated.

##### Block Timestamp Validity Change

To ensure that validators are charged accurately, blocks are only considered valid if advancing the chain time would not cause a validator to have a negative balance. This upholds the expectation that the number of L1 validators remains constant between blocks.
The block building protocol is modified to account for this change by first checking if the wall clock time removes any validator due to a lack of funds. If the wall clock time does not remove any L1 validators, the wall clock time is used to build the block. If it does, the time at which the first validator gets removed is used.

##### Fee Calculation

The total validator fee assessed over $\Delta t$ is:

```python
# Calculate the fee to charge over Δt
def cost_over_time(V: int, T: int, x: int, Δt: int) -> int:
    cost = 0
    for _ in range(Δt):
        x = max(x + V - T, 0)
        cost += fake_exponential(M, x, K)
    return cost
```

#### Parameters

The parameters at activation are:

| Parameter | Definition | Value |
| --------- | ------------------------------------------- | ------------- |
| $T$ | target number of validators | 10_000 |
| $C$ | capacity number of validators | 20_000 |
| $M$ | minimum fee rate | 512 nAVAX/s |
| $K$ | constant to control the rate of fee changes | 1_246_488_515 |

An $M$ of 512 nAVAX/s equates to ~1.33 AVAX/month to run an L1 validator, so long as the total number of continuous-fee-paying L1 validators stays at or below $T$. $K$ was chosen to set the maximum fee doubling rate to ~24 hours. This is in the extreme case that the network has $C$ validators for prolonged periods of time; if the network has $T$+1 validators, for example, the fee rate would double every ~27 years. A future ACP can adjust the parameters to increase $T$, reduce $M$, and/or modify $K$.

#### User Experience

L1 validators are continuously charged a fee, albeit a small one. This poses a challenge for L1 validators: How do they maintain the balance over time?

Node clients should expose an API to track how much balance is remaining in the validator's account. This will provide a way for L1 validators to track how quickly the balance is decreasing and top up when needed.

A nice byproduct of the above design is that the balance in the validator's account is claimable.
This means users can top up as much $AVAX as they want and rest assured, knowing they can always retrieve it if there is an excessive amount.

The expectation is that most users will not interact with node clients or track when or how much they need to top up their validator account. Wallet providers will abstract away most of this process. For users who desire more convenience, L1-as-a-Service providers will abstract away all of it.

## Backwards Compatibility

This new design for Subnets proposes a large rework of all L1-related mechanics. Rollout should be done on a going-forward basis to avoid any service disruption for live Subnets. All current Subnet validators will be able to continue validating both the Primary Network and whatever Subnets they are validating. Any state execution changes must be coordinated through a mandatory upgrade. Implementors must take care to continue to verify the existing ruleset until the upgrade is activated. After activation, nodes should verify the new ruleset. Implementors must take care to only verify the presence of 2000 $AVAX prior to activation.

### Deactivated Transactions

- P-Chain
  - `TransformSubnetTx`

After this ACP is activated, Elastic Subnets will be disabled. `TransformSubnetTx` will not be accepted post-activation. As there are no Mainnet Elastic Subnets, there should be no production impact from this deactivation.

### New Transactions

- P-Chain
  - `ConvertSubnetToL1Tx`
  - `RegisterL1ValidatorTx`
  - `SetL1ValidatorWeightTx`
  - `DisableL1ValidatorTx`
  - `IncreaseL1ValidatorBalanceTx`

## Reference Implementation

ACP-77 was implemented and will be merged into AvalancheGo behind the `Etna` upgrade flag. The full body of work can be found tagged with the `acp77` label [here](https://github.com/ava-labs/avalanchego/issues?q=sort%3Aupdated-desc+label%3Aacp77). Since Etna is not yet activated, all new transactions introduced in ACP-77 will be rejected by AvalancheGo.
If any modifications are made to ACP-77 as part of the ACP process, the implementation must be updated prior to activation.

## Security Considerations

This ACP introduces Avalanche Layer 1s, a new network type that costs significantly less than Avalanche Subnets. This can lead to a large increase in the number of networks and, by extension, the number of validators. Each additional validator adds consistent RAM usage to the P-Chain. However, this should be appropriately metered by the continuous fee mechanism outlined above.

With the sovereignty L1s have from the P-Chain, L1 staking tokens are not locked on the P-Chain. This poses a security consideration for L1 validators: malicious chains can choose to remove validators at will and take any funds that the validator has locked on the L1. The P-Chain only provides the guarantee that L1 validators can retrieve the remaining $AVAX Balance for their validator via a `DisableL1ValidatorTx`. Any assets on the L1 are entirely under the purview of the L1. The onus is on L1 validators to vet the L1's security for any assets transferred onto the L1.

With a long window of expiry (24 hours) for the Warp message in `RegisterL1ValidatorTx`, spam of validator registrations could lead to high memory pressure on the P-Chain. A future ACP can reduce the window of expiry if 24 hours proves to be a problem.

NodeIDs can be added to an L1's validator set involuntarily. However, it is important to note that any stake/rewards are _not_ at risk. A node operator who was added to a validator set involuntarily would only need to generate a new NodeID via key rotation, as there is no lock-up of any stake to create a NodeID. This is an explicit tradeoff for easier on-boarding of NodeIDs. This mirrors the Primary Network validators' guarantee of no stake/rewards at risk.

The continuous fee mechanism outlined above does not apply to inactive L1 validators since they are not stored in memory.
However, inactive L1 validators are persisted on disk, which can lead to persistent P-Chain state growth. A future ACP can introduce a mechanism to decrease the rate of P-Chain state growth or provide a state expiry path to reduce the amount of P-Chain state.

## Acknowledgements

Special thanks to [@StephenButtolph](https://github.com/StephenButtolph), [@aaronbuchwald](https://github.com/aaronbuchwald), and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas. Thank you to the broader Ava Labs Platform Engineering Group for their feedback on this ACP prior to publication.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-83: Dynamic Multidimensional Fees (/docs/acps/83-dynamic-multidimensional-fees)

---
title: "ACP-83: Dynamic Multidimensional Fees"
description: "Details for Avalanche Community Proposal 83: Dynamic Multidimensional Fees"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/83-dynamic-multidimensional-fees/README.md
---

| ACP | 83 |
| :--- | :--- |
| **Title** | Dynamic multidimensional fees for P-chain and X-chain |
| **Author(s)** | Alberto Benegiamo ([@abi87](https://github.com/abi87)) |
| **Status** | Stale |
| **Track** | Standards |
| **Superseded-By** | [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) |

## Abstract

Introduce a dynamic and multidimensional fee scheme for the P-Chain and X-Chain. Dynamic fees help preserve the stability of the chain by providing a feedback mechanism that increases the cost of resources when the network operates above its target utilization. Multidimensional fees ensure that high demand for orthogonal resources does not drive up the price of underutilized resources. For example, networks provide and consume orthogonal resources including, but not limited to, bandwidth, chain state, read/write throughput, and CPU.
By independently metering each resource, they can be granularly priced and stay closer to optimal resource utilization.

## Motivation

The P-Chain and X-Chain currently have fixed fees, and in some cases those fees are fixed at zero. This makes transaction issuance predictable, but does not provide a feedback mechanism to preserve chain stability under high load. In contrast, the C-Chain, which has the highest and most regular load among the chains on the Primary Network, already supports dynamic fees. This ACP proposes to introduce a similar dynamic fee mechanism for the P-Chain and X-Chain to further improve the Primary Network's stability and resilience under load. However, unlike the C-Chain, we propose a multidimensional fee scheme with an exponential update rule for each fee dimension. The [HyperSDK](https://github.com/ava-labs/hypersdk) already utilizes a multidimensional fee scheme with optional priority fees, and its efficiency is backed by [academic research](https://arxiv.org/abs/2208.07919).

Finally, we split the fee into two parts: a `base fee` and a `priority fee`. The `base fee` is calculated by the network each block to accurately price each resource at a given point in time. Any amount burnt beyond the base fee is treated as the `priority fee`, which buys faster transaction inclusion.

## Specification

We introduce the multidimensional scheme first and then show how to apply the dynamic fee update rule for each fee dimension. Finally, we list the new block verification rules, valid once the new fee scheme activates.

### Multidimensional scheme components

We define four fee dimensions, `Bandwidth`, `Reads`, `Writes`, and `Compute`, to describe transaction complexity. In more detail:

- `Bandwidth` measures the transaction size in bytes, as encoded by the AvalancheGo codec. Byte length is a proxy for the network resources needed to disseminate the transaction.
- `Reads` measures the number of DB reads needed to verify the transaction.
DB reads include UTXO reads and any other state quantity relevant to the specific transaction.

- `Writes` measures the number of DB writes following transaction verification. DB writes include UTXOs generated as outputs of the transaction and any other state quantity relevant to the specific transaction.
- `Compute` measures the number of signatures to be verified, including UTXO ones and those related to authorization of specific operations.

For each fee dimension $i$, we define:

- *fee rate* $r_i$ as the price, denominated in AVAX, to be paid for a transaction with complexity $u_i$ along the fee dimension $i$.
- *base fee* as the minimal fee needed to accept a transaction. The base fee is given by the formula

$$base \ fee = \sum_{i=0}^3 r_i \times u_i$$

- *priority fee* as an optional fee paid on top of the base fee to speed up the transaction's inclusion in a block.

### Dynamic scheme components

Fee rates are updated over time, to allow fees to increase when the network is getting congested. Each new block is a potential source of congestion, as its transactions carry complexity that each validator must process to verify and eventually accept the block. The more complexity a block carries, and the more rapidly blocks are produced, the higher the congestion. We seek a scheme that rapidly increases the fees when block complexity goes above a defined threshold and that equally rapidly decreases the fees once complexity goes down (because blocks carry fewer/simpler transactions, or because they are produced more slowly).

We define the desired threshold as a *target complexity rate* $T$: we would want to process, every second, a block whose complexity is $T$. Any complexity beyond that causes some congestion that we want to penalize via fees. In order to update fee rates we track, for each block and each fee dimension, a parameter called the cumulative excess complexity.
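As a quick illustration of the base-fee formula, here is a minimal Python sketch; the dimension rates and the transaction's complexity vector are made-up placeholder numbers, not actual network parameters:

```python
# Sketch of the multidimensional base fee from the spec:
#   base fee = sum over dimensions i of r_i * u_i
# All numbers below are illustrative placeholders, not real Avalanche values.

DIMENSIONS = ("bandwidth", "reads", "writes", "compute")

def base_fee(fee_rates: dict, complexity: dict) -> float:
    """Sum of fee rate times complexity across the four fee dimensions."""
    return sum(fee_rates[d] * complexity[d] for d in DIMENSIONS)

# A hypothetical transaction: 200 bytes, 3 reads, 2 writes, 1 signature.
rates = {"bandwidth": 0.001, "reads": 0.01, "writes": 0.02, "compute": 0.005}
tx = {"bandwidth": 200, "reads": 3, "writes": 2, "compute": 1}
fee = base_fee(rates, tx)  # 0.2 + 0.03 + 0.04 + 0.005 = 0.275
```

Any amount a user burns beyond this value would count as the priority fee.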
Fee rates applied to a block will be defined in terms of the cumulative excess complexity, as we show in the following.

Suppose that a block $B_t$ is the current chain tip. $B_t$ has the following features:

- $t$ is its timestamp.
- $\Delta C_t$ is its cumulative excess complexity along fee dimension $i$.

Say a new block $B_{t + \Delta T}$ is built on top of $B_t$, with the following features:

- $t + \Delta T$ is its timestamp.
- $C_{t + \Delta T}$ is its complexity along fee dimension $i$.

Then the fee rate $r_{t + \Delta T}$ applied to the block $B_{t + \Delta T}$ along dimension $i$ will be:

$$ r_{t + \Delta T} = r^{min} \times e^{\frac{max(0, \Delta C_t - T \times \Delta T)}{Denom}} $$

where

- $r^{min}$ is the minimal fee rate along fee dimension $i$,
- $T$ is the target complexity rate along fee dimension $i$,
- $Denom$ is a normalization constant for fee dimension $i$.

Moreover, once the block $B_{t + \Delta T}$ is accepted, the cumulative excess complexity is updated as follows:

$$\Delta C_{t + \Delta T} = max\large(0, \Delta C_{t} - T \times \Delta T\large) + C_{t + \Delta T}$$

The fee rate update formula guarantees that fee rates increase if incoming blocks are complex (large $C_{t + \Delta T}$) and if blocks are emitted rapidly (small $\Delta T$). Symmetrically, fee rates decrease towards the minimum if incoming blocks are less complex and if blocks are produced less frequently.

The update formula has a few parameters to be tuned, independently, for each fee dimension. We defer the discussion of tuning to the [implementation section](#tuning-the-update-formula).

## Block verification rules

Upon activation of the dynamic multidimensional fee scheme, we modify block processing as follows:

- **Bound block complexity**. For each fee dimension $i$, we define a *maximal block complexity* $Max$. A block is only valid if its complexity $C$ satisfies $C \leq Max$.
- **Verify transaction fee**.
When verifying each transaction in a block, we confirm that it can cover its own base fee. Note that both the base fee and any optional priority fee are burned.

## User Experience

### How will wallets estimate the fees?

AvalancheGo nodes will provide new APIs exposing the current and expected fee rates, as they are likely to change block by block. Wallets can then use the fee rates to select UTXOs to pay the transaction fees. Moreover, the AvalancheGo implementation proposed above offers a `fees.Calculator` struct that can be reused by wallets and downstream projects to calculate fees.

### How will wallets be able to re-issue Txs at a higher fee?

Wallets should be able to simply re-issue the transaction, since the current AvalancheGo implementation drops mempool transactions whose fee rate is lower than the current one. More specifically, a transaction may be valid the moment it enters the mempool, and it won't be re-verified as long as it stays there. However, as soon as the transaction is selected to be included in the next block, it is re-verified against the latest preferred tip. If its fee is not sufficient by this time, the transaction is dropped and the wallet can simply re-issue it at a higher fee, or wait for the fee rate to go down. Note that priority fees offer some buffer against an increase in the fee rate: a transaction paying just the base fee will be evicted from the mempool in the face of a fee rate increase, while a transaction paying some extra priority fee may have enough buffer room to stay valid after some amount of fee increase.

### How do priority fees guarantee faster block inclusion?

The AvalancheGo mempool will be restructured to order transactions by priority fee. Transactions paying higher priority fees will be selected for block inclusion first, without violating any spend dependency.

## Backwards Compatibility

Modifying the fee scheme for the P-Chain and X-Chain requires a mandatory upgrade for activation.
Moreover, wallets must be modified to properly handle the new fee scheme once activated.

## Reference Implementation

The implementation is split across multiple PRs:

- P-Chain work is tracked in this issue: [https://github.com/ava-labs/avalanchego/issues/2707](https://github.com/ava-labs/avalanchego/issues/2707)
- X-Chain work is tracked in this issue: [https://github.com/ava-labs/avalanchego/issues/2708](https://github.com/ava-labs/avalanchego/issues/2708)

A very important implementation step is tuning the update formula parameters for each chain and each fee dimension. We show here the principles we followed for tuning, and a simulation based on historical data.

### Tuning the update formula

The basic idea is to measure the complexity of blocks already accepted and derive the parameters from it. You can find the historical data in [this repo](https://github.com/abi87/complexities). To simplify the exposition I am purposefully ignoring chain specifics (like P-Chain proposal blocks). We can account for chain specifics while processing the historical data. Here are the principles:

- **Target block complexity rate $T$**: calculate the distribution of block complexity and pick a high enough quantile.
- **Max block complexity $Max$**: this is probably the trickiest parameter to set. Historically we had [pretty big transactions](https://subnets.avax.network/p-chain/tx/27pjHPRCvd3zaoQUYMesqtkVfZ188uP93zetNSqk3kSH1WjED1) (more than 1,000 referenced UTXOs). Setting a max block complexity so high that these big transactions are allowed is akin to setting no complexity cap. On the other side, we still want to allow, even encourage, UTXO consolidation, so we may want to allow transactions [like this](https://subnets.avax.network/p-chain/tx/2LxyHzbi2AGJ4GAcHXth6pj5DwVLWeVmog2SAfh4WrqSBdENhV).
A principled way to set max block complexity may be the following:
  - calculate the target block complexity rate (see previous point);
  - calculate the median time elapsed between consecutive blocks;
  - the product of these two quantities should give us something like a target block complexity;
  - set the max block complexity to, say, $\times 50$ the target value.
- **Normalization coefficient $Denom$**: I suggest we size it as follows:
  - find the largest historical peak, i.e. the sequence of consecutive blocks which contained the most complexity in the shortest period of time;
  - tune $Denom$ so that it would cause a $\times 10000$ increase in the fee rate for such a peak. This increase would push fees from the milliAVAX we normally pay under stable network conditions up to tens of AVAX.
- **Minimal fee rates $r^{min}$**: we could size them so that transaction fees do not change very much with respect to the currently fixed values.

We simulate below how the update formula would behave on a peak period from Avalanche mainnet.
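To make the dynamics concrete, here is a hedged Python sketch of the exponential update rule for a single fee dimension; `R_MIN`, `TARGET`, and `DENOM` are arbitrary placeholder values, not the tuned parameters discussed above:

```python
import math

# Per-dimension exponential fee-rate update from the spec:
#   r = r_min * exp(max(0, excess - T * dt) / Denom)
#   excess' = max(0, excess - T * dt) + block_complexity
# Placeholder parameters; real values come from the tuning process above.
R_MIN = 1.0     # minimal fee rate r^min
TARGET = 100.0  # target complexity rate T (complexity units per second)
DENOM = 500.0   # normalization constant Denom

def fee_rate(excess: float, dt: float) -> float:
    """Fee rate applied to a block built dt seconds after the current tip."""
    return R_MIN * math.exp(max(0.0, excess - TARGET * dt) / DENOM)

def update_excess(excess: float, dt: float, block_complexity: float) -> float:
    """Cumulative excess complexity after accepting the new block."""
    return max(0.0, excess - TARGET * dt) + block_complexity

# A sustained peak (blocks at 3x the target rate, one per second) drives the
# rate up; quiet blocks afterwards drain the excess and the rate returns to R_MIN.
excess = 0.0
for _ in range(30):
    excess = update_excess(excess, 1.0, 300.0)
peak_rate = fee_rate(excess, 1.0)
for _ in range(120):
    excess = update_excess(excess, 1.0, 0.0)
calm_rate = fee_rate(excess, 1.0)
```

This mirrors the behavior shown in the simulation below: fees spike during the peak and fall back to the minimum once complexity returns to target.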


Figure 1 shows a peak period, starting with block [wqKJcvEv86TBpmJY2pAY7X65hzqJr3VnHriGh4oiAktWx5qT1](https://subnets.avax.network/p-chain/block/wqKJcvEv86TBpmJY2pAY7X65hzqJr3VnHriGh4oiAktWx5qT1) and going for roughly 30 blocks. We only show `Bandwidth` for clarity, but the other fee dimensions have similar behaviour. The network load is much larger than target, and sustained. Figure 2 shows the fee dynamics in response to the peak: fees scale up from a few milliAVAX to around 25 AVAX. Moreover, as soon as the peak is over and complexity goes back to the target value, fees are reduced very rapidly.

## Security Considerations

The new fee scheme is expected to help network stability as it offers economic incentives for users to hold off on transaction issuance in times of high load. While fees are expected to remain generally low when the system is not loaded, a sudden load increase, with fuller blocks, would push the dynamic fee algorithm to increase fee rates. The increase is expected to continue until the load is reduced. Load reduction happens both by dropping unconfirmed transactions whose fee rate is no longer sufficient and by pushing users who optimize their transaction costs to delay transaction issuance until the fee rate goes down to an acceptable level. Note finally that the exponential fee update mechanism detailed above is [proven](https://ethresear.ch/t/multidimensional-eip-1559/11651) to be robust against strategic behavior by users who delay transaction issuance and then suddenly push a bulk of transactions once the fee rate is low enough.

## Acknowledgements

Thanks to @StephenButtolph, @patrick-ogrady, and @dhrubabasu for their feedback on these ideas.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-84: Table Preamble (/docs/acps/84-table-preamble)

---
title: "ACP-84: Table Preamble"
description: "Details for Avalanche Community Proposal 84: Table Preamble"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/84-table-preamble/README.md
---

| ACP | 84 |
| :--- | :--- |
| **Title** | Table Preamble for ACPs |
| **Author(s)** | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)) |
| **Status** | Activated |
| **Track** | Meta |

## Abstract

The current ACP template features a plain-text code block containing "RFC 822 style headers" as its `Preamble` (see [What belongs in a successful ACP?](https://github.com/avalanche-foundation/ACPs?tab=readme-ov-file#what-belongs-in-a-successful-acp)). This header includes multiple links to discussions, authors, and other ACPs. This ACP proposes to replace the `Preamble` code block with a Markdown table format (similar to what is used in [Ethereum EIPs](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md)).

## Motivation

The current ACP `Preamble` is (i) not very readable and (ii) not user-friendly, as links are not clickable. The proposed table format aims to fix these issues.
## Specification

The following Markdown table format is proposed:

| ACP | PR Number |
| :--- | :--- |
| **Title** | ACP title |
| **Author(s)** | A list of the author's name(s) and optionally contact info: FirstName LastName ([@GitHubUsername](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) or [email@address.com](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md)) |
| **Status** | Proposed, Implementable, Activated, Stale ([Discussion](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md)) |
| **Track** | Standards, Best Practices, Meta, Subnet |
| **Replaces (\*optional)** | [ACP-XX](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) |
| **Superseded-By (\*optional)** | [ACP-XX](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) |

It features all the existing fields of the current ACP template, and would replace the current `Preamble` code block in [ACPs/TEMPLATE.md](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/TEMPLATE.md).

## Backwards Compatibility

Existing ACPs could be updated to use the new table format, but it is not mandatory.
## Reference Implementation

For this ACP, the table would look like this:

| ACP | 84 |
| :--- | :--- |
| **Title** | Table Preamble for ACPs |
| **Author(s)** | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/86)) |
| **Track** | Meta |

## Security Considerations

NA

## Open Questions

NA

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-99: Validatorsetmanager Contract (/docs/acps/99-validatorsetmanager-contract)

---
title: "ACP-99: Validatorsetmanager Contract"
description: "Details for Avalanche Community Proposal 99: Validatorsetmanager Contract"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/99-validatorsetmanager-contract/README.md
---

| ACP | 99 |
| :--- | :--- |
| Title | Validator Manager Solidity Standard |
| Author(s) | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)), Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) |
| Status | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/165)) |
| Track | Best Practices |
| Dependencies | [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) |

## Abstract

Define a standard Validator Manager Solidity smart contract to be deployed on any Avalanche EVM chain. This ACP relies on concepts introduced in [ACP-77 (Reinventing Subnets)](https://github.com/ava-labs/ACPs/tree/main/ACPs/77-reinventing-subnets). It depends on ACP-77 being marked as `Implementable`.
## Motivation

[ACP-77 (Reinventing Subnets)](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets) opens the door to managing an L1 validator set (stored on the P-Chain) from any chain on the Avalanche Network. The P-Chain allows a Subnet to specify a "validator manager" when it is converted to an L1 using `ConvertSubnetToL1Tx`. This `(blockchainID, address)` pair is responsible for sending the ICM messages contained within `RegisterL1ValidatorTx` and `SetL1ValidatorWeightTx` on the P-Chain. This enables an on-chain program to add, modify the weight of, and remove validators.

On each validator set change, the P-Chain is willing to sign an `AddressedCall` to notify any on-chain program tracking the validator set. On-chain programs must be able to interpret this message so they can trigger the appropriate action. The two kinds of `AddressedCall`s [defined in ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#p-chain-warp-message-payloads) are `L1ValidatorRegistrationMessage` and `L1ValidatorWeightMessage`.

Given these assumptions, and the fact that most of the active blockchains on Avalanche Mainnet are EVM-based, we propose `ACP99Manager` as the standard Solidity contract specification that can:

1. Hold relevant information about the current L1 validator set
2. Send validator set updates to the P-Chain by generating the `AddressedCall`s defined in ACP-77
3. Correctly update the validator set by interpreting notification messages received from the P-Chain
4. Be easily integrated into validator manager implementations that utilize various security models (e.g. Proof-of-Stake)

Having an audited and open-source reference implementation freely available will contribute to lowering the cost of launching L1s on Avalanche.
Once deployed, the `ACP99Manager` implementation contract can be used as the `Address` in the [`ConvertSubnetToL1Tx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#convertsubnettol1tx).

## Specification

> **Note:** The naming convention followed for the interfaces and contracts is inspired by the way [OpenZeppelin Contracts](https://docs.openzeppelin.com/contracts/5.x/) are named after ERC standards, using `ACP` instead of `ERC`.

### Type Definitions

The following type definitions are used in the function signatures described in [Contract Specification](#contract-specification):

```solidity
/**
 * @notice Description of the conversion data used to convert
 * a subnet to an L1 on the P-Chain.
 * This data is the pre-image of a hash that is authenticated by the P-Chain
 * and verified by the Validator Manager.
 */
struct ConversionData {
    bytes32 subnetID;
    bytes32 validatorManagerBlockchainID;
    address validatorManagerAddress;
    InitialValidator[] initialValidators;
}

/// @notice Specifies an initial validator, used in the conversion data.
struct InitialValidator {
    bytes nodeID;
    bytes blsPublicKey;
    uint64 weight;
}

/// @notice L1 validator status.
enum ValidatorStatus {
    Unknown,
    PendingAdded,
    Active,
    PendingRemoved,
    Completed,
    Invalidated
}

/**
 * @notice Specifies the owner of a validator's remaining balance or disable owner on the P-Chain.
 * P-Chain addresses are also 20 bytes, so we use the address type to represent them.
 */
struct PChainOwner {
    uint32 threshold;
    address[] addresses;
}

/**
 * @notice Contains the active state of a Validator.
 * @param status The validator status.
 * @param nodeID The NodeID of the validator.
 * @param startingWeight The weight of the validator at the time of registration.
 * @param sentNonce The current weight update nonce sent by the manager.
 * @param receivedNonce The highest nonce received from the P-Chain.
 * @param weight The current weight of the validator.
 * @param startTime The start time of the validator.
 * @param endTime The end time of the validator.
 */
struct Validator {
    ValidatorStatus status;
    bytes nodeID;
    uint64 startingWeight;
    uint64 sentNonce;
    uint64 receivedNonce;
    uint64 weight;
    uint64 startTime;
    uint64 endTime;
}
```

#### About `Validator`s

A `Validator` represents the continuous time frame during which a node is part of the validator set. Each `Validator` is identified by its `validationID`. If a validator was added as part of the initial set of continuous dynamic fee paying validators, its `validationID` is the SHA256 hash of the 36 bytes resulting from concatenating the 32-byte `ConvertSubnetToL1Tx` transaction ID and the 4-byte index of the initial validator within the transaction. If a validator was added to the L1's validator set post-conversion, its `validationID` is the SHA256 of the payload of the `AddressedCall` in the `RegisterL1ValidatorTx` used to add it, as defined in ACP-77.

### Contract Specification

The standard `ACP99Manager` functionality is defined by a set of events, public methods, and private methods that must be included by a compliant implementation. For a full implementation, please see the [Reference Implementation](#reference-implementation).

#### Events

```solidity
/**
 * @notice Emitted when an initial validator is registered.
 * @notice The field index is the index of the initial validator in the conversion data.
 * This is used along with the subnetID as the ACP-118 justification in
 * signature requests to P-Chain validators over a L1ValidatorRegistrationMessage
 * when removing the validator.
 */
event RegisteredInitialValidator(
    bytes32 indexed validationID,
    bytes20 indexed nodeID,
    bytes32 indexed subnetID,
    uint64 weight,
    uint32 index
);

/// @notice Emitted when a validator registration to the L1 is initiated.
event InitiatedValidatorRegistration(
    bytes32 indexed validationID,
    bytes20 indexed nodeID,
    bytes32 registrationMessageID,
    uint64 registrationExpiry,
    uint64 weight
);

/// @notice Emitted when a validator registration to the L1 is completed.
event CompletedValidatorRegistration(bytes32 indexed validationID, uint64 weight);

/// @notice Emitted when removal of an L1 validator is initiated.
event InitiatedValidatorRemoval(
    bytes32 indexed validationID,
    bytes32 validatorWeightMessageID,
    uint64 weight,
    uint64 endTime
);

/// @notice Emitted when removal of an L1 validator is completed.
event CompletedValidatorRemoval(bytes32 indexed validationID);

/// @notice Emitted when a validator weight update is initiated.
event InitiatedValidatorWeightUpdate(
    bytes32 indexed validationID,
    uint64 nonce,
    bytes32 weightUpdateMessageID,
    uint64 weight
);

/// @notice Emitted when a validator weight update is completed.
event CompletedValidatorWeightUpdate(bytes32 indexed validationID, uint64 nonce, uint64 weight);
```

#### Public Methods

```solidity
/// @notice Returns the SubnetID of the L1 tied to this manager.
function subnetID() public view returns (bytes32 id);

/// @notice Returns the validator details for a given validation ID.
function getValidator(bytes32 validationID) public view returns (Validator memory validator);

/// @notice Returns the total weight of the current L1 validator set.
function l1TotalWeight() public view returns (uint64 weight);

/**
 * @notice Verifies and sets the initial validator set for the chain by consuming a
 * SubnetToL1ConversionMessage from the P-Chain.
 *
 * Emits a {RegisteredInitialValidator} event for each initial validator in {conversionData}.
 *
 * @param conversionData The Subnet conversion message data used to recompute and verify against the ConversionID.
 * @param messageIndex The index that contains the SubnetToL1ConversionMessage ICM message containing the
 * ConversionID to be verified against the provided {conversionData}.
 */
function initializeValidatorSet(
    ConversionData calldata conversionData,
    uint32 messageIndex
) public;

/**
 * @notice Completes the validator registration process by returning an acknowledgement of the registration of a
 * validationID from the P-Chain. The validator should not be considered active until this method is successfully called.
 *
 * Emits a {CompletedValidatorRegistration} event on success.
 *
 * @param messageIndex The index of the L1ValidatorRegistrationMessage to be received providing the acknowledgement.
 * @return validationID The ID of the registered validator.
 */
function completeValidatorRegistration(uint32 messageIndex) public returns (bytes32 validationID);

/**
 * @notice Completes validator removal by consuming an L1ValidatorRegistrationMessage from the P-Chain acknowledging
 * that the validator has been removed.
 *
 * Emits a {CompletedValidatorRemoval} on success.
 *
 * @param messageIndex The index of the L1ValidatorRegistrationMessage.
 */
function completeValidatorRemoval(uint32 messageIndex) public returns (bytes32 validationID);

/**
 * @notice Completes the validator weight update process by consuming an L1ValidatorWeightMessage from the P-Chain
 * acknowledging the weight update. The validator weight change should not have any effect until this method is successfully called.
 *
 * Emits a {CompletedValidatorWeightUpdate} event on success.
 *
 * @param messageIndex The index of the L1ValidatorWeightMessage message to be received providing the acknowledgement.
 * @return validationID The ID of the validator, retrieved from the L1ValidatorWeightMessage.
 * @return nonce The nonce of the validator, retrieved from the L1ValidatorWeightMessage.
 */
function completeValidatorWeightUpdate(uint32 messageIndex) public returns (bytes32 validationID, uint64 nonce);
```

> Note: While `getValidator` provides a way to fetch a `Validator` based on its `validationID`, no method that returns all active validators is specified.
This is because a `mapping` is a reasonable way to store active validators internally, and Solidity `mapping`s are not iterable. This can be worked around by storing additional indexing metadata in the contract, but not all applications may wish to incur that added complexity.

#### Private Methods

The following methods are specified as `internal` to account for different semantics of initiating validator set changes, such as checking uptime attested to via an ICM message, or transferring funds to be locked as stake. Rather than broaden the definitions of these functions to cover all use cases, we leave it to the implementer to define a suitable external interface and call the appropriate `ACP99Manager` function internally.

```solidity
/**
 * @notice Initiates validator registration by issuing a RegisterL1ValidatorMessage. The validator should
 * not be considered active until completeValidatorRegistration is called.
 *
 * Emits an {InitiatedValidatorRegistration} event on success.
 *
 * @param nodeID The ID of the node to add to the L1.
 * @param blsPublicKey The BLS public key of the validator.
 * @param remainingBalanceOwner The remaining balance owner of the validator.
 * @param disableOwner The disable owner of the validator.
 * @param weight The weight of the node on the L1.
 * @return validationID The ID of the registered validator.
 */
function _initiateValidatorRegistration(
    bytes memory nodeID,
    bytes memory blsPublicKey,
    PChainOwner memory remainingBalanceOwner,
    PChainOwner memory disableOwner,
    uint64 weight
) internal returns (bytes32 validationID);

/**
 * @notice Initiates validator removal by issuing an L1ValidatorWeightMessage with the weight set to zero.
 * The validator should be considered inactive as soon as this function is called.
 *
 * Emits an {InitiatedValidatorRemoval} on success.
 *
 * @param validationID The ID of the validator to remove.
 */
function _initiateValidatorRemoval(bytes32 validationID) internal;

/**
 * @notice Initiates a validator weight update by issuing an L1ValidatorWeightMessage with a nonzero weight.
 * The validator weight change should not have any effect until completeValidatorWeightUpdate is successfully called.
 *
 * Emits an {InitiatedValidatorWeightUpdate} event on success.
 *
 * @param validationID The ID of the validator to modify.
 * @param weight The new weight of the validator.
 * @return nonce The validator nonce associated with the weight change.
 * @return messageID The ID of the L1ValidatorWeightMessage used to update the validator's weight.
 */
function _initiateValidatorWeightUpdate(
    bytes32 validationID,
    uint64 weight
) internal returns (uint64 nonce, bytes32 messageID);
```

##### About `DisableL1ValidatorTx`

In addition to calling `_initiateValidatorRemoval`, a validator may be disabled by issuing a `DisableL1ValidatorTx` on the P-Chain. This transaction allows the `DisableOwner` of a validator to disable it directly from the P-Chain and claim the unspent `Balance` linked to the validator of a failed L1. It is therefore not meant to be called from the `Manager` contract.

## Backwards Compatibility

`ACP99Manager` is a reference specification. As such, it doesn't have any impact on the current behavior of the Avalanche protocol.

## Reference Implementation

A reference implementation will be provided in Ava Labs' [ICM Contracts](https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager) repository. This reference implementation will need to be updated to conform to `ACP99Manager` before this ACP may be marked `Implementable`.

### Example Integrations

`ACP99Manager` is designed to be easily incorporated into any architecture. Two example integrations are included in this ACP, each of which uses a different architecture.
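As a companion to the `validationID` derivation for initial validators described in "About `Validator`s" above, here is a small Python sketch. It assumes a big-endian encoding for the 4-byte index, consistent with the ACP-77 codec; the transaction ID used in the example is a made-up placeholder:

```python
import hashlib

def initial_validator_validation_id(conversion_tx_id: bytes, index: int) -> bytes:
    """validationID of an initial validator: SHA256 over the 36 bytes formed by
    the 32-byte ConvertSubnetToL1Tx transaction ID followed by the validator's
    4-byte index within that transaction."""
    if len(conversion_tx_id) != 32:
        raise ValueError("transaction ID must be 32 bytes")
    preimage = conversion_tx_id + index.to_bytes(4, "big")  # 36 bytes total
    return hashlib.sha256(preimage).digest()

# Placeholder transaction ID (all zeros), for illustration only.
tx_id = bytes(32)
vid = initial_validator_validation_id(tx_id, 0)  # 32-byte validationID
```

For validators added post-conversion, the `validationID` is instead the SHA256 of the `AddressedCall` payload in the `RegisterL1ValidatorTx`, which is not sketched here.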
#### Multi-contract Design

The multi-contract design consists of a contract that implements `ACP99Manager`, and separate "security module" contracts that implement security models, such as PoS or PoA. Each `ACP99Manager` implementation contract is associated with one or more "security modules" that are the only contracts allowed to call the `ACP99Manager` functions that initiate validator set changes (`initiateValidatorRegistration` and `initiateValidatorWeightUpdate`). Every time a validator is added/removed or a weight change is initiated, the `ACP99Manager` implementation will, in turn, call the corresponding function of the "security module" (`handleValidatorRegistration` or `handleValidatorWeightChange`). We recommend that the "security modules" reference an immutable `ACP99Manager` contract address for security reasons.

It is up to the "security module" to decide what action to take when a validator is added/removed or a weight change is confirmed by the P-Chain. Such actions could be starting the withdrawal period and allocating rewards in a PoS L1.

```mermaid
flowchart LR
    Safe -.->|Own| SecurityModule
    Safe -.->|Own| Manager
    SecurityModule <-.->|Reference| Manager
    Safe -->|addValidator| SecurityModule
    SecurityModule -->|initiateValidatorRegistration| Manager
    Manager -->|sendWarpMessage| P
    P -->|completeValidatorRegistration| Manager
    Manager -->|handleValidatorRegistration| SecurityModule
```

"Security modules" could implement PoS, Liquid PoS, etc. The specification of such smart contracts is out of the scope of this ACP. A work-in-progress implementation is available in the [Suzaku Contracts Library](https://github.com/suzaku-network/suzaku-contracts-library/blob/main/README.md#acp99-contracts-library) repository. It will be updated until this ACP is considered `Implementable` based on the outcome of the discussion.
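To make the multi-contract flow concrete, here is a minimal, hypothetical Solidity sketch of a PoA-style security module. The manager interface shown is deliberately trimmed for brevity (a real implementation's `initiateValidatorRegistration` would mirror the full `ACP99Manager` signature, including the BLS public key and P-Chain owner parameters); contract and function names other than those quoted from this ACP are illustrative, not taken from the Suzaku or Ava Labs implementations.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Trimmed, illustrative view of the manager's externally callable surface.
interface IACP99Manager {
    function initiateValidatorRegistration(
        bytes calldata nodeID,
        uint64 weight
    ) external returns (bytes32 validationID);
}

// Hypothetical PoA security module: only its owner may register validators,
// and only the referenced manager may deliver registration callbacks.
contract PoASecurityModule {
    address public immutable owner;
    // Immutable manager reference, per the recommendation above.
    IACP99Manager public immutable manager;

    constructor(IACP99Manager manager_) {
        owner = msg.sender;
        manager = manager_;
    }

    // External interface exposed to the L1 operator (e.g. a multisig Safe).
    function addValidator(bytes calldata nodeID, uint64 weight)
        external
        returns (bytes32 validationID)
    {
        require(msg.sender == owner, "PoASecurityModule: not owner");
        return manager.initiateValidatorRegistration(nodeID, weight);
    }

    // Invoked by the manager once the P-Chain has acknowledged the change.
    function handleValidatorRegistration(bytes32 validationID) external view {
        require(msg.sender == address(manager), "PoASecurityModule: not manager");
        // PoA needs no further action; a PoS module might start reward accrual here.
    }
}
```

The key design point this sketch illustrates is the gating in both directions: the module restricts who may trigger validator set changes, while the manager address check ensures only confirmed P-Chain outcomes drive the module's state transitions.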
Ava Labs' V2 Validator Manager also implements this architecture for a Proof-of-Stake security module, and is available in their [ICM Contracts Repository](https://github.com/ava-labs/icm-contracts/tree/validator-manager-v2.0.0/contracts/validator-manager/StakingManager.sol).

#### Single-contract Design

The single-contract design consists of a class hierarchy with the base class implementing `ACP99Manager`. The `PoAValidatorManager` child class in the below diagram may be swapped out for another class implementing a different security model, such as PoS.

```mermaid
classDiagram
    class ACP99Manager
    class ValidatorManager {
        completeValidatorRegistration
    }
    class PoAValidatorManager {
        initiateValidatorRegistration
        initiateEndValidation
        completeEndValidation
    }
    ACP99Manager <|-- ValidatorManager
    ValidatorManager <|-- PoAValidatorManager
```

No reference implementation is provided for this architecture in particular, but Ava Labs' V1 [Validator Manager](https://github.com/ava-labs/icm-contracts/tree/validator-manager-v1.0.0/contracts/validator-manager) implements much of the functional behavior described by the specification. It predates the specification, however, so there are some deviations. It should at most be treated as a model of an approximate implementation of this standard.

## Security Considerations

The audit process of `ACP99Manager` and reference implementations is of the utmost importance for the future of the Avalanche ecosystem, as most L1s would rely upon it to secure their L1.

## Open Questions

### Is there an interest to keep historical information about the validator set on the manager chain?

It is left to the implementer to decide if `getValidator` should return information about historical validators. Information about past validator performance may not be relevant for all applications (e.g. PoA has no need to know about past validators' uptimes). This information will still be available in archive nodes and offchain tools (e.g.
explorers), but it is not enforced at the contract level. ### Should `ACP99Manager` include a churn control mechanism? The Ava Labs [implementation](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/ValidatorManager.sol) of the `ValidatorManager` contract includes a churn control mechanism that prevents too much weight from being added or removed from the validator set in a short amount of time. Excessive churn can cause consensus failures, so it may be appropriate to require that churn tracking is implemented in some capacity. ## Acknowledgments Special thanks to [@leopaul36](https://github.com/leopaul36), [@aaronbuchwald](https://github.com/aaronbuchwald), [@dhrubabasu](https://github.com/dhrubabasu), [@minghinmatthewlam](https://github.com/minghinmatthewlam) and [@michaelkaplan13](https://github.com/michaelkaplan13) for their reviews of previous versions of this ACP! ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # Avalanche Community Proposals (ACPs) (/docs/acps) --- title: "Avalanche Community Proposals (ACPs)" description: "Official framework for proposing improvements and gathering consensus around changes to the Avalanche Network" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/README.md ---
## What is an Avalanche Community Proposal (ACP)?

An Avalanche Community Proposal is a concise document that introduces a change or best practice for adoption on the [Avalanche Network](https://www.avax.com). ACPs should provide clear technical specifications of any proposals and a compelling rationale for their adoption.

ACPs are an open framework for proposing improvements and gathering consensus around changes to the Avalanche Network. ACPs can be proposed by anyone and will be merged into this repository as long as they are well-formatted and coherent. Once an overwhelming majority of the Avalanche Network/Community have [signaled their support for an ACP](https://docs.avax.network/nodes/configure/avalanchego-config-flags#avalanche-community-proposals), it may be scheduled for activation on the Avalanche Network by Avalanche Network Clients (ANCs). It is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible ANC, such as [AvalancheGo](https://github.com/ava-labs/avalanchego).

## ACP Tracks

There are four kinds of ACP:

* A `Standards Track` ACP describes a change to the design or function of the Avalanche Network, such as a change to the P2P networking protocol, P-Chain design, Subnet architecture, or any change/addition that affects the interoperability of Avalanche Network Clients (ANCs).
* A `Best Practices Track` ACP describes a design pattern or common interface that should be used across the Avalanche Network to make it easier to integrate with Avalanche or for Subnets to interoperate with each other. This would include things like proposing a smart contract interface, not proposing a change to how smart contracts are executed.
* A `Meta Track` ACP describes a change to the ACP process or suggests a new way for the Avalanche Community to collaborate.
* A `Subnet Track` ACP describes a change to a particular Subnet. This would include things like configuration changes or coordinated Subnet upgrades.
## ACP Statuses

There are four statuses of an ACP:

* A `Proposed` ACP has been merged into the main branch of the ACP repository. It is actively being discussed by the Avalanche Community and may be modified based on feedback.
* An `Implementable` ACP is considered "ready for implementation" by the author(s) and will no longer change meaningfully from its current form (which would require a new ACP).
* An `Activated` ACP has been activated on the Avalanche Network via a coordinated upgrade by the Avalanche Community. Once an ACP is `Activated`, it is locked.
* A `Stale` ACP has been abandoned by its author(s) because it is not supported by the Avalanche Community or has been replaced with another ACP.

## ACP Workflow

### Step 0: Think of a Novel Improvement to Avalanche

The ACP process begins with a new idea for Avalanche. Each potential ACP must have an author(s): someone who writes the ACP using the style and format described below, shepherds the associated GitHub Discussion, and attempts to build consensus around the idea. Note that ideas and any resulting ACP are public. Authors should not post any ideas or anything in an ACP that they want to keep confidential or to keep ownership rights in (such as intellectual property rights).

### Step 1: Post Your Idea to [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/ideas)

The author(s) should first attempt to ascertain whether there is support for their idea by posting in the "Ideas" category of GitHub Discussions. Vetting an idea publicly before going as far as writing an ACP is meant to save both the potential author(s) and the wider Avalanche Community time. Asking the Avalanche Community first if an idea is original helps prevent too much time being spent on something that is guaranteed to be rejected based on prior discussions (searching the Internet does not always do the trick).
It also helps to make sure the idea is applicable to the entire community and not just the author(s). Small enhancements or patches often don't need standardization between multiple projects; these don't need an ACP and should be injected into the relevant development workflow with a patch submission to the applicable ANC issue tracker. ### Step 2: Propose an ACP via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls) Once the author(s) feels confident that an idea has a decent chance of acceptance, an ACP should be drafted and submitted as a pull request (PR). This draft must be written in ACP style as described below. It is highly recommended that a single ACP contain a single key proposal or new idea. The more focused the ACP, the more successful it tends to be. If in doubt, split your ACP into several well-focused ones. The PR number of the ACP will become its assigned number. ### Step 3: Build Consensus on [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/discussion) and Provide an Implementation (if Applicable) ACPs will be merged by ACP maintainers if the proposal is generally well-formatted and coherent. ACP editors will attempt to merge anything worthy of discussion, regardless of feasibility or complexity, that is not a duplicate or incomplete. After an ACP is merged, an official GitHub Discussion will be opened for the ACP and linked to the proposal for community discussion. It is recommended for author(s) or supportive Avalanche Community members to post an accompanying non-technical overview of their ACP for general consumption in this GitHub Discussion. The ACP should be reviewed and broadly supported before a reference implementation is started, again to avoid wasting the author(s) and the Avalanche Community's time, unless a reference implementation will aid people in studying the ACP. 
### Step 4: Mark ACP as `Implementable` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls) Once an ACP is considered complete by the author(s), it should be marked as `Implementable`. At this point, all open questions should be addressed and an associated reference implementation should be provided (if applicable). As mentioned earlier, the Avalanche Foundation meets periodically to recommend the ratification of specific ACPs but it is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible Avalanche Network Client (ANC), such as [AvalancheGo](https://github.com/ava-labs/avalanchego). ### [Optional] Step 5: Mark ACP as `Stale` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls) An ACP can be superseded by a different ACP, rendering the original obsolete. If this occurs, the original ACP will be marked as `Stale`. ACPs may also be marked as `Stale` if the author(s) abandon work on it for a prolonged period of time (12+ months). ACPs may be reopened and moved back to `Proposed` if the author(s) restart work. ### Maintenance ACP maintainers will only merge PRs updating an ACP if it is created or approved by at least one of the author(s). ACP maintainers are not responsible for ensuring ACP author(s) approve the PR. ACP author(s) are expected to review PRs that target their unlocked ACP (`Proposed` or `Implementable`). Any PRs opened against a locked ACP (`Activated` or `Stale`) will not be merged by ACP maintainers. ## What belongs in a successful ACP? Each ACP must have the following parts: * `Preamble`: Markdown table containing metadata about the ACP, including the ACP number, a short descriptive title, the author(s), and optionally the contact info for each author, etc. 
* `Abstract`: Concise (~200 word) description of the ACP * `Motivation`: Rationale for adopting the ACP and the specific issue/challenge/opportunity it addresses * `Specification`: Complete description of the semantics of any change should allow any ANC/Avalanche Community member to implement the ACP * `Security Considerations`: Security implications of the proposed ACP Each ACP can have the following parts: * `Open Questions`: Questions that should be resolved before implementation Each `Standards Track` ACP must have the following parts: * `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community * `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change Each `Best Practices Track` ACP can have the following parts: * `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community * `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change ### ACP Formats and Templates Each ACP is allocated a unique subdirectory in the `ACPs` directory. The name of this subdirectory must be of the form `N-T` where `N` is the ACP number and `T` is the ACP title with any spaces replaced by hyphens. ACPs must be written in [markdown](https://daringfireball.net/projects/markdown/syntax) format and stored at `ACPs/N-T/README.md`. Please see the [ACP template](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/TEMPLATE.md) for an example of the correct layout. ### Auxiliary Files ACPs may include auxiliary files such as diagrams or code snippets. Such files should be stored in the ACP's subdirectory (`ACPs/N-T/*`). There is no required naming convention for auxiliary files. ### Waived Copyright ACP authors must waive any copyright claims before an ACP will be merged into the repository. 
This can be done by including the following text in an ACP: ```text ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). ``` ## Proposals _You can view the status of each ACP on the [ACP Tracker](https://github.com/orgs/avalanche-foundation/projects/1/views/1)._ | Number | Title | Author(s) | Type | |:-------|:------|:-------|:-----| |[13](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/13-subnet-only-validators/README.md)|Subnet-Only Validators (SOVs)|Patrick O'Grady (contact@patrickogrady.xyz)|Standards| |[20](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/20-ed25519-p2p/README.md)|Ed25519 p2p|Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu))|Standards| |[23](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/23-p-chain-native-transfers/README.md)|P-Chain Native Transfers|Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu))|Standards| |[24](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/24-shanghai-eips/README.md)|Activate Shanghai EIPs on C-Chain|Darioush Jalali ([@darioush](https://github.com/darioush))|Standards| |[25](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/25-vm-application-errors/README.md)|Virtual Machine Application Errors|Joshua Kim ([@joshua-kim](https://github.com/joshua-kim))|Standards| |[30](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/30-avalanche-warp-x-evm/README.md)|Integrate Avalanche Warp Messaging into the EVM|Aaron Buchwald (aaron.buchwald56@gmail.com)|Standards| |[31](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/31-enable-subnet-ownership-transfer/README.md)|Enable Subnet Ownership Transfer|Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu))|Standards| |[41](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/41-remove-pending-stakers/README.md)|Remove Pending Stakers|Dhruba Basu 
([@dhrubabasu](https://github.com/dhrubabasu))|Standards| |[62](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/62-disable-addvalidatortx-and-adddelegatortx/README.md)|Disable `AddValidatorTx` and `AddDelegatorTx`|Jacob Everly (https://twitter.com/JacobEv3rly), Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu))|Standards| |[75](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/75-acceptance-proofs/README.md)|Acceptance Proofs|Joshua Kim ([@joshua-kim](https://github.com/joshua-kim))|Standards| |[77](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/77-reinventing-subnets/README.md)|Reinventing Subnets|Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu))|Standards| |[83](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/83-dynamic-multidimensional-fees/README.md)|Dynamic Multidimensional Fees for P-Chain and X-Chain|Alberto Benegiamo ([@abi87](https://github.com/abi87))|Standards| |[84](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/84-table-preamble/README.md)|Table Preamble for ACPs|Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon))|Meta| |[99](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/99-validatorsetmanager-contract/README.md)|Validator Manager Solidity Standard|Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)), Cam Schultz ([@cam-schultz](https://github.com/cam-schultz))|Best Practices| |[103](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/103-dynamic-fees/README.md)|Add Dynamic Fees to the X-Chain and P-Chain|Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)), Alberto Benegiamo ([@abi87](https://github.com/abi87)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph))|Standards| |[108](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/108-evm-event-importing/README.md)|EVM Event Importing|Michael Kaplan 
([@michaelkaplan13](https://github.com/michaelkaplan13))|Best Practices| |[113](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/113-provable-randomness/README.md)|Provable Virtual Machine Randomness|Tsachi Herman ([@tsachiherman](https://github.com/tsachiherman))|Standards| |[118](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/118-warp-signature-request/README.md)|Standardized P2P Warp Signature Request Interface|Cam Schultz ([@cam-schultz](https://github.com/cam-schultz))|Best Practices| |[125](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/125-basefee-reduction/README.md)|Reduce C-Chain minimum base fee from 25 nAVAX to 1 nAVAX|Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Darioush Jalali ([@darioush](https://github.com/darioush))|Standards| |[131](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/131-cancun-eips/README.md)|Activate Cancun EIPs on C-Chain and Subnet-EVM chains|Darioush Jalali ([@darioush](https://github.com/darioush)), Ceyhun Onur ([@ceyonur](https://github.com/ceyonur))|Standards| |[151](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/151-use-current-block-pchain-height-as-context/README.md)|Use current block P-Chain height as context for state verification|Ian Suvak ([@iansuvak](https://github.com/iansuvak))|Standards| |[176](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md)|Dynamic EVM Gas Limits and Price Discovery Updates|Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13))|Standards| |[181](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/181-p-chain-epoched-views/README.md)|P-Chain Epoched Views|Cam Schultz ([@cam-schultz](https://github.com/cam-schultz))|Standards| 
|[191](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/191-seamless-l1-creation/README.md)|Seamless L1 Creations (CreateL1Tx)|Martin Eckardt ([@martineckardt](https://github.com/martineckardt)), Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)), Meag FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald))|Standards| |[194](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/194-streaming-asynchronous-execution/README.md)|Streaming Asynchronous Execution|Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph))|Standards| |[204](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/204-precompile-secp256r1/README.md)|Precompile for secp256r1 Curve Support|Santiago Cammi ([@scammi](https://github.com/scammi)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N))|Standards| |[209](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/209-eip7702-style-account-abstraction/README.md)|EIP-7702-style Set Code for EOAs|Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13))|Standards| |[224](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/224-dynamic-gas-limit-in-subnet-evm/README.md)|Introduce ACP-176-Based Dynamic Gas Limits and Fee Manager Precompile in Subnet-EVM|Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13))|Standards| |[226](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/226-dynamic-minimum-block-times/README.md)|Dynamic Minimum Block Times|Stephen Buttolph
([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13))|Standards| ## Contributing Before contributing to ACPs, please read the [ACP Terms of Contribution](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/CONTRIBUTING.md). # CLI Commands (/docs/tooling/cli-commands) --- title: "CLI Commands" description: "Complete list of Avalanche CLI commands and their usage." edit_url: https://github.com/ava-labs/avalanche-cli/edit/main/cmd/commands.md --- ## avalanche blockchain The blockchain command suite provides a collection of tools for developing and deploying Blockchains. To get started, use the blockchain create command wizard to walk through the configuration of your very first Blockchain. Then, go ahead and deploy it with the blockchain deploy command. You can use the rest of the commands to manage your Blockchain configurations and live deployments. **Usage:** ```bash avalanche blockchain [subcommand] [flags] ``` **Subcommands:** - [`addValidator`](#avalanche-blockchain-addvalidator): The blockchain addValidator command adds a node as a validator to an L1 of the user-provided deployed network. If the network is proof of authority, the owner of the validator manager contract must sign the transaction. If the network is proof of stake, the node must stake the L1's staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain. This command currently only works on Blockchains deployed to either the Fuji Testnet or Mainnet. - [`changeOwner`](#avalanche-blockchain-changeowner): The blockchain changeOwner command changes the owner of the deployed Blockchain. - [`changeWeight`](#avalanche-blockchain-changeweight): The blockchain changeWeight command changes the weight of an L1 Validator. The L1 has to be a Proof of Authority L1. - [`configure`](#avalanche-blockchain-configure): AvalancheGo nodes support several different configuration files.
Each network (a Subnet or an L1) has its own config which applies to all blockchains/VMs in the network (see https://build.avax.network/docs/nodes/configure/avalanche-l1-configs). Each blockchain within the network can have its own chain config (see https://build.avax.network/docs/nodes/chain-configs/c-chain or https://github.com/ava-labs/subnet-evm/blob/master/plugin/evm/config/config.go for subnet-evm options). A chain can also have special requirements for the AvalancheGo node configuration itself (see https://build.avax.network/docs/nodes/configure/configs-flags). This command allows you to set all those files. - [`create`](#avalanche-blockchain-create): The blockchain create command builds a new genesis file to configure your Blockchain. By default, the command runs an interactive wizard. It walks you through all the steps you need to create your first Blockchain. The tool supports deploying Subnet-EVM and custom VMs. You can create a custom, user-generated genesis with a custom VM by providing the path to your genesis and VM binaries with the --genesis and --vm flags. By default, running the command with a blockchainName that already exists causes the command to fail. If you'd like to overwrite an existing configuration, pass the -f flag. - [`delete`](#avalanche-blockchain-delete): The blockchain delete command deletes an existing blockchain configuration. - [`deploy`](#avalanche-blockchain-deploy): The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet. At the end of the call, the command prints the RPC URL you can use to interact with the Subnet. Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call avalanche network clean to reset all deployed chain state.
Subsequent local deploys redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks, so you can take your locally tested Blockchain and deploy it on Fuji or Mainnet. - [`describe`](#avalanche-blockchain-describe): The blockchain describe command prints the details of a Blockchain configuration to the console. By default, the command prints a summary of the configuration. By providing the --genesis flag, the command instead prints out the raw genesis file. - [`export`](#avalanche-blockchain-export): The blockchain export command writes the details of an existing Blockchain deploy to a file. The command prompts for an output path. You can also provide one with the --output flag. - [`import`](#avalanche-blockchain-import): Import blockchain configurations into avalanche-cli. This command suite supports importing from a file created on another computer, or importing from blockchains running public networks (e.g. created manually or with the deprecated subnet-cli). - [`join`](#avalanche-blockchain-join): The blockchain join command configures your validator node to begin validating a new Blockchain. To complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can generate or update your node's config file automatically. Alternatively, the command can print the necessary instructions to update your node manually. To complete the validation process, the Blockchain's admins must add the NodeID of your validator to the Blockchain's allow list by calling addValidator with your NodeID. After you update your validator's config, you need to restart your validator manually. If you provide the --avalanchego-config flag, this command attempts to edit the config file at that path. This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet.
- [`list`](#avalanche-blockchain-list): The blockchain list command prints the names of all created Blockchain configurations. Without any flags, it prints some general, static information about the Blockchain. With the --deployed flag, the command shows additional information including the VMID, BlockchainID and SubnetID. - [`publish`](#avalanche-blockchain-publish): The blockchain publish command publishes the Blockchain's VM to a repository. - [`removeValidator`](#avalanche-blockchain-removevalidator): The blockchain removeValidator command stops a whitelisted blockchain network validator from validating your deployed Blockchain. To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass these prompts by providing the values with flags. - [`stats`](#avalanche-blockchain-stats): The blockchain stats command prints validator statistics for the given Blockchain. - [`upgrade`](#avalanche-blockchain-upgrade): The blockchain upgrade command suite provides a collection of tools for updating your developmental and deployed Blockchains. - [`validators`](#avalanche-blockchain-validators): The blockchain validators command lists the validators of a blockchain and provides several statistics about them. - [`vmid`](#avalanche-blockchain-vmid): The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain. **Flags:** ```bash -h, --help help for blockchain --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addValidator The blockchain addValidator command adds a node as a validator to an L1 of the user provided deployed network. If the network is proof of authority, the owner of the validator manager contract must sign the transaction. If the network is proof of stake, the node must stake the L1's staking token. 
Both processes will issue a RegisterL1ValidatorTx on the P-Chain. This command currently only works on Blockchains deployed to either the Fuji Testnet or Mainnet. **Usage:** ```bash avalanche blockchain addValidator [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Debug") --aggregator-log-to-stdout use stdout for signature aggregator logs --balance float set the AVAX balance of the validator that will be used for continuous fee on P-Chain --blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's registration (blockchain gas token) --blockchain-key string CLI stored key to use to pay fees for completing the validator's registration (blockchain gas token) --blockchain-private-key string private key to use to pay fees for completing the validator's registration (blockchain gas token) --bls-proof-of-possession string set the BLS proof of possession of the validator to add --bls-public-key string set the BLS public key of the validator to add --cluster string operate on the given cluster --create-local-validator create additional local validator and add it to existing running local node --default-duration (for Subnets, not L1s) set duration so as to validate until primary validator ends its period --default-start-time (for Subnets, not L1s) use default start time for subnet validator (5 minutes later for fuji & mainnet, 30 seconds later for devnet) --default-validator-params (for Subnets, not L1s) use default weight/start/duration params for subnet validator --delegation-fee uint16 (PoS only) delegation fee (in bips) (default 100) --devnet operate on a devnet network --disable-owner string P-Chain address that will be able to disable the
validator with a P-Chain transaction --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet only] -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for addValidator -k, --key string select the key to use [fuji/devnet only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-endpoint string gather node id/bls from publicly available avalanchego apis on the given endpoint --node-id string node-id of the validator to add --output-tx-path string (for Subnets, not L1s) file path of the add validator tx --partial-sync set primary network partial sync for new validators (default true) --remaining-balance-owner string P-Chain address that will receive any leftover AVAX from the validator when it is removed from Subnet --rpc string connect to validator manager at the given rpc endpoint --stake-amount uint (PoS only) amount of tokens to stake --staking-period duration how long this validator will be staking --start-time string (for Subnets, not L1s) UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format --subnet-auth-keys strings (for Subnets, not L1s) control keys that will be used to authenticate add validator tx -t, --testnet fuji operate on testnet (alias to fuji) --wait-for-tx-acceptance (for Subnets, not L1s) just issue the add validator tx, without waiting for its acceptance (default true) --weight uint set the staking weight of the validator to add (default 20) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### changeOwner The blockchain changeOwner command changes the owner of the deployed Blockchain.
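Putting the flags documented for this command together, an invocation might look like the following sketch. The blockchain name, addresses, and threshold are illustrative, and passing the blockchain name as a positional argument is an assumption:

```bash
# Hypothetical sketch: hand control of "myblockchain" on a local network
# to two P-Chain addresses with a 1-of-2 signature threshold.
# All names and addresses below are placeholders.
avalanche blockchain changeOwner myblockchain --local \
  --control-keys P-custom1aaa...,P-custom1bbb... \
  --threshold 1
```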
**Usage:** ```bash avalanche blockchain changeOwner [subcommand] [flags] ``` **Flags:** ```bash --auth-keys strings control keys that will be used to authenticate transfer blockchain ownership tx --cluster string operate on the given cluster --control-keys strings addresses that may make blockchain changes --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet] -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for changeOwner -k, --key string select the key to use [fuji/devnet] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --output-tx-path string file path of the transfer blockchain ownership tx -s, --same-control-key use the fee-paying key as control key -t, --testnet fuji operate on testnet (alias to fuji) --threshold uint32 required number of control key signatures to make blockchain changes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### changeWeight The blockchain changeWeight command changes the weight of an L1 Validator. The L1 must be a Proof of Authority L1.
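A typical changeWeight invocation might look like this sketch. The blockchain name and NodeID are placeholders, and passing the blockchain name as a positional argument is an assumption:

```bash
# Hypothetical sketch: set a PoA L1 validator's weight to 30 on a local network.
# "myblockchain" and the NodeID below are placeholders.
avalanche blockchain changeWeight myblockchain --local \
  --node-id NodeID-111111111111111111116DBWJs \
  --weight 30
```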
**Usage:** ```bash avalanche blockchain changeWeight [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet only] -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for changeWeight -k, --key string select the key to use [fuji/devnet only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-endpoint string gather node id/bls from publicly available avalanchego apis on the given endpoint --node-id string node-id of the validator -t, --testnet fuji operate on testnet (alias to fuji) --weight uint set the new staking weight of the validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### configure AvalancheGo nodes support several different configuration files. Each network (a Subnet or an L1) has its own config, which applies to all blockchains/VMs in the network (see https://build.avax.network/docs/nodes/configure/avalanche-l1-configs). Each blockchain within the network can have its own chain config (see https://build.avax.network/docs/nodes/chain-configs/c-chain and https://github.com/ava-labs/subnet-evm/blob/master/plugin/evm/config/config.go for subnet-evm options). A chain can also have special requirements for the AvalancheGo node configuration itself (see https://build.avax.network/docs/nodes/configure/configs-flags). This command allows you to set all those files.
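The three levels of configuration map onto three flags, as in this sketch. The blockchain name and file paths are placeholders, and passing the blockchain name as a positional argument is an assumption:

```bash
# Hypothetical sketch: point the CLI at all three kinds of config files.
# "myblockchain" and the paths below are placeholders.
avalanche blockchain configure myblockchain \
  --chain-config ./configs/chain.json \
  --subnet-config ./configs/subnet.json \
  --node-config ./configs/node.json
```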
**Usage:** ```bash avalanche blockchain configure [subcommand] [flags] ``` **Flags:** ```bash --chain-config string path to the chain configuration -h, --help help for configure --node-config string path to avalanchego node configuration --per-node-chain-config string path to per node chain configuration for local network --subnet-config string path to the subnet configuration --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### create The blockchain create command builds a new genesis file to configure your Blockchain. By default, the command runs an interactive wizard. It walks you through all the steps you need to create your first Blockchain. The tool supports deploying Subnet-EVM, and custom VMs. You can create a custom, user-generated genesis with a custom VM by providing the path to your genesis and VM binaries with the --genesis and --vm flags. By default, running the command with a blockchainName that already exists causes the command to fail. If you'd like to overwrite an existing configuration, pass the -f flag. 
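To skip the interactive wizard, the flags below can be combined as in this sketch. The blockchain name, chain ID, and token symbol are placeholders:

```bash
# Hypothetical sketch: create a Subnet-EVM blockchain non-interactively.
# The name, chain ID, and token symbol are placeholders.
avalanche blockchain create myblockchain \
  --evm \
  --evm-chain-id 98765 \
  --evm-token MYTKN \
  --test-defaults
```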
**Usage:** ```bash avalanche blockchain create [subcommand] [flags] ``` **Flags:** ```bash --custom use a custom VM template --custom-vm-branch string custom vm branch or commit --custom-vm-build-script string custom vm build-script --custom-vm-path string file path of custom vm to use --custom-vm-repo-url string custom vm repository url --debug enable blockchain debugging (default true) --evm use the Subnet-EVM as the base template --evm-chain-id uint chain ID to use with Subnet-EVM --evm-defaults deprecation notice: use '--production-defaults' --evm-token string token symbol to use with Subnet-EVM --external-gas-token use a gas token from another blockchain -f, --force overwrite the existing configuration if one exists --from-github-repo generate custom VM binary from github repository --genesis string file path of genesis to use -h, --help help for create --icm interoperate with other blockchains using ICM --icm-registry-at-genesis setup ICM registry smart contract on genesis [experimental] --latest use latest Subnet-EVM released version, takes precedence over --vm-version --pre-release use latest Subnet-EVM pre-released version, takes precedence over --vm-version --production-defaults use default production settings for your blockchain --proof-of-authority use proof of authority(PoA) for validator management --proof-of-stake use proof of stake(PoS) for validator management --proxy-contract-owner string EVM address that controls ProxyAdmin for TransparentProxy of ValidatorManager contract --reward-basis-points uint (PoS only) reward basis points for PoS Reward Calculator (default 100) --sovereign set to false if creating non-sovereign blockchain (default true) --teleporter interoperate with other blockchains using ICM --test-defaults use default test settings for your blockchain --validator-manager-owner string EVM address that controls Validator Manager Owner --vm string file path of custom vm to use. 
alias to custom-vm-path --vm-version string version of Subnet-EVM template to use --warp generate a vm with warp support (needed for ICM) (default true) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### delete The blockchain delete command deletes an existing blockchain configuration. **Usage:** ```bash avalanche blockchain delete [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for delete --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy The blockchain deploy command deploys your Blockchain configuration to a Local Network, Fuji Testnet, a DevNet, or Mainnet. At the end of the call, the command prints the RPC URL you can use to interact with the L1 / Subnet. When deploying an L1, Avalanche-CLI lets you use your local machine as a bootstrap validator, so you don't need to run separate Avalanche nodes. This is controlled by the --use-local-machine flag (enabled by default on Local Network). If --use-local-machine is set to true: - Avalanche-CLI will call CreateSubnetTx, CreateChainTx, and ConvertSubnetToL1Tx, followed by syncing the local machine bootstrap validator to the L1 and initializing the Validator Manager contract on the L1 If using your own Avalanche Nodes as bootstrap validators: - Avalanche-CLI will call CreateSubnetTx, CreateChainTx, ConvertSubnetToL1Tx - You will have to sync your bootstrap validators to the L1 - Next, initialize the Validator Manager contract on the L1 using avalanche contract initValidatorManager [L1_Name] Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent attempts to deploy the same Blockchain to the same network (Local Network, Fuji, Mainnet) aren't allowed.
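Given the once-per-network rule, a first-time deploy to Fuji might look like this sketch. The blockchain and key names are placeholders:

```bash
# Hypothetical sketch: deploy a previously created blockchain to Fuji,
# paying fees with a CLI-stored key. "myblockchain" and "mykey" are placeholders.
avalanche blockchain deploy myblockchain --fuji --key mykey
```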
If you'd like to redeploy a Blockchain locally for testing, you must first call avalanche network clean to reset all deployed chain state. Subsequent local deploys redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks, so you can take your locally tested Blockchain and deploy it on Fuji or Mainnet. **Usage:** ```bash avalanche blockchain deploy [subcommand] [flags] ``` **Flags:** ```bash --convert-only avoid node track, restart and poa manager setup -e, --ewoq use ewoq key [local/devnet deploy only] -h, --help help for deploy -k, --key string select the key to use [fuji/devnet deploy only] -g, --ledger use ledger instead of key --ledger-addrs strings use the given ledger addresses --mainnet-chain-id uint32 use different ChainID for mainnet deployment --output-tx-path string file path of the blockchain creation tx (for multi-sig signing) -u, --subnet-id string do not create a subnet, deploy the blockchain into the given subnet id --subnet-only command stops after CreateSubnetTx and returns SubnetID Network Flags (Select One): --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --fuji operate on fuji (alias to `testnet`) --local operate on a local network --mainnet operate on mainnet --testnet operate on testnet (alias to `fuji`) Bootstrap Validators Flags: --balance float64 set the AVAX balance of each bootstrap validator that will be used for continuous fee on P-Chain (setting balance=1 equals to 1 AVAX for each bootstrap validator) --bootstrap-endpoints stringSlice take validator node info from the given endpoints --bootstrap-filepath string JSON file path that provides details about bootstrap validators --change-owner-address string address that will receive change if node is no longer L1 validator --generate-node-id set to true to generate Node IDs for bootstrap validators when none are set up. 
Use these Node IDs to set up your Avalanche Nodes. --num-bootstrap-validators int number of bootstrap validators to set up in a sovereign L1 Local Machine Flags (Use Local Machine as Bootstrap Validator): --avalanchego-path string use this avalanchego binary path --avalanchego-version string use this version of avalanchego (ex: v1.17.12) --http-port uintSlice http port for node(s) --partial-sync set primary network partial sync for new validators --staking-cert-key-path stringSlice path to provided staking cert key for node(s) --staking-port uintSlice staking port for node(s) --staking-signer-key-path stringSlice path to provided staking signer key for node(s) --staking-tls-key-path stringSlice path to provided staking TLS key for node(s) --use-local-machine use local machine as a blockchain validator Local Network Flags: --avalanchego-path string use this avalanchego binary path --avalanchego-version string use this version of avalanchego (ex: v1.17.12) --num-nodes uint32 number of nodes to be created on local network deploy Non Subnet-Only-Validators (Non-SOV) Flags: --auth-keys stringSlice control keys that will be used to authenticate chain creation --control-keys stringSlice addresses that may make blockchain changes --same-control-key use the fee-paying key as control key --threshold uint32 required number of control key signatures to make blockchain changes ICM Flags: --cchain-funding-key string key to be used to fund relayer account on cchain --cchain-icm-key string key to be used to pay for ICM deploys on C-Chain --icm-key string key to be used to pay for ICM deploys --icm-version string ICM version to deploy --relay-cchain relay C-Chain as source and destination --relayer-allow-private-ips allow relayer to connect to private IPs --relayer-amount float64 automatically fund relayer fee payments with the given amount --relayer-key string key to be used by default both for rewards and to pay fees --relayer-log-level string log level to be used for
relayer logs --relayer-path string relayer binary to use --relayer-version string relayer version to deploy --skip-icm-deploy Skip automatic ICM deploy --skip-relayer skip relayer deploy --teleporter-messenger-contract-address-path string path to an ICM Messenger contract address file --teleporter-messenger-deployer-address-path string path to an ICM Messenger deployer address file --teleporter-messenger-deployer-tx-path string path to an ICM Messenger deployer tx file --teleporter-registry-bytecode-path string path to an ICM Registry bytecode file Proof Of Stake Flags: --pos-maximum-stake-amount uint64 maximum stake amount --pos-maximum-stake-multiplier uint8 maximum stake multiplier --pos-minimum-delegation-fee uint16 minimum delegation fee --pos-minimum-stake-amount uint64 minimum stake amount --pos-minimum-stake-duration uint64 minimum stake duration (in seconds) --pos-weight-to-value-factor uint64 weight to value factor Signature Aggregator Flags: --aggregator-log-level string log level to use with signature aggregator --aggregator-log-to-stdout use stdout for signature aggregator logs ``` ### describe The blockchain describe command prints the details of a Blockchain configuration to the console. By default, the command prints a summary of the configuration. By providing the --genesis flag, the command instead prints out the raw genesis file. **Usage:** ```bash avalanche blockchain describe [subcommand] [flags] ``` **Flags:** ```bash -g, --genesis Print the genesis to the console directly instead of the summary -h, --help help for describe --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### export The blockchain export command writes the details of an existing Blockchain deploy to a file. The command prompts for an output path. You can also provide one with the --output flag.
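A non-interactive export might look like this sketch. The blockchain name and output path are placeholders, and passing the blockchain name as a positional argument is an assumption:

```bash
# Hypothetical sketch: write the deploy details of "myblockchain" to a file.
avalanche blockchain export myblockchain -o ./myblockchain-export.json
```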
**Usage:** ```bash avalanche blockchain export [subcommand] [flags] ``` **Flags:** ```bash --custom-vm-branch string custom vm branch --custom-vm-build-script string custom vm build-script --custom-vm-repo-url string custom vm repository url -h, --help help for export -o, --output string write the export data to the provided file path --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### import Import blockchain configurations into avalanche-cli. This command suite supports importing from a file created on another computer, or importing from blockchains running public networks (e.g. created manually or with the deprecated subnet-cli) **Usage:** ```bash avalanche blockchain import [subcommand] [flags] ``` **Subcommands:** - [`file`](#avalanche-blockchain-import-file): The blockchain import command will import a blockchain configuration from a file or a git repository. To import from a file, you can optionally provide the path as a command-line argument. Alternatively, running the command without any arguments triggers an interactive wizard. To import from a repository, go through the wizard. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. - [`public`](#avalanche-blockchain-import-public): The blockchain import public command imports a Blockchain configuration from a running network. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. 
**Flags:** ```bash -h, --help help for import --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### import file The blockchain import command will import a blockchain configuration from a file or a git repository. To import from a file, you can optionally provide the path as a command-line argument. Alternatively, running the command without any arguments triggers an interactive wizard. To import from a repository, go through the wizard. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. **Usage:** ```bash avalanche blockchain import file [subcommand] [flags] ``` **Flags:** ```bash --blockchain string the blockchain configuration to import from the provided repo --branch string the repo branch to use if downloading a new repo -f, --force overwrite the existing configuration if one exists -h, --help help for file --repo string the repo to import (ex: ava-labs/avalanche-plugins-core) or url to download the repo from --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### import public The blockchain import public command imports a Blockchain configuration from a running network. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. 
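Combining the flags documented for this subcommand, an import from a public network might look like this sketch. The blockchain ID and node URL are placeholders:

```bash
# Hypothetical sketch: import a running Subnet-EVM blockchain from Fuji.
# The blockchain ID and node URL below are placeholders.
avalanche blockchain import public --fuji --evm \
  --blockchain-id <blockchainID> \
  --node-url http://127.0.0.1:9650
```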
**Usage:** ```bash avalanche blockchain import public [subcommand] [flags] ``` **Flags:** ```bash --blockchain-id string the blockchain ID --cluster string operate on the given cluster --custom use a custom VM template --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --evm import a subnet-evm --force overwrite the existing configuration if one exists -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for public -l, --local operate on a local network -m, --mainnet operate on mainnet --node-url string [optional] URL of an already running validator -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### join The blockchain join command configures your validator node to begin validating a new Blockchain. To complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can generate or update your node's config file automatically. Alternatively, the command can print the necessary instructions to update your node manually. To complete the validation process, the Blockchain's admins must add the NodeID of your validator to the Blockchain's allow list by calling addValidator with your NodeID. After you update your validator's config, you need to restart your validator manually. If you provide the --avalanchego-config flag, this command attempts to edit the config file at that path. This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet.
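If the CLI is not running on the validator machine, the manual-instructions path might look like this sketch. The blockchain name and NodeID are placeholders, and passing the blockchain name as a positional argument is an assumption:

```bash
# Hypothetical sketch: print the manual config changes needed for a Fuji
# validator to track "myblockchain", instead of editing the node in place.
avalanche blockchain join myblockchain --fuji \
  --node-id NodeID-111111111111111111116DBWJs --print
```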
**Usage:** ```bash avalanche blockchain join [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-config string file path of the avalanchego config file --cluster string operate on the given cluster --data-dir string path of avalanchego's data directory --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force-write if true, skip the prompt to overwrite the config file -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for join -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string set the NodeID of the validator to check --plugin-dir string file path of avalanchego's plugin directory --print if true, print the manual config without prompting --stake-amount uint amount of tokens to stake on validator --staking-period duration how long validator validates for after start time --start-time string start time that validator starts validating -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list The blockchain list command prints the names of all created Blockchain configurations. Without any flags, it prints some general, static information about the Blockchain. With the --deployed flag, the command shows additional information including the VMID, BlockchainID and SubnetID.
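For instance, to include the deploy-time identifiers in the output:

```bash
# Sketch: list all configured blockchains, including deploy details
# such as the VMID, BlockchainID, and SubnetID.
avalanche blockchain list --deployed
```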
**Usage:** ```bash avalanche blockchain list [subcommand] [flags] ``` **Flags:** ```bash --deployed show additional deploy information -h, --help help for list --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### publish The blockchain publish command publishes the Blockchain's VM to a repository. **Usage:** ```bash avalanche blockchain publish [subcommand] [flags] ``` **Flags:** ```bash --alias string We publish to a remote repo, but identify the repo locally under a user-provided alias (e.g. myrepo). --force If true, ignores if the blockchain has been published in the past, and attempts a forced publish. -h, --help help for publish --no-repo-path string Do not let the tool manage file publishing, but have it only generate the files and put them in the location given by this flag. --repo-url string The URL of the repo where we are publishing --subnet-file-path string Path to the Blockchain description file. If not given, a prompting sequence will be initiated. --vm-file-path string Path to the VM description file. If not given, a prompting sequence will be initiated. --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### removeValidator The blockchain removeValidator command stops a whitelisted blockchain network validator from validating your deployed Blockchain. To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass these prompts by providing the values with flags. 
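Bypassing the prompts with flags might look like this sketch. The blockchain name and NodeID are placeholders, and passing the blockchain name as a positional argument is an assumption:

```bash
# Hypothetical sketch: remove a validator from a Fuji-deployed L1 by NodeID.
# The blockchain name and NodeID below are placeholders.
avalanche blockchain removeValidator myblockchain --fuji \
  --node-id NodeID-111111111111111111116DBWJs
```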
**Usage:** ```bash avalanche blockchain removeValidator [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Debug") --aggregator-log-to-stdout use stdout for signature aggregator logs --auth-keys strings (for non-SOV blockchain only) control keys that will be used to authenticate the removeValidator tx --blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's removal (blockchain gas token) --blockchain-key string CLI stored key to use to pay fees for completing the validator's removal (blockchain gas token) --blockchain-private-key string private key to use to pay fees for completing the validator's removal (blockchain gas token) --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force force validator removal even if it's not getting rewarded -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for removeValidator -k, --key string select the key to use [fuji deploy only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-endpoint string remove validator that responds to the given endpoint --node-id string node-id of the validator --output-tx-path string (for non-SOV blockchain only) file path of the removeValidator tx --rpc string connect to validator manager at the given rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --uptime uint validator's uptime in seconds.
If not provided, it will be automatically calculated --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### stats The blockchain stats command prints validator statistics for the given Blockchain. **Usage:** ```bash avalanche blockchain stats [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for stats -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### upgrade The blockchain upgrade command suite provides a collection of tools for updating your developmental and deployed Blockchains. **Usage:** ```bash avalanche blockchain upgrade [subcommand] [flags] ``` **Subcommands:** - [`apply`](#avalanche-blockchain-upgrade-apply): Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade. For public networks (Fuji Testnet or Mainnet), to complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can manipulate your node's configuration automatically. Alternatively, the command can print the necessary instructions to upgrade your node manually. After you update your validator's configuration, you need to restart your validator manually. If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path.
Refer to https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs for related documentation. - [`export`](#avalanche-blockchain-upgrade-export): Export the upgrade bytes file to a location of choice on disk - [`generate`](#avalanche-blockchain-upgrade-generate): The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It guides the user through the process using an interactive wizard. - [`import`](#avalanche-blockchain-upgrade-import): Import the upgrade bytes file into the local environment - [`print`](#avalanche-blockchain-upgrade-print): Print the upgrade.json file content - [`vm`](#avalanche-blockchain-upgrade-vm): The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet. The command walks the user through an interactive wizard. The user can skip the wizard by providing command line flags. **Flags:** ```bash -h, --help help for upgrade --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade apply Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade. For public networks (Fuji Testnet or Mainnet), to complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can manipulate your node's configuration automatically. Alternatively, the command can print the necessary instructions to upgrade your node manually. After you update your validator's configuration, you need to restart your validator manually. If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path. 
Refer to https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs for related documentation. **Usage:** ```bash avalanche blockchain upgrade apply [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-chain-config-dir string avalanchego's chain config file directory (default "/home/runner/.avalanchego/chains") --config create upgrade config for future subnet deployments (same as generate) --force If true, don't prompt for confirmation of timestamps in the past --fuji fuji apply upgrade existing fuji deployment (alias for `testnet`) -h, --help help for apply --local local apply upgrade existing local deployment --mainnet mainnet apply upgrade existing mainnet deployment --print if true, print the manual config without prompting (for public networks only) --testnet testnet apply upgrade existing testnet deployment (alias for `fuji`) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade export Export the upgrade bytes file to a location of choice on disk **Usage:** ```bash avalanche blockchain upgrade export [subcommand] [flags] ``` **Flags:** ```bash --force If true, overwrite a possibly existing file without prompting -h, --help help for export --upgrade-filepath string Export upgrade bytes file to location of choice on disk --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade generate The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It guides the user through the process using an interactive wizard. 
**Usage:** ```bash avalanche blockchain upgrade generate [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for generate --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade import Import the upgrade bytes file into the local environment **Usage:** ```bash avalanche blockchain upgrade import [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for import --upgrade-filepath string Import upgrade bytes file into local environment --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade print Print the upgrade.json file content **Usage:** ```bash avalanche blockchain upgrade print [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for print --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade vm The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet. The command walks the user through an interactive wizard. The user can skip the wizard by providing command line flags. 
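For example, a flag-driven (non-interactive) upgrade of a local deployment to the latest VM release might look like this sketch, where `myBlockchain` is a hypothetical blockchain name:

```bash
# Upgrade the VM binary of a local deployment to the latest release,
# skipping the wizard by passing flags (blockchain name is a placeholder)
avalanche blockchain upgrade vm myBlockchain --local --latest
```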
**Usage:** ```bash avalanche blockchain upgrade vm [subcommand] [flags] ``` **Flags:** ```bash --binary string Upgrade to custom binary --config upgrade config for future subnet deployments --fuji fuji upgrade existing fuji deployment (alias for `testnet`) -h, --help help for vm --latest upgrade to latest version --local local upgrade existing local deployment --mainnet mainnet upgrade existing mainnet deployment --plugin-dir string plugin directory to automatically upgrade VM --print print instructions for upgrading --testnet testnet upgrade existing testnet deployment (alias for `fuji`) --version string Upgrade to custom version --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### validators The blockchain validators command lists the validators of a blockchain and provides several statistics about them. **Usage:** ```bash avalanche blockchain validators [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for validators -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### vmid The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain. 
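A minimal invocation looks like the following sketch (`myBlockchain` is a placeholder for a blockchain created with the CLI):

```bash
# Print the VMID derived for the named blockchain (name is a placeholder)
avalanche blockchain vmid myBlockchain
```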
**Usage:** ```bash avalanche blockchain vmid [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for vmid --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche config Customize configuration for Avalanche-CLI **Usage:** ```bash avalanche config [subcommand] [flags] ``` **Subcommands:** - [`authorize-cloud-access`](#avalanche-config-authorize-cloud-access): set preferences to authorize access to cloud resources - [`metrics`](#avalanche-config-metrics): set user metrics collection preferences - [`migrate`](#avalanche-config-migrate): migrate the old ~/.avalanche-cli.json and ~/.avalanche-cli/config to ~/.avalanche-cli/config.json - [`snapshotsAutoSave`](#avalanche-config-snapshotsautosave): set user preference for whether to auto-save local network snapshots - [`update`](#avalanche-config-update): set user preference for whether to check for updates **Flags:** ```bash -h, --help help for config --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### authorize-cloud-access set preferences to authorize access to cloud resources **Usage:** ```bash avalanche config authorize-cloud-access [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for authorize-cloud-access --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### metrics set user metrics collection preferences **Usage:** ```bash avalanche config metrics [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for metrics --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions ``` ### migrate The migrate command migrates the old ~/.avalanche-cli.json and ~/.avalanche-cli/config to ~/.avalanche-cli/config.json. **Usage:** ```bash avalanche config migrate [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for migrate --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### snapshotsAutoSave set user preference for whether to auto-save local network snapshots **Usage:** ```bash avalanche config snapshotsAutoSave [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for snapshotsAutoSave --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### update set user preference for whether to check for updates **Usage:** ```bash avalanche config update [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for update --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche contract The contract command suite provides a collection of tools for deploying and interacting with smart contracts. **Usage:** ```bash avalanche contract [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-contract-deploy): The contract command suite provides a collection of tools for deploying smart contracts. - [`initValidatorManager`](#avalanche-contract-initvalidatormanager): Initializes the Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up the initial validator set on the Blockchain.
For more info on Validator Manager, please head to https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager **Flags:** ```bash -h, --help help for contract --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy The contract command suite provides a collection of tools for deploying smart contracts. **Usage:** ```bash avalanche contract deploy [subcommand] [flags] ``` **Subcommands:** - [`erc20`](#avalanche-contract-deploy-erc20): Deploy an ERC20 token into a given Network and Blockchain **Flags:** ```bash -h, --help help for deploy --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### deploy erc20 Deploy an ERC20 token into a given Network and Blockchain **Usage:** ```bash avalanche contract deploy erc20 [subcommand] [flags] ``` **Flags:** ```bash --blockchain string deploy the ERC20 contract into the given CLI blockchain --blockchain-id string deploy the ERC20 contract into the given blockchain ID/Alias --c-chain deploy the ERC20 contract into C-Chain --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet --funded string set the funded address --genesis-key use genesis allocated key as contract deployer -h, --help help for erc20 --key string CLI stored key to use as contract deployer -l, --local operate on a local network -m, --mainnet operate on mainnet --private-key string private key to use as contract deployer --rpc string deploy the contract into the given rpc endpoint --supply uint set the token supply --symbol string set the token symbol -t, --testnet fuji operate on testnet (alias to fuji) --config 
string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### initValidatorManager Initializes the Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up the initial validator set on the Blockchain. For more info on Validator Manager, please head to https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager **Usage:** ```bash avalanche contract initValidatorManager [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Debug") --aggregator-log-to-stdout dump signature aggregator logs to stdout --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet --genesis-key use genesis allocated key as contract deployer -h, --help help for initValidatorManager --key string CLI stored key to use as contract deployer -l, --local operate on a local network -m, --mainnet operate on mainnet --pos-maximum-stake-amount uint (PoS only) maximum stake amount (default 1000) --pos-maximum-stake-multiplier uint8 (PoS only) maximum stake multiplier (default 1) --pos-minimum-delegation-fee uint16 (PoS only) minimum delegation fee (default 1) --pos-minimum-stake-amount uint (PoS only) minimum stake amount (default 1) --pos-minimum-stake-duration uint (PoS only) minimum stake duration (in seconds) (default 100) --pos-reward-calculator-address string (PoS only) initialize the ValidatorManager with reward calculator address --pos-weight-to-value-factor uint (PoS only) weight to value
factor (default 1) --private-key string private key to use as contract deployer --rpc string deploy the contract into the given rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche help Help provides help for any command in the application. Simply type avalanche help [path to command] for full details. **Usage:** ```bash avalanche help [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for help --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche icm The messenger command suite provides a collection of tools for interacting with ICM messenger contracts. **Usage:** ```bash avalanche icm [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-icm-deploy): Deploys ICM Messenger and Registry into a given L1. - [`sendMsg`](#avalanche-icm-sendmsg): Sends an ICM message between two blockchains and waits for its reception. **Flags:** ```bash -h, --help help for icm --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy Deploys ICM Messenger and Registry into a given L1. For Local Networks, it also deploys into C-Chain.
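A local-network deployment might look like this sketch (`myBlockchain` is a placeholder for a CLI-managed blockchain):

```bash
# Deploy the ICM Messenger and Registry to a CLI blockchain on the
# local network; on local networks this also deploys to C-Chain
avalanche icm deploy --local --blockchain myBlockchain
```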
**Usage:** ```bash avalanche icm deploy [subcommand] [flags] ``` **Flags:** ```bash --blockchain string deploy ICM into the given CLI blockchain --blockchain-id string deploy ICM into the given blockchain ID/Alias --c-chain deploy ICM into C-Chain --cchain-key string key to be used to pay fees to deploy ICM to C-Chain --cluster string operate on the given cluster --deploy-messenger deploy ICM Messenger (default true) --deploy-registry deploy ICM Registry (default true) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force-registry-deploy deploy ICM Registry even if Messenger has already been deployed -f, --fuji testnet operate on fuji (alias to testnet --genesis-key use genesis allocated key to fund ICM deploy -h, --help help for deploy --include-cchain deploy ICM also to C-Chain --key string CLI stored key to use to fund ICM deploy -l, --local operate on a local network -m, --mainnet operate on mainnet --messenger-contract-address-path string path to a messenger contract address file --messenger-deployer-address-path string path to a messenger deployer address file --messenger-deployer-tx-path string path to a messenger deployer tx file --private-key string private key to use to fund ICM deploy --registry-bytecode-path string path to a registry bytecode file --rpc-url string use the given RPC URL to connect to the subnet -t, --testnet fuji operate on testnet (alias to fuji) --version string version to deploy (default "latest") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### sendMsg Sends an ICM message between two blockchains and waits for its reception.
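As a sketch, assuming `sendMsg` takes the source blockchain, destination blockchain, and message as positional arguments (check `avalanche icm sendMsg --help` to confirm), a local-network test might look like:

```bash
# Send a test message between two CLI blockchains on the local network;
# blockchain names, the key name, and the positional form are assumptions
avalanche icm sendMsg --local --key mytestkey blockchainA blockchainB "hello"
```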
**Usage:** ```bash avalanche icm sendMsg [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --dest-rpc string use the given destination blockchain rpc endpoint --destination-address string deliver the message to the given contract destination address --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet --genesis-key use genesis allocated key as message originator and to pay source blockchain fees -h, --help help for sendMsg --hex-encoded given message is hex encoded --key string CLI stored key to use as message originator and to pay source blockchain fees -l, --local operate on a local network -m, --mainnet operate on mainnet --private-key string private key to use as message originator and to pay source blockchain fees --source-rpc string use the given source blockchain rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche ictt The ictt command suite provides tools to deploy and manage Interchain Token Transferrers. 
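For example, the `deploy` subcommand can place a Transferrer's Home on C-Chain and its Remote on another blockchain. A hedged local-network sketch, with placeholder names:

```bash
# Deploy a Transferrer for the C-Chain native token, with its Remote on
# a CLI-managed blockchain (blockchain name is a placeholder)
avalanche ictt deploy --local \
  --c-chain-home --deploy-native-home \
  --remote-blockchain myBlockchain
```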
**Usage:** ```bash avalanche ictt [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-ictt-deploy): Deploys a Token Transferrer into a given Network and Subnets **Flags:** ```bash -h, --help help for ictt --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy Deploys a Token Transferrer into a given Network and Subnets **Usage:** ```bash avalanche ictt deploy [subcommand] [flags] ``` **Flags:** ```bash --c-chain-home set the Transferrer's Home Chain into C-Chain --c-chain-remote set the Transferrer's Remote Chain into C-Chain --cluster string operate on the given cluster --deploy-erc20-home string deploy a Transferrer Home for the given Chain's ERC20 Token --deploy-native-home deploy a Transferrer Home for the Chain's Native Token --deploy-native-remote deploy a Transferrer Remote for the Chain's Native Token --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for deploy --home-blockchain string set the Transferrer's Home Chain into the given CLI blockchain --home-genesis-key use genesis allocated key to deploy Transferrer Home --home-key string CLI stored key to use to deploy Transferrer Home --home-private-key string private key to use to deploy Transferrer Home --home-rpc string use the given RPC URL to connect to the home blockchain -l, --local operate on a local network -m, --mainnet operate on mainnet --remote-blockchain string set the Transferrer's Remote Chain into the given CLI blockchain --remote-genesis-key use genesis allocated key to deploy Transferrer Remote --remote-key string CLI stored key to use to deploy Transferrer Remote --remote-private-key string private key to use to deploy Transferrer Remote --remote-rpc string use the given RPC URL to connect to the remote 
blockchain --remote-token-decimals uint8 use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)] --remove-minter-admin remove the native minter precompile admin found on remote blockchain genesis -t, --testnet fuji operate on testnet (alias to fuji) --use-home string use the given Transferrer's Home Address --version string tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche interchain The interchain command suite provides a collection of tools to set and manage interoperability between blockchains. **Usage:** ```bash avalanche interchain [subcommand] [flags] ``` **Subcommands:** - [`messenger`](#avalanche-interchain-messenger): The messenger command suite provides a collection of tools for interacting with ICM messenger contracts. - [`relayer`](#avalanche-interchain-relayer): The relayer command suite provides a collection of tools for deploying and configuring ICM relayers. - [`tokenTransferrer`](#avalanche-interchain-tokentransferrer): The tokenTransferrer command suite provides tools to deploy and manage Token Transferrers. **Flags:** ```bash -h, --help help for interchain --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### messenger The messenger command suite provides a collection of tools for interacting with ICM messenger contracts. **Usage:** ```bash avalanche interchain messenger [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-interchain-messenger-deploy): Deploys ICM Messenger and Registry into a given L1.
- [`sendMsg`](#avalanche-interchain-messenger-sendmsg): Sends an ICM message between two blockchains and waits for its reception. **Flags:** ```bash -h, --help help for messenger --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### messenger deploy Deploys ICM Messenger and Registry into a given L1. For Local Networks, it also deploys into C-Chain. **Usage:** ```bash avalanche interchain messenger deploy [subcommand] [flags] ``` **Flags:** ```bash --blockchain string deploy ICM into the given CLI blockchain --blockchain-id string deploy ICM into the given blockchain ID/Alias --c-chain deploy ICM into C-Chain --cchain-key string key to be used to pay fees to deploy ICM to C-Chain --cluster string operate on the given cluster --deploy-messenger deploy ICM Messenger (default true) --deploy-registry deploy ICM Registry (default true) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force-registry-deploy deploy ICM Registry even if Messenger has already been deployed -f, --fuji testnet operate on fuji (alias to testnet --genesis-key use genesis allocated key to fund ICM deploy -h, --help help for deploy --include-cchain deploy ICM also to C-Chain --key string CLI stored key to use to fund ICM deploy -l, --local operate on a local network -m, --mainnet operate on mainnet --messenger-contract-address-path string path to a messenger contract address file --messenger-deployer-address-path string path to a messenger deployer address file --messenger-deployer-tx-path string path to a messenger deployer tx file --private-key string private key to use to fund ICM deploy --registry-bytecode-path string path to a registry bytecode file --rpc-url string use the given RPC URL to connect to the subnet -t, --testnet fuji operate on testnet (alias to fuji) --version string version to deploy (default
"latest") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### messenger sendMsg Sends an ICM message between two blockchains and waits for its reception. **Usage:** ```bash avalanche interchain messenger sendMsg [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --dest-rpc string use the given destination blockchain rpc endpoint --destination-address string deliver the message to the given contract destination address --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet --genesis-key use genesis allocated key as message originator and to pay source blockchain fees -h, --help help for sendMsg --hex-encoded given message is hex encoded --key string CLI stored key to use as message originator and to pay source blockchain fees -l, --local operate on a local network -m, --mainnet operate on mainnet --private-key string private key to use as message originator and to pay source blockchain fees --source-rpc string use the given source blockchain rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### relayer The relayer command suite provides a collection of tools for deploying and configuring ICM relayers. **Usage:** ```bash avalanche interchain relayer [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-interchain-relayer-deploy): Deploys an ICM Relayer for the given Network.
- [`logs`](#avalanche-interchain-relayer-logs): Shows pretty formatted AWM relayer logs - [`start`](#avalanche-interchain-relayer-start): Starts AWM relayer on the specified network (Currently only for local network). - [`stop`](#avalanche-interchain-relayer-stop): Stops AWM relayer on the specified network (Currently only for local network, cluster). **Flags:** ```bash -h, --help help for relayer --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### relayer deploy Deploys an ICM Relayer for the given Network. **Usage:** ```bash avalanche interchain relayer deploy [subcommand] [flags] ``` **Flags:** ```bash --allow-private-ips allow relayer to connect to private IPs (default true) --amount float automatically fund l1s fee payments with the given amount --bin-path string use the given relayer binary --blockchain-funding-key string key to be used to fund relayer account on all l1s --blockchains strings blockchains to relay as source and destination --cchain relay C-Chain as source and destination --cchain-amount float automatically fund cchain fee payments with the given amount --cchain-funding-key string key to be used to fund relayer account on cchain --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for deploy --key string key to be used by default both for rewards and to pay fees -l, --local operate on a local network --log-level string log level to use for relayer logs -t, --testnet fuji operate on testnet (alias to fuji) --version string version to deploy (default "latest-prerelease") --config string config file (default is $HOME/.avalanche-cli/config.json) --skip-update-check skip check for new versions ``` #### relayer logs Shows pretty formatted AWM
relayer logs **Usage:** ```bash avalanche interchain relayer logs [subcommand] [flags] ``` **Flags:** ```bash --endpoint string use the given endpoint for network operations --first uint output first N log lines -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for logs --last uint output last N log lines -l, --local operate on a local network --raw raw logs output -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### relayer start Starts AWM relayer on the specified network (Currently only for local network). **Usage:** ```bash avalanche interchain relayer start [subcommand] [flags] ``` **Flags:** ```bash --bin-path string use the given relayer binary --cluster string operate on the given cluster --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for start -l, --local operate on a local network -t, --testnet fuji operate on testnet (alias to fuji) --version string version to use (default "latest-prerelease") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### relayer stop Stops AWM relayer on the specified network (Currently only for local network, cluster). 
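For a local network, stopping the relayer is a single command:

```bash
# Stop the AWM relayer that was started against the local network
avalanche interchain relayer stop --local
```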
**Usage:** ```bash avalanche interchain relayer stop [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for stop -l, --local operate on a local network -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### tokenTransferrer The tokenTransferrer command suite provides tools to deploy and manage Token Transferrers. **Usage:** ```bash avalanche interchain tokenTransferrer [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-interchain-tokentransferrer-deploy): Deploys a Token Transferrer into a given Network and Subnets **Flags:** ```bash -h, --help help for tokenTransferrer --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### tokenTransferrer deploy Deploys a Token Transferrer into a given Network and Subnets **Usage:** ```bash avalanche interchain tokenTransferrer deploy [subcommand] [flags] ``` **Flags:** ```bash --c-chain-home set the Transferrer's Home Chain into C-Chain --c-chain-remote set the Transferrer's Remote Chain into C-Chain --cluster string operate on the given cluster --deploy-erc20-home string deploy a Transferrer Home for the given Chain's ERC20 Token --deploy-native-home deploy a Transferrer Home for the Chain's Native Token --deploy-native-remote deploy a Transferrer Remote for the Chain's Native Token --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for deploy --home-blockchain string set the Transferrer's Home
Chain into the given CLI blockchain --home-genesis-key use genesis allocated key to deploy Transferrer Home --home-key string CLI stored key to use to deploy Transferrer Home --home-private-key string private key to use to deploy Transferrer Home --home-rpc string use the given RPC URL to connect to the home blockchain -l, --local operate on a local network -m, --mainnet operate on mainnet --remote-blockchain string set the Transferrer's Remote Chain into the given CLI blockchain --remote-genesis-key use genesis allocated key to deploy Transferrer Remote --remote-key string CLI stored key to use to deploy Transferrer Remote --remote-private-key string private key to use to deploy Transferrer Remote --remote-rpc string use the given RPC URL to connect to the remote blockchain --remote-token-decimals uint8 use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)] --remove-minter-admin remove the native minter precompile admin found on remote blockchain genesis -t, --testnet fuji operate on testnet (alias to fuji) --use-home string use the given Transferrer's Home Address --version string tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche key The key command suite provides a collection of tools for creating and managing signing keys. You can use these keys to deploy Subnets to the Fuji Testnet, but these keys are NOT suitable to use in production environments. DO NOT use these keys on Mainnet. To get started, use the key create command. 
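A typical first step is creating a named test key (the key name here is a placeholder):

```bash
# Generate and store a new test key under the given name; keys created
# this way must never be used on Mainnet
avalanche key create mytestkey
```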
**Usage:** ```bash avalanche key [subcommand] [flags] ``` **Subcommands:** - [`create`](#avalanche-key-create): The key create command generates a new private key to use for creating and controlling test Subnets. Keys generated by this command are NOT cryptographically secure enough to use in production environments. DO NOT use these keys on Mainnet. The command works by generating a secp256 key and storing it with the provided keyName. You can use this key in other commands by providing this keyName. If you'd like to import an existing key instead of generating one from scratch, provide the --file flag. - [`delete`](#avalanche-key-delete): The key delete command deletes an existing signing key. To delete a key, provide the keyName. The command prompts for confirmation before deleting the key. To skip the confirmation, provide the --force flag. - [`export`](#avalanche-key-export): The key export command exports a created signing key. You can use an exported key in other applications or import it into another instance of Avalanche-CLI. By default, the tool writes the hex encoded key to stdout. If you provide the --output flag, the command writes the key to a file of your choosing. - [`list`](#avalanche-key-list): The key list command prints information for all stored signing keys or for the ledger addresses associated with certain indices. - [`transfer`](#avalanche-key-transfer): The key transfer command allows you to transfer funds between stored keys or ledger addresses. **Flags:** ```bash -h, --help help for key --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### create The key create command generates a new private key to use for creating and controlling test Subnets. Keys generated by this command are NOT cryptographically secure enough to use in production environments. DO NOT use these keys on Mainnet.
The command works by generating a secp256k1 key and storing it with the provided keyName. You can use this key in other commands by providing this keyName. If you'd like to import an existing key instead of generating one from scratch, provide the --file flag. **Usage:** ```bash avalanche key create [subcommand] [flags] ``` **Flags:** ```bash --file string import the key from an existing key file -f, --force overwrite an existing key with the same name -h, --help help for create --skip-balances do not query public network balances for an imported key --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### delete The key delete command deletes an existing signing key. To delete a key, provide the keyName. The command prompts for confirmation before deleting the key. To skip the confirmation, provide the --force flag. **Usage:** ```bash avalanche key delete [subcommand] [flags] ``` **Flags:** ```bash -f, --force delete the key without confirmation -h, --help help for delete --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### export The key export command exports a created signing key. You can use an exported key in other applications or import it into another instance of Avalanche-CLI. By default, the tool writes the hex encoded key to stdout. If you provide the --output flag, the command writes the key to a file of your choosing.
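For instance, exporting a previously created key might look like this (the key name `mytestkey` is illustrative):

```bash
# Print the hex-encoded key to stdout
avalanche key export mytestkey

# Or write it to a file with the --output flag
avalanche key export --output mytestkey.hex mytestkey
```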
**Usage:** ```bash avalanche key export [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for export -o, --output string write the key to the provided file path --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list The key list command prints information for all stored signing keys or for the ledger addresses associated with certain indices. **Usage:** ```bash avalanche key list [subcommand] [flags] ``` **Flags:** ```bash -a, --all-networks list all network addresses --blockchains strings blockchains to show information about (p=p-chain, x=x-chain, c=c-chain, and blockchain names) (default p,x,c) -c, --cchain list C-Chain addresses (default true) --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for list --keys strings list addresses for the given keys -g, --ledger uints list ledger addresses for the given indices (default []) -l, --local operate on a local network -m, --mainnet operate on mainnet --pchain list P-Chain addresses (default true) --subnets strings subnets to show information about (p=p-chain, x=x-chain, c=c-chain, and blockchain names) (default p,x,c) -t, --testnet fuji operate on testnet (alias to fuji) --tokens strings provide balance information for the given token contract addresses (Evm only) (default [Native]) --use-gwei use gwei for EVM balances -n, --use-nano-avax use nano Avax for balances --xchain list X-Chain addresses (default true) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### transfer The key transfer command allows you to transfer funds between stored keys or ledger addresses.
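As an illustrative sketch, sending funds between two stored keys on a local network might look like this (key names and the amount are hypothetical; the flags used are documented below under **Flags:**):

```bash
# Send 1.5 native token units from the stored key "senderkey"
# to the address of the stored key "receiverkey" on a local network
avalanche key transfer --local \
  --key senderkey \
  --destination-key receiverkey \
  --amount 1.5
```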
**Usage:** ```bash avalanche key transfer [subcommand] [flags] ``` **Flags:** ```bash -o, --amount float amount to send or receive (AVAX or TOKEN units) --c-chain-receiver receive at C-Chain --c-chain-sender send from C-Chain --cluster string operate on the given cluster -a, --destination-addr string destination address --destination-key string key associated to a destination address --destination-subnet string subnet where the funds will be sent (token transferrer experimental) --destination-transferrer-address string token transferrer address at the destination subnet (token transferrer experimental) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for transfer -k, --key string key associated to the sender or receiver address -i, --ledger uint32 ledger index associated to the sender or receiver address (default 32768) -l, --local operate on a local network -m, --mainnet operate on mainnet --origin-subnet string subnet where the funds belong (token transferrer experimental) --origin-transferrer-address string token transferrer address at the origin subnet (token transferrer experimental) --p-chain-receiver receive at P-Chain --p-chain-sender send from P-Chain --receiver-blockchain string receive at the given CLI blockchain --receiver-blockchain-id string receive at the given blockchain ID/Alias --sender-blockchain string send from the given CLI blockchain --sender-blockchain-id string send from the given blockchain ID/Alias -t, --testnet fuji operate on testnet (alias to fuji) --x-chain-receiver receive at X-Chain --x-chain-sender send from X-Chain --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche network The network command suite provides a collection of tools for managing local Blockchain
deployments. When you deploy a Blockchain locally, it runs on a local, multi-node Avalanche network. The blockchain deploy command starts this network in the background. This command suite allows you to shut down, restart, and clear that network. This network currently supports multiple, concurrently deployed Blockchains. **Usage:** ```bash avalanche network [subcommand] [flags] ``` **Subcommands:** - [`clean`](#avalanche-network-clean): The network clean command shuts down your local, multi-node network. All deployed Subnets shut down and delete their state. You can restart the network by deploying a new Subnet configuration. - [`start`](#avalanche-network-start): The network start command starts a local, multi-node Avalanche network on your machine. By default, the command loads the default snapshot. If you provide the --snapshot-name flag, the network loads that snapshot instead. The command fails if the local network is already running. - [`status`](#avalanche-network-status): The network status command prints whether or not a local Avalanche network is running and some basic stats about the network. - [`stop`](#avalanche-network-stop): The network stop command shuts down your local, multi-node network. All deployed Subnets shut down gracefully and save their state. If you provide the --snapshot-name flag, the network saves its state under this named snapshot. You can reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the network saves to the default snapshot, overwriting any existing state. You can reload the default snapshot with network start. **Flags:** ```bash -h, --help help for network --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### clean The network clean command shuts down your local, multi-node network. All deployed Subnets shut down and delete their state.
You can restart the network by deploying a new Subnet configuration. **Usage:** ```bash avalanche network clean [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for clean --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### start The network start command starts a local, multi-node Avalanche network on your machine. By default, the command loads the default snapshot. If you provide the --snapshot-name flag, the network loads that snapshot instead. The command fails if the local network is already running. **Usage:** ```bash avalanche network start [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-path string use this avalanchego binary path --avalanchego-version string use this version of avalanchego (ex: v1.17.12) (default "latest-prerelease") -h, --help help for start --num-nodes uint32 number of nodes to be created on local network (default 2) --relayer-path string use this relayer binary path --relayer-version string use this relayer version (default "latest-prerelease") --snapshot-name string name of snapshot to use to start the network from (default "default") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### status The network status command prints whether or not a local Avalanche network is running and some basic stats about the network. **Usage:** ```bash avalanche network status [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for status --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### stop The network stop command shuts down your local, multi-node network. 
All deployed Subnets shut down gracefully and save their state. If you provide the --snapshot-name flag, the network saves its state under this named snapshot. You can reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the network saves to the default snapshot, overwriting any existing state. You can reload the default snapshot with network start. **Usage:** ```bash avalanche network stop [subcommand] [flags] ``` **Flags:** ```bash --dont-save do not save snapshot, just stop the network -h, --help help for stop --snapshot-name string name of snapshot to use to save network state into (default "default") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche node The node command suite provides a collection of tools for creating and maintaining validators on the Avalanche Network. To get started, use the node create command wizard to walk through the configuration to make your node a primary validator on the Avalanche public network. You can use the rest of the commands to maintain your node and make your node a Subnet Validator. **Usage:** ```bash avalanche node [subcommand] [flags] ``` **Subcommands:** - [`addDashboard`](#avalanche-node-adddashboard): (ALPHA Warning) This command is currently in experimental mode. The node addDashboard command adds a custom dashboard to the Grafana monitoring dashboard for the cluster. - [`create`](#avalanche-node-create): (ALPHA Warning) This command is currently in experimental mode. The node create command sets up a validator on a cloud server of your choice. The validator will be validating the Avalanche Primary Network and Subnet of your choice. By default, the command runs an interactive wizard. It walks you through all the steps you need to set up a validator.
Once this command is completed, you will have to wait for the validator to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status. The created node will be part of a group of validators called `clusterName`, and users can call node commands with `clusterName` so that the command applies to all nodes in the cluster. - [`destroy`](#avalanche-node-destroy): (ALPHA Warning) This command is currently in experimental mode. The node destroy command terminates all running nodes in the cloud server and deletes all storage disks. If there is a static IP address attached, it will be released. - [`devnet`](#avalanche-node-devnet): (ALPHA Warning) This command is currently in experimental mode. The node devnet command suite provides a collection of commands related to devnets. You can check the updated status by calling avalanche node status `clusterName` - [`export`](#avalanche-node-export): (ALPHA Warning) This command is currently in experimental mode. The node export command exports cluster configuration and its nodes config to a text file. If no file is specified, the configuration is printed to stdout. Use --include-secrets to include keys in the export. In that case, keep the file secure, as it contains sensitive information. Exported cluster configuration without secrets can be imported by another user using the node import command. - [`import`](#avalanche-node-import): (ALPHA Warning) This command is currently in experimental mode. The node import command imports cluster configuration and its nodes configuration from a text file created from the node export command. Prior to calling this command, call the node whitelist command to have your SSH public key and IP whitelisted by the cluster owner. This will enable you to use avalanche-cli commands to manage the imported cluster.
Please note that this imported cluster is considered EXTERNAL by avalanche-cli, so some commands affecting cloud nodes, like node create or node destroy, will not be applicable to it. - [`list`](#avalanche-node-list): (ALPHA Warning) This command is currently in experimental mode. The node list command lists all clusters together with their nodes. - [`loadtest`](#avalanche-node-loadtest): (ALPHA Warning) This command is currently in experimental mode. The node loadtest command suite starts and stops a load test for an existing devnet cluster. - [`local`](#avalanche-node-local): The node local command suite provides a collection of commands related to local nodes - [`refresh-ips`](#avalanche-node-refresh-ips): (ALPHA Warning) This command is currently in experimental mode. The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster, and updates the local node information used by CLI commands. - [`resize`](#avalanche-node-resize): (ALPHA Warning) This command is currently in experimental mode. The node resize command can change the amount of CPU, memory and disk space available for the cluster nodes. - [`scp`](#avalanche-node-scp): (ALPHA Warning) This command is currently in experimental mode. The node scp command securely copies files to and from nodes. The remote source or destination can be specified using the following format: [clusterName|nodeID|instanceID|IP]:/path/to/file. Regular expressions are supported for the source files, like /tmp/*.txt. File transfers to the nodes are parallelized. If the source or destination is a cluster, the other must be a local file path. If both are remote, they must be nodes of the same cluster, not clusters themselves.
For example:
```bash
avalanche node scp [cluster1|node1]:/tmp/file.txt /tmp/file.txt
avalanche node scp /tmp/file.txt [cluster1|NodeID-XXXX]:/tmp/file.txt
avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt
```
- [`ssh`](#avalanche-node-ssh): (ALPHA Warning) This command is currently in experimental mode. The node ssh command executes a given command [cmd] using ssh on all nodes in the cluster if ClusterName is given. If no command is given, it just prints the ssh command to be used to connect to each node in the cluster. For a provided NodeID, InstanceID, or IP, the command [cmd] will be executed on that node. If no [cmd] is provided for the node, it will open an ssh shell there. - [`status`](#avalanche-node-status): (ALPHA Warning) This command is currently in experimental mode. The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network. If no cluster is given, defaults to node list behaviour. To get the bootstrap status of a node with a Blockchain, use the --blockchain flag - [`sync`](#avalanche-node-sync): (ALPHA Warning) This command is currently in experimental mode. The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain. You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName` - [`update`](#avalanche-node-update): (ALPHA Warning) This command is currently in experimental mode. The node update command suite provides a collection of commands for nodes to update their avalanchego or VM config. You can check the status after update by calling avalanche node status - [`upgrade`](#avalanche-node-upgrade): (ALPHA Warning) This command is currently in experimental mode. The node upgrade command suite provides a collection of commands for nodes to upgrade their avalanchego or VM version.
You can check the status after upgrade by calling avalanche node status - [`validate`](#avalanche-node-validate): (ALPHA Warning) This command is currently in experimental mode. The node validate command suite provides a collection of commands for nodes to join the Primary Network and Subnets as validators. If any of the commands is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` - [`whitelist`](#avalanche-node-whitelist): (ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster. The command adds an IP to the cloud security access rules if the --ip param is provided, allowing that IP to access all nodes in the cluster via ssh or http. It also adds an SSH public key to all nodes in the cluster if the --ssh param is provided. If no params are provided, it detects the current user's IP automatically and whitelists it. **Flags:** ```bash -h, --help help for node --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addDashboard (ALPHA Warning) This command is currently in experimental mode. The node addDashboard command adds a custom dashboard to the Grafana monitoring dashboard for the cluster. **Usage:** ```bash avalanche node addDashboard [subcommand] [flags] ``` **Flags:** ```bash --add-grafana-dashboard string path to additional grafana dashboard json file -h, --help help for addDashboard --subnet string subnet that the dashboard is intended for (if any) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### create (ALPHA Warning) This command is currently in experimental mode. The node create command sets up a validator on a cloud server of your choice.
The validator will be validating the Avalanche Primary Network and Subnet of your choice. By default, the command runs an interactive wizard. It walks you through all the steps you need to set up a validator. Once this command is completed, you will have to wait for the validator to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status. The created node will be part of a group of validators called `clusterName`, and users can call node commands with `clusterName` so that the command applies to all nodes in the cluster. **Usage:** ```bash avalanche node create [subcommand] [flags] ``` **Flags:** ```bash --add-grafana-dashboard string path to additional grafana dashboard json file --alternative-key-pair-name string key pair name to use if default one generates conflicts --authorize-access authorize CLI to create cloud resources --auto-replace-keypair automatically replaces key pair to access node if previous key pair is not found --avalanchego-version-from-subnet string install latest avalanchego version, that is compatible with the given subnet, on node/s --aws create node/s in AWS cloud --aws-profile string aws profile to use (default "default") --aws-volume-iops int AWS iops (for gp3, io1, and io2 volume types only) (default 3000) --aws-volume-size int AWS volume size in GB (default 1000) --aws-volume-throughput int AWS throughput in MiB/s (for gp3 volume type only) (default 125) --aws-volume-type string AWS volume type (default "gp3") --bootstrap-ids stringArray nodeIDs of bootstrap nodes --bootstrap-ips stringArray IP:port pairs of bootstrap nodes --cluster string operate on the given cluster --custom-avalanchego-version string install given avalanchego version on node/s --devnet operate on a devnet network --enable-monitoring set up Prometheus monitoring for created nodes.
This option creates a separate monitoring cloud instance and incurs additional cost --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) --gcp create node/s in GCP cloud --gcp-credentials string use given GCP credentials --gcp-project string use given GCP project --genesis string path to genesis file --grafana-pkg string use grafana pkg instead of apt repo (by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb -h, --help help for create --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s --latest-avalanchego-version install latest avalanchego release version on node/s -m, --mainnet operate on mainnet --node-type string cloud instance type. Use 'default' to use recommended default instance type --num-apis ints number of API nodes (nodes without stake) to create in the new Devnet --num-validators ints number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag --partial-sync primary network partial sync (default true) --public-http-port allow public access to avalanchego HTTP port --region strings create node(s) in given region(s). Use comma to separate multiple regions --ssh-agent-identity string use given ssh identity (only for ssh agent). If not set, default will be used -t, --testnet fuji operate on testnet (alias to fuji) --upgrade string path to upgrade file --use-ssh-agent use ssh agent (ex: Yubikey) for ssh auth --use-static-ip attach static Public IP on cloud servers (default true) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### destroy (ALPHA Warning) This command is currently in experimental mode. The node destroy command terminates all running nodes in the cloud server and deletes all storage disks.
If there is a static IP address attached, it will be released. **Usage:** ```bash avalanche node destroy [subcommand] [flags] ``` **Flags:** ```bash --all destroy all existing clusters created by Avalanche CLI --authorize-access authorize CLI to release cloud resources -y, --authorize-all authorize all CLI requests --authorize-remove authorize CLI to remove all local files related to cloud nodes --aws-profile string aws profile to use (default "default") -h, --help help for destroy --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### devnet (ALPHA Warning) This command is currently in experimental mode. The node devnet command suite provides a collection of commands related to devnets. You can check the updated status by calling avalanche node status `clusterName` **Usage:** ```bash avalanche node devnet [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-node-devnet-deploy): (ALPHA Warning) This command is currently in experimental mode. The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it. It saves the deploy info both locally and remotely. - [`wiz`](#avalanche-node-devnet-wiz): (ALPHA Warning) This command is currently in experimental mode. The node wiz command creates a devnet and deploys, syncs, and validates a subnet into it, creating the subnet if needed. **Flags:** ```bash -h, --help help for devnet --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### devnet deploy (ALPHA Warning) This command is currently in experimental mode. The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it. It saves the deploy info both locally and remotely.
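As an illustrative sketch, deploying an existing subnet configuration into a devnet cluster might look like this (the subnet name `mysubnet` and cluster name `mycluster` are hypothetical, and the exact argument order may differ from this sketch):

```bash
# Deploy the subnet "mysubnet" into the devnet cluster "mycluster"
# (names and argument order are illustrative)
avalanche node devnet deploy mysubnet --cluster mycluster
```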
**Usage:** ```bash avalanche node devnet deploy [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for deploy --no-checks do not check for healthy status or rpc compatibility of nodes against subnet --subnet-aliases strings additional subnet aliases to be used for RPC calls in addition to subnet blockchain name --subnet-only only create a subnet --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### devnet wiz (ALPHA Warning) This command is currently in experimental mode. The node wiz command creates a devnet and deploys, syncs, and validates a subnet into it, creating the subnet if needed. **Usage:** ```bash avalanche node devnet wiz [subcommand] [flags] ``` **Flags:** ```bash --add-grafana-dashboard string path to additional grafana dashboard json file --alternative-key-pair-name string key pair name to use if default one generates conflicts --authorize-access authorize CLI to create cloud resources --auto-replace-keypair automatically replaces key pair to access node if previous key pair is not found --aws create node/s in AWS cloud --aws-profile string aws profile to use (default "default") --aws-volume-iops int AWS iops (for gp3, io1, and io2 volume types only) (default 3000) --aws-volume-size int AWS volume size in GB (default 1000) --aws-volume-throughput int AWS throughput in MiB/s (for gp3 volume type only) (default 125) --aws-volume-type string AWS volume type (default "gp3") --chain-config string path to the chain configuration for subnet --custom-avalanchego-version string install given avalanchego version on node/s --custom-subnet use a custom VM as the subnet virtual machine --custom-vm-branch string custom vm branch or commit --custom-vm-build-script string custom vm build-script --custom-vm-repo-url string custom vm repository url --default-validator-params use default weight/start/duration params
for subnet validator --deploy-icm-messenger deploy Interchain Messenger (default true) --deploy-icm-registry deploy Interchain Registry (default true) --deploy-teleporter-messenger deploy Interchain Messenger (default true) --deploy-teleporter-registry deploy Interchain Registry (default true) --enable-monitoring set up Prometheus monitoring for created nodes. Please note that this option creates a separate monitoring instance and incurs additional cost --evm-chain-id uint chain ID to use with Subnet-EVM --evm-defaults use default production settings with Subnet-EVM --evm-production-defaults use default production settings for your blockchain --evm-subnet use Subnet-EVM as the subnet virtual machine --evm-test-defaults use default test settings for your blockchain --evm-token string token name to use with Subnet-EVM --evm-version string version of Subnet-EVM to use --force-subnet-create overwrite the existing subnet configuration if one exists --gcp create node/s in GCP cloud --gcp-credentials string use given GCP credentials --gcp-project string use given GCP project --grafana-pkg string use grafana pkg instead of apt repo (by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb -h, --help help for wiz --icm generate an icm-ready vm --icm-messenger-contract-address-path string path to an icm messenger contract address file --icm-messenger-deployer-address-path string path to an icm messenger deployer address file --icm-messenger-deployer-tx-path string path to an icm messenger deployer tx file --icm-registry-bytecode-path string path to an icm registry bytecode file --icm-version string icm version to deploy (default "latest") --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s --latest-avalanchego-version install latest avalanchego release version on node/s --latest-evm-version use latest Subnet-EVM released version --latest-pre-released-evm-version use latest Subnet-EVM pre-released
version --node-config string path to avalanchego node configuration for subnet --node-type string cloud instance type. Use 'default' to use recommended default instance type --num-apis ints number of API nodes(nodes without stake) to create in the new Devnet --num-validators ints number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag --public-http-port allow public access to avalanchego HTTP port --region strings create node/s in given region(s). Use comma to separate multiple regions --relayer run AWM relayer when deploying the vm --ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used. --subnet-aliases strings additional subnet aliases to be used for RPC calls in addition to subnet blockchain name --subnet-config string path to the subnet configuration for subnet --subnet-genesis string file path of the subnet genesis --teleporter generate an icm-ready vm --teleporter-messenger-contract-address-path string path to an icm messenger contract address file --teleporter-messenger-deployer-address-path string path to an icm messenger deployer address file --teleporter-messenger-deployer-tx-path string path to an icm messenger deployer tx file --teleporter-registry-bytecode-path string path to an icm registry bytecode file --teleporter-version string icm version to deploy (default "latest") --use-ssh-agent use ssh agent for ssh --use-static-ip attach static Public IP on cloud servers (default true) --validators strings deploy subnet into given comma separated list of validators. defaults to all cluster nodes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### export (ALPHA Warning) This command is currently in experimental mode. 
The node export command exports cluster configuration and its nodes config to a text file. If no file is specified, the configuration is printed to stdout. Use --include-secrets to include keys in the export. In that case, keep the file secure, as it contains sensitive information. Exported cluster configuration without secrets can be imported by another user using the node import command. **Usage:** ```bash avalanche node export [subcommand] [flags] ``` **Flags:** ```bash --file string specify the file to export the cluster configuration to --force overwrite the file if it exists -h, --help help for export --include-secrets include keys in the export --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### import (ALPHA Warning) This command is currently in experimental mode. The node import command imports cluster configuration and its nodes configuration from a text file created from the node export command. Prior to calling this command, call the node whitelist command to have your SSH public key and IP whitelisted by the cluster owner. This will enable you to use avalanche-cli commands to manage the imported cluster. Please note that this imported cluster is considered EXTERNAL by avalanche-cli, so some commands affecting cloud nodes, like node create or node destroy, will not be applicable to it. **Usage:** ```bash avalanche node import [subcommand] [flags] ``` **Flags:** ```bash --file string specify the file to import the cluster configuration from -h, --help help for import --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list (ALPHA Warning) This command is currently in experimental mode. The node list command lists all clusters together with their nodes.
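For example, since node list takes no required arguments, a bare invocation is enough:

```bash
# Print every cluster known to the CLI together with its nodes
avalanche node list
```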
**Usage:** ```bash avalanche node list [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for list --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### loadtest (ALPHA Warning) This command is currently in experimental mode. The node loadtest command suite starts and stops a load test for an existing devnet cluster. **Usage:** ```bash avalanche node loadtest [subcommand] [flags] ``` **Subcommands:** - [`start`](#avalanche-node-loadtest-start): (ALPHA Warning) This command is currently in experimental mode. The node loadtest command starts load testing for an existing devnet cluster. If the cluster does not have an existing load test host, the command creates a separate cloud server and builds the load test binary based on the provided load test Git Repo URL and load test binary build command. The command will then run the load test binary based on the provided load test run command. - [`stop`](#avalanche-node-loadtest-stop): (ALPHA Warning) This command is currently in experimental mode. The node loadtest stop command stops load testing for an existing devnet cluster and terminates the separate cloud server created to host the load test. **Flags:** ```bash -h, --help help for loadtest --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### loadtest start (ALPHA Warning) This command is currently in experimental mode. The node loadtest command starts load testing for an existing devnet cluster. If the cluster does not have an existing load test host, the command creates a separate cloud server and builds the load test binary based on the provided load test Git Repo URL and load test binary build command. 
The command will then run the load test binary based on the provided load test run command. **Usage:** ```bash avalanche node loadtest start [subcommand] [flags] ``` **Flags:** ```bash --authorize-access authorize CLI to create cloud resources --aws create loadtest node in AWS cloud --aws-profile string aws profile to use (default "default") --gcp create loadtest in GCP cloud -h, --help help for start --load-test-branch string load test branch or commit --load-test-build-cmd string command to build load test binary --load-test-cmd string command to run load test --load-test-repo string load test repo url to use --node-type string cloud instance type for loadtest script --region string create load test node in a given region --ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used --use-ssh-agent use ssh agent(ex: Yubikey) for ssh auth --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### loadtest stop (ALPHA Warning) This command is currently in experimental mode. The node loadtest stop command stops load testing for an existing devnet cluster and terminates the separate cloud server created to host the load test. **Usage:** ```bash avalanche node loadtest stop [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for stop --load-test strings stop specified load test node(s). Use comma to separate multiple load test instance names --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### local The node local command suite provides a collection of commands related to local nodes **Usage:** ```bash avalanche node local [subcommand] [flags] ``` **Subcommands:** - [`destroy`](#avalanche-node-local-destroy): Cleanup local node. 
- [`start`](#avalanche-node-local-start): The node local start command creates Avalanche nodes on the local machine. Once this command is completed, you will have to wait for the Avalanche node to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status local. - [`status`](#avalanche-node-local-status): Get status of local node. - [`stop`](#avalanche-node-local-stop): Stop local node. - [`track`](#avalanche-node-local-track): Track specified blockchain with local node - [`validate`](#avalanche-node-local-validate): Use Avalanche Node set up on local machine to set up specified L1 by providing the RPC URL of the L1. This command can only be used to validate Proof of Stake L1. **Flags:** ```bash -h, --help help for local --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local destroy Cleanup local node. **Usage:** ```bash avalanche node local destroy [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for destroy --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local start The node local start command creates Avalanche nodes on the local machine. Once this command is completed, you will have to wait for the Avalanche node to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status local. 
**Usage:** ```bash avalanche node local start [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-path string use this avalanchego binary path --bootstrap-id stringArray nodeIDs of bootstrap nodes --bootstrap-ip stringArray IP:port pairs of bootstrap nodes --cluster string operate on the given cluster --custom-avalanchego-version string install given avalanchego version on node/s --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) --genesis string path to genesis file -h, --help help for start --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s (default true) --latest-avalanchego-version install latest avalanchego release version on node/s -l, --local operate on a local network -m, --mainnet operate on mainnet --node-config string path to common avalanchego config settings for all nodes --num-nodes uint32 number of Avalanche nodes to create on local machine (default 1) --partial-sync primary network partial sync (default true) --staking-cert-key-path string path to provided staking cert key for node --staking-signer-key-path string path to provided staking signer key for node --staking-tls-key-path string path to provided staking tls key for node -t, --testnet fuji operate on testnet (alias to fuji) --upgrade string path to upgrade file --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local status Get status of local node.
**Usage:** ```bash avalanche node local status [subcommand] [flags] ``` **Flags:** ```bash --blockchain string specify the blockchain the node is syncing with -h, --help help for status --l1 string specify the blockchain the node is syncing with --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local stop Stop local node. **Usage:** ```bash avalanche node local stop [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for stop --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local track Track specified blockchain with local node **Usage:** ```bash avalanche node local track [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-path string use this avalanchego binary path --custom-avalanchego-version string install given avalanchego version on node/s -h, --help help for track --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s (default true) --latest-avalanchego-version install latest avalanchego release version on node/s --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local validate Use Avalanche Node set up on local machine to set up specified L1 by providing the RPC URL of the L1. This command can only be used to validate Proof of Stake L1. 
**Usage:** ```bash avalanche node local validate [subcommand] [flags] ``` **Flags:** ```bash --aggregator-log-level string log level to use with signature aggregator (default "Debug") --aggregator-log-to-stdout use stdout for signature aggregator logs --balance float amount of AVAX to increase validator's balance by --blockchain string specify the blockchain the node is syncing with --delegation-fee uint16 delegation fee (in bips) (default 100) --disable-owner string P-Chain address that will be able to disable the validator with a P-Chain transaction -h, --help help for validate --l1 string specify the blockchain the node is syncing with --minimum-stake-duration uint minimum stake duration (in seconds) (default 100) --remaining-balance-owner string P-Chain address that will receive any leftover AVAX from the validator when it is removed from the Subnet --rpc string connect to validator manager at the given rpc endpoint --stake-amount uint amount of tokens to stake --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### refresh-ips (ALPHA Warning) This command is currently in experimental mode. The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster, and updates the local node information used by CLI commands. **Usage:** ```bash avalanche node refresh-ips [subcommand] [flags] ``` **Flags:** ```bash --aws-profile string aws profile to use (default "default") -h, --help help for refresh-ips --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### resize (ALPHA Warning) This command is currently in experimental mode. The node resize command can change the amount of CPU, memory and disk space available for the cluster nodes.
**Usage:** ```bash avalanche node resize [subcommand] [flags] ``` **Flags:** ```bash --aws-profile string aws profile to use (default "default") --disk-size string Disk size to resize in Gb (e.g. 1000Gb) -h, --help help for resize --node-type string Node type to resize (e.g. t3.2xlarge) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### scp (ALPHA Warning) This command is currently in experimental mode. The node scp command securely copies files to and from nodes. Remote source or destination can be specified using the following format: [clusterName|nodeID|instanceID|IP]:/path/to/file. Regular expressions are supported for the source files like /tmp/*.txt. File transfers to the nodes are parallelized. If the source or destination is a cluster, the other should be a local file path. If both destinations are remote, they must be nodes for the same cluster and not clusters themselves. For example: $ avalanche node scp [cluster1|node1]:/tmp/file.txt /tmp/file.txt $ avalanche node scp /tmp/file.txt [cluster1|NodeID-XXXX]:/tmp/file.txt $ avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt **Usage:** ```bash avalanche node scp [subcommand] [flags] ``` **Flags:** ```bash --compress use compression for ssh -h, --help help for scp --recursive copy directories recursively --with-loadtest include loadtest node for scp cluster operations --with-monitor include monitoring node for scp cluster operations --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### ssh (ALPHA Warning) This command is currently in experimental mode. The node ssh command executes a given command [cmd] using ssh on all nodes in the cluster if ClusterName is given.
If no command is given, it just prints the ssh command to be used to connect to each node in the cluster. For a provided NodeID or InstanceID or IP, the command [cmd] will be executed on that node. If no [cmd] is provided for the node, it will open an ssh shell there. **Usage:** ```bash avalanche node ssh [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for ssh --parallel run ssh command on all nodes in parallel --with-loadtest include loadtest node for ssh cluster operations --with-monitor include monitoring node for ssh cluster operations --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### status (ALPHA Warning) This command is currently in experimental mode. The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network. If no cluster is given, defaults to node list behaviour. To get the bootstrap status of a node with a Blockchain, use the --blockchain flag. **Usage:** ```bash avalanche node status [subcommand] [flags] ``` **Flags:** ```bash --blockchain string specify the blockchain the node is syncing with -h, --help help for status --subnet string specify the blockchain the node is syncing with --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### sync (ALPHA Warning) This command is currently in experimental mode. The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain.
You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName` **Usage:** ```bash avalanche node sync [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for sync --no-checks do not check for bootstrapped/healthy status or rpc compatibility of nodes against subnet --subnet-aliases strings subnet alias to be used for RPC calls. defaults to subnet blockchain ID --validators strings sync subnet into given comma separated list of validators. defaults to all cluster nodes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### update (ALPHA Warning) This command is currently in experimental mode. The node update command suite provides a collection of commands for nodes to update their avalanchego or VM config. You can check the status after update by calling avalanche node status **Usage:** ```bash avalanche node update [subcommand] [flags] ``` **Subcommands:** - [`subnet`](#avalanche-node-update-subnet): (ALPHA Warning) This command is currently in experimental mode. The node update subnet command updates all nodes in a cluster with latest Subnet configuration and VM for custom VM. You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName` **Flags:** ```bash -h, --help help for update --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### update subnet (ALPHA Warning) This command is currently in experimental mode. The node update subnet command updates all nodes in a cluster with latest Subnet configuration and VM for custom VM. 
You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName` **Usage:** ```bash avalanche node update subnet [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for subnet --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### upgrade (ALPHA Warning) This command is currently in experimental mode. The node upgrade command suite provides a collection of commands for nodes to upgrade their avalanchego or VM version. You can check the status after upgrade by calling avalanche node status **Usage:** ```bash avalanche node upgrade [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for upgrade --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### validate (ALPHA Warning) This command is currently in experimental mode. The node validate command suite provides a collection of commands for nodes to join the Primary Network and Subnets as validators. If any of the commands is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` **Usage:** ```bash avalanche node validate [subcommand] [flags] ``` **Subcommands:** - [`primary`](#avalanche-node-validate-primary): (ALPHA Warning) This command is currently in experimental mode. The node validate primary command enables all nodes in a cluster to be validators of the Primary Network. - [`subnet`](#avalanche-node-validate-subnet): (ALPHA Warning) This command is currently in experimental mode. The node validate subnet command enables all nodes in a cluster to be validators of a Subnet.
If the command is run before the nodes are Primary Network validators, the command will first make the nodes Primary Network validators before making them Subnet validators. If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` If the command is run before the nodes are synced to the subnet, the command will fail. You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName` **Flags:** ```bash -h, --help help for validate --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### validate primary (ALPHA Warning) This command is currently in experimental mode. The node validate primary command enables all nodes in a cluster to be validators of the Primary Network. **Usage:** ```bash avalanche node validate primary [subcommand] [flags] ``` **Flags:** ```bash -e, --ewoq use ewoq key [fuji/devnet only] -h, --help help for primary -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses --stake-amount uint how many AVAX to stake in the validator --staking-period duration how long validator validates for after start time --start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### validate subnet (ALPHA Warning) This command is currently in experimental mode. The node validate subnet command enables all nodes in a cluster to be validators of a Subnet.
If the command is run before the nodes are Primary Network validators, the command will first make the nodes Primary Network validators before making them Subnet validators. If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` If the command is run before the nodes are synced to the subnet, the command will fail. You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName` **Usage:** ```bash avalanche node validate subnet [subcommand] [flags] ``` **Flags:** ```bash --default-validator-params use default weight/start/duration params for subnet validator -e, --ewoq use ewoq key [fuji/devnet only] -h, --help help for subnet -k, --key string select the key to use [fuji/devnet only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses --no-checks do not check for bootstrapped status or healthy status --no-validation-checks do not check if subnet is already synced or validated (default true) --stake-amount uint how many AVAX to stake in the validator --staking-period duration how long validator validates for after start time --start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format --validators strings validate subnet for the given comma separated list of validators. defaults to all cluster nodes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### whitelist (ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster. The command adds an IP to the cloud security access rules if the --ip param is provided, allowing that IP to access all nodes in the cluster via ssh or http.
It also adds an SSH public key to all nodes in the cluster if the --ssh param is provided. If no params are provided, it detects the current user's IP automatically and whitelists it. **Usage:** ```bash avalanche node whitelist [subcommand] [flags] ``` **Flags:** ```bash -y, --current-ip whitelist current host ip -h, --help help for whitelist --ip string ip address to whitelist --ssh string ssh public key to whitelist --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche primary The primary command suite provides a collection of tools for interacting with the Primary Network **Usage:** ```bash avalanche primary [subcommand] [flags] ``` **Subcommands:** - [`addValidator`](#avalanche-primary-addvalidator): The primary addValidator command adds a node as a validator in the Primary Network - [`describe`](#avalanche-primary-describe): The primary describe command prints details of the primary network configuration to the console.
**Flags:** ```bash -h, --help help for primary --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addValidator The primary addValidator command adds a node as a validator in the Primary Network **Usage:** ```bash avalanche primary addValidator [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --delegation-fee uint32 set the delegation fee (20 000 is equivalent to 2%) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for addValidator -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -m, --mainnet operate on mainnet --nodeID string set the NodeID of the validator to add --proof-of-possession string set the BLS proof of possession of the validator to add --public-key string set the BLS public key of the validator to add --staking-period duration how long this validator will be staking --start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format -t, --testnet fuji operate on testnet (alias to fuji) --weight uint set the staking weight of the validator to add --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### describe The primary describe command prints details of the primary network configuration to the console.
**Usage:** ```bash avalanche primary describe [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster -h, --help help for describe -l, --local operate on a local network --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche transaction The transaction command suite provides all of the utilities required to sign multisig transactions. **Usage:** ```bash avalanche transaction [subcommand] [flags] ``` **Subcommands:** - [`commit`](#avalanche-transaction-commit): The transaction commit command commits a transaction by submitting it to the P-Chain. - [`sign`](#avalanche-transaction-sign): The transaction sign command signs a multisig transaction. **Flags:** ```bash -h, --help help for transaction --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### commit The transaction commit command commits a transaction by submitting it to the P-Chain. **Usage:** ```bash avalanche transaction commit [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for commit --input-tx-filepath string Path to the transaction signed by all signatories --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### sign The transaction sign command signs a multisig transaction. 
**Usage:** ```bash avalanche transaction sign [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for sign --input-tx-filepath string Path to the transaction file for signing -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche update Check if an update is available, and prompt the user to install it **Usage:** ```bash avalanche update [subcommand] [flags] ``` **Flags:** ```bash -c, --confirm Assume yes for installation -h, --help help for update -v, --version version for update --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche validator The validator command suite provides a collection of tools for managing validator balance on the P-Chain. The validator's balance is used to pay the continuous fee to the P-Chain.
When this balance reaches 0, the validator will be considered inactive and will no longer participate in validating the L1. **Usage:** ```bash avalanche validator [subcommand] [flags] ``` **Subcommands:** - [`getBalance`](#avalanche-validator-getbalance): This command gets the remaining validator P-Chain balance that is available to pay the P-Chain continuous fee - [`increaseBalance`](#avalanche-validator-increasebalance): This command increases the validator P-Chain balance - [`list`](#avalanche-validator-list): This command gets a list of the validators of the L1 **Flags:** ```bash -h, --help help for validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### getBalance This command gets the remaining validator P-Chain balance that is available to pay the P-Chain continuous fee **Usage:** ```bash avalanche validator getBalance [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for getBalance --l1 string name of L1 -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string node ID of the validator -t, --testnet fuji operate on testnet (alias to fuji) --validation-id string validation ID of the validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### increaseBalance This command increases the validator P-Chain balance **Usage:** ```bash avalanche validator increaseBalance [subcommand] [flags] ``` **Flags:** ```bash --balance float amount of AVAX to increase validator's balance by --cluster string operate on the given cluster --devnet operate on a
devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for increaseBalance -k, --key string select the key to use [fuji/devnet deploy only] --l1 string name of L1 (to increase balance of bootstrap validators only) -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string node ID of the validator -t, --testnet fuji operate on testnet (alias to fuji) --validation-id string validation ID of the validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list This command gets a list of the validators of the L1 **Usage:** ```bash avalanche validator list [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for list -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` # RPC APIs (/docs/rpcs) --- title: RPC APIs description: AvalancheGo RPC API References for interacting with Avalanche nodes --- # RPC APIs This section contains comprehensive documentation for all RPC (Remote Procedure Call) APIs available in the Avalanche ecosystem. ## Chain-Specific APIs ### C-Chain (Contract Chain) The C-Chain is an instance of the Ethereum Virtual Machine (EVM). Documentation for C-Chain RPC methods and transaction formats. ### P-Chain (Platform Chain) The P-Chain manages validators, staking, and subnets.
Documentation for P-Chain RPC methods and transaction formats. ### X-Chain (Exchange Chain) The X-Chain is responsible for asset creation and trading. Documentation for X-Chain RPC methods and transaction formats. ### Subnet-EVM The Subnet-EVM is an instance of the EVM for Subnet / Layer 1 chains. Documentation for Subnet-EVM RPC methods and transaction formats. ## Other APIs Additional RPC APIs for node administration, health monitoring, indexing, metrics, and more. # Introduction (/docs/virtual-machines/evm-l1-customization) --- title: Introduction description: Learn how to customize the Ethereum Virtual Machine with EVM and Precompiles. root: true --- Welcome to the EVM customization guide. This documentation provides an overview of **EVM**, the purpose of **Validator Manager Contracts**, the capabilities of **precompiles**, and how you can create custom precompiles to extend the functionality of the Ethereum Virtual Machine (EVM). ## Overview of EVM EVM is Avalanche's customized version of the Ethereum Virtual Machine, tailored to run on Avalanche L1s. It allows developers to deploy Solidity smart contracts with enhanced capabilities, benefiting from Avalanche's high throughput and low latency. EVM enables more flexibility and performance optimizations compared to the standard EVM. ## Validator Manager Contracts Validator Manager Contracts (VMCs) are smart contracts that manage the validators of an L1. They allow you to define rules and criteria for validator participation directly within smart contracts. VMCs enable dynamic validator sets, making it easier to add or remove validators without requiring a network restart. This provides greater control over the L1's validator management and enhances network governance. ## Precompiles Precompiles are specialized smart contracts that execute native Go code within the EVM context. 
They act as a bridge between Solidity and lower-level functionalities, allowing for performance optimizations and access to features not available in Solidity alone. ### Default Precompiles in EVM EVM comes with a set of default precompiles that extend the EVM's functionality. For detailed documentation on each precompile, visit the [Avalanche L1s Precompiles](/docs/avalanche-l1s/evm-configuration/evm-l1-customization#precompiles) section: - [AllowList](/docs/avalanche-l1s/evm-configuration/allowlist): A reusable interface for permission management - [Permissions](/docs/avalanche-l1s/evm-configuration/permissions): Control contract deployment and transaction submission - [Tokenomics](/docs/avalanche-l1s/evm-configuration/tokenomics): Manage native token supply and minting - [Transaction Fees](/docs/avalanche-l1s/evm-configuration/transaction-fees): Configure fee parameters and reward mechanisms - [Warp Messenger](/docs/avalanche-l1s/evm-configuration/warpmessenger): Perform cross-chain operations ## Custom Precompiles One of the powerful features of EVM is the ability to create custom precompiles. By writing Go code and integrating it as a precompile, you can extend the EVM's functionality to suit specific use cases. Custom precompiles allow you to: - Achieve higher performance for computationally intensive tasks. - Access lower-level system functions not available in Solidity. - Implement custom cryptographic functions or algorithms. - Interact with external systems or data sources. Creating custom precompiles opens up a wide range of possibilities for developers to optimize and expand their decentralized applications on Avalanche L1s. By leveraging EVM, Validator Manager Contracts, and precompiles, you can build customized and efficient decentralized applications with greater control and enhanced functionality. Explore the following sections to learn how to implement and utilize these powerful features. 
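To make the custom-precompile idea concrete, here is a toy, self-contained sketch of the shape a precompile's execution function takes: raw input bytes in, gas charged against a supplied budget, output bytes out. The function name, flat gas price, and signature below are illustrative only; the real subnet-evm precompile interface passes additional arguments (roughly: state access, caller and contract addresses, and a read-only flag).

```go
package main

import (
	"errors"
	"fmt"
)

// gasPerByte is an illustrative flat gas cost; a real precompile defines
// its own pricing in a RequiredGas-style function.
const gasPerByte = 3

// sumBytes is a toy precompile: it sums all input bytes and returns the
// result as a single byte, charging gas per input byte. The signature is
// schematic, not the actual subnet-evm precompile API.
func sumBytes(input []byte, suppliedGas uint64) (output []byte, remainingGas uint64, err error) {
	cost := uint64(len(input)) * gasPerByte
	if suppliedGas < cost {
		return nil, 0, errors.New("out of gas")
	}
	var sum byte
	for _, b := range input {
		sum += b
	}
	return []byte{sum}, suppliedGas - cost, nil
}

func main() {
	out, gasLeft, err := sumBytes([]byte{1, 2, 3}, 100)
	if err != nil {
		panic(err)
	}
	fmt.Println(out[0], gasLeft) // 6 91
}
```

Because the function is native Go rather than EVM bytecode, loops like this run at machine speed, which is exactly the performance argument made above.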
# Introduction (/docs/virtual-machines) --- title: Introduction description: Learn about the execution layer of a blockchain network. --- A Virtual Machine (VM) is a blueprint for a blockchain. Blockchains are instantiated from a VM, similar to how objects are instantiated from a class definition. VMs can define anything you want, but will generally define transactions that are executed and how blocks are created. ## Blocks and State Virtual Machines deal with blocks and state. The functionality provided by VMs is to: - Define the representation of a blockchain's state - Represent the operations in that state - Apply the operations in that state Each block in the blockchain contains a set of state transitions. Each block is applied in order from the blockchain's initial genesis block to its last accepted block to reach the latest state of the blockchain. ## Blockchain A blockchain relies on two major components: The **Consensus Engine** and the **VM**. The VM defines application specific behavior and how blocks are built and parsed to create the blockchain. All VMs run on top of the Avalanche Consensus Engine, which allows nodes in the network to agree on the state of the blockchain. Here's a quick example of how VMs interact with consensus: 1. A node wants to update the blockchain's state 2. The node's VM will notify the consensus engine that it wants to update the state 3. The consensus engine will request the block from the VM 4. The consensus engine will verify the returned block using the VM's implementation of `Verify()` 5. The consensus engine will get the network to reach consensus on whether to accept or reject the newly verified block. Every virtuous (well-behaved) node on the network will have the same preference for a particular block 6. Depending upon the consensus results, the engine will either accept or reject the block. 
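To ground the "blocks as state transitions" idea above, here is a toy sketch of a hypothetical VM whose state is a single number and where each block sets it to a strictly larger value; replaying the blocks in order from the genesis state yields the latest state. All names are illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

// block is a toy state transition: accepting it sets the chain's state
// (a single number) to Value.
type block struct {
	Value uint64
}

// replay applies blocks in order, from genesis onward, to reach the
// latest state, mirroring how a VM derives state from its accepted
// chain. A block that does not strictly increase the value is invalid.
func replay(genesis uint64, chain []block) (uint64, error) {
	state := genesis
	for _, b := range chain {
		if b.Value <= state {
			return 0, errors.New("invalid block: value must increase")
		}
		state = b.Value
	}
	return state, nil
}

func main() {
	state, err := replay(0, []block{{1}, {5}, {9}})
	if err != nil {
		panic(err)
	}
	fmt.Println(state) // 9
}
```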
What happens when a block is accepted or rejected is specific to the implementation of the VM. AvalancheGo provides the consensus engine for every blockchain on the Avalanche Network. The consensus engine relies on the VM interface to handle building, parsing, and storing blocks, as well as verifying and executing blocks on behalf of the consensus engine. This decoupling between the application and consensus layer allows developers to build their applications quickly by implementing virtual machines, without having to worry about the consensus layer managed by Avalanche, which deals with how nodes agree on whether or not to accept a block. ## Installing a VM VMs are supplied as binaries to a node running `AvalancheGo`. These binaries must be named the VM's assigned **VMID**. A VMID is a 32-byte hash encoded in CB58 that is generated when you build your VM. In order to install a VM, its binary must be installed in the `AvalancheGo` plugin path. See [here](/docs/nodes/configure/configs-flags#--plugin-dir-string) for more details. Multiple VMs can be installed in this location. Each VM runs as a separate process from AvalancheGo and communicates with `AvalancheGo` using gRPC calls. This functionality is enabled by **RPCChainVM**, a special VM which wraps around other VM implementations and bridges the VM and AvalancheGo, establishing a standardized communication protocol between them. During VM creation, handshake messages are exchanged via **RPCChainVM** between AvalancheGo and the VM installation. Ensure matching **RPCChainVM** protocol versions to avoid errors by updating your VM or using a [different version of AvalancheGo](https://github.com/ava-labs/AvalancheGo/releases). Note that some VMs may not support the latest protocol version. ### API Handlers Users can interact with a blockchain and its VM through handlers exposed by the VM's API.
VMs expose two types of handlers to serve responses for incoming requests: - **Blockchain Handlers**: Referred to as handlers, these expose APIs to interact with a blockchain instantiated by a VM. The API endpoint will be different for each chain. The endpoint for a handler is `/ext/bc/[chainID]`. - **VM Handlers**: Referred to as static handlers, these expose APIs to interact with the VM directly. One example API would be to parse genesis data to instantiate a new blockchain. The endpoint for a static handler is `/ext/vm/[vmID]`. For any readers familiar with object-oriented programming, static and non-static handlers on a VM are analogous to static and non-static methods on a class. Blockchain handlers can be thought of as methods on an object, whereas VM handlers can be thought of as static methods on a class. ### Instantiate a VM The `vm.Factory` interface is implemented to create new VM instances from which a blockchain can be initialized. The factory's `New` method shown below provides `AvalancheGo` with an instance of the VM. It's defined in the [`factory.go`](https://github.com/ava-labs/timestampvm/blob/main/timestampvm/factory.go) file of the `timestampvm` repository. ```go // Returning a new VM instance from VM's factory func (f *Factory) New(*snow.Context) (interface{}, error) { return &vm.VM{}, nil } ``` ### Initializing a VM to Create a Blockchain Before a VM can run, AvalancheGo will initialize it by invoking its `Initialize` method. Here, the VM will bootstrap itself and set up anything it requires before it starts running. This might involve setting up its database, mempool, genesis state, or anything else the VM requires to run. ```go if err := vm.Initialize( ctx.Context, vmDBManager, genesisData, chainConfig.Upgrade, chainConfig.Config, msgChan, fxs, sender, ); err != nil { // handle the initialization failure return err } ``` You can refer to the [implementation](https://github.com/ava-labs/timestampvm/blob/main/timestampvm/vm.go#L75) of `vm.Initialize` in the TimestampVM repository.
## Interfaces Every VM should implement the following interfaces: ### `block.ChainVM` To reach a consensus on linear blockchains, Avalanche uses the Snowman consensus engine. To be compatible with Snowman, a VM must implement the `block.ChainVM` interface. For more information, see [here](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/vm.go). ```go title="snow/engine/snowman/block/vm.go" // ChainVM defines the required functionality of a Snowman VM. // // A Snowman VM is responsible for defining the representation of the state, // the representation of operations in that state, the application of operations // on that state, and the creation of the operations. Consensus will decide on // if the operation is executed and the order operations are executed. // // For example, suppose we have a VM that tracks an increasing number that // is agreed upon by the network. // The state is a single number. // The operation is setting the number to a new, larger value. // Applying the operation will save to the database the new value. // The VM can attempt to issue a new number, of larger value, at any time. // Consensus will ensure the network agrees on the number at every block height. type ChainVM interface { common.VM Getter Parser // Attempt to create a new block from data contained in the VM. // // If the VM doesn't want to issue a new block, an error should be // returned. BuildBlock() (snowman.Block, error) // Notify the VM of the currently preferred block. // // This should always be a block that has no children known to consensus. SetPreference(ids.ID) error // LastAccepted returns the ID of the last accepted block. // // If no blocks have been accepted by consensus yet, it is assumed there is // a definitionally accepted block, the Genesis block, that will be // returned. LastAccepted() (ids.ID, error) } // Getter defines the functionality for fetching a block by its ID. type Getter interface { // Attempt to load a block. 
// // If the block does not exist, an error should be returned. // GetBlock(ids.ID) (snowman.Block, error) } // Parser defines the functionality for fetching a block by its bytes. type Parser interface { // Attempt to create a block from a stream of bytes. // // The block should be represented by the full byte array, without extra // bytes. ParseBlock([]byte) (snowman.Block, error) } ``` ### `common.VM` `common.VM` is a type that every `VM` must implement. For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/common/vm.go). ```go title="snow/engine/common/vm.go" // VM describes the interface that all consensus VMs must implement type VM interface { // Contains handlers for VM-to-VM specific messages AppHandler // Returns nil if the VM is healthy. // Periodically called and reported via the node's Health API. health.Checkable // Connector represents a handler that is called on connection connect/disconnect validators.Connector // Initialize this VM. // [ctx]: Metadata about this VM. // [ctx.networkID]: The ID of the network this VM's chain is running on. // [ctx.chainID]: The unique ID of the chain this VM is running on. // [ctx.Log]: Used to log messages // [ctx.NodeID]: The unique staker ID of this node. // [ctx.Lock]: A Read/Write lock shared by this VM and the consensus // engine that manages this VM. The write lock is held // whenever code in the consensus engine calls the VM. // [dbManager]: The manager of the database this VM will persist data to. // [genesisBytes]: The byte-encoding of the genesis information of this // VM. The VM uses it to initialize its state. For // example, if this VM were an account-based payments // system, `genesisBytes` would probably contain a genesis // transaction that gives coins to some accounts, and this // transaction would be in the genesis block. // [toEngine]: The channel used to send messages to the consensus engine. 
// [fxs]: Feature extensions that attach to this VM. Initialize( ctx *snow.Context, dbManager manager.Manager, genesisBytes []byte, upgradeBytes []byte, configBytes []byte, toEngine chan<- Message, fxs []*Fx, appSender AppSender, ) error // Bootstrapping is called when the node is starting to bootstrap this chain. Bootstrapping() error // Bootstrapped is called when the node is done bootstrapping this chain. Bootstrapped() error // Shutdown is called when the node is shutting down. Shutdown() error // Version returns the version of the VM this node is running. Version() (string, error) // Creates the HTTP handlers for custom VM network calls. // // This exposes handlers that the outside world can use to communicate with // a static reference to the VM. Each handler has the path: // [Address of node]/ext/VM/[VM ID]/[extension] // // Returns a mapping from [extension]s to HTTP handlers. // // Each extension can specify how locking is managed for convenience. // // For example, it might make sense to have an extension for creating // genesis bytes this VM can interpret. CreateStaticHandlers() (map[string]*HTTPHandler, error) // Creates the HTTP handlers for custom chain network calls. // // This exposes handlers that the outside world can use to communicate with // the chain. Each handler has the path: // [Address of node]/ext/bc/[chain ID]/[extension] // // Returns a mapping from [extension]s to HTTP handlers. // // Each extension can specify how locking is managed for convenience. // // For example, if this VM implements an account-based payments system, // it might have an extension called `accounts`, where clients could get // information about their accounts. CreateHandlers() (map[string]*HTTPHandler, error) } ``` ### `snowman.Block` The `snowman.Block` interface defines the functionality a block must implement to be a block in a linear Snowman chain.
For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/consensus/snowman/block.go). ```go title="snow/consensus/snowman/block.go" // Block is a possible decision that dictates the next canonical block. // // Blocks are guaranteed to be Verified, Accepted, and Rejected in topological // order. Specifically, if Verify is called, then the parent has already been // verified. If Accept is called, then the parent has already been accepted. If // Reject is called, the parent has already been accepted or rejected. // // If the status of the block is Unknown, ID is assumed to be able to be called. // If the status of the block is Accepted or Rejected; Parent, Verify, Accept, // and Reject will never be called. type Block interface { choices.Decidable // Parent returns the ID of this block's parent. Parent() ids.ID // Verify that the state transition this block would make if accepted is // valid. If the state transition is invalid, a non-nil error should be // returned. // // It is guaranteed that the Parent has been successfully verified. Verify() error // Bytes returns the binary representation of this block. // // This is used for sending blocks to peers. The bytes should be able to be // parsed into the same block on another node. Bytes() []byte // Height returns the height of this block in the chain. Height() uint64 } ``` ### `choices.Decidable` This interface is a superset of every decidable object, such as transactions, blocks, and vertices. For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/choices/decidable.go). ```go title="snow/choices/decidable.go" // Decidable represents element that can be decided. // // Decidable objects are typically thought of as either transactions, blocks, or // vertices. type Decidable interface { // ID returns a unique ID for this element. 
// // Typically, this is implemented by using a cryptographic hash of a // binary representation of this element. An element should return the same // IDs upon repeated calls. ID() ids.ID // Accept this element. // // This element will be accepted by every correct node in the network. Accept() error // Reject this element. // // This element will not be accepted by any correct node in the network. Reject() error // Status returns this element's current status. // // If Accept has been called on an element with this ID, Accepted should be // returned. Similarly, if Reject has been called on an element with this // ID, Rejected should be returned. If the contents of this element are // unknown, then Unknown should be returned. Otherwise, Processing should be // returned. Status() Status } ``` # Manage VM Binaries (/docs/virtual-machines/manage-vm-binaries) --- title: Manage VM Binaries description: Learn about Avalanche Plugin Manager (APM) and how to use it to manage virtual machine binaries on existing AvalancheGo instances. --- Avalanche Plugin Manager (APM) is a command-line tool to manage virtual machine binaries on existing AvalancheGo instances. It enables adding and removing nodes on Avalanche L1s and upgrading VM plugin binaries as new versions are released to the plugin repository. GitHub: [https://github.com/ava-labs/apm](https://github.com/ava-labs/apm) ## `avalanche-plugins-core` `avalanche-plugins-core` is a plugin repository that ships with the `apm`. A plugin repository consists of a set of virtual machine and Avalanche L1 definitions that the `apm` consumes to allow users to quickly and easily download and manage VM binaries. GitHub: [https://github.com/ava-labs/avalanche-plugins-core](https://github.com/ava-labs/avalanche-plugins-core) # Simple VM in Any Language (/docs/virtual-machines/simple-vm-any-language) --- title: Simple VM in Any Language description: Learn how to implement a simple virtual machine in any language.
--- This is language-agnostic, high-level documentation explaining the basics of how to get started implementing your own virtual machine from scratch. Avalanche virtual machines are gRPC servers implementing Avalanche's [Proto interfaces](https://buf.build/ava-labs/avalanche). This means that it can be done in [any language that has a gRPC implementation](https://grpc.io/docs/languages/). ## Minimal Implementation To get the process started, at the minimum, you will need to implement the following interfaces: - [`vm.Runtime`](https://buf.build/ava-labs/avalanche/docs/main:vm.runtime) (Client) - [`vm.VM`](https://buf.build/ava-labs/avalanche/docs/main:vm) (Server) To build a blockchain taking advantage of AvalancheGo's consensus to build blocks, you will need to implement: - [AppSender](https://buf.build/ava-labs/avalanche/docs/main:appsender) (Client) - [Messenger](https://buf.build/ava-labs/avalanche/docs/main:messenger) (Client) To have a json-RPC endpoint, `/ext/bc/subnetId/rpc`, exposed by AvalancheGo, you will need to implement: - [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) (Server) You can and should use a tool like `buf` to generate the (Client/Server) code from the interfaces as stated in the [Avalanche module](https://buf.build/ava-labs/avalanche)'s page. There are _server_ and _client_ interfaces to implement. AvalancheGo calls the _server_ interfaces exposed by your VM, and your VM calls the _client_ interfaces exposed by AvalancheGo. ## Starting Process Your VM is started by AvalancheGo launching your binary. Your binary is started as a sub-process of AvalancheGo. While launching your binary, AvalancheGo passes an environment variable `AVALANCHE_VM_RUNTIME_ENGINE_ADDR` containing a URL. You must use this URL to initialize a `vm.Runtime` client.
Your VM, after having started a gRPC server implementing the VM interface, must call the `vm.Runtime` service's `Initialize` method with an [`InitializeRequest`](https://buf.build/ava-labs/avalanche/docs/main:vm.runtime#vm.runtime.InitializeRequest) containing the following parameters. - `protocolVersion`: It must match the `supported plugin version` of the [AvalancheGo release](https://github.com/ava-labs/AvalancheGo/releases) you are using. It is always part of the release notes. - `addr`: It is your gRPC server's address. It must be in the following format `host:port` (for example, `localhost:12345`). ## VM Initialization The service methods are described in the same order as they are called. You will need to implement these methods in your server. ### Pre-Initialization Sequence AvalancheGo starts/stops your process multiple times before launching the real initialization sequence. 1. [VM.Version](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Version) - Return: your VM's version. 2. [VM.CreateStaticHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateStaticHandlers) - Return: an empty array - (Not absolutely required). 3. [VM.Shutdown](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Shutdown) - You should gracefully stop your process. - Return: Empty ### Initialization Sequence 1. [VM.CreateStaticHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateStaticHandlers) - Return an empty array - (Not absolutely required). 2. [VM.Initialize](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Initialize) - Param: an [InitializeRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.InitializeRequest). - You must use this data to initialize your VM. - You should add the genesis block to your blockchain and set it as the last accepted block. - Return: an [InitializeResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.InitializeResponse) containing data about the genesis extracted from the `genesis_bytes` that was sent in the request. 3.
[VM.VerifyHeightIndex](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.VerifyHeightIndex) - Return: a [VerifyHeightIndexResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VerifyHeightIndexResponse) with the code `ERROR_UNSPECIFIED` to indicate that no error has occurred. 4. [VM.CreateHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateHandlers) - To serve json-RPC endpoint, `/ext/bc/subnetId/rpc` exposed by AvalancheGo - See [json-RPC](#json-rpc) for more detail - Create a [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) server and get its url. - Return: a `CreateHandlersResponse` containing a single item with the server's url. (or an empty array if not implementing the json-RPC endpoint) 5. [VM.StateSyncEnabled](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.StateSyncEnabled) - Return: `true` if you want to enable StateSync, `false` otherwise. 6. [VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState) _If you had specified `true` in the `StateSyncEnabled` result_ - Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `StateSyncing` value - Set your blockchain's state to `StateSyncing` - Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block. 7. [VM.GetOngoingSyncStateSummary](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.GetOngoingSyncStateSummary) _If you had specified `true` in the `StateSyncEnabled` result_ - Return: a [GetOngoingSyncStateSummaryResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.GetOngoingSyncStateSummaryResponse) built from the genesis block. 8. 
[VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState) - Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `Bootstrapping` value - Set your blockchain's state to `Bootstrapping` - Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block. 9. [VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference) - Param: `SetPreferenceRequest` containing the preferred block ID - Return: Empty 10. [VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState) - Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `NormalOp` value - Set your blockchain's state to `NormalOp` - Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block. 11. [VM.Connected](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Connected) (for every other node validating this Avalanche L1 in the network) - Param: a [ConnectedRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ConnectedRequest) with the NodeID and the version of AvalancheGo. - Return: Empty 12. [VM.Health](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Health) - Param: Empty - Return: a [HealthResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.HealthResponse) with an empty `details` property. 13. [VM.ParseBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.ParseBlock) - Param: A byte array containing a Block (the genesis block in this case) - Return: a [ParseBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ParseBlockResponse) built from the last accepted block. At this point, your VM is fully started and initialized. 
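The three `SetState` calls in the sequence walk the VM through a simple lifecycle: `StateSyncing`, then `Bootstrapping`, then `NormalOp`. A minimal sketch of the bookkeeping a VM might keep, with hypothetical names mirroring the request values:

```go
package main

import (
	"errors"
	"fmt"
)

// chainState mirrors the values AvalancheGo sends in SetStateRequest.
type chainState int

const (
	stateSyncing chainState = iota
	bootstrapping
	normalOp
)

// vm is a hypothetical VM tracking the lifecycle state AvalancheGo last set.
type vm struct {
	state chainState
}

// setState enforces the forward-only progression described above:
// StateSyncing -> Bootstrapping -> NormalOp.
func (v *vm) setState(next chainState) error {
	if next < v.state {
		return errors.New("cannot move back to an earlier lifecycle state")
	}
	v.state = next
	return nil
}

func main() {
	v := &vm{state: stateSyncing}
	for _, s := range []chainState{bootstrapping, normalOp} {
		if err := v.setState(s); err != nil {
			panic(err)
		}
	}
	fmt.Println(v.state == normalOp) // true
}
```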
### Building Blocks #### Transaction Gossiping Sequence When your VM receives transactions (for example using the [json-RPC](#json-rpc) endpoints), it can gossip them to the other nodes by using the [AppSender](https://buf.build/ava-labs/avalanche/docs/main:appsender) service. Suppose we have a three-node network with nodeX, nodeY, and nodeZ. Let's say nodeX has received a new transaction on its json-RPC endpoint. [`AppSender.SendAppGossip`](https://buf.build/ava-labs/avalanche/docs/main:appsender#appsender.AppSender.SendAppGossip) (_client_): You must serialize your transaction data into a byte array and call `SendAppGossip` to propagate the transaction. AvalancheGo then propagates this to the other nodes. [VM.AppGossip](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.AppGossip): You must deserialize the transaction and store it for the next block. - Param: A byte array containing your transaction data, and the NodeID of the node which sent the gossip message. - Return: Empty #### Block Building Sequence Whenever your VM is ready to build a new block, it will initiate the block building process by using the [Messenger](https://buf.build/ava-labs/avalanche/docs/main:messenger) service. Suppose nodeY wants to build the block. You will probably implement some kind of background worker that checks every second whether there are any pending transactions: _client_ [`Messenger.Notify`](https://buf.build/ava-labs/avalanche/docs/main:messenger#messenger.Messenger.Notify): You must issue a notify request to AvalancheGo by calling the method with the `MESSAGE_BUILD_BLOCK` value. 1. [VM.BuildBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BuildBlock) - Param: Empty - You must build a block with your pending transactions. Serialize it to a byte array.
- Store this block in memory as a pending block - Return: a [BuildBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.BuildBlockResponse) from the newly built block and its associated data (`id`, `parent_id`, `height`, `timestamp`). 2. [VM.BlockVerify](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockVerify) - Param: The byte array containing the block data - Return: the block's timestamp 3. [VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference) - Param: The block's ID - You must mark this block as the next preferred block. - Return: Empty On the other nodes, AvalancheGo then calls: 1. [VM.ParseBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.ParseBlock) - Param: A byte array containing the newly built block's data - Store this block in memory as a pending block - Return: a [ParseBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ParseBlockResponse) built from the last accepted block. 2. [VM.BlockVerify](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockVerify) - Param: The byte array containing the block data - Return: the block's timestamp 3. [VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference) - Param: The block's ID - You must mark this block as the next preferred block. - Return: Empty Finally, once the block is decided, AvalancheGo calls [VM.BlockAccept](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockAccept): You must accept this block as your last final block. - Param: The block's ID - Return: Empty #### Managing Conflicts Conflicts happen when two or more nodes propose the next block at the same time. AvalancheGo takes care of this and decides, using Snowman consensus, which block should be considered final and which blocks should be rejected. On the VM side, all there is to do is implement the `VM.BlockAccept` and `VM.BlockReject` methods.
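A sketch of that bookkeeping, assuming the VM keeps verified-but-undecided blocks in a pending map keyed by block ID (all names are hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// vm keeps pending (verified, undecided) blocks keyed by ID, plus the
// ID of the last accepted block.
type vm struct {
	pending      map[string][]byte
	lastAccepted string
}

// blockAccept finalizes a pending block; consensus guarantees that exactly
// one of the conflicting blocks is accepted on every correct node.
func (v *vm) blockAccept(id string) error {
	if _, ok := v.pending[id]; !ok {
		return errors.New("unknown block")
	}
	delete(v.pending, id)
	v.lastAccepted = id
	return nil
}

// blockReject discards a pending block that lost the conflict.
func (v *vm) blockReject(id string) error {
	if _, ok := v.pending[id]; !ok {
		return errors.New("unknown block")
	}
	delete(v.pending, id)
	return nil
}

func main() {
	v := &vm{pending: map[string][]byte{"0x123": nil, "0x321": nil, "0x456": nil}}
	v.blockAccept("0x123")
	v.blockReject("0x321")
	v.blockReject("0x456")
	fmt.Println(v.lastAccepted, len(v.pending)) // 0x123 0
}
```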
_nodeX proposes block `0x123...`, nodeY proposes block `0x321...`, and nodeZ proposes block `0x456...`_ There are three conflicting blocks (different hashes), and if we look at our VM's log files, we can see that AvalancheGo uses Snowman to decide which block must be accepted. ```bash ... snowman/voter.go:58 filtering poll results ... ... snowman/voter.go:65 finishing poll ... ... snowman/voter.go:87 Snowman engine can't quiesce ... ... snowman/voter.go:58 filtering poll results ... ... snowman/voter.go:65 finishing poll ... ... snowman/topological.go:600 accepting block ``` Suppose AvalancheGo accepts block `0x123...`. The following RPC methods are called on all nodes: 1. [VM.BlockAccept](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockAccept): You must accept this block as your last final block. - Param: The block's ID (`0x123...`) - Return: Empty 2. [VM.BlockReject](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockReject): You must mark this block as rejected. - Param: The block's ID (`0x321...`) - Return: Empty 3. [VM.BlockReject](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockReject): You must mark this block as rejected. - Param: The block's ID (`0x456...`) - Return: Empty ### JSON-RPC To enable your json-RPC endpoint, you must implement the [HandleSimple](https://buf.build/ava-labs/avalanche/docs/main:http#http.HTTP.HandleSimple) method of the [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) interface. - Param: a [HandleSimpleHTTPRequest](https://buf.build/ava-labs/avalanche/docs/main:http#http.HandleSimpleHTTPRequest) containing the original request's method, URL, headers, and body. - Analyze, deserialize, and handle the request. For example, if the request represents a transaction, you must deserialize it, check its signature, store it, and gossip it to the other nodes using the [messenger client](#block-building-sequence).
- Return the [HandleSimpleHTTPResponse](https://buf.build/ava-labs/avalanche/docs/main:http#http.HandleSimpleHTTPResponse) response that will be sent back to the original sender. This server is registered with AvalancheGo during the [initialization process](#initialization-sequence) when the `VM.CreateHandlers` method is called. You must simply respond with the server's url in the `CreateHandlersResponse` result. # Getting Started (/docs/api-reference/metrics-api/getting-started) --- title: Getting Started description: Getting Started with the Metrics API icon: Rocket --- The Metrics API is designed to be simple and accessible, requiring no authentication to get started. Just choose your endpoint, make your query, and instantly access on-chain data and analytics to power your applications. The following query retrieves the daily count of active addresses on the Avalanche C-Chain(43114) over the course of one month (from August 1, 2024 12:00:00 AM to August 31, 2024 12:00:00 AM), providing insights into user activity on the chain for each day during that period. With this data you can use JavaScript visualization tools like Chart.js, D3.js, Highcharts, Plotly.js, or Recharts to create interactive and insightful visual representations. 
```bash curl --request GET \ --url 'https://metrics.avax.network/v2/chains/43114/metrics/activeAddresses?startTimestamp=1722470400&endTimestamp=1725062400&timeInterval=day&pageSize=31' ``` Response: ```json { "results": [ { "value": 37738, "timestamp": 1724976000 }, { "value": 53934, "timestamp": 1724889600 }, { "value": 58992, "timestamp": 1724803200 }, { "value": 73792, "timestamp": 1724716800 }, { "value": 70057, "timestamp": 1724630400 }, { "value": 46452, "timestamp": 1724544000 }, { "value": 46323, "timestamp": 1724457600 }, { "value": 73399, "timestamp": 1724371200 }, { "value": 52661, "timestamp": 1724284800 }, { "value": 52497, "timestamp": 1724198400 }, { "value": 50574, "timestamp": 1724112000 }, { "value": 46999, "timestamp": 1724025600 }, { "value": 45320, "timestamp": 1723939200 }, { "value": 54964, "timestamp": 1723852800 }, { "value": 60251, "timestamp": 1723766400 }, { "value": 48493, "timestamp": 1723680000 }, { "value": 71091, "timestamp": 1723593600 }, { "value": 50456, "timestamp": 1723507200 }, { "value": 46989, "timestamp": 1723420800 }, { "value": 50984, "timestamp": 1723334400 }, { "value": 46988, "timestamp": 1723248000 }, { "value": 66943, "timestamp": 1723161600 }, { "value": 64209, "timestamp": 1723075200 }, { "value": 57478, "timestamp": 1722988800 }, { "value": 80553, "timestamp": 1722902400 }, { "value": 70472, "timestamp": 1722816000 }, { "value": 53678, "timestamp": 1722729600 }, { "value": 70818, "timestamp": 1722643200 }, { "value": 99842, "timestamp": 1722556800 }, { "value": 76515, "timestamp": 1722470400 } ] } ``` Congratulations! You’ve successfully made your first query to the Metrics API. 🚀🚀🚀 # Metrics API (/docs/api-reference/metrics-api) --- title: Metrics API description: Access real-time and historical metrics for Avalanche networks icon: ChartLine --- ### What is the Metrics API? 
The Metrics API equips web3 developers with a robust suite of tools to access and analyze on-chain activity across Avalanche’s primary network, Avalanche L1s, and other supported EVM chains. This API delivers comprehensive metrics and analytics, enabling you to seamlessly integrate historical data on transactions, gas consumption, throughput, staking, and more into your applications. The Metrics API, along with the [Data API](/docs/api-reference/data-api), is the driving force behind every graph you see on the [Avalanche Explorer](https://explorer.avax.network/). From transaction trends to staking insights, the visualizations and data presented are all powered by these APIs, offering real-time and historical insights that are essential for building sophisticated, data-driven blockchain products. ### Features * **Chain Throughput:** Retrieve detailed metrics on gas consumption, Transactions Per Second (TPS), and gas prices, including rolling windows of data for granular analysis. * **Cumulative Metrics:** Access cumulative data on addresses, contracts, deployers, and transaction counts, providing insights into network growth over time. * **Staking Information:** Obtain staking-related data, including the number of validators and delegators, along with their respective weights, across different subnets. * **Blockchains and Subnets:** Get information about supported blockchains, including EVM Chain IDs, blockchain IDs, and subnet associations, facilitating multi-chain analytics. * **Composite Queries:** Perform advanced queries by combining different metric types and conditions, enabling detailed and customizable data retrieval. The Metrics API is designed to provide developers with powerful tools to analyze and monitor on-chain activity across Avalanche’s primary network, Avalanche L1s, and other supported EVM chains. Below is an overview of the key features available: ### Chain Throughput Metrics * **Gas Consumption**
Track the average and maximum gas consumption per second, helping to understand network performance and efficiency. * **Transactions Per Second (TPS)**
Monitor the average and peak TPS to assess the network’s capacity and utilization. * **Gas Prices**
Analyze average and maximum gas prices over time to optimize transaction costs and predict fee trends. ### Cumulative Metrics * **Address Growth**
Access the cumulative number of active addresses on a chain, providing insights into network adoption and user activity. * **Contract Deployment**
Monitor the cumulative number of smart contracts deployed, helping to gauge developer engagement and platform usage. * **Transaction Count**
Track the cumulative number of transactions, offering a clear view of network activity and transaction volume. ### Staking Information * **Validator and Delegator Counts**
Retrieve the number of active validators and delegators for a given L1, crucial for understanding network security and decentralization. * **Staking Weights**
Access the total stake weight of validators and delegators, helping to assess the distribution of staked assets across the network. ### Rolling Window Analytics * **Short-Term and Long-Term Metrics:** Perform rolling window analysis on various metrics like gas used, TPS, and gas prices, allowing for both short-term and long-term trend analysis. * **Customizable Time Frames:** Choose from different time intervals (hourly, daily, monthly) to suit your specific analytical needs. ### Blockchain and L1 Information * **Chain and L1 Mapping:** Get detailed information about EVM chains and their associated L1s, including chain IDs, blockchain IDs, and subnet IDs, facilitating cross-chain analytics. ### Advanced Composite Queries * **Custom Metrics Combinations**: Combine multiple metrics and apply logical operators to perform sophisticated queries, enabling deep insights and tailored analytics. * **Paginated Results:** Handle large datasets efficiently with paginated responses, ensuring seamless data retrieval in your applications. The Metrics API equips developers with the tools needed to build robust analytics, monitoring, and reporting solutions, leveraging the full power of multi-chain data across the Avalanche ecosystem and beyond. # Rate Limits (/docs/api-reference/metrics-api/rate-limits) --- title: Rate Limits description: Rate Limits for the Metrics API icon: Clock --- # Rate Limits Rate limiting is managed through a weighted scoring system, known as Compute Units (CUs). Each API request consumes a specified number of CUs, determined by the complexity of the request. This system is designed to accommodate basic requests while efficiently handling more computationally intensive operations. 
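Given a per-minute CU budget and a per-request CU cost (both listed in the tables below), estimating your request headroom is simple arithmetic. A sketch using the Free-tier and endpoint-weight figures from this page:

```python
# Illustrative CU costs from the Metrics API rate-limit tables below.
CU_COSTS = {"free": 1, "small": 20, "medium": 100, "large": 500, "xl": 1000, "xxl": 3000}
FREE_TIER_PER_MINUTE = 8_000  # Free-tier per-minute CU limit

def max_requests_per_minute(weight: str, per_minute_limit: int = FREE_TIER_PER_MINUTE) -> int:
    """How many requests of a given weight fit in the per-minute CU budget."""
    return per_minute_limit // CU_COSTS[weight]

# A Medium-weight metrics query costs 100 CUs, so the Free tier allows:
print(max_requests_per_minute("medium"))  # 80
```

The same arithmetic applies to the daily limit; budget for your heaviest endpoints first, since one XXL composite query consumes as much as 3,000 Free-weight calls.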
## Rate Limit Tiers The maximum CUs (rate-limiting score) for a user depends on their subscription level and is delineated in the following table: | Subscription Level | Per Minute Limit (CUs) | Per Day Limit (CUs) | | :----------------- | :--------------------- | :------------------ | | Free | 8,000 | 1,200,000 | > We are working on new subscription tiers with higher rate limits to support even greater request volumes. ## Rate Limit Categories The CUs for each category are defined in the following table: | Weight | CU Value | | :----- | :------- | | Free | 1 | | Small | 20 | | Medium | 100 | | Large | 500 | | XL | 1000 | | XXL | 3000 | ## Rate Limits for Metrics Endpoints The CUs for each route are defined in the table below: | Endpoint | Method | Weight | CU Value | | :---------------------------------------------------------- | :----- | :----- | :------- | | `/v2/health-check` | GET | Free | 1 | | `/v2/chains` | GET | Free | 1 | | `/v2/chains/{chainId}` | GET | Free | 1 | | `/v2/chains/{chainId}/metrics/{metric}` | GET | Medium | 100 | | `/v2/chains/{chainId}/teleporterMetrics/{metric}` | GET | Medium | 100 | | `/v2/chains/{chainId}/rollingWindowMetrics/{metric}` | GET | Medium | 100 | | `/v2/networks/{network}/metrics/{metric}` | GET | Medium | 100 | | `/v2/chains/{chainId}/contracts/{address}/nfts:listHolders` | GET | Large | 500 | | `/v2/chains/{chainId}/contracts/{address}/balances` | GET | XL | 1000 | | `/v2/chains/43114/btcb/bridged:getAddresses` | GET | Large | 500 | | `/v2/subnets/{subnetId}/validators:getAddresses` | GET | Large | 500 | | `/v2/lookingGlass/compositeQuery` | POST | XXL | 3000 | All rate limits, weights, and CU values are subject to change. # Usage Guide (/docs/api-reference/metrics-api/usage-guide) --- title: Usage Guide description: Usage Guide for the Metrics API icon: Code --- The Metrics API does not require authentication, making it straightforward to integrate into your applications. 
You can start making API requests without the need for an API key or any authentication headers. #### Making Requests You can interact with the Metrics API by sending HTTP GET requests to the provided endpoints. Below is an example of a simple `curl` request. ```bash curl -H "Content-Type: application/json" "https://metrics.avax.network/v1/avg_tps/{chainId}" ``` In the above request, replace `chainId` with the specific chain ID you want to query. For example, to retrieve the average transactions per second (TPS) for a specific chain (in this case, chain ID 43114), you can use the following endpoint: ```bash curl "https://metrics.avax.network/v1/avg_tps/43114" ``` The API will return a JSON response containing the average TPS for the specified chain over a series of timestamps; `lastRun` is a timestamp indicating when the last data point was updated: ```json { "results": [ {"timestamp": 1724716800, "value": 1.98}, {"timestamp": 1724630400, "value": 2.17}, {"timestamp": 1724544000, "value": 1.57}, {"timestamp": 1724457600, "value": 1.82}, // Additional data points... ], "status": 200, "lastRun": 1724780812 } ``` ### Rate Limits Even though the Metrics API does not require authentication, it still enforces rate limits to ensure stability and performance. If you exceed these limits, the server will respond with a `429 Too Many Requests` HTTP response code. ### Error Types The API generates standard error responses along with error codes based on provided requests and parameters. Typically, response codes within the `2XX` range signify successful requests, while those within the `4XX` range point to errors originating from the client's side. Meanwhile, response codes within the `5XX` range indicate problems on the server's side.
The error response body is formatted like this: ```json { "message": ["Invalid address format"], // route specific error message "error": "Bad Request", // error type "statusCode": 400 // http response code } ``` Let's go through every error code that we can respond with: | Error Code | Error Type | Description | | ---------- | --------------------- | ----------- | | **400** | Bad Request | Bad requests generally mean the client has passed invalid or malformed parameters. Error messages in the response can help in evaluating the error. | | **401** | Unauthorized | When a client attempts to access resources that require authorization credentials but lacks proper authentication in the request, the server responds with 401. | | **403** | Forbidden | When a client attempts to access resources with valid credentials but doesn't have the privilege to perform that action, the server responds with 403. | | **404** | Not Found | The 404 error is mostly returned when the client requests either a mistyped URL, a resource that has been moved or deleted, or a resource that doesn't exist. | | **500** | Internal Server Error | The 500 error is a generic server-side error returned for any uncaught and unexpected issues on the server side. This should be very rare; you may reach out to us if the problem persists. | | **502** | Bad Gateway | This is an internal error indicating an invalid response received by the client-facing proxy or gateway from the upstream server. | | **503** | Service Unavailable | The 503 error is returned for certain routes on a particular Subnet. This indicates an internal problem with our Subnet node, and does not necessarily mean the Subnet is down or affected. |
### Pagination For endpoints that return large datasets, the Metrics API employs pagination to manage the results. When querying for lists of data, you may receive a `nextPageToken` in the response, which can be used to request the next page of data. Example response with pagination: ```json { "results": [...], "nextPageToken": "3d22deea-ea64-4d30-8a1e-c2a353b67e90" } ``` To retrieve the next set of results, include the `nextPageToken` in your subsequent request: ```bash curl -H "Content-Type: application/json" \ "https://metrics.avax.network/v1/avg_tps/{chainId}?pageToken=3d22deea-ea64-4d30-8a1e-c2a353b67e90" ``` ### Pagination Details #### Page Token Structure The `nextPageToken` is a UUID-based token provided in the response when additional pages of data are available. This token serves as a pointer to the next set of data. * **UUID Generation**: The `nextPageToken` is generated uniquely for each pagination scenario, ensuring security and unpredictability. * **Expiration**: The token is valid for 24 hours from the time it is generated. After this period, the token will expire, and a new request starting from the initial page will be required. * **Presence**: The token is only included in the response when there is additional data available. If no more data exists, the token will not be present. #### Integration and Usage To use the pagination system effectively: * Check if the `nextPageToken` is present in the response. * If present, include this token in the subsequent request to fetch the next page of results. * Ensure that the follow-up request is made within the 24-hour window after the token was generated to avoid token expiration. By utilizing the pagination mechanism, you can efficiently manage and navigate through large datasets, ensuring a smooth data retrieval process.
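Putting the pagination rules together, a minimal client-side loop might look like the following sketch. `fetch_page` is a hypothetical helper standing in for whatever HTTP call you use; it takes a page token (or `None` for the first page) and returns the decoded JSON response:

```python
from typing import Callable, Iterator, Optional

def iterate_results(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Yield every result, following nextPageToken until it disappears."""
    token = None
    while True:
        page = fetch_page(token)           # e.g. GET ...?pageToken={token}
        yield from page.get("results", [])
        token = page.get("nextPageToken")  # absent on the last page
        if token is None:
            break

# Example with stubbed pages (real code would perform HTTP requests):
pages = {
    None: {"results": [{"value": 1}], "nextPageToken": "abc"},
    "abc": {"results": [{"value": 2}]},
}
print(list(iterate_results(lambda t: pages[t])))  # [{'value': 1}, {'value': 2}]
```

Because tokens expire after 24 hours, a long-running consumer should be prepared to restart from the first page if a follow-up request is rejected.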
### Swagger API Reference You can explore the full API definitions and interact with the endpoints in the Swagger documentation at: [https://metrics.avax.network/api](https://metrics.avax.network/api) # Webhooks API (/docs/api-reference/webhook-api) --- title: Webhooks API description: Real-time notifications for blockchain events on Avalanche networks icon: Webhook --- ### What is the Webhooks API? The Webhooks API lets you monitor real-time events on the Avalanche ecosystem, including the C-Chain, L1s, and the Platform Chain (P and X chains). By subscribing to specific events, you can receive instant notifications for on-chain occurrences without continuously polling the network. ### Key Features: * **Real-time notifications:** Receive immediate updates on specified on-chain activities without polling. * **Customizable:** Specify the desired event type to listen for, customizing notifications based on your individual requirements. * **Secure:** Employ shared secrets and signature-based verification to ensure that notifications originate from a trusted source. * **Broad Coverage:** * **C-Chain:** Mainnet and testnet, covering smart contract events, NFT transfers, and wallet-to-wallet transactions. * **Platform Chain (P and X chains):** Address and validator events, staking activities, and other platform-level transactions. By supporting both the C-Chain and the Platform Chain, you can monitor an even wider range of Avalanche activities. ### Use cases * **NFT marketplace transactions**: Get alerts for NFT minting, transfers, auctions, bids, sales, and other interactions within NFT marketplaces. * **Wallet notifications**: Receive alerts when an address performs actions such as sending, receiving, swapping, or burning assets. * **DeFi activities**: Receive notifications for various DeFi activities such as liquidity provisioning, yield farming, borrowing, lending, and liquidations.
* **Staking rewards:** Get real-time notifications when a validator stakes, receives delegation, or earns staking rewards on the P-Chain, enabling seamless monitoring of validator earnings and participation. ## APIs for continuous polling vs. Webhooks for events data The following example uses the address activity webhook topic to illustrate the difference between polling an API for wallet event data versus subscribing to a webhook topic to receive wallet events. ### Continuous polling Continuous polling is a method where your application repeatedly sends requests to an API at fixed intervals to check for new data or events. Think of it like checking your mailbox every five minutes to see if new mail has arrived, whether or not anything is there. * You want to track new transactions for a specific wallet. * Your application calls an API every few seconds (e.g., every 5 seconds) with a query like, “Are there any new transactions for this wallet since my last check?” * The API responds with either new transaction data or a confirmation that nothing has changed. **Downsides of continuous polling** * **Inefficiency:** Your app makes requests even when no new transactions occur, wasting computational resources, bandwidth, and potentially incurring higher API costs. For example, if no transactions happen for an hour, your app still sends hundreds of unnecessary requests. * **Delayed updates:** Since polling happens at set intervals, there’s a potential delay in detecting events. If a transaction occurs just after a poll, your app won’t know until the next check—up to 5 seconds later in our example. This lag can be critical for time-sensitive applications, like trading or notifications. * **Scalability challenges:** Monitoring one wallet might be manageable, but if you’re tracking dozens or hundreds of wallets, the number of requests multiplies quickly.
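To make the scaling problem concrete, a quick back-of-the-envelope sketch (illustrative numbers only):

```python
# Polling every 5 seconds for one hour, whether or not anything happened:
poll_interval_s = 5
requests_per_wallet_per_hour = 3600 // poll_interval_s
print(requests_per_wallet_per_hour)        # 720 requests per wallet

# Tracking 100 wallets multiplies that directly:
print(requests_per_wallet_per_hour * 100)  # 72000 requests per hour
```

With webhooks, the same hour of monitoring costs zero requests unless events actually occur.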
### Webhook subscription Webhooks are an event-driven alternative where your application subscribes to specific events, and the Avalanche service notifies you instantly when those events occur. It’s like signing up for a delivery alert—when the package (event) arrives, you get a text message right away, instead of checking the tracking site repeatedly. * Your app registers a webhook specifying an endpoint (e.g., `https://your-app.com/webhooks/transactions`) and the event type (e.g., `address_activity`). * When a new transaction occurs, we send a POST request to your endpoint with the transaction details. * Your app receives the data only when something happens, with no need to ask repeatedly. **Benefits of Avalanche webhooks** * **Real-Time updates:** Notifications arrive the moment a transaction is processed, eliminating delays inherent in polling. This is ideal for applications needing immediate responses, like alerting users or triggering automated actions. * **Efficiency:** Your app doesn’t waste resources making requests when there’s no new data. Data flows only when events occur. This reduces server load, bandwidth usage, and API call quotas. * **Scalability:** You can subscribe to events for multiple wallets or event types (e.g., transactions, smart contract calls) without increasing the number of requests your app makes. We handle the event detection and delivery, so your app scales effortlessly as monitoring needs grow. ## Event payload structure The Event structure always begins with the following parameters: ```json { "webhookId": "6d1bd383-aa8d-47b5-b793-da6d8a115fde", "eventType": "address_activity", "messageId": "8e4e7284-852a-478b-b425-27631c8d22d2", "event": { } } ``` **Parameters:** * `webhookId`: Unique identifier for the webhook in your account. * `eventType`: The event that caused the webhook to be triggered. In the future, there will be multiple types of events; for the time being, only the `address_activity` event is supported.
The `address_activity` event gets triggered whenever the specified addresses participate in a token or AVAX transaction. * `messageId`: Unique identifier per event sent. * `event`: Event payload. It contains details about the transaction, logs, and traces. By default, logs and internal transactions are not included; to include them, set `"includeLogs": true` and `"includeInternalTxs": true`. ### Address Activity webhook The address activity webhook allows you to track any interaction with an address (any address). Here is an example of this type of event: ```json { "webhookId": "263942d1-74a4-4416-aeb4-948b9b9bb7cc", "eventType": "address_activity", "messageId": "94df1881-5d93-49d1-a1bd-607830608de2", "event": { "transaction": { "blockHash": "0xbd093536009f7dd785e9a5151d80069a93cc322f8b2df63d373865af4f6ee5be", "blockNumber": "44568834", "from": "0xf73166f0c75a3DF444fAbdFDC7e5EE4a73fA51C7", "gas": "651108", "gasPrice": "31466275484", "maxFeePerGas": "31466275484", "maxPriorityFeePerGas": "31466275484", "txHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4", "txStatus": "1", "input":
"0xb80c2f090000000000000000000000000000000000000000000000000000000000000000000000000000000000000000eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee000000000000000000000000b97ef9ef8734c71904d8002f8b6bc66dd9c48a6e000000000000000000000000000000000000000000000000006ca0c737b131f2000000000000000000000000000000000000000000000000000000000011554e000000000000000000000000000000000000000000000000000000006627dadc0000000000000000000000000000000000000000000000000000000000000120000000000000000000000000000000000000000000000000000000000000016000000000000000000000000000000000000000000000000000000000000004600000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000006ca0c737b131f2000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000001200000000000000000000000000000000000000000000000000000000000000160000000000000000000000000b31f66aa3c1e785363f0875a1b74e27b85fd66c70000000000000000000000000000000000000000000000000000000000000001000000000000000000000000be882fb094143b59dc5335d32cecb711570ebdd40000000000000000000000000000000000000000000000000000000000000001000000000000000000000000be882fb094143b59dc5335d32cecb711570ebdd400000000000000000000000000000000000000000000000000000000000000010000000000000000000027100e663593657b064e1bae76d28625df5d0ebd44210000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004000000
00000000000000000000000000000000000000000000000000000000060000000000000000000000000b31f66aa3c1e785363f0875a1b74e27b85fd66c7000000000000000000000000b97ef9ef8734c71904d8002f8b6bc66dd9c48a6e0000000000000000000000000000000000000000000000000000000000000bb80000000000000000000000000000000000000000000000000000000000000000", "nonce": "4", "to": "0x1dac23e41fc8ce857e86fd8c1ae5b6121c67d96d", "transactionIndex": 0, "value": "30576074978046450", "type": 0, "chainId": "43114", "receiptCumulativeGasUsed": "212125", "receiptGasUsed": "212125", "receiptEffectiveGasPrice": "31466275484", "receiptRoot": "0xf355b81f3e76392e1b4926429d6abf8ec24601cc3d36d0916de3113aa80dd674", "erc20Transfers": [ { "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4", "type": "ERC20", "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "value": "30576074978046450", "blockTimestamp": 1713884373, "logIndex": 2, "erc20Token": { "address": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "valueWithDecimals": "0.030576074978046448" } }, { "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4", "type": "ERC20", "from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "to": "0xf73166f0c75a3DF444fAbdFDC7e5EE4a73fA51C7", "value": "1195737", "blockTimestamp": 1713884373, "logIndex": 3, "erc20Token": { "address": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "name": "USD Coin", "symbol": "USDC", "decimals": 6, "valueWithDecimals": "1.195737" } }, { "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4", "type": "ERC20", "from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "to": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "value": "30576074978046450", "blockTimestamp": 1713884373, "logIndex": 4, "erc20Token": { "address": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "name": "Wrapped AVAX", "symbol": "WAVAX", 
"decimals": 18, "valueWithDecimals": "0.030576074978046448" } } ], "erc721Transfers": [], "erc1155Transfers": [], "internalTransactions": [ { "from": "0xf73166f0c75a3DF444fAbdFDC7e5EE4a73fA51C7", "to": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "internalTxType": "CALL", "value": "30576074978046450", "gasUsed": "212125", "gasLimit": "651108", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xF2781Bb34B6f6Bb9a6B5349b24de91487E653119", "internalTxType": "DELEGATECALL", "value": "30576074978046450", "gasUsed": "176417", "gasLimit": "605825", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "9750", "gasLimit": "585767", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6", "internalTxType": "DELEGATECALL", "value": "0", "gasUsed": "2553", "gasLimit": "569571", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "CALL", "value": "30576074978046450", "gasUsed": "23878", "gasLimit": "566542", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "CALL", "value": "0", "gasUsed": "25116", "gasLimit": "540114", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": 
"0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "internalTxType": "CALL", "value": "0", "gasUsed": "81496", "gasLimit": "511279", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "491", "gasLimit": "501085", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "to": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "internalTxType": "CALL", "value": "0", "gasUsed": "74900", "gasLimit": "497032", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "internalTxType": "CALL", "value": "0", "gasUsed": "32063", "gasLimit": "463431", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6", "internalTxType": "DELEGATECALL", "value": "0", "gasUsed": "31363", "gasLimit": "455542", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "2491", "gasLimit": "430998", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "to": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "internalTxType": "CALL", "value": "0", "gasUsed": "7591", "gasLimit": "427775", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "to": 
"0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "CALL", "value": "0", "gasUsed": "6016", "gasLimit": "419746", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "491", "gasLimit": "419670", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "3250", "gasLimit": "430493", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6", "internalTxType": "DELEGATECALL", "value": "0", "gasUsed": "2553", "gasLimit": "423121", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "1250", "gasLimit": "426766", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6", "internalTxType": "DELEGATECALL", "value": "0", "gasUsed": "553", "gasLimit": "419453", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" } ], "blockTimestamp": 1713884373 } } } ``` # Rate Limits (/docs/api-reference/webhook-api/rate-limits) --- title: Rate Limits description: Rate Limits for the Webhooks API icon: Clock --- Rate limiting is managed through a weighted scoring system, known as Compute Units (CUs). 
Each API request consumes a specified number of CUs, determined by the complexity of the request. This system is designed to accommodate basic requests while efficiently handling more computationally intensive operations. ## Rate Limit Tiers The maximum CUs (rate-limiting score) for a user depends on their subscription level and is delineated in the following table: | Subscription Level | Per Minute Limit (CUs) | Per Day Limit (CUs) | | :----------------- | :--------------------- | :------------------ | | Unauthenticated | 6,000 | 1,200,000 | | Free | 8,000 | 2,000,000 | | Base | 10,000 | 3,750,000 | | Growth | 14,000 | 11,200,000 | | Pro | 20,000 | 25,000,000 | To update your subscription level, use the [AvaCloud Portal](https://app.avacloud.io/). Note: Rate limits apply collectively across both Webhooks and Data APIs, with usage from each counting toward your total CU limit. ## Rate Limit Categories The CUs for each category are defined in the following table: | Weight | CU Value | | :----- | :------- | | Free | 1 | | Small | 10 | | Medium | 20 | | Large | 50 | | XL | 100 | | XXL | 200 | ## Rate Limits for Webhook Endpoints The CUs for each route are defined in the table below: | Endpoint | Method | Weight | CU Value | | :------------------------------------------ | :----- | :----- | :------- | | `/v1/webhooks` | POST | Medium | 20 | | `/v1/webhooks` | GET | Small | 10 | | `/v1/webhooks/{id}` | GET | Small | 10 | | `/v1/webhooks/{id}` | DELETE | Medium | 20 | | `/v1/webhooks/{id}` | PATCH | Medium | 20 | | `/v1/webhooks:generateOrRotateSharedSecret` | POST | Medium | 20 | | `/v1/webhooks:getSharedSecret` | GET | Small | 10 | | `/v1/webhooks/{id}/addresses` | PATCH | Medium | 20 | | `/v1/webhooks/{id}/addresses` | DELETE | Medium | 20 | | `/v1/webhooks/{id}/addresses` | GET | Medium | 20 | All rate limits, weights, and CU values are subject to change.
# Retry mechanism (/docs/api-reference/webhook-api/retries) --- title: Retry mechanism description: Retry mechanism for the Webhook API icon: RotateCcw --- Our webhook system is designed to ensure you receive all your messages, even if temporary issues prevent immediate delivery. To achieve this, we’ve implemented a retry mechanism that resends messages if they don’t get through on the first attempt. Importantly, **retries are handled on a per-message basis**, meaning each webhook message follows its own independent retry schedule. This ensures that the failure of one message doesn’t affect the delivery attempts of others. This guide will walk you through how the retry mechanism works, the differences between free and paid tier users, and practical steps you can take to ensure your system handles webhooks effectively. ## How it works When we send a webhook message to your server, we expect a `200` status code within 10 seconds to confirm successful receipt. Your server should return this response immediately and process the message afterward. Processing the message before sending the response can lead to timeouts and trigger unnecessary retries. * **Attempt 1:** We send the message expecting a response with a `200` status code. If we do not receive a `200` status code within **10 seconds**, the attempt is considered failed. During this window, any non-`2xx` responses are ignored. * **Attempt 2:** Occurs **10 seconds** after the first attempt, with another 10-second timeout and the same rule for ignoring non-`2xx` responses. * **Retry Queue After Two Failed Attempts** If both initial attempts fail, the message enters a **retry queue** with progressively longer intervals between attempts. Each retry attempt still has a 10-second timeout, and non-`2xx` responses are ignored during this window.
The retry schedule is as follows: | Attempt | Interval | | ------- | -------- | | 3 | 1 min | | 4 | 5 min | | 5 | 10 min | | 6 | 30 min | | 7 | 2 hours | | 8 | 6 hours | | 9 | 12 hours | | 10 | 24 hours | **Total Retry Duration:** Up to approximately 44.8 hours (2,688 minutes) if all retries are exhausted. **Interval Timing:** Each retry interval is counted from the moment the previous attempt is deemed failed, which is 10 seconds after it was sent. For example, if attempt 2 fails at t=20 seconds, attempt 3 will start at t=80 seconds (failure at 20s + 1-minute interval). Since retries are per message, multiple messages can be in different stages of their retry schedules simultaneously without interfering with each other. ## Differences Between Free and Paid Tier Users The behavior of the retry mechanism varies based on your subscription tier: **Free tier users** * **Initial attempts limit:** If six messages fail both the first and second attempts, your webhook will be automatically deactivated. * **Retry queue limit:** Only five messages can enter the retry queue over the lifetime of the subscription. If a sixth message requires retry queuing, or if any message fails all 10 retry attempts, the subscription will be deactivated. **Paid tier users** * For paid users, webhooks will be deactivated only if a single message, retried at the 24-hour interval, fails to process successfully. ## What you can do **Ensure server availability:** * Keep your server running smoothly to receive webhook messages without interruption. * Implement logging for incoming webhook requests and your server's responses to help identify any issues quickly. **Design for idempotency:** * Set up your webhook handler so it can safely process the same message multiple times without causing errors or unwanted effects. This way, if retries occur, they won't negatively impact your system. The webhook retry mechanism is designed to maximize the reliability of message delivery while minimizing the impact of temporary issues.
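The schedule and timing rule above can be expressed as a small calculator. This is a sketch under stated assumptions: attempt 1 is sent at t=0, and every attempt is deemed failed 10 seconds after it is sent.

```javascript
// Intervals for attempts 3-10, in minutes, from the retry schedule table above.
const INTERVALS_MIN = [1, 5, 10, 30, 120, 360, 720, 1440];
const TIMEOUT_S = 10;

function retryStartTimes() {
  const sends = [0, TIMEOUT_S]; // attempt 2 goes out 10s after attempt 1
  let failedAt = 2 * TIMEOUT_S; // attempt 2 is deemed failed at t=20s
  for (const mins of INTERVALS_MIN) {
    const start = failedAt + mins * 60; // interval is counted from the failure
    sends.push(start);
    failedAt = start + TIMEOUT_S;       // each attempt fails 10s after it starts
  }
  return sends; // seconds since the first attempt
}

console.log(retryStartTimes()[2]); // attempt 3 starts at t=80s, matching the example above
```

Running the full schedule confirms the roughly 44.8-hour total window quoted above.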
By understanding how retries work—especially the per-message nature of the system—and following best practices like ensuring server availability and designing for idempotency, you can ensure a seamless experience with webhooks. ## Key Takeaways * Each message has its own retry schedule, ensuring isolation and reliability. * Free tier users have strict limits on failed attempts and retry queue entries; paid tiers are deactivated only if a message still fails at its final 24-hour retry. * Implement logging and idempotency to handle retries effectively and avoid disruptions. By following this guide, you’ll be well-equipped to manage webhooks and ensure your system remains robust, even in the face of temporary challenges. # Webhook Signature (/docs/api-reference/webhook-api/webhooks-signature) --- title: Webhook Signature description: Webhook Signature for the Webhook API icon: Signature --- To make your webhooks extra secure, you can verify that they originated from our side by generating an HMAC SHA-256 hash code using your Authentication Token and request body. You can get the signing secret through the AvaCloud portal or Glacier API. ### Find your signing secret **Using the portal**\ Navigate to the webhook section and click on Generate Signing Secret. Create the secret and copy it to your code. **Using Data API**\ The following endpoint retrieves a shared secret: ```bash curl --location 'https://glacier-api.avax.network/v1/webhooks:getSharedSecret' \ --header 'x-glacier-api-key: ' ``` ### Validate the signature received Every outbound request will include an authentication signature in the header. This signature is generated by: 1. **Canonicalizing the JSON Payload**: This means arranging the JSON data in a standard format. 2. **Generating a Hash**: Using the HMAC SHA-256 hash algorithm to create a hash of the canonicalized JSON payload. To verify that the signature is from us, follow these steps: 1. Generate the HMAC SHA-256 hash of the received JSON payload. 2.
Compare this generated hash with the signature in the request header. This process, known as verifying the digital signature, ensures the authenticity and integrity of the request. **Example Request Header** ``` Content-Type: application/json; x-signature: your-hashed-signature ``` ### Example Signature Validation Function This Node.js code sets up an HTTP server using the Express framework. It listens for POST requests sent to the `/callback` endpoint. Upon receiving a request, it validates the signature of the request against a predefined `signingSecret`. If the signature is valid, it logs `match`; otherwise, it logs `no match`. The server responds with a JSON object indicating that the request was received. ### Node (JavaScript) ```javascript const express = require('express'); const crypto = require('crypto'); const { canonicalize } = require('json-canonicalize'); const app = express(); app.use(express.json({limit: '50mb'})); const signingSecret = 'c13cc017c4ed63bcc842c8edfb49df37512280326a32826de3b885340b8a3d53'; function isValidSignature(signingSecret, signature, payload) { const canonicalizedPayload = canonicalize(payload); const hmac = crypto.createHmac('sha256', Buffer.from(signingSecret, 'hex')); const digest = hmac.update(canonicalizedPayload).digest('base64'); console.log("signature: ", signature); console.log("digest", digest); return signature === digest; } app.post('/callback', express.json({ type: 'application/json' }), (request, response) => { const { body, headers } = request; const signature = headers['x-signature']; // Handle the event switch (body.eventType) { case 'address_activity': console.log("*** Address_activity ***"); console.log(body); if (isValidSignature(signingSecret, signature, body)) { console.log("match"); } else { console.log("no match"); } break; // ...
handle other event types default: console.log(`Unhandled event type ${body}`); } // Return a response to acknowledge receipt of the event response.json({ received: true }); }); const PORT = 8000; app.listen(PORT, () => console.log(`Running on port ${PORT}`)); ``` ### Python (Flask) ```python from flask import Flask, request, jsonify import hmac import hashlib import base64 import json app = Flask(__name__) SIGNING_SECRET = 'c13cc017c4ed63bcc842c8edfb49df37512280326a32826de3b885340b8a3d53' def canonicalize(payload): """Function to canonicalize JSON payload""" # In Python, canonicalization can be achieved by using sort_keys=True in json.dumps return json.dumps(payload, separators=(',', ':'), sort_keys=True) def is_valid_signature(signing_secret, signature, payload): canonicalized_payload = canonicalize(payload) hmac_obj = hmac.new(bytes.fromhex(signing_secret), canonicalized_payload.encode('utf-8'), hashlib.sha256) digest = base64.b64encode(hmac_obj.digest()).decode('utf-8') print("signature:", signature) print("digest:", digest) return signature == digest @app.route('/callback', methods=['POST']) def callback_handler(): body = request.json signature = request.headers.get('x-signature') # Handle the event if body.get('eventType') == 'address_activity': print("*** Address_activity ***") print(body) if is_valid_signature(SIGNING_SECRET, signature, body): print("match") else: print("no match") else: print(f"Unhandled event type {body}") # Return a response to acknowledge receipt of the event return jsonify({"received": True}) if __name__ == '__main__': PORT = 8000 print(f"Running on port {PORT}") app.run(port=PORT) ``` ### Go (net/http) ```go package main import ( "crypto/hmac" "crypto/sha256" "encoding/base64" "encoding/hex" "encoding/json" "fmt" "net/http" "sort" "strings" ) const signingSecret = "c13cc017c4ed63bcc842c8edfb49df37512280326a32826de3b885340b8a3d53" // Canonicalize function sorts the JSON keys and produces a canonicalized string func Canonicalize(payload 
map[string]interface{}) (string, error) { var sb strings.Builder var keys []string for k := range payload { keys = append(keys, k) } sort.Strings(keys) sb.WriteString("{") for i, k := range keys { v, err := json.Marshal(payload[k]) if err != nil { return "", err } sb.WriteString(fmt.Sprintf("\"%s\":%s", k, v)) if i < len(keys)-1 { sb.WriteString(",") } } sb.WriteString("}") return sb.String(), nil } func isValidSignature(signingSecret, signature string, payload map[string]interface{}) bool { canonicalizedPayload, err := Canonicalize(payload) if err != nil { fmt.Println("Error canonicalizing payload:", err) return false } key, err := hex.DecodeString(signingSecret) if err != nil { fmt.Println("Error decoding signing secret:", err) return false } h := hmac.New(sha256.New, key) h.Write([]byte(canonicalizedPayload)) digest := h.Sum(nil) encodedDigest := base64.StdEncoding.EncodeToString(digest) fmt.Println("signature:", signature) fmt.Println("digest:", encodedDigest) return signature == encodedDigest } func callbackHandler(w http.ResponseWriter, r *http.Request) { var body map[string]interface{} err := json.NewDecoder(r.Body).Decode(&body) if err != nil { fmt.Println("Error decoding body:", err) http.Error(w, "Invalid request body", http.StatusBadRequest) return } signature := r.Header.Get("x-signature") eventType, ok := body["eventType"].(string) if !ok { fmt.Println("Error parsing eventType") http.Error(w, "Invalid event type", http.StatusBadRequest) return } switch eventType { case "address_activity": fmt.Println("*** Address_activity ***") fmt.Println(body) if isValidSignature(signingSecret, signature, body) { fmt.Println("match") } else { fmt.Println("no match") } default: fmt.Printf("Unhandled event type %s\n", eventType) } w.Header().Set("Content-Type", "application/json") json.NewEncoder(w).Encode(map[string]bool{"received": true}) } func main() { http.HandleFunc("/callback", callbackHandler) fmt.Println("Running on port 8000") http.ListenAndServe(":8000", 
nil) } ``` ### Rust (actix-web) ```rust use actix_web::{web, App, HttpServer, HttpResponse, Responder, post}; use serde::Deserialize; use hmac::{Hmac, Mac}; use sha2::Sha256; use base64::encode; use std::collections::BTreeMap; type HmacSha256 = Hmac<Sha256>; const SIGNING_SECRET: &str = "c13cc017c4ed63bcc842c8edfb49df37512280326a32826de3b885340b8a3d53"; #[derive(Deserialize)] struct EventPayload { eventType: String, // Add other fields as necessary } // Canonicalize the JSON payload by sorting keys fn canonicalize(payload: &BTreeMap<String, serde_json::Value>) -> String { serde_json::to_string(payload).unwrap() } fn is_valid_signature(signing_secret: &str, signature: &str, payload: &BTreeMap<String, serde_json::Value>) -> bool { let canonicalized_payload = canonicalize(payload); let key = match hex::decode(signing_secret) { Ok(k) => k, Err(_) => return false, }; let mut mac = HmacSha256::new_from_slice(&key) .expect("HMAC can take key of any size"); mac.update(canonicalized_payload.as_bytes()); let result = mac.finalize(); let digest = encode(result.into_bytes()); println!("signature: {}", signature); println!("digest: {}", digest); digest == signature } #[post("/callback")] async fn callback(body: web::Json<BTreeMap<String, serde_json::Value>>, req: actix_web::HttpRequest) -> impl Responder { let signature = req.headers().get("x-signature").unwrap().to_str().unwrap(); if let Some(event_type) = body.get("eventType").and_then(|v| v.as_str()) { match event_type { "address_activity" => { println!("*** Address_activity ***"); println!("{:?}", body); if is_valid_signature(SIGNING_SECRET, signature, &body) { println!("match"); } else { println!("no match"); } } _ => { println!("Unhandled event type: {}", event_type); } } } else { println!("Error parsing eventType"); return HttpResponse::BadRequest().finish(); } HttpResponse::Ok().json(serde_json::json!({ "received": true })) } #[actix_web::main] async fn main() -> std::io::Result<()> { HttpServer::new(|| { App::new() .service(callback) }) .bind("0.0.0.0:8000")?
.run() .await } ``` ### TypeScript (ChainKit SDK) ```typescript import { isValidSignature } from '@avalanche-sdk/chainkit/utils'; import express from 'express'; const app = express(); app.use(express.json()); const signingSecret = 'your-signing-secret'; // Replace with your signing secret app.post('/webhook', (req, res) => { const signature = req.headers['x-signature']; const payload = req.body; if (isValidSignature(signingSecret, signature, payload)) { console.log('Valid signature'); // Process the request } else { console.log('Invalid signature'); } res.json({ received: true }); }); app.listen(8000, () => console.log('Server running on port 8000')); ``` # WebSockets vs Webhooks (/docs/api-reference/webhook-api/wss-vs-webhooks) --- title: WebSockets vs Webhooks description: WebSockets vs Webhooks for the Webhook API icon: GitCompare --- Reacting to real-time events from Avalanche smart contracts allows for immediate responses and automation, improving user experience and streamlining application functionality. It ensures that applications stay synchronized with the blockchain state. There are two primary methods for receiving these on-chain events: * **WebSockets**, using libraries like Ethers.js or Viem * **Webhooks**, which send structured event data directly to your app via HTTP POST. Both approaches enable real-time interactions, but they differ drastically in their reliability, ease of implementation, and long-term maintainability. In this post, we break down why Webhooks are the better, more resilient choice for most Avalanche developers. ## Architecture Overview The diagram below compares the two models side-by-side: **WebSockets** * The app connects to the Avalanche RPC API over WSS to receive raw log data. * It must decode logs, manage connection state, and store data locally. * On disconnection, it must re-sync via an external Data API or using standard `eth_*` RPC calls (e.g., `eth_getLogs`, `eth_getBlockByNumber`).
Important: WSS is a transport protocol—not real-time by itself. Real-time capabilities come from the availability of `eth_subscribe`, which requires node support. **Webhooks** * The app exposes a simple HTTP endpoint. * Decoded event data is pushed directly via POST, including token metadata. * Built-in retries ensure reliable delivery, even during downtime. Important: Webhooks have a 48-hour retry window. If your app is down for longer, you still need a re-sync strategy using `eth_*` calls to recover older missed events. *** ## Using WebSockets: Real-time but high maintenance WebSockets allow you to subscribe to events using methods like `eth_subscribe`. These subscriptions notify your app in real-time whenever new logs, blocks, or pending transactions meet your criteria. ```javascript import { createPublicClient, webSocket, formatUnits } from 'viem'; import { avalancheFuji } from 'viem/chains'; import { usdcAbi } from './usdc-abi.mjs'; // Ensure this includes the Transfer event // Your wallet address (case-insensitive comparison) const MY_WALLET = '0x8ae323046633A07FB162043f28Cea39FFc23B50A'.toLowerCase(); async function monitorTransfers() { try { // USDC.e contract address on Avalanche Fuji const usdcAddress = '0x5425890298aed601595a70AB815c96711a31Bc65'; // Set up the WebSocket client for Avalanche Fuji const client = createPublicClient({ chain: avalancheFuji, transport: webSocket('wss://api.avax-test.network/ext/bc/C/ws'), }); // Watch for Transfer events on the USDC contract client.watchContractEvent({ address: usdcAddress, abi: usdcAbi, eventName: 'Transfer', onLogs: (logs) => { logs.forEach((log) => { const { from, to, value } = log.args; const fromLower = from.toLowerCase(); // Filter for transactions where 'from' matches your wallet if (fromLower === MY_WALLET) { console.log('*******'); console.log('Transfer from my wallet:'); console.log(`From: ${from}`); console.log(`To: ${to}`); console.log(`Value: ${formatUnits(value, 6)} USDC`); // USDC has 6
decimals console.log(`Transaction Hash: ${log.transactionHash}`); } }); }, onError: (error) => { console.error('Event watching error:', error.message); }, }); console.log('Monitoring USDC Transfer events on Fuji...'); } catch (error) { console.error('Error setting up transfer monitoring:', error.message); } } // Start monitoring monitorTransfers(); ``` The downside? If your connection drops, you lose everything in between. You’ll need to: * Set up a database to track the latest processed block and log index. * Handle dropped connections and reconnection by hand, which is challenging to get right. * Use `eth_getLogs` to re-fetch missed logs. * Decode and process raw logs yourself to rebuild app state. This requires extra infrastructure, custom recovery logic, and significant maintenance overhead. *** ## Webhooks: Resilient and developer-friendly Webhooks eliminate the complexity of managing live connections. Instead, you register an HTTP endpoint to receive blockchain event payloads when they occur. Webhook payload example: ```json { "eventType": "address_activity", "event": { "transaction": { "txHash": "0x1d8f...", "from": "0x3D3B...", "to": "0x9702...", "erc20Transfers": [ { "valueWithDecimals": "110.56", "erc20Token": { "symbol": "USDt", "decimals": 6 } } ] } } } ``` You get everything you need: * Decoded event data * Token metadata (name, symbol, decimals) * Full transaction context * No extra calls. No parsing. No manual re-sync logic.
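To make the contrast concrete, here is how little code it takes to read a transfer out of a payload shaped like the example above: plain property access, no ABI decoding, no metadata lookups.

```javascript
// A payload mirroring the address_activity example above (values are the
// sample values from the example, not live data).
const payload = {
  eventType: "address_activity",
  event: {
    transaction: {
      txHash: "0x1d8f...",
      erc20Transfers: [
        { valueWithDecimals: "110.56", erc20Token: { symbol: "USDt", decimals: 6 } }
      ]
    }
  }
};

// Token metadata and the human-readable amount are already in the payload:
const lines = payload.event.transaction.erc20Transfers.map(
  (t) => `${t.valueWithDecimals} ${t.erc20Token.symbol}`
);
console.log(lines); // [ '110.56 USDt' ]
```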
*** ## Key Advantages of Webhooks * **Reliable delivery with zero effort:** Built-in retries ensure no missed events during downtime * **Instant enrichment:** Payloads contain decoded logs, token metadata, and transaction context * **No extra infrastructure:** No WebSocket connections, no DB, no external APIs * **Faster development:** Go from idea to production with fewer moving parts * **Lower operational cost:** Less compute, fewer network calls, smaller surface area to manage If we compare using a table: | Feature | WebSockets (Ethers.js/Viem) | Webhooks | | :----------------------------- | :------------------------------------------------- | :--------------------------------------------------- | | **Interruption Handling** | Manual; Requires complex custom logic | Automatic; Built-in queues & retries | | **Data Recovery** | Requires DB + External API for re-sync | Handled by provider; No re-sync logic needed | | **Dev Complexity** | High; Error-prone custom resilience code | Low; Focus on processing incoming POST data | | **Infrastructure** | WSS connection + DB + Potential Data API cost | Application API endpoint | | **Data Integrity** | Risk of gaps if recovery logic fails | High; Ensures eventual delivery | | **Payload** | Often raw; Requires extra calls for context | Typically enriched and ready-to-use | | **Multiple addresses** | Manual filtering or separate listeners per address | Supports direct configuration for multiple addresses | | **Listen to wallet addresses** | Requires manual block/transaction filtering | Can monitor wallet addresses and smart contracts | ## Summary * WebSockets offer real-time access to Avalanche data, but come with complexity: raw logs, reconnect logic, re-sync handling, and decoding responsibilities. * Webhooks flip the model: the data comes to you, pre-processed and reliable. You focus on your product logic instead of infrastructure.
* If you want to ship faster, operate more reliably, and reduce overhead, Webhooks are the better path forward for Avalanche event monitoring. # Data API vs RPC (/docs/api-reference/data-api/data-vs-rpc) --- title: Data API vs RPC description: Comparison of the Data API and RPC methods icon: Server --- In the rapidly evolving world of Web3 development, efficiently retrieving token balances for a user's address is a fundamental requirement. Whether you're building DeFi platforms, wallets, analytics tools, or exchanges, displaying accurate token balances is crucial for user engagement and trust. A typical use case involves showing a user's token portfolio in a wallet application; in this case, we have sAVAX and USDC. Developers generally have two options to fetch this data: 1. **Using RPC methods to index blockchain data on their own** 2. **Leveraging an indexer provider like the Data API** While both methods aim to achieve the same goal, the Data API offers a more efficient, scalable, and developer-friendly solution. This article delves into why using the Data API is better than relying on traditional RPC (Remote Procedure Call) methods. ### What Are RPC methods and their challenges? Remote Procedure Call (RPC) methods allow developers to interact directly with blockchain nodes. One of their key advantages is that they are standardized and universally understood by blockchain developers across different platforms. With RPC, you can perform tasks such as querying data, submitting transactions, and interacting with smart contracts. These methods are typically low-level and synchronous, meaning they require a deep understanding of the blockchain’s architecture and specific command structures. You can refer to the [official documentation](https://ethereum.org/en/developers/docs/apis/json-rpc/) to gain a more comprehensive understanding of the JSON-RPC API.
Here’s an example using the `eth_getBalance` method to retrieve the native balance of a wallet: ```bash curl --location 'https://api.avax.network/ext/bc/C/rpc' \ --header 'Content-Type: application/json' \ --data '{"method":"eth_getBalance","params":["0x8ae323046633A07FB162043f28Cea39FFc23B50A", "latest"],"id":1,"jsonrpc":"2.0"}' ``` This call returns the following response: ```json { "jsonrpc": "2.0", "id": 1, "result": "0x284476254bc5d594" } ``` The balance in this wallet is 2.9016 AVAX. However, despite the wallet holding multiple tokens such as USDC, the `eth_getBalance` method only returns the AVAX amount, and it does so in Wei and in hexadecimal format. This is not particularly human-readable, adding to the challenge for developers who need to manually convert the balance to a more understandable format. #### No direct RPC methods to retrieve token balances Despite their utility, RPC methods come with significant limitations when it comes to retrieving detailed token and transaction data. Currently, RPC methods do not provide direct solutions for the following: * **Listing all tokens held by a wallet**: There is no RPC method that provides a complete list of ERC-20 tokens owned by a wallet. * **Retrieving all transactions for a wallet**: There is no direct method for fetching all transactions associated with a wallet. * **Getting ERC-20/721/1155 token balances**: The `eth_getBalance` method only returns the balance of the wallet’s native token (such as AVAX on Avalanche) and cannot be used to retrieve ERC-20/721/1155 token balances. To achieve these tasks using RPC methods alone, you would need to: * **Query every block for transaction logs**: Scan the entire blockchain, which is resource-intensive and impractical. * **Parse transaction logs**: Identify and extract ERC-20 token transfer events from each transaction. * **Aggregate data**: Collect and process this data to compute balances and transaction histories.
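The manual conversion mentioned above looks like this: turning the hex Wei `result` from the `eth_getBalance` response earlier into a readable AVAX amount.

```javascript
// eth_getBalance returns Wei as a hex string; converting it to AVAX is on you.
const hexBalance = "0x284476254bc5d594"; // the `result` field from the response above
const wei = BigInt(hexBalance);
// 1 AVAX = 10^18 Wei; Number() loses some precision, which is fine for display.
const avax = Number(wei) / 1e18;
console.log(avax.toFixed(4)); // "2.9016"
```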
#### Manual blockchain indexing is difficult and costly Using RPC methods to fetch token balances involves an arduous process: 1. You must connect to a node and subscribe to new block events. 2. For each block, parse every transaction to identify ERC-20 token transfers involving the user's address. 3. Extract contract addresses and other relevant data from the parsed transactions. 4. Compute balances by processing transfer events. 5. Store the processed data in a database for quick retrieval and aggregation. #### Why this is difficult: * **Resource-Intensive**: Requires significant computational power and storage to process and store blockchain data. * **Time-Consuming**: Processing millions of blocks and transactions can take an enormous amount of time. * **Complexity**: Handling edge cases like contract upgrades, proxy contracts, and non-standard implementations adds layers of complexity. * **Maintenance**: Keeping the indexed data up-to-date necessitates continuous synchronization with new blocks being added to the blockchain. * **High Costs**: Associated with servers, databases, and network bandwidth. ### The Data API Advantage The Data API provides a streamlined, efficient, and scalable solution for fetching token balances.
Here's why it's the best choice: With a single API call, you can retrieve all ERC-20 token balances for a user's address: ```javascript avalancheSDK.data.evm.balances.listErc20Balances({ address: "0xYourAddress", }); ``` Sample Response: ```json { "erc20TokenBalances": [ { "ercType": "ERC-20", "chainId": "43114", "address": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "name": "USD Coin", "symbol": "USDC", "decimals": 6, "price": { "value": 1.0, "currencyCode": "usd" }, "balance": "15000000", "balanceValue": { "currencyCode": "usd", "value": 9.6 }, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/e50058c1-2296-4e7e-91ea-83eb03db95ee/8db2a492ce64564c96de87c05a3756fd/43114-0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E.png" } // Additional tokens... ] } ``` As you can see, with a single call the API returns an array of token balances for all the wallet tokens, including: * **Token metadata**: Contract address, name, symbol, decimals. * **Balance information**: Token balance in both hexadecimal and decimal formats; balances of native assets like ETH or AVAX can also be retrieved. * **Price data**: Current value in USD or other supported currencies, saving you the effort of integrating another API. * **Visual assets**: Token logo URI for better user interface integration. If you’re building a wallet, DeFi app, or any application that requires displaying balances, transaction history, or smart contract interactions, relying solely on RPC methods can be challenging. Just as there’s no direct RPC method to retrieve token balances, there’s also no simple way to fetch all transactions associated with a wallet, especially for ERC-20, ERC-721, or ERC-1155 token transfers. However, by using the Data API, you can retrieve all token transfers for a given wallet **with a single API call**, making the process much more efficient. This approach simplifies tracking and displaying wallet activity without the need to manually scan the entire blockchain.
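For display purposes, the raw `balance` string just needs to be divided by `10^decimals`. Here is a minimal formatting sketch using the USDC values from the sample response above:

```javascript
// Format a raw integer balance string using the token's decimals.
function formatBalance(balance, decimals) {
  const value = BigInt(balance);
  const base = 10n ** BigInt(decimals);
  // Build the fractional part, dropping trailing zeros.
  const frac = (value % base).toString().padStart(decimals, "0").replace(/0+$/, "");
  return frac ? `${value / base}.${frac}` : `${value / base}`;
}

// USDC from the sample response: balance "15000000" with 6 decimals
console.log(formatBalance("15000000", 6)); // "15"
```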
Below are two examples that demonstrate the power of the Data API: in the first, it returns all ERC transfers, including ERC-20, ERC-721, and ERC-1155 tokens, and in the second, it shows all internal transactions, such as when one contract interacts with another. [Lists ERC transfers](/data-api/evm-transactions/list-erc-transfers) for an ERC-20, ERC-721, or ERC-1155 contract address. ```javascript theme={null} import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.data.evm.transactions.listTransfers({ startBlock: 6479329, endBlock: 6479330, pageSize: 10, address: "0x71C7656EC7ab88b098defB751B7401B5f6d8976F", }); for await (const page of result) { // Handle the page console.log(page); } } run(); ``` Example response ```json theme={null} { "nextPageToken": "", "transfers": [ { "blockNumber": "339", "blockTimestamp": 1648672486, "blockHash": "0x17533aeb5193378b9ff441d61728e7a2ebaf10f61fd5310759451627dfca2e7c", "txHash": "0x3e9303f81be00b4af28515dab7b914bf3dbff209ea10e7071fa24d4af0a112d4", "from": { "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/5VHupNKwnDYJvqMENeV7iJ/fdd6326b7a82c8388e4ee9d4be7062d4/avalanche-avax-logo.svg", "address": "0x71C7656EC7ab88b098defB751B7401B5f6d8976F" }, "to": { "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/5VHupNKwnDYJvqMENeV7iJ/fdd6326b7a82c8388e4ee9d4be7062d4/avalanche-avax-logo.svg", "address": "0x71C7656EC7ab88b098defB751B7401B5f6d8976F" }, "logIndex": 123, "value": "10000000000000000000", "erc20Token": { "address": "0x71C7656EC7ab88b098defB751B7401B5f6d8976F", "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/5VHupNKwnDYJvqMENeV7iJ/fdd6326b7a82c8388e4ee9d4be7062d4/avalanche-avax-logo.svg", 
"ercType": "ERC-20", "price": { "currencyCode": "usd", "value": "42.42" } } } ] } ``` [Returns a list of internal transactions](/data-api/evm-transactions/list-internal-transactions) for an address and chain. Filterable by block range. ```javascript theme={null} import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.data.evm.transactions.listInternalTransactions({ startBlock: 6479329, endBlock: 6479330, pageSize: 10, address: "0x71C7656EC7ab88b098defB751B7401B5f6d8976F", }); for await (const page of result) { // Handle the page console.log(page); } } run(); ``` Example response ```json theme={null} { "nextPageToken": "", "transactions": [ { "blockNumber": "339", "blockTimestamp": 1648672486, "blockHash": "0x17533aeb5193378b9ff441d61728e7a2ebaf10f61fd5310759451627dfca2e7c", "txHash": "0x3e9303f81be00b4af28515dab7b914bf3dbff209ea10e7071fa24d4af0a112d4", "from": { "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/5VHupNKwnDYJvqMENeV7iJ/fdd6326b7a82c8388e4ee9d4be7062d4/avalanche-avax-logo.svg", "address": "0x71C7656EC7ab88b098defB751B7401B5f6d8976F" }, "to": { "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/5VHupNKwnDYJvqMENeV7iJ/fdd6326b7a82c8388e4ee9d4be7062d4/avalanche-avax-logo.svg", "address": "0x71C7656EC7ab88b098defB751B7401B5f6d8976F" }, "internalTxType": "UNKNOWN", "value": "10000000000000000000", "isReverted": true, "gasUsed": "", "gasLimit": "" } ] } ``` ### Conclusion Using the Data API over traditional RPC methods for fetching token balances offers significant advantages: * **Efficiency**: Retrieve all necessary information in a single API call. * **Simplicity**: Eliminates complex data processing and reduces development time. 
* **Scalability**: Handles large volumes of data efficiently, suitable for real-time applications. * **Comprehensive Data**: Provides enriched information, including token prices and logos. * **Reliability**: Ensures data accuracy and consistency without the need for extensive error handling. For developers building Web3 applications, leveraging the Data API is the smarter choice. It not only simplifies your codebase but also enhances the user experience by providing accurate and timely data. If you’re building cutting-edge Web3 applications, this API is the key to improving your workflow and performance. Whether you’re developing DeFi solutions, wallets, or analytics platforms, take your project to the next level. [Start today with the Data API](/data-api/getting-started) and experience the difference! # Getting Started (/docs/api-reference/data-api/getting-started) --- title: Getting Started description: Getting Started with the Data API icon: Book --- To begin, create your free account by visiting [Builder Hub Console](https://build.avax.network/login?callbackUrl=%2Fconsole%2Futilities%2Fdata-api-keys). Once the account is created: 1. Navigate to [**Data API Keys**](https://build.avax.network/console/utilities/data-api-keys) 2. Click on **Create API Key** 3. Set an alias and click on **create** 4. Copy the value Always keep your API keys in a secure environment. Never expose them in public repositories, such as GitHub, or share them with unauthorized individuals. Compromised API keys can lead to unauthorized access and potential misuse of your account.
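In line with the warning above, one way to keep the key out of your source code is to load it from an environment variable at startup. This is an illustrative sketch: the variable name `GLACIER_API_KEY` is an assumption, not an official convention.

```javascript
// Illustrative helper: read the API key from the environment instead of
// hard-coding it, and fail fast if it is missing.
// The variable name GLACIER_API_KEY is hypothetical.
function loadApiKey(env = process.env) {
  const key = env.GLACIER_API_KEY;
  if (!key) throw new Error("GLACIER_API_KEY is not set");
  return key; // pass this as the x-glacier-api-key request header
}
```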
With your API key you can start making queries, for example, to get the latest blocks on the C-Chain (43114): ```bash theme={null} curl --location 'https://data-api.avax.network/v1/chains/43114/blocks' \ --header 'accept: application/json' \ --header 'x-glacier-api-key: ' ``` And you should see something like this: ```json theme={null} { "blocks": [ { "blockNumber": "49889407", "blockTimestamp": 1724990250, "blockHash": "0xd34becc82943e3e49048cdd3f75b80a87e44eb3aed6b87cc06867a7c3b9ee213", "txCount": 1, "baseFee": "25000000000", "gasUsed": "53608", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0xf4917efb4628a1d8f4d101b3d15bce9826e62ef2c93c3e16ee898d27cf02f3d4", "feesSpent": "1435117553916960", "cumulativeTransactions": "500325352" }, { "blockNumber": "49889406", "blockTimestamp": 1724990248, "blockHash": "0xf4917efb4628a1d8f4d101b3d15bce9826e62ef2c93c3e16ee898d27cf02f3d4", "txCount": 2, "baseFee": "25000000000", "gasUsed": "169050", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0x2a54f142fa3acee92a839b071bb6c7cca7abc2a797cf4aac68b07f79406ac0cb", "feesSpent": "4226250000000000", "cumulativeTransactions": "500325351" }, { "blockNumber": "49889405", "blockTimestamp": 1724990246, "blockHash": "0x2a54f142fa3acee92a839b071bb6c7cca7abc2a797cf4aac68b07f79406ac0cb", "txCount": 4, "baseFee": "25000000000", "gasUsed": "618638", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0x0cda1bb5c86e790976c9330c9fc26e241a705afbad11a4caa44df1c81058451d", "feesSpent": "16763932426044724", "cumulativeTransactions": "500325349" }, { "blockNumber": "49889404", "blockTimestamp": 1724990244, "blockHash": "0x0cda1bb5c86e790976c9330c9fc26e241a705afbad11a4caa44df1c81058451d", "txCount": 3, "baseFee": "25000000000", "gasUsed": "254544", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0x60e55dd9eacc095c07f50a73e02d81341c406584f7abbf5d10d938776a4c893c", "feesSpent": "6984642298020000", "cumulativeTransactions": "500325345" }, { "blockNumber": "49889403",
"blockTimestamp": 1724990242, "blockHash": "0x60e55dd9eacc095c07f50a73e02d81341c406584f7abbf5d10d938776a4c893c", "txCount": 2, "baseFee": "25000000000", "gasUsed": "65050", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0xa3e9f91f45a85ed00b8ebe8e5e976ed1a1f52612143eddd3de9d2588d05398b8", "feesSpent": "1846500000000000", "cumulativeTransactions": "500325342" }, { "blockNumber": "49889402", "blockTimestamp": 1724990240, "blockHash": "0xa3e9f91f45a85ed00b8ebe8e5e976ed1a1f52612143eddd3de9d2588d05398b8", "txCount": 2, "baseFee": "25000000000", "gasUsed": "74608", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0x670db772edfc2fdae322d55473ba0670690aed6358a067a718492c819d63356a", "feesSpent": "1997299851936960", "cumulativeTransactions": "500325340" }, { "blockNumber": "49889401", "blockTimestamp": 1724990238, "blockHash": "0x670db772edfc2fdae322d55473ba0670690aed6358a067a718492c819d63356a", "txCount": 1, "baseFee": "25000000000", "gasUsed": "273992", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0x75742cf45383ce54823690b9dd2e85a743be819281468163d276f145d077902a", "feesSpent": "7334926295195040", "cumulativeTransactions": "500325338" }, { "blockNumber": "49889400", "blockTimestamp": 1724990236, "blockHash": "0x75742cf45383ce54823690b9dd2e85a743be819281468163d276f145d077902a", "txCount": 1, "baseFee": "25000000000", "gasUsed": "291509", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0xe5055eae3e1fd2df24b61e9c691f756c97e5619cfc66b69cbcb6025117d1bde7", "feesSpent": "7724988500000000", "cumulativeTransactions": "500325337" }, { "blockNumber": "49889399", "blockTimestamp": 1724990234, "blockHash": "0xe5055eae3e1fd2df24b61e9c691f756c97e5619cfc66b69cbcb6025117d1bde7", "txCount": 8, "baseFee": "25000000000", "gasUsed": "824335", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0xbcacff928f7dd20cc1522155e7c9b9716997914b53ab94034b813c3f207174ef", "feesSpent": "21983004380692400", "cumulativeTransactions": "500325336" }, { "blockNumber": 
"49889398", "blockTimestamp": 1724990229, "blockHash": "0xbcacff928f7dd20cc1522155e7c9b9716997914b53ab94034b813c3f207174ef", "txCount": 1, "baseFee": "25000000000", "gasUsed": "21000", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0x0b686812078429d33e4224d2b48bd26b920db8dbb464e7f135d980759ca7e947", "feesSpent": "562182298020000", "cumulativeTransactions": "500325328" } ], "nextPageToken": "9f9e1d25-14a9-49f4-8742-fd4bf12f7cd8" }
```

Congratulations! You’ve successfully set up your account and made your first query to the Data API 🚀🚀🚀

# Data API (/docs/api-reference/data-api)

---
title: Data API
description: Access comprehensive blockchain data for Avalanche networks
icon: Database
---

### What is the Data API?

The Data API provides web3 application developers with multi-chain data related to Avalanche's primary network, Avalanche L1s, and Ethereum. With the Data API, you can easily build products that leverage real-time and historical transaction and transfer history, native and token balances, and various types of token metadata.

The [Data API](/docs/api-reference/data-api) and the [Metrics API](/docs/api-reference/metrics-api) are the engines behind the [Avalanche Explorer](https://subnets.avax.network/stats/) and the [Core wallet](https://core.app/en/). They are used to display transactions, logs, balances, NFTs, and more. The data and visualizations presented are all powered by these APIs, offering real-time and historical insights that are essential for building sophisticated, data-driven blockchain products.

### Features

* **Extensive L1 Support**: Gain access to data from over 100 L1s across both mainnet and testnet. If an L1 is listed on the [Avalanche Explorer](https://subnets.avax.network/), you can query its data using the Data API.
* **Transactions and UTXOs**: Easily retrieve details related to transactions, UTXOs, and token transfers from Avalanche EVMs, Ethereum, and Avalanche's Primary Network - the P-Chain, X-Chain, and C-Chain.
* **Blocks**: Retrieve the latest blocks and block details.
* **Balances**: Fetch balances of native, ERC-20, ERC-721, and ERC-1155 tokens along with relevant metadata.
* **Tokens**: Augment your user experience with asset details.
* **Staking**: Get staking-related data for active and historical validations.

### Supported Chains

Avalanche’s architecture supports a diverse ecosystem of interconnected L1 blockchains, each operating independently while retaining the ability to seamlessly communicate with other L1s within the network. Central to this architecture is the Primary Network—Avalanche’s foundational network layer, which all validators are required to validate prior to [ACP-77](/docs/acps/77-reinventing-subnets). The Primary Network runs three essential blockchains:

* The Contract Chain (C-Chain)
* The Platform Chain (P-Chain)
* The Exchange Chain (X-Chain)

However, with the implementation of [ACP-77](/docs/acps/77-reinventing-subnets), this requirement will change. Subnet Validators will be able to operate independently of the Primary Network, allowing for more flexible and affordable Subnet creation and management.

The **Data API** supports a wide range of L1 blockchains (**over 100**) across both **mainnet** and **testnet**, including popular ones like Beam, DFK, Lamina1, Dexalot, Shrapnel, and Pulsar. In fact, every L1 you see on the [Avalanche Explorer](https://explorer.avax.network/) can be queried through the Data API. This list is continually expanding as we keep adding more L1s. For a full list of supported chains, visit [List chains](/docs/api-reference/data-api/evm-chains/list-chains).

#### The Contract Chain (C-Chain)

The C-Chain is an implementation of the Ethereum Virtual Machine (EVM).
The primary network endpoints only provide information related to C-Chain atomic memory balances and import/export transactions. For additional data, please reference the [EVM APIs](/docs/rpcs/c-chain/rpc). #### The Platform Chain (P-Chain) The P-Chain is responsible for all validator and L1-level operations. The P-Chain supports the creation of new blockchains and L1s, the addition of validators to L1s, staking operations, and other platform-level operations. #### The Exchange Chain (X-Chain) The X-Chain is responsible for operations on digital smart assets known as Avalanche Native Tokens. A smart asset is a representation of a real-world resource (for example, equity, or a bond) with sets of rules that govern its behavior, like "can’t be traded until tomorrow." The X-Chain supports the creation and trade of Avalanche Native Tokens. | Feature | Description | | :--------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Chains** | Utilize this endpoint to retrieve the Primary Network chains that an address has transaction history associated with. | | **Blocks** | Blocks are the container for transactions executed on the Primary Network. Retrieve the latest blocks, a specific block by height or hash, or a list of blocks proposed by a specified NodeID on Primary Network chains. | | **Vertices** | Prior to Avalanche Cortina (v1.10.0), the X-Chain functioned as a DAG with vertices rather than blocks. These endpoints allow developers to retrieve historical data related to that period of chain history. Retrieve the latest vertices, a specific vertex, or a list of vertices at a specific height from the X-Chain. 
| | **Transactions** | Transactions are a user's primary form of interaction with a chain and provide details around their on-chain activity, including staking-related behavior. Retrieve a list of the latest transactions, a specific transaction, a list of active staking transactions for a specified address, or a list of transactions associated with a provided asset id from Primary Network chains. | | **UTXOs** | UTXOs are fundamental elements that denote the funds a user has available. Get a list of UTXOs for provided addresses from the Primary Network chains. | | **Balances** | User balances are an essential function of the blockchain. Retrieve balances related to the X and P-Chains, as well as atomic memory balances for the C-Chain. | | **Rewards** | Staking is the process where users lock up their tokens to support a blockchain network and, in return, receive rewards. It is an essential part of proof-of-stake (PoS) consensus mechanisms used by many blockchain networks, including Avalanche. Using the Data API, you can easily access pending and historical rewards associated with a set of addresses. | | **Assets** | Get asset details corresponding to the given asset id on the X-Chain. | #### EVM The C-Chain is an instance of the Coreth Virtual Machine, and many Avalanche L1s are instances of the *Subnet-EVM*, which is a Virtual Machine (VM) that defines the L1 Contract Chains. *Subnet-EVM* is a simplified version of *Coreth VM* (C-Chain). | Feature | Description | | :--------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | **Chains** | There are a number of chains supported by the Data API. 
These endpoints can be used to understand which chains are included/indexed as part of the API and retrieve information related to a specific chain. | | **Blocks** | Blocks are the container for transactions executed within the EVM. Retrieve the latest blocks or a specific block by height or hash. | | **Transactions** | Transactions are a user's primary form of interaction with a chain and provide details around their on-chain activity. These endpoints can be used to retrieve information related to specific transaction details, internal transactions, contract deployments, specific token standard transfers, and more! | | **Balances** | User balances are an essential function of the blockchain. Easily retrieve native token, collectible, and fungible token balances related to an EVM chain with these endpoints. | #### Operations The Operations API allows users to easily access their on-chain history by creating transaction exports returned in a CSV format. This API supports EVMs as well as non-EVM Primary Network chains. # Rate Limits (/docs/api-reference/data-api/rate-limits) --- title: Rate Limits description: Rate Limits for the Data API icon: Clock --- Rate limiting is managed through a weighted scoring system, known as Compute Units (CUs). Each API request consumes a specified number of CUs, determined by the complexity of the request. This system is designed to accommodate basic requests while efficiently handling more computationally intensive operations. 
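To make these numbers concrete: a Medium-weight request costs 20 CUs, so the Free tier's per-minute allowance of 8,000 CUs covers at most 400 such requests per minute. That arithmetic can be sketched as follows (tier and weight values copied from the tables in this section):

```javascript
// Rough CU budgeting: how many calls of a given weight fit into a tier's limits.
// Tier and weight values mirror the tables in this section.
const tiers = {
  free: { perMinute: 8000, perDay: 2000000 },
  base: { perMinute: 10000, perDay: 3750000 },
};

const weights = { free: 1, small: 10, medium: 20, large: 50, xl: 100, xxl: 200 };

function budget(tier, weight) {
  const { perMinute, perDay } = tiers[tier];
  const cu = weights[weight];
  return {
    callsPerMinute: Math.floor(perMinute / cu),
    callsPerDay: Math.floor(perDay / cu),
  };
}

console.log(budget('free', 'medium')); // { callsPerMinute: 400, callsPerDay: 100000 }
```

Note that the daily limit, not the per-minute limit, is usually the binding constraint for sustained workloads.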
## Rate Limit Tiers The maximum CUs (rate-limiting score) for a user depends on their subscription level and is delineated in the following table: | Subscription Level | Per Minute Limit (CUs) | Per Day Limit (CUs) | | :----------------- | :--------------------- | :------------------ | | Unauthenticated | 6,000 | 1,200,000 | | Free | 8,000 | 2,000,000 | | Base | 10,000 | 3,750,000 | | Growth | 14,000 | 11,200,000 | | Pro | 20,000 | 25,000,000 | To update your subscription level use the [AvaCloud Portal](https://app.avacloud.io/) Note: Rate limits apply collectively across both Webhooks and Data APIs, with usage from each counting toward your total CU limit. ## Rate Limit Categories The CUs for each category are defined in the following table: | Weight | CU Value | | :----- | :------- | | Free | 1 | | Small | 10 | | Medium | 20 | | Large | 50 | | XL | 100 | | XXL | 200 | ## Rate Limits for Data API Endpoints The CUs for each route are defined in the table below: | Endpoint | Method | Weight | CU Value | | :-------------------------------------------------------------------------------- | :----- | :----- | :------- | | `/v1/health-check` | GET | Medium | 20 | | `/v1/address/{address}/chains` | GET | Medium | 20 | | `/v1/transactions` | GET | Medium | 20 | | `/v1/blocks` | GET | Medium | 20 | | `/v1/chains/{chainId}/nfts/collections/{address}/tokens/{tokenId}:reindex` | POST | Small | 10 | | `/v1/chains/{chainId}/nfts/collections/{address}/tokens` | GET | Medium | 20 | | `/v1/chains/{chainId}/nfts/collections/{address}/tokens/{tokenId}` | GET | Medium | 20 | | `/v1/operations/{operationId}` | GET | Small | 10 | | `/v1/operations/transactions:export` | POST | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/transactions/{txHash}` | GET | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/transactions` | GET | XL | 100 | | `/v1/networks/{network}/blockchains/{blockchainId}/transactions:listStaking` | GET | XL | 100 | | 
`/v1/networks/{network}/rewards:listPending` | GET | XL | 100 | | `/v1/networks/{network}/rewards` | GET | XL | 100 | | `/v1/networks/{network}/blockchains/{blockchainId}/utxos` | GET | XL | 100 | | `/v1/networks/{network}/blockchains/{blockchainId}/balances` | GET | XL | 100 | | `/v1/networks/{network}/blockchains/{blockchainId}/blocks/{blockId}` | GET | XL | 100 | | `/v1/networks/{network}/blockchains/{blockchainId}/nodes/{nodeId}/blocks` | GET | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/blocks` | GET | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/vertices` | GET | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/vertices/{vertexHash}` | GET | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/vertices:listByHeight` | GET | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/assets/{assetId}` | GET | XL | 100 | | `/v1/networks/{network}/blockchains/{blockchainId}/assets/{assetId}/transactions` | GET | XL | 100 | | `/v1/networks/{network}/addresses:listChainIds` | GET | XL | 100 | | `/v1/networks/{network}` | GET | XL | 100 | | `/v1/networks/{network}/blockchains` | GET | Medium | 20 | | `/v1/networks/{network}/subnets` | GET | Medium | 20 | | `/v1/networks/{network}/subnets/{subnetId}` | GET | Medium | 20 | | `/v1/networks/{network}/validators` | GET | Medium | 20 | | `/v1/networks/{network}/validators/{nodeId}` | GET | Medium | 20 | | `/v1/networks/{network}/delegators` | GET | Medium | 20 | | `/v1/networks/{network}/l1Validators` | GET | Medium | 20 | | `/v1/teleporter/messages/{messageId}` | GET | Medium | 20 | | `/v1/teleporter/messages` | GET | Medium | 20 | | `/v1/teleporter/addresses/{address}/messages` | GET | Medium | 20 | | `/v1/icm/messages/{messageId}` | GET | Medium | 20 | | `/v1/icm/messages` | GET | Medium | 20 | | `/v1/icm/addresses/{address}/messages` | GET | Medium | 20 | | `/v1/apiUsageMetrics` | GET | XXL | 200 | | `/v1/apiLogs` | GET | XXL | 200 | 
| `/v1/subnetRpcUsageMetrics` | GET | XXL | 200 | | `/v1/rpcUsageMetrics` | GET | XXL | 200 | | `/v1/primaryNetworkRpcUsageMetrics` | GET | XXL | 200 | | `/v1/signatureAggregator/{network}/aggregateSignatures` | POST | Medium | 20 | | `/v1/signatureAggregator/{network}/aggregateSignatures/{txHash}` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/balances:getNative` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/balances:listErc20` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/balances:listErc721` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/balances:listErc1155` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/balances:listCollectibles` | GET | Medium | 20 | | `/v1/chains/{chainId}/blocks` | GET | Small | 10 | | `/v1/chains/{chainId}/blocks/{blockId}` | GET | Small | 10 | | `/v1/chains/{chainId}/contracts/{address}/transactions:getDeployment` | GET | Medium | 20 | | `/v1/chains/{chainId}/contracts/{address}/deployments` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}` | GET | Medium | 20 | | `/v1/chains` | GET | Free | 1 | | `/v1/chains/{chainId}` | GET | Free | 1 | | `/v1/chains/address/{address}` | GET | Free | 1 | | `/v1/chains/allTransactions` | GET | Free | 1 | | `/v1/chains/allBlocks` | GET | Free | 1 | | `/v1/chains/{chainId}/tokens/{address}/transfers` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions:listNative` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions:listErc20` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions:listErc721` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions:listErc1155` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions:listInternals` | GET | Medium | 20 | | 
`/v1/chains/{chainId}/transactions/{txHash}` | GET | Medium | 20 | | `/v1/chains/{chainId}/blocks/{blockId}/transactions` | GET | Medium | 20 | | `/v1/chains/{chainId}/transactions` | GET | Medium | 20 | ## Rate Limits for RPC endpoints The CUs for RPC calls are calculated based on the RPC method(s) within the request. The CUs assigned to each method are defined in the table below: | Method | Weight | CU Value | | :---------------------------------------- | :----- | :------- | | `eth_accounts` | Free | 1 | | `eth_blockNumber` | Small | 10 | | `eth_call` | Small | 10 | | `eth_coinbase` | Small | 10 | | `eth_chainId` | Free | 1 | | `eth_gasPrice` | Small | 10 | | `eth_getBalance` | Small | 10 | | `eth_getBlockByHash` | Small | 10 | | `eth_getBlockByNumber` | Small | 10 | | `eth_getBlockTransactionCountByNumber` | Medium | 20 | | `eth_getCode` | Medium | 20 | | `eth_getLogs` | XXL | 200 | | `eth_getStorageAt` | Medium | 20 | | `eth_getTransactionByBlockNumberAndIndex` | Medium | 20 | | `eth_getTransactionByHash` | Small | 10 | | `eth_getTransactionCount` | Small | 10 | | `eth_getTransactionReceipt` | Small | 10 | | `eth_signTransaction` | Medium | 20 | | `eth_sendTransaction` | Medium | 20 | | `eth_sign` | Medium | 20 | | `eth_sendRawTransaction` | Small | 10 | | `eth_syncing` | Free | 1 | | `net_listening` | Free | 1 | | `net_peerCount` | Medium | 20 | | `net_version` | Free | 1 | | `web3_clientVersion` | Small | 10 | | `web3_sha3` | Small | 10 | | `eth_newPendingTransactionFilter` | Medium | 20 | | `eth_maxPriorityFeePerGas` | Small | 10 | | `eth_baseFee` | Small | 10 | | `rpc_modules` | Free | 1 | | `eth_getChainConfig` | Small | 10 | | `eth_feeConfig` | Small | 10 | | `eth_getActivePrecompilesAt` | Small | 10 | All rate limits, weights, and CU values are subject to change. 
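Since CUs are assigned per method, the cost of a request containing several RPC calls (for example, a JSON-RPC batch) is presumably the sum of the weights of the methods it contains. A small sketch of that accounting, with a few weights copied from the table above (treat the summing rule itself as an assumption):

```javascript
// Estimate the CU cost of a set of RPC calls by summing per-method weights.
// Weights copied from the table above; unknown methods count as 0 here.
const rpcCuValues = {
  eth_chainId: 1,
  eth_blockNumber: 10,
  eth_call: 10,
  eth_getLogs: 200,
};

const batchCost = (methods) =>
  methods.reduce((total, method) => total + (rpcCuValues[method] ?? 0), 0);

console.log(batchCost(['eth_chainId', 'eth_call', 'eth_getLogs'])); // 211
```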
# Snowflake Datashare (/docs/api-reference/data-api/snowflake)

---
title: Snowflake Datashare
description: Snowflake Datashare for Avalanche blockchain data
icon: Snowflake
---

Avalanche Primary Network data (the C-chain, P-chain, and X-chain blockchains) can be accessed in a SQL-based table format via the [Snowflake Data Marketplace](https://app.snowflake.com/marketplace). Explore the blockchain state since the Genesis Block. These tables provide insights into transaction gas fees, DeFi activity, the historical stake of validators on the primary network, AVAX emissions rewarded to past validators/delegators, and fees paid by Avalanche L1 Validators to the primary network.

## Available Blockchain Data

#### Primary Network

* **C-chain:**
  * Blocks
  * Transactions
  * Logs
  * Internal Transactions
  * Receipts
  * Messages
* **P-chain:**
  * Blocks
  * Transactions
  * UTXOs
* **X-chain:**
  * Blocks
  * Transactions
  * Vertices before the [X-chain Linearization](https://www.avax.network/blog/cortina-x-chain-linearization) in the Cortina Upgrade
* **Dictionary:** A data dictionary is provided with the listing, with column and table descriptions. Example columns include:
  * `c_blocks.blockchash`
  * `c_transactions.transactionfrom`
  * `c_logs.topichex_0`
  * `p_blocks.block_hash`
  * `p_blocks.block_index`
  * `p_blocks.type`
  * `p_transactions.timestamp`
  * `p_transactions.transaction_hash`
  * `utxos.utxo_id`
  * `utxos.address`
  * `vertices.vertex_hash`
  * `vertices.parent_hash`
  * `x_blocks.timestamp`
  * `x_blocks.proposer_id`
  * `x_transactions.transaction_hash`
  * `x_transactions.type`

#### Available Avalanche L1s

* **Gunzilla**
* **Dexalot**
* **DeFi Kingdoms (DFK)**
* **Henesys (MapleStory Universe)**

#### L1 Data

* Blocks
* Transactions
* Logs
* Internal Transactions (currently unavailable for DFK)
* Receipts
* Messages

## Access

Search for "Ava Labs" on the [Snowflake Data Marketplace](https://app.snowflake.com/marketplace).
# Usage Guide (/docs/api-reference/data-api/usage)

---
title: Usage Guide
description: Usage Guide for the Data API
icon: Code
---

### Setup and Authentication

In order to utilize your account's rate limits, you will need to make API requests with an API key. You can generate API keys from the AvaCloud portal. Once you've created and retrieved one, you can make authenticated queries by passing your API key in the `x-glacier-api-key` header of your HTTP request. An example curl request can be found below:

```bash
curl -H "Content-Type: application/json" -H "x-glacier-api-key: your_api_key" \
  "https://glacier-api.avax.network/v1/chains"
```

### Rate Limits

The Data API has rate limits in place to maintain its stability and protect it from bursts of incoming traffic. The rate limits associated with the various plans can be found within AvaCloud. When you hit your rate limit, the server will respond with a 429 HTTP response code, along with response headers that help you determine when to start making additional requests. The response headers follow the standards set in the RateLimit header fields for HTTP draft from the Internet Engineering Task Force. With every response to a request with a valid API key, the server will include the following headers:

* `ratelimit-policy` - The rate limit policy tied to your API key.
* `ratelimit-limit` - The number of requests you can send according to your policy.
* `ratelimit-remaining` - How many requests you can still send in the current period for your policy.

For any request after the rate limit has been reached, the server will also respond with these headers:

* `ratelimit-reset`
* `retry-after`

Both of these headers are set to the number of seconds until your period is over and requests will start succeeding again. If you start receiving rate limit errors with the 429 response code, we recommend you stop sending requests to the server. You should wait to retry requests for the duration specified in the response headers.
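For example, a client can wait for the number of seconds in `retry-after` before trying again. A minimal sketch of such a wrapper (a hypothetical helper, assuming Node 18+ where `fetch` is available globally):

```javascript
// Retries a request when the server answers 429, honoring the retry-after header.
// A sketch only, not an official client.
async function fetchWithRetry(url, options = {}, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt += 1) {
    const res = await fetch(url, options);
    if (res.status !== 429) return res;
    // retry-after is the number of seconds until requests start succeeding again.
    const seconds = Number(res.headers.get('retry-after')) || 1;
    await new Promise((resolve) => setTimeout(resolve, seconds * 1000));
  }
  throw new Error(`Still rate limited after ${maxRetries} retries: ${url}`);
}
```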
Alternatively, you can implement an exponential backoff algorithm to prevent continuous errors. Failure to discontinue requests may result in being temporarily blocked from accessing the API.

### Error Types

The Data API generates standard error responses along with error codes based on the provided request and parameters. Typically, response codes within the `2XX` range signify successful requests, while those within the `4XX` range point to errors originating from the client's side. Response codes within the `5XX` range indicate problems on the server's side.

The error response body is formatted like this:

```json
{
  "message": ["Invalid address format"], // route specific error message
  "error": "Bad Request", // error type
  "statusCode": 400 // http response code
}
```

Let's go through every error code that we can respond with:

| Error Code | Error Type | Description |
| :--------- | :--------- | :---------- |
| **400** | Bad Request | Bad requests generally mean the client has passed invalid or malformed parameters. Error messages in the response could help in evaluating the error. |
| **401** | Unauthorized | When a client attempts to access resources that require authorization credentials but the client lacks proper authentication in the request, the server responds with 401. |
| **403** | Forbidden | When a client attempts to access resources with valid credentials but doesn't have the privilege to perform that action, the server responds with 403. |
| **404** | Not Found | The 404 error is typically returned when the requested URL is mistyped, or the resource has been moved, deleted, or never existed. |
| **500** | Internal Server Error | The 500 error is a generic server-side error that is returned for any uncaught and unexpected issues on the server side. This should be very rare; you may reach out to us if the problem persists for a longer duration. |
| **502** | Bad Gateway | This is an internal error indicating that the client-facing proxy or gateway received an invalid response from the upstream server. |
| **503** | Service Unavailable | The 503 error is returned for certain routes on a particular Subnet. This indicates an internal problem with our Subnet node, and may not necessarily mean the Subnet is down or affected. |

The above list is not exhaustive of all the errors that you'll receive; it is categorized on the basis of error codes. You may see route-specific errors along with detailed error messages for better evaluation of the response. Reach out to our team when you see an error in the `5XX` range for a longer duration. These errors should be very rare, and we try to fix them as soon as possible once detected.

### Pagination

When utilizing pagination for endpoints that return lists of data, such as transactions, UTXOs, or blocks, our API uses a straightforward mechanism to navigate through large datasets. We divide data into pages, each limited to the `pageSize` number of elements passed in the request. Users can navigate to subsequent pages using the page token received in the `nextPageToken` field. This method ensures efficient retrieval.
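In code, this means passing each `nextPageToken` back until the field stops appearing. A minimal sketch (hypothetical helper, assuming Node 18+ `fetch`; the `pageToken` query parameter name should be checked against the endpoint reference):

```javascript
// Collects every block across pages by following nextPageToken.
async function listAllBlocks(chainId, apiKey) {
  const base = `https://data-api.avax.network/v1/chains/${chainId}/blocks`;
  const all = [];
  let pageToken;
  do {
    const url = pageToken ? `${base}?pageToken=${pageToken}` : base;
    const res = await fetch(url, { headers: { 'x-glacier-api-key': apiKey } });
    const body = await res.json();
    all.push(...(body.blocks ?? []));
    pageToken = body.nextPageToken; // absent on the last page
  } while (pageToken);
  return all;
}
```

Because tokens expire after 24 hours, long-running exports should not pause between pages for too long.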
Routes with pagination have the following common response format:

```json
{
  "blocks": [""], // This field name will vary by route
  "nextPageToken": "3d22deea-ea64-4d30-8a1e-c2a353b67e90"
}
```

### Page Token Structure

* If there's more data in the dataset for the request, the API will include a UUID-based page token in the response. This token acts as a pointer to the next page of data.
* The UUID page token is generated randomly and uniquely for each pagination scenario, enhancing security and minimizing predictability.
* It's important to note that the page token is only returned when a next page is present. If there's no further data to retrieve, a page token will not be included in the response.
* The generated page token has an expiration window of 24 hours. Beyond this timeframe, the token will no longer be valid for accessing subsequent pages.

### Integration and Usage

To make use of the pagination system, simply examine the API response. If a UUID page token is present, it indicates that additional data is available on the next page. You can extract this token and include it in the subsequent request to access the next page of results. Please note that the subsequent request must be made within 24 hours of the original token's generation. Beyond this duration, the token will expire, and you will need to initiate a fresh request from the initial page.

By incorporating UUID page tokens, our API offers a secure, efficient, and user-friendly approach to navigating large datasets, streamlining your data retrieval process.

### Swagger API Reference

You can explore the full API definitions and interact with the endpoints in the Swagger documentation at: [https://glacier-api.avax.network/api](https://glacier-api.avax.network/api)

# Chain Components (/docs/builderkit/components/chains)

---
title: Chain Components
description: "Components for displaying and selecting blockchain networks."
---

# Chain Components

Chain components help you manage network selection and display chain information.

## ChainIcon

The ChainIcon component displays chain logos.

```tsx
import { ChainIcon } from '@avalabs/builderkit';

// Basic usage
<ChainIcon chain_id={43114} />
```

### Props

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `chain_id` | `number` | - | Chain ID to display |
| `className` | `string` | - | Additional CSS classes |

## ChainDropdown

The ChainDropdown component provides network selection functionality.

```tsx
import { ChainDropdown } from '@avalabs/builderkit';

// Basic usage
<ChainDropdown
  selected={43114}
  list={[43114, 43113]}
  onSelectionChanged={(chainId) => {
    console.log('Selected chain:', chainId);
  }}
/>
```

### Props

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `selected` | `number` | - | Currently selected chain ID |
| `list` | `number[]` | - | List of available chain IDs |
| `onSelectionChanged` | `(chain_id: number) => void` | - | Selection change callback |
| `className` | `string` | - | Additional CSS classes |

## ChainRow

The ChainRow component displays detailed chain information.

```tsx
import { ChainRow } from '@avalabs/builderkit';

// Basic usage
<ChainRow chain_id={43114} name="Avalanche" />
```

### Props

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `chain_id` | `number` | - | Chain ID |
| `name` | `string` | - | Chain name |
| `className` | `string` | - | Additional CSS classes |

# Control Components (/docs/builderkit/components/control)

---
title: Control Components
description: "Interactive control components like buttons and wallet connection interfaces."
---

# Control Components

Control components provide interactive elements for your Web3 application.

## Button

The Button component is a versatile control that supports multiple states and actions.

```tsx
import { Button } from '@avalabs/builderkit';

// Basic usage
```

Run the project using Node.js.
```bash
npm install express axios body-parser dotenv
node app.js
```

Open a Chrome tab and go to `http://localhost:3000`; you should see something like this. Then click on Connect and accept receiving push notifications. If you are using macOS, check in **System Settings** > **Notifications** that you have enabled notifications for the browser. If everything runs correctly, your browser should be registered in OneSignal. To check, go to **Audience** > **Subscriptions** and verify that your browser is registered.

### Step 3 - Backend Setup

Now, let's configure the backend to manage webhook events and dispatch notifications based on the incoming data. Here's the step-by-step process:

1. **Transaction Initiation:** When someone starts a transaction with your wallet as the destination, the webhook detects the transaction and generates an event.
2. **Event Triggering:** The backend receives the event triggered by the transaction, containing the destination address.
3. **ExternalID Retrieval:** Using the received address, the backend retrieves the corresponding `externalID` associated with that wallet.
4. **Notification Dispatch:** The final step involves sending a notification through OneSignal, utilizing the retrieved `externalID`.

#### 3.1 - Use Ngrok to tunnel the traffic to localhost

If we want to test the webhook on our computer and we are behind a proxy/NAT device or a firewall, we need a tool like Ngrok. Glacier will trigger the webhook and make a POST to the Ngrok cloud; the request is then forwarded to your local Ngrok client, which in turn forwards it to the Node.js app listening on port 3000. Go to [https://ngrok.com/](https://ngrok.com/), create a free account, download the binary, and connect to your account.
To start an HTTP tunnel forwarding to your local port 3000, run:

```bash
./ngrok http 3000
```

You should see something like this:

```
ngrok                                                          (Ctrl+C to quit)

Take our ngrok in production survey! https://forms.gle/aXiBFWzEA36DudFn6

Session Status                online
Account                       javier.toledo@avalabs.org (Plan: Free)
Version                       3.8.0
Region                        United States (us)
Latency                       48ms
Web Interface                 http://127.0.0.1:4040
Forwarding                    https://c902-2600-1700-5220-11a0-813c-d5ac-d72c-f7fd.ngrok-free.app -> http://localhost:3000

Connections                   ttl     opn     rt1     rt5     p50     p90
                              33      0       0.00    0.00    5.02    5.05

HTTP Requests
-------------
```

#### 3.2 - Create the webhook

The webhook can be created using the [AvaCloud Dashboard](https://app.avacloud.io/) or the Glacier API. For convenience, we are going to use cURL. Copy the forwarding URL generated by Ngrok and append the callback path, `/callback`; the wallet address we want to monitor goes in the `metadata.addresses` field.

```bash
curl --location 'https://glacier-api-dev.avax.network/v1/webhooks' \
--header 'x-glacier-api-key: ' \
--header 'Content-Type: application/json' \
--data '{
  "url": "https://c902-2600-1700-5220-11a0-813c-d5ac-d72c-f7fd.ngrok-free.app/callback",
  "chainId": "43113",
  "eventType": "address_activity",
  "includeInternalTxs": true,
  "includeLogs": true,
  "metadata": {
    "addresses": ["0x8ae323046633A07FB162043f28Cea39FFc23B50A"]
  },
  "name": "My wallet",
  "description": "My wallet"
}'
```

Don't forget to add your API key. If you don't have one, go to the [AvaCloud Dashboard](https://app.avacloud.io/) and create a new one.

#### 3.3 - The backend

To run the backend we need to add the environment variables in the root of your project.
For that, create an `.env` file with the following values:

```
PORT=3000
ONESIGNAL_API_KEY=
APP_ID=
```

To get the App ID from OneSignal, go to **Settings** > **Keys and IDs**.

Since we are simulating the connection to a database to retrieve the `externalID`, we need to add the wallet address and the OneSignal `externalID` to the `myDB` array.

```javascript
//simulating a DB
const myDB = [
  {
    name: 'wallet1',
    address: '0x8ae323046633A07FB162043f28Cea39FFc23B50A',
    externalID: '9c96e91d40c7a44c763fb55960e12293afbcfaf6228860550b0c1cc09cd40ac3'
  },
  {
    name: 'wallet2',
    address: '0x1f83eC80D755A87B31553f670070bFD897c40CE0',
    externalID: '0xd39d39c99305c6df2446d5cc3d584dc1eb041d95ac8fb35d4246f1d2176bf330'
  }
];
```

The code below handles the webhook event triggered when a wallet receives a transaction: it looks up the receiving address in the simulated "database" to retrieve the corresponding OneSignal `externalID`, then instructs OneSignal to dispatch a notification, and OneSignal ultimately delivers the web push notification to the browser.
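The address lookup at the heart of this flow is worth calling out on its own: EVM addresses are case-insensitive (EIP-55 only varies the checksum casing), so both sides of the comparison should be normalized with `toLowerCase()`. A minimal stand-in sketch (the `directory` Map and `lookupExternalID` helper are hypothetical; the full backend below uses the `myDB` array with `Array.find` instead):

```javascript
// Hypothetical stand-in for the database lookup used by the backend below.
// Keys are stored lower-cased so that any checksum casing of an address matches.
const directory = new Map([
  ['0x8ae323046633a07fb162043f28cea39ffc23b50a',
   '9c96e91d40c7a44c763fb55960e12293afbcfaf6228860550b0c1cc09cd40ac3'],
]);

function lookupExternalID(address) {
  // Normalize the incoming address before the lookup; return null when unknown.
  return directory.get(address.toLowerCase()) ?? null;
}
```

In production, the same normalization applies whether the store is a Map, a SQL table, or a key-value service.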
```javascript
require('dotenv').config();
const axios = require('axios');
const express = require('express');
const bodyParser = require('body-parser');
const path = require('path');

const app = express();
const port = process.env.PORT || 3000;

// Serve static website
app.use(bodyParser.json());
app.use(express.static(path.join(__dirname, './client')));

//simulating a DB
const myDB = [
  {
    name: 'wallet1',
    address: '0x8ae323046633A07FB162043f28Cea39FFc23B50A',
    externalID: '9c96e91d40c7a44c763fb55960e12293afbcfaf6228860550b0c1cc09cd40ac3'
  },
  {
    name: 'wallet2',
    address: '0x1f83eC80D755A87B31553f670070bFD897c40CE0',
    externalID: '0xd39d39c99305c6df2446d5cc3d584dc1eb041d95ac8fb35d4246f1d2176bf330'
  }
];

app.post('/callback', async (req, res) => {
  const { body } = req;
  try {
    // Acknowledge the webhook immediately, then process in the background
    res.sendStatus(200);
    handleTransaction(body.event.transaction).catch(error => {
      console.error('Error processing transaction:', error);
    });
  } catch (error) {
    console.error('Error processing transaction:', error);
    res.status(500).json({ error: 'Internal server error' });
  }
});

// Handle transaction
async function handleTransaction(transaction) {
  console.log('*****Transaction:', transaction);
  const notifications = [];

  const erc20Transfers = transaction?.erc20Transfers || [];
  for (const transfer of erc20Transfers) {
    const externalID = await getExternalID(transfer.to);
    const { symbol, valueWithDecimals } = transfer.erc20Token;
    notifications.push({
      type: transfer.type,
      sender: transfer.from,
      receiver: transfer.to,
      amount: valueWithDecimals,
      token: symbol,
      externalID
    });
  }

  if (transaction?.networkToken) {
    const { tokenSymbol, valueWithDecimals } = transaction.networkToken;
    const externalID = await getExternalID(transaction.to);
    notifications.push({
      sender: transaction.from,
      receiver: transaction.to,
      amount: valueWithDecimals,
      token: tokenSymbol,
      externalID
    });
  }

  if (notifications.length > 0) {
    sendNotifications(notifications);
  }
}

//connect to DB and return externalID
async function getExternalID(address) {
  const entry = myDB.find(entry => entry.address.toLowerCase() === address.toLowerCase());
  return entry ? entry.externalID : null;
}

// Send notifications
async function sendNotifications(notifications) {
  for (const notification of notifications) {
    // Skip receivers that are not in our DB (no externalID to target)
    if (!notification.externalID) continue;
    try {
      const data = {
        include_aliases: { external_id: [notification.externalID.toLowerCase()] },
        target_channel: 'push',
        isAnyWeb: true,
        contents: { en: `You've received ${notification.amount} ${notification.token}` },
        headings: { en: 'Core wallet' },
        name: 'Notification',
        app_id: process.env.APP_ID
      };
      console.log('data:', data);
      const response = await axios.post('https://onesignal.com/api/v1/notifications', data, {
        headers: {
          Authorization: `Bearer ${process.env.ONESIGNAL_API_KEY}`,
          'Content-Type': 'application/json'
        }
      });
      console.log('Notification sent:', response.data);
    } catch (error) {
      console.error('Error sending notification:', error);
      // Optionally, implement retry logic here
    }
  }
}

// Start the server
app.listen(port, () => {
  console.log(`App listening at http://localhost:${port}`);
});
```

You can now start your backend server by running:

```shell
node app.js
```

Send AVAX from another wallet to the wallet being monitored by the webhook and you should receive a notification with the amount of AVAX received. You can try it with any other ERC-20 token as well.

### Conclusion

In this tutorial, we've set up a frontend to connect to the Core wallet and enable push notifications using OneSignal. We've also implemented a backend to handle webhook events and send notifications based on the received data. By integrating the frontend with the backend, users can receive real-time notifications for blockchain events.

# Add addresses to EVM activity webhook (/docs/api-reference/webhook-api/webhooks/addAddressesToWebhook) --- title: Add addresses to EVM activity webhook full: true _openapi: method: PATCH route: /v1/webhooks/{id}/addresses toc: [] structuredData: headings: [] contents: - content: Add addresses to webhook.
Only valid for EVM activity webhooks. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Add addresses to webhook. Only valid for EVM activity webhooks. # Create a webhook (/docs/api-reference/webhook-api/webhooks/createWebhook) --- title: Create a webhook full: true _openapi: method: POST route: /v1/webhooks toc: [] structuredData: headings: [] contents: - content: Create a new webhook. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Create a new webhook. # Deactivate a webhook (/docs/api-reference/webhook-api/webhooks/deactivateWebhook) --- title: Deactivate a webhook full: true _openapi: method: DELETE route: /v1/webhooks/{id} toc: [] structuredData: headings: [] contents: - content: Deactivates a webhook by ID. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Deactivates a webhook by ID. # Generate or rotate a shared secret (/docs/api-reference/webhook-api/webhooks/generateOrRotateSharedSecret) --- title: Generate or rotate a shared secret full: true _openapi: method: POST route: /v1/webhooks:generateOrRotateSharedSecret toc: [] structuredData: headings: [] contents: - content: Generates a new shared secret or rotates an existing one. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Generates a new shared secret or rotates an existing one. # List addresses by EVM activity webhooks (/docs/api-reference/webhook-api/webhooks/getAddressesFromWebhook) --- title: List addresses by EVM activity webhooks full: true _openapi: method: GET route: /v1/webhooks/{id}/addresses toc: [] structuredData: headings: [] contents: - content: List addresses by webhook.
Only valid for EVM activity webhooks. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} List addresses by webhook. Only valid for EVM activity webhooks. # Get a shared secret (/docs/api-reference/webhook-api/webhooks/getSharedSecret) --- title: Get a shared secret full: true _openapi: method: GET route: /v1/webhooks:getSharedSecret toc: [] structuredData: headings: [] contents: - content: Get a previously generated shared secret. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get a previously generated shared secret. # Get a webhook by ID (/docs/api-reference/webhook-api/webhooks/getWebhook) --- title: Get a webhook by ID full: true _openapi: method: GET route: /v1/webhooks/{id} toc: [] structuredData: headings: [] contents: - content: Retrieves a webhook by ID. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Retrieves a webhook by ID. # List webhooks (/docs/api-reference/webhook-api/webhooks/listWebhooks) --- title: List webhooks full: true _openapi: method: GET route: /v1/webhooks toc: [] structuredData: headings: [] contents: - content: Lists webhooks for the user. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists webhooks for the user. # Remove addresses from EVM activity webhook (/docs/api-reference/webhook-api/webhooks/removeAddressesFromWebhook) --- title: Remove addresses from EVM activity webhook full: true _openapi: method: DELETE route: /v1/webhooks/{id}/addresses toc: [] structuredData: headings: [] contents: - content: Remove addresses from webhook. Only valid for EVM activity webhooks. --- {/* This file was generated by Fumadocs.
Do not edit this file directly. Any changes should be made by running the generation command again. */} Remove addresses from webhook. Only valid for EVM activity webhooks. # Update a webhook (/docs/api-reference/webhook-api/webhooks/updateWebhook) --- title: Update a webhook full: true _openapi: method: PATCH route: /v1/webhooks/{id} toc: [] structuredData: headings: [] contents: - content: Updates an existing webhook. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Updates an existing webhook. # Get AVAX supply information (/docs/api-reference/data-api/avax-supply/getAvaxSupply) --- title: Get AVAX supply information full: true _openapi: method: GET route: /v1/avax/supply toc: [] structuredData: headings: [] contents: - content: >- Get AVAX supply information that includes total supply, circulating supply, total p burned, total c burned, total x burned, total staked, total locked, total rewards, and last updated. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get AVAX supply information that includes total supply, circulating supply, total p burned, total c burned, total x burned, total staked, total locked, total rewards, and last updated. # Get logs for requests made by client (/docs/api-reference/data-api/data-api-usage-metrics/getApiLogs) --- title: Get logs for requests made by client full: true _openapi: method: GET route: /v1/apiLogs toc: [] structuredData: headings: [] contents: - content: >- Gets logs for requests made by client over a specified time interval for a specific organization. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets logs for requests made by client over a specified time interval for a specific organization. 
# Get usage metrics for the Data API (/docs/api-reference/data-api/data-api-usage-metrics/getApiUsageMetrics) --- title: Get usage metrics for the Data API full: true _openapi: method: GET route: /v1/apiUsageMetrics toc: [] structuredData: headings: [] contents: - content: >- Gets metrics for Data API usage over a specified time interval aggregated at the specified time-duration granularity. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets metrics for Data API usage over a specified time interval aggregated at the specified time-duration granularity. # Get usage metrics for the Primary Network RPC (/docs/api-reference/data-api/data-api-usage-metrics/getPrimaryNetworkRpcUsageMetrics) --- title: Get usage metrics for the Primary Network RPC full: true _openapi: method: GET route: /v1/primaryNetworkRpcUsageMetrics toc: [] structuredData: headings: [] contents: - content: >- Gets metrics for public Primary Network RPC usage over a specified time interval aggregated at the specified time-duration granularity. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets metrics for public Primary Network RPC usage over a specified time interval aggregated at the specified time-duration granularity. # Get usage metrics for the Subnet RPC (/docs/api-reference/data-api/data-api-usage-metrics/getSubnetRpcUsageMetrics) --- title: Get usage metrics for the Subnet RPC full: true _openapi: method: GET route: /v1/subnetRpcUsageMetrics toc: [] structuredData: headings: [] contents: - content: >- Gets metrics for public Subnet RPC usage over a specified time interval aggregated at the specified time-duration granularity. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. 
*/} Gets metrics for public Subnet RPC usage over a specified time interval aggregated at the specified time-duration granularity. # Get native token balance (/docs/api-reference/data-api/evm-balances/getNativeBalance) --- title: Get native token balance full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/balances:getNative toc: [] structuredData: headings: [] contents: - content: >- Gets native token balance of a wallet address. Balance at a given block can be retrieved with the `blockNumber` parameter. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets native token balance of a wallet address. Balance at a given block can be retrieved with the `blockNumber` parameter. # List collectible (ERC-721/ERC-1155) balances (/docs/api-reference/data-api/evm-balances/listCollectibleBalances) --- title: List collectible (ERC-721/ERC-1155) balances full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/balances:listCollectibles toc: [] structuredData: headings: [] contents: - content: >- Lists ERC-721 and ERC-1155 token balances of a wallet address. Balance for a specific contract can be retrieved with the `contractAddress` parameter. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-721 and ERC-1155 token balances of a wallet address. Balance for a specific contract can be retrieved with the `contractAddress` parameter. # List ERC-1155 balances (/docs/api-reference/data-api/evm-balances/listErc1155Balances) --- title: List ERC-1155 balances full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/balances:listErc1155 toc: [] structuredData: headings: [] contents: - content: >- Lists ERC-1155 token balances of a wallet address. 
Balance at a given block can be retrieved with the `blockNumber` parameter. Balance for a specific contract can be retrieved with the `contractAddress` parameter. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-1155 token balances of a wallet address. Balance at a given block can be retrieved with the `blockNumber` parameter. Balance for a specific contract can be retrieved with the `contractAddress` parameter. # List ERC-20 balances (/docs/api-reference/data-api/evm-balances/listErc20Balances) --- title: List ERC-20 balances full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/balances:listErc20 toc: [] structuredData: headings: [] contents: - content: >- Lists ERC-20 token balances of a wallet address. Balance at a given block can be retrieved with the `blockNumber` parameter. Balance for specific contracts can be retrieved with the `contractAddresses` parameter. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-20 token balances of a wallet address. Balance at a given block can be retrieved with the `blockNumber` parameter. Balance for specific contracts can be retrieved with the `contractAddresses` parameter. # List ERC-721 balances (/docs/api-reference/data-api/evm-balances/listErc721Balances) --- title: List ERC-721 balances full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/balances:listErc721 toc: [] structuredData: headings: [] contents: - content: >- Lists ERC-721 token balances of a wallet address. Balance for a specific contract can be retrieved with the `contractAddress` parameter. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-721 token balances of a wallet address. 
Balance for a specific contract can be retrieved with the `contractAddress` parameter. # Get block (/docs/api-reference/data-api/evm-blocks/getBlock) --- title: Get block full: true _openapi: method: GET route: /v1/chains/{chainId}/blocks/{blockId} toc: [] structuredData: headings: [] contents: - content: Gets the details of an individual block on the EVM-compatible chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets the details of an individual block on the EVM-compatible chain. # List latest blocks (/docs/api-reference/data-api/evm-blocks/getLatestBlocks) --- title: List latest blocks full: true _openapi: method: GET route: /v1/chains/{chainId}/blocks toc: [] structuredData: headings: [] contents: - content: >- Lists the latest indexed blocks on the EVM-compatible chain sorted in descending order by block timestamp. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the latest indexed blocks on the EVM-compatible chain sorted in descending order by block timestamp. # List latest blocks across all supported EVM chains (/docs/api-reference/data-api/evm-blocks/listLatestBlocksAllChains) --- title: List latest blocks across all supported EVM chains full: true _openapi: method: GET route: /v1/blocks toc: [] structuredData: headings: [] contents: - content: >- Lists the most recent blocks from all supported EVM-compatible chains. The results can be filtered by network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the most recent blocks from all supported EVM-compatible chains. The results can be filtered by network. 
# Get chain information (/docs/api-reference/data-api/evm-chains/getChainInfo) --- title: Get chain information full: true _openapi: method: GET route: /v1/chains/{chainId} toc: [] structuredData: headings: [] contents: - content: >- Gets chain information for the EVM-compatible chain if supported by AvaCloud. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets chain information for the EVM-compatible chain if supported by AvaCloud. # List all chains associated with a given address (/docs/api-reference/data-api/evm-chains/listAddressChains) --- title: List all chains associated with a given address full: true _openapi: method: GET route: /v1/address/{address}/chains toc: [] structuredData: headings: [] contents: - content: >- Lists the chains where the specified address has participated in transactions or ERC token transfers, either as a sender or receiver. The data is refreshed every 15 minutes. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the chains where the specified address has participated in transactions or ERC token transfers, either as a sender or receiver. The data is refreshed every 15 minutes. # List chains (/docs/api-reference/data-api/evm-chains/supportedChains) --- title: List chains full: true _openapi: method: GET route: /v1/chains toc: [] structuredData: headings: [] contents: - content: >- Lists the AvaCloud supported EVM-compatible chains. Filterable by network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the AvaCloud supported EVM-compatible chains. Filterable by network. 
# Get contract metadata (/docs/api-reference/data-api/evm-contracts/getContractMetadata) --- title: Get contract metadata full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address} toc: [] structuredData: headings: [] contents: - content: Gets metadata about the contract at the given address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets metadata about the contract at the given address. # Get deployment transaction (/docs/api-reference/data-api/evm-transactions/getDeploymentTransaction) --- title: Get deployment transaction full: true _openapi: method: GET route: /v1/chains/{chainId}/contracts/{address}/transactions:getDeployment toc: [] structuredData: headings: [] contents: - content: >- If the address is a smart contract, returns the transaction in which it was deployed. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} If the address is a smart contract, returns the transaction in which it was deployed. # Get transaction (/docs/api-reference/data-api/evm-transactions/getTransaction) --- title: Get transaction full: true _openapi: method: GET route: /v1/chains/{chainId}/transactions/{txHash} toc: [] structuredData: headings: [] contents: - content: Gets the details of a single transaction. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets the details of a single transaction. # List transactions for a block (/docs/api-reference/data-api/evm-transactions/getTransactionsForBlock) --- title: List transactions for a block full: true _openapi: method: GET route: /v1/chains/{chainId}/blocks/{blockId}/transactions toc: [] structuredData: headings: [] contents: - content: Lists the transactions that occurred in a given block.
--- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the transactions that occurred in a given block. # List deployed contracts (/docs/api-reference/data-api/evm-transactions/listContractDeployments) --- title: List deployed contracts full: true _openapi: method: GET route: /v1/chains/{chainId}/contracts/{address}/deployments toc: [] structuredData: headings: [] contents: - content: Lists all contracts deployed by the given address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists all contracts deployed by the given address. # List ERC-1155 transfers (/docs/api-reference/data-api/evm-transactions/listErc1155Transactions) --- title: List ERC-1155 transfers full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions:listErc1155 toc: [] structuredData: headings: [] contents: - content: Lists ERC-1155 transfers for an address. Filterable by block range. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-1155 transfers for an address. Filterable by block range. # List ERC-20 transfers (/docs/api-reference/data-api/evm-transactions/listErc20Transactions) --- title: List ERC-20 transfers full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions:listErc20 toc: [] structuredData: headings: [] contents: - content: Lists ERC-20 transfers for an address. Filterable by block range. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-20 transfers for an address. Filterable by block range.
# List ERC-721 transfers (/docs/api-reference/data-api/evm-transactions/listErc721Transactions) --- title: List ERC-721 transfers full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions:listErc721 toc: [] structuredData: headings: [] contents: - content: Lists ERC-721 transfers for an address. Filterable by block range. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-721 transfers for an address. Filterable by block range. # List internal transactions (/docs/api-reference/data-api/evm-transactions/listInternalTransactions) --- title: List internal transactions full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions:listInternals toc: [] structuredData: headings: [] contents: - content: >- Returns a list of internal transactions for an address and chain. Filterable by block range. Note that the internal transactions list only contains `CALL` or `CALLCODE` transactions with a non-zero value and `CREATE`/`CREATE2`/`CREATE3` transactions. To get a complete list of internal transactions use the `debug_` prefixed RPC methods on an archive node. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a list of internal transactions for an address and chain. Filterable by block range. Note that the internal transactions list only contains `CALL` or `CALLCODE` transactions with a non-zero value and `CREATE`/`CREATE2`/`CREATE3` transactions. To get a complete list of internal transactions use the `debug_` prefixed RPC methods on an archive node. 
# List latest transactions (/docs/api-reference/data-api/evm-transactions/listLatestTransactions) --- title: List latest transactions full: true _openapi: method: GET route: /v1/chains/{chainId}/transactions toc: [] structuredData: headings: [] contents: - content: Lists the latest transactions. Filterable by status. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the latest transactions. Filterable by status. # List the latest transactions across all supported EVM chains (/docs/api-reference/data-api/evm-transactions/listLatestTransactionsAllChains) --- title: List the latest transactions across all supported EVM chains full: true _openapi: method: GET route: /v1/transactions toc: [] structuredData: headings: [] contents: - content: >- Lists the most recent transactions from all supported EVM-compatible chains. The results can be filtered based on transaction status. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the most recent transactions from all supported EVM-compatible chains. The results can be filtered based on transaction status. # List native transactions (/docs/api-reference/data-api/evm-transactions/listNativeTransactions) --- title: List native transactions full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions:listNative toc: [] structuredData: headings: [] contents: - content: Lists native transactions for an address. Filterable by block range. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists native transactions for an address. Filterable by block range. 
# List transactions (/docs/api-reference/data-api/evm-transactions/listTransactions) --- title: List transactions full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions toc: [] structuredData: headings: [] contents: - content: >- Returns a list of transactions where the given wallet address had an on-chain interaction for the given chain. The ERC-20 transfers, ERC-721 transfers, ERC-1155, and internal transactions returned are only those where the input address had an interaction. Specifically, those lists only include entries where the input address was the sender (`from` field) or the receiver (`to` field) for the sub-transaction. Therefore the transactions returned from this list may not be complete representations of the on-chain data. For a complete view of a transaction use the `/chains/:chainId/transactions/:txHash` endpoint. Filterable by block ranges. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a list of transactions where the given wallet address had an on-chain interaction for the given chain. The ERC-20 transfers, ERC-721 transfers, ERC-1155, and internal transactions returned are only those where the input address had an interaction. Specifically, those lists only include entries where the input address was the sender (`from` field) or the receiver (`to` field) for the sub-transaction. Therefore the transactions returned from this list may not be complete representations of the on-chain data. For a complete view of a transaction use the `/chains/:chainId/transactions/:txHash` endpoint. Filterable by block ranges.
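As a concrete illustration of the route above, a request can be sketched in Node.js (Node 18+ global `fetch`). The base URL and the `x-glacier-api-key` header are carried over from the webhook examples earlier in this document and should be treated as assumptions; substitute your own values.

```javascript
// Sketch: list transactions for an address on a given chain.
// BASE mirrors the Glacier dev URL used earlier in this doc — an assumption here.
const BASE = 'https://glacier-api-dev.avax.network/v1';

function transactionsPath(chainId, address) {
  // Route from this page: /v1/chains/{chainId}/addresses/{address}/transactions
  return `${BASE}/chains/${chainId}/addresses/${address}/transactions`;
}

async function listTransactions(apiKey, chainId, address) {
  const res = await fetch(transactionsPath(chainId, address), {
    headers: { 'x-glacier-api-key': apiKey },
  });
  if (!res.ok) throw new Error(`Glacier request failed: ${res.status}`);
  return res.json();
}
```

Remember that, per the note above, the sub-transaction lists in the response only cover entries where the queried address was the sender or receiver; use the single-transaction endpoint for a complete view.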
# List ERC transfers (/docs/api-reference/data-api/evm-transactions/listTransfers) --- title: List ERC transfers full: true _openapi: method: GET route: /v1/chains/{chainId}/tokens/{address}/transfers toc: [] structuredData: headings: [] contents: - content: >- Lists ERC transfers for an ERC-20, ERC-721, or ERC-1155 contract address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC transfers for an ERC-20, ERC-721, or ERC-1155 contract address. # Get the health of the service (/docs/api-reference/data-api/health-check/data-health-check) --- title: Get the health of the service full: true _openapi: method: GET route: /v1/health-check toc: [] structuredData: headings: [] contents: - content: Check the health of the service. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Check the health of the service. # Get an ICM message (/docs/api-reference/data-api/interchain-messaging/getIcmMessage) --- title: Get an ICM message full: true _openapi: method: GET route: /v1/icm/messages/{messageId} toc: [] structuredData: headings: [] contents: - content: Gets an ICM message by teleporter message ID. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets an ICM message by teleporter message ID. # List ICM messages (/docs/api-reference/data-api/interchain-messaging/listIcmMessages) --- title: List ICM messages full: true _openapi: method: GET route: /v1/icm/messages toc: [] structuredData: headings: [] contents: - content: Lists ICM messages. Ordered by timestamp in descending order. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ICM messages. 
Ordered by timestamp in descending order. # List ICM messages by address (/docs/api-reference/data-api/interchain-messaging/listIcmMessagesByAddress) --- title: List ICM messages by address full: true _openapi: method: GET route: /v1/icm/addresses/{address}/messages toc: [] structuredData: headings: [] contents: - content: >- Lists ICM messages by address. Ordered by timestamp in descending order. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ICM messages by address. Ordered by timestamp in descending order. # Get token details (/docs/api-reference/data-api/nfts/getTokenDetails) --- title: Get token details full: true _openapi: method: GET route: /v1/chains/{chainId}/nfts/collections/{address}/tokens/{tokenId} toc: [] structuredData: headings: [] contents: - content: Gets token details for a specific token of an NFT contract. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets token details for a specific token of an NFT contract. # List tokens (/docs/api-reference/data-api/nfts/listTokens) --- title: List tokens full: true _openapi: method: GET route: /v1/chains/{chainId}/nfts/collections/{address}/tokens toc: [] structuredData: headings: [] contents: - content: Lists tokens for an NFT contract. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists tokens for an NFT contract. # Reindex NFT metadata (/docs/api-reference/data-api/nfts/reindexNft) --- title: Reindex NFT metadata full: true _openapi: method: POST route: /v1/chains/{chainId}/nfts/collections/{address}/tokens/{tokenId}:reindex toc: [] structuredData: headings: [] contents: - content: >- Triggers reindexing of token metadata for an NFT token. 
Reindexing can only be called once per hour for each NFT token. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Triggers reindexing of token metadata for an NFT token. Reindexing can only be called once per hour for each NFT token. # Get operation (/docs/api-reference/data-api/operations/getOperationResult) --- title: Get operation full: true _openapi: method: GET route: /v1/operations/{operationId} toc: [] structuredData: headings: [] contents: - content: Gets operation details for the given operation id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets operation details for the given operation id. # Create transaction export operation (/docs/api-reference/data-api/operations/postTransactionExportJob) --- title: Create transaction export operation full: true _openapi: method: POST route: /v1/operations/transactions:export toc: [] structuredData: headings: [] contents: - content: >- Trigger a transaction export operation with given parameters. The transaction export operation runs asynchronously in the background. The status of the job can be retrieved from the `/v1/operations/:operationId` endpoint using the `operationId` returned from this endpoint. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Trigger a transaction export operation with given parameters. The transaction export operation runs asynchronously in the background. The status of the job can be retrieved from the `/v1/operations/:operationId` endpoint using the `operationId` returned from this endpoint. 
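Since the export job runs in the background, a client usually polls `/v1/operations/:operationId` until the operation reaches a terminal state. The sketch below shows one way to structure that loop; the `operationStatus` field and the `RUNNING`/`PENDING` status values are assumptions for illustration — check the operations endpoint schema for the real shape.

```typescript
// Poll an operation until it leaves a running state or we give up.
// The status field name and values are assumed, not taken from the API spec.
type Operation = { operationId: string; operationStatus: string };

async function waitForOperation(
  getOperation: (id: string) => Promise<Operation>,
  operationId: string,
  { intervalMs = 5000, maxAttempts = 60 } = {}
): Promise<Operation> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const op = await getOperation(operationId);
    // Treat anything other than a running/pending state as terminal.
    if (op.operationStatus !== 'RUNNING' && op.operationStatus !== 'PENDING') {
      return op;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`operation ${operationId} did not finish within the polling budget`);
}
```

Injecting `getOperation` as a parameter keeps the polling logic testable and independent of any particular HTTP client.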
# Get asset details (/docs/api-reference/data-api/primary-network/getAssetDetails) --- title: Get asset details full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/assets/{assetId} toc: [] structuredData: headings: [] contents: - content: Gets asset details corresponding to the given asset id on the X-Chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets asset details corresponding to the given asset id on the X-Chain. # Get blockchain details by ID (/docs/api-reference/data-api/primary-network/getBlockchainById) --- title: Get blockchain details by ID full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId} toc: [] structuredData: headings: [] contents: - content: Get details of the blockchain registered on the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get details of the blockchain registered on the network. # Get chain interactions for addresses (/docs/api-reference/data-api/primary-network/getChainIdsForAddresses) --- title: Get chain interactions for addresses full: true _openapi: method: GET route: /v1/networks/{network}/addresses:listChainIds toc: [] structuredData: headings: [] contents: - content: >- Returns Primary Network chains that each address has touched in the form of an address mapped array. If an address has had any on-chain interaction for a chain, that chain's chain id will be returned. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns Primary Network chains that each address has touched in the form of an address mapped array. If an address has had any on-chain interaction for a chain, that chain's chain id will be returned. 
# Get network details (/docs/api-reference/data-api/primary-network/getNetworkDetails) --- title: Get network details full: true _openapi: method: GET route: /v1/networks/{network} toc: [] structuredData: headings: [] contents: - content: Gets network details such as validator and delegator stats. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets network details such as validator and delegator stats. # Get single validator details (/docs/api-reference/data-api/primary-network/getSingleValidatorDetails) --- title: Get single validator details full: true _openapi: method: GET route: /v1/networks/{network}/validators/{nodeId} toc: [] structuredData: headings: [] contents: - content: >- List validator details for a single validator. Filterable by validation status. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} List validator details for a single validator. Filterable by validation status. # Get Subnet details by ID (/docs/api-reference/data-api/primary-network/getSubnetById) --- title: Get Subnet details by ID full: true _openapi: method: GET route: /v1/networks/{network}/subnets/{subnetId} toc: [] structuredData: headings: [] contents: - content: Get details of the Subnet registered on the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get details of the Subnet registered on the network. # List blockchains (/docs/api-reference/data-api/primary-network/listBlockchains) --- title: List blockchains full: true _openapi: method: GET route: /v1/networks/{network}/blockchains toc: [] structuredData: headings: [] contents: - content: Lists all blockchains registered on the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Lists all blockchains registered on the network. # List delegators (/docs/api-reference/data-api/primary-network/listDelegators) --- title: List delegators full: true _openapi: method: GET route: /v1/networks/{network}/delegators toc: [] structuredData: headings: [] contents: - content: Lists details for delegators. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists details for delegators. # List L1 validators (/docs/api-reference/data-api/primary-network/listL1Validators) --- title: List L1 validators full: true _openapi: method: GET route: /v1/networks/{network}/l1Validators toc: [] structuredData: headings: [] contents: - content: >- Lists details for L1 validators. By default, returns details for all active L1 validators. Filterable by validator node ids, subnet id, and validation id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists details for L1 validators. By default, returns details for all active L1 validators. Filterable by validator node ids, subnet id, and validation id. # List subnets (/docs/api-reference/data-api/primary-network/listSubnets) --- title: List subnets full: true _openapi: method: GET route: /v1/networks/{network}/subnets toc: [] structuredData: headings: [] contents: - content: Lists all subnets registered on the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists all subnets registered on the network. 
# List validators (/docs/api-reference/data-api/primary-network/listValidators) --- title: List validators full: true _openapi: method: GET route: /v1/networks/{network}/validators toc: [] structuredData: headings: [] contents: - content: >- Lists details for validators. By default, returns details for all validators. The nodeIds parameter supports substring matching. Filterable by validation status, delegation capacity, time remaining, fee percentage, uptime performance, and subnet id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists details for validators. By default, returns details for all validators. The nodeIds parameter supports substring matching. Filterable by validation status, delegation capacity, time remaining, fee percentage, uptime performance, and subnet id. # Get balances (/docs/api-reference/data-api/primary-network-balances/getBalancesByAddresses) --- title: Get balances full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/balances toc: [] structuredData: headings: [] contents: - content: >- Gets primary network balances for one of the Primary Network chains for the supplied addresses. C-Chain balances returned are only the shared atomic memory balance. For EVM balance, use the `/v1/chains/:chainId/addresses/:addressId/balances:getNative` endpoint. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets primary network balances for one of the Primary Network chains for the supplied addresses. C-Chain balances returned are only the shared atomic memory balance. For EVM balance, use the `/v1/chains/:chainId/addresses/:addressId/balances:getNative` endpoint. 
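The validator listing above accepts several optional filters. As a small illustration of composing that query string, the helper below builds the request URL; the base URL and the exact parameter names (`nodeIds`, `validationStatus`, `subnetId`) are assumptions based on the filters the endpoint documents, so verify them against the full API reference.

```typescript
// Build a /v1/networks/{network}/validators URL with optional filters.
// Parameter names and the base URL are illustrative assumptions.
function buildValidatorsUrl(
  network: string,
  filters: { nodeIds?: string; validationStatus?: string; subnetId?: string } = {}
): string {
  const url = new URL(`https://glacier-api.avax.network/v1/networks/${network}/validators`);
  for (const [key, value] of Object.entries(filters)) {
    // Skip filters the caller did not provide.
    if (value !== undefined) url.searchParams.set(key, value);
  }
  return url.toString();
}

// Usage: nodeIds supports substring matching per the description above.
const url = buildValidatorsUrl('mainnet', {
  nodeIds: 'NodeID-7Xhw2',
  validationStatus: 'active',
});
```

Using `URL` and `searchParams` handles percent-encoding automatically, which matters once filter values contain characters beyond the NodeID alphabet.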
# Get block (/docs/api-reference/data-api/primary-network-blocks/getBlockById) --- title: Get block full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/blocks/{blockId} toc: [] structuredData: headings: [] contents: - content: >- Gets a block by block height or block hash on one of the Primary Network chains. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets a block by block height or block hash on one of the Primary Network chains. # List latest blocks (/docs/api-reference/data-api/primary-network-blocks/listLatestPrimaryNetworkBlocks) --- title: List latest blocks full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/blocks toc: [] structuredData: headings: [] contents: - content: Lists latest blocks on one of the Primary Network chains. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists latest blocks on one of the Primary Network chains. # List blocks proposed by node (/docs/api-reference/data-api/primary-network-blocks/listPrimaryNetworkBlocksByNodeId) --- title: List blocks proposed by node full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/nodes/{nodeId}/blocks toc: [] structuredData: headings: [] contents: - content: >- Lists the latest blocks proposed by a given NodeID on one of the Primary Network chains. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the latest blocks proposed by a given NodeID on one of the Primary Network chains. 
# List historical rewards (/docs/api-reference/data-api/primary-network-rewards/listHistoricalPrimaryNetworkRewards) --- title: List historical rewards full: true _openapi: method: GET route: /v1/networks/{network}/rewards toc: [] structuredData: headings: [] contents: - content: >- Lists historical rewards on the Primary Network for the supplied addresses. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists historical rewards on the Primary Network for the supplied addresses. # List pending rewards (/docs/api-reference/data-api/primary-network-rewards/listPendingPrimaryNetworkRewards) --- title: List pending rewards full: true _openapi: method: GET route: /v1/networks/{network}/rewards:listPending toc: [] structuredData: headings: [] contents: - content: >- Lists pending rewards on the Primary Network for the supplied addresses. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists pending rewards on the Primary Network for the supplied addresses. # Get transaction (/docs/api-reference/data-api/primary-network-transactions/getTxByHash) --- title: Get transaction full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/transactions/{txHash} toc: [] structuredData: headings: [] contents: - content: >- Gets the details of a single transaction on one of the Primary Network chains. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets the details of a single transaction on one of the Primary Network chains. 
# List staking transactions (/docs/api-reference/data-api/primary-network-transactions/listActivePrimaryNetworkStakingTransactions) --- title: List staking transactions full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/transactions:listStaking toc: [] structuredData: headings: [] contents: - content: >- Lists active staking transactions on the P-Chain for the supplied addresses. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists active staking transactions on the P-Chain for the supplied addresses. # List asset transactions (/docs/api-reference/data-api/primary-network-transactions/listAssetTransactions) --- title: List asset transactions full: true _openapi: method: GET route: >- /v1/networks/{network}/blockchains/{blockchainId}/assets/{assetId}/transactions toc: [] structuredData: headings: [] contents: - content: >- Lists asset transactions corresponding to the given asset id on the X-Chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists asset transactions corresponding to the given asset id on the X-Chain. # List latest transactions (/docs/api-reference/data-api/primary-network-transactions/listLatestPrimaryNetworkTransactions) --- title: List latest transactions full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/transactions toc: [] structuredData: headings: [] contents: - content: >- Lists the latest transactions on one of the Primary Network chains. Transactions are filterable by addresses, txTypes, and timestamps. When querying for latest transactions without an address parameter, filtering by txTypes and timestamps is not supported. An address filter must be provided to utilize txTypes and timestamp filters. 
For P-Chain, you can fetch all L1 validator-related transactions, such as ConvertSubnetToL1Tx and IncreaseL1ValidatorBalanceTx, using the unique L1 validation ID. These transactions are further filterable by txTypes and timestamps as well. Given that each transaction may return a large number of UTXO objects, bounded only by the maximum transaction size, the query may return fewer transactions than the provided page size. The result will contain fewer results than the page size if the number of UTXOs contained in the resulting transactions reaches a performance threshold. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the latest transactions on one of the Primary Network chains. Transactions are filterable by addresses, txTypes, and timestamps. When querying for latest transactions without an address parameter, filtering by txTypes and timestamps is not supported. An address filter must be provided to utilize txTypes and timestamp filters. For P-Chain, you can fetch all L1 validator-related transactions, such as ConvertSubnetToL1Tx and IncreaseL1ValidatorBalanceTx, using the unique L1 validation ID. These transactions are further filterable by txTypes and timestamps as well. Given that each transaction may return a large number of UTXO objects, bounded only by the maximum transaction size, the query may return fewer transactions than the provided page size. The result will contain fewer results than the page size if the number of UTXOs contained in the resulting transactions reaches a performance threshold. # List UTXOs (/docs/api-reference/data-api/primary-network-utxos/getUtxosByAddresses) --- title: List UTXOs full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/utxos toc: [] structuredData: headings: [] contents: - content: >- Lists UTXOs on one of the Primary Network chains for the supplied addresses. 
--- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists UTXOs on one of the Primary Network chains for the supplied addresses. # List UTXOs v2 - Supports querying for more addresses (/docs/api-reference/data-api/primary-network-utxos/getUtxosByAddressesV2) --- title: List UTXOs v2 - Supports querying for more addresses full: true _openapi: method: POST route: /v1/networks/{network}/blockchains/{blockchainId}/utxos toc: [] structuredData: headings: [] contents: - content: >- Lists UTXOs on one of the Primary Network chains for the supplied addresses. This v2 route supports increased page size and address limit. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists UTXOs on one of the Primary Network chains for the supplied addresses. This v2 route supports increased page size and address limit. # Get vertex (/docs/api-reference/data-api/primary-network-vertices/getVertexByHash) --- title: Get vertex full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/vertices/{vertexHash} toc: [] structuredData: headings: [] contents: - content: Gets a single vertex on the X-Chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets a single vertex on the X-Chain. # List vertices by height (/docs/api-reference/data-api/primary-network-vertices/getVertexByHeight) --- title: List vertices by height full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/vertices:listByHeight toc: [] structuredData: headings: [] contents: - content: Lists vertices at the given vertex height on the X-Chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Lists vertices at the given vertex height on the X-Chain. # List vertices (/docs/api-reference/data-api/primary-network-vertices/listLatestXChainVertices) --- title: List vertices full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/vertices toc: [] structuredData: headings: [] contents: - content: Lists latest vertices on the X-Chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists latest vertices on the X-Chain. # Aggregate Signatures (/docs/api-reference/data-api/signature-aggregator/aggregateSignatures) --- title: Aggregate Signatures full: true _openapi: method: POST route: /v1/signatureAggregator/{network}/aggregateSignatures toc: [] structuredData: headings: [] contents: - content: Aggregates Signatures for a Warp message from Subnet validators. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Aggregates Signatures for a Warp message from Subnet validators. # Get Aggregated Signatures (/docs/api-reference/data-api/signature-aggregator/getAggregatedSignatures) --- title: Get Aggregated Signatures full: true _openapi: method: GET route: /v1/signatureAggregator/{network}/aggregateSignatures/{txHash} toc: [] structuredData: headings: [] contents: - content: Get Aggregated Signatures for a P-Chain L1 related Warp Message. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get Aggregated Signatures for a P-Chain L1 related Warp Message. # ChainDropdown (/docs/builderkit/components/chains/chain-dropdown) --- title: ChainDropdown description: "A dropdown component for selecting Avalanche chains with visual chain information." 
--- # ChainDropdown The ChainDropdown component provides a styled dropdown menu for selecting between different Avalanche chains. ## Usage ```tsx import { ChainDropdown } from '@avalabs/builderkit'; // Basic usage console.log('Selected chain:', chainId)} /> // With custom styling ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `selected` | `number` | - | Currently selected chain ID | | `list` | `number[]` | - | Array of available chain IDs | | `onSelectionChanged` | `(chain_id: number) => void` | - | Callback when selection changes | | `className` | `string` | - | Additional CSS classes | ## Features - Displays chain information using ChainRow component - Maintains selected state internally - Styled dropdown with Tailwind CSS - Uses common Select components for consistent UI - Automatic chain information lookup ## Examples ### Basic Chain Selection ```tsx ``` ### With Network Switching ```tsx { try { await switchNetwork(chainId); } catch (error) { console.error('Failed to switch network:', error); } }} /> ``` ### Custom Styling ```tsx ``` ### With Chain Filtering ```tsx supportedChains.includes(id))} onSelectionChanged={handleChainChange} /> ``` ## Component Structure The dropdown consists of: 1. **Trigger**: Shows currently selected chain 2. **Content**: List of available chains 3. **Items**: Individual chain rows with icons and names ## Visual States 1. **Default**: Shows selected chain 2. **Open**: Displays list of available chains 3. **Hover**: Highlights chain under cursor 4. 
**Selected**: Indicates current selection ## Chain Information The component uses the `useChains` hook to fetch chain information, which includes: - Chain ID - Chain name - Chain icon (via ChainRow component) ## Styling Default styling includes: - Primary background color - Contrasting text color - Rounded corners - Proper padding and spacing - Hover and focus states # ChainIcon (/docs/builderkit/components/chains/chain-icon) --- title: ChainIcon description: "A component for displaying chain logos based on chain ID." --- # ChainIcon The ChainIcon component displays chain logos for Avalanche networks using a standardized path structure. ## Usage ```tsx import { ChainIcon } from '@avalabs/builderkit'; // Basic usage // With custom styling ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `chain_id` | `number` | - | Chain ID to display logo for | | `className` | `string` | - | Additional CSS classes | ## Features - Displays chain logos from a standardized path structure - Uses common Icon component for consistent display - Supports custom styling through className - Simple and lightweight implementation ## Examples ### Basic Chain Icon ```tsx ``` ### Custom Size ```tsx ``` ### In a List ```tsx
``` ### With Border ```tsx ``` ## Asset Requirements The component expects chain logo images to be available at: ``` /chains/logo/{chain_id}.png ``` For example: ``` /chains/logo/43114.png // Avalanche C-Chain /chains/logo/43113.png // Fuji Testnet ``` # ChainRow (/docs/builderkit/components/chains/chain-row) --- title: ChainRow description: "A component for displaying chain information in a row layout with icon and name." --- # ChainRow The ChainRow component displays chain information in a horizontal layout, combining a chain icon with its name. ## Usage ```tsx import { ChainRow } from '@avalabs/builderkit'; // Basic usage // With custom styling ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `chain_id` | `number` | - | Chain ID | | `name` | `string` | - | Chain name | | `className` | `string` | - | Additional CSS classes | ## Features - Combines ChainIcon with chain name - Horizontal layout with proper spacing - Flexible styling through className - Simple and lightweight implementation - Consistent alignment and spacing ## Examples ### Basic Chain Display ```tsx ``` ### In a List ```tsx
``` ### Interactive Row ```tsx ``` ### With Custom Styling ```tsx ``` ## Layout Structure The component uses a flex layout with: - ChainIcon on the left - Chain name on the right - Gap between icon and name - Center alignment of items # Collectible (/docs/builderkit/components/collectibles/collectible) --- title: Collectible description: "A component for displaying NFT collectibles with metadata and image support." --- # Collectible The Collectible component displays NFT collectibles with automatic metadata resolution and image display. ## Usage ```tsx import { Collectible } from '@avalabs/builderkit'; // Basic usage // With custom styling ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `chain_id` | `number` | - | Chain ID where the NFT exists | | `address` | `string` | - | NFT contract address | | `token_id` | `number` | - | Token ID of the NFT | | `className` | `string` | - | Additional CSS classes | ## Features - Automatic metadata resolution from IPFS - Displays NFT image and name - Shows token ID - Supports ERC721 standard - Responsive layout with fixed dimensions - Loading state handling ## Examples ### Basic NFT Display ```tsx ``` ### In a Grid Layout ```tsx
{nfts.map(nft => ( ))}
``` ### With Custom Styling ```tsx ``` ## Component Structure 1. **Container** - Fixed width of 120px - Rounded corners - Border - Overflow hidden 2. **Image** - Fixed dimensions (120x120) - Maintains aspect ratio - Centered display 3. **Info Section** - White background - Token ID display - NFT name - Proper padding ## Metadata Resolution The component automatically: 1. Fetches token URI from the contract 2. Resolves IPFS metadata 3. Extracts image URL and name 4. Handles IPFS gateway resolution # Button (/docs/builderkit/components/control/button) --- title: Button description: "A versatile button component that supports multiple states and actions." --- # Button The Button component is a versatile control that supports multiple states and actions. ## Usage ```tsx import { Button } from '@avalabs/builderkit'; // Basic usage ``` ## Validation States 1. **Initial**: No validation indicator 2. **Valid**: No visual feedback (clean state) 3. **Invalid**: Red ring around input 4. **Disabled**: Grayed out appearance # AmountInput (/docs/builderkit/components/input/amount-input) --- title: AmountInput description: "A specialized input component for handling numeric amounts with proper formatting." --- # AmountInput The AmountInput component provides a specialized input field for handling numeric amounts with automatic formatting and validation. 
## Usage ```tsx import { AmountInput } from '@avalabs/builderkit'; import { DollarSign } from 'lucide-react'; // Basic usage console.log('Amount:', value)} /> // With currency icon } onChange={handleAmountChange} /> ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `type` | `string` | - | Input type (usually "text") | | `placeholder` | `string` | - | Placeholder text | | `value` | `string` | `""` | Controlled input value | | `disabled` | `boolean` | `false` | Whether the input is disabled | | `icon` | `ReactNode` | - | Optional icon element | | `onChange` | `(value: string) => void` | - | Called with formatted amount | | `className` | `string` | - | Additional CSS classes | ## Features - Automatic number formatting - Prevents invalid number inputs - Handles decimal points appropriately - Optional icon support - Controlled value management - Background color inheritance ## Examples ### Basic Amount Input ```tsx ``` ### With Maximum Value ```tsx { if (parseFloat(value) <= maxAmount) { setAmount(value); } }} /> ``` ### With Currency Symbol ```tsx $} onChange={handleAmount} className="text-right" /> ``` ### In a Form ```tsx
``` ## Number Formatting The component uses `parseNumberInput` utility to: - Allow only numeric input - Handle decimal points - Remove leading zeros - Maintain proper number format # Input (/docs/builderkit/components/input/input) --- title: Input description: "A base input component with icon support and controlled value management." --- # Input The Input component provides a base text input with support for icons, controlled values, and custom styling. ## Usage ```tsx import { Input } from '@avalabs/builderkit'; import { Search } from 'lucide-react'; // Basic usage console.log('Input value:', value)} /> // With icon } onChange={handleSearch} /> // Disabled state ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `type` | `string` | - | Input type (e.g., 'text', 'number') | | `placeholder` | `string` | - | Placeholder text | | `value` | `any` | `""` | Controlled input value | | `disabled` | `boolean` | `false` | Whether the input is disabled | | `icon` | `ReactNode` | - | Optional icon element | | `onChange` | `(value: any) => void` | - | Value change callback | | `className` | `string` | - | Additional CSS classes | ## Features - Controlled input value management - Optional icon support - Disabled state handling - Flexible styling with Tailwind CSS - Consistent padding and spacing - Background inheritance for seamless integration ## Examples ### Text Input ```tsx ``` ### Search Input ```tsx } onChange={handleSearch} className="bg-gray-100" /> ``` ### Number Input ```tsx ``` ### With Validation ```tsx ``` ## Layout Structure The component uses a flex container with: - Optional icon on the left - Full-width input field - Consistent padding and gap - Background color inheritance # MultiChainTokenInput (/docs/builderkit/components/input/multi-chain-token-input) --- title: MultiChainTokenInput description: "A token selection component that supports tokens across multiple chains." 
--- # MultiChainTokenInput The MultiChainTokenInput component provides a token selection interface that allows users to select tokens from different chains. ## Usage ```tsx import { MultiChainTokenInput } from '@avalabs/builderkit'; // Basic usage console.log('Selected token:', token)} showBalances={true} /> ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `selected` | `{ address: string, chain_id: number } & Partial` | - | Currently selected token with chain | | `list` | `TokenItem[]` | - | Array of tokens across all chains | | `onSelectionChanged` | `(token: TokenItem) => void` | - | Called when token selection changes | | `showBalances` | `boolean` | - | Whether to show token balances | | `className` | `string` | - | Additional CSS classes | ## Features - Token selection across multiple chains - Chain selection interface - Displays token with chain icon - Shows token balances (optional) - Searchable token list per chain - Responsive dialog design ## Examples ### Basic Multi-Chain Selection ```tsx ``` ### With Balances ```tsx ``` ### In a Cross-Chain Form ```tsx
// Destination-token picker for a cross-chain form. `sourceToken`,
// `destinationToken`, `setDestinationToken`, and `tokens` are assumed to
// come from the surrounding form state; tokens on the source chain are excluded.
<MultiChainTokenInput
  selected={destinationToken}
  list={tokens.filter((t) => t.chain_id !== sourceToken.chain_id)}
  onSelectionChanged={setDestinationToken}
  showBalances={true}
/>
``` ### Custom Styling ```tsx ``` ## Component Structure 1. **Trigger** - TokenChip with chain icon - Chevron down indicator - Click to open dialog 2. **Dialog** - Header with back button - Chain selection row - Search input - Chain-specific token list - Token balances (if enabled) ## Chain Selection The component automatically extracts unique chain IDs from the token list and displays them as selectable options: ```tsx let chains = Array.from(new Set(list.map(t => t.chain_id))); ``` # TokenInput (/docs/builderkit/components/input/token-input) --- title: TokenInput description: "A token selection component with a searchable token list and balance display." --- # TokenInput The TokenInput component provides a token selection interface with a modal dialog containing a searchable list of tokens. ## Usage ```tsx import { TokenInput } from '@avalabs/builderkit'; // Basic usage console.log('Selected token:', token)} showBalances={true} /> ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `selected` | `{ address: string } & Partial` | - | Currently selected token | | `chain_id` | `number` | - | Chain ID for the token list | | `list` | `TokenItem[]` | - | Array of available tokens | | `onSelectionChanged` | `(token: TokenItem) => void` | - | Called when token selection changes | | `showBalances` | `boolean` | - | Whether to show token balances | | `className` | `string` | - | Additional CSS classes | ## Features - Token selection through modal dialog - Displays token icon and symbol - Shows token balances (optional) - Searchable token list - Uses TokenChip for selected token display - Responsive dialog design ## Examples ### Basic Token Selection ```tsx ``` ### With Balances ```tsx ``` ### In a Form ```tsx
// "To"-token picker for a swap form. `fromToken`, `toToken`, `setToToken`,
// and `tokens` are assumed form state; the token already selected on the
// "from" side is excluded from the list.
<TokenInput
  selected={toToken}
  chain_id={43114}
  list={tokens.filter((t) => t.address !== fromToken.address)}
  onSelectionChanged={setToToken}
  showBalances={true}
/>
``` ### Custom Styling ```tsx ``` ## Component Structure 1. **Trigger** - TokenChip showing selected token - Chevron down indicator - Click to open dialog 2. **Dialog** - Header with back button - Search input - Scrollable token list - Token balances (if enabled) ## Token Item Structure ```tsx type TokenItem = { chain_id: number; address: string; name: string; symbol: string; balance?: BigNumber; } ``` # TokenChip (/docs/builderkit/components/tokens/token-chip) --- title: TokenChip description: "A compact component for displaying token information with optional chain icon and copy functionality." --- # TokenChip The TokenChip component provides a compact way to display token information, including token icon, name, symbol, and optional chain information. ## Usage ```tsx import { TokenChip } from '@avalabs/builderkit'; // Basic usage // With chain icon and copy functionality ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `chain_id` | `number` | - | Chain ID of the token | | `address` | `string` | - | Token contract address | | `symbol` | `string` | - | Token symbol | | `name` | `string` | - | Token name (optional) | | `showChainIcon` | `boolean` | `false` | Whether to show chain icon alongside token icon | | `showName` | `boolean` | `true` | Whether to show token name (if false, shows only symbol) | | `allowCopyToClipboard` | `boolean` | `false` | Enable copy address to clipboard functionality | | `className` | `string` | - | Additional CSS classes | ## Features - Displays token icon with optional chain icon - Shows token name and/or symbol - Copy to clipboard functionality with visual feedback - Flexible layout options - Tailwind CSS styling support ## Examples ### Basic Token Display ```tsx ``` ### With Chain Icon ```tsx ``` ### Symbol Only ```tsx ``` ### With Copy Functionality ```tsx ``` ### Custom Styling ```tsx ``` ## Visual States 1. **Default**: Shows token information with icon 2. 
**Copy Button**: Shows copy icon when `allowCopyToClipboard` is true 3. **Copy Confirmation**: Shows check icon briefly after copying 4. **Chain Display**: Shows chain icon when `showChainIcon` is true # TokenIconWithChain (/docs/builderkit/components/tokens/token-icon-with-chain) --- title: TokenIconWithChain description: "A component for displaying token logos with an overlaid chain icon." --- # TokenIconWithChain The TokenIconWithChain component displays a token logo with its corresponding chain icon overlaid in the bottom-right corner. ## Usage ```tsx import { TokenIconWithChain } from '@avalabs/builderkit'; // Basic usage // With custom styling ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `chain_id` | `number` | - | Chain ID of the token | | `address` | `string` | - | Token contract address | | `className` | `string` | - | Additional CSS classes (applied to both icons) | ## Features - Combines TokenIcon with chain logo - Chain icon is automatically positioned and scaled - Supports custom styling through className - Uses consistent icon paths for both token and chain logos - Responsive layout with Tailwind CSS ## Examples ### Basic Usage ```tsx ``` ### Custom Size ```tsx ``` ### In a Token List ```tsx
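// Illustrative sketch — `tokens` is a hypothetical array of
// { chain_id, address, symbol } objects. Renders one icon-with-chain per row.
{tokens.map((token) => (
  <div key={token.address} className="flex items-center gap-2">
    <TokenIconWithChain chain_id={token.chain_id} address={token.address} />
    <span>{token.symbol}</span>
  </div>
))}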
``` ### With Custom Border ```tsx ``` ## Asset Requirements The component requires two image assets: 1. Token logo at: ``` /tokens/logo/{chain_id}/{address}.png ``` 2. Chain logo at: ``` /chains/logo/{chain_id}.png ``` For example: ``` /tokens/logo/43114/0x1234567890123456789012345678901234567890.png /chains/logo/43114.png ``` ## Layout Details The component uses the following layout structure: - Main token icon as the primary element - Chain icon positioned at bottom-right - Chain icon scaled to 50% of the token icon size - Chain icon has a white border for visual separation # TokenIcon (/docs/builderkit/components/tokens/token-icon) --- title: TokenIcon description: "A component for displaying token logos." --- # TokenIcon The TokenIcon component displays token logos based on the token's chain ID and contract address. ## Usage ```tsx import { TokenIcon } from '@avalabs/builderkit'; // Basic usage // With custom styling ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `chain_id` | `number` | - | Chain ID of the token | | `address` | `string` | - | Token contract address | | `className` | `string` | - | Additional CSS classes | ## Features - Displays token logos from a standardized path structure - Supports custom styling through className - Uses common Icon component for consistent display - Follows `/tokens/logo/{chain_id}/{address}.png` path convention ## Examples ### Basic Token Icon ```tsx ``` ### Custom Size ```tsx ``` ### In a List ```tsx
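// Illustrative sketch — `tokens` is a hypothetical token array; each entry
// provides the chain_id and address that TokenIcon resolves to a logo path.
<ul>
  {tokens.map((token) => (
    <li key={token.address}>
      <TokenIcon chain_id={token.chain_id} address={token.address} className="w-6 h-6" />
    </li>
  ))}
</ul>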
``` ### With Border ```tsx ``` ## Asset Requirements The component expects token logo images to be available at: ``` /tokens/logo/{chain_id}/{address}.png ``` For example: ``` /tokens/logo/43114/0x1234567890123456789012345678901234567890.png ``` # TokenList (/docs/builderkit/components/tokens/token-list) --- title: TokenList description: "A searchable list component for displaying and selecting tokens with optional balance information." --- # TokenList The TokenList component provides a searchable list of tokens with support for balance display and token selection. ## Usage ```tsx import { TokenList } from '@avalabs/builderkit'; // Basic usage console.log('Selected token:', address)} /> // With balances and selected token ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `chain_id` | `number` | - | Chain ID for the token list | | `list` | `TokenItem[]` | - | Array of tokens to display | | `onClick` | `(address: string) => void` | - | Callback when a token is selected | | `selected` | `{ address: string }` | - | Currently selected token (optional) | | `showBalances` | `boolean` | `false` | Whether to show token balances | | `className` | `string` | - | Additional CSS classes | ## Features - Search by token name, symbol, or contract address - Displays token balances (optional) - Highlights selected token - Supports token whitelisting - Responsive scrollable list - Search input with clear functionality ## Examples ### Basic Token List ```tsx ``` ### With Balances ```tsx ``` ### With Selected Token ```tsx ``` ## Token Item Structure Each token in the list should follow this structure: ```tsx type TokenItem = { chain_id: number; address: string; name: string; symbol: string; balance?: BigNumber; // Optional, used when showBalances is true whitelisted?: boolean; // Optional, for token verification } ``` ## States 1. **Loading**: Shows loading state when fetching balances 2. **Empty Search**: Displays all tokens in the list 3. 
**Search Results**: Shows filtered tokens based on search input 4. **Selected**: Highlights the currently selected token 5. **Non-whitelisted**: Shows warning for non-whitelisted tokens ## Search Functionality The component supports searching by: - Token name - Token symbol - Exact contract address When searching by contract address: - Must be a valid Ethereum address - Shows token details if found in the list - Can show import option for non-whitelisted tokens # TokenRow (/docs/builderkit/components/tokens/token-row) --- title: TokenRow description: "A component for displaying token information in a row layout with optional balance display." --- # TokenRow The TokenRow component displays token information in a row layout, combining a TokenChip with optional balance information. ## Usage ```tsx import { TokenRow } from '@avalabs/builderkit'; // Basic usage // With balance and click handler handleTokenSelect()} /> ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `chain_id` | `number` | - | Chain ID of the token | | `address` | `string` | - | Token contract address | | `name` | `string` | - | Token name | | `symbol` | `string` | - | Token symbol | | `balance` | `BigNumber` | - | Token balance (optional) | | `onClick` | `() => void` | - | Click handler (optional) | | `className` | `string` | - | Additional CSS classes | ## Features - Displays token information using TokenChip - Shows token balance with 3 decimal places - Supports click interactions - Flexible styling with Tailwind CSS - Responsive layout with proper alignment ## Examples ### Basic Display ```tsx ``` ### With Balance ```tsx ``` ### Interactive Row ```tsx selectToken("AVAX")} className="hover:bg-gray-100 cursor-pointer" /> ``` ### In a List ```tsx
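// Illustrative sketch — `tokens` (with optional BigNumber balances) and
// `selectToken` are hypothetical; each row is clickable and shows its balance.
{tokens.map((token) => (
  <TokenRow
    key={token.address}
    chain_id={token.chain_id}
    address={token.address}
    name={token.name}
    symbol={token.symbol}
    balance={token.balance}
    onClick={() => selectToken(token.address)}
    className="hover:bg-gray-100 cursor-pointer"
  />
))}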
``` ## Layout Structure The component uses a flex layout with: - Left side: TokenChip (icon, name, and symbol) - Right side (if balance provided): - Token balance (3 decimal places) - USD value (currently hardcoded to $0) - Proper spacing and alignment - Optional hover and click interactions # TransactionButton (/docs/builderkit/components/transaction/transaction-button) --- title: TransactionButton description: "A button component that handles blockchain transaction submission with built-in status tracking and notifications." --- # TransactionButton The TransactionButton component handles individual transaction submission with built-in status tracking and notifications. ## Usage ```tsx import { TransactionButton } from '@avalabs/builderkit'; // Basic usage // With callbacks { console.log('Transaction sent at:', timestamp); }} onTransactionConfirmed={(receipt) => { console.log('Transaction confirmed:', receipt); }} /> ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `chain_id` | `number` | - | Chain ID for the transaction | | `title` | `string` | - | Transaction title | | `description` | `string` | - | Transaction description | | `data` | `any` | - | Transaction data | | `onTransactionSent` | `(timestamp: number) => void` | - | Called when transaction is sent | | `onTransactionConfirmed` | `(receipt: any) => void` | - | Called when transaction is confirmed | | `className` | `string` | - | Additional CSS classes | ## Features - Automatic wallet connection handling - Network switching support - Transaction status tracking - Toast notifications with explorer links - Loading states and error handling ## Examples ### Basic Transaction ```tsx ``` ### With Custom Styling ```tsx ``` ## Component States 1. **Not Connected** - Shows "Connect Wallet" button - Handles wallet connection 2. **Wrong Network** - Shows "Wrong Network" button - Handles network switching 3. 
**Ready** - Shows transaction button - Enables transaction submission 4. **Processing** - Shows loading indicator - Tracks transaction status ## Notifications The component provides toast notifications for: - Transaction sent - Transaction confirmed - Transaction failed Each notification includes: - Timestamp - Transaction explorer link - Appropriate styling # TransactionManager (/docs/builderkit/components/transaction/transaction-manager) --- title: TransactionManager description: "A component that orchestrates multiple blockchain transactions in sequence." --- # TransactionManager The TransactionManager component orchestrates multiple blockchain transactions in sequence, handling the flow between steps and providing status tracking. ## Usage ```tsx import { TransactionManager } from '@avalabs/builderkit'; // Basic usage { console.log('Step completed at:', timestamp); }} onTransactionConfirmed={(receipt) => { console.log('All transactions completed:', receipt); }} /> ``` ## Props | Prop | Type | Default | Description | |------|------|---------|-------------| | `chain_id` | `number` | - | Chain ID for the transactions | | `transactions` | `TransactionProps[]` | - | Array of transactions to process | | `onTransactionSent` | `(timestamp: number) => void` | - | Called when a step completes | | `onTransactionConfirmed` | `(receipt: any) => void` | - | Called when all transactions complete | | `className` | `string` | - | Additional CSS classes | ## Features - Sequential transaction execution - Step-by-step progress tracking - Automatic state management - Transaction dependency handling - Consistent error handling - Status notifications for each step ## Examples ### Token Approval and Transfer ```tsx ``` ### Multi-step Protocol Interaction ```tsx ``` ## Transaction Flow 1. **Initialization** - Validates transaction array - Sets up initial state - Prepares first transaction 2. 
**Step Execution** - Processes current transaction - Waits for confirmation - Updates progress state 3. **Step Transition** - Validates completion - Moves to next transaction - Updates UI state 4. **Completion** - Confirms all steps finished - Triggers completion callback - Resets internal state # Deploy Custom VM (/docs/tooling/avalanche-cli/create-avalanche-nodes/deploy-custom-vm) --- title: Deploy Custom VM description: This page demonstrates how to deploy a custom VM into cloud-based validators using Avalanche-CLI. --- Currently, only Fuji network and Devnets are supported. ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk. ## Prerequisites Before we begin, you will need to have: - Created a cloud server node as described [here](/docs/tooling/create-avalanche-nodes/run-validators-aws) - Created a Custom VM, as described [here](/docs/virtual-machines). - (Ignore for Devnet) Set up a key to be able to pay for transaction Fees, as described [here](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-fuji-testnet). Currently, only AWS & GCP cloud services are supported. Deploying the VM[​](#deploying-the-vm "Direct link to heading") --------------------------------------------------------------- We will be deploying the [MorpheusVM](https://github.com/ava-labs/hypersdk/tree/main/examples/morpheusvm) example built with the HyperSDK. The following settings will be used: - Repo url: `https://github.com/ava-labs/hypersdk/` - Branch Name: `vryx-poc` - Build Script: `examples/morpheusvm/scripts/build.sh` The CLI needs a public repo url in order to be able to download and install the custom VM on cloud. ### Genesis File[​](#genesis-file "Direct link to heading") The following contents will serve as the chain genesis. They were generated using `morpheus-cli` as shown [here](https://github.com/ava-labs/hypersdk/blob/main/examples/morpheusvm/scripts/run.sh). 
Save it into a file with path `` (for example `~/morpheusvm_genesis.json`): ```bash { "stateBranchFactor":16, "minBlockGap":1000, "minUnitPrice":[1,1,1,1,1], "maxChunkUnits":[1800000,18446744073709551615,18446744073709551615,18446744073709551615,18446744073709551615], "epochDuration":60000, "validityWindow":59000, "partitions":8, "baseUnits":1, "baseWarpUnits":1024, "warpUnitsPerSigner":128, "outgoingWarpComputeUnits":1024, "storageKeyReadUnits":5, "storageValueReadUnits":2, "storageKeyAllocateUnits":20, "storageValueAllocateUnits":5, "storageKeyWriteUnits":10, "storageValueWriteUnits":3, "customAllocation": [ { "address":"morpheus1qrzvk4zlwj9zsacqgtufx7zvapd3quufqpxk5rsdd4633m4wz2fdjk97rwu", "balance":3000000000000000000 }, {"address":"morpheus1qryyvfut6td0l2vwn8jwae0pmmev7eqxs2vw0fxpd2c4lr37jj7wvrj4vc3", "balance":3000000000000000000 }, {"address":"morpheus1qp52zjc3ul85309xn9stldfpwkseuth5ytdluyl7c5mvsv7a4fc76g6c4w4", "balance":3000000000000000000 }, {"address":"morpheus1qzqjp943t0tudpw06jnvakdc0y8w790tzk7suc92aehjw0epvj93s0uzasn", "balance":3000000000000000000 }, {"address":"morpheus1qz97wx3vl3upjuquvkulp56nk20l3jumm3y4yva7v6nlz5rf8ukty8fh27r", "balance":3000000000000000000 } ] } ``` Create the Avalanche L1[​](#create-the-avalanche-l1 "Direct link to heading") ----------------------------------------------------------------- Let's create an Avalanche L1 called ``, with custom VM binary and genesis. ```bash avalanche blockchain create ``` Choose custom ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Choose your VM: Subnet-EVM ▸ Custom ``` Provide path to genesis: ```bash ✗ Enter path to custom genesis: ``` Provide the source code repo url: ```bash ✗ Source code repository URL: https://github.com/ava-labs/hypersdk/ ``` Set the branch and finally set the build script: ```bash ✗ Build script: examples/morpheusvm/scripts/build.sh ``` CLI will generate a locally compiled binary, and then create the Avalanche L1. ```bash Cloning into ... 
Successfully created subnet configuration
```

## Deploy Avalanche L1

For this example, we will deploy the Avalanche L1 and blockchain on Fuji. Run:

```bash
avalanche blockchain deploy 
```

Choose Fuji:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Choose a network to deploy on:
    Local Network
  ▸ Fuji
    Mainnet
```

Use the stored key:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Which key source should be used to pay transaction fees?:
  ▸ Use stored key
    Use ledger
```

Choose `` as the key to use to pay the fees:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Which stored key should be used to pay transaction fees?:
  ▸ 
```

Use the same key as the control key for the Avalanche L1:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? How would you like to set your control keys?:
  ▸ Use fee-paying key
    Use all stored keys
    Custom list
```

The successful creation of our Avalanche L1 and blockchain is confirmed by the following output:

```bash
Your Subnet's control keys: [P-fuji1dlwux652lkflgz79g3nsphjzvl6t35xhmunfk1]
Your subnet auth keys for chain creation: [P-fuji1dlwux652lkflgz79g3nsphjzvl6t35xhmunfk1]
Subnet has been created with ID: RU72cWmBmcXber6ZBPT7R5scFFuVSoFRudcS3vayf3L535ZE3
Now creating blockchain...
+--------------------+----------------------------------------------------+
| DEPLOYMENT RESULTS |                                                    |
+--------------------+----------------------------------------------------+
| Chain Name         | blockchainName                                     |
+--------------------+----------------------------------------------------+
| Subnet ID          | RU72cWmBmcXber6ZBPT7R5scFFuVSoFRudcS3vayf3L535ZE3  |
+--------------------+----------------------------------------------------+
| VM ID              | srEXiWaHq58RK6uZMmUNaMF2FzG7vPzREsiXsptAHk9gsZNvN  |
+--------------------+----------------------------------------------------+
| Blockchain ID      | 2aDgZRYcSBsNoLCsC8qQH6iw3kUSF5DbRHM4sGEqVKwMSfBDRf |
+--------------------+----------------------------------------------------+
| P-Chain TXID       |                                                    |
+--------------------+----------------------------------------------------+
```

Set the Config Files[​](#set-the-config-files "Direct link to heading")
-----------------------------------------------------------------------

Avalanche-CLI supports uploading the full set of configuration files for a blockchain:

- Genesis File
- Blockchain Config
- Avalanche L1 Config
- Network Upgrades
- AvalancheGo Config

The following example uses all of them, but the user can decide to provide a subset of those.
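Each of these is a plain JSON file, and the interactive `configure` flow only reads a file after you provide its path. As an optional sanity check (not part of the CLI itself), you can run each file through a JSON parser first; the temp path below is a stand-in for your real file, such as `~/morpheusvm_avago.json`:

```shell
# Stand-in config file (substitute your real path, e.g. ~/morpheusvm_avago.json).
cat > /tmp/example-node-config.json <<'EOF'
{ "log-level": "INFO" }
EOF

# Fail fast on malformed JSON before the CLI ever sees the file.
python3 -m json.tool /tmp/example-node-config.json > /dev/null && echo "valid JSON"
```

If the file is malformed, `json.tool` prints the parse error and the final `echo` never runs.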
### AvalancheGo Flags[​](#avalanchego-flags "Direct link to heading") Save the following content (as defined [here](https://github.com/ava-labs/hypersdk/blob/vryx-poc/examples/morpheusvm/tests/e2e/e2e_test.go)) into a file with path `` (for example `~/morpheusvm_avago.json`): ```json { "log-level":"INFO", "log-display-level":"INFO", "proposervm-use-current-height":true, "throttler-inbound-validator-alloc-size":"10737418240", "throttler-inbound-at-large-alloc-size":"10737418240", "throttler-inbound-node-max-processing-msgs":"1000000", "throttler-inbound-node-max-at-large-bytes":"10737418240", "throttler-inbound-bandwidth-refill-rate":"1073741824", "throttler-inbound-bandwidth-max-burst-size":"1073741824", "throttler-inbound-cpu-validator-alloc":"100000", "throttler-inbound-cpu-max-non-validator-usage":"100000", "throttler-inbound-cpu-max-non-validator-node-usage":"100000", "throttler-inbound-disk-validator-alloc":"10737418240000", "throttler-outbound-validator-alloc-size":"10737418240", "throttler-outbound-at-large-alloc-size":"10737418240", "throttler-outbound-node-max-at-large-bytes":"10737418240", "consensus-on-accept-gossip-validator-size":"10", "consensus-on-accept-gossip-peer-size":"10", "network-compression-type":"zstd", "consensus-app-concurrency":"128", "profile-continuous-enabled":true, "profile-continuous-freq":"1m", "http-host":"", "http-allowed-origins": "*", "http-allowed-hosts": "*" } ``` Then set the Avalanche L1 to use it by executing: ```bash avalanche blockchain configure blockchainName ``` Select node-config.json: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Which configuration file would you like to provide?: ▸ node-config.json chain.json subnet.json per-node-chain.json ``` Provide the path to the AvalancheGo config file: ```bash ✗ Enter the path to your configuration file: ``` Finally, choose no: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? 
Would you like to provide the chain.json file as well?: ▸ No Yes File ~/.avalanche-cli/subnets/blockchainName/node-config.json successfully written ``` ### Blockchain Config[​](#blockchain-config "Direct link to heading") `morpheus-cli` as shown [here](https://github.com/ava-labs/hypersdk/blob/vryx-poc/examples/morpheusvm/scripts/run.sh). Save the following content (generated by this [script](https://github.com/ava-labs/hypersdk/blob/vryx-poc/examples/morpheusvm/scripts/run.sh)) in a known file path (for example `~/morpheusvm_chain.json`): ```json { "chunkBuildFrequency": 250, "targetChunkBuildDuration": 250, "blockBuildFrequency": 100, "mempoolSize": 2147483648, "mempoolSponsorSize": 10000000, "authExecutionCores": 16, "precheckCores": 16, "actionExecutionCores": 8, "missingChunkFetchers": 48, "verifyAuth": true, "authRPCCores": 48, "authRPCBacklog": 10000000, "authGossipCores": 16, "authGossipBacklog": 10000000, "chunkStorageCores": 16, "chunkStorageBacklog": 10000000, "streamingBacklogSize": 10000000, "continuousProfilerDir":"/home/ubuntu/morpheusvm-profiles", "logLevel": "INFO" } ``` Then set the Avalanche L1 to use it by executing: ```bash avalanche blockchain configure blockchainName ``` Select chain.json: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Which configuration file would you like to provide?: node-config.json ▸ chain.json subnet.json per-node-chain.json ``` Provide the path to the blockchain config file: ```bash ✗ Enter the path to your configuration file: ~/morpheusvm_chain.json ``` Finally choose no: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? 
Would you like to provide the subnet.json file as well?: ▸ No Yes File ~/.avalanche-cli/subnets/blockchainName/chain.json successfully written ``` ### Avalanche L1 Config[​](#avalanche-l1-config "Direct link to heading") Save the following content (generated by this [script](https://github.com/ava-labs/hypersdk/blob/vryx-poc/examples/morpheusvm/scripts/run.sh)) in a known path (for example `~/morpheusvm_subnet.json`): ```json { "proposerMinBlockDelay": 0, "proposerNumHistoricalBlocks": 512 } ``` Then set the Avalanche L1 to use it by executing: ```bash avalanche blockchain configure blockchainName ``` Select `subnet.json`: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Which configuration file would you like to provide?: node-config.json chain.json ▸ subnet.json per-node-chain.json ``` Provide the path to the Avalanche L1 config file: ```bash ✗ Enter the path to your configuration file: ~/morpheusvm_subnet.json ``` Choose no: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Would you like to provide the chain.json file as well?: ▸ No Yes File ~/.avalanche-cli/subnets/blockchainName/subnet.json successfully written ``` ### Network Upgrades[​](#network-upgrades "Direct link to heading") Save the following content (currently with no network upgrades) in a known path (for example `~/morpheusvm_upgrades.json`): Then set the Avalanche L1 to use it by executing: ```bash avalanche blockchain upgrade import blockchainName ``` Provide the path to the network upgrades file: ```bash ✗ Provide the path to the upgrade file to import: ~/morpheusvm_upgrades.json ``` Deploy Our Custom VM[​](#deploy-our-custom-vm "Direct link to heading") ----------------------------------------------------------------------- To deploy our Custom VM, run: ```bash avalanche node sync ``` ```bash Node(s) successfully started syncing with Subnet! ``` Your custom VM is successfully deployed! 
You can also use `avalanche node update blockchain ` to reinstall the binary when the branch is updated, or to update the config files.

# Execute SSH Command (/docs/tooling/avalanche-cli/create-avalanche-nodes/execute-ssh-commands)

---
title: Execute SSH Command
description: This page demonstrates how to execute an SSH command on a Cluster or Node managed by Avalanche-CLI
---

ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk.

## Prerequisites

Before we begin, you will need to have a cluster managed by CLI, either a [Fuji Cluster using AWS](/docs/tooling/create-avalanche-nodes/run-validators-aws), a [Fuji Cluster using GCP](/docs/tooling/create-avalanche-nodes/run-validators-gcp), or a [Devnet](/docs/tooling/create-avalanche-nodes/setup-devnet).

SSH Warning[​](#ssh-warning "Direct link to heading")
-----------------------------------------------------

Note: An expected warning may be seen when executing a command on a given cluster for the first time:

```bash
Warning: Permanently added 'IP' (ED25519) to the list of known hosts.
```

## Get SSH Connection Instructions for All Clusters

Just execute `avalanche node ssh`:

```bash
avalanche node ssh
Cluster "<clusterName>" (Devnet)
[i-0cf58a280bf3ef9a1] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem
[i-0e2abd71a586e56b4] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem
[i-027417a4f2ca0a478] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem
[i-0360a867aa295d8a4] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem
[i-0759b102acfd5b585] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem
```

## Get the AvalancheGo PID for All Nodes in `<clusterName>`

```bash
avalanche node ssh <clusterName> pgrep avalanchego
[i-0cf58a280bf3ef9a1] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem pgrep avalanchego
14508
[i-0e2abd71a586e56b4] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem pgrep avalanchego
14555
[i-027417a4f2ca0a478] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem pgrep avalanchego
14545
[i-0360a867aa295d8a4] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem pgrep avalanchego
14531
[i-0759b102acfd5b585] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem pgrep avalanchego
14555
```

Please note that commands run via `ssh` on a cluster are executed sequentially by default. To run a command on all nodes at the same time, use the `--parallel=true` flag.

## Get the AvalancheGo Configuration for All Nodes in `<clusterName>`

```bash
avalanche node ssh <clusterName> cat /home/ubuntu/.avalanchego/configs/node.json
[i-0cf58a280bf3ef9a1] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem cat /home/ubuntu/.avalanchego/configs/node.json
{
  "bootstrap-ids": "",
  "bootstrap-ips": "",
  "genesis-file": "/home/ubuntu/.avalanchego/configs/genesis.json",
  "http-allowed-hosts": "*",
  "http-allowed-origins": "*",
  "http-host": "",
  "log-display-level": "info",
  "log-level": "info",
  "network-id": "network-1338",
  "public-ip": "44.219.113.190",
  "track-subnets": "giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML"
}
[i-0e2abd71a586e56b4] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem cat /home/ubuntu/.avalanchego/configs/node.json
{
  "bootstrap-ids": "NodeID-EzxsrhoumLsQSWxsohfMFrM1rJcaiaBK8",
  "bootstrap-ips": "44.219.113.190:9651",
  "genesis-file": "/home/ubuntu/.avalanchego/configs/genesis.json",
  "http-allowed-hosts": "*",
  "http-allowed-origins": "*",
  "http-host": "",
  "log-display-level": "info",
  "log-level": "info",
  "network-id": "network-1338",
  "public-ip": "3.212.206.161",
  "track-subnets": "giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML"
}
[i-027417a4f2ca0a478] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem cat /home/ubuntu/.avalanchego/configs/node.json
{
  "bootstrap-ids": "NodeID-EzxsrhoumLsQSWxsohfMFrM1rJcaiaBK8,NodeID-6veKG5dAz1uJvKc7qm7v6wAPDod8hctb9",
  "bootstrap-ips": "44.219.113.190:9651,3.212.206.161:9651",
  "genesis-file": "/home/ubuntu/.avalanchego/configs/genesis.json",
  "http-allowed-hosts": "*",
  "http-allowed-origins": "*",
  "http-host": "",
  "log-display-level": "info",
  "log-level": "info",
  "network-id": "network-1338",
  "public-ip": "54.87.168.26",
  "track-subnets": "giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML"
}
[i-0360a867aa295d8a4] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem cat /home/ubuntu/.avalanchego/configs/node.json
{
  "bootstrap-ids": "NodeID-EzxsrhoumLsQSWxsohfMFrM1rJcaiaBK8,NodeID-6veKG5dAz1uJvKc7qm7v6wAPDod8hctb9,NodeID-ASseyUweBT82XquiGpmUFjd9QfkUjxiAY",
  "bootstrap-ips": "44.219.113.190:9651,3.212.206.161:9651,54.87.168.26:9651",
  "genesis-file": "/home/ubuntu/.avalanchego/configs/genesis.json",
  "http-allowed-hosts": "*",
  "http-allowed-origins": "*",
  "http-host": "",
  "log-display-level": "info",
  "log-level": "info",
  "network-id": "network-1338",
  "public-ip": "3.225.42.57",
  "track-subnets": "giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML"
}
[i-0759b102acfd5b585] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem cat /home/ubuntu/.avalanchego/configs/node.json
{
  "bootstrap-ids": "NodeID-EzxsrhoumLsQSWxsohfMFrM1rJcaiaBK8,NodeID-6veKG5dAz1uJvKc7qm7v6wAPDod8hctb9,NodeID-ASseyUweBT82XquiGpmUFjd9QfkUjxiAY,NodeID-LfwbUp9dkhmWTSGffer9kNWNzqUQc2TEJ",
  "bootstrap-ips": "44.219.113.190:9651,3.212.206.161:9651,54.87.168.26:9651,3.225.42.57:9651",
  "genesis-file": "/home/ubuntu/.avalanchego/configs/genesis.json",
  "http-allowed-hosts": "*",
  "http-allowed-origins": "*",
  "http-host": "",
  "log-display-level": "info",
  "log-level": "info",
  "network-id": "network-1338",
  "public-ip": "107.21.158.224",
  "track-subnets": "giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML"
}
```

## Executing a Command on a Single Node

A command can be executed on a single node in the same way as in the examples above. To execute an ssh command on a single node, use its `<instanceID>`, `<NodeID>`, or `<IP>` instead of `<clusterName>` as the argument. For example:

```bash
avalanche node ssh i-0225fc39626b1edd3
[or]
avalanche node ssh NodeID-9wdKQ3KJU3GqvgFTc4CUYvmefEFe8t6ka
[or]
avalanche node ssh 54.159.59.123
```

In this case, the `--parallel=true` flag is ignored.

## Opening an SSH Shell for `<nodeID>`

If no command is provided, Avalanche-CLI will open an interactive session for the specified node. For example:

```bash
avalanche node ssh i-0225fc39626b1edd3
[or]
avalanche node ssh NodeID-9wdKQ3KJU3GqvgFTc4CUYvmefEFe8t6ka
[or]
avalanche node ssh 54.159.59.123
```

Use the `exit` shell command or Ctrl+D to end the session.

# Run Load Test (/docs/tooling/avalanche-cli/create-avalanche-nodes/run-loadtest)

---
title: Run Load Test
description: This page demonstrates how to run load test on an Avalanche L1 deployed on a cluster of cloud-based validators using Avalanche-CLI.
---

## Prerequisites

Before we begin, you will need to have:

- Created an AWS account and have an updated AWS `credentials` file in your home directory with a `[default]` profile, or set up your GCP account according to the instructions [here](/docs/tooling/create-avalanche-nodes/run-validators-gcp)
- Created a cluster of cloud servers with monitoring enabled
- Deployed an Avalanche L1 into the cluster
- Added the cloud servers as validator nodes in the Avalanche L1

## Run Load Test

When the load test command is run, a new cloud server will be created to run the load test. The created cloud server is referred to by the name `<loadtestName>`; you can use any name of your choice. To start the load test, run:

```bash
avalanche node loadtest start
```

Next, you will need to provide the load test Git repository URL, the load test Git branch, the command to build the load test binary, and the command to run the load test binary. We will use the example of running a load test on an Avalanche L1 running the custom VM MorpheusVM, built with [HyperSDK](https://github.com/ava-labs/hypersdk/tree/main/examples/morpheusvm).

The following settings will be used:

- Load Test Repo URL: `https://github.com/ava-labs/hypersdk/`
- Load Test Branch: `vryx-poc`
- Load Test Build Script: `cd /home/ubuntu/hypersdk/examples/morpheusvm; CGO_CFLAGS="-O -D__BLST_PORTABLE__" go build -o ~/simulator ./cmd/morpheus-cli`
- Load Test Run Script: `/home/ubuntu/simulator spam run ed25519 --accounts=10000000 --txs-per-second=100000 --min-capacity=15000 --step-size=1000 --s-zipf=1.0001 --v-zipf=2.7 --conns-per-host=10 --cluster-info=/home/ubuntu/clusterInfo.yaml --private-key=323b1d8f4eed5f0da9da93071b034f2dce9d2d22692c172f3cb252a64ddfafd01b057de320297c29ad0c1f589ea216869cf1938d88c9fbd70d6748323dbf2fa7`

Once the command is run, you will be able to see the logs from the load test in the cluster's Grafana URL, like the example below:

![Centralized Logs](/images/centralized-logs.png)

## Stop Load Test

To stop the load test process on the load test instance `<loadtestName>` and terminate the load test instance, run:

```bash
avalanche node loadtest stop
```

# Run Validator on AWS (/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-aws)

---
title: Run Validator on AWS
description: This page demonstrates how to deploy Avalanche validators on AWS using just one Avalanche-CLI command.
---

This page demonstrates how to deploy Avalanche validators on AWS using just one Avalanche-CLI command. Currently, only the Fuji network and Devnets are supported.

ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk.

## Prerequisites

Before we begin, you will need to create an AWS account and have an AWS `credentials` file in your home directory with a `[default]` profile set.
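The prerequisite above refers to the standard AWS shared credentials file, which lives at `~/.aws/credentials` on Linux and macOS. A minimal example with a `[default]` profile (placeholder values, not real keys):

```ini
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```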
More info can be found [here](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html#file-format-creds).

## Create Validators

To create Avalanche validators, run:

```bash
avalanche node create <clusterName>
```

The created nodes will be part of cluster `<clusterName>`, and all avalanche node commands applied to cluster `<clusterName>` will apply to all nodes in the cluster.

Please note that running a validator on AWS will incur costs. Ava Labs is not responsible for the costs incurred from running an Avalanche validator on cloud services via Avalanche-CLI.

Currently, we have set the following specs of the AWS cloud server to fixed values, but we plan to enable customization in the near future:

- OS Image: `Ubuntu 20.04 LTS (HVM), SSD Volume Type`
- Storage: `1 TB`

The instance type can be specified via the `--node-type` parameter or via the interactive menu. `c5.2xlarge` is the default (recommended) instance size.

The command will ask which region you want to set up your cloud server in:

```bash
Which AWS region do you want to set up your node in?:
▸ us-east-1
  us-east-2
  us-west-1
  us-west-2
  Choose custom region (list of regions available at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html)
```

The command will next ask whether you want to set up monitoring for your nodes.

```bash
Do you want to set up a separate instance to host monitoring? (This enables you to monitor all your set up instances in one dashboard):
▸ Yes
  No
```

Setting up monitoring on a separate AWS instance enables you to have centralized Grafana logs and dashboards for all nodes in a cluster, as seen below:

![Centralized Logs](/images/centralized-logs.png)
![Main Dashboard](/images/run-validators1.png)

The separate monitoring AWS instance will have similar specs to the default AWS cloud server, except for its storage, which will be set to 50 GB.

Please note that setting up monitoring on a separate AWS instance will incur the additional cost of an additional AWS cloud server.

The command will then ask which AvalancheGo version you would like to install on the cloud server. You can choose `default` (which will install the latest version), or you can enter the name of an Avalanche L1 created with the CLI that you plan to have validated by this node (in which case the latest version compatible with the deployed Avalanche L1's RPC version will be installed).

Once the command has completed successfully, Avalanche-CLI outputs all the created cloud server node IDs as well as the public IP at which each node can be reached. Avalanche-CLI also outputs the command that you can use to ssh into each cloud server node. Finally, if monitoring is set up, Avalanche-CLI also outputs the Grafana link where the centralized dashboards and logs can be accessed.

By the end of a successful run of the `create` command, Avalanche-CLI will have:

- Installed AvalancheGo on the cloud server
- Installed Avalanche-CLI on the cloud server
- Downloaded the `.pem` private key file to access the cloud server into your local `.ssh` directory. Back up this private key file, as you will not be able to ssh into the cloud server node without it (unless `ssh-agent` is used).
- Downloaded the `staker.crt` and `staker.key` files to your local `.avalanche-cli` directory so that you can back up your node. More info about node backup can be found [here](/docs/nodes/maintain/backup-restore)
- Started the process of bootstrapping your new Avalanche node to the Primary Network (for non-Devnet only)

Please note that Avalanche-CLI can be configured to use `ssh-agent` for ssh communication. In this case, the public key will be read from the agent and the cloud server will be accessible using it. Yubikey hardware can also be used to store the private ssh key.
Please use the official Yubikey documentation, for example [SSH authentication with Yubikey](https://developers.yubico.com/PGP/SSH_authentication/), for more details.

## Check Bootstrap Status

(Ignore this section for Devnets.)

Please note that you will have to wait until the nodes have finished bootstrapping before they can become Primary Network or Avalanche L1 validators. To check whether all the nodes in a cluster have finished bootstrapping, run `avalanche node status <clusterName>`.

# Run Validator on GCP (/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-gcp)

---
title: Run Validator on GCP
description: This page demonstrates how to deploy Avalanche validators on GCP using just one Avalanche-CLI command.
---

This page demonstrates how to deploy Avalanche validators on GCP using just one Avalanche-CLI command. Currently, only the Fuji network and Devnets are supported.

ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk.

## Prerequisites

Before we begin, you will need to:

- Create a GCP account [here](https://console.cloud.google.com/freetrial) and create a new project
- Enable the Compute Engine API [here](https://console.cloud.google.com/apis/api/compute.googleapis.com)
- Download the key JSON for the automatically created service account as shown [here](https://cloud.google.com/iam/docs/keys-create-delete#creating)

## Create Validator

To create Avalanche validators, run:

```bash
avalanche node create <clusterName>
```

The created nodes will be part of cluster `<clusterName>`, and all avalanche node commands applied to cluster `<clusterName>` will apply to all nodes in the cluster. Please note that running a validator on GCP will incur costs. Ava Labs is not responsible for the costs incurred from running an Avalanche validator on cloud services via Avalanche-CLI.
Currently, we have set the following specs of the GCP cloud server to fixed values, but we plan to enable customization in the near future:

- OS Image: `Ubuntu 20.04 LTS`
- Storage: `1 TB`

The instance type can be specified via the `--node-type` parameter or via the interactive menu. `e2-standard-8` is the default (recommended) instance size.

The command will ask which region you want to set up your cloud server in:

```bash
Which Google Region do you want to set up your node(s) in?:
▸ us-east1
  us-central1
  us-west1
  Choose custom Google Region (list of Google Regions available at https://cloud.google.com/compute/docs/regions-zones/)
```

The command will next ask whether you want to set up monitoring for your nodes.

```bash
Do you want to set up a separate instance to host monitoring? (This enables you to monitor all your set up instances in one dashboard):
▸ Yes
  No
```

Setting up monitoring on a separate GCP instance enables you to have a unified Grafana dashboard for all nodes in a cluster, as seen below:

![Centralized Logs](/images/centralized-logs.png)
![Main Dashboard](/images/gcp1.png)

The separate monitoring GCP instance will have similar specs to the default GCP cloud server, except for its storage, which will be set to 50 GB.

Please note that setting up monitoring on a separate GCP instance will incur the additional cost of an additional GCP cloud server.

The command will then ask which AvalancheGo version you would like to install on the cloud server. You can choose `default` (which will install the latest version), or you can enter the name of an Avalanche L1 created with the CLI that you plan to have validated by this node (in which case the latest version compatible with the deployed Avalanche L1's RPC version will be installed).

Once the command has completed successfully, Avalanche-CLI outputs all the created cloud server node IDs as well as the public IP at which each node can be reached. Avalanche-CLI also outputs the command that you can use to ssh into each cloud server node. Finally, if monitoring is set up, Avalanche-CLI also outputs the Grafana link where the centralized dashboards and logs can be accessed.

By the end of a successful run of the `create` command, Avalanche-CLI will have:

- Installed AvalancheGo on the cloud server
- Installed Avalanche-CLI on the cloud server
- Downloaded the `.pem` private key file to access the cloud server into your local `.ssh` directory. Back up this private key file, as you will not be able to ssh into the cloud server node without it (unless `ssh-agent` is used).
- Downloaded the `staker.crt` and `staker.key` files to your local `.avalanche-cli` directory so that you can back up your node. More info about node backup can be found [here](/docs/nodes/maintain/backup-restore)
- Started the process of bootstrapping your new Avalanche node to the Primary Network

Please note that Avalanche-CLI can be configured to use `ssh-agent` for ssh access to the cloud server. Yubikey hardware can also be used to store the private ssh key. Please use the official Yubikey documentation, for example [SSH authentication with Yubikey](https://developers.yubico.com/PGP/SSH_authentication/), for more details.

## Check Bootstrap Status

(Ignore this section for Devnets.)

Please note that you will have to wait until the nodes have finished bootstrapping before they can become Primary Network or Avalanche L1 validators. To check whether all the nodes in a cluster have finished bootstrapping, run `avalanche node status <clusterName>`.

# Setup a Devnet (/docs/tooling/avalanche-cli/create-avalanche-nodes/setup-devnet)

---
title: Setup a Devnet
description: This page demonstrates how to set up a Devnet of cloud-based validators using Avalanche-CLI, and deploy a VM into it.
---

Devnets (Developer Networks) are isolated Avalanche networks deployed on the cloud.
They are similar to local networks in terms of configuration and usage, but are installed on remote nodes. Think of Devnets as an intermediate step in the developer testing process, after a local network and before the Fuji network.

ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk.

## Prerequisites

Before we begin, you will need to create an AWS account and have an updated AWS `credentials` file in your home directory with a `[default]` profile, or set up your GCP account according to the instructions [here](/docs/tooling/create-avalanche-nodes/run-validators-gcp).

Note: this tutorial uses AWS hosts, but Devnets can also be created and operated on other supported cloud providers, such as GCP.

## Setting up a Devnet

Setting up a Devnet consists of:

- Creating a cluster of cloud servers
- Deploying an Avalanche L1 into the cluster
- Adding the cloud servers as validator nodes in the Avalanche L1

To execute all the steps above in one command, run:

```bash
avalanche node devnet wiz
```

Command line flags can be used instead of interacting with the prompts. The complete command line flags for the `devnet wiz` command can be found [here](/docs/tooling/cli-commands#node-devnet-wiz). Let's go through several examples with the full command (with flags) provided.

### Create a Devnet and Deploy a Subnet-EVM Based Avalanche L1 into the Devnet

For example, to spin up a Devnet with 5 validator nodes and 1 API node in each of 5 regions (us-west-2, us-east-1, ap-south-1, ap-northeast-1, eu-west-1) in AWS, with each node using the c7g.8xlarge AWS EC2 instance type and the io2 volume type, and with Avalanche L1 `<subnetName>` deployed into the Devnet, we will run:

```bash
avalanche node devnet wiz --authorize-access --aws --num-apis 1,1,1,1,1 --num-validators 5,5,5,5,5 --region us-west-2,us-east-1,ap-south-1,ap-northeast-1,eu-west-1 --default-validator-params --node-type c7g.8xlarge --aws-volume-type=io2

Creating the devnet ...
Waiting for node(s) in cluster to be healthy...
...
Nodes healthy after 33 seconds
Deploying the subnet ...
Setting the nodes as subnet trackers ...
Waiting for node(s) in cluster to be healthy...
Nodes healthy after 33 seconds
...
Waiting for node(s) in cluster to be syncing subnet ...
Nodes Syncing after 5 seconds
Adding nodes as subnet validators ...
Waiting for node(s) in cluster to be validating subnet ...
Nodes Validating after 23 seconds
Devnet has been created and is validating subnet <subnetName>!
```

### Create a Devnet and Deploy a Custom VM Based Avalanche L1 into the Devnet

For this example, we will be using the custom VM [MorpheusVM](https://github.com/ava-labs/hypersdk/tree/main/examples/morpheusvm) built with [HyperSDK](https://github.com/ava-labs/hypersdk). The following settings will be used:

- Custom VM Repo URL: `https://github.com/ava-labs/hypersdk/`
- Custom VM Branch: `vryx-poc`
- Custom VM Build Script: `examples/morpheusvm/scripts/build.sh`
- Genesis File: [Genesis File](/docs/tooling/create-avalanche-nodes/deploy-custom-vm#genesis-file)
- Blockchain Config: [Blockchain Config](/docs/tooling/create-avalanche-nodes/deploy-custom-vm#blockchain-config)
- Avalanche L1 Config: [Avalanche L1 Config](/docs/tooling/create-avalanche-nodes/deploy-custom-vm#avalanche-l1-config)
- AvalancheGo Config: [AvalancheGo Config](/docs/tooling/create-avalanche-nodes/deploy-custom-vm#avalanchego-flags)

To spin up a Devnet with 5 validator nodes and 1 API node in each of 5 regions (us-west-2, us-east-1, ap-south-1, ap-northeast-1, eu-west-1) in AWS, with each node using the c7g.8xlarge AWS EC2 instance type and the io2 volume type, and with the custom VM based Avalanche L1 `<subnetName>` deployed into the Devnet, we will run:

```bash
avalanche node devnet wiz --custom-subnet \
--subnet-genesis <genesisFilePath> --custom-vm-repo-url <vmRepoURL> \
--custom-vm-branch <vmBranch> --custom-vm-build-script <buildScript> \
--chain-config <chainConfigPath> --subnet-config <subnetConfigPath> \
--node-config <nodeConfigPath> --authorize-access --aws --num-apis 1,1,1,1,1 \
--num-validators 5,5,5,5,5 --region us-west-2,us-east-1,ap-south-1,ap-northeast-1,eu-west-1 \
--default-validator-params --node-type default

Creating the subnet ...
Creating the devnet ...
Waiting for node(s) in cluster to be healthy...
...
Nodes healthy after 33 seconds
Deploying the subnet ...
Setting the nodes as subnet trackers ...
Waiting for node(s) in cluster to be healthy...
Nodes healthy after 33 seconds
...
Waiting for node(s) in cluster to be syncing subnet ...
Nodes Syncing after 5 seconds
Adding nodes as subnet validators ...
Waiting for node(s) in cluster to be validating subnet ...
Nodes Validating after 23 seconds
Devnet has been created and is validating subnet <subnetName>!
```

# Terminate All Nodes (/docs/tooling/avalanche-cli/create-avalanche-nodes/stop-node)

---
title: Terminate All Nodes
description: This page provides instructions for terminating cloud server nodes created by Avalanche-CLI.
---

ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk.

## Terminating All Nodes

To terminate all nodes in a cluster, run:

```bash
avalanche node destroy <clusterName>
```

ALPHA WARNING: This command will delete all files associated with the cloud servers in the cluster. This includes the downloaded `staker.crt` and `staker.key` files in your local `.avalanche-cli` directory (which are used to back up your node). More info about node backup can be found [here](/docs/nodes/maintain/backup-restore).

Once completed, the instances set up on AWS / GCP will have been terminated, and the static public IPs associated with them will have been released.

# Validate the Primary Network (/docs/tooling/avalanche-cli/create-avalanche-nodes/validate-primary-network)

---
title: Validate the Primary Network
description: This page demonstrates how to configure nodes to validate the Primary Network. Validation via Avalanche-CLI is currently only supported on Fuji.
---

ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk.
## Prerequisites

Before we begin, you will need to have:

- Created a cloud server node as described for [AWS](/docs/tooling/create-avalanche-nodes/run-validators-aws) or [GCP](/docs/tooling/create-avalanche-nodes/run-validators-gcp)
- A node bootstrapped to the Primary Network (run `avalanche node status <clusterName>` to check bootstrap status, as described [here](/docs/tooling/create-avalanche-nodes/run-validators-aws#check-bootstrap-status))
- A stored key / Ledger with AVAX to pay for the transaction fees associated with adding the node as a Primary Network validator. Instructions on how to fund a stored key on Fuji can be found [here](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-fuji-testnet#funding-the-key).

## Be a Primary Network Validator

Once all nodes in a cluster are bootstrapped to the Primary Network, we can have the nodes become Primary Network validators. To have all nodes in cluster `<clusterName>` become Primary Network validators, run:

```bash
avalanche node validate primary <clusterName>
```

The nodes will start validating the Primary Network 20 seconds after the command is run.

The wizard will ask us how we want to pay for the transaction fee. Choose `Use stored key` for Fuji:

```bash
Which key source should be used to pay transaction fees?:
▸ Use stored key
  Use ledger
```

Once you have selected the key to pay with, choose how much AVAX you would like to stake in the validator. The default is the minimum amount of AVAX that can be staked by a Fuji network validator. More info regarding the minimum staking amount on different networks can be found [here](/docs/nodes/validate/how-to-stake#fuji-testnet).
```bash
What stake weight would you like to assign to the validator?:
▸ Default (1.00 AVAX)
  Custom
```

Next, choose how long the node will be validating for:

```bash
How long should your validator validate for?:
▸ Minimum staking duration on primary network
  Custom
```

Once all the inputs are completed, you will see transaction IDs indicating that all the nodes in the cluster will become Primary Network validators once the start time has elapsed.

# On Local Network (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-locally)

---
title: On Local Network
description: This guide shows you how to deploy an Avalanche L1 to a local Avalanche network.
---

This how-to guide focuses on taking an already created Avalanche L1 configuration and deploying it to a local Avalanche network.

## Prerequisites

- [Avalanche-CLI installed](/docs/tooling/get-avalanche-cli)
- You have [created an Avalanche L1 configuration](/docs/tooling/create-avalanche-l1#create-your-avalanche-l1-configuration)

## Deploying Avalanche L1s Locally

In the following commands, make sure to substitute the name of your Avalanche L1 configuration for `<blockchainName>`. To deploy your Avalanche L1, run:

```bash
avalanche blockchain deploy <blockchainName>
```

and select `Local Network` to deploy on. Alternatively, you can bypass this prompt by providing the `--local` flag. For example:

```bash
avalanche blockchain deploy <blockchainName> --local
```

The command may take a couple of minutes to run.

Note: If you run `bash` as your shell and are running Avalanche-CLI on ARM64 on Mac, you will need Rosetta 2 to be able to deploy Avalanche L1s locally. You can download Rosetta 2 using `softwareupdate --install-rosetta`.
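Assuming the deployed chain is Subnet-EVM based (as in this guide), it speaks standard EVM JSON-RPC at the RPC endpoint printed by the deploy command. One detail worth noting: the deploy summary reports the ChainID in decimal (888 in the sample output that follows), while the JSON-RPC `eth_chainId` method returns it as a 0x-prefixed hex string. A quick conversion sketch:

```shell
# Convert the decimal ChainID from the deploy summary to the hex form
# returned by eth_chainId, and back again.
printf '0x%x\n' 888    # -> 0x378
printf '%d\n' 0x378    # -> 888
```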
### Results

If all works as expected, the command output should look something like this:

```bash
> avalanche blockchain deploy myblockchain
✔ Local Network
Deploying [myblockchain] to Local Network
AvalancheGo path: /Users/felipe.madero/.avalanche-cli/bin/avalanchego/avalanchego-v1.13.0/avalanchego
Booting Network. Wait until healthy...
Node logs directory: /Users/felipe.madero/.avalanche-cli/runs/network_20250410_104205//logs
Network ready to use.
Using [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p] to be set as a change owner for leftover AVAX
AvalancheGo path: /Users/felipe.madero/.avalanche-cli/bin/avalanchego/avalanchego-v1.13.0/avalanchego
✓ Local cluster myblockchain-local-node-local-network not found. Creating...
Starting local avalanchego node using root: /Users/felipe.madero/.avalanche-cli/local/myblockchain-local-node-local-network ...
✓ Booting Network. Wait until healthy...
✓ Avalanchego started and ready to use from /Users/felipe.madero/.avalanche-cli/local/myblockchain-local-node-local-network
Node logs directory: /Users/felipe.madero/.avalanche-cli/local/myblockchain-local-node-local-network//logs
Network ready to use.
URI: http://127.0.0.1:60172
NodeID: NodeID-NuQc8BQ8mV9TVksgMtpyc57VnWzU2J6aN
Your blockchain control keys: [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p]
Your blockchain auth keys for chain creation: [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p]
CreateSubnetTx fee: 0.000010278 AVAX
Blockchain has been created with ID: 2W9boARgCWL25z6pMFNtkCfNA5v28VGg9PmBgUJfuKndEdhrvw
Now creating blockchain...
CreateChainTx fee: 0.000129564 AVAX
+--------------------------------------------------------------------+
| DEPLOYMENT RESULTS |
+---------------+----------------------------------------------------+
| Chain Name | myblockchain |
+---------------+----------------------------------------------------+
| Subnet ID | 2W9boARgCWL25z6pMFNtkCfNA5v28VGg9PmBgUJfuKndEdhrvw |
+---------------+----------------------------------------------------+
| VM ID | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV |
+---------------+----------------------------------------------------+
| Blockchain ID | Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y |
+---------------+ |
| P-Chain TXID | |
+---------------+----------------------------------------------------+
Now calling ConvertSubnetToL1Tx...
ConvertSubnetToL1Tx fee: 0.000036992 AVAX
ConvertSubnetToL1Tx ID: 2d2EE7AorEhfKLBtnDGnAtcDYMGfPbWnHYDpNDm3SopYg6VtpV
Waiting for the Subnet to be converted into a sovereign L1 ... 100% [===============]
Validator Manager Protocol: ACP99
Restarting node NodeID-NuQc8BQ8mV9TVksgMtpyc57VnWzU2J6aN to track newly deployed subnet/s
Waiting for blockchain Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y to be bootstrapped
✓ Local Network successfully tracking myblockchain
✓ Checking if node is healthy...
✓ Node is healthy after 0 seconds
Initializing Proof of Authority Validator Manager contract on blockchain myblockchain ...
✓ Proof of Authority Validator Manager contract successfully initialized on blockchain myblockchain
Your L1 is ready for on-chain interactions.
RPC Endpoint: http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc
ICM Messenger successfully deployed to myblockchain (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf)
ICM Registry successfully deployed to myblockchain (0xEc7018552DC7E197Af85f157515f5976b1A15B12)
ICM Messenger successfully deployed to c-chain (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf)
ICM Registry successfully deployed to c-chain (0x17aB05351fC94a1a67Bf3f56DdbB941aE6c63E25)
✓ ICM is successfully deployed
Generating relayer config file at /Users/felipe.madero/.avalanche-cli/runs/network_20250410_104205/icm-relayer-config.json
Relayer version icm-relayer-v1.6.2
Executing Relayer
✓ Relayer is successfully deployed
+--------------------------------------------------------------------------------------------------------------------------------+
| MYBLOCKCHAIN |
+---------------+----------------------------------------------------------------------------------------------------------------+
| Name | myblockchain |
+---------------+----------------------------------------------------------------------------------------------------------------+
| VM ID | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV |
+---------------+----------------------------------------------------------------------------------------------------------------+
| VM Version | v0.7.3 |
+---------------+----------------------------------------------------------------------------------------------------------------+
| Validation | Proof Of Authority |
+---------------+--------------------------+-------------------------------------------------------------------------------------+
| Local Network | ChainID | 888 |
| +--------------------------+-------------------------------------------------------------------------------------+
| | SubnetID | 2W9boARgCWL25z6pMFNtkCfNA5v28VGg9PmBgUJfuKndEdhrvw |
| +--------------------------+-------------------------------------------------------------------------------------+
| | Owners (Threhold=1) | P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p |
| +--------------------------+-------------------------------------------------------------------------------------+
| | BlockchainID (CB58) | Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y |
| +--------------------------+-------------------------------------------------------------------------------------+
| | BlockchainID (HEX) | 0x48644613a5ef255fa171bf4773df668b57ea0ea9593df8927a6d9f32376a9c6f |
| +--------------------------+-------------------------------------------------------------------------------------+
| | RPC Endpoint | http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc |
+---------------+--------------------------+-------------------------------------------------------------------------------------+
+------------------------------------------------------------------------------------+
| ICM |
+---------------+-----------------------+--------------------------------------------+
| Local Network | ICM Messenger Address | 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf |
| +-----------------------+--------------------------------------------+
| | ICM Registry Address | 0xEc7018552DC7E197Af85f157515f5976b1A15B12 |
+---------------+-----------------------+--------------------------------------------+
+--------------------------+
| TOKEN |
+--------------+-----------+
| Token Name | TST Token |
+--------------+-----------+
| Token Symbol | TST |
+--------------+-----------+
+---------------------------------------------------------------------------------------------------------------------------------------+
| INITIAL TOKEN ALLOCATION |
+-------------------------+------------------------------------------------------------------+--------------+---------------------------+
| DESCRIPTION | ADDRESS AND PRIVATE KEY | AMOUNT (TST) | AMOUNT (WEI) |
+-------------------------+------------------------------------------------------------------+--------------+---------------------------+
| Main funded account | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | 1000000 | 1000000000000000000000000 |
| ewoq | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 | | |
+-------------------------+------------------------------------------------------------------+--------------+---------------------------+
| Used by ICM | 0xf34408C05e3B339B1c89d15163d4B9D96845597A | 600 | 600000000000000000000 |
| cli-teleporter-deployer | 30d57c7b6e7e393e2e4ce8166768b497cc37930361a15b1c647d6e665d88afff | | |
+-------------------------+------------------------------------------------------------------+--------------+---------------------------+
+----------------------------------------------------------------------------------------------------------------------------------+
| SMART CONTRACTS |
+----------------------------------------+--------------------------------------------+--------------------------------------------+
| DESCRIPTION | ADDRESS | DEPLOYER |
+----------------------------------------+--------------------------------------------+--------------------------------------------+
| Validator Messages Lib | 0x9C00629cE712B0255b17A4a657171Acd15720B8C | |
+----------------------------------------+--------------------------------------------+--------------------------------------------+
| Proxy Admin | 0xC0fFEE1234567890aBCdeF1234567890abcDef34 | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC |
+----------------------------------------+--------------------------------------------+--------------------------------------------+
| ACP99 Compatible PoA Validator Manager | 0x0C0DEbA5E0000000000000000000000000000000 | |
+----------------------------------------+--------------------------------------------+--------------------------------------------+
| Transparent Proxy | 0x0Feedc0de0000000000000000000000000000000 | |
+----------------------------------------+--------------------------------------------+--------------------------------------------+
+----------------------------------------------------------------------+
| INITIAL PRECOMPILE CONFIGS |
+------------+-----------------+-------------------+-------------------+
| PRECOMPILE | ADMIN ADDRESSES | MANAGER ADDRESSES | ENABLED ADDRESSES |
+------------+-----------------+-------------------+-------------------+
| Warp | n/a | n/a | n/a |
+------------+-----------------+-------------------+-------------------+
+-------------------------------------------------------------------------------------------------+
| MYBLOCKCHAIN RPC URLS |
+-----------+-------------------------------------------------------------------------------------+
| Localhost | http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc |
+-----------+-------------------------------------------------------------------------------------+
+------------------------------------------------------------------+
| PRIMARY NODES |
+------------------------------------------+-----------------------+
| NODE ID | LOCALHOST ENDPOINT |
+------------------------------------------+-----------------------+
| NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg | http://127.0.0.1:9650 |
+------------------------------------------+-----------------------+
| NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ | http://127.0.0.1:9652 |
+------------------------------------------+-----------------------+
+----------------------------------------------------------------------------------+
| L1 NODES |
+------------------------------------------+------------------------+--------------+
| NODE ID | LOCALHOST ENDPOINT | L1 |
+------------------------------------------+------------------------+--------------+
| NodeID-NuQc8BQ8mV9TVksgMtpyc57VnWzU2J6aN | http://127.0.0.1:60172 | myblockchain |
+------------------------------------------+------------------------+--------------+
+-------------------------------------------------------------------------------------------------------+ | WALLET CONNECTION | +-----------------+-------------------------------------------------------------------------------------+ | Network RPC URL | http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc | +-----------------+-------------------------------------------------------------------------------------+ | Network Name | myblockchain | +-----------------+-------------------------------------------------------------------------------------+ | Chain ID | 888 | +-----------------+-------------------------------------------------------------------------------------+ | Token Symbol | TST | +-----------------+-------------------------------------------------------------------------------------+ | Token Name | TST Token | +-----------------+-------------------------------------------------------------------------------------+ ✓ L1 is successfully deployed on Local Network ``` You can use the deployment details to connect to and interact with your Avalanche L1. Deploying Avalanche L1s Locally[​](#deploying-avalanche-l1s-locally) --------------------------------------------------------------------------------- To deploy your Avalanche L1, run: ```bash avalanche blockchain deploy myblockchain ``` Make sure to substitute the name of your Avalanche L1 if you used a different one than `myblockchain`. ```bash ? Choose a network for the operation: ▸ Local Network Devnet Etna Devnet Fuji Testnet Mainnet ``` Next, select `Local Network`. This command boots a three node Avalanche network on your machine: - Two nodes to act as primary validators for the local network, that will validate the local P and C chains (unrelated to testnet/mainnet).. - One node to act as sovereign validator for the new L1 that is deployed into the local network. The command needs to download the latest versions of AvalancheGo and Subnet-EVM. 
It may take a couple of minutes to run.

Note: If you run `bash` on your shell and are running Avalanche-CLI on ARM64 on Mac, you will require Rosetta 2 to be able to deploy Avalanche L1s locally. You can download Rosetta 2 using `softwareupdate --install-rosetta`.

If all works as expected, the command output should look something like this:

```bash
avalanche blockchain deploy myblockchain
# output
✔ Local Network
Deploying [myblockchain] to Local Network
AvalancheGo path: /Users/felipe.madero/.avalanche-cli/bin/avalanchego/avalanchego-v1.13.0/avalanchego

Booting Network. Wait until healthy...
Node logs directory: /Users/felipe.madero/.avalanche-cli/runs/network_20250410_104205//logs
Network ready to use.

Using [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p] to be set as a change owner for leftover AVAX
AvalancheGo path: /Users/felipe.madero/.avalanche-cli/bin/avalanchego/avalanchego-v1.13.0/avalanchego
✓ Local cluster myblockchain-local-node-local-network not found. Creating...
Starting local avalanchego node using root: /Users/felipe.madero/.avalanche-cli/local/myblockchain-local-node-local-network ...
✓ Booting Network. Wait until healthy...
✓ Avalanchego started and ready to use from /Users/felipe.madero/.avalanche-cli/local/myblockchain-local-node-local-network
Node logs directory: /Users/felipe.madero/.avalanche-cli/local/myblockchain-local-node-local-network//logs
Network ready to use.

URI: http://127.0.0.1:60172
NodeID: NodeID-NuQc8BQ8mV9TVksgMtpyc57VnWzU2J6aN

Your blockchain control keys: [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p]
Your blockchain auth keys for chain creation: [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p]
CreateSubnetTx fee: 0.000010278 AVAX
Blockchain has been created with ID: 2W9boARgCWL25z6pMFNtkCfNA5v28VGg9PmBgUJfuKndEdhrvw
Now creating blockchain...
CreateChainTx fee: 0.000129564 AVAX

+--------------------------------------------------------------------+
|                         DEPLOYMENT RESULTS                         |
+---------------+----------------------------------------------------+
| Chain Name    | myblockchain                                       |
+---------------+----------------------------------------------------+
| Subnet ID     | 2W9boARgCWL25z6pMFNtkCfNA5v28VGg9PmBgUJfuKndEdhrvw |
+---------------+----------------------------------------------------+
| VM ID         | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV  |
+---------------+----------------------------------------------------+
| Blockchain ID | Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y  |
+---------------+                                                    |
| P-Chain TXID  |                                                    |
+---------------+----------------------------------------------------+

Now calling ConvertSubnetToL1Tx...
ConvertSubnetToL1Tx fee: 0.000036992 AVAX
ConvertSubnetToL1Tx ID: 2d2EE7AorEhfKLBtnDGnAtcDYMGfPbWnHYDpNDm3SopYg6VtpV
Waiting for the Subnet to be converted into a sovereign L1 ... 100% [===============]

Validator Manager Protocol: ACP99
Restarting node NodeID-NuQc8BQ8mV9TVksgMtpyc57VnWzU2J6aN to track newly deployed subnet/s
Waiting for blockchain Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y to be bootstrapped
✓ Local Network successfully tracking myblockchain
✓ Checking if node is healthy...
✓ Node is healthy after 0 seconds

Initializing Proof of Authority Validator Manager contract on blockchain myblockchain ...
✓ Proof of Authority Validator Manager contract successfully initialized on blockchain myblockchain

Your L1 is ready for on-chain interactions.

RPC Endpoint: http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc

ICM Messenger successfully deployed to myblockchain (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf)
ICM Registry successfully deployed to myblockchain (0xEc7018552DC7E197Af85f157515f5976b1A15B12)
ICM Messenger successfully deployed to c-chain (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf)
ICM Registry successfully deployed to c-chain (0x17aB05351fC94a1a67Bf3f56DdbB941aE6c63E25)
✓ ICM is successfully deployed
Generating relayer config file at /Users/felipe.madero/.avalanche-cli/runs/network_20250410_104205/icm-relayer-config.json
Relayer version icm-relayer-v1.6.2
Executing Relayer
✓ Relayer is successfully deployed

+--------------------------------------------------------------------------------------------------------------------------------+
|                                                          MYBLOCKCHAIN                                                          |
+---------------+----------------------------------------------------------------------------------------------------------------+
| Name          | myblockchain                                                                                                   |
+---------------+----------------------------------------------------------------------------------------------------------------+
| VM ID         | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV                                                              |
+---------------+----------------------------------------------------------------------------------------------------------------+
| VM Version    | v0.7.3                                                                                                         |
+---------------+----------------------------------------------------------------------------------------------------------------+
| Validation    | Proof Of Authority                                                                                             |
+---------------+--------------------------+-------------------------------------------------------------------------------------+
| Local Network | ChainID                  | 888                                                                                 |
|               +--------------------------+-------------------------------------------------------------------------------------+
|               | SubnetID                 | 2W9boARgCWL25z6pMFNtkCfNA5v28VGg9PmBgUJfuKndEdhrvw                                  |
|               +--------------------------+-------------------------------------------------------------------------------------+
|               | Owners (Threshold=1)     | P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p                                     |
|               +--------------------------+-------------------------------------------------------------------------------------+
|               | BlockchainID (CB58)      | Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y                                   |
|               +--------------------------+-------------------------------------------------------------------------------------+
|               | BlockchainID (HEX)       | 0x48644613a5ef255fa171bf4773df668b57ea0ea9593df8927a6d9f32376a9c6f                  |
|               +--------------------------+-------------------------------------------------------------------------------------+
|               | RPC Endpoint             | http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc |
+---------------+--------------------------+-------------------------------------------------------------------------------------+

+------------------------------------------------------------------------------------+
|                                        ICM                                         |
+---------------+-----------------------+--------------------------------------------+
| Local Network | ICM Messenger Address | 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf |
|               +-----------------------+--------------------------------------------+
|               | ICM Registry Address  | 0xEc7018552DC7E197Af85f157515f5976b1A15B12 |
+---------------+-----------------------+--------------------------------------------+

+--------------------------+
|          TOKEN           |
+--------------+-----------+
| Token Name   | TST Token |
+--------------+-----------+
| Token Symbol | TST       |
+--------------+-----------+

+-------------------------+------------------------------------------------------------------+--------------+---------------------------+
|                                                       INITIAL TOKEN ALLOCATION                                                        |
+-------------------------+------------------------------------------------------------------+--------------+---------------------------+
| DESCRIPTION             | ADDRESS AND PRIVATE KEY                                          | AMOUNT (TST) | AMOUNT (WEI)              |
+-------------------------+------------------------------------------------------------------+--------------+---------------------------+
| Main funded account     | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC                       | 1000000      | 1000000000000000000000000 |
| ewoq                    | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 |              |                           |
+-------------------------+------------------------------------------------------------------+--------------+---------------------------+
| Used by ICM             | 0xf34408C05e3B339B1c89d15163d4B9D96845597A                       | 600          | 600000000000000000000     |
| cli-teleporter-deployer | 30d57c7b6e7e393e2e4ce8166768b497cc37930361a15b1c647d6e665d88afff |              |                           |
+-------------------------+------------------------------------------------------------------+--------------+---------------------------+

+----------------------------------------------------------------------------------------------------------------------------------+
|                                                          SMART CONTRACTS                                                          |
+----------------------------------------+--------------------------------------------+--------------------------------------------+
| DESCRIPTION                            | ADDRESS                                    | DEPLOYER                                   |
+----------------------------------------+--------------------------------------------+--------------------------------------------+
| Validator Messages Lib                 | 0x9C00629cE712B0255b17A4a657171Acd15720B8C |                                            |
+----------------------------------------+--------------------------------------------+--------------------------------------------+
| Proxy Admin                            | 0xC0fFEE1234567890aBCdeF1234567890abcDef34 | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC |
+----------------------------------------+--------------------------------------------+--------------------------------------------+
| ACP99 Compatible PoA Validator Manager | 0x0C0DEbA5E0000000000000000000000000000000 |                                            |
+----------------------------------------+--------------------------------------------+--------------------------------------------+
| Transparent Proxy                      | 0x0Feedc0de0000000000000000000000000000000 |                                            |
+----------------------------------------+--------------------------------------------+--------------------------------------------+

+----------------------------------------------------------------------+
|                      INITIAL PRECOMPILE CONFIGS                      |
+------------+-----------------+-------------------+-------------------+
| PRECOMPILE | ADMIN ADDRESSES | MANAGER ADDRESSES | ENABLED ADDRESSES |
+------------+-----------------+-------------------+-------------------+
| Warp       | n/a             | n/a               | n/a               |
+------------+-----------------+-------------------+-------------------+

+-------------------------------------------------------------------------------------------------+
|                                      MYBLOCKCHAIN RPC URLS                                      |
+-----------+-------------------------------------------------------------------------------------+
| Localhost | http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc |
+-----------+-------------------------------------------------------------------------------------+

+------------------------------------------------------------------+
|                          PRIMARY NODES                           |
+------------------------------------------+-----------------------+
| NODE ID                                  | LOCALHOST ENDPOINT    |
+------------------------------------------+-----------------------+
| NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg | http://127.0.0.1:9650 |
+------------------------------------------+-----------------------+
| NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ | http://127.0.0.1:9652 |
+------------------------------------------+-----------------------+

+----------------------------------------------------------------------------------+
|                                     L1 NODES                                     |
+------------------------------------------+------------------------+--------------+
| NODE ID                                  | LOCALHOST ENDPOINT     | L1           |
+------------------------------------------+------------------------+--------------+
| NodeID-NuQc8BQ8mV9TVksgMtpyc57VnWzU2J6aN | http://127.0.0.1:60172 | myblockchain |
+------------------------------------------+------------------------+--------------+
+-------------------------------------------------------------------------------------------------------+
|                                           WALLET CONNECTION                                           |
+-----------------+-------------------------------------------------------------------------------------+
| Network RPC URL | http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc |
+-----------------+-------------------------------------------------------------------------------------+
| Network Name    | myblockchain                                                                        |
+-----------------+-------------------------------------------------------------------------------------+
| Chain ID        | 888                                                                                 |
+-----------------+-------------------------------------------------------------------------------------+
| Token Symbol    | TST                                                                                 |
+-----------------+-------------------------------------------------------------------------------------+
| Token Name      | TST Token                                                                           |
+-----------------+-------------------------------------------------------------------------------------+

✓ L1 is successfully deployed on Local Network
```

To manage the newly deployed local Avalanche network, see [the `avalanche network` command tree](/docs/tooling/cli-commands#avalanche-network).

You can use the deployment details to connect to and interact with your Avalanche L1.

## Interacting with Your Avalanche L1

You can use the values provided under `Browser Extension connection details` to connect to your Avalanche L1 with Core, MetaMask, or any other wallet. To allow API calls from other machines, use `--http-host=0.0.0.0` in the config.

```bash
Browser Extension connection details (any node URL from above works):
RPC URL:          http://127.0.0.1:9650/ext/bc/2BK8CKA4Vfvi69TBTc5GW94JQ9nPiL8xPpPNeeckb9UFSPYedD/rpc
Funded address:   0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC with 1000000 (10^18) - private key: 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027
Network name:     myblockchain
Chain ID:         888
Currency Symbol:  TST
```

This tutorial uses Core.
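Before connecting a wallet, you can confirm the network settings match the chain by calling `eth_chainId` on the RPC URL; a healthy node answers with the chain ID in hex (`888` is `0x378`). The sketch below only builds the request payload and decodes the expected reply; the URL is the ephemeral local endpoint quoted above, so treat it as a placeholder for your own deployment's value.

```python
import json

# Ephemeral local endpoint from the deploy output above; yours will differ.
RPC_URL = "http://127.0.0.1:9650/ext/bc/2BK8CKA4Vfvi69TBTc5GW94JQ9nPiL8xPpPNeeckb9UFSPYedD/rpc"

def chain_id_request() -> str:
    """Serialize the eth_chainId JSON-RPC call a wallet (or curl) would POST."""
    return json.dumps({"jsonrpc": "2.0", "id": 1, "method": "eth_chainId", "params": []})

def parse_chain_id(result_hex: str) -> int:
    """Convert the hex-encoded result field back to an integer chain ID."""
    return int(result_hex, 16)

# For the chain deployed above, the result field should decode to 888.
assert parse_chain_id("0x378") == 888
```

If the decoded value doesn't match the Chain ID you configured in the wallet, you're pointed at the wrong blockchain endpoint.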
### Importing the Test Private Key[​](#importing-the-test-private-key)

This address derives from a well-known private key. Anyone can steal funds sent to this address. Only use it on development networks that only you have access to. If you send production funds to this address, attackers may steal them instantly.

First, you need to import your airdrop private key into Core. In the Accounts screen, select the `Imported` tab, then click `Import private key`.

![](/images/deploy-subnet1.png)

Here, enter the well-known private key `0x56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027`.

![](/images/deploy-subnet2.png)

Next, rename the Core account to prevent confusion with any personal wallets. On the `Imported` tab, click the pen icon next to your account and rename it `DO NOT USE -- Public test key`.

![Rename Account](/images/deploy-subnet3.png)

![Rename Account](/images/deploy-subnet4.png)

### Connect to the Avalanche L1[​](#connect-to-the-avalanche-l1)

Next, you need to add your Avalanche L1 to Core's networks. In the Core Extension, click `See All Networks` and then select the `+` icon in the top right.

![Add network](/images/deploy-subnet5.png)

Enter your Avalanche L1's details, found in the output of your `avalanche blockchain deploy` [command](#deploying-avalanche-l1s-locally), into the form and click `Save`.

![Add network 2](/images/deploy-subnet6.png)

If all worked as expected, your balance should read 1 million tokens. Your Avalanche L1 is ready for action. You might want to try to [Deploy a Smart Contract on Your Subnet-EVM Using Remix and Core](/docs/avalanche-l1s/add-utility/deploy-smart-contract).
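If the wallet rejects the key, a common cause is a stray prefix or a truncated string: wallets generally accept a raw private key with or without the `0x` prefix, but the key itself must be exactly 32 bytes (64 hex characters). A small format check along these lines (a hypothetical helper for illustration, not part of Core or Avalanche-CLI) captures the rule:

```python
def normalize_private_key(key: str) -> str:
    """Strip an optional 0x prefix and verify the key is 64 hex characters."""
    body = key[2:] if key.lower().startswith("0x") else key
    if len(body) != 64:
        raise ValueError(f"expected 64 hex characters, got {len(body)}")
    int(body, 16)  # raises ValueError if any character is not hex
    return body.lower()

# The well-known ewoq test key from the deploy output, with and without prefix:
EWOQ = "56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027"
assert normalize_private_key("0x" + EWOQ) == normalize_private_key(EWOQ)
```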
![Avalanche L1 in Core](/images/deploy-subnet7.png)

## Deploying Multiple Avalanche L1s[​](#deploying-multiple-avalanche-l1s "Direct link to heading")

You may deploy multiple Avalanche L1s concurrently, but you can't deploy the same Avalanche L1 multiple times without resetting all deployed Avalanche L1 state.

## Redeploying the Avalanche L1

To redeploy the Avalanche L1, you first need to wipe the Avalanche L1 state. This permanently deletes all data from all locally deployed Avalanche L1s. To do so, run:

```bash
avalanche network clean
```

You are now free to redeploy your Avalanche L1 with:

```bash
avalanche blockchain deploy myblockchain --local
```

## Stopping the Local Network

To gracefully stop a running local network while preserving state, run:

```bash
avalanche network stop

# output
Network stopped successfully.
```

When restarted, all of your deployed Avalanche L1s resume where they left off.

### Resuming the Local Network

To resume a stopped network, run:

```bash
avalanche network start

# output
Starting previously deployed and stopped snapshot
Booting Network. Wait until healthy...
...............
Network ready to use.
Local network node endpoints: +-------+----------+------------------------------------------------------------------------------------+ | NODE | VM | URL | +-------+----------+------------------------------------------------------------------------------------+ | node5 | myblockchain | http://127.0.0.1:9658/ext/bc/SPqou41AALqxDquEycNYuTJmRvZYbfoV9DYApDJVXKXuwVFPz/rpc | +-------+----------+------------------------------------------------------------------------------------+ | node1 | myblockchain | http://127.0.0.1:9650/ext/bc/SPqou41AALqxDquEycNYuTJmRvZYbfoV9DYApDJVXKXuwVFPz/rpc | +-------+----------+------------------------------------------------------------------------------------+ | node2 | myblockchain | http://127.0.0.1:9652/ext/bc/SPqou41AALqxDquEycNYuTJmRvZYbfoV9DYApDJVXKXuwVFPz/rpc | +-------+----------+------------------------------------------------------------------------------------+ | node3 | myblockchain | http://127.0.0.1:9654/ext/bc/SPqou41AALqxDquEycNYuTJmRvZYbfoV9DYApDJVXKXuwVFPz/rpc | +-------+----------+------------------------------------------------------------------------------------+ | node4 | myblockchain | http://127.0.0.1:9656/ext/bc/SPqou41AALqxDquEycNYuTJmRvZYbfoV9DYApDJVXKXuwVFPz/rpc | +-------+----------+------------------------------------------------------------------------------------+ ``` The network resumes with the same state it paused with. ## Next Steps After you feel comfortable with this deployment flow, try deploying smart contracts on your chain with [Remix](https://remix.ethereum.org/), [Hardhat](https://hardhat.org/), or [Foundry](https://github.com/foundry-rs/foundry). You can also experiment with customizing your Avalanche L1 by addingprecompiles or adjusting the airdrop. Once you've developed a stable Avalanche L1 you like, see [Create an EVM Avalanche L1 on Fuji Testnet](/docs/avalanche-l1s/upgrade/customize-avalanche-l1) to take your Avalanche L1 one step closer to production. 
## FAQ

**How is the Avalanche L1 ID (SubnetID) determined upon creation?**

The Avalanche L1 ID (SubnetID) is the hash of the transaction that created the Avalanche L1.

# On Fuji Testnet (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-fuji-testnet) --- title: On Fuji Testnet description: This tutorial shows how to deploy an Avalanche L1 on Fuji Testnet. ---

This document describes how to use Avalanche-CLI to deploy an Avalanche L1 on `Fuji`. After trying out an Avalanche L1 on a local box by following [this tutorial](/docs/tooling/create-deploy-avalanche-l1s/deploy-locally), the next step is to try it out on `Fuji` Testnet.

This article shows how to do the following on `Fuji` Testnet:

- Create an Avalanche L1.
- Deploy a virtual machine based on Subnet-EVM.
- Join a node to the newly created Avalanche L1.
- Add a node as a validator to the Avalanche L1.

All IDs in this article are for illustration purposes. They can be different in your own run-through of this tutorial.

## Prerequisites

- [`Avalanche-CLI`](https://github.com/ava-labs/avalanche-cli) installed

Virtual Machine[​](#virtual-machine "Direct link to heading")
-------------------------------------------------------------

Avalanche can run multiple blockchains. Each blockchain is an instance of a [Virtual Machine](/docs/quick-start/virtual-machines), much like an object in an object-oriented language is an instance of a class. That is, the VM defines the behavior of the blockchain.

[Subnet-EVM](https://github.com/ava-labs/subnet-evm) is the VM that defines the Avalanche L1 Contract Chains. Subnet-EVM is a simplified version of the [Avalanche C-Chain](https://github.com/ava-labs/coreth) VM. This chain implements the Ethereum Virtual Machine and supports Solidity smart contracts as well as most other Ethereum client features.
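Identifiers like the SubnetID and BlockchainID shown in CLI output are CB58 strings: the raw 32-byte ID with a 4-byte SHA-256 checksum appended, encoded in Base58. The following is a minimal illustrative sketch of that encoding (not Avalanche's own implementation):

```python
import hashlib

# Base58 alphabet (no 0, O, I, l). CB58 = Base58(payload + last 4 bytes of SHA-256(payload)).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def _b58_encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    pad = len(data) - len(data.lstrip(b"\x00"))  # each leading zero byte encodes as '1'
    return "1" * pad + out

def _b58_decode(s: str) -> bytes:
    n = 0
    for c in s:
        n = n * 58 + ALPHABET.index(c)
    pad = len(s) - len(s.lstrip("1"))
    return b"\x00" * pad + n.to_bytes((n.bit_length() + 7) // 8, "big")

def cb58_encode(raw: bytes) -> str:
    checksum = hashlib.sha256(raw).digest()[-4:]
    return _b58_encode(raw + checksum)

def cb58_decode(s: str) -> bytes:
    payload = _b58_decode(s)
    raw, checksum = payload[:-4], payload[-4:]
    if hashlib.sha256(raw).digest()[-4:] != checksum:
        raise ValueError("bad CB58 checksum")
    return raw

# A 32-byte ID round-trips through the encoding.
raw_id = bytes(range(32))
assert cb58_decode(cb58_encode(raw_id)) == raw_id
```

The checksum means a mistyped ID is almost always rejected rather than silently resolving to a different chain.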
Avalanche-CLI[​](#avalanche-cli "Direct link to heading")
---------------------------------------------------------

If not yet installed, install `Avalanche-CLI` following the tutorial at [Avalanche-CLI installation](/docs/tooling/get-avalanche-cli).

### Private Key[​](#private-key "Direct link to heading")

All commands which issue a transaction require either a private key loaded into the tool, or a connected ledger device. This tutorial focuses on stored keys and leaves ledger operation details to the `Mainnet` deployment tutorial, as `Mainnet` operations require a ledger, while for `Fuji` it's optional.

`Avalanche-CLI` supports the following key operations:

- create
- delete
- export
- list

You should only use the private key created for this tutorial for testing operations on `Fuji` or other testnets. Don't use this key on `Mainnet`. The CLI stores the key on your file system; whoever gets access to that key has access to all funds secured by it.

Run `create` if you don't have any private key available yet. You can create multiple named keys; each command that requires a key takes the name of the key you want to use.

```bash
avalanche key create mytestkey
```

This generates a new key named `mytestkey` and prints the addresses associated with it:

```bash
Generating new key...
Key created
+-----------+-------------------------------+-------------------------------------------------+---------------+
| KEY NAME  | CHAIN                         | ADDRESS                                         | NETWORK       |
+-----------+-------------------------------+-------------------------------------------------+---------------+
| mytestkey | C-Chain (Ethereum hex format) | 0x86BB07a534ADF43786ECA5Dd34A97e3F96927e4F      | All           |
+           +-------------------------------+-------------------------------------------------+---------------+
|           | P-Chain (Bech32 format)       | P-custom1a3azftqvygc4tlqsdvd82wks2u7nx85rg7v8ta | Local Network |
+           +                               +-------------------------------------------------+---------------+
|           |                               | P-fuji1a3azftqvygc4tlqsdvd82wks2u7nx85rhk6zqh   | Fuji          |
+-----------+-------------------------------+-------------------------------------------------+---------------+
```

You may use the C-Chain address (`0x86BB07a534ADF43786ECA5Dd34A97e3F96927e4F`) to fund your key:

- **Recommended:** Create a [Builder Hub account](https://build.avax.network/login) and connect your wallet to receive testnet tokens automatically
- **Alternative:** Use the [external faucet](https://core.app/tools/testnet-faucet/)

The command also prints P-Chain addresses for both the default local network and `Fuji`. The latter (`P-fuji1a3azftqvygc4tlqsdvd82wks2u7nx85rhk6zqh`) is the one needed for this tutorial.

The `delete` command deletes a private key:

```bash
avalanche key delete mytestkey
```

Be careful to always have a key available for commands involving transactions.

The `export` command **prints your private key** in hex format to stdout:

```bash
avalanche key export mytestkey
21940fbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb5f0b
```

This key is intentionally modified.

You can also **import** a key by using the `--file` flag with a path argument and providing a name for it:

```bash
avalanche key create othertest --file /tmp/test.pk
Loading user key...
Key loaded
```

Finally, the `list` command lists all keys on your system and their associated addresses. (The CLI stores the keys in a special directory on your file system; tampering with that directory will result in malfunction of the tool.)

```bash
avalanche key list
+-----------+-------------------------------+-------------------------------------------------+---------------+
| KEY NAME  | CHAIN                         | ADDRESS                                         | NETWORK       |
+-----------+-------------------------------+-------------------------------------------------+---------------+
| othertest | C-Chain (Ethereum hex format) | 0x36c83263e33f9e87BB98D3fEb54a01E35a3Fa735      | All           |
+           +-------------------------------+-------------------------------------------------+---------------+
|           | P-Chain (Bech32 format)       | P-custom1n5n4h99j3nx8hdrv50v8ll7aldm383nap6rh42 | Local Network |
+           +                               +-------------------------------------------------+---------------+
|           |                               | P-fuji1n5n4h99j3nx8hdrv50v8ll7aldm383na7j4j7q   | Fuji          |
+-----------+-------------------------------+-------------------------------------------------+---------------+
| mytestkey | C-Chain (Ethereum hex format) | 0x86BB07a534ADF43786ECA5Dd34A97e3F96927e4F      | All           |
+           +-------------------------------+-------------------------------------------------+---------------+
|           | P-Chain (Bech32 format)       | P-custom1a3azftqvygc4tlqsdvd82wks2u7nx85rg7v8ta | Local Network |
+           +                               +-------------------------------------------------+---------------+
|           |                               | P-fuji1a3azftqvygc4tlqsdvd82wks2u7nx85rhk6zqh   | Fuji          |
+-----------+-------------------------------+-------------------------------------------------+---------------+
```

#### Funding the Key[​](#funding-the-key "Direct link to heading")

Do these steps only to follow this tutorial for `Fuji` addresses. To access the wallet for `Mainnet`, the use of a ledger device is strongly recommended.

1. A newly created key has no funds on it.
You have several options to fund it:

   - **Recommended:** Create a [Builder Hub account](https://build.avax.network/login) and connect your wallet to receive testnet tokens automatically on C-Chain and P-Chain
   - **Alternative:** Use the [external faucet](https://core.app/tools/testnet-faucet/) with your C-Chain address. If you have an AVAX balance on Mainnet, you can request tokens directly. Otherwise, request a faucet coupon on [Guild](https://guild.xyz/avalanche) or contact admins/mods on [Discord](https://discord.com/invite/RwXY7P6)

2. Export your key via the `avalanche key export` command. The output is your private key, which will help you [import](https://support.avax.network/en/articles/6821877-core-extension-how-can-i-import-an-account) your account into the Core extension.

3. Connect the Core extension to [Core web](https://core.app/), and move the test funds from C-Chain to P-Chain by clicking Stake, then Cross-Chain Transfer (find more details in [this tutorial](https://support.avax.network/en/articles/8133713-core-web-how-do-i-make-cross-chain-transfers-in-core-stake)).

After following these 3 steps, your test key should now have a balance on the P-Chain on `Fuji` Testnet.

Create an EVM Avalanche L1[​](#create-an-evm-avalanche-l1 "Direct link to heading")
-----------------------------------------------------------------------

Creating an Avalanche L1 with `Avalanche-CLI` for `Fuji` works the same way as with a local network. In fact, the `create` command only creates a specification of your Avalanche L1 on the local file system. Afterwards, the Avalanche L1 needs to be _deployed_. This allows you to reuse configs: create the config once with the `create` command, then deploy it first to a local network, then to `Fuji`, and eventually to `Mainnet`.
To create an EVM Avalanche L1, run the `blockchain create` command with a name of your choice: ```bash avalanche blockchain create testblockchain ``` This is going to start a series of prompts to customize your EVM Avalanche L1 to your needs. Most prompts have some validation to reduce issues due to invalid input. The first prompt asks for the type of the virtual machine (see [Virtual Machine](#virtual-machine)). ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Choose your VM: ▸ SubnetEVM Custom ``` As you want to create an EVM Avalanche L1, just accept the default `Subnet-EVM`. Choose either Proof of Authority (PoA) or Proof of Stake (PoS) as your consensus mechanism. ```bash ? Which validator management type would you like to use in your blockchain?: ▸ Proof Of Authority Proof Of Stake Explain the difference ``` For this tutorial, select `Proof of Authority (PoA)`. For more info, reference the [Validator Management Contracts](/docs/avalanche-l1s/validator-manager/contract). ```bash Which address do you want to enable as controller of ValidatorManager contract?: ▸ Get address from an existing stored key (created from avalanche key create or avalanche key import) Custom ``` This address will be able to add and remove validators from your Avalanche L1. You can either use an existing key or create a new one. In addition to being the PoA owner, this address will also be the owner of the `ProxyAdmin` contract of the Validator Manager's `TransparentUpgradeableProxy`. This address will be able to upgrade (PoA -> PoS) the Validator Manager implementation through updating the proxy. Next, CLI will ask for blockchain configuration values. Since we are deploying to Fuji, select `I want to use defaults for a production environment`. ```bash ? 
Do you want to use default values for the Blockchain configuration?:
  ▸ I want to use defaults for a test environment
    I want to use defaults for a production environment
    I don't want to use default values
    Explain the difference
```

The default values for a production environment are:

- Use the latest Subnet-EVM release
- Allocate 1 million tokens to:
  1. **a newly created key (production)**: the name of this key will be in the format `subnet_blockchainName_airdrop`
  2. **ewoq address (test)**: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC
- Supply of the native token will be hard-capped
- Set gas fee config as low throughput (12 million gas per block)
- Use constant gas prices
- Disable further adjustments in transaction fee configuration
- Transaction fees are burned
- Enable interoperability with other blockchains
- Allow any user to deploy smart contracts, send transactions, and interact with your blockchain

Next, the CLI asks for the ChainID. You should provide your own ID. Check [chainlist.org](https://chainlist.org/) to see if the value you'd like is already in use.

```bash
✔ Subnet-EVM
creating Avalanche L1 test blockchain
Enter your Avalanche L1's ChainId. It can be any positive integer.
ChainId: 3333
```

Now, provide a symbol of your choice for the token of this EVM:

```bash
Select a symbol for your Avalanche L1's native token
Token symbol: TST
```

It's possible to end the process with Ctrl-C at any time. At this point, the CLI has created the specification of the new Avalanche L1 on disk, but the L1 isn't deployed yet.
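As a sanity check on the allocation above: 1 million tokens with 18 decimals comes out to 10^24 wei, the value shown in the `describe` output. A quick shell sketch, building the number as a string since it exceeds 64-bit integer range:

```shell
# 1,000,000 tokens with 18 decimals -> amount in wei.
tokens=1000000
decimals=18
zeros=$(printf '0%.0s' $(seq 1 "$decimals"))  # 18 zeros
wei="${tokens}${zeros}"
echo "$wei"  # 1000000000000000000000000
```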
Review the specification by running the `describe` command:

```bash
avalanche blockchain describe testblockchain
```

```bash
+------------------------------------------------------------------+
|                          TESTBLOCKCHAIN                          |
+------------+-----------------------------------------------------+
| Name       | testblockchain                                      |
+------------+-----------------------------------------------------+
| VM ID      | tGBrM94jbkesczgqsL1UaxjrdxRQQobs3MZTNQ4GrfhzvpiE8   |
+------------+-----------------------------------------------------+
| VM Version | v0.6.12                                             |
+------------+-----------------------------------------------------+
| Validation | Proof Of Authority                                  |
+------------+-----------------------------------------------------+

+--------------------------+
|          TOKEN           |
+--------------+-----------+
| Token Name   | TST Token |
+--------------+-----------+
| Token Symbol | TST       |
+--------------+-----------+

+-----------------------------------------------------------------------------------------------------------------------------------+
|                                                    INITIAL TOKEN ALLOCATION                                                       |
+---------------------+------------------------------------------------------------------+--------------+---------------------------+
| DESCRIPTION         | ADDRESS AND PRIVATE KEY                                          | AMOUNT (TST) | AMOUNT (WEI)              |
+---------------------+------------------------------------------------------------------+--------------+---------------------------+
| Main funded account | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC                       | 1000000      | 1000000000000000000000000 |
| ewoq                | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 |              |                           |
+---------------------+------------------------------------------------------------------+--------------+---------------------------+

+-----------------------------------------------------------------------------------------------------------------+
|                                                 SMART CONTRACTS                                                 |
+-----------------------+--------------------------------------------+--------------------------------------------+
| DESCRIPTION           | ADDRESS                                    | DEPLOYER                                   |
+-----------------------+--------------------------------------------+--------------------------------------------+
| Proxy Admin           | 0xC0fFEE1234567890aBCdeF1234567890abcDef34 | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC |
+-----------------------+--------------------------------------------+--------------------------------------------+
| PoA Validator Manager | 0x0C0DEbA5E0000000000000000000000000000000 |                                            |
+-----------------------+--------------------------------------------+--------------------------------------------+
| Transparent Proxy     | 0x0Feedc0de0000000000000000000000000000000 |                                            |
+-----------------------+--------------------------------------------+--------------------------------------------+

+----------------------------------------------------------------------+
|                      INITIAL PRECOMPILE CONFIGS                      |
+------------+-----------------+-------------------+-------------------+
| PRECOMPILE | ADMIN ADDRESSES | MANAGER ADDRESSES | ENABLED ADDRESSES |
+------------+-----------------+-------------------+-------------------+
| Warp       | n/a             | n/a               | n/a               |
+------------+-----------------+-------------------+-------------------+
```

Deploy the Avalanche L1[​](#deploy-the-avalanche-l1 "Direct link to heading")
-----------------------------------------------------------------

To deploy the Avalanche L1, you will need some testnet AVAX on the P-Chain. To deploy the new Avalanche L1, run:

```bash
avalanche blockchain deploy testblockchain
```

This is going to start a new prompt series.

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Choose a network to deploy on:
  ▸ Local Network
    Fuji
    Mainnet
```

This tutorial is about deploying to `Fuji`, so navigate with the arrow keys to `Fuji` and hit enter. You are then asked to provide which private key to use for the deployment. Select a key that has P-Chain AVAX to pay for transaction fees. Also, this tutorial assumes that a node is up and running, fully bootstrapped on `Fuji`, and runs from the **same** box.
```bash
✔ Fuji
Deploying [testblockchain] to Fuji
Use the arrow keys to navigate: ↓ ↑ → ←
? Which private key should be used to issue the transaction?:
    test
  ▸ mytestkey
```

Avalanche L1s require bootstrap validators during the creation process. Avalanche CLI lets you use your local machine as a bootstrap validator on the blockchain, which means you don't have to set up a remote server on a cloud service (e.g. AWS / GCP) to be a validator on the blockchain. We will select `Yes` to using our local machine as a bootstrap validator. Note that since we need to sync our node with Fuji, this process will take around 3 minutes.

```bash
You can use your local machine as a bootstrap validator on the blockchain
This means that you don't have to to set up a remote server on a cloud service (e.g. AWS / GCP) to be a validator on the blockchain.
Use the arrow keys to navigate: ↓ ↑ → ←
? Do you want to use your local machine as a bootstrap validator?:
  ▸ Yes
    No
```

Well done. You have just created your own Avalanche L1 on `Fuji`.
You will be able to see information on the deployed L1 at the end of the `avalanche blockchain deploy` command:

```bash
+--------------------+----------------------------------------------------+
| DEPLOYMENT RESULTS |                                                    |
+--------------------+----------------------------------------------------+
| Chain Name         | testblockchain                                     |
+--------------------+----------------------------------------------------+
| Subnet ID          | 2cNuyBhvAd4jH5bFSGndezhB66Z4UHYAsLCMGoCpvhXVhrZfgd |
+--------------------+----------------------------------------------------+
| VM ID              | qcvkEX1zWSz7PtGd7CKvPRBqLVTzA7qyMPvkh5NMDWkuhrcCu  |
+--------------------+----------------------------------------------------+
| Blockchain ID      | 2U7vNdB78xTiN6QtZ9aetfKoGtQhfeEPQG6QZC8bpq8QMf4cDx |
+--------------------+                                                    +
| P-Chain TXID       |                                                    |
+--------------------+----------------------------------------------------+
```

To get your new Avalanche L1 information, visit the [Avalanche L1 Explorer](https://subnets-test.avax.network/). The search works best with the blockchain ID, so in this example, enter `2U7vNdB78xTiN6QtZ9aetfKoGtQhfeEPQG6QZC8bpq8QMf4cDx` into the search box and you should see your shiny new blockchain information.

Add a Validator[​](#add-a-validator "Direct link to heading")
-------------------------------------------------------------

Before proceeding to add a validator to our Avalanche L1, we will need the validator's NodeID, BLS public key, and proof of possession. These can be obtained by SSHing into the node and calling the `getNodeID` API specified [here](/docs/api-reference/info-api#infogetnodeid). To add a validator to an Avalanche L1, the owner of the key that acts as the controller of the `ValidatorManager` contract (specified in the `avalanche blockchain create` command above) should run:

```bash
avalanche blockchain addValidator testblockchain
```

Choose `Fuji`:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
?
Choose a network to deploy on:
  ▸ Fuji
```

You will need to specify which private key to use to pay for the transaction fees:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Which key should be used to pay for transaction fees on P-Chain?:
    test
  ▸ mytestkey
```

Now enter the **NodeID** of the new validator to be added.

```bash
What is the NodeID of the validator you'd like to whitelist?: NodeID-BFa1paAAAAAAAAAAAAAAAAAAAAQGjPhUy
```

Next, enter the node's BLS public key and proof of possession. Then, enter the amount of AVAX that you would like to allocate to the new validator. The validator's balance is used to pay the continuous fee on the P-Chain. When this balance reaches 0, the validator is considered inactive and no longer participates in validating the L1. 1 AVAX should last the validator about a month.

```bash
What balance would you like to assign to the validator (in AVAX)?: 1
```

Next, select a key that will receive the leftover AVAX if the validator is removed from the L1:

```bash
Which stored key should be used be set as a change owner for leftover AVAX?:
    test
  ▸ mytestkey
```

Next, select a key that can remove the validator:

```bash
? Which stored key should be used be able to disable the validator using P-Chain transactions?:
    test
  ▸ mytestkey
```

By the end of the command, you will have successfully added a new validator to the Avalanche L1 on Fuji Testnet!
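The NodeID, BLS public key, and proof of possession used above come from the node's `info.getNodeID` endpoint. A minimal sketch of the request, assuming a node listening on the default HTTP port 9650 (the BLS values are returned under `nodePOP`):

```shell
# JSON-RPC request for the node's NodeID, BLS public key, and proof of possession.
payload='{"jsonrpc":"2.0","id":1,"method":"info.getNodeID"}'
echo "$payload"
# Run on the validator's machine against a live node:
#   curl -X POST --data "$payload" -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```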
Appendix[​](#appendix "Direct link to heading")
-----------------------------------------------

### Connect with Core[​](#connect-with-core "Direct link to heading")

To connect Core (or MetaMask) with your blockchain on the new Avalanche L1 running on your local computer, you can add a new network in your Core wallet with the following values:

```bash
- Network Name: testblockchain
- RPC URL: http://127.0.0.1:9650/ext/bc/2U7vNdB78xTiN6QtZ9aetfKoGtQhfeEPQG6QZC8bpq8QMf4cDx/rpc
- Chain ID: 3333
- Symbol: TST
```

# On Avalanche Mainnet (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-mainnet) --- title: On Avalanche Mainnet description: Deploy an Avalanche L1 to Avalanche Mainnet ---

Deploying an Avalanche L1 to Mainnet has many risks. Doing so safely requires a laser focus on security. This tutorial does its best to point out common pitfalls, but there may be other risks not discussed here. This tutorial is an educational resource and provides no guarantees that following it results in a secure deployment. Additionally, this tutorial takes some shortcuts that aid understanding of the deployment process at the expense of security. The text highlights these shortcuts, and they shouldn't be used for a production deployment.

After managing a successful Avalanche L1 deployment on the `Fuji Testnet`, you're ready to deploy your Avalanche L1 on Mainnet. If you haven't done so, first [Deploy an Avalanche L1 on Testnet](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-fuji-testnet).

This tutorial shows how to do the following on `Mainnet`:

- Deploy an Avalanche L1.
- Add a node as a validator to the Avalanche L1.

All IDs in this article are for illustration purposes only. They are guaranteed to be different in your own run-through of this tutorial.
## Prerequisites

- An Avalanche node running and [fully bootstrapped](/docs/nodes) on `Mainnet`
- [Avalanche-CLI is installed](/docs/tooling/get-avalanche-cli) on each validator node's box
- A [Ledger](https://www.ledger.com/) device
- You've [created an Avalanche L1 configuration](/docs/tooling/create-avalanche-l1#create-your-avalanche-l1-configuration) and fully tested a [Fuji Testnet Avalanche L1 deployment](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-fuji-testnet)

### Setting up Your Ledger[​](#setting-up-your-ledger "Direct link to heading")

In the interest of security, all Avalanche-CLI `Mainnet` operations require the use of a connected Ledger device. You must unlock your Ledger and run the Avalanche app. See [How to Use Ledger](https://support.avax.network/en/articles/6150237-how-to-use-a-ledger-nano-s-or-nano-x-with-avalanche) for help getting set up.

Ledger devices support TX signing for any address inside a sequence that the device generates automatically. By default, Avalanche-CLI uses the first address of the derivation, and that address needs funds to issue the TXs that create the Avalanche L1 and add validators.

To get the first `Mainnet` address of your Ledger device, first make sure it is connected, unlocked, and running the Avalanche app.
Then execute the `key list` command:

```bash
avalanche key list --ledger 0 --mainnet
```

```bash
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
| KIND   | NAME    | CHAIN                   | ADDRESS                                       | BALANCE | NETWORK |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
| ledger | index 0 | P-Chain (Bech32 format) | P-avax1ucykh6ls8thqpuwhg3vp8vvu6spg5e8tp8a25j | 11      | Mainnet |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
```

The command prints the P-Chain address for `Mainnet`, `P-avax1ucykh6ls8thqpuwhg3vp8vvu6spg5e8tp8a25j`, and its balance.

You can use the `key list` command to get any Ledger address in the derivation sequence by changing the index parameter from `0` to the one desired, or to a list of them (for example: `2`, or `0,4,7`). You can also ask for `Fuji` addresses with the `--fuji` parameter, and local network addresses with the `--local` parameter.

#### Funding the Ledger[​](#funding-the-ledger "Direct link to heading")

A new Ledger device has no funds on the addresses it controls. You'll need to send funds to it by exporting them from the C-Chain to the P-Chain using [Core web](https://core.app/) connected to the [Core extension](https://core.app/).

You can load the Ledger's C-Chain address in the Core extension, or load a different private key into the [Core extension](https://core.app/), and then connect to Core web. You can move the funds from the C-Chain to the P-Chain by clicking Stake on Core web, then Cross-Chain Transfer (find more details in [this tutorial](https://support.avax.network/en/articles/8133713-core-web-how-do-i-make-cross-chain-transfers-in-core-stake)).

Deploy the Avalanche L1[​](#deploy-the-avalanche-l1 "Direct link to heading")
-----------------------------------------------------------------

To deploy the Avalanche L1, you will need some AVAX on the P-Chain.
For our Fuji example, we used our local machine as a bootstrap validator. However, since bootstrapping a node on Mainnet takes several hours, for this example we will use an Avalanche node set up on an AWS server that is already bootstrapped on Mainnet.

To check whether the Avalanche node is done bootstrapping, ssh into the node and call [`info.isBootstrapped`](/docs/api-reference/info-api#infoisbootstrapped) by copying and pasting the following command:

```bash
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"info.isBootstrapped",
    "params": {
        "chain":"P"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

If this returns `true`, the chain is bootstrapped and we can proceed to deploying our L1.

We will also need the Avalanche node's NodeID, BLS public key, and proof of possession. These can be obtained by SSHing into the node and calling the `getNodeID` API specified [here](/docs/api-reference/info-api#infogetnodeid).

To deploy the new Avalanche L1, with your Ledger unlocked and running the Avalanche app, run:

```bash
avalanche blockchain deploy testblockchain
```

This is going to start a new prompt series.

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Choose a network to deploy on:
    Local Network
    Fuji
  ▸ Mainnet
```

This tutorial is about deploying to `Mainnet`, so navigate with the arrow keys to `Mainnet` and hit enter. You are then asked to provide which private key to use for the deployment. Select a key that has P-Chain AVAX to pay for transaction fees.

```bash
✔ Mainnet
Deploying [testblockchain] to Mainnet
```

After that, the CLI shows the `Mainnet` Ledger address used to fund the deployment:

```bash
Ledger address: P-avax1ucykh6ls8thqpuwhg3vp8vvu6spg5e8tp8a25j
```

Select `No` to using your local machine as a bootstrap validator on the blockchain.

```bash
You can use your local machine as a bootstrap validator on the blockchain
This means that you don't have to to set up a remote server on a cloud service (e.g.
AWS / GCP) to be a validator on the blockchain. Use the arrow keys to navigate: ↓ ↑ → ← ? Do you want to use your local machine as a bootstrap validator?: Yes ▸ No ``` Enter 1 as the number of bootstrap validators we will be setting up. ```bash ✔ No ✗ How many bootstrap validators do you want to set up?: 1 ``` Select `Yes` since we have already set up our Avalanche Node on AWS. ```bash If you have set up your own Avalanche Nodes, you can provide the Node ID and BLS Key from those nodes in the next step. Otherwise, we will generate new Node IDs and BLS Key for you. Use the arrow keys to navigate: ↓ ↑ → ← ? Have you set up your own Avalanche Nodes?: ▸ Yes No ``` Next, we will enter the node's Node-ID: ```bash Getting info for bootstrap validator 1 ✗ What is the NodeID of the node you want to add as bootstrap validator?: █ ``` And BLS public key and proof of possession: ```bash Next, we need the public key and proof of possession of the node's BLS Check https://build.avax.network/docs/api-reference/info-api#infogetnodeid for instructions on calling info.getNodeID API ✗ What is the node's BLS public key?: █ ``` Next, the CLI generates a CreateSubnet TX to create the Subnet and asks the user to sign it by using the Ledger. ```bash *** Please sign Avalanche L1 creation hash on the ledger device *** ``` This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window by using the Ledger's right button, and then authorize the request by pressing both left and right buttons. If the Ledger doesn't have enough funds, the user may see an error message: ```bash *** Please sign Avalanche L1 creation hash on the ledger device *** Error: insufficient funds: provided UTXOs need 1000000000 more units of asset "U8iRqJoiJm8xZHAacmvYyZVwqQx6uDNtQeP3CQ6fcgQk3JqnK" ``` If successful, the CLI next asks you to sign a CreateChain Tx. Once CreateChain Tx is signed, it will then ask you to sign ConvertSubnetToL1 Tx. Well done. 
You have just created your own Avalanche L1 on `Mainnet`. You will be able to see information on the deployed L1 at the end of the `avalanche blockchain deploy` command:

```bash
+--------------------+----------------------------------------------------+
| DEPLOYMENT RESULTS |                                                    |
+--------------------+----------------------------------------------------+
| Chain Name         | testblockchain                                     |
+--------------------+----------------------------------------------------+
| Subnet ID          | 2cNuyBhvAd4jH5bFSGndezhB66Z4UHYAsLCMGoCpvhXVhrZfgd |
+--------------------+----------------------------------------------------+
| VM ID              | qcvkEX1zWSz7PtGd7CKvPRBqLVTzA7qyMPvkh5NMDWkuhrcCu  |
+--------------------+----------------------------------------------------+
| Blockchain ID      | 2U7vNdB78xTiN6QtZ9aetfKoGtQhfeEPQG6QZC8bpq8QMf4cDx |
+--------------------+                                                    +
| P-Chain TXID       |                                                    |
+--------------------+----------------------------------------------------+
```

To get your new Avalanche L1 information, there are two options:

- Call the `avalanche blockchain describe` command, or
- Visit the [Avalanche L1 Explorer](https://subnets-test.avax.network/). The search works best with the blockchain ID, so in this example, enter `2U7vNdB78xTiN6QtZ9aetfKoGtQhfeEPQG6QZC8bpq8QMf4cDx` into the search box and you should see your shiny new blockchain information.

Add a Validator[​](#add-a-validator "Direct link to heading")
-------------------------------------------------------------

Before proceeding to add a validator to our Avalanche L1, we will need the validator's NodeID, BLS public key, and proof of possession.
These can be obtained by SSHing into the node and calling the `getNodeID` API specified [here](/docs/api-reference/info-api#infogetnodeid).

To add a validator to an Avalanche L1, the owner of the key that acts as the controller of the `ValidatorManager` contract (specified in the `avalanche blockchain create` command above) should run:

```bash
avalanche blockchain addValidator testblockchain
```

Choose `Mainnet`:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Choose a network to deploy on:
  ▸ Mainnet
```

The CLI will show the Ledger address that will be used to pay for the add validator TX:

```bash
Ledger address: P-avax1ucykh6ls8thqpuwhg3vp8vvu6spg5e8tp8a25j
```

Now enter the **NodeID** of the new validator to be added.

```bash
What is the NodeID of the validator you'd like to whitelist?: NodeID-BFa1paAAAAAAAAAAAAAAAAAAAAQGjPhUy
```

Next, enter the node's BLS public key and proof of possession. Then, enter the amount of AVAX that you would like to allocate to the new validator. The validator's balance is used to pay the continuous fee on the P-Chain. When this balance reaches 0, the validator is considered inactive and no longer participates in validating the L1. 1 AVAX should last the validator about a month.

```bash
What balance would you like to assign to the validator (in AVAX)?: 1
```

Sign the addValidatorTx with your Ledger:

```bash
*** Please sign add validator hash on the ledger device ***
```

This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window by using the Ledger's right button, and then authorize the request by pressing both left and right buttons. This might take a couple of seconds. Afterward, it prints:

```bash
Transaction successful, transaction ID: r3tJ4Wr2CWA8AaticmFrKdKgAs5AhW2wwWTaQHRBZKwJhsXzb
```

This means the node is now a validator on the given Avalanche L1 on `Mainnet`!
Going Live[​](#going-live "Direct link to heading")
---------------------------------------------------

For the safety of your validators, you should set up dedicated API nodes to process transactions, but for test purposes, you can issue transactions directly to one of your validators' RPC interfaces.

# On Production Infra (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-production-infra) --- title: On Production Infra description: Learn how to deploy an Avalanche L1 on production infrastructure. ---

After architecting your Avalanche L1 environment on the [local machine](/docs/tooling/create-deploy-avalanche-l1s/deploy-locally), proving the design, and testing it out on [the Fuji Testnet](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-fuji-testnet), you will eventually need to deploy your Avalanche L1 to a production environment.

Running an Avalanche L1 in production is much more involved than local and Testnet deploys: your Avalanche L1 will have to handle real-world usage, maintain uptime, and go through upgrades, all in a potentially adversarial environment. The purpose of this document is to point out a set of general considerations and propose potential solutions to them.

The architecture of the environment your particular Avalanche L1 will use will be greatly influenced by the type of load and activity your Avalanche L1 is designed to support, so your solution will most likely differ from what we propose here. Still, it might be useful to follow along to build up an intuition for the type of questions you will need to consider.

Node Setup[​](#node-setup "Direct link to heading")
---------------------------------------------------

Avalanche nodes are essential elements for running your Avalanche L1 in production. At a minimum, your Avalanche L1 will need validator nodes, and potentially also nodes that act as RPC servers, indexers, or explorers. Running a node is basically running an instance of [AvalancheGo](/docs/nodes) on a server.
### Server OS[​](#server-os "Direct link to heading")

Although AvalancheGo can run on a MacOS or Windows computer, we strongly recommend running nodes on Linux machines, as Linux is designed specifically for server loads and all the tools and utilities needed for administering a server are native to it.

### Hardware Specification[​](#hardware-specification "Direct link to heading")

For running AvalancheGo as a validator on the Primary Network, the recommended configuration is as follows:

- CPU: Equivalent of 8 AWS vCPU
- RAM: 16 GiB
- Storage: 1 TiB with at least 3000 IOPS
- OS: Ubuntu 20.04
- Network: Reliable IPv4 or IPv6 network connection, with an open public port

That configuration is sufficient for running a Primary Network node. Any resource requirements for your Avalanche L1 come on top of this, so you should not go below this configuration, but you may need to step up the specification if you expect your Avalanche L1 to handle a significant amount of transactions.

Be sure to set up monitoring of resource consumption for your nodes, because resource exhaustion may cause your node to slow down or even halt, which may severely impact your Avalanche L1.

### Server Location[​](#server-location "Direct link to heading")

You can run a node on a physical computer that you own and operate, or on a cloud instance. Although running on your own hardware may seem like a good idea, unless you have a sizeable 24/7 DevOps staff, we recommend using cloud service providers, as they generally provide reliable computing resources that you can count on to be properly maintained and monitored.

#### Local Servers[​](#local-servers "Direct link to heading")

If you plan on running nodes on your own hardware, make sure they satisfy the minimum hardware specification as outlined earlier. Pay close attention to proper networking setup, making sure the p2p port (9651) is accessible and the public IP is properly configured on the node.
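A quick probe from another machine can confirm the p2p port is actually reachable from the outside. A sketch using bash's `/dev/tcp` (the IP below is a placeholder from the documentation address range; substitute your node's public IP):

```shell
NODE_IP="203.0.113.10"  # placeholder; replace with your node's public IP
PORT=9651               # default AvalancheGo p2p port
# Try to open a TCP connection, giving up after 3 seconds.
if timeout 3 bash -c "exec 3<>/dev/tcp/$NODE_IP/$PORT" 2>/dev/null; then
  status="reachable"
else
  status="unreachable"
fi
echo "p2p port $PORT is $status"
```

If the port shows as unreachable from outside your network, revisit your router's port forwarding and the node's public IP configuration.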
Make sure the node is connected to the network physically (not over Wi-Fi), that the router is powerful enough to handle a couple of thousand persistent TCP connections, and that network bandwidth can accommodate at least 5 Mbps of steady upstream and downstream traffic.

When installing the AvalancheGo node on the machines, unless you have a dedicated DevOps staff that will take care of node setup and configuration, we recommend using the [installer script](/docs/nodes/using-install-script/installing-avalanche-go) to set up the nodes. It abstracts most of the setup process for you, sets the node up as a system service, and enables easy node upgrades.

#### Cloud Providers[​](#cloud-providers "Direct link to heading")

There are a number of different cloud providers. We have documents that show how to set up a node on the most popular ones:

- [Amazon Web Services](/docs/nodes/on-third-party-services/amazon-web-services)
- [Azure](/docs/nodes/on-third-party-services/microsoft-azure)
- [Google Cloud Platform](/docs/nodes/on-third-party-services/google-cloud)

There is a whole range of other cloud providers that may offer lower prices or better deals for your particular needs, so it makes sense to shop around. Once you decide on a provider (or providers), if they offer instances in multiple data centers, it makes sense to spread the nodes geographically, since that provides better resilience and stability against outages.

### Number of Validators[​](#number-of-validators "Direct link to heading")

The number of validators on an Avalanche L1 is a crucial decision you need to make. For stability and decentralization, you should strive to have as many validators as possible. For stability reasons, our recommendation is to have **at least** 5 full validators on your Avalanche L1.
If you have fewer than 5 validators, your Avalanche L1's liveness will be at risk whenever a single validator goes offline, and if you have fewer than 4, even one offline node will halt your Avalanche L1. Be aware that 5 is the minimum we recommend; from a decentralization standpoint, having more validators is always better, as it increases the stability of your Avalanche L1 and makes it more resilient to both technical failures and adversarial action. In a nutshell: run as many Avalanche L1 validators as you can.

Considering that at times you will have to take nodes offline, whether for routine maintenance (at least for node upgrades, which happen with some regularity) or for unscheduled outages and failures, you need to be able to routinely handle at least one node being offline without your Avalanche L1's performance degrading.

### Node Bootstrap[​](#node-bootstrap "Direct link to heading")

Once you set up the servers and install AvalancheGo on them, nodes will need to bootstrap (sync with the network). This is a lengthy process, as the nodes need to catch up and replay all the network activity from genesis up to the present moment. A full bootstrap on a node can take more than a week, but there are ways to shorten that process, depending on your circumstances.

#### State Sync[​](#state-sync "Direct link to heading")

If the nodes you will be running as validators don't need the full transaction history, you can use [state sync](/docs/nodes/chain-configs/c-chain#state-sync-enabled). With this flag enabled, instead of replaying the whole history to get to the current state, nodes simply download only the current state from other network peers, shortening the bootstrap process from multiple days to a couple of hours. If the nodes will be used for Avalanche L1 validation exclusively, you can use state sync without any issues.
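State sync is toggled in the C-Chain chain config rather than on the command line. A minimal sketch of the config file, assuming the default chain-config location (typically `~/.avalanchego/configs/chains/C/config.json`):

```json
{
  "state-sync-enabled": true
}
```

Restart the node after changing the chain config for the flag to take effect.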
Currently, state sync is only available for the C-Chain, but since the bulk of the transactions on the platform happen there, it still has a significant impact on the speed of bootstrapping.

#### Database Copy[​](#database-copy "Direct link to heading")

A good way to cut down bootstrap times on multiple nodes is a database copy. The database is identical across nodes, and as such can safely be copied from one node to another. Just make sure that the node is not running during the copy process, as that can result in a corrupted database. The database copy procedure is explained in detail [here](/docs/nodes/maintain/backup-restore#database).

Please make sure you don't reuse any node's NodeID by accident, and especially don't restore another node's ID; see [here](/docs/nodes/maintain/backup-restore#nodeid) for details. Each node must have its own unique NodeID; otherwise, nodes sharing the same ID will not behave correctly, which will impact your validators' uptime, and thus staking rewards, as well as the stability of your Avalanche L1.

Avalanche L1 Deploy[​](#avalanche-l1-deploy "Direct link to heading")
---------------------------------------------------------

Once you have the nodes set up, you are ready to deploy the actual Avalanche L1. Right now, the recommended tool to do that is [Avalanche-CLI](https://github.com/ava-labs/avalanche-cli). Instructions for deployment with Avalanche-CLI can be found [here](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-mainnet).

### Ledger Hardware Wallet[​](#ledger-hw-wallet "Direct link to heading")

When creating the Avalanche L1, you will be required to have a private key that controls the administrative functions of the Avalanche L1 (adding validators, managing the configuration). Needless to say, whoever has this private key has complete control over the Avalanche L1 and the way it runs. Therefore, protecting that key is of the utmost operational importance.
That is why we strongly recommend using a hardware wallet such as a [Ledger HW Wallet](https://www.ledger.com/) to store and access that private key. General instructions on how to use a Ledger device with Avalanche can be found [here](https://support.avax.network/en/articles/6150237-how-to-use-a-ledger-nano-s-or-nano-x-with-avalanche).

### Genesis File[​](#genesis-file "Direct link to heading")

The structure that defines the most important parameters of an Avalanche L1 is found in the genesis file, which is a JSON-formatted, human-readable file. Describing the contents and the options available in the genesis file is beyond the scope of this document, and if you're ready to deploy your Avalanche L1 to production, you probably have it mapped out already. If you want to review, we have a description of the genesis file in our document on [customizing EVM Avalanche L1s](/docs/avalanche-l1s/upgrade/customize-avalanche-l1).

Validator Configuration[​](#validator-configuration "Direct link to heading")
-----------------------------------------------------------------------------

Running nodes as Avalanche L1 validators warrants some additional considerations beyond those for running a regular node or a Primary Network-only validator.

### Joining an Avalanche L1[​](#joining-a-avalanche-l1 "Direct link to heading")

For a node to join an Avalanche L1, there are two prerequisites:

- Primary Network validation
- Avalanche L1 tracking

Primary Network validation means that a node cannot join an Avalanche L1 as a validator before becoming a validator on the Primary Network itself. So, after you add the node to the validator set on the Primary Network, the node can join an Avalanche L1. Of course, this applies only to Avalanche L1 validators; if you need a non-validating Avalanche L1 node, the node doesn't need to be a validator at all.
To have a node start syncing the Avalanche L1, you need to add the `--track-subnets` command line option, or the `track-subnets` key to the node config file (found at `.avalanchego/configs/node.json` for installer-script created nodes). A single node can sync multiple Layer 1s, so you can add them as a comma-separated list of Avalanche L1 IDs (SubnetIDs). An example of a node config syncing two Avalanche L1s:

```json
{
  "public-ip-resolution-service": "opendns",
  "http-host": "",
  "track-subnets": "28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY,Ai42MkKqk8yjXFCpoHXw7rdTWSHiKEMqh5h8gbxwjgkCUfkrk"
}
```

But that is not all. Besides tracking the SubnetID, the node also needs to have the plugin containing the VM that the blockchain in the Avalanche L1 will run. You should have already been through that on the Fuji Testnet, but for a refresher, you can refer to [this tutorial](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-fuji-testnet). So, name the VM plugin binary as the `VMID` of the Avalanche L1 chain and place it in the `plugins` directory where the node binary is (for installer-script created nodes that would be `~/avalanche-node/plugins/`).

### Avalanche L1 Bootstrapping[​](#avalanche-l1-bootstrapping "Direct link to heading")

After you have configured the node to track the Avalanche L1 and placed the VM binary in the correct directory, your node is ready to start syncing with the Avalanche L1. Restart the node and monitor the log output. You should notice something similar to:

```bash
Jul 30 18:26:31 node-fuji avalanchego[1728308]: [07-30|18:26:31.422] INFO chains/manager.go:262 creating chain:
Jul 30 18:26:31 node-fuji avalanchego[1728308]:     ID: 2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt
Jul 30 18:26:31 node-fuji avalanchego[1728308]:     VMID:srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy
```

That means the node has detected the Avalanche L1 and is attempting to initialize it and start bootstrapping.
It might take some time (if there are already transactions on the Avalanche L1), and eventually it will finish the bootstrap with a message like:

```bash
Jul 30 18:27:21 node-fuji avalanchego[1728308]: [07-30|18:27:21.055] INFO <2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt Chain> snowman/transitive.go:333 consensus starting with J5wjmotMCrM2DKxeBTBPfwgCPpvsjtuqWNozLog2TomTjSuGK as the last accepted block
```

That means the node has successfully bootstrapped the Avalanche L1 and is now in sync. If the node is one of the validators, it will start validating any transactions that get posted to the Avalanche L1.

### Monitoring[​](#monitoring "Direct link to heading")

If you want to inspect the progress of Avalanche L1 syncing, you can use the RPC call to check the [blockchain status](/docs/api-reference/p-chain/api#platformgetblockchainstatus).

For a more in-depth look into Avalanche L1 operation, check out the blockchain log. By default, the log can be found in `~/.avalanchego/logs/ChainID.log`, where you replace `ChainID` with the actual ID of the blockchain in your Avalanche L1.

For an even more thorough (and pretty!) insight into how the node and the Avalanche L1 are behaving, you can install the Prometheus+Grafana monitoring system with custom dashboards for regular node operation, as well as a dedicated dashboard for Avalanche L1 data. Check out the [tutorial](/docs/nodes/maintain/monitoring) for information on how to set it up.

### Managing Validation[​](#managing-validation "Direct link to heading")

On Avalanche, all validations are limited in time and can range from two weeks up to one year. Furthermore, Avalanche L1 validations are always a subset of the Primary Network validation period (they must be shorter or the same). That means that periodically your validators will expire, and you will need to submit a new validation transaction for both the Primary Network and your Avalanche L1.
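To keep track of those expiries, you can query the P-Chain for your L1's current validators and their `endTime` values. A minimal sketch using the `platform.getCurrentValidators` API; the `subnetID` below simply reuses the example ID from the tracking config above, and the node address is a placeholder for one of your own nodes:

```shell
# Build the request for platform.getCurrentValidators; each validator in the
# response carries an "endTime" (Unix timestamp) you can track for renewals.
PAYLOAD='{"jsonrpc":"2.0","method":"platform.getCurrentValidators","params":{"subnetID":"28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY"},"id":1}'

# Send it to a node's P-Chain endpoint (placeholder address):
# curl -s -X POST -H 'content-type: application/json' \
#      -d "${PAYLOAD}" http://127.0.0.1:9650/ext/bc/P
echo "${PAYLOAD}"
```

Feeding the returned `endTime` values into your alerting system is an easy way to get renewal reminders without manual bookkeeping.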
Unless managed properly and in a timely manner, that can be disruptive for your Avalanche L1 (if all validators expire at the same time, your Avalanche L1 will halt). To avoid that, keep notes on when each validation is set to expire and be ready to renew it as soon as possible. Also, when initially setting up the nodes, make sure to stagger the validator expiries so they don't all fall on the same date. Setting end dates at least a day apart is good practice, as is setting reminders for each expiry.

Conclusion[​](#conclusion "Direct link to heading")
---------------------------------------------------

Hopefully, by reading this document you have a better picture of the requirements and considerations you need to make when deploying your Avalanche L1 to production, and you are now better prepared to launch your Avalanche L1 successfully.

Keep in mind, running an Avalanche L1 in production is not a one-and-done kind of situation; it is, in fact, running a fleet of servers 24/7. As with any real-time service, you should have robust logging, monitoring, and alerting systems in place to constantly check node and Avalanche L1 health and alert you if anything out of the ordinary happens.

If you have any questions, doubts, or would like to chat, please check out our [Discord server](https://discord.gg/avax/), where we host a `#subnet-chat` channel dedicated to talking about all things Avalanche L1.

# With Custom VM (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-with-custom-vm)

---
title: With Custom VM
description: Learn how to create an Avalanche L1 with a custom virtual machine and deploy it locally.
---

This tutorial walks through the process of creating an Avalanche L1 with a custom virtual machine and deploying it locally. Although the tutorial uses a fork of Subnet-EVM as an example, you can extend its lessons to support any custom VM binary.
Fork Subnet-EVM[​](#fork-subnet-evm "Direct link to heading")
-------------------------------------------------------------

Instead of building a custom VM from scratch, this tutorial starts with forking Subnet-EVM.

### Clone Subnet-EVM[​](#clone-subnet-evm "Direct link to heading")

First off, clone the Subnet-EVM repository into a directory of your choosing.

```bash
git clone https://github.com/ava-labs/subnet-evm.git
```

The repository cloning method used is HTTPS, but SSH can be used too: `git clone git@github.com:ava-labs/subnet-evm.git`. You can find more about SSH and how to use it [here](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/about-ssh).

### Modify and Build Subnet-EVM[​](#modify-and-build-subnet-evm "Direct link to heading")

To prove you're running your custom binary and not the stock Subnet-EVM included with Avalanche-CLI, you need to make a minor change so the resulting build is distinguishable. Navigate to the directory you cloned Subnet-EVM into and generate a new commit:

```bash
git commit -a --allow-empty -m "custom vm commit"
```

Take note of the new commit hash:

```bash
git rev-parse HEAD
c0fe6506a40da466285f37dd0d3c044f494cce32
```

In this case, `c0fe6506a40da466285f37dd0d3c044f494cce32`. Now build your custom binary by running:

```bash
./scripts/build.sh custom_vm.bin
```

This command builds the binary and saves it at `./custom_vm.bin`.

### Create a Custom Genesis[​](#create-a-custom-genesis "Direct link to heading")

To start a VM, you need to provide a genesis file. Here is a basic Subnet-EVM genesis that's compatible with your custom VM. This genesis includes the PoA Validator Manager, a Transparent Proxy, and a Proxy Admin as predeployed contracts.

`0c0deba5e0000000000000000000000000000000` is the ValidatorManager address.
`0feedc0de0000000000000000000000000000000` is the Transparent Proxy address.
`c0ffee1234567890abcdef1234567890abcdef34` is the Proxy Admin contract.
These proxy contracts are from [OpenZeppelin v4.9](https://github.com/OpenZeppelin/openzeppelin-contracts/tree/release-v4.9/contracts/proxy/transparent). You can add your own predeployed contracts by running `forge build` and collecting the `deployedBytecode` output from `out/MyContract.sol`.

```json
{
  "config": {
    "berlinBlock": 0,
    "byzantiumBlock": 0,
    "chainId": 1,
    "constantinopleBlock": 0,
    "eip150Block": 0,
    "eip155Block": 0,
    "eip158Block": 0,
    "feeConfig": {
      "gasLimit": 12000000,
      "targetBlockRate": 2,
      "minBaseFee": 25000000000,
      "targetGas": 60000000,
      "baseFeeChangeDenominator": 36,
      "minBlockGasCost": 0,
      "maxBlockGasCost": 1000000,
      "blockGasCostStep": 200000
    },
    "homesteadBlock": 0,
    "istanbulBlock": 0,
    "londonBlock": 0,
    "muirGlacierBlock": 0,
    "petersburgBlock": 0,
    "warpConfig": {
      "blockTimestamp": 1736356569,
      "quorumNumerator": 67,
      "requirePrimaryNetworkSigners": true
    }
  },
  "nonce": "0x0",
  "timestamp": "0x677eb2d9",
  "extraData": "0x",
  "gasLimit": "0xb71b00",
  "difficulty": "0x0",
  "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x0000000000000000000000000000000000000000",
  "alloc": {
    "0c0deba5e0000000000000000000000000000000": {
      "code":
"0x608060405234801561000f575f80fd5b5060043610610132575f3560e01c80639ba96b86116100b4578063c974d1b611610079578063c974d1b6146102a7578063d588c18f146102af578063d5f20ff6146102c2578063df93d8de146102e2578063f2fde38b146102ec578063fd7ac5e7146102ff575f80fd5b80639ba96b861461024c578063a3a65e481461025f578063b771b3bc14610272578063bc5fbfec14610280578063bee0a03f14610294575f80fd5b8063715018a6116100fa578063715018a6146101be578063732214f8146101c65780638280a25a146101db5780638da5cb5b146101f557806397fb70d414610239575f80fd5b80630322ed981461013657806320d91b7a1461014b578063467ef06f1461015e57806360305d621461017157806366435abf14610193575b5f80fd5b610149610144366004612b01565b610312565b005b610149610159366004612b30565b610529565b61014961016c366004612b7e565b610a15565b610179601481565b60405163ffffffff90911681526020015b60405180910390f35b6101a66101a1366004612b01565b610a23565b6040516001600160401b03909116815260200161018a565b610149610a37565b6101cd5f81565b60405190815260200161018a565b6101e3603081565b60405160ff909116815260200161018a565b7f9016d09d72d40fdae2fd8ceac6b6234c7706214fd39c1cd1e609a0528c199300546001600160a01b03165b6040516001600160a01b03909116815260200161018a565b610149610247366004612b01565b610a4a565b6101cd61025a366004612bad565b610a5f565b61014961026d366004612b7e565b610a7b565b6102216005600160991b0181565b6101cd5f8051602061370d83398151915281565b6101496102a2366004612b01565b610c04565b6101e3601481565b6101496102bd366004612c06565b610d41565b6102d56102d0366004612b01565b610e4f565b60405161018a9190612cc3565b6101a66202a30081565b6101496102fa366004612d43565b610f9e565b6101cd61030d366004612d65565b610fdb565b5f8181525f8051602061372d8339815191526020526040808220815160e0810190925280545f8051602061370d83398151915293929190829060ff16600581111561035f5761035f612c42565b600581111561037057610370612c42565b815260200160018201805461038490612dd0565b80601f01602080910402602001604051908101604052809291908181526020018280546103b090612dd0565b80156103fb5780601f106103d2576101008083540402835291602001916103fb565b820191905f5260205f20905b815481529060010
1906020018083116103de57829003601f168201915b505050918352505060028201546001600160401b038082166020840152600160401b820481166040840152600160801b820481166060840152600160c01b909104811660808301526003928301541660a0909101529091508151600581111561046657610466612c42565b146104a2575f8381526007830160205260409081902054905163170cc93360e21b81526104999160ff1690600401612e08565b60405180910390fd5b6005600160991b016001600160a01b031663ee5b48eb6104c78584606001515f611036565b6040518263ffffffff1660e01b81526004016104e39190612e16565b6020604051808303815f875af11580156104ff573d5f803e3d5ffd5b505050506040513d601f19601f820116820180604052508101906105239190612e28565b50505050565b7fe92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb09545f8051602061370d8339815191529060ff161561057b57604051637fab81e560e01b815260040160405180910390fd5b6005600160991b016001600160a01b0316634213cf786040518163ffffffff1660e01b8152600401602060405180830381865afa1580156105be573d5f803e3d5ffd5b505050506040513d601f19601f820116820180604052508101906105e29190612e28565b83602001351461060b576040516372b0a7e760e11b815260208401356004820152602401610499565b3061061c6060850160408601612d43565b6001600160a01b03161461065f5761063a6060840160408501612d43565b604051632f88120d60e21b81526001600160a01b039091166004820152602401610499565b5f61066d6060850185612e3f565b905090505f805b828163ffffffff161015610955575f6106906060880188612e3f565b8363ffffffff168181106106a6576106a6612e84565b90506020028101906106b89190612e98565b6106c190612fbc565b80516040519192505f9160088801916106d991613035565b9081526020016040518091039020541461070957805160405163a41f772f60e01b81526104999190600401612e16565b5f6002885f01358460405160200161073892919091825260e01b6001600160e01b031916602082015260240190565b60408051601f198184030181529082905261075291613035565b602060405180830381855afa15801561076d573d5f803e3d5ffd5b5050506040513d601f19601f820116820180604052508101906107909190612e28565b90508086600801835f01516040516107a89190613035565b90815260408051602092819003830181209390935560e083018152600283528451828401528
4810180516001600160401b03908116858401525f60608601819052915181166080860152421660a085015260c0840181905284815260078a01909252902081518154829060ff1916600183600581111561082a5761082a612c42565b0217905550602082015160018201906108439082613091565b506040828101516002830180546060860151608087015160a08801516001600160401b039586166001600160801b031990941693909317600160401b92861692909202919091176001600160801b0316600160801b918516919091026001600160c01b031617600160c01b9184169190910217905560c0909301516003909201805467ffffffffffffffff1916928416929092179091558301516108e8911685613164565b82516040519195506108f991613035565b60408051918290038220908401516001600160401b031682529082907f9d47fef9da077661546e646d61830bfcbda90506c2e5eed38195e82c4eb1cbdf9060200160405180910390a350508061094e90613177565b9050610674565b50600483018190555f61097361096a86611085565b6040015161119b565b90505f61097f87611328565b90505f6002826040516109929190613035565b602060405180830381855afa1580156109ad573d5f803e3d5ffd5b5050506040513d601f19601f820116820180604052508101906109d09190612e28565b90508281146109fc57604051631872fc8d60e01b81526004810182905260248101849052604401610499565b5050506009909201805460ff1916600117905550505050565b610a1e81611561565b505050565b5f610a2d82610e4f565b6080015192915050565b610a3f61189f565b610a485f6118fa565b565b610a5261189f565b610a5b8161196a565b5050565b5f610a6861189f565b610a728383611c4e565b90505b92915050565b5f8051602061370d8339815191525f80610aa0610a9785611085565b604001516121a1565b9150915080610ac657604051632d07135360e01b81528115156004820152602401610499565b5f82815260068401602052604090208054610ae090612dd0565b90505f03610b045760405163089938b360e11b815260048101839052602401610499565b60015f83815260078501602052604090205460ff166005811115610b2a57610b2a612c42565b14610b5d575f8281526007840160205260409081902054905163170cc93360e21b81526104999160ff1690600401612e08565b5f8281526006840160205260408120610b7591612a75565b5f828152600784016020908152604091829020805460ff1916600290811782550180546001600160401b0342818116600160c01b026001600160c01b039093169
2909217928390558451600160801b9093041682529181019190915283917ff8fd1c90fb9cfa2ca2358fdf5806b086ad43315d92b221c929efc7f105ce7568910160405180910390a250505050565b5f8181527fe92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb066020526040902080545f8051602061370d8339815191529190610c4b90612dd0565b90505f03610c6f5760405163089938b360e11b815260048101839052602401610499565b60015f83815260078301602052604090205460ff166005811115610c9557610c95612c42565b14610cc8575f8281526007820160205260409081902054905163170cc93360e21b81526104999160ff1690600401612e08565b5f82815260068201602052604090819020905163ee5b48eb60e01b81526005600160991b019163ee5b48eb91610d019190600401613199565b6020604051808303815f875af1158015610d1d573d5f803e3d5ffd5b505050506040513d601f19601f82011682018060405250810190610a1e9190612e28565b7ff0c57e16840df040f15088dc2f81fe391c3923bec73e23a9662efc9c229c6a008054600160401b810460ff1615906001600160401b03165f81158015610d855750825b90505f826001600160401b03166001148015610da05750303b155b905081158015610dae575080155b15610dcc5760405163f92ee8a960e01b815260040160405180910390fd5b845467ffffffffffffffff191660011785558315610df657845460ff60401b1916600160401b1785555b610e00878761235d565b8315610e4657845460ff60401b19168555604051600181527fc7f505b2f371ae2175ee4913f4499e1f2633a7b5936321eed1cdaeb6115181d29060200160405180910390a15b50505050505050565b610e57612aac565b5f8281525f8051602061372d833981519152602052604090819020815160e0810190925280545f8051602061370d833981519152929190829060ff166005811115610ea457610ea4612c42565b6005811115610eb557610eb5612c42565b8152602001600182018054610ec990612dd0565b80601f0160208091040260200160405190810160405280929190818152602001828054610ef590612dd0565b8015610f405780601f10610f1757610100808354040283529160200191610f40565b820191905f5260205f20905b815481529060010190602001808311610f2357829003601f168201915b505050918352505060028201546001600160401b038082166020840152600160401b820481166040840152600160801b820481166060840152600160c01b9091048116608083015260039092015490911660a0909101529392505050565
b610fa661189f565b6001600160a01b038116610fcf57604051631e4fbdf760e01b81525f6004820152602401610499565b610fd8816118fa565b50565b6040515f905f8051602061370d833981519152907fe92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb089061101e9086908690613223565b90815260200160405180910390205491505092915050565b604080515f6020820152600360e01b602282015260268101949094526001600160c01b031960c093841b811660468601529190921b16604e830152805180830360360181526056909201905290565b60408051606080820183525f8083526020830152918101919091526040516306f8253560e41b815263ffffffff831660048201525f9081906005600160991b0190636f825350906024015f60405180830381865afa1580156110e9573d5f803e3d5ffd5b505050506040513d5f823e601f3d908101601f191682016040526111109190810190613241565b915091508061113257604051636b2f19e960e01b815260040160405180910390fd5b815115611158578151604051636ba589a560e01b81526004810191909152602401610499565b60208201516001600160a01b031615611194576020820151604051624de75d60e31b81526001600160a01b039091166004820152602401610499565b5092915050565b5f81516026146111d057815160405163cc92daa160e01b815263ffffffff909116600482015260266024820152604401610499565b5f805b600281101561121f576111e7816001613313565b6111f2906008613326565b61ffff1684828151811061120857611208612e84565b016020015160f81c901b91909117906001016111d3565b5061ffff8116156112495760405163407b587360e01b815261ffff82166004820152602401610499565b5f805b60048110156112a457611260816003613313565b61126b906008613326565b63ffffffff168561127d836002613164565b8151811061128d5761128d612e84565b016020015160f81c901b919091179060010161124c565b5063ffffffff8116156112ca57604051635b60892f60e01b815260040160405180910390fd5b5f805b602081101561131f576112e181601f613313565b6112ec906008613326565b866112f8836006613164565b8151811061130857611308612e84565b016020015160f81c901b91909117906001016112cd565b50949350505050565b60605f8083356020850135601461134487870160408901612d43565b6113516060890189612e3f565b60405160f09790971b6001600160f01b0319166020880152602287019590955250604285019290925260e090811b6001600160e01
b0319908116606286015260609290921b6bffffffffffffffffffffffff191660668501529190911b16607a820152607e0160405160208183030381529060405290505f5b6113d76060850185612e3f565b9050811015611194576113ed6060850185612e3f565b828181106113fd576113fd612e84565b905060200281019061140f9190612e98565b61141d90602081019061333d565b905060301461143f5760405163180ffa0d60e01b815260040160405180910390fd5b8161144d6060860186612e3f565b8381811061145d5761145d612e84565b905060200281019061146f9190612e98565b611479908061333d565b90506114886060870187612e3f565b8481811061149857611498612e84565b90506020028101906114aa9190612e98565b6114b4908061333d565b6114c16060890189612e3f565b868181106114d1576114d1612e84565b90506020028101906114e39190612e98565b6114f190602081019061333d565b6114fe60608b018b612e3f565b8881811061150e5761150e612e84565b90506020028101906115209190612e98565b61153190606081019060400161337f565b6040516020016115479796959493929190613398565b60408051601f1981840301815291905291506001016113ca565b5f61156a612aac565b5f8051602061370d8339815191525f80611586610a9787611085565b9150915080156115ad57604051632d07135360e01b81528115156004820152602401610499565b5f828152600784016020526040808220815160e081019092528054829060ff1660058111156115de576115de612c42565b60058111156115ef576115ef612c42565b815260200160018201805461160390612dd0565b80601f016020809104026020016040519081016040528092919081815260200182805461162f90612dd0565b801561167a5780601f106116515761010080835404028352916020019161167a565b820191905f5260205f20905b81548152906001019060200180831161165d57829003601f168201915b505050918352505060028201546001600160401b038082166020840152600160401b820481166040840152600160801b820481166060840152600160c01b909104811660808301526003928301541660a090910152909150815160058111156116e5576116e5612c42565b14158015611706575060018151600581111561170357611703612c42565b14155b1561172757805160405163170cc93360e21b81526104999190600401612e08565b60038151600581111561173c5761173c612c42565b0361174a576004815261174f565b600581525b8360080181602001516040516117659190613035565b90815260408051602
092819003830190205f908190558581526007870190925290208151815483929190829060ff191660018360058111156117a9576117a9612c42565b0217905550602082015160018201906117c29082613091565b5060408201516002820180546060850151608086015160a08701516001600160401b039586166001600160801b031990941693909317600160401b92861692909202919091176001600160801b0316600160801b918516919091026001600160c01b031617600160c01b9184169190910217905560c0909201516003909101805467ffffffffffffffff1916919092161790558051600581111561186857611868612c42565b60405184907f1c08e59656f1a18dc2da76826cdc52805c43e897a17c50faefb8ab3c1526cc16905f90a39196919550909350505050565b336118d17f9016d09d72d40fdae2fd8ceac6b6234c7706214fd39c1cd1e609a0528c199300546001600160a01b031690565b6001600160a01b031614610a485760405163118cdaa760e01b8152336004820152602401610499565b7f9016d09d72d40fdae2fd8ceac6b6234c7706214fd39c1cd1e609a0528c19930080546001600160a01b031981166001600160a01b03848116918217845560405192169182907f8be0079c531659141344cd1fd0a4f28419497f9722a3daafe3b4186f6b6457e0905f90a3505050565b611972612aac565b5f8281525f8051602061372d8339815191526020526040808220815160e0810190925280545f8051602061370d83398151915293929190829060ff1660058111156119bf576119bf612c42565b60058111156119d0576119d0612c42565b81526020016001820180546119e490612dd0565b80601f0160208091040260200160405190810160405280929190818152602001828054611a1090612dd0565b8015611a5b5780601f10611a3257610100808354040283529160200191611a5b565b820191905f5260205f20905b815481529060010190602001808311611a3e57829003601f168201915b50505091835250506002828101546001600160401b038082166020850152600160401b820481166040850152600160801b820481166060850152600160c01b9091048116608084015260039093015490921660a09091015290915081516005811115611ac957611ac9612c42565b14611afc575f8481526007830160205260409081902054905163170cc93360e21b81526104999160ff1690600401612e08565b60038152426001600160401b031660c08201525f84815260078301602052604090208151815483929190829060ff19166001836005811115611b4057611b40612c42565b021790555060208201516001820190611b599082613
091565b5060408201516002820180546060850151608086015160a08701516001600160401b039586166001600160801b031990941693909317600160401b92861692909202919091176001600160801b0316600160801b918516919091026001600160c01b031617600160c01b9184169190910217905560c0909201516003909101805467ffffffffffffffff1916919092161790555f611bf78582612377565b6080840151604080516001600160401b03909216825242602083015291935083925087917f13d58394cf269d48bcf927959a29a5ffee7c9924dafff8927ecdf3c48ffa7c67910160405180910390a3509392505050565b7fe92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb09545f9060ff16611c9257604051637fab81e560e01b815260040160405180910390fd5b5f8051602061370d83398151915242611cb1606086016040870161337f565b6001600160401b0316111580611ceb5750611ccf6202a30042613164565b611cdf606086016040870161337f565b6001600160401b031610155b15611d2557611d00606085016040860161337f565b604051635879da1360e11b81526001600160401b039091166004820152602401610499565b6030611d34602086018661333d565b905014611d6657611d48602085018561333d565b6040516326475b2f60e11b8152610499925060040190815260200190565b611d70848061333d565b90505f03611d9d57611d82848061333d565b604051633e08a12560e11b8152600401610499929190613401565b5f60088201611dac868061333d565b604051611dba929190613223565b90815260200160405180910390205414611df357611dd8848061333d565b60405163a41f772f60e01b8152600401610499929190613401565b611dfd835f6124ce565b6040805160e08101909152815481525f908190611f099060208101611e22898061333d565b8080601f0160208091040260200160405190810160405280939291908181526020018383808284375f92019190915250505090825250602090810190611e6a908a018a61333d565b8080601f0160208091040260200160405190810160405280939291908181526020018383808284375f92019190915250505090825250602001611eb360608a0160408b0161337f565b6001600160401b03168152602001611ece60608a018a61342f565b611ed790613443565b8152602001611ee960808a018a61342f565b611ef290613443565b8152602001876001600160401b03168152506126a8565b5f82815260068601602052604090209193509150611f278282613091565b508160088401611f37888061333d565b604051611f459
29190613223565b9081526040519081900360200181209190915563ee5b48eb60e01b81525f906005600160991b019063ee5b48eb90611f81908590600401612e16565b6020604051808303815f875af1158015611f9d573d5f803e3d5ffd5b505050506040513d601f19601f82011682018060405250810190611fc19190612e28565b6040805160e081019091529091508060018152602001611fe1898061333d565b8080601f0160208091040260200160405190810160405280939291908181526020018383808284375f9201829052509385525050506001600160401b0389166020808401829052604080850184905260608501929092526080840183905260a0909301829052868252600788019092522081518154829060ff1916600183600581111561207057612070612c42565b0217905550602082015160018201906120899082613091565b5060408201516002820180546060850151608086015160a08701516001600160401b039586166001600160801b031990941693909317600160401b92861692909202919091176001600160801b0316600160801b918516919091026001600160c01b031617600160c01b9184169190910217905560c0909201516003909101805467ffffffffffffffff19169190921617905580612127888061333d565b604051612135929190613223565b6040518091039020847fb77297e3befc691bfc864a81e241f83e2ef722b6e7becaa2ecec250c6d52b430898b6040016020810190612173919061337f565b604080516001600160401b0393841681529290911660208301520160405180910390a4509095945050505050565b5f8082516027146121d757825160405163cc92daa160e01b815263ffffffff909116600482015260276024820152604401610499565b5f805b6002811015612226576121ee816001613313565b6121f9906008613326565b61ffff1685828151811061220f5761220f612e84565b016020015160f81c901b91909117906001016121da565b5061ffff8116156122505760405163407b587360e01b815261ffff82166004820152602401610499565b5f805b60048110156122ab57612267816003613313565b612272906008613326565b63ffffffff1686612284836002613164565b8151811061229457612294612e84565b016020015160f81c901b9190911790600101612253565b5063ffffffff81166002146122d357604051635b60892f60e01b815260040160405180910390fd5b5f805b6020811015612328576122ea81601f613313565b6122f5906008613326565b87612301836006613164565b8151811061231157612311612e84565b016020015160f81c901b91909117906001016122d
6565b505f8660268151811061233d5761233d612e84565b016020015191976001600160f81b03199092161515965090945050505050565b612365612895565b61236e826128de565b610a5b816128f7565b5f8281525f8051602061372d833981519152602052604081206002015481905f8051602061370d83398151915290600160801b90046001600160401b03166123bf85826124ce565b5f6123c987612908565b5f8881526007850160205260408120600201805467ffffffffffffffff60801b1916600160801b6001600160401b038b16021790559091506005600160991b0163ee5b48eb6124198a858b611036565b6040518263ffffffff1660e01b81526004016124359190612e16565b6020604051808303815f875af1158015612451573d5f803e3d5ffd5b505050506040513d601f19601f820116820180604052508101906124759190612e28565b604080516001600160401b038a811682526020820184905282519394508516928b927f07de5ff35a674a8005e661f3333c907ca6333462808762d19dc7b3abb1a8c1df928290030190a3909450925050505b9250929050565b5f8051602061370d8339815191525f6001600160401b038084169085161115612502576124fb838561350a565b905061250f565b61250c848461350a565b90505b6040805160808101825260028401548082526003850154602083015260048501549282019290925260058401546001600160401b031660608201524291158061257157506001840154815161256d916001600160401b031690613164565b8210155b15612597576001600160401b0383166060820152818152604081015160208201526125b6565b82816060018181516125a9919061352a565b6001600160401b03169052505b60608101516125c690606461354a565b602082015160018601546001600160401b0392909216916125f19190600160401b900460ff16613326565b101561262157606081015160405163dfae880160e01b81526001600160401b039091166004820152602401610499565b856001600160401b03168160400181815161263c9190613164565b9052506040810180516001600160401b038716919061265c908390613313565b905250805160028501556020810151600385015560408101516004850155606001516005909301805467ffffffffffffffff19166001600160401b039094169390931790925550505050565b5f60608260400151516030146126d15760405163180ffa0d60e01b815260040160405180910390fd5b82516020808501518051604080880151606089015160808a01518051908701515193515f98612712988a9860019892979296909590949093909291016
13575565b60405160208183030381529060405290505f5b846080015160200151518110156127845781856080015160200151828151811061275157612751612e84565b602002602001015160405160200161276a92919061362f565b60408051601f198184030181529190529150600101612725565b5060a08401518051602091820151516040516127a4938593929101613665565b60405160208183030381529060405290505f5b8460a00151602001515181101561281657818560a001516020015182815181106127e3576127e3612e84565b60200260200101516040516020016127fc92919061362f565b60408051601f1981840301815291905291506001016127b7565b5060c084015160405161282d9183916020016136a0565b604051602081830303815290604052905060028160405161284e9190613035565b602060405180830381855afa158015612869573d5f803e3d5ffd5b5050506040513d601f19601f8201168201806040525081019061288c9190612e28565b94909350915050565b7ff0c57e16840df040f15088dc2f81fe391c3923bec73e23a9662efc9c229c6a0054600160401b900460ff16610a4857604051631afcd79f60e31b815260040160405180910390fd5b6128e6612895565b6128ee61297d565b610fd881612985565b6128ff612895565b610fd881612a6d565b5f8181525f8051602061372d8339815191526020526040812060020180545f8051602061370d833981519152919060089061295290600160401b90046001600160401b03166136d1565b91906101000a8154816001600160401b0302191690836001600160401b031602179055915050919050565b610a48612895565b61298d612895565b80355f8051602061370d83398151915290815560146129b260608401604085016136ec565b60ff1611806129d157506129cc60608301604084016136ec565b60ff16155b15612a05576129e660608301604084016136ec565b604051634a59bbff60e11b815260ff9091166004820152602401610499565b612a1560608301604084016136ec565b60018201805460ff92909216600160401b0260ff60401b19909216919091179055612a46604083016020840161337f565b600191909101805467ffffffffffffffff19166001600160401b0390921691909117905550565b610fa6612895565b508054612a8190612dd0565b5f825580601f10612a90575050565b601f0160209004905f5260205f2090810190610fd89190612ae9565b6040805160e08101909152805f81526060602082018190525f604083018190529082018190526080820181905260a0820181905260c09091015290565b5b80821115612afd575f81556
00101612aea565b5090565b5f60208284031215612b11575f80fd5b5035919050565b803563ffffffff81168114612b2b575f80fd5b919050565b5f8060408385031215612b41575f80fd5b82356001600160401b03811115612b56575f80fd5b830160808186031215612b67575f80fd5b9150612b7560208401612b18565b90509250929050565b5f60208284031215612b8e575f80fd5b610a7282612b18565b80356001600160401b0381168114612b2b575f80fd5b5f8060408385031215612bbe575f80fd5b82356001600160401b03811115612bd3575f80fd5b830160a08186031215612be4575f80fd5b9150612b7560208401612b97565b6001600160a01b0381168114610fd8575f80fd5b5f808284036080811215612c18575f80fd5b6060811215612c25575f80fd5b508291506060830135612c3781612bf2565b809150509250929050565b634e487b7160e01b5f52602160045260245ffd5b60068110612c7257634e487b7160e01b5f52602160045260245ffd5b9052565b5f5b83811015612c90578181015183820152602001612c78565b50505f910152565b5f8151808452612caf816020860160208601612c76565b601f01601f19169290920160200192915050565b60208152612cd5602082018351612c56565b5f602083015160e06040840152612cf0610100840182612c98565b905060408401516001600160401b0380821660608601528060608701511660808601528060808701511660a08601528060a08701511660c08601528060c08701511660e086015250508091505092915050565b5f60208284031215612d53575f80fd5b8135612d5e81612bf2565b9392505050565b5f8060208385031215612d76575f80fd5b82356001600160401b0380821115612d8c575f80fd5b818501915085601f830112612d9f575f80fd5b813581811115612dad575f80fd5b866020828501011115612dbe575f80fd5b60209290920196919550909350505050565b600181811c90821680612de457607f821691505b602082108103612e0257634e487b7160e01b5f52602260045260245ffd5b50919050565b60208101610a758284612c56565b602081525f610a726020830184612c98565b5f60208284031215612e38575f80fd5b5051919050565b5f808335601e19843603018112612e54575f80fd5b8301803591506001600160401b03821115612e6d575f80fd5b6020019150600581901b36038213156124c7575f80fd5b634e487b7160e01b5f52603260045260245ffd5b5f8235605e19833603018112612eac575f80fd5b9190910192915050565b634e487b7160e01b5f52604160045260245ffd5b604051606081016001600160401b03811182821
01715612eec57612eec612eb6565b60405290565b604080519081016001600160401b0381118282101715612eec57612eec612eb6565b604051601f8201601f191681016001600160401b0381118282101715612f3c57612f3c612eb6565b604052919050565b5f6001600160401b03821115612f5c57612f5c612eb6565b50601f01601f191660200190565b5f82601f830112612f79575f80fd5b8135612f8c612f8782612f44565b612f14565b818152846020838601011115612fa0575f80fd5b816020850160208301375f918101602001919091529392505050565b5f60608236031215612fcc575f80fd5b612fd4612eca565b82356001600160401b0380821115612fea575f80fd5b612ff636838701612f6a565b8352602085013591508082111561300b575f80fd5b5061301836828601612f6a565b60208301525061302a60408401612b97565b604082015292915050565b5f8251612eac818460208701612c76565b601f821115610a1e57805f5260205f20601f840160051c8101602085101561306b5750805b601f840160051c820191505b8181101561308a575f8155600101613077565b5050505050565b81516001600160401b038111156130aa576130aa612eb6565b6130be816130b88454612dd0565b84613046565b602080601f8311600181146130f1575f84156130da5750858301515b5f19600386901b1c1916600185901b178555613148565b5f85815260208120601f198616915b8281101561311f57888601518255948401946001909101908401613100565b508582101561313c57878501515f19600388901b60f8161c191681555b505060018460011b0185555b505050505050565b634e487b7160e01b5f52601160045260245ffd5b80820180821115610a7557610a75613150565b5f63ffffffff80831681810361318f5761318f613150565b6001019392505050565b5f60208083525f84546131ab81612dd0565b806020870152604060018084165f81146131cc57600181146131e857613215565b60ff19851660408a0152604084151560051b8a01019550613215565b895f5260205f205f5b8581101561320c5781548b82018601529083019088016131f1565b8a016040019650505b509398975050505050505050565b818382375f9101908152919050565b80518015158114612b2b575f80fd5b5f8060408385031215613252575f80fd5b82516001600160401b0380821115613268575f80fd5b908401906060828703121561327b575f80fd5b613283612eca565b8251815260208084015161329681612bf2565b828201526040840151838111156132ab575f80fd5b80850194505087601f8501126132bf575f80fd5b835192506132c
f612f8784612f44565b83815288828587010111156132e2575f80fd5b6132f184838301848801612c76565b80604084015250819550613306818801613232565b9450505050509250929050565b81810381811115610a7557610a75613150565b8082028115828204841417610a7557610a75613150565b5f808335601e19843603018112613352575f80fd5b8301803591506001600160401b0382111561336b575f80fd5b6020019150368190038213156124c7575f80fd5b5f6020828403121561338f575f80fd5b610a7282612b97565b5f88516133a9818460208d01612c76565b60e089901b6001600160e01b031916908301908152868860048301378681019050600481015f8152858782375060c09390931b6001600160c01b0319166004939094019283019390935250600c019695505050505050565b60208152816020820152818360408301375f818301604090810191909152601f909201601f19160101919050565b5f8235603e19833603018112612eac575f80fd5b5f60408236031215613453575f80fd5b61345b612ef2565b61346483612b18565b81526020808401356001600160401b0380821115613480575f80fd5b9085019036601f830112613492575f80fd5b8135818111156134a4576134a4612eb6565b8060051b91506134b5848301612f14565b81815291830184019184810190368411156134ce575f80fd5b938501935b838510156134f857843592506134e883612bf2565b82825293850193908501906134d3565b94860194909452509295945050505050565b6001600160401b0382811682821603908082111561119457611194613150565b6001600160401b0381811683821601908082111561119457611194613150565b6001600160401b0381811683821602808216919082811461356d5761356d613150565b505092915050565b61ffff60f01b8a60f01b1681525f63ffffffff60e01b808b60e01b166002840152896006840152808960e01b1660268401525086516135bb81602a850160208b01612c76565b8651908301906135d281602a840160208b01612c76565b60c087901b6001600160c01b031916602a9290910191820152613604603282018660e01b6001600160e01b0319169052565b61361d603682018560e01b6001600160e01b0319169052565b603a019b9a5050505050505050505050565b5f8351613640818460208801612c76565b60609390931b6bffffffffffffffffffffffff19169190920190815260140192915050565b5f8451613676818460208901612c76565b6001600160e01b031960e095861b8116919093019081529290931b16600482015260080192915050565b5f83516136b1818460208801612
c76565b60c09390931b6001600160c01b0319169190920190815260080192915050565b5f6001600160401b0380831681810361318f5761318f613150565b5f602082840312156136fc575f80fd5b813560ff81168114612d5e575f80fdfee92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb00e92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb07a164736f6c6343000819000a", "balance": "0x0", "nonce": "0x1" }, "0feedc0de0000000000000000000000000000000": { "code": "0x60806040523661001357610011610017565b005b6100115b61001f610169565b6001600160a01b0316330361015f5760606001600160e01b0319600035166364d3180d60e11b810161005a5761005361019c565b9150610157565b63587086bd60e11b6001600160e01b031982160161007a576100536101f3565b63070d7c6960e41b6001600160e01b031982160161009a57610053610239565b621eb96f60e61b6001600160e01b03198216016100b95761005361026a565b63a39f25e560e01b6001600160e01b03198216016100d9576100536102aa565b60405162461bcd60e51b815260206004820152604260248201527f5472616e73706172656e745570677261646561626c6550726f78793a2061646d60448201527f696e2063616e6e6f742066616c6c6261636b20746f2070726f78792074617267606482015261195d60f21b608482015260a4015b60405180910390fd5b815160208301f35b6101676102be565b565b60007fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035b546001600160a01b0316919050565b60606101a66102ce565b60006101b53660048184610683565b8101906101c291906106c9565b90506101df816040518060200160405280600081525060006102d9565b505060408051602081019091526000815290565b60606000806102053660048184610683565b81019061021291906106fa565b91509150610222828260016102d9565b604051806020016040528060008152509250505090565b60606102436102ce565b60006102523660048184610683565b81019061025f91906106c9565b90506101df81610305565b60606102746102ce565b600061027e610169565b604080516001600160a01b03831660208201529192500160405160208183030381529060405291505090565b60606102b46102ce565b600061027e61035c565b6101676102c961035c565b61036b565b341561016757600080fd5b6102e28361038f565b6000825111806102ef5750805b15610300576102fe83836103cf565b505b505050565b7f7e644d79422f1
7c01e4894b5f4f588d331ebfa28653d42ae832dc59e38c9798f61032e610169565b604080516001600160a01b03928316815291841660208301520160405180910390a1610359816103fb565b50565b60006103666104a4565b905090565b3660008037600080366000845af43d6000803e80801561038a573d6000f35b3d6000fd5b610398816104cc565b6040516001600160a01b038216907fbc7cd75a20ee27fd9adebab32041f755214dbc6bffa90cc0225b39da2e5c2d3b90600090a250565b60606103f4838360405180606001604052806027815260200161083060279139610560565b9392505050565b6001600160a01b0381166104605760405162461bcd60e51b815260206004820152602660248201527f455243313936373a206e65772061646d696e20697320746865207a65726f206160448201526564647265737360d01b606482015260840161014e565b807fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035b80546001600160a01b0319166001600160a01b039290921691909117905550565b60007f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc61018d565b6001600160a01b0381163b6105395760405162461bcd60e51b815260206004820152602d60248201527f455243313936373a206e657720696d706c656d656e746174696f6e206973206e60448201526c1bdd08184818dbdb9d1c9858dd609a1b606482015260840161014e565b807f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc610483565b6060600080856001600160a01b03168560405161057d91906107e0565b600060405180830381855af49150503d80600081146105b8576040519150601f19603f3d011682016040523d82523d6000602084013e6105bd565b606091505b50915091506105ce868383876105d8565b9695505050505050565b60608315610647578251600003610640576001600160a01b0385163b6106405760405162461bcd60e51b815260206004820152601d60248201527f416464726573733a2063616c6c20746f206e6f6e2d636f6e7472616374000000604482015260640161014e565b5081610651565b6106518383610659565b949350505050565b8151156106695781518083602001fd5b8060405162461bcd60e51b815260040161014e91906107fc565b6000808585111561069357600080fd5b838611156106a057600080fd5b5050820193919092039150565b80356001600160a01b03811681146106c457600080fd5b919050565b6000602082840312156106db57600080fd5b6103f4826106ad565b634e487b7160e01b60005260416004526
0246000fd5b6000806040838503121561070d57600080fd5b610716836106ad565b9150602083013567ffffffffffffffff8082111561073357600080fd5b818501915085601f83011261074757600080fd5b813581811115610759576107596106e4565b604051601f8201601f19908116603f01168101908382118183101715610781576107816106e4565b8160405282815288602084870101111561079a57600080fd5b8260208601602083013760006020848301015280955050505050509250929050565b60005b838110156107d75781810151838201526020016107bf565b50506000910152565b600082516107f28184602087016107bc565b9190910192915050565b602081526000825180602084015261081b8160408501602087016107bc565b601f01601f1916919091016040019291505056fe416464726573733a206c6f772d6c6576656c2064656c65676174652063616c6c206661696c6564a2646970667358221220b22984eb1f3348f5b2148862b6f80392e497e3c65d0d2cfbb5e53d737e5a6c6a64736f6c63430008190033", "storage": { "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc": "0x0000000000000000000000000c0deba5e0000000000000000000000000000000", "0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103": "0x000000000000000000000000c0ffee1234567890abcdef1234567890abcdef34" }, "balance": "0x0", "nonce": "0x1" }, "32aaa04b1c166d02b0ee152dd221367687f72108": { "balance": "0x2086ac351052600000" }, "48a90c916ad48a72f49fa72a9f889c1ba9cc9b4b": { "balance": "0x8ac7230489e80000" }, "8db97c7cece249c2b98bdc0226cc4c2a57bf52fc": { "balance": "0xd3c21bcecceda1000000" }, "c0ffee1234567890abcdef1234567890abcdef34": { "code": 
"0x60806040526004361061007b5760003560e01c80639623609d1161004e5780639623609d1461011157806399a88ec414610124578063f2fde38b14610144578063f3b7dead1461016457600080fd5b8063204e1c7a14610080578063715018a6146100bc5780637eff275e146100d35780638da5cb5b146100f3575b600080fd5b34801561008c57600080fd5b506100a061009b366004610499565b610184565b6040516001600160a01b03909116815260200160405180910390f35b3480156100c857600080fd5b506100d1610215565b005b3480156100df57600080fd5b506100d16100ee3660046104bd565b610229565b3480156100ff57600080fd5b506000546001600160a01b03166100a0565b6100d161011f36600461050c565b610291565b34801561013057600080fd5b506100d161013f3660046104bd565b610300565b34801561015057600080fd5b506100d161015f366004610499565b610336565b34801561017057600080fd5b506100a061017f366004610499565b6103b4565b6000806000836001600160a01b03166040516101aa90635c60da1b60e01b815260040190565b600060405180830381855afa9150503d80600081146101e5576040519150601f19603f3d011682016040523d82523d6000602084013e6101ea565b606091505b5091509150816101f957600080fd5b8080602001905181019061020d91906105e2565b949350505050565b61021d6103da565b6102276000610434565b565b6102316103da565b6040516308f2839760e41b81526001600160a01b038281166004830152831690638f283970906024015b600060405180830381600087803b15801561027557600080fd5b505af1158015610289573d6000803e3d6000fd5b505050505050565b6102996103da565b60405163278f794360e11b81526001600160a01b03841690634f1ef2869034906102c990869086906004016105ff565b6000604051808303818588803b1580156102e257600080fd5b505af11580156102f6573d6000803e3d6000fd5b5050505050505050565b6103086103da565b604051631b2ce7f360e11b81526001600160a01b038281166004830152831690633659cfe69060240161025b565b61033e6103da565b6001600160a01b0381166103a85760405162461bcd60e51b815260206004820152602660248201527f4f776e61626c653a206e6577206f776e657220697320746865207a65726f206160448201526564647265737360d01b60648201526084015b60405180910390fd5b6103b181610434565b50565b6000806000836001600160a01b03166040516101aa906303e1469160e61b815260040190565b6000546001600160a01b031
633146102275760405162461bcd60e51b815260206004820181905260248201527f4f776e61626c653a2063616c6c6572206973206e6f7420746865206f776e6572604482015260640161039f565b600080546001600160a01b038381166001600160a01b0319831681178455604051919092169283917f8be0079c531659141344cd1fd0a4f28419497f9722a3daafe3b4186f6b6457e09190a35050565b6001600160a01b03811681146103b157600080fd5b6000602082840312156104ab57600080fd5b81356104b681610484565b9392505050565b600080604083850312156104d057600080fd5b82356104db81610484565b915060208301356104eb81610484565b809150509250929050565b634e487b7160e01b600052604160045260246000fd5b60008060006060848603121561052157600080fd5b833561052c81610484565b9250602084013561053c81610484565b9150604084013567ffffffffffffffff8082111561055957600080fd5b818601915086601f83011261056d57600080fd5b81358181111561057f5761057f6104f6565b604051601f8201601f19908116603f011681019083821181831017156105a7576105a76104f6565b816040528281528960208487010111156105c057600080fd5b8260208601602083013760006020848301015280955050505050509250925092565b6000602082840312156105f457600080fd5b81516104b681610484565b60018060a01b03831681526000602060406020840152835180604085015260005b8181101561063c57858101830151858201606001528201610620565b506000606082860101526060601f19601f83011685010192505050939250505056fea264697066735822122019f39983a6fd15f3cffa764efd6fb0234ffe8d71051b3ebddc0b6bd99f87fa9764736f6c63430008190033", "storage": { "0x0000000000000000000000000000000000000000000000000000000000000000": "0x00000000000000000000000048a90c916ad48a72f49fa72a9f889c1ba9cc9b4b" }, "balance": "0x0", "nonce": "0x1" } }, "airdropHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "airdropAmount": null, "number": "0x0", "gasUsed": "0x0", "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "baseFeePerGas": null, "excessBlobGas": null, "blobGasUsed": null } ``` Open a text editor and copy the preceding text into a file called `custom_genesis.json`. 
For a full breakdown of the genesis file, see the [Genesis File](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#genesis). The `timestamp` field is the Unix timestamp of the genesis block. `0x677eb2d9` represents the timestamp 1736356569, which is the time this tutorial was written. You should use the current timestamp when you create your genesis file.

Create the Avalanche L1 Configuration[​](#create-the-avalanche-l1-configuration "Direct link to heading")
---------------------------------------------------------------------------------------------

Now that you have your binary, it's time to create the Avalanche L1 configuration. This tutorial uses `myblockchain` as its Avalanche L1 name. Invoke the Avalanche L1 Creation Wizard with this command:

```bash
avalanche blockchain create myblockchain
```

### Choose Your VM[​](#choose-your-vm "Direct link to heading")

Select `Custom` for your VM.

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Choose your VM:
    Subnet-EVM
  ▸ Custom
```

### Select Validator Manager type

```bash
Which validator management type would you like to use in your blockchain?:
  ▸ Proof Of Authority
    Proof Of Stake
    Explain the difference
```

Select the Validator Manager type that matches the `deployedBytecode` in your genesis. This tutorial uses Proof Of Authority.

Next, select the key that will be used to control the PoA ValidatorManager contract:

```bash
Which address do you want to enable as controller of ValidatorManager contract?:
  ▸ Get address from an existing stored key (created from avalanche key create or avalanche key import)
    Custom
```

This key can add validators, remove validators, and change the weight of validators in the validator set.

### Enter the Path to Your Genesis[​](#enter-the-path-to-your-genesis "Direct link to heading")

Enter the path to the genesis file you created in this [step](#create-a-custom-genesis).

```bash
✔ Enter path to custom genesis: ./custom_genesis.json
```

### ICM Setup

```bash
?
Do you want to connect your blockchain with other blockchains or the C-Chain?:
    Yes, I want to enable my blockchain to interoperate with other blockchains and the C-Chain
  ▸ No, I want to run my blockchain isolated
    Explain the difference
```

Select `No` for now, as you can set up ICM after deployment.

### Enter the Path to Your VM Binary[​](#enter-the-path-to-your-vm-binary "Direct link to heading")

```bash
? How do you want to set up the VM binary?:
    Download and build from a git repository (recommended for cloud deployments)
  ▸ I already have a VM binary (local network deployments only)
```

Select `I already have a VM binary`. Next, enter the path to your VM binary. This should be the path to the `custom_vm.bin` you created [previously](#modify-and-build-subnet-evm).

```bash
✔ Enter path to vm binary: ./custom_vm.bin
```

### Wrapping Up[​](#wrapping-up "Direct link to heading")

If all worked successfully, the command prints:

```bash
✓ Successfully created blockchain configuration
Run 'avalanche blockchain describe' to view all created addresses and what their roles are
```

Now it's time to deploy it.

Deploy the Avalanche L1 Locally[​](#deploy-the-avalanche-l1-locally "Direct link to heading")
---------------------------------------------------------------------------------

To deploy your Avalanche L1, run:

```bash
avalanche blockchain deploy myblockchain
```

Make sure to substitute the name of your Avalanche L1 if you used a different one than `myblockchain`.

Next, select `Local Network`:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Choose a network to deploy on:
  ▸ Local Network
    Fuji
    Mainnet
```

This command boots a five-node Avalanche network on your machine. It needs to download the latest versions of AvalancheGo and Subnet-EVM. The command may take a couple of minutes to run.
If all works as expected, the command output should look something like this: ```bash Deploying [myblockchain] to Local Network Backend controller started, pid: 49158, output at: /Users/l1-dev/.avalanche-cli/runs/server_20250108_140532/avalanche-cli-backend.log Installing avalanchego-v1.12.1... avalanchego-v1.12.1 installation successful AvalancheGo path: /Users/l1-dev/.avalanche-cli/bin/avalanchego/avalanchego-v1.12.1/avalanchego Booting Network. Wait until healthy... Node logs directory: /Users/l1-dev/.avalanche-cli/runs/network_20250108_140538/node/logs Network ready to use. Your blockchain control keys: [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p] Your subnet auth keys for chain creation: [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p] CreateSubnetTx fee: 0.000010278 AVAX Subnet has been created with ID: GEieSy2doZ96bpMo5CuHPaX1LvaxpKZ9C72L22j94t6YyUb6X Now creating blockchain... CreateChainTx fee: 0.000095896 AVAX +-------------------------------------------------------------------+ | DEPLOYMENT RESULTS | +---------------+---------------------------------------------------+ | Chain Name | myblockchain | +---------------+---------------------------------------------------+ | Subnet ID | GEieSy2doZ96bpMo5CuHPaX1LvaxpKZ9C72L22j94t6YyUb6X | +---------------+---------------------------------------------------+ | VM ID | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV | +---------------+---------------------------------------------------+ | Blockchain ID | 9FrNVEPkVpQyWDECQhEPDuT9oK98EhWQdg7anypKujVt9uSVT | +---------------+ | | P-Chain TXID | | +---------------+---------------------------------------------------+ Restarting node node2 to track subnet Restarting node node1 to track subnet Waiting for http://127.0.0.1:9652/ext/bc/9FrNVEPkVpQyWDECQhEPDuT9oK98EhWQdg7anypKujVt9uSVT/rpc to be available Waiting for http://127.0.0.1:9650/ext/bc/9FrNVEPkVpQyWDECQhEPDuT9oK98EhWQdg7anypKujVt9uSVT/rpc to be available ✓ Local Network successfully tracking myblockchain 
✓ Subnet is successfully deployed Local Network
```

Use the describe command to find your L1's RPC:

```bash
avalanche blockchain describe myblockchain
```

You can use the `RPC URL` to connect to and interact with your Avalanche L1.

Interact with Your Avalanche L1[​](#interact-with-your-avalanche-l1 "Direct link to heading")
---------------------------------------------------------------------------------

### Check the Version[​](#check-the-version "Direct link to heading")

You can verify that your Avalanche L1 has deployed correctly by querying the local node to see what Avalanche L1s it's running. You need to use the [`getNodeVersion`](/docs/api-reference/info-api#infogetnodeversion) endpoint. Try running this curl command:

```bash
curl --location --request POST 'http://127.0.0.1:9650/ext/info' \
--header 'Content-Type: application/json' \
--data-raw '{
    "jsonrpc":"2.0",
    "id"    :1,
    "method" :"info.getNodeVersion",
    "params" :{
    }
}'
```

The command returns a list of all the VMs your local node is currently running along with their versions.

```json
{
  "jsonrpc": "2.0",
  "result": {
    "version": "avalanche/1.10.8",
    "databaseVersion": "v1.4.5",
    "rpcProtocolVersion": "27",
    "gitCommit": "e70a17d9d988b5067f3ef5c4a057f15ae1271ac4",
    "vmVersions": {
      "avm": "v1.10.8",
      "evm": "v0.12.5",
      "platform": "v1.10.8",
      "qDMnZ895HKpRXA2wEvujJew8nNFEkvcrH5frCR9T1Suk1sREe": "v0.5.4@c0fe6506a40da466285f37dd0d3c044f494cce32"
    }
  },
  "id": 1
}
```

Your results may be slightly different, but you can see that in addition to the X-Chain's `avm`, the C-Chain's `evm`, and the P-Chain's `platform` VM, the node is running the custom VM with commit `c0fe6506a40da466285f37dd0d3c044f494cce32`.

### Check a Balance[​](#check-a-balance "Direct link to heading")

If you used the default genesis, your custom VM has a prefunded address. You can verify its balance with a curl command. Make sure to substitute the command's URL with the `RPC URL` from your deployment output.
```bash
curl --location --request POST 'http://127.0.0.1:9650/ext/bc/myblockchain/rpc' \
--header 'Content-Type: application/json' \
--data-raw '{
    "jsonrpc": "2.0",
    "method": "eth_getBalance",
    "params": [
        "0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc",
        "latest"
    ],
    "id": 1
}'
```

The command should return:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": "0xd3c21bcecceda1000000"
}
```

The balance is hex encoded, so this means the address has a balance of 1 million tokens. Note that this command only works on custom VMs that implement the EVM's `eth_getBalance` interface.

Next Steps[​](#next-steps "Direct link to heading")
---------------------------------------------------

You've now unlocked the ability to deploy custom VMs. Go build something cool!

# With Multisig Auth (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-with-multisig-auth)

---
title: With Multisig Auth
description: Learn how to create an Avalanche L1 with a multisig authorization.
---

Avalanche L1 creators can control critical Avalanche L1 operations with an N of M multisig. This multisig must be set up at deployment time and can't be edited afterward. Multisigs are available on both the Fuji Testnet and Mainnet.

To set up your multisig, you need to know the P-Chain address of each key holder and what you want your signing threshold to be.

Avalanche-CLI requires Ledgers for Mainnet deployments. This how-to guide assumes the use of Ledgers for setting up your multisig.
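The N of M rule can be stated precisely: an Avalanche L1 change is authorized only when at least N of the M registered control keys have signed it. A minimal sketch of that check (the key strings are placeholders, not real P-Chain addresses):

```python
def is_authorized(control_keys: set[str], threshold: int, signers: set[str]) -> bool:
    """Return True when at least `threshold` of the control keys have signed."""
    if not 1 <= threshold <= len(control_keys):
        raise ValueError("threshold must be between 1 and the number of control keys")
    # Only signatures from registered control keys count toward the threshold.
    return len(signers & control_keys) >= threshold

# A 2 of 3 multisig with placeholder addresses:
keys = {"P-avax1aaa...", "P-avax1bbb...", "P-avax1ccc..."}
print(is_authorized(keys, 2, {"P-avax1aaa...", "P-avax1ccc..."}))  # True
print(is_authorized(keys, 2, {"P-avax1aaa..."}))                   # False
```

Any N of the M key holders can authorize a change, so losing access to up to M - N keys does not lock you out.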
## Prerequisites

- [`Avalanche-CLI`](https://github.com/ava-labs/avalanche-cli) installed
- Familiarity with the process of [Deploying an Avalanche L1 on Testnet](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-fuji-testnet) and [Deploying a Permissioned Avalanche L1 on Mainnet](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-mainnet)
- Multiple Ledger devices [configured for Avalanche](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-mainnet#setting-up-your-ledger)
- An Avalanche L1 configuration ready to deploy to either Fuji Testnet or Mainnet

Getting Started[​](#getting-started "Direct link to heading")
-------------------------------------------------------------

When issuing the transactions to create the Avalanche L1, you need to sign the TXs with multiple keys from the multisig.

### Specify Network[​](#specify-network "Direct link to heading")

Start the Avalanche L1 deployment with:

```bash
avalanche blockchain deploy testblockchain
```

The first step is to specify `Fuji` or `Mainnet` as the network:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Choose a network to deploy on:
    Local Network
    Fuji
  ▸ Mainnet
```

```bash
Deploying [testblockchain] to Mainnet
```

Ledger is automatically recognized as the signature mechanism on `Mainnet`. After that, the CLI shows the first `Mainnet` Ledger address.

```bash
Ledger address: P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5
```

### Set Control Keys[​](#set-control-keys "Direct link to heading")

Next, the CLI asks the user to specify the control keys. This is where you set up your multisig.

```bash
Configure which addresses may make changes to the Avalanche L1.
These addresses are known as your control keys. You are going to also set
how many control keys are required to make an Avalanche L1 change (the threshold).
Use the arrow keys to navigate: ↓ ↑ → ←
?
How would you like to set your control keys?:
  ▸ Use ledger address
    Custom list
```

Select `Custom list` and add every address that you'd like to be a key holder on the multisig.

```bash
✔ Custom list
? Enter control keys:
  ▸ Add
    Delete
    Preview
    More Info
↓   Done
```

Use the given menu to add each key, and select `Done` when finished. The output at this point should look something like:

```bash
✔ Custom list
✔ Add
Enter P-Chain address (Example: P-...): P-avax1wryu62weky9qjlp40cpmnqf6ml2hytnagj5q28
✔ Add
Enter P-Chain address (Example: P-...): P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5
✔ Add
Enter P-Chain address (Example: P-...): P-avax12gcy0xl0al6gcjrt0395xqlcuq078ml93wl5h8
✔ Add
Enter P-Chain address (Example: P-...): P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af
✔ Add
Enter P-Chain address (Example: P-...): P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g
✔ Done
Your Avalanche L1's control keys: [P-avax1wryu62weky9qjlp40cpmnqf6ml2hytnagj5q28 P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 P-avax12gcy0xl0al6gcjrt0395xqlcuq078ml93wl5h8 P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g]
```

When deploying an Avalanche L1 with Ledger, you must include the Ledger's default address determined in [Specify Network](#specify-network) for the deployment to succeed. If you don't, you may see an error like:

```
Error: wallet does not contain Avalanche L1 auth keys
exit status 1
```

### Set Threshold[​](#set-threshold "Direct link to heading")

Next, specify the threshold. In your N of M multisig, your threshold is N, and M is the number of control keys you added in the previous step.

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Select required number of control key signatures to make an Avalanche L1 change:
  ▸ 1
    2
    3
    4
    5
```

### Specify Control Keys to Sign the Chain Creation TX[​](#specify-control-keys-to-sign-the-chain-creation-tx "Direct link to heading")

You now need N of your key holders to sign the Avalanche L1 deployment transaction.
You must select which addresses you want to sign the TX.

```bash
? Choose an Avalanche L1 auth key:
  ▸ P-avax1wryu62weky9qjlp40cpmnqf6ml2hytnagj5q28
    P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5
    P-avax12gcy0xl0al6gcjrt0395xqlcuq078ml93wl5h8
    P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af
    P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g
```

A successful control key selection looks like:

```bash
✔ 2
✔ P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5
✔ P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af
Your subnet auth keys for chain creation: [P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af]
*** Please sign Avalanche L1 creation hash on the ledger device ***
```

#### Potential Errors[​](#potential-errors "Direct link to heading")

If the currently connected Ledger address isn't included in your TX signing group, the operation fails with:

```bash
✔ 2
✔ P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af
✔ P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g
Your Avalanche L1 auth keys for chain creation: [P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g]
Error: wallet does not contain Avalanche L1 auth keys
exit status 1
```

This can happen either because the control keys specified in the previous step don't contain the Ledger address, or because the Ledger's control key wasn't selected in the current step.
If the user has the correct address but doesn't have sufficient balance to pay for the TX, the operation fails with: ```bash ✔ 2 ✔ P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af ✔ P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g Your Avalanche L1 auth keys for chain creation: [P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g] *** Please sign Avalanche L1 creation hash on the ledger device *** Error: insufficient funds: provided UTXOs need 1000000000 more units of asset "rgNLkDPpANwqg3pHC4o9aGJmf2YU4GgTVUMRKAdnKodihkqgr" exit status 1 ``` ### Sign Avalanche L1 Deployment TX with the First Address[​](#sign-avalanche-l1-deployment-tx-with-the-first-address "Direct link to heading") The Avalanche L1 Deployment TX is ready for signing. ```bash *** Please sign Avalanche L1 creation hash on the ledger device *** ``` This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window by using the Ledger's right button, and then authorize the request by pressing both left and right buttons. ```bash Avalanche L1 has been created with ID: 2qUKjvPx68Fgc1NMi8w4mtaBt5hStgBzPhsQrS1m7vSub2q9ew. Now creating blockchain... *** Please sign blockchain creation hash on the ledger device *** ``` After successful Avalanche L1 creation, the CLI asks the user to sign the blockchain creation TX. This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window by using the Ledger's right button, and then authorize the request by pressing both left and right buttons. On success, the CLI provides Avalanche L1 deploy details. As only one address signed the chain creation TX, the CLI writes a file to disk to save the TX to continue the signing process with another command. 
```bash
+--------------------+----------------------------------------------------+
| DEPLOYMENT RESULTS |                                                    |
+--------------------+----------------------------------------------------+
| Chain Name         | testblockchain                                     |
+--------------------+----------------------------------------------------+
| Subnet ID          | 2qUKjvPx68Fgc1NMi8w4mtaBt5hStgBzPhsQrS1m7vSub2q9ew |
+--------------------+----------------------------------------------------+
| VM ID              | rW1esjm6gy4BtGvxKMpHB2M28MJGFNsqHRY9AmnchdcgeB3ii  |
+--------------------+----------------------------------------------------+

1 of 2 required Blockchain Creation signatures have been signed. Saving TX to disk to enable remaining signing.
Path to export partially signed TX to:
```

Enter the name of a file to write to disk, such as `partiallySigned.txt`. This file shouldn't already exist.

```bash
Path to export partially signed TX to: partiallySigned.txt
Addresses remaining to sign the tx:
  P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af
Connect a ledger with one of the remaining addresses or choose a stored key and run the signing command, or send "partiallySigned.txt" to another user for signing.
Signing command: avalanche transaction sign testblockchain --input-tx-filepath partiallySigned.txt
```

Gather Remaining Signatures and Issue the Avalanche L1 Deployment TX[​](#gather-remaining-signatures-and-issue-the-avalanche-l1-deployment-tx "Direct link to heading")
-----------------------------------------------------------------------------------------------------------------------------------------------------------

So far, one address has signed the Avalanche L1 deployment TX, but you need N signatures. Your Avalanche L1 has not been fully deployed yet. To get the remaining signatures, you may connect a different Ledger to the same computer you've been working on. Alternatively, you may send the `partiallySigned.txt` file to other users to sign themselves.
The remainder of this section assumes that you are working on a machine with access to both the remaining keys and the `partiallySigned.txt` file. ### Issue the Command to Sign the Chain Creation TX[​](#issue-the-command-to-sign-the-chain-creation-tx "Direct link to heading") Avalanche-CLI can detect the deployment network automatically. For `Mainnet` TXs, it uses your Ledger automatically. For `Fuji Testnet`, the CLI prompts the user to choose the signing mechanism. You can start the signing process with the `transaction sign` command: ```bash avalanche transaction sign testblockchain --input-tx-filepath partiallySigned.txt ``` ```bash Ledger address: P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af *** Please sign TX hash on the ledger device *** ``` Next, the CLI starts a new signing process for the Avalanche L1 deployment TX. If the Ledger isn't the correct one, the following error should appear instead: ```bash Ledger address: P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 Error: wallet does not contain Avalanche L1 auth keys exit status 1 ``` This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window by using the Ledger's right button, and then authorize the request by pressing both left and right buttons. Repeat this process until all required parties have signed the TX. You should see a message like this: ```bash All 2 required Tx signatures have been signed. Saving TX to disk to enable commit. Overwriting partiallySigned.txt Tx is fully signed, and ready to be committed Commit command: avalanche transaction commit testblockchain --input-tx-filepath partiallySigned.txt ``` Now, `partiallySigned.txt` contains a fully signed TX.
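When coordinating several signers, it can help to script the check for whether signing is complete. The following is a minimal sketch, not part of the CLI: `sign_output` is a hard-coded stand-in for the captured output of `avalanche transaction sign` (for example via `... 2>&1`), and the marker string is taken from the sample output shown in this guide.

```shell
# Illustrative only: $sign_output stands in for the captured output of
# `avalanche transaction sign`. The "fully signed" marker comes from the
# sample CLI output shown above.
sign_output='Tx is fully signed, and ready to be committed'
if printf '%s\n' "$sign_output" | grep -q 'fully signed'; then
  echo "ready to commit"
else
  echo "more signatures needed"
fi
```

In a real script you would capture the CLI's output instead of hard-coding it, and trigger the `transaction commit` step only on the "ready to commit" branch.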
### Commit the Avalanche L1 Deployment TX[​](#commit-the-avalanche-l1-deployment-tx "Direct link to heading") To submit the fully signed TX, run: ```bash avalanche transaction commit testblockchain --input-tx-filepath partiallySigned.txt ``` The CLI recognizes the deployment network automatically and submits the TX appropriately. ```bash +--------------------+-------------------------------------------------------------------------------------+ | DEPLOYMENT RESULTS | | +--------------------+-------------------------------------------------------------------------------------+ | Chain Name | testblockchain | +--------------------+-------------------------------------------------------------------------------------+ | Subnet ID | 2qUKjvPx68Fgc1NMi8w4mtaBt5hStgBzPhsQrS1m7vSub2q9ew | +--------------------+-------------------------------------------------------------------------------------+ | VM ID | rW1esjm6gy4BtGvxKMpHB2M28MJGFNsqHRY9AmnchdcgeB3ii | +--------------------+-------------------------------------------------------------------------------------+ | Blockchain ID | 2fx9EF61C964cWBu55vcz9b7gH9LFBkPwoj49JTSHA6Soqqzoj | +--------------------+-------------------------------------------------------------------------------------+ | RPC URL | http://127.0.0.1:9650/ext/bc/2fx9EF61C964cWBu55vcz9b7gH9LFBkPwoj49JTSHA6Soqqzoj/rpc | +--------------------+-------------------------------------------------------------------------------------+ | P-Chain TXID | 2fx9EF61C964cWBu55vcz9b7gH9LFBkPwoj49JTSHA6Soqqzoj | +--------------------+-------------------------------------------------------------------------------------+ ``` Your Avalanche L1 was successfully deployed with a multisig. Add Validators Using the Multisig[​](#add-validators-using-the-multisig "Direct link to heading") ------------------------------------------------------------------------------------------------- The `addValidator` command also requires use of the multisig.
Before starting, make sure to connect, unlock, and run the Avalanche Ledger app. ```bash avalanche blockchain addValidator testblockchain ``` ### Select Network[​](#select-network "Direct link to heading") First, specify the network. Select either `Fuji` or `Mainnet`: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Choose a network to add validator to.: ▸ Fuji Mainnet ``` ### Choose Signing Keys[​](#choose-signing-keys "Direct link to heading") Then, similar to the `deploy` command, the command asks the user to select the N control keys needed to sign the TX. ```bash ✔ Mainnet Use the arrow keys to navigate: ↓ ↑ → ← ? Choose an Avalanche L1 auth key: ▸ P-avax1wryu62weky9qjlp40cpmnqf6ml2hytnagj5q28 P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 P-avax12gcy0xl0al6gcjrt0395xqlcuq078ml93wl5h8 P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g ``` ```bash ✔ Mainnet ✔ P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 ✔ P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af Your subnet auth keys for add validator TX creation: [P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af]. ``` ### Finish Assembling the TX[​](#finish-assembling-the-tx "Direct link to heading") Take a look at [Add a Validator](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-mainnet#add-a-validator) for additional help issuing this transaction. If you're setting up a multisig, don't set your validator start time to just one minute in the future. Finishing the signing process takes significantly longer when using a multisig. ```bash Next, we need the NodeID of the validator you want to whitelist. Check https://build.avax.network/docs/api-reference/info-api#info-getnodeid for instructions about how to query the NodeID from your node (Edit host IP address and port to match your deployment, if needed).
What is the NodeID of the validator you'd like to whitelist?: NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg ✔ Default (20) When should your validator start validating? If your validator is not ready by this time, Avalanche L1 downtime can occur. ✔ Custom When should the validator start validating? Enter a UTC datetime in 'YYYY-MM-DD HH:MM:SS' format: 2022-11-22 23:00:00 ✔ Until primary network validator expires NodeID: NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg Network: Local Network Start time: 2022-11-22 23:00:00 End time: 2023-11-22 15:57:27 Weight: 20 Inputs complete, issuing transaction to add the provided validator information... ``` ```bash Ledger address: P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 *** Please sign add validator hash on the ledger device *** ``` After that, the command shows the connected Ledger's address, and asks the user to sign the TX with the Ledger. ```bash Partial TX created 1 of 2 required Add Validator signatures have been signed. Saving TX to disk to enable remaining signing. Path to export partially signed TX to: ``` Because you've set up a multisig, the TX isn't fully signed, and the command asks for a file to write to. Use something like `partialAddValidatorTx.txt`. ```bash Path to export partially signed TX to: partialAddValidatorTx.txt Addresses remaining to sign the tx: P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af Connect a Ledger with one of the remaining addresses or choose a stored key and run the signing command, or send "partialAddValidatorTx.txt" to another user for signing. Signing command: avalanche transaction sign testblockchain --input-tx-filepath partialAddValidatorTx.txt ``` Sign and Commit the Add Validator TX[​](#sign-and-commit-the-add-validator-tx "Direct link to heading") ------------------------------------------------------------------------------------------------------- The process is very similar to the signing of the Avalanche L1 Deployment TX. So far, one address has signed the TX, but you need N signatures.
To get the remaining signatures, you may connect a different Ledger to the same computer you've been working on. Alternatively, you may send the `partialAddValidatorTx.txt` file to other users so they can sign it themselves. The remainder of this section assumes that you are working on a machine with access to both the remaining keys and the `partialAddValidatorTx.txt` file. ### Issue the Command to Sign the Add Validator TX[​](#issue-the-command-to-sign-the-add-validator-tx "Direct link to heading") Avalanche-CLI can detect the deployment network automatically. For `Mainnet` TXs, it uses your Ledger automatically. For `Fuji Testnet`, the CLI prompts the user to choose the signing mechanism. ```bash avalanche transaction sign testblockchain --input-tx-filepath partialAddValidatorTx.txt ``` ```bash Ledger address: P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af *** Please sign TX hash on the ledger device *** ``` Next, the command starts a new signing process for the Add Validator TX. This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window by using the Ledger's right button, and then authorize the request by pressing both left and right buttons. Repeat this process until all required parties have signed the TX. You should see a message like this: ```bash All 2 required Tx signatures have been signed. Saving TX to disk to enable commit. Overwriting partialAddValidatorTx.txt Tx is fully signed, and ready to be committed Commit command: avalanche transaction commit testblockchain --input-tx-filepath partialAddValidatorTx.txt ``` Now, `partialAddValidatorTx.txt` contains a fully signed TX.
### Issue the Command to Commit the Add Validator TX[​](#issue-the-command-to-commit-the-add-validator-tx "Direct link to heading") To submit the fully signed TX, run: ```bash avalanche transaction commit testblockchain --input-tx-filepath partialAddValidatorTx.txt ``` The CLI recognizes the deployment network automatically and submits the TX appropriately. ```bash Transaction successful, transaction ID: K7XNSwcmgjYX7BEdtFB3hEwQc6YFKRq9g7hAUPhW4J5bjhEJG ``` You've successfully added the validator to the Avalanche L1. # Teleporter on Devnet (/docs/tooling/avalanche-cli/cross-chain/teleporter-devnet) --- title: Teleporter on Devnet description: This how-to guide focuses on deploying Teleporter-enabled Avalanche L1s to a Devnet. --- After this tutorial, you will have created a Devnet, deployed two Avalanche L1s in it, and enabled them to cross-communicate with each other and with the C-Chain through Teleporter and the underlying Warp technology. For more information on cross-chain messaging through Teleporter and Warp, check: - [Cross Chain References](/docs/cross-chain) Note that currently only [Subnet-EVM](https://github.com/ava-labs/subnet-evm) and [Subnet-EVM-Based](/docs/virtual-machines/evm-customization/introduction) virtual machines support Teleporter. ## Prerequisites Before we begin, you will need to have: - Created an AWS account and placed an updated AWS `credentials` file in your home directory with a \[default\] profile Note: the tutorial uses AWS hosts, but Devnets can also be created and operated in other supported cloud providers, such as GCP. Create Avalanche L1s Configurations[​](#create-avalanche-l1s-configurations "Direct link to heading") ----------------------------------------------------------------------------------------- For this section, we will follow these [steps](/docs/tooling/cross-chain/teleporter-local-network#create-avalanche-l1s-configurations) to create two Teleporter-enabled Avalanche L1s, `` and ``.
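Before running the Devnet commands below, you can sanity-check the AWS `credentials` prerequisite mentioned above. This is a minimal illustrative sketch: it writes an example file to a local path purely so the check has something to read; the CLI actually reads `~/.aws/credentials`.

```shell
# Illustrative stand-in for ~/.aws/credentials; real values come from AWS.
creds="./credentials.example"
printf '[default]\naws_access_key_id=EXAMPLE\naws_secret_access_key=EXAMPLE\n' > "$creds"

# The check the prerequisite implies: a [default] profile section must exist.
if grep -q '^\[default\]' "$creds"; then
  echo "found [default] profile"
fi
```

To check your real setup, point `creds` at `"$HOME/.aws/credentials"` and skip the `printf` line.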
Create a Devnet and Deploy an Avalanche L1 in It[​](#create-a-devnet-and-deploy-a-avalanche-l1-in-it "Direct link to heading") ----------------------------------------------------------------------------------------------------------------- Let's use the `devnet wiz` command to create a devnet `` and deploy `` in it. The devnet will be created in the `us-east-1` region of AWS, and will consist of 5 validators only. ```bash avalanche node devnet wiz --aws --node-type default --region us-east-1 --num-validators 5 --num-apis 0 --enable-monitoring=false --default-validator-params Creating the devnet... Creating new EC2 instance(s) on AWS... ... Deploying [chain1] to Cluster ... configuring AWM RElayer on host i-0f1815c016b555fcc Setting the nodes as subnet trackers ... Setting up teleporter on subnet Teleporter Messenger successfully deployed to chain1 (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to chain1 (0xb623C4495220C603D0A939D32478F55891a61750) Teleporter Messenger successfully deployed to c-chain (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to c-chain (0x5DB9A7629912EBF95876228C24A848de0bfB43A9) Starting AWM Relayer Service setting AWM Relayer on host i-0f1815c016b555fcc to relay subnet chain1 updating configuration file ~/.avalanche-cli/nodes/i-0f1815c016b555fcc/services/awm-relayer/awm-relayer-config.json Devnet is successfully created and is now validating subnet chain1! 
Subnet RPC URL: http://67.202.23.231:9650/ext/bc/fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p/rpc ✓ Cluster information YAML file can be found at ~/.avalanche-cli/nodes/inventories//clusterInfo.yaml at local host ``` Notice some details here: - Two smart contracts are deployed to the Avalanche L1: Teleporter Messenger and Teleporter Registry - Both Teleporter smart contracts are also deployed to `C-Chain` - [AWM Teleporter Relayer](https://github.com/ava-labs/awm-relayer) is installed and configured as a service on one of the nodes (A Relayer [listens](/docs/cross-chain/teleporter/overview#data-flow) for new messages being generated on a source Avalanche L1 and sends them to the destination Avalanche L1.) The CLI configures the Relayer to enable every Avalanche L1 to send messages to all other Avalanche L1s. If you add more Avalanche L1s to the Devnet, the Relayer will be automatically reconfigured. Checking Devnet Configuration and Relayer Logs[​](#checking-devnet-configuration-and-relayer-logs "Direct link to heading") --------------------------------------------------------------------------------------------------------------------------- Execute the `node list` command to get a list of the devnet nodes: ```bash avalanche node list Cluster "" (Devnet) Node i-0f1815c016b555fcc (NodeID-91PGQ7keavfSV1XVFva2WsQXWLWZqqqKe) 67.202.23.231 [Validator,Relayer] Node i-026392a651571232c (NodeID-AkPyyTs9e9nPGShdSoxdvWYZ6X2zYoyrK) 52.203.183.68 [Validator] Node i-0d1b98d5d941d6002 (NodeID-ByEe7kuwtrPStmdMgY1JiD39pBAuFY2mS) 50.16.235.194 [Validator] Node i-0c291f54bb38c2984 (NodeID-8SE2CdZJExwcS14PYEqr3VkxFyfDHKxKq) 52.45.0.56 [Validator] Node i-049916e2f35231c29 (NodeID-PjQY7xhCGaB8rYbkXYddrr1mesYi29oFo) 3.214.163.110 [Validator] ``` Notice that, in this case, `i-0f1815c016b555fcc` was set as the Relayer. This host contains a `systemd` service called `awm-relayer` that can be used to check the Relayer logs and to start or stop its execution.
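If you need the relayer host programmatically (for example, to feed into `avalanche node ssh`), you can filter the `node list` output for the `[Validator,Relayer]` tag. A sketch, assuming the output format shown above (hard-coded here for illustration instead of captured from the CLI):

```shell
# $node_list stands in for captured `avalanche node list` output.
node_list='Node i-0f1815c016b555fcc (NodeID-91PGQ7keavfSV1XVFva2WsQXWLWZqqqKe) 67.202.23.231 [Validator,Relayer]
Node i-026392a651571232c (NodeID-AkPyyTs9e9nPGShdSoxdvWYZ6X2zYoyrK) 52.203.183.68 [Validator]'

# The relayer host is the node tagged [Validator,Relayer]; field 2 is the instance ID.
relayer_host=$(printf '%s\n' "$node_list" | awk '/\[Validator,Relayer\]/ {print $2}')
echo "$relayer_host"   # prints i-0f1815c016b555fcc
```

Note this depends on the exact output layout, which may change between CLI versions.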
To view the Relayer logs, the following command can be used: ```bash avalanche node ssh i-0f1815c016b555fcc "journalctl -u awm-relayer --no-pager" [Node i-0f1815c016b555fcc (NodeID-91PGQ7keavfSV1XVFva2WsQXWLWZqqqKe) 67.202.23.231 [Validator,Relayer]] Warning: Permanently added '67.202.23.231' (ED25519) to the list of known hosts. -- Logs begin at Fri 2024-04-05 14:11:43 UTC, end at Fri 2024-04-05 14:30:24 UTC. -- Apr 05 14:15:06 ip-172-31-47-187 systemd[1]: Started AWM Relayer systemd service. Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.018Z","logger":"awm-relayer","caller":"main/main.go:66","msg":"Initializing awm-relayer"} Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.018Z","logger":"awm-relayer","caller":"main/main.go:71","msg":"Set config options."} Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.018Z","logger":"awm-relayer","caller":"main/main.go:78","msg":"Initializing destination clients"} Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.021Z","logger":"awm-relayer","caller":"main/main.go:97","msg":"Initializing app request network"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.159Z","logger":"awm-relayer","caller":"main/main.go:309","msg":"starting metrics server...","port":9090} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"main/main.go:251","msg":"Creating relayer","originBlockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"main/main.go:251","msg":"Creating relayer","originBlockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 
05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"relayer/relayer.go:114","msg":"Creating relayer","subnetID":"11111111111111111111111111111111LpoYY","subnetIDHex":"0000000000000000000000000000000000000000000000000000000000000000","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6","blockchainIDHex":"a2b6b947cf2b9bf6df03c8caab08e38ab951d8b120b9c37265d9be01d86bb170"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"relayer/relayer.go:114","msg":"Creating relayer","subnetID":"giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML","subnetIDHex":"5a2e2d87d74b4ec62fdd6626e7d36a44716484dfcc721aa4f2168e8a61af63af","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p","blockchainIDHex":"582fc7bd55472606c260668213bf1b6d291df776c9edf7e042980a84cce7418a"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.171Z","logger":"awm-relayer","caller":"evm/subscriber.go:247","msg":"Successfully subscribed","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.171Z","logger":"awm-relayer","caller":"relayer/relayer.go:161","msg":"processed-missed-blocks set to false, starting processing from chain head","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.172Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in database","relayerID":"0xea06381426934ec1800992f41615b9d362c727ad542f6351dbfa7ad2849a35bf","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: 
{"level":"info","timestamp":"2024-04-05T14:15:08.173Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in database","relayerID":"0x175e14327136d57fe22d4bdd295ff14bea8a7d7ab1884c06a4d9119b9574b9b3","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.173Z","logger":"awm-relayer","caller":"main/main.go:272","msg":"Created relayer","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.173Z","logger":"awm-relayer","caller":"main/main.go:295","msg":"Relayer initialized. Listening for messages to relay.","originBlockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.178Z","logger":"awm-relayer","caller":"evm/subscriber.go:247","msg":"Successfully subscribed","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.178Z","logger":"awm-relayer","caller":"relayer/relayer.go:161","msg":"processed-missed-blocks set to false, starting processing from chain head","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.179Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in database","relayerID":"0xe584ccc0df44506255811f6b54375e46abd5db40a4c84fd9235a68f7b69c6f06","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.179Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in 
database","relayerID":"0x70f14d33bde4716928c5c4723d3969942f9dfd1f282b64ffdf96f5ac65403814","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.180Z","logger":"awm-relayer","caller":"main/main.go:272","msg":"Created relayer","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.180Z","logger":"awm-relayer","caller":"main/main.go:295","msg":"Relayer initialized. Listening for messages to relay.","originBlockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} ``` Deploying the Second Avalanche L1[​](#deploying-the-second-avalanche-l1 "Direct link to heading") ------------------------------------------------------------------------------------- Let's use the `devnet wiz` command again to deploy ``. When deploying Avalanche L1 ``, the two Teleporter contracts will not be deployed to the C-Chain, as they were already deployed with the first Avalanche L1. ```bash avalanche node devnet wiz --default-validator-params Adding subnet into existing devnet ... ... Deploying [chain2] to Cluster ... Stopping AWM Relayer Service Setting the nodes as subnet trackers ...
Setting up teleporter on subnet Teleporter Messenger successfully deployed to chain2 (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to chain2 (0xb623C4495220C603D0A939D32478F55891a61750) Teleporter Messenger has already been deployed to c-chain Starting AWM Relayer Service setting AWM Relayer on host i-0f1815c016b555fcc to relay subnet chain2 updating configuration file ~/.avalanche-cli/nodes/i-0f1815c016b555fcc/services/awm-relayer/awm-relayer-config.json Devnet is now validating subnet chain2 Subnet RPC URL: http://67.202.23.231:9650/ext/bc/7gKt6evRnkA2uVHRfmk9WrH3dYZH9gEVVxDAknwtjvtaV3XuQ/rpc ✓ Cluster information YAML file can be found at ~/.avalanche-cli/nodes/inventories//clusterInfo.yaml at local host ``` Verify Teleporter Is Successfully Set Up[​](#verify-teleporter-is-successfully-set-up "Direct link to heading") --------------------------------------------------------------------------------------------------------------- To verify that Teleporter is successfully set up, let's send a couple of cross-chain messages: ```bash avalanche teleporter msg C-Chain chain1 "Hello World" --cluster Delivering message "this is a message" to source subnet "C-Chain" (2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6) Waiting for message to be received at destination subnet subnet "chain1" (fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p) Message successfully Teleported! ``` ```bash avalanche teleporter msg chain2 chain1 "Hello World" --cluster Delivering message "this is a message" to source subnet "chain2" (29WP91AG7MqPUFEW2YwtKnsnzVrRsqcWUpoaoSV1Q9DboXGf4q) Waiting for message to be received at destination subnet subnet "chain1" (fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p) Message successfully Teleported! ``` You have Teleport-ed your first message in the Devnet!
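When reading the relayer logs shown earlier, the structured JSON fields are easy to pull out with standard tools. A sketch using one sample log line from this guide (hard-coded here; in practice you would pipe in `journalctl -u awm-relayer` output):

```shell
# One JSON log line from the awm-relayer service (taken from the sample logs above).
log_line='{"level":"info","logger":"awm-relayer","msg":"Successfully subscribed","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"}'

# Extract the blockchainID field without requiring jq on the host.
chain_id=$(printf '%s\n' "$log_line" | sed -n 's/.*"blockchainID":"\([^"]*\)".*/\1/p')
echo "$chain_id"
```

If `jq` is available on the host, `jq -r .blockchainID` is the more robust choice for real log parsing.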
Obtaining Information on Teleporter Deploys[​](#obtaining-information-on-teleporter-deploys "Direct link to heading") --------------------------------------------------------------------------------------------------------------------- ### Obtaining Avalanche L1 Information[​](#obtaining-avalanche-l1-information "Direct link to heading") By executing `blockchain describe` on a Teleporter enabled Avalanche L1, the following relevant information can be found: - Blockchain RPC URL - Blockchain ID in cb58 format - Blockchain ID in plain hex format - Teleporter Messenger address - Teleporter Registry address Let's get the information for ``: ```bash avalanche blockchain describe _____ _ _ _ | __ \ | | (_) | | | | | ___| |_ __ _ _| |___ | | | |/ _ \ __/ _ | | / __| | |__| | __/ || (_| | | \__ \ |_____/ \___|\__\__,_|_|_|___/ +--------------------------------+----------------------------------------------------------------------------------------+ | PARAMETER | VALUE | +--------------------------------+----------------------------------------------------------------------------------------+ | Blockchain Name | chain1 | +--------------------------------+----------------------------------------------------------------------------------------+ | ChainID | 1 | +--------------------------------+----------------------------------------------------------------------------------------+ | Token Name | TOKEN1 Token | +--------------------------------+----------------------------------------------------------------------------------------+ | Token Symbol | TOKEN1 | +--------------------------------+----------------------------------------------------------------------------------------+ | VM Version | v0.6.3 | +--------------------------------+----------------------------------------------------------------------------------------+ | VM ID | srEXiWaHjFEgKSgK2zBgnWQUVEy2MZA7UUqjqmBSS7MZYSCQ5 | 
+--------------------------------+----------------------------------------------------------------------------------------+ | Cluster SubnetID | giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML | +--------------------------------+----------------------------------------------------------------------------------------+ | Cluster RPC URL | http://67.202.23.231:9650/ext/bc/fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p/rpc | +--------------------------------+----------------------------------------------------------------------------------------+ | Cluster | fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p | | BlockchainID | | + +----------------------------------------------------------------------------------------+ | | 0x582fc7bd55472606c260668213bf1b6d291df776c9edf7e042980a84cce7418a | | | | +--------------------------------+----------------------------------------------------------------------------------------+ | Cluster Teleporter| 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf | | Messenger Address | | +--------------------------------+----------------------------------------------------------------------------------------+ | Cluster Teleporter| 0xb623C4495220C603D0A939D32478F55891a61750 | | Registry Address | | +--------------------------------+----------------------------------------------------------------------------------------+ ... 
``` ### Obtaining C-Chain Information[​](#obtaining-c-chain-information "Direct link to heading") Similar information can be found for C-Chain by using `primary describe`: ```bash avalanche primary describe --cluster _____ _____ _ _ _____ / ____| / ____| | (_) | __ \ | | ______| | | |__ __ _ _ _ __ | |__) |_ _ _ __ __ _ _ __ ___ ___ | | |______| | | '_ \ / _ | | '_ \ | ___/ _ | '__/ _ | '_ _ \/ __| | |____ | |____| | | | (_| | | | | | | | | (_| | | | (_| | | | | | \__ \ \_____| \_____|_| |_|\__,_|_|_| |_| |_| \__,_|_| \__,_|_| |_| |_|___/ +------------------------------+--------------------------------------------------------------------+ | PARAMETER | VALUE | +------------------------------+--------------------------------------------------------------------+ | RPC URL | http://67.202.23.231:9650/ext/bc/C/rpc | +------------------------------+--------------------------------------------------------------------+ | EVM Chain ID | 43112 | +------------------------------+--------------------------------------------------------------------+ | TOKEN SYMBOL | AVAX | +------------------------------+--------------------------------------------------------------------+ | Address | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | +------------------------------+--------------------------------------------------------------------+ | Balance | 49999489.815751426 | +------------------------------+--------------------------------------------------------------------+ | Private Key | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 | +------------------------------+--------------------------------------------------------------------+ | BlockchainID | 2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6 | + +--------------------------------------------------------------------+ | | 0xa2b6b947cf2b9bf6df03c8caab08e38ab951d8b120b9c37265d9be01d86bb170 | +------------------------------+--------------------------------------------------------------------+ | ICM Messenger Address 
| 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf | +------------------------------+--------------------------------------------------------------------+ | ICM Registry Address | 0x5DB9A7629912EBF95876228C24A848de0bfB43A9 | +------------------------------+--------------------------------------------------------------------+ ``` Controlling Relayer Execution[​](#controlling-relayer-execution "Direct link to heading") ----------------------------------------------------------------------------------------- The CLI provides two commands to remotely control the Relayer's execution: ```bash avalanche interchain relayer stop --cluster ✓ Remote AWM Relayer on i-0f1815c016b555fcc successfully stopped ``` ```bash avalanche interchain relayer start --cluster ✓ Remote AWM Relayer on i-0f1815c016b555fcc successfully started ``` # Teleporter on Local Network (/docs/tooling/avalanche-cli/cross-chain/teleporter-local-network) --- title: Teleporter on Local Network description: This how-to guide focuses on deploying Teleporter-enabled Avalanche L1s to a local Avalanche network. --- After this tutorial, you will have created and deployed two Avalanche L1s to the local network and enabled them to cross-communicate with each other and with the local C-Chain (through Teleporter and the underlying Warp technology). For more information on cross-chain messaging through Teleporter and Warp, check: - [Cross Chain References](/docs/cross-chain) Note that currently only [Subnet-EVM](https://github.com/ava-labs/subnet-evm) and [Subnet-EVM-Based](/docs/virtual-machines/evm-customization/introduction) virtual machines support Teleporter.
## Prerequisites - [Avalanche-CLI installed](/docs/tooling/get-avalanche-cli) ## Create Avalanche L1 Configurations Let's create an Avalanche L1 called `` with the latest Subnet-EVM version, a chain ID of 1, TOKEN1 as the token name, and with default Subnet-EVM parameters (more information regarding Avalanche L1 creation can be found [here](/docs/tooling/create-avalanche-l1#create-your-avalanche-l1-configuration)): ```bash avalanche blockchain create --evm --latest\ --evm-chain-id 1 --evm-token TOKEN1 --evm-defaults creating genesis for configuring airdrop to stored key "subnet__airdrop" with address 0x0EF8151A3e6ad1d4e17C8ED4128b20EB5edc58B1 loading stored key "cli-teleporter-deployer" for teleporter deploys (evm address, genesis balance) = (0xE932784f56774879e03F3624fbeC6261154ec711, 600000000000000000000) using latest teleporter version (v1.0.0) ✓ Successfully created subnet configuration ``` Notice that, by default, Teleporter is enabled and a stored key is created to fund Teleporter-related operations (that is, deploying the Teleporter smart contracts and funding the Teleporter Relayer). To disable Teleporter in your Avalanche L1, use the flag `--teleporter=false` when creating the Avalanche L1. To disable the Relayer for your Avalanche L1, use the flag `--relayer=false` when creating the Avalanche L1.
Now let's create a second Avalanche L1 called ``, with similar settings: ```bash avalanche blockchain create --evm --latest\ --evm-chain-id 2 --evm-token TOKEN2 --evm-defaults creating genesis for configuring airdrop to stored key "subnet__airdrop" with address 0x0EF815FFFF6ad1d4e17C8ED4128b20EB5edAABBB loading stored key "cli-teleporter-deployer" for teleporter deploys (evm address, genesis balance) = (0xE932784f56774879e03F3624fbeC6261154ec711, 600000000000000000000) using latest teleporter version (v1.0.0) ✓ Successfully created subnet configuration ``` Deploy the Avalanche L1s to Local Network[​](#deploy-the-avalanche-l1s-to-local-network "Direct link to heading") ----------------------------------------------------------------------------------------------------- Let's deploy ``: ```bash avalanche blockchain deploy --local Deploying [] to Local Network Backend controller started, pid: 149427, output at: ~/.avalanche-cli/runs/server_20240229_165923/avalanche-cli-backend.log Booting Network. Wait until healthy... Node logs directory: ~/.avalanche-cli/runs/network_20240229_165923/node/logs Network ready to use. Deploying Blockchain. Wait until network acknowledges... Teleporter Messenger successfully deployed to c-chain (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to c-chain (0x17aB05351fC94a1a67Bf3f56DdbB941aE6c63E25) Teleporter Messenger successfully deployed to (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to (0x9EDc4cB4E781413b1b82CC3A92a60131FC111F58) Using latest awm-relayer version (v1.1.0) Executing AWM-Relayer... Blockchain ready to use. 
Local network node endpoints:
+-------+----+------------------------------------------------------------------------------------+-----------------------------------------+
| NODE  | VM | URL                                                                                | ALIAS URL                               |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------------+
| node1 |    | http://127.0.0.1:9650/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9650/ext/bc/chain1/rpc |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------------+
| node2 |    | http://127.0.0.1:9652/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9652/ext/bc/chain1/rpc |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------------+
| node3 |    | http://127.0.0.1:9654/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9654/ext/bc/chain1/rpc |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------------+
| node4 |    | http://127.0.0.1:9656/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9656/ext/bc/chain1/rpc |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------------+
| node5 |    | http://127.0.0.1:9658/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9658/ext/bc/chain1/rpc |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------------+

Browser Extension connection details (any node URL from above works):
RPC URL: http://127.0.0.1:9650/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc
Funded address:
0x0EF8151A3e6ad1d4e17C8ED4128b20EB5edc58B1 with 1000000 (10^18) - private key: 16289399c9466912ffffffdc093c9b51124f0dc54ac7a766b2bc5ccf558d8eee
Network name: chain1
Chain ID: 1
Currency Symbol: TOKEN1
```

Notice some details here:

- Two smart contracts are deployed to each Avalanche L1: Teleporter Messenger and Teleporter Registry
- Both Teleporter smart contracts are also deployed to `C-Chain` in the Local Network
- [AWM Teleporter Relayer](https://github.com/ava-labs/awm-relayer) is installed, configured, and executed in the background (a Relayer [listens](/docs/cross-chain/teleporter/overview#data-flow) for new messages generated on a source Avalanche L1 and sends them to the destination Avalanche L1)

The CLI configures the Relayer to enable every Avalanche L1 to send messages to all other Avalanche L1s. If you add more Avalanche L1s, the Relayer is automatically reconfigured.

When deploying Avalanche L1 `chain2`, the two Teleporter contracts are not deployed to C-Chain in the Local Network, as they were already deployed when we deployed the first Avalanche L1.

```bash
avalanche blockchain deploy chain2 --local
Deploying [chain2] to Local Network
Deploying Blockchain. Wait until network acknowledges...
Teleporter Messenger has already been deployed to c-chain
Teleporter Messenger successfully deployed to chain2 (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf)
Teleporter Registry successfully deployed to chain2 (0x9EDc4cB4E781413b1b82CC3A92a60131FC111F58)
Using latest awm-relayer version (v1.1.0)
Executing AWM-Relayer...
Blockchain ready to use.
Local network node endpoints:
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+
| NODE  | VM | URL                                                                                 | ALIAS URL                               |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+
| node1 |    | http://127.0.0.1:9650/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9650/ext/bc/chain2/rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+
| node1 |    | http://127.0.0.1:9650/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc  | http://127.0.0.1:9650/ext/bc/chain1/rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+
| node2 |    | http://127.0.0.1:9652/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9652/ext/bc/chain2/rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+
| node2 |    | http://127.0.0.1:9652/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc  | http://127.0.0.1:9652/ext/bc/chain1/rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+
| node3 |    | http://127.0.0.1:9654/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9654/ext/bc/chain2/rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+
| node3 |    | http://127.0.0.1:9654/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc  | http://127.0.0.1:9654/ext/bc/chain1/rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+
| node4 |    | http://127.0.0.1:9656/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9656/ext/bc/chain2/rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+
| node4 |    | http://127.0.0.1:9656/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc  | http://127.0.0.1:9656/ext/bc/chain1/rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+
| node5 |    | http://127.0.0.1:9658/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc  | http://127.0.0.1:9658/ext/bc/chain1/rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+
| node5 |    | http://127.0.0.1:9658/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9658/ext/bc/chain2/rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------------+

Browser Extension connection details (any node URL from above works):
RPC URL: http://127.0.0.1:9650/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc
Funded address: 0x0EF815FFFF6ad1d4e17C8ED4128b20EB5edAABBB with 1000000 (10^18) - private key: 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027
Network name: chain2
Chain ID: 2
Currency Symbol: TOKEN2
```

## Verify Teleporter Is Successfully Set Up

To verify that Teleporter is set up successfully, let's send a couple of cross-chain messages. First, from C-Chain to chain1:

```bash
avalanche teleporter sendMsg C-Chain chain1 "Hello World" --local
```

Results:

```bash
Delivering message "Hello World" to source subnet "C-Chain"
Waiting for message to be received at destination subnet "chain1"
Message successfully Teleported!
```

Next, from chain2 to chain1:

```bash
avalanche teleporter sendMsg chain2 chain1 "Hello World" --local
```

Results:

```bash
Delivering message "Hello World" to source subnet "chain2"
Waiting for message to be received at destination subnet "chain1"
Message successfully Teleported!
```

You have Teleported your first messages on the Local Network! Relayer logs can be found at `~/.avalanche-cli/runs/awm-relayer.log`, and the Relayer configuration can be found at `~/.avalanche-cli/runs/awm-relayer-config.json`.

## Obtaining Information on Teleporter Deploys

### Obtaining Avalanche L1 Information

By executing `blockchain describe` on a Teleporter-enabled Avalanche L1, the following relevant information can be found:

- Blockchain RPC URL
- Blockchain ID in cb58 format
- Blockchain ID in plain hex format
- Teleporter Messenger address
- Teleporter Registry address

Let's get the information for `chain1`:

```bash
avalanche blockchain describe chain1
+--------------------------------+-------------------------------------------------------------------------------------+
| PARAMETER                      | VALUE                                                                               |
+--------------------------------+-------------------------------------------------------------------------------------+
| Blockchain Name                | chain1                                                                              |
+--------------------------------+-------------------------------------------------------------------------------------+
| ChainID                        | 1                                                                                   |
+--------------------------------+-------------------------------------------------------------------------------------+
| Token Name                     | TOKEN1 Token                                                                        |
+--------------------------------+-------------------------------------------------------------------------------------+
| Token Symbol                   | TOKEN1                                                                              |
+--------------------------------+-------------------------------------------------------------------------------------+
| VM Version                     | v0.6.3                                                                              |
+--------------------------------+-------------------------------------------------------------------------------------+
| VM ID                          | srEXiWaHjFEgKSgK2zBgnWQUVEy2MZA7UUqjqmBSS7MZYSCQ5                                   |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network SubnetID         | 2CZP2ndbQnZxTzGuZjPrJAm5b4s2K2Bcjh8NqWoymi8NZMLYQk                                  |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network RPC URL          | http://127.0.0.1:9650/ext/bc/2cFWSgGkmRrmKtbPkB8yTpnq9ykK3Dc2qmxphwYtiGXCvnSwg8/rpc |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network BlockchainID     | 2cFWSgGkmRrmKtbPkB8yTpnq9ykK3Dc2qmxphwYtiGXCvnSwg8                                  |
+                                +-------------------------------------------------------------------------------------+
|                                | 0xd3bc5f71e6946d17c488d320cd1f6f5337d9dce75b3fac5023433c4634b6e91e                  |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network ICM              | 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf                                          |
| Messenger Address              |                                                                                     |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network ICM              | 0xbD9e8eC38E43d34CAB4194881B9BF39d639D7Bd3                                          |
| Registry Address               |                                                                                     |
+--------------------------------+-------------------------------------------------------------------------------------+
...
```

### Obtaining C-Chain Information

Similar information can be found for C-Chain by using `primary describe`:

```bash
avalanche primary describe --local
+------------------------------+--------------------------------------------------------------------+
| PARAMETER                    | VALUE                                                              |
+------------------------------+--------------------------------------------------------------------+
| RPC URL                      | http://127.0.0.1:9650/ext/bc/C/rpc                                 |
+------------------------------+--------------------------------------------------------------------+
| EVM Chain ID                 | 43112                                                              |
+------------------------------+--------------------------------------------------------------------+
| TOKEN SYMBOL                 | AVAX                                                               |
+------------------------------+--------------------------------------------------------------------+
| Address                      | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC                         |
+------------------------------+--------------------------------------------------------------------+
| Balance                      | 49999489.829989485                                                 |
+------------------------------+--------------------------------------------------------------------+
| Private Key                  | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027   |
+------------------------------+--------------------------------------------------------------------+
| BlockchainID                 | 2JeJDKL9Bvn1vLuuPL1DpUccBCVUh7iRnkv3a5pV9kJW5HbuQz                 |
+                              +--------------------------------------------------------------------+
|                              | 0xabc1bd35cb7313c8a2b62980172e6d7ef42aaa532c870499a148858b0b6a34fd |
+------------------------------+--------------------------------------------------------------------+
| ICM Messenger Address        | 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf                         |
+------------------------------+--------------------------------------------------------------------+
| ICM Registry Address         | 0x17aB05351fC94a1a67Bf3f56DdbB941aE6c63E25                         |
+------------------------------+--------------------------------------------------------------------+
```

## Controlling Relayer Execution

Besides the option of not using a Relayer at Avalanche L1 creation time, the Relayer can be stopped and restarted on user request.

To stop the Relayer:

```bash
avalanche interchain relayer stop --local
✓ Local AWM Relayer successfully stopped
```

To start it again:

```bash
avalanche interchain relayer start --local
using latest awm-relayer version (v1.1.0)
Executing AWM-Relayer...
✓ Local AWM Relayer successfully started
Logs can be found at ~/.avalanche-cli/runs/awm-relayer.log
```

# Teleporter Token Bridge (/docs/tooling/avalanche-cli/cross-chain/teleporter-token-bridge)

---
title: Teleporter Token Bridge
description: Deploy an example Teleporter Token Bridge on the local network.
---

Teleporter Token Bridge enables users to transfer tokens between Avalanche L1s. The bridge is a set of smart contracts deployed across multiple Avalanche L1s that leverages Teleporter for cross-chain communication.

For more information on Teleporter Token Bridge, check:

- [Teleporter Token Bridge README](https://github.com/ava-labs/teleporter-token-bridge)

## How to Deploy Teleporter Token Bridge on a Local Network

This how-to guide focuses on deploying Teleporter Token Bridge on a local Avalanche network.
After this tutorial, you will have learned how to transfer an ERC-20 token between two Teleporter-enabled Avalanche L1s, and between C-Chain and a Teleporter-enabled Avalanche L1.

## Prerequisites

For our example, you will first need to create and deploy a Teleporter-enabled Avalanche L1 on a Local Network. We will name our Avalanche L1 `testblockchain`.

- To create a Teleporter-enabled Avalanche L1 configuration, [visit here](/docs/tooling/cross-chain/teleporter-local-network#create-subnet-configurations)
- To deploy a Teleporter-enabled Avalanche L1, [visit here](/docs/tooling/cross-chain/teleporter-local-network#deploy-the-subnets-to-local-network)

## Deploy ERC-20 Token in C-Chain

First, let's create an ERC-20 Token and deploy it to C-Chain. For our example, it will be called TOK. A sample script to deploy the ERC-20 Token can be found [here](https://github.com/ava-labs/avalanche-cli/blob/main/cmd/contractcmd/deploy_erc20.go).

To deploy the ERC-20 Token to C-Chain, we will call:

```bash
avalanche contract deploy erc20
```

When the command completes, our EWOQ address `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` will have received 100000 TOK tokens on C-Chain. Note that `0x5DB9A7629912EBF95876228C24A848de0bfB43A9` is our ERC-20 Token address, which we will use in our next command.

## Deploy Teleporter Token Bridge

Next, we will deploy Teleporter Token Bridge to our Local Network: the Home contract goes to C-Chain and the Remote contract to our Avalanche L1.
```bash
avalanche teleporter bridge deploy
✔ Local Network
✔ C-Chain
✔ Deploy a new Home for the token
✔ An ERC-20 token
Enter the address of the ERC-20 Token: 0x5DB9A7629912EBF95876228C24A848de0bfB43A9
✔ Subnet testblockchain
Downloading Bridge Contracts
Compiling Bridge
Home Deployed to http://127.0.0.1:9650/ext/bc/C/rpc
Home Address: 0x4Ac1d98D9cEF99EC6546dEd4Bd550b0b287aaD6D
Remote Deployed to http://127.0.0.1:9650/ext/bc/2TnSWd7odhkDWKYFDZHqU7CvtY8G6m46gWxUnhJRNYu4bznrrc/rpc
Remote Address: 0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0
```

Before we transfer our ERC-20 token from C-Chain to our Avalanche L1, we will first call the `avalanche key list` command to check our initial balances on C-Chain and on the Avalanche L1. We will query the balances of our ERC-20 Token TOK on both chains: it has the address `0x5DB9A7629912EBF95876228C24A848de0bfB43A9` on C-Chain and the address `0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0` on our Avalanche L1 `testblockchain`.

```bash
avalanche key list --local --keys ewoq,blockchain_airdrop --subnets c,testblockchain --tokens 0x5DB9A7629912EBF95876228C24A848de0bfB43A9,0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0
+--------+--------------------+----------------+--------------------------------------------+---------------+------------------+---------------+
| KIND   | NAME               | SUBNET         | ADDRESS                                    | TOKEN         | BALANCE          | NETWORK       |
+--------+--------------------+----------------+--------------------------------------------+---------------+------------------+---------------+
| stored | ewoq               | testblockchain | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | TOK (0x7DD1.) | 0                | Local Network |
+        +                    +----------------+--------------------------------------------+---------------+------------------+---------------+
|        |                    | C-Chain        | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | TOK (0x5DB9.) | 100000.000000000 | Local Network |
+        +--------------------+----------------+--------------------------------------------+---------------+------------------+---------------+
|        | blockchain_airdrop | testblockchain | 0x5a4601D594Aa3848cA5EE0770b7883d3DBC666f6 | TOK (0x7DD1.) | 0                | Local Network |
+        +                    +----------------+--------------------------------------------+---------------+------------------+---------------+
|        |                    | C-Chain        | 0x5a4601D594Aa3848cA5EE0770b7883d3DBC666f6 | TOK (0x5DB9.) | 0                | Local Network |
+--------+--------------------+----------------+--------------------------------------------+---------------+------------------+---------------+
```

## Transfer the Token from C-Chain to Our Avalanche L1

Now we will transfer 100 TOK tokens from our `ewoq` address on C-Chain to the `blockchain_airdrop` address on our Avalanche L1 `testblockchain`. Note that we will be using the Home contract address `0x4Ac1d98D9cEF99EC6546dEd4Bd550b0b287aaD6D` and the Remote contract address `0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0`.

```bash
avalanche key transfer
✔ Local Network
✔ C-Chain
✔ Subnet testblockchain
Enter the address of the Bridge on c-chain: 0x4Ac1d98D9cEF99EC6546dEd4Bd550b0b287aaD6D
Enter the address of the Bridge on testblockchain: 0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0
✔ ewoq
✔ Key
✔ blockchain_airdrop
Amount to send (TOKEN units): 100
```

## Verify That the Transfer Is Successful

We will call the `avalanche key list` command again to verify that the transfer was successful. `blockchain_airdrop` should now have 100 TOK tokens on our Avalanche L1 `testblockchain`, and our EWOQ account should now have 99900 TOK tokens on C-Chain.
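The bridge's balance bookkeeping can be sketched as a toy model: the Home contract holds tokens locked on C-Chain while the Remote contract mints the same amount on the L1. This is illustrative only — the real contracts live in the teleporter-token-bridge repository:

```python
# Toy model of the transfer above: 100000 TOK minted on C-Chain,
# then 100 TOK moved through the bridge to the L1.
balances = {
    ("C-Chain", "ewoq"): 100_000,
    ("testblockchain", "blockchain_airdrop"): 0,
}

def bridge_transfer(src, src_key, dst, dst_key, amount):
    """Lock on the source chain, mint on the destination chain."""
    assert balances[(src, src_key)] >= amount, "insufficient balance"
    balances[(src, src_key)] -= amount   # locked by the Home contract
    balances[(dst, dst_key)] += amount   # minted by the Remote contract

bridge_transfer("C-Chain", "ewoq", "testblockchain", "blockchain_airdrop", 100)
print(balances[("C-Chain", "ewoq")])                       # 99900
print(balances[("testblockchain", "blockchain_airdrop")])  # 100
```

The resulting balances match what `avalanche key list` reports below.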
```bash
avalanche key list --local --keys ewoq,blockchain_airdrop --subnets c,testblockchain --tokens 0x5DB9A7629912EBF95876228C24A848de0bfB43A9,0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0
+--------+--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
| KIND   | NAME               | SUBNET         | ADDRESS                                    | TOKEN         | BALANCE         | NETWORK       |
+--------+--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
| stored | ewoq               | testblockchain | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | TOK (0x7DD1.) | 0               | Local Network |
+        +                    +----------------+--------------------------------------------+---------------+-----------------+---------------+
|        |                    | C-Chain        | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | TOK (0x5DB9.) | 99900.000000000 | Local Network |
+        +--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
|        | blockchain_airdrop | testblockchain | 0x5a4601D594Aa3848cA5EE0770b7883d3DBC666f6 | TOK (0x7DD1.) | 100.000000000   | Local Network |
+        +                    +----------------+--------------------------------------------+---------------+-----------------+---------------+
|        |                    | C-Chain        | 0x5a4601D594Aa3848cA5EE0770b7883d3DBC666f6 | TOK (0x5DB9.) | 0               | Local Network |
+--------+--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
```

And that's it! You have now successfully completed your first transfer from C-Chain to an Avalanche L1 using Teleporter Token Bridge!

# Import an Avalanche L1 (/docs/tooling/avalanche-cli/guides/import-avalanche-l1)

---
title: Import an Avalanche L1
description: Learn how to import an Avalanche L1 into Avalanche-CLI.
---

## Context

Previously, Avalanche L1s might have been created manually, by issuing transactions to node APIs from either a local node or public API nodes. The current focus, however, is on integration with Avalanche-CLI. To achieve this, this guide demonstrates how to import an Avalanche L1 into Avalanche-CLI to enable better management of the Avalanche L1's configuration.

This how-to uses the BEAM Avalanche L1, deployed on the Fuji Testnet, as the example Avalanche L1.

## Requirements

For the import to work properly, you need:

- The Avalanche L1's genesis file, stored on disk
- The Avalanche L1's SubnetID

## Import the Avalanche L1

For these use cases, Avalanche-CLI now supports the `import public` command. Start the import by issuing:

```
avalanche blockchain import public
```

The tool prompts for the network from which to import. The assumption here is that the network is a public one, either the Fuji Testnet or Mainnet. In other words, importing from a local network isn't supported.

```
Use the arrow keys to navigate: ↓ ↑ → ←
? Choose a network to import from:
  ▸ Fuji
    Mainnet
```

As stated earlier, this example imports from Fuji, so select it. As a next step, Avalanche-CLI asks for the path of the genesis file on disk:

```
✗ Provide the path to the genesis file: /tmp/subnet_evm.genesis.json
```

The wizard checks if the file at the provided path exists; note the checkmark at the beginning of the line:

```
✔ Provide the path to the genesis file: /tmp/subnetevm_genesis.json
```

Subsequently, the wizard asks if nodes have already been deployed for this Avalanche L1.

```
Use the arrow keys to navigate: ↓ ↑ → ←
?
Have nodes already been deployed to this subnet?:
    Yes
  ▸ No
```

### Nodes Are Already Validating This Avalanche L1

If nodes have already been deployed, the wizard attempts to query such a node for detailed data like the VM version. This allows the tool to skip querying GitHub (or wherever the VM's repository is hosted) for the VM's version; instead, it gets the exact version that is actually running on the node.

For this to work, the user is asked for a node API URL, which is used for the query. The node's API IP and port must be accessible from the machine running Avalanche-CLI; otherwise, the node isn't reachable, the query times out and fails, and the tool exits. The node should also be validating the given Avalanche L1 for the import to be meaningful; otherwise, the import fails with missing information.

If the query succeeds, the wizard jumps to prompting for the Avalanche L1 ID (SubnetID).

```
Please provide an API URL of such a node so we can query its VM version (e.g. http://111.22.33.44:5555): http://154.42.240.119:9650
What is the ID of the subnet?: ie1wUBR2bQDPkGCRf2CBVzmP55eSiyJsFYqeGXnTYt2r33aKW
```

The rest of the wizard is identical to the next section, except that there is no prompt for the VM version anymore.

### Nodes Aren't Yet Validating This Avalanche L1, the Nodes' API URLs Are Unknown, or Inaccessible (Firewalls)

If you don't have a node's API URL at hand, or it's not reachable from the machine running Avalanche-CLI, or no nodes have been deployed yet (because, for example, only the `CreateSubnet` transaction has been issued), you can query the public APIs. You can't know for sure what VM versions the Avalanche L1's validators are running, though, so the tool has to prompt for the version later.
So, select `No` when the tool asks whether nodes have already been deployed. At this point, the wizard requests the Avalanche L1's ID, without which it can't know what to import. Remember that the ID is different on different networks. From the [Testnet Avalanche L1 Explorer](https://subnets-test.avax.network/beam) you can see that BEAM's Avalanche L1 ID (SubnetID) is `ie1wUBR2bQDPkGCRf2CBVzmP55eSiyJsFYqeGXnTYt2r33aKW`:

```
✔ What is the ID of the subnet?: ie1wUBR2bQDPkGCRf2CBVzmP55eSiyJsFYqeGXnTYt2r33aKW
```

Notice the checkmark at the start of the line; it signals that the ID's format was validated. If you hit `enter` now, the tool queries the public APIs for the given network and, if successful, prints some information about the Avalanche L1, then proceeds to ask about the Avalanche L1's type:

```
Getting information from the Fuji network...
Retrieved information. BlockchainID: y97omoP2cSyEVfdSztQHXD9EnfnVP9YKjZwAxhUfGbLAPYT9t, Name: BEAM, VMID: kLPs8zGsTVZ28DhP1VefPCFbCgS7o5bDNez8JUxPVw9E6Ubbz
Use the arrow keys to navigate: ↓ ↑ → ←
? What's this VM's type?:
  ▸ Subnet-EVM
    Custom
```

Avalanche-CLI needs to know the VM type so it can query the VM's repository for the available versions. This works automatically for Ava Labs VMs (like Subnet-EVM). Custom VMs aren't supported at this point, but are next on the agenda.

As the import is for BEAM, and you know that it's a Subnet-EVM type, select that. The tool then queries the (GitHub) repository for available releases and prompts the user to pick the version to use:

```
✔ Subnet-EVM
Use the arrow keys to navigate: ↓ ↑ → ←
? Pick the version for this VM:
  ▸ v0.4.5
    v0.4.5-rc.1
    v0.4.4
    v0.4.4-rc.0
↓   v0.4.3
```

There is only so much the tool can help with here: the Avalanche L1 manager/administrator should know what they want to use Avalanche-CLI for, how, and why they're importing the Avalanche L1. It's crucial to understand that the correct version is only known to the user.
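The version list above follows Subnet-EVM's `vX.Y.Z` / `-rc.N` tag scheme. If it helps to reason about which tag is newest, the ordering can be sketched programmatically (the helper is illustrative, not part of Avalanche-CLI):

```python
def version_key(tag: str):
    """Sort key for vX.Y.Z tags; final releases outrank their own pre-releases."""
    core, _, pre = tag.lstrip("v").partition("-")
    major, minor, patch = (int(part) for part in core.split("."))
    # (pre == "") is True for final releases, ranking them above -rc tags
    return (major, minor, patch, pre == "", pre)

tags = ["v0.4.5", "v0.4.5-rc.1", "v0.4.4", "v0.4.4-rc.0", "v0.4.3"]
latest = max(tags, key=version_key)
print(latest)  # v0.4.5
```

With this ordering, `v0.4.5` ranks above `v0.4.5-rc.1`, which matches the order the wizard displays.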
The latest version is usually fine, but the tool can't safely assume that. This is why the wizard prompts the user and requires them to choose. If, in the preceding step, you chose to query an actual Avalanche L1 validator rather than the public APIs, the tool skips this version selection.

```
✔ v0.4.5
Subnet BEAM imported successfully
```

The choice finalizes the wizard, which hopefully signals that the import succeeded. If something went wrong, the error messages provide information about the cause. You can now use Avalanche-CLI to handle the imported Avalanche L1 in the accustomed way. For example, you could deploy the BEAM Avalanche L1 locally.

For a complete description of options, flags, and the command, visit the [command reference](/docs/tooling/cli-commands#avalanche-l1-import).

# Run with Docker (/docs/tooling/avalanche-cli/guides/run-with-docker)

---
title: Run with Docker
description: Instructions for running Avalanche-CLI in a Docker container.
---

To run Avalanche-CLI in a Docker container, you need to enable IPv6. Edit `/etc/docker/daemon.json`, add this snippet, then restart the Docker service.

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80"
}
```

# Add Validator (/docs/tooling/avalanche-cli/maintain/add-validator-l1)

---
title: Add Validator
description: Learn how to add a validator to an Avalanche L1.
---

### Add a Validator to an Avalanche L1

```bash
avalanche blockchain addValidator
```

#### Choose Network

Choose the network where the operation will be performed.

```bash
? Choose a network for the operation:
  ▸ Local Network
    Devnet
    Fuji Testnet
    Mainnet
```

#### Choose P-Chain Fee Payer

Choose the key that will be used to pay for the transaction fees on the P-Chain.

```bash
? Which key should be used to pay for transaction fees on P-Chain?:
  ▸ Use stored key
    Use ledger
```

#### Enter Node ID

Enter the NodeID of the node you want to add as a blockchain validator.
```bash
✗ What is the NodeID of the node you want to add as a blockchain validator?:
```

You can find the NodeID in the node's configuration file or in the console when it is first started with avalanchego. An example of a NodeID is `NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg`.

#### Enter the BLS Public Key

Enter the node's BLS public key.

```bash
Next, we need the public key and proof of possession of the node's BLS key
Check https://build.avax.network/docs/api-reference/info-api#infogetnodeid for instructions on calling info.getNodeID API
✗ What is the node's BLS public key?:
```

You can find the BLS public key in the node's configuration file or in the console when it is first started with avalanchego.

#### Enter BLS Proof of Possession

Enter the node's BLS proof of possession.

```bash
✗ What is the node's BLS proof of possession?:
```

You can find the BLS proof of possession in the node's configuration file or in the console when it is first started with avalanchego.

#### Enter AVAX Balance

This balance will be used to pay the P-Chain continuous staking fee.

```bash
✗ What balance would you like to assign to the validator (in AVAX)?:
```

#### Enter Leftover AVAX Address

This address will receive any leftover AVAX when the node is removed from the validator set.

```bash
? Which key would you like to set as change owner for leftover AVAX if the node is removed from validator set?:
  ▸ Get address from an existing stored key (created from avalanche key create or avalanche key import)
    Custom
```

#### Enter Disable Validator Address

This address will be able to disable the validator using P-Chain transactions.

```bash
Which address do you want to be able to disable the validator using P-Chain transactions?:
  ▸ Get address from an existing stored key (created from avalanche key create or avalanche key import)
    Custom
```

### Proof of Stake Specific Parameters

If your network was created with a Proof of Stake Validator Manager, you will be asked for the following additional parameters.
You can also pass these parameters as flags to the command:

```bash
--delegation-fee uint16    delegation fee (in bips)
--stake-amount uint        amount of native tokens to stake
--staking-period duration  how long this validator will be staking
```

# Delete an Avalanche L1 (/docs/tooling/avalanche-cli/maintain/delete-avalanche-l1)

---
title: Delete an Avalanche L1
description: Learn how to delete an Avalanche L1.
---

## Deleting an Avalanche L1 Configuration

To delete a created Avalanche L1 configuration, run:

```bash
avalanche blockchain delete
```

## Deleting a Deployed Avalanche L1

You can't delete Avalanche L1s deployed to Mainnet or the Fuji Testnet. However, you may delete Avalanche L1s deployed to a local network by cleaning the network state with the command below:

```bash
avalanche network clean
```

# Remove Validator (/docs/tooling/avalanche-cli/maintain/remove-validator-l1)

---
title: Remove Validator
description: Learn how to remove a validator from an Avalanche L1.
---

### Remove a Validator from an Avalanche L1

```bash
avalanche blockchain removeValidator
```

#### Choose the Network

Choose the network where the validator is registered.

```bash
? Choose a network for the operation:
  ▸ Local Network
    Devnet
    Fuji Testnet
    Mainnet
```

#### Choose P-Chain Fee Payer

Choose the key to pay for the transaction fees on the P-Chain.

```bash
? Which key should be used to pay for transaction fees on P-Chain?:
  ▸ Use stored key
    Use ledger
```

#### Enter Node-ID of the Validator to Remove

Enter the Node-ID of the validator you want to remove.
```bash ✗ What is the NodeID of the node you want to remove as a blockchain validator?: ``` You can find the NodeID in the node's configuration file or the console when first started with avalanchego. An example of a NodeID is `NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg` #### Confirm the removal ```bash Validator manager owner 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC pays for the initialization of the validator's removal (Blockchain gas token) RPC Endpoint: http://127.0.0.1:9652/ext/bc/2qmU6w47Mp7D7fGhbRuZm6Z1Nn6FZXZAxKpaeTMFiRQW9CBErh/rpc Forcing removal of NodeID-7cQrriPWGXa5yuJGZUgsxxwH9j4T8pPkY as it is a PoS bootstrap validator Using validationID: 228zzCgDmAmuaJDGnFkFgnVqbbJPRF7qF1Xpd3dhtqGDhMjJK2 for nodeID: NodeID-7cQrriPWGXa5yuJGZUgsxxwH9j4T8pPkY ValidationID: 228zzCgDmAmuaJDGnFkFgnVqbbJPRF7qF1Xpd3dhtqGDhMjJK2 SetSubnetValidatorWeightTX fee: 0.000078836 AVAX SetSubnetValidatorWeightTx ID: 2FUimPZ37DscPJiQDLrtKtumER3LNr48MJi6VR2jGXkYEKpCaq ✓ Validator successfully removed from the Subnet ``` ### Proof of Authority Networks If the network is a Proof of Authority network, the validator must be removed from the network by the Validator Manager owner. ### Proof of Stake Networks If the network is a Proof of Stake network, the validator must be removed by the initial staker. Rewards will be distributed to the validator after the removal. Note that validators in the initial PoS validator set are treated as bootstrap validators and are not eligible for rewards. # Troubleshooting (/docs/tooling/avalanche-cli/maintain/troubleshooting) --- title: Troubleshooting description: If you run into trouble deploying your Avalanche L1, use this document for tips to resolve common issues. --- Deployment Times Out[​](#deployment-times-out "Direct link to heading") ----------------------------------------------------------------------- During a local deployment, your network may fail to start.
Your error may look something like this: ```bash [~]$ avalanche blockchain deploy myblockchain ✔ Local Network Deploying [myblockchain] to Local Network Backend controller started, pid: 26388, output at: /Users/user/.avalanche-cli/runs/server_20221231_111605/avalanche-cli-backend VMs ready. Starting network... .................................................................................. .................................................................................. ......Error: failed to query network health: rpc error: code = DeadlineExceeded desc = context deadline exceeded ``` Avalanche-CLI only supports running one local Avalanche network at a time. If other instances of AvalancheGo are running concurrently, your Avalanche-CLI network fails to start. To test for this error, start by shutting down any Avalanche nodes started by Avalanche-CLI. ```bash avalanche network clean --hard ``` Next, look for any lingering AvalancheGo processes with: ```bash ps aux | grep avalanchego ``` If any processes are running, you need to stop them before you can launch your VM with Avalanche-CLI. If you're running a validator node on the same box where you're using Avalanche-CLI, **don't** end any of these lingering AvalancheGo processes. This may shut down your validator and could affect your validation uptime. Incompatible RPC Version for Custom VM[​](#incompatible-rpc-version-for-custom-vm "Direct link to heading") ----------------------------------------------------------------------------------------------------------- If you're locally deploying a custom VM, you may run into this error message. ```bash [~]$ avalanche blockchain deploy myblockchain ✔ Local Network Deploying [myblockchain] to Local Network Backend controller started, pid: 26388, output at: /Users/user/.avalanche-cli/runs/server_20221231_111605/avalanche-cli-backend VMs ready. Starting network... ......... Blockchain has been deployed. Wait until network acknowledges...
.................................................................................. .................................................................................. ......Error: failed to query network health: rpc error: code = DeadlineExceeded desc = context deadline exceeded ``` This error has many possible causes, but the most common cause is **an RPC protocol version mismatch.** AvalancheGo communicates with custom VMs over RPC using [gRPC](https://grpc.io/). gRPC defines a protocol specification shared by both AvalancheGo and the VM. **Both components must be running the same RPC version for VM deployment to work.** Your custom VM's RPC version is set by the version of AvalancheGo that you import. By default, Avalanche-CLI creates local Avalanche networks that run the latest AvalancheGo release. ### Example[​](#example "Direct link to heading") Here's an example with real numbers from the AvalancheGo compatibility page: - If the latest AvalancheGo release is version v1.10.11, then Avalanche-CLI deploys a network with RPC version 28. - For your deployment to succeed, your VM must also have RPC version 28. Because only AvalancheGo versions v1.10.9, v1.10.10, and v1.10.11 support RPC version 28, your VM **must** import one of those versions. ### Solution[​](#solution "Direct link to heading") Error: `RPCChainVM protocol version mismatch between AvalancheGo and Virtual Machine plugin` This error occurs when the RPCChainVM protocol version used by VMs like Subnet-EVM is incompatible with the protocol version of AvalancheGo. If your VM has an RPC version mismatch, you have two options: 1. Update the version of AvalancheGo you use in your VM. This is the correct long-term approach. 2. Use Avalanche-CLI to deploy an older version of AvalancheGo by using the `--avalanchego-version` flag.
Both the [`blockchain deploy`](/docs/tooling/cli-commands#deploy) and [`network start`](/docs/tooling/cli-commands#start) commands support setting the AvalancheGo version explicitly. Although it's very important to keep your version of AvalancheGo up-to-date, this workaround helps you avoid broken builds in the short term. You must upgrade to the latest AvalancheGo version when deploying publicly to Fuji Testnet or Avalanche Mainnet. ### More Information[​](#more-information "Direct link to heading") Similar version matching is required across different tools in the ecosystem. Here is a compatibility table showing which RPCChainVM version is implemented by recent releases of AvalancheGo, Subnet-EVM, Precompile-EVM, and HyperSDK. |RPCChainVM|AvalancheGo |Subnet-EVM |Precompile-EVM |HyperSDK | |----------|------------------|---------------|---------------|----------------| |26 |v1.10.1-v1.10.4 |v0.5.1-v0.5.2 |v0.1.0-v0.1.1 |v0.0.6-v0.0.9 | |27 |v1.10.5-v1.10.8 |v0.5.3 |v0.1.2 |v0.0.10-v0.0.12 | |28 |v1.10.9-v1.10.12 |v0.5.4-v0.5.6 |v0.1.3-v0.1.4 |v0.0.13-v0.0.15 | |29 |v1.10.13-v1.10.14 |v0.5.7-v0.5.8 |v0.1.5 |- | |30 |v1.10.15-v1.10.17 |v0.5.9-v0.5.10 |v0.1.6-v0.1.7 |- | |31 |v1.10.18-v1.10.19 |v0.5.11 |v0.1.8 |v0.0.16 (latest)| |33 |v1.11.0 |v0.6.0-v0.6.1 |v0.2.0 |- | |34 |v1.11.1-v1.11.2 |v0.6.2 |- |- | |35 |v1.11.3 (latest) |v0.6.3 (latest)|v0.2.1 (latest)|- | You can view the full RPC compatibility broken down by release version for each tool here: [AvalancheGo](https://github.com/ava-labs/avalanchego/blob/master/version/compatibility.json), [Subnet-EVM](https://github.com/ava-labs/subnet-evm/blob/master/compatibility.json), [Precompile-EVM](https://github.com/ava-labs/precompile-evm/blob/main/compatibility.json). Updates to AvalancheGo's RPC version are **not** tied to its semantic version scheme. Minor AvalancheGo version bumps may include a breaking RPC version bump.
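The version lookup implied by the table above can be sketched in a few lines of code. This is an illustrative sketch only, not part of Avalanche-CLI: the mapping is hand-transcribed from a subset of the table, and a real tool should read AvalancheGo's `compatibility.json` instead of hardcoding it.

```python
# Illustrative sketch: which AvalancheGo releases speak a given RPCChainVM
# protocol version? The mapping below is transcribed from part of the
# compatibility table above; consult avalanchego's compatibility.json for
# the authoritative, up-to-date data.
RPC_TO_AVALANCHEGO = {
    26: ["v1.10.1", "v1.10.2", "v1.10.3", "v1.10.4"],
    27: ["v1.10.5", "v1.10.6", "v1.10.7", "v1.10.8"],
    28: ["v1.10.9", "v1.10.10", "v1.10.11", "v1.10.12"],
    29: ["v1.10.13", "v1.10.14"],
    30: ["v1.10.15", "v1.10.16", "v1.10.17"],
}

def compatible_avalanchego(rpc_version: int) -> list:
    """Return the AvalancheGo releases implementing this RPC version."""
    return RPC_TO_AVALANCHEGO.get(rpc_version, [])

print(compatible_avalanchego(28))
# ['v1.10.9', 'v1.10.10', 'v1.10.11', 'v1.10.12']
```

If your VM pins AvalancheGo v1.10.11, for example, this lookup confirms it can only run against a network whose nodes also implement RPC version 28.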
Fix for MacBook Air M1/M2: ‘Bad CPU type in executable' Error[​](#fix-for-macbook-air-m1m2-bad-cpu-type-in-executable-error "Direct link to heading") ----------------------------------------------------------------------------------------------------------------------------------------------------- When running `avalanche blockchain deploy` via the Avalanche-CLI, the terminal may throw an error that contains the following: ```bash zsh: bad CPU type in executable: /Users/user.name/Downloads/build/avalanchego ``` This happens because Apple silicon Macs can't run x86 binaries without Rosetta. Running the following command installs Rosetta and should fix the issue: ```bash /usr/sbin/softwareupdate --install-rosetta ``` # View Avalanche L1s (/docs/tooling/avalanche-cli/maintain/view-avalanche-l1s) --- title: View Avalanche L1s description: CLI commands for viewing Avalanche L1s. --- ## List Avalanche L1 Configurations You can list the Avalanche L1s you've created with: `avalanche blockchain list` Example: ```bash > avalanche blockchain list +--------------+--------------+----------+---------------------------------------------------+------------+------------+-----------+ | SUBNET | CHAIN | CHAINID | VMID | TYPE | VM VERSION | FROM REPO | +--------------+--------------+----------+---------------------------------------------------+------------+------------+-----------+ | myblockchain | myblockchain | 111 | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV | Subnet-EVM | v0.7.0 | false | +--------------+--------------+----------+---------------------------------------------------+------------+------------+-----------+ ``` To see detailed information about your deployed Avalanche L1s, add the `--deployed` flag: ```bash > avalanche blockchain list --deployed +--------------+--------------+---------------------------------------------------+---------------+----------------+---------+ | SUBNET | CHAIN | VM ID | LOCAL NETWORK | FUJI (TESTNET) | MAINNET |
+--------------+--------------+---------------------------------------------------+---------------+----------------+---------+ | myblockchain | myblockchain | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV | Yes | No | No | +--------------+--------------+---------------------------------------------------+---------------+----------------+---------+ ``` ## Describe Avalanche L1 Configurations To see the details of a specific configuration, run: `avalanche blockchain describe ` Example: ```bash > avalanche blockchain describe myblockchain +---------------------------------------------------------------------------------------------------------------------------------+ | MYBLOCKCHAIN | +---------------+-----------------------------------------------------------------------------------------------------------------+ | Name | myblockchain | +---------------+-----------------------------------------------------------------------------------------------------------------+ | VM ID | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV | +---------------+-----------------------------------------------------------------------------------------------------------------+ | VM Version | v0.7.0 | +---------------+-----------------------------------------------------------------------------------------------------------------+ | Validation | Proof Of Authority | +---------------+--------------------------+--------------------------------------------------------------------------------------+ | Local Network | ChainID | 12345 | | +--------------------------+--------------------------------------------------------------------------------------+ | | SubnetID | fvx83jt2BWyibBRL4SRMa6WzjWp7GSFUeUUeoeBe1AqJ5Ey5w | | +--------------------------+--------------------------------------------------------------------------------------+ | | BlockchainID (CB58) | 2QGB9GbEhsFJLSRVii2mKs8dxugHzmK98G5391P2bvXSCb4sED | |
+--------------------------+--------------------------------------------------------------------------------------+ | | BlockchainID (HEX) | 0xb883b54815c84a3f0903dbccd289ed5563395dd61c189db626e2d2680546b990 | | +--------------------------+--------------------------------------------------------------------------------------+ | | RPC Endpoint | http://127.0.0.1:60538/ext/bc/2QGB9GbEhsFJLSRVii2mKs8dxugHzmK98G5391P2bvXSCb4sED/rpc | +---------------+--------------------------+--------------------------------------------------------------------------------------+ +------------------------------------------------------------------------------------+ | ICM | +---------------+-----------------------+--------------------------------------------+ | Local Network | ICM Messenger Address | 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf | | +-----------------------+--------------------------------------------+ | | ICM Registry Address | 0x695Ea5FbeBBdc99cA679F5fD7768f179d2281d74 | +---------------+-----------------------+--------------------------------------------+ +-------------------------------+ | TOKEN | +--------------+----------------+ | Token Name | TUTORIAL Token | +--------------+----------------+ | Token Symbol | TUTORIAL | +--------------+----------------+ +----------------------------------------------------------------------------------------------------------------------------------------+ | INITIAL TOKEN ALLOCATION | +-------------------------+------------------------------------------------------------------+---------------+---------------------------+ | DESCRIPTION | ADDRESS AND PRIVATE KEY | AMOUNT (OWEN) | AMOUNT (WEI) | +-------------------------+------------------------------------------------------------------+---------------+---------------------------+ | Main funded account | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | 1000000 | 1000000000000000000000000 | | ewoq | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 | | | 
+-------------------------+------------------------------------------------------------------+---------------+---------------------------+ | Used by ICM | 0x001CBe3650FAD190d9ccBd57b289124F5131AA57 | 600 | 600000000000000000000 | | cli-teleporter-deployer | d00b93e1526d05a30b681911a3e0f5e5528add205880c1cafa4f84cdb2746b00 | | | +-------------------------+------------------------------------------------------------------+---------------+---------------------------+ +-----------------------------------------------------------------------------------------------------------------+ | SMART CONTRACTS | +-----------------------+--------------------------------------------+--------------------------------------------+ | DESCRIPTION | ADDRESS | DEPLOYER | +-----------------------+--------------------------------------------+--------------------------------------------+ | Proxy Admin | 0xC0fFEE1234567890aBCdeF1234567890abcDef34 | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | +-----------------------+--------------------------------------------+--------------------------------------------+ | PoA Validator Manager | 0x0C0DEbA5E0000000000000000000000000000000 | | +-----------------------+--------------------------------------------+--------------------------------------------+ | Transparent Proxy | 0x0Feedc0de0000000000000000000000000000000 | | +-----------------------+--------------------------------------------+--------------------------------------------+ +----------------------------------------------------------------------+ | INITIAL PRECOMPILE CONFIGS | +------------+-----------------+-------------------+-------------------+ | PRECOMPILE | ADMIN ADDRESSES | MANAGER ADDRESSES | ENABLED ADDRESSES | +------------+-----------------+-------------------+-------------------+ | Warp | n/a | n/a | n/a | +------------+-----------------+-------------------+-------------------+ +--------------------------------------------------------------------------+ | NODES | 
+-------+------------------------------------------+-----------------------+ | NAME | NODE ID | LOCALHOST ENDPOINT | +-------+------------------------------------------+-----------------------+ | node1 | NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg | http://127.0.0.1:9650 | +-------+------------------------------------------+-----------------------+ | node2 | NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ | http://127.0.0.1:9652 | +-------+------------------------------------------+-----------------------+ +--------------------------------------------------------------------------------------------------------+ | WALLET CONNECTION | +-----------------+--------------------------------------------------------------------------------------+ | Network RPC URL | http://127.0.0.1:60538/ext/bc/2QGB9GbEhsFJLSRVii2mKs8dxugHzmK98G5391P2bvXSCb4sED/rpc | +-----------------+--------------------------------------------------------------------------------------+ | Network Name | myblockchain | +-----------------+--------------------------------------------------------------------------------------+ | Chain ID | 12345 | +-----------------+--------------------------------------------------------------------------------------+ | Token Symbol | TUTORIAL | +-----------------+--------------------------------------------------------------------------------------+ | Token Name | TUTORIAL Token | +-----------------+--------------------------------------------------------------------------------------+ ``` ## Viewing a Genesis File If you'd like to see the raw genesis file, supply the `--genesis` flag to the describe command: `avalanche blockchain describe --genesis` Example: ```bash > avalanche blockchain describe myblockchain --genesis { "config": { "berlinBlock": 0, "byzantiumBlock": 0, "chainId": 111, "constantinopleBlock": 0, "eip150Block": 0, "eip155Block": 0, "eip158Block": 0, "feeConfig": { "gasLimit": 12000000, "targetBlockRate": 2, "minBaseFee": 25000000000, "targetGas": 60000000, 
"baseFeeChangeDenominator": 36, "minBlockGasCost": 0, "maxBlockGasCost": 1000000, "blockGasCostStep": 200000 }, "homesteadBlock": 0, "istanbulBlock": 0, "londonBlock": 0, "muirGlacierBlock": 0, "petersburgBlock": 0, "warpConfig": { "blockTimestamp": 1734549536, "quorumNumerator": 67, "requirePrimaryNetworkSigners": true } }, "nonce": "0x0", "timestamp": "0x67632020", "extraData": "0x", "gasLimit": "0xb71b00", "difficulty": "0x0", "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "coinbase": "0x0000000000000000000000000000000000000000", "alloc": { "001cbe3650fad190d9ccbd57b289124f5131aa57": { "balance": "0x2086ac351052600000" }, "0c0deba5e0000000000000000000000000000000": { "code": "", "balance": "0x0", "nonce": "0x1" }, "0feedc0de0000000000000000000000000000000": { "code": "", "storage": { "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc": "0x0000000000000000000000000c0deba5e0000000000000000000000000000000", //sslot for proxy implementation "0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103": "0x000000000000000000000000c0ffee1234567890abcdef1234567890abcdef34" //sslot for proxy admin }, "balance": "0x0", "nonce": "0x1" }, "8db97c7cece249c2b98bdc0226cc4c2a57bf52fc": { "balance": "0xd3c21bcecceda1000000" }, "c0ffee1234567890abcdef1234567890abcdef34": { "code": "", "storage": { "0x0000000000000000000000000000000000000000000000000000000000000000": "0x0000000000000000000000008db97c7cece249c2b98bdc0226cc4c2a57bf52fc" //sslot for owner }, "balance": "0x0", "nonce": "0x1" } }, "airdropHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "airdropAmount": null, "number": "0x0", "gasUsed": "0x0", "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "baseFeePerGas": null, "excessBlobGas": null, "blobGasUsed": null } ``` # Ledger P-Chain Transfer (/docs/tooling/avalanche-cli/transactions/ledger-p-chain-transfer) --- title: Ledger P-Chain Transfer 
description: Transferring funds between P-Chain addresses using Avalanche CLI. --- Transferring funds between P-Chain wallets becomes necessary in certain situations: 1. Funds need to be sent to the Avalanche L1 control key, which might have a zero balance due to fee payments. The Avalanche L1 control key requires funding to ensure proper support for Avalanche L1 operations. 2. Funds need to be moved from one Ledger address index to another. A Ledger manages an infinite sequence of addresses all derived from a master private key and can sign for any of those addresses. Each address is referred to by its index, or by the address itself. Avalanche-CLI usually expects to use index 0, but sometimes the funds are at a different index. Occasionally, a transfer to a Ledger arrives at an address other than the default one used by the CLI. To enable direct transfers between P-Chain addresses, use the command `avalanche key transfer`. This operation involves a series of import/export actions with the P-Chain and X-Chain. The fee for this operation is four times the typical import operation fee, which comes out to 0.004 AVAX. You can find more information about fees [here](/docs/api-reference/standards/guides/txn-fees). The `key transfer` command can also be applied to the stored keys managed by the CLI. It enables moving funds from one stored key to another, and from a ledger to a stored key or vice versa. This how-to guide focuses on transferring funds between ledger accounts.
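The fee arithmetic above can be spelled out with a short sketch. This is illustrative only: the 0.001 AVAX per-operation import fee is implied by the document's statement that the total is four times the typical import fee; working in nAVAX (1 AVAX = 10^9 nAVAX) avoids floating-point rounding.

```python
# Sketch of the transfer-fee arithmetic described above.
# Assumption: a single import operation costs 0.001 AVAX, implied by the
# stated total of 0.004 AVAX for the four chained import/export operations.
NAVAX_PER_AVAX = 10**9          # 1 AVAX = 10^9 nAVAX
IMPORT_FEE_NAVAX = 1_000_000    # 0.001 AVAX, the assumed per-operation fee

# A P-Chain to P-Chain transfer chains four import/export operations.
total_fee_navax = 4 * IMPORT_FEE_NAVAX
print(f"{total_fee_navax / NAVAX_PER_AVAX:.3f} AVAX")
# 0.004 AVAX
```

This also explains the confirmation prompts later in this guide, which show a fee of exactly 0.004000000 AVAX deducted from the source address.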
## Prerequisites - [`Avalanche-CLI`](/docs/tooling/get-avalanche-cli) installed - Multiple Ledger devices [configured for Avalanche](/docs/tooling/create-deploy-avalanche-l1s/deploy-on-mainnet#setting-up-your-ledger) Example: Sending All Funds From One Ledger to Another[​](#example-sending-all-funds-from-one-ledger-to-another "Direct link to heading") ---------------------------------------------------------------------------------------------------------------------------------------- - Source address: ledger A, index 2 (the web wallet shows 4.5 AVAX for this ledger) - Target address: ledger B, index 0 (the web wallet shows 0 AVAX for this ledger) ### Determine Sender Address Index[​](#determine-sender-address-index "Direct link to heading") A Ledger can manage an infinite number of addresses derived from a master private key. Because of this, many operations require the user to specify an address index. After confirming with a web wallet that 4.5 AVAX is available on P-Chain address `P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0`, connect ledger A. With the avalanche app running, execute: ```bash avalanche key list --mainnet --ledger 0,1,2,3,4,5 ``` This lists the P-Chain addresses and balances for the first six indices derived by the Ledger.
```bash +--------+---------+-------------------------+-----------------------------------------------+---------+---------+ | KIND | NAME | CHAIN | ADDRESS | BALANCE | NETWORK | +--------+---------+-------------------------+-----------------------------------------------+---------+---------+ | ledger | index 0 | P-Chain (Bech32 format) | P-avax1g8yucm7j0cnwwru4rp5lkzw6dpdxjmc2rfkqs9 | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 1 | | P-avax1drppshkst2ccygyq37m2z9e3ex2jhkd2txcm5r | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 2 | | P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0 | 4.5 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 3 | | P-avax1yfpm7v5y5rej2nu7t2r0ffgrlpfq36je0rc5k6 | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 4 | | P-avax17nqvwcqsa8ddgeww8gzmfe932pz2syaj2vyd89 | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 5 | | P-avax1jzvnd05vsfksrtatm2e3rzu6eux9a287493yf8 | 0 | Mainnet | +--------+---------+-------------------------+-----------------------------------------------+---------+---------+ ``` The address `P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0` has 4.5 AVAX and is associated with index 2 of ledger A. ### Determine Receiver Address Index[​](#determine-receiver-address-index "Direct link to heading") In this case the user wants to use index 0, the one CLI by default expects to contain funds. For the transfer command, it is also needed to know the target p-chain address. 
Do the following to obtain it: With ledger B connected and the avalanche app running, execute: ```bash avalanche key list --mainnet --ledger 0 ``` ```bash +--------+---------+-------------------------+-----------------------------------------------+---------+---------+ | KIND | NAME | CHAIN | ADDRESS | BALANCE | NETWORK | +--------+---------+-------------------------+-----------------------------------------------+---------+---------+ | ledger | index 0 | P-Chain (Bech32 format) | P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm | 0 | Mainnet | +--------+---------+-------------------------+-----------------------------------------------+---------+---------+ ``` The target address to be used is `P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm`, containing 0 funds. ### Send the Transfer[​](#send-the-transfer "Direct link to heading") A P-Chain to P-Chain transfer is a two-part operation. There is no need for the two parts to be executed on the same machine, only for them to share some common parameters. For each part, the appropriate ledger (either source or target) must be connected to the machine executing it. The first step moves the funds out of the source account into an X-Chain account owned by the receiver. It needs to be signed by the sending ledger. Enter the amount of AVAX to send to the recipient. This amount does not include fees. Note that the sending ledger pays all the fees. Then start the command: ```bash avalanche key transfer ``` The first step is to specify the network. `Mainnet` in this case: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Network to use: ▸ Mainnet Fuji Local Network ``` Next, the step of the transfer must be specified. Send in this case: ```bash ? Step of the transfer: ▸ Send Receive ``` Next, the key source for the sender address. That is, the key that is going to sign the sending transactions. Select `Use ledger`: ```bash ?
Which key source should be used to for the sender address?: Use stored key ▸ Use ledger ``` Next, the ledger index is asked for. Input `2`: ```bash ✗ Ledger index to use: 2 ``` Next, the amount to be sent is asked for: ```bash ✗ Amount to send (AVAX units): 4.496 ``` Then, the target address is required: ```bash ✗ Receiver address: P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm ``` After that, a confirmation message is printed. Read carefully and choose `Yes`: ```bash this operation is going to: - send 4.496000000 AVAX from P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0 to target address P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm - take a fee of 0.004000000 AVAX from source address P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0 Use the arrow keys to navigate: ↓ ↑ → ← ? Confirm transfer: No ▸ Yes ``` After this, the first part is completed. ### Receive the Transfer[​](#receive-the-transfer "Direct link to heading") In this step, Ledger B signs the transaction to receive the funds. It imports the funds on the X-Chain before exporting them back to the desired P-Chain address. Connect ledger B and open the avalanche app. Then start the command: ```bash avalanche key transfer ``` Specify the `Mainnet` network: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Network to use: ▸ Mainnet Fuji Local Network ``` Next, the step of the transfer must be specified. Receive in this case: ```bash ? Step of the transfer: Send ▸ Receive ``` Then, select Ledger as the key source that is going to sign the receiver operations. ```bash ? Which key source should be used to for the receiver address?: Use stored key ▸ Use ledger ``` Next, the ledger index is asked for. Input `0`: ```bash ✗ Ledger index to use: 0 ``` Next, the amount to receive is asked for: ```bash ✗ Amount to send (AVAX units): 4.496 ``` After that, a confirmation message is printed.
Select `Yes`: ```bash this operation is going to: - receive 4.496000000 AVAX at target address P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm: Use the arrow keys to navigate: ↓ ↑ → ← ? Confirm transfer: No ▸ Yes ``` Finally, the second part of the operation is executed and the transfer is completed. ```bash Issuing ImportTx P -> X Issuing ExportTx X -> P Issuing ImportTx X -> P ``` ### Verifying Results of the Transfer Operation using `key list`[​](#verifying-results-of-the-transfer-operation-using-key-list "Direct link to heading") First verify ledger A accounts. Connect ledger A and open the avalanche app: ```bash avalanche key list --mainnet --ledger 0,1,2,3,4,5 ``` With result: ```bash +--------+---------+-------------------------+-----------------------------------------------+---------+---------+ | KIND | NAME | CHAIN | ADDRESS | BALANCE | NETWORK | +--------+---------+-------------------------+-----------------------------------------------+---------+---------+ | ledger | index 0 | P-Chain (Bech32 format) | P-avax1g8yucm7j0cnwwru4rp5lkzw6dpdxjmc2rfkqs9 | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 1 | | P-avax1drppshkst2ccygyq37m2z9e3ex2jhkd2txcm5r | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 2 | | P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0 | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 3 | | P-avax1yfpm7v5y5rej2nu7t2r0ffgrlpfq36je0rc5k6 | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 4 | | P-avax17nqvwcqsa8ddgeww8gzmfe932pz2syaj2vyd89 | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 5 | | P-avax1jzvnd05vsfksrtatm2e3rzu6eux9a287493yf8 | 0 | Mainnet | 
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+ ``` Next, verify ledger B accounts. Connect ledger B and open the avalanche app: ```bash avalanche key list --mainnet --ledger 0,1,2,3,4,5 ``` With result: ```bash +--------+---------+-------------------------+-----------------------------------------------+---------+---------+ | KIND | NAME | CHAIN | ADDRESS | BALANCE | NETWORK | +--------+---------+-------------------------+-----------------------------------------------+---------+---------+ | ledger | index 0 | P-Chain (Bech32 format) | P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm | 4.496 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 1 | | P-avax18e9qsm30du590lhkwydhmkfwhcc9999gvxcaez | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 2 | | P-avax1unkkjstggvdty5gtnfhc0mgnl7qxa52z2d4c9y | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 3 | | P-avax1ek7n0zky3py7prxcrgnmh44y3wm6lc7r7x5r8e | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 4 | | P-avax1rsz6nt6qht5ep37qjk7ht0u9h30mgfhehsmqea | 0 | Mainnet | + +---------+ +-----------------------------------------------+---------+---------+ | | index 5 | | P-avax17u5wm4tfex7xr27xlwejm28pyk84tj0jzp42zz | 0 | Mainnet | +--------+---------+-------------------------+-----------------------------------------------+---------+---------+ ``` ### Recovery Steps[​](#recovery-steps "Direct link to heading") Because the receiving part of the transfer is a multi-step operation, it can fail at an intermediate step, for example due to temporary network connectivity issues on the client side.
The CLI is going to capture errors and provide the user with a recovery message of the kind: ```bash ERROR: restart from this step by using the same command with extra arguments: --receive-recovery-step 1 ``` If this happens, the receiving operation should be started the same way, choosing the same options, but adding the extra suggested parameter: ```bash avalanche key transfer --receive-recovery-step 1 ``` The CLI will then resume where it left off. # Send AVAX on C/P-Chain (/docs/tooling/avalanche-cli/transactions/native-send) --- title: Send AVAX on C/P-Chain description: Learn how to execute a native transfer on the C or P-Chain using the Avalanche CLI. --- ## Prerequisites - Install the [Avalanche CLI](/docs/tooling/get-avalanche-cli). - Use the CLI to [create a key](/docs/tooling/cli-commands#key-create). - Fund the key with AVAX. You can use the [faucet](https://test.core.app/tools/testnet-faucet/?subnet=c&token=c) with coupon code `devrel-avax-0112` to get testnet AVAX. - *Optionally*, you can [export](/docs/tooling/cli-commands#key-export) your private key for use in scripting or other tools. ## Initiate the `transfer` Command and Walk Through the Prompts In your terminal, run the following command: ```zsh avalanche key transfer ``` This command and all of its flags are documented [here](/docs/tooling/cli-commands#key-transfer). You will be prompted to answer the following questions: ```zsh ? On what Network do you want to execute the transfer?: ▸ Mainnet Fuji Testnet Devnet Local Network ``` If you select "Devnet", you must input the RPC URL. If your devnet's C-Chain RPC is `https://demo.avax-dev.network/ext/bc/C/rpc`, you should input the URL as: ```zsh ✔ Devnet Endpoint: https://demo.avax-dev.network ``` Select the chain you want to transfer funds from: ```zsh ? Where are the funds to transfer?: ▸ P-Chain C-Chain My blockchain isn't listed ``` Select the chain you want to transfer funds to: ```zsh ?
Destination Chain: ▸ P-Chain X-Chain ``` Select the step of the transfer process you want to execute: ```zsh ? Step of the transfer: ▸ Send Receive ``` If you are performing a native transfer where the sender and receiver address are on the same chain, you only need to complete a "send" transaction. If you wish to perform a cross-chain transfer (for example, from C-Chain to P-Chain), you should abort this flow and reinitiate the command as `avalanche key transfer --fund-p-chain` or `avalanche key transfer --fund-x-chain`, completing both the "send" and "receive" flows with keys stored in the CLI. You can fund your CLI-stored key with AVAX on the C-Chain using the [faucet](https://test.core.app/tools/testnet-faucet/?subnet=c&token=c) with coupon code `devrel-avax-0112`. Select the sender address: ```zsh ? Which key should be used as the sender?: ▸ Use stored key Use ledger ? Which stored key should be used as the sender address?: ▸ DemoKey MyKey ewoq ``` Specify the amount to send and input the destination address: ```zsh ✗ Amount to send (AVAX units): 100 ✗ Destination address: P-avax1zgjx8zj7z7zj7z7zj7z7zj7z7zj7zj7zj7zj7e ``` Review the transaction details and confirm/abort: ```zsh this operation is going to: - send 100.000000000 AVAX from P-avax1gmuqt8xg9j4h88kj3hyprt23nf50azlfg8txn2 to destination address P-avax1f630gvct4ht35ragcheapnn2n5cv2tkmq73ec0 - take a fee of 0.001000000 AVAX from source address P-avax1gmuqt8xg9j4h88kj3hyprt23nf50azlfg8txn2 ? Confirm transfer: No ▸ Yes ``` After a successful transfer, you can check your CLI keys' balances with the [command](/docs/tooling/cli-commands#key-list): `avalanche key list`. # Precompile Configs (/docs/tooling/avalanche-cli/upgrade/avalanche-l1-precompile-config) --- title: Precompile Configs description: Learn how to upgrade your Subnet-EVM precompile configurations. --- You can customize Subnet-EVM based Avalanche L1s after deployment by enabling and disabling precompiles.
To do this, create an `upgrade.json` file and place it in the appropriate directory. This document describes how to perform such network upgrades. It is specific to Subnet-EVM upgrades. The document [Upgrade an Avalanche L1](/docs/avalanche-l1s/upgrade/considerations) describes all the background information required regarding Avalanche L1 upgrades. It's very important that you have read and understood the previously linked document. Failing to do so can potentially grind your network to a halt. This tutorial assumes that you have already [installed](/docs/tooling/get-avalanche-cli) Avalanche-CLI, and that you have already created and deployed an Avalanche L1 called `testblockchain`. Generate the Upgrade File[​](#generate-the-upgrade-file "Direct link to heading") --------------------------------------------------------------------------------- The [Precompiles](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#network-upgrades-enabledisable-precompiles) documentation describes what files the network upgrade requires, and where to place them. To generate a valid `upgrade.json` file, run: ```bash avalanche blockchain upgrade generate testblockchain ``` If you haven't created `testblockchain` yet, you would see this result: ```bash avalanche blockchain upgrade generate testblockchain The provided Avalanche L1 name "testblockchain" does not exist ``` It makes no sense to run the upgrade command if the Avalanche L1 doesn't exist. If that's the case, please go ahead and [create](/docs/tooling/create-avalanche-l1) the Avalanche L1 first. If the Avalanche L1 definition exists, the tool launches a wizard. It may feel a bit redundant, but you first see some warnings, to draw focus to the dangers involved: ```bash avalanche blockchain upgrade generate testblockchain Performing a network upgrade requires coordinating the upgrade network-wide.
A network upgrade changes the rule set used to process and verify blocks, such that any node that upgrades incorrectly or fails to upgrade by the time that upgrade goes into effect may become out of sync with the rest of the network. Any mistakes in configuring network upgrades or coordinating them on validators may cause the network to halt and recovering may be difficult. Please consult https://build.avax.network/docs/subnets/customize-a-subnet#network-upgrades-enabledisable-precompiles for more information Use the arrow keys to navigate: ↓ ↑ → ← ? Press [Enter] to continue, or abort by choosing 'no': ▸ Yes No ``` Go ahead and select `Yes` if you understand everything and you agree. You see a last note, before the actual configuration wizard starts: ```bash Avalanchego and this tool support configuring multiple precompiles. However, we suggest to only configure one per upgrade. Use the arrow keys to navigate: ↓ ↑ → ← ? Select the precompile to configure: ▸ Contract Deployment Allow List Manage Fee Settings Native Minting Transaction Allow List ``` Refer to [Precompiles](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#precompiles) for a description of available precompiles and how to configure them. Make sure you understand precompiles thoroughly and how to configure them before attempting to continue. For every precompile in the list, the wizard guides you to provide correct information by prompting relevant questions. For the sake of this tutorial, select `Transaction Allow List`. The document [Restricting Who Can Submit Transactions](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#restricting-who-can-submit-transactions) describes what this precompile is about. ```bash ✔ Transaction Allow List Set parameters for the "Manage Fee Settings" precompile Use the arrow keys to navigate: ↓ ↑ → ← ? 
When should the precompile be activated?: ▸ In 5 minutes In 1 day In 1 week In 2 weeks Custom ``` This is common to all precompiles: they require an activation timestamp. This makes sense: you want a synchronized activation of your precompile. So take a moment to decide what activation timestamp to set. You can select one of the suggested times in the future, or you can pick a custom one. After picking `Custom`, it shows the following prompt: ```bash ✔ Custom ✗ Enter the block activation UTC datetime in 'YYYY-MM-DD HH:MM:SS' format: ``` The format is `YYYY-MM-DD HH:MM:SS`, therefore `2023-03-31 14:00:00` would be a valid timestamp. Notice that the timestamp is in UTC. Please make sure you have converted the time from your timezone to UTC. Also notice the `✗` at the beginning of the line. The CLI tool does input validation, so if you provide a valid timestamp, the `✗` disappears: ```bash ✔ Enter the block activation UTC datetime in 'YYYY-MM-DD HH:MM:SS' format: 2023-03-31 14:00:00 ``` The timestamp must be in the **future**, so pick a later timestamp if you are running this tutorial after `2023-03-31 14:00:00`. After you provide a valid timestamp, proceed with the precompile-specific configuration: ```bash The chosen block activation time is 2023-03-31 14:00:00 Use the arrow keys to navigate: ↓ ↑ → ← ? Add 'adminAddresses'?: ▸ Yes No ``` This enables the addresses added in this section to add other admins and/or add enabled addresses for transaction issuance. The addresses provided in this tutorial are fake. However, make sure you or someone you trust have full control over the addresses you use. Otherwise, you might bring your Avalanche L1 to a halt. ```bash ✔ Yes Use the arrow keys to navigate: ↓ ↑ → ← ? Provide 'adminAddresses': ▸ Add Delete Preview More Info ↓ Done ``` The prompting follows a pattern used throughout the tool: 1.
Select an operation: - `Add`: adds a new address to the current list - `Delete`: removes an address from the current list - `Preview`: prints the current list 2. `More Info` prints additional information for better guidance, if available 3. Select `Done` when you have completed the list Go ahead and add your first address: ```bash ✔ Add ✔ Add an address: 0xaaaabbbbccccddddeeeeffff1111222233334444 ``` Add another one: ```bash ✔ Add Add an address: 0xaaaabbbbccccddddeeeeffff1111222233334444 ✔ Add ✔ Add an address: 0x1111222233334444aaaabbbbccccddddeeeeffff ``` Select `Preview` this time to confirm the list is correct: ```bash ✔ Preview 0. 0xaaaAbbBBCccCDDddEeEEFFfF1111222233334444 1. 0x1111222233334444aAaAbbBBCCCCDdDDeEeEffff Use the arrow keys to navigate: ↓ ↑ → ← ? Provide 'adminAddresses': ▸ Add Delete Preview More Info ↓ Done ``` If it looks good, select `Done` to continue: ```bash ✔ Done Use the arrow keys to navigate: ↓ ↑ → ← ? Add 'enabledAddresses'?: ▸ Yes No ``` Add one such enabled address; these are the addresses that can issue transactions: ```bash ✔ Add ✔ Add an address: 0x55554444333322221111eeeeaaaabbbbccccdddd█ ``` After you added this address and selected `Done`, the tool asks if you want to add another precompile: ```bash ✔ Done Use the arrow keys to navigate: ↓ ↑ → ← ? Should we configure another precompile?: ▸ No Yes ``` If you needed to add another one, you would select `Yes` here. The wizard would guide you through the other available precompiles, excluding already configured ones. To avoid making this tutorial too long, the assumption is that you're done here. Select `No`, which ends the wizard. This means you have successfully completed the generation of the upgrade file, often called the upgrade bytes. The tool stores them internally. You shouldn't move files around manually. Use the `export` and `import` commands to get access to the files.
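Under the hood, the stored upgrade bytes are a JSON document with a `precompileUpgrades` list. As an illustration only (this is not Avalanche-CLI code, and the helper name is made up), the Transaction Allow List configuration gathered above could be assembled programmatically like this:

```javascript
// Sketch: assemble a txAllowList precompile upgrade entry shaped like the one
// the wizard generates. Illustrative only -- not Avalanche-CLI internals.
function txAllowListUpgrade(adminAddresses, enabledAddresses, activationUtcIso) {
  return {
    precompileUpgrades: [
      {
        txAllowListConfig: {
          adminAddresses,
          enabledAddresses,
          // blockTimestamp is a Unix timestamp in seconds (UTC).
          blockTimestamp: Math.floor(Date.parse(activationUtcIso) / 1000),
        },
      },
    ],
  };
}

const upgrade = txAllowListUpgrade(
  [
    "0xaaaabbbbccccddddeeeeffff1111222233334444",
    "0x1111222233334444aaaabbbbccccddddeeeeffff",
  ],
  ["0x55554444333322221111eeeeaaaabbbbccccdddd"],
  "2023-03-31T14:00:00Z" // the activation time chosen in the wizard, as ISO 8601 UTC
);
console.log(JSON.stringify(upgrade, null, 2));
```

Note how the human-readable UTC datetime from the wizard becomes a Unix timestamp in the file; getting that conversion wrong is one of the easiest ways to produce a syntactically valid but incorrect upgrade.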
So at this point you can either: - Deploy your upgrade bytes locally - Export your upgrade bytes to a file, for installation on a validator running on another machine - Import a file into a different machine running Avalanche-CLI How To Upgrade a Local Network[​](#how-to-upgrade-a-local-network "Direct link to heading") ------------------------------------------------------------------------------------------- The normal use case for this operation is that: - You already created an Avalanche L1 - You already deployed the Avalanche L1 locally - You already generated the upgrade file with the preceding command or imported it into the tool - The network has already been started by this tool If the preceding requirements aren't met, the network upgrade command fails. To apply your generated or imported upgrade configuration, run: ```bash avalanche blockchain upgrade apply testblockchain ``` A number of checks run. For example, if you created the Avalanche L1 but didn't deploy it locally: ```bash avalanche blockchain upgrade apply testblockchain Error: no deployment target available Usage: avalanche blockchain upgrade apply [blockchainName] [flags] Flags: --avalanchego-chain-config-dir string avalanchego's chain config file directory (default "/home/fabio/.avalanchego/chains") --config create upgrade config for future Avalanche L1 deployments (same as generate) --fuji fuji apply upgrade existing fuji deployment (alias for `testnet`) -h, --help help for apply --local local apply upgrade existing local deployment --mainnet mainnet apply upgrade existing mainnet deployment --print if true, print the manual config without prompting (for public networks only) --testnet testnet apply upgrade existing testnet deployment (alias for `fuji`) Global Flags: --log-level string log level for the application (default "ERROR") ``` If that's your case, go ahead and [deploy](/docs/tooling/create-deploy-avalanche-l1s/deploy-locally) your Avalanche L1 first.
If you already had deployed the Avalanche L1 instead, you see something like this: ```bash avalanche blockchain upgrade apply testblockchain Use the arrow keys to navigate: ↓ ↑ → ← ? What deployment would you like to upgrade: ▸ Existing local deployment ``` Select `Existing local deployment`. This installs the upgrade file on all nodes of your local network running in the background. Et voilà. This is the output shown if all went well: ```bash ✔ Existing local deployment ....... Network restarted and ready to use. Upgrade bytes have been applied to running nodes at these endpoints. The next upgrade will go into effect 2023-03-31 09:00:00 +-------+------------+-----------------------------------------------------------------------------------+ | NODE | VM | URL | +-------+------------+-----------------------------------------------------------------------------------+ | node1 | testblockchain | http://0.0.0.0:9650/ext/bc/2YTRV2roEhgvwJz7D7vr33hUZscpaZgcYgUTjeMK9KH99NFnsH/rpc | +-------+------------+-----------------------------------------------------------------------------------+ | node2 | testblockchain | http://0.0.0.0:9652/ext/bc/2YTRV2roEhgvwJz7D7vr33hUZscpaZgcYgUTjeMK9KH99NFnsH/rpc | +-------+------------+-----------------------------------------------------------------------------------+ | node3 | testblockchain | http://0.0.0.0:9654/ext/bc/2YTRV2roEhgvwJz7D7vr33hUZscpaZgcYgUTjeMK9KH99NFnsH/rpc | +-------+------------+-----------------------------------------------------------------------------------+ | node4 | testblockchain | http://0.0.0.0:9656/ext/bc/2YTRV2roEhgvwJz7D7vr33hUZscpaZgcYgUTjeMK9KH99NFnsH/rpc | +-------+------------+-----------------------------------------------------------------------------------+ | node5 | testblockchain | http://0.0.0.0:9658/ext/bc/2YTRV2roEhgvwJz7D7vr33hUZscpaZgcYgUTjeMK9KH99NFnsH/rpc | +-------+------------+-----------------------------------------------------------------------------------+ ``` There is only so much the 
tool can do here for you. It installed the upgrade bytes _as-is_, exactly as you configured or provided them. You should verify yourself that the upgrades were actually installed correctly, for example by issuing some transactions (mind the activation timestamp!). Apply the Upgrade to a Public Node (Fuji or Mainnet)[​](#apply-the-upgrade-to-a-public-node-fuji-or-mainnet "Direct link to heading") ------------------------------------------------------------------------------------------------------------------------------------- For this scenario to work, you should also have deployed the Avalanche L1 to the public network (Fuji or Mainnet) with this tool. Otherwise, the tool won't know the details of the Avalanche L1, and won't be able to guide you. Assuming the Avalanche L1 has been already deployed to Fuji, when running the `apply` command, the tool notices the deployment: ```bash avalanche blockchain upgrade apply testblockchain Use the arrow keys to navigate: ↓ ↑ → ← ? What deployment would you like to upgrade: Existing local deployment ▸ Fuji ``` If not, you would not find the `Fuji` entry here. This scenario assumes that you are running the Fuji validator on the same machine which is running Avalanche-CLI. If this is the case, the tool tries to install the upgrade file at the expected destination. If you use default paths, it tries to install under `$HOME/.avalanchego/chains/`, creating the blockchain ID directory, so that the file finally ends up at `$HOME/.avalanchego/chains/<blockchainID>/upgrade.json`. If you are _not_ using default paths, you can configure the path by providing the flag `--avalanchego-chain-config-dir` to the tool. For example: ```bash avalanche blockchain upgrade apply testblockchain --avalanchego-chain-config-dir /path/to/your/chains ``` Make sure to identify correctly where your chain config dir is, or the node might fail to find it.
If all is correct, the file gets installed: ```bash avalanche blockchain upgrade apply testblockchain ✔ Fuji The chain config dir avalanchego uses is set at /home/fabio/.avalanchego/chains Trying to install the upgrade files at the provided /home/fabio/.avalanchego/chains path Successfully installed upgrade file ``` If however the node is _not_ running on the same machine where you are executing Avalanche-CLI, there is no point in running this command for a Fuji node. In that case, you should rather export the file and install it at the right location. To see the instructions about how to go about this, add the `--print` flag: ```bash avalanche blockchain upgrade apply testblockchain --print ✔ Fuji To install the upgrade file on your validator: 1. Identify where your validator has the avalanchego chain config dir configured. The default is at $HOME/.avalanchego/chains (/home/user/.avalanchego/chains on this machine). If you are using a different chain config dir for your node, use that one. 2. Create a directory with the blockchainID in the configured chain-config-dir (e.g. $HOME/.avalanchego/chains/ExDKhjXqiVg7s35p8YJ56CJpcw6nJgcGCCE7DbQ4oBknZ1qXi) if it doesn't already exist. 3. Create an `upgrade.json` file in the blockchain directory with the content of your upgrade file. This is the content of your upgrade file as configured in this tool: { "precompileUpgrades": [ { "txAllowListConfig": { "adminAddresses": [ "0xb3d82b1367d362de99ab59a658165aff520cbd4d" ], "enabledAddresses": null, "blockTimestamp": 1677550447 } } ] } ****************************************************************************************************************** * Upgrades are tricky. The syntactic correctness of the upgrade file is important. * * The sequence of upgrades must be strictly observed. * * Make sure you understand https://build.avax.network/docs/nodes/configure/configs-flags#subnet-chain-configs * * before applying upgrades manually.
* ****************************************************************************************************************** ``` The instructions also show the content of your current upgrade file, so you can copy it from there if you wish, or you can export the file. Export the Upgrade File[​](#export-the-upgrade-file "Direct link to heading") ----------------------------------------------------------------------------- If you have generated the upgrade file, you can export it: ```bash avalanche blockchain upgrade export testblockchain ✔ Provide a path where we should export the file to: /tmp/testblockchain-upgrade.json ``` Just provide a valid path to the prompt, and the tool exports the file there. ```bash avalanche blockchain upgrade export testblockchain Provide a path where we should export the file to: /tmp/testblockchain-upgrade.json Writing the upgrade bytes file to "/tmp/testblockchain-upgrade.json"... File written successfully. ``` You can now take that file and copy it to validator nodes; see the preceding instructions. Import the Upgrade File[​](#import-the-upgrade-file "Direct link to heading") ----------------------------------------------------------------------------- You or someone else might have generated the file elsewhere, or on another machine, and now you want to install it on the validator machine using Avalanche-CLI. You can import the file: ```bash avalanche blockchain upgrade import testblockchain Provide the path to the upgrade file to import: /tmp/testblockchain-upgrade.json ``` An existing file with the same path and filename would be overwritten. After you have imported the file, you can `apply` it either to a local network or to a locally running validator. Follow the instructions for the appropriate use case. # Virtual Machine (/docs/tooling/avalanche-cli/upgrade/avalanche-l1-virtual-machine) --- title: Virtual Machine description: This how-to guide explains how to upgrade the VM of an already-deployed Avalanche L1.
--- To upgrade a local Avalanche L1, you first need to pause the local network. To do so, run: ```bash avalanche network stop ``` Next, you need to select the new VM to run your Avalanche L1 on. If you're running a Subnet-EVM Avalanche L1, you likely want to bump to the latest released version. If you're running a custom VM, you'll want to choose another custom binary. Start the upgrade wizard with: ```bash avalanche blockchain upgrade vm <blockchainName> ``` where you replace `<blockchainName>` with the name of the Avalanche L1 you would like to upgrade. ## Selecting a VM Deployment to Upgrade After starting the Avalanche L1 Upgrade Wizard, you should see something like this: ```bash ? What deployment would you like to upgrade: ▸ Update config for future deployments Existing local deployment ``` If you select the first option, Avalanche-CLI updates your Avalanche L1's config and any future calls to `avalanche blockchain deploy` use the new version you select. However, any existing local deployments continue to use the old version. If you select the second option, the opposite occurs. The existing local deployment switches to the new VM, but subsequent deploys use the original. ## Select a VM to Upgrade To The next option asks you to select your new virtual machine. ```bash ? How would you like to update your Avalanche L1's virtual machine: ▸ Update to latest version Update to a specific version Update to a custom binary ``` If you're using the Subnet-EVM, you'll have the option to upgrade to the latest released version. You can also select a specific version or supply a custom binary. If your Avalanche L1 already uses a custom VM, you need to select another custom binary. Once you select your VM, you should see something like: ```bash Upgrade complete. Ready to restart the network. ``` ## Restart the Network If you are running multiple Avalanche L1s concurrently, you may need to update multiple Avalanche L1s to restart the network. All of your deployed Avalanche L1s must be using the same RPC protocol version.
You can see more details about this [here](/docs/tooling/maintain/troubleshooting#incompatible-rpc-version-for-custom-vm). Finally, restart the network with: ```bash avalanche network start ``` If the network starts correctly, your Avalanche L1 is now running the upgraded VM. # Authentication (/docs/tooling/avalanche-sdk/chainkit/authentication) --- title: Authentication description: Authentication for the ChainKit SDK icon: Lock --- ### Per-Client Security Schemes This SDK supports the following security scheme globally:

| Name | Type | Scheme |
| -------- | ------ | ------- |
| `apiKey` | apiKey | API key |

The ChainKit SDK can be used without an API key, but rate limits will be lower; adding an API key allows for higher rate limits and higher request volumes. To get an API key, create one via [Builder Console](/console/utilities/data-api-keys) and store it securely. ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.metrics.healthCheck(); // Handle the result console.log(result); } run(); ``` Never hardcode your API key directly into your code. Instead, securely store it and retrieve it from an environment variable, a secrets manager, or a dedicated configuration storage mechanism. This ensures that sensitive information remains protected and is not exposed in version control or publicly accessible code. # Custom HTTP Client (/docs/tooling/avalanche-sdk/chainkit/custom-http) --- title: Custom HTTP Client description: Custom HTTP Client for the ChainKit SDK icon: Server --- The TypeScript SDK makes API calls using an HTTPClient that wraps the native [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API).
This client is a thin wrapper around `fetch` and provides the ability to attach hooks around the request lifecycle that can be used to modify the request or handle errors and responses. The `HTTPClient` constructor takes an optional `fetcher` argument that can be used to integrate a third-party HTTP client or, when writing tests, to mock out the HTTP client and feed in fixtures. The following example shows how to use the `beforeRequest` hook to add a custom header and a timeout to requests, and how to use the `requestError` hook to log errors: ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; import { HTTPClient } from "@avalanche-sdk/chainkit/lib/http"; const httpClient = new HTTPClient({ // fetcher takes a function that has the same signature as native `fetch`. fetcher: (request) => { return fetch(request); }, }); httpClient.addHook("beforeRequest", (request) => { const nextRequest = new Request(request, { signal: request.signal || AbortSignal.timeout(5000), }); nextRequest.headers.set("x-custom-header", "custom value"); return nextRequest; }); httpClient.addHook("requestError", (error, request) => { console.group("Request Error"); console.log("Reason:", `${error}`); console.log("Endpoint:", `${request.method} ${request.url}`); console.groupEnd(); }); const sdk = new Avalanche({ httpClient }); ``` # Error Handling (/docs/tooling/avalanche-sdk/chainkit/errors) --- title: Error Handling description: Error Handling for the ChainKit SDK icon: Bug --- All SDK methods return a response object or throw an error. If Error objects are specified in your OpenAPI Spec, the SDK will throw the appropriate Error type.
| Error Object | Status Code | Content Type |
| :------------------------- | :---------- | :--------------- |
| errors.BadRequest | 400 | application/json |
| errors.Unauthorized | 401 | application/json |
| errors.Forbidden | 403 | application/json |
| errors.NotFound | 404 | application/json |
| errors.TooManyRequests | 429 | application/json |
| errors.InternalServerError | 500 | application/json |
| errors.BadGateway | 502 | application/json |
| errors.ServiceUnavailable | 503 | application/json |
| errors.SDKError | 4xx-5xx | / |

Validation errors can also occur when either method arguments or data returned from the server do not match the expected format. The `SDKValidationError` that is thrown as a result captures the raw value that failed validation in an attribute called `rawValue`. Additionally, a `pretty()` method is available on this error that can be used to log a nicely formatted string, since validation errors can list many issues and the plain error string may be difficult to read when debugging.
```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; import { BadGateway, BadRequest, Forbidden, InternalServerError, NotFound, SDKValidationError, ServiceUnavailable, TooManyRequests, Unauthorized, } from "@avalanche-sdk/chainkit/models/errors"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { try { await avalancheSDK.data.nfts.reindex({ address: "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", tokenId: "145", }); } catch (err) { switch (true) { case err instanceof SDKValidationError: { // Validation errors can be pretty-printed console.error(err.pretty()); // Raw value may also be inspected console.error(err.rawValue); return; } case err instanceof BadRequest: { // Handle err.data$: BadRequestData console.error(err); return; } case err instanceof Unauthorized: { // Handle err.data$: UnauthorizedData console.error(err); return; } case err instanceof Forbidden: { // Handle err.data$: ForbiddenData console.error(err); return; } case err instanceof NotFound: { // Handle err.data$: NotFoundData console.error(err); return; } case err instanceof TooManyRequests: { // Handle err.data$: TooManyRequestsData console.error(err); return; } case err instanceof InternalServerError: { // Handle err.data$: InternalServerErrorData console.error(err); return; } case err instanceof BadGateway: { // Handle err.data$: BadGatewayData console.error(err); return; } case err instanceof ServiceUnavailable: { // Handle err.data$: ServiceUnavailableData console.error(err); return; } default: { throw err; } } } } run(); ``` # Getting Started (/docs/tooling/avalanche-sdk/chainkit/getting-started) --- title: Getting Started description: Get started with the ChainKit SDK icon: Rocket --- ### ChainKit SDK The ChainKit SDK provides web3 application developers with multi-chain data related to Avalanche's primary network, Avalanche L1s, and Ethereum. 
With the Data API, you can easily build products that leverage real-time and historical transaction and transfer history, native and token balances, and various types of token metadata. **Migration Notice**: This SDK was previously known as the AvaCloud SDK. We have made namespace changes and will discontinue the AvaCloud SDK in favor of the ChainKit SDK. For migration guidance and specific method updates, please refer to the individual method documentation. The SDK is currently available in TypeScript, with more languages coming soon. If you are interested in a language that is not listed, please reach out to us in the [#dev-tools](https://discord.com/channels/578992315641626624/1280920394236297257) channel in the [Avalanche Discord](https://discord.gg/avax). [https://www.npmjs.com/package/@avalanche-sdk/chainkit](https://www.npmjs.com/package/@avalanche-sdk/chainkit) [https://github.com/ava-labs/avalanche-sdk-typescript](https://github.com/ava-labs/avalanche-sdk-typescript) ### SDK Installation ```bash npm add @avalanche-sdk/chainkit ``` ```bash pnpm add @avalanche-sdk/chainkit ``` ```bash bun add @avalanche-sdk/chainkit ``` ```bash yarn add @avalanche-sdk/chainkit zod ``` ### SDK Example Usage ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.metrics.healthCheck(); // Handle the result console.log(result); } run(); ``` Refer to the code samples provided for each route to see examples of how to use them in the SDK. Explore routes here [Data API](/docs/api-reference/data-api/getting-started), [Metrics API](/docs/api-reference/metrics-api/getting-started) & [Webhooks API](/docs/api-reference/webhook-api/getting-started). 
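The authentication section above recommends keeping the API key out of source code. A minimal sketch of loading it from an environment variable (the variable name `AVALANCHE_API_KEY` and the helper function are assumptions, not SDK conventions):

```javascript
// Sketch: resolve the API key from an environment variable instead of
// hardcoding it. AVALANCHE_API_KEY is an assumed name, not mandated by the SDK.
function loadApiKey(env = process.env) {
  const key = env.AVALANCHE_API_KEY;
  if (!key) {
    throw new Error("AVALANCHE_API_KEY is not set; create a key in the Builder Console");
  }
  return key;
}

// The resolved key would then be passed to the SDK constructor, e.g.:
// const avalancheSDK = new Avalanche({ apiKey: loadApiKey(), chainId: "43114", network: "mainnet" });
```

Failing fast with a clear error when the variable is missing makes misconfigured deployments easier to diagnose than silently falling back to unauthenticated (lower rate limit) access.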
# Global Parameters (/docs/tooling/avalanche-sdk/chainkit/global-parameters) --- title: Global Parameters description: Global parameters for the ChainKit SDK icon: Globe --- Certain parameters are configured globally. These parameters may be set on the SDK client instance itself during initialization. When configured as an option during SDK initialization, these global values will be used as defaults on the operations that use them. When such operations are called, each one provides a place to override the global value, if needed. For example, you can set `chainId` to `43114` at SDK initialization and then you do not have to pass the same value on calls to operations like `getBlock`. If you do pass a value on a call, it locally overrides the global setting. See the example code below for a demonstration. ### Available Globals The following global parameters are available.

| Name | Type | Required | Description |
| :-------- | :---------------------------- | :------- | :------------------------------------------------------- |
| `chainId` | string | No | A supported EVM chain id, chain alias, or blockchain id. |
| `network` | components.GlobalParamNetwork | No | A supported network type, either mainnet or a testnet. |

Example ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", // Sets chainId globally, will be used if not passed during method call. network: "mainnet", }); async function run() { const result = await avalancheSDK.data.evm.blocks.get({ blockId: "0x17533aeb5193378b9ff441d61728e7a2ebaf10f61fd5310759451627dfca2e7c", chainId: "", // Override the globally set chain id. }); // Handle the result console.log(result); } run(); ``` # Pagination (/docs/tooling/avalanche-sdk/chainkit/pagination) --- title: Pagination description: Pagination for the ChainKit SDK icon: StickyNote --- Some of the endpoints in this SDK support pagination.
To use pagination, you make your SDK calls as usual, but the returned response object will also be an async iterable that can be consumed using the `for await...of` syntax. Here's an example of one such pagination call: ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.metrics.chains.list({ network: "mainnet", }); for await (const page of result) { // Handle the page console.log(page); } } run(); ``` # Retries (/docs/tooling/avalanche-sdk/chainkit/retries) --- title: Retries description: Retries for the ChainKit SDK icon: RotateCcw --- Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK. To change the default retry strategy for a single API call, simply provide a retryConfig object to the call: ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.metrics.healthCheck({ retries: { strategy: "backoff", backoff: { initialInterval: 1, maxInterval: 50, exponent: 1.1, maxElapsedTime: 100, }, retryConnectionErrors: false, }, }); // Handle the result console.log(result); } run(); ``` If you'd like to override the default retry strategy for all operations that support retries, you can provide a retryConfig at SDK initialization: ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ retryConfig: { strategy: "backoff", backoff: { initialInterval: 1, maxInterval: 50, exponent: 1.1, maxElapsedTime: 100, }, retryConnectionErrors: false, }, apiKey: "", chainId: "43114", network: "mainnet", }); 
async function run() {
  const result = await avalancheSDK.metrics.healthCheck();

  // Handle the result
  console.log(result);
}

run();
```

# Getting Started (/docs/tooling/avalanche-sdk/client/getting-started)

---
title: Getting Started
description: Getting Started with the Client SDK
icon: Rocket
---

### Client SDK

The main Avalanche client SDK for interacting with Avalanche nodes and building blockchain applications.

Features:

* Complete API coverage for P-Chain, X-Chain, and C-Chain.
* Full viem compatibility - anything you can do with viem works here.
* TypeScript-first design with full type safety.
* Abstractions over the JSON-RPC API to make your life easier.
* Wallet integration and transaction management.
* First-class APIs for interacting with Smart Contracts.
* Retrieve balances and UTXOs for addresses.
* Build, sign, and issue transactions to any chain.
* Perform cross-chain transfers between the X, P, and C chains.
* Add validators and delegators.
* Create subnets and blockchains, and convert subnets to L1s.

The SDK is currently available in TypeScript, with more languages coming soon. If you are interested in a language that is not listed, please reach out to us in the [#dev-tools](https://discord.com/channels/578992315641626624/1280920394236297257) channel in the [Avalanche Discord](https://discord.gg/avax).

[https://www.npmjs.com/package/@avalanche-sdk/client](https://www.npmjs.com/package/@avalanche-sdk/client)

[https://github.com/ava-labs/avalanche-sdk-typescript](https://github.com/ava-labs/avalanche-sdk-typescript)

### SDK Installation

```bash
npm add @avalanche-sdk/client
```

```bash
pnpm add @avalanche-sdk/client
```

```bash
bun add @avalanche-sdk/client
```

```bash
yarn add @avalanche-sdk/client zod
```

Yarn does not install peer dependencies automatically. You will need to install zod as shown above.
### SDK Example Usage

```javascript
import { createAvalancheClient } from '@avalanche-sdk/client'
import { avalanche } from '@avalanche-sdk/client/chains'

const client = createAvalancheClient({
  chain: avalanche,
  transport: { type: "http" }
})

// Get account balance
const balance = await client.getBalance({
  address: '0xA0Cf798816D4b9b9866b5330EEa46a18382f251e',
})
```

Refer to the code samples provided for each route to see examples of how to use them in the SDK. Explore routes here: [Data API](/docs/api-reference/data-api/getting-started), [Metrics API](/docs/api-reference/metrics-api/getting-started) & [Webhooks API](/docs/api-reference/webhook-api/getting-started).

# Getting Started (/docs/tooling/avalanche-sdk/interchain/getting-started)

---
title: Getting Started
description: Getting Started with the Interchain SDK
icon: Rocket
---

### Interchain SDK

The Interchain SDK is a TypeScript SDK for interacting with the Interchain Messaging Protocol (ICM) and Teleporter on Avalanche.

Features:

* Type-safe ICM client for sending cross-chain messages
* Works seamlessly with wallet clients
* Built-in support for Avalanche C-Chain and custom subnets

The SDK is currently available in TypeScript, with more languages coming soon. If you are interested in a language that is not listed, please reach out to us in the [#dev-tools](https://discord.com/channels/578992315641626624/1280920394236297257) channel in the [Avalanche Discord](https://discord.gg/avax).
[https://www.npmjs.com/package/@avalanche-sdk/interchain](https://www.npmjs.com/package/@avalanche-sdk/interchain) [https://github.com/ava-labs/avalanche-sdk-typescript](https://github.com/ava-labs/avalanche-sdk-typescript) ### SDK Installation ```bash npm add @avalanche-sdk/interchain ``` ```bash pnpm add @avalanche-sdk/interchain ``` ```bash bun add @avalanche-sdk/interchain ``` ```bash yarn add @avalanche-sdk/interchain zod ``` Yarn does not install peer dependencies automatically. You will need to install zod as shown above. ### SDK Example Usage ```javascript import { createWalletClient, http } from "viem"; import { createICMClient } from "@avalanche-sdk/interchain"; import { privateKeyToAccount } from "viem/accounts"; import * as dotenv from 'dotenv'; // Load environment variables dotenv.config(); // these will be made available in a separate SDK soon import { avalancheFuji, dispatch } from "@avalanche-sdk/interchain/chains"; // Get private key from environment const privateKey = process.env.PRIVATE_KEY; if (!privateKey) { throw new Error("PRIVATE_KEY not found in environment variables"); } // Load your signer/account const account = privateKeyToAccount(privateKey as `0x${string}`); // Create a viem wallet client connected to Avalanche Fuji const wallet = createWalletClient({ transport: http('https://api.avax-test.network/ext/bc/C/rpc'), account, }); // Initialize the ICM client const icmClient = createICMClient(wallet); // Send a message across chains async function main() { try { const hash = await icmClient.sendMsg({ sourceChain: avalancheFuji, destinationChain: dispatch, message: 'Hello from Avalanche Fuji to Dispatch Fuji!', }); console.log('Message sent with hash:', hash); } catch (error) { console.error('Error sending message:', error); process.exit(1); } } main(); ``` Refer to the code samples provided for each route to see examples of how to use them in the SDK. 
Explore routes here [Data API](/docs/api-reference/data-api/getting-started), [Metrics API](/docs/api-reference/metrics-api/getting-started) & [Webhooks API](/docs/api-reference/webhook-api/getting-started). # avax.getAtomicTx (/docs/rpcs/c-chain/avalanche/avax_getAtomicTx) --- title: avax.getAtomicTx full: true _openapi: method: POST route: /ext/bc/C/avax#avax.getAtomicTx toc: [] structuredData: headings: [] contents: - content: Returns the specified atomic transaction. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the specified atomic transaction. # avax.getAtomicTxStatus (/docs/rpcs/c-chain/avalanche/avax_getAtomicTxStatus) --- title: avax.getAtomicTxStatus full: true _openapi: method: POST route: /ext/bc/C/avax#avax.getAtomicTxStatus toc: [] structuredData: headings: [] contents: - content: Returns the status of the specified atomic transaction. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the status of the specified atomic transaction. # avax.getUTXOs (/docs/rpcs/c-chain/avalanche/avax_getUTXOs) --- title: avax.getUTXOs full: true _openapi: method: POST route: /ext/bc/C/avax#avax.getUTXOs toc: [] structuredData: headings: [] contents: - content: Gets all UTXOs for the specified addresses. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets all UTXOs for the specified addresses. # avax.issueTx (/docs/rpcs/c-chain/avalanche/avax_issueTx) --- title: avax.issueTx full: true _openapi: method: POST route: /ext/bc/C/avax#avax.issueTx toc: [] structuredData: headings: [] contents: - content: Issues a transaction to the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Issues a transaction to the network. # eth_suggestPriceOptions (/docs/rpcs/c-chain/avalanche/eth_suggestPriceOptions) --- title: eth_suggestPriceOptions full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_suggestPriceOptions toc: [] structuredData: headings: [] contents: - content: Returns suggested fee options (Coreth extension). --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns suggested fee options (Coreth extension). # debug_getRawBlock (/docs/rpcs/c-chain/debug/debug_getRawBlock) --- title: debug_getRawBlock full: true _openapi: method: POST route: /ext/bc/C/rpc#debug_getRawBlock toc: [] structuredData: headings: [] contents: - content: > Returns RLP-encoded bytes of a single block by number or hash. **⚠️ Note:** This method is NOT available on public RPC endpoints. You must run your own node or use a dedicated node service to access debug methods. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns RLP-encoded bytes of a single block by number or hash. **⚠️ Note:** This method is NOT available on public RPC endpoints. You must run your own node or use a dedicated node service to access debug methods. # debug_getRawTransaction (/docs/rpcs/c-chain/debug/debug_getRawTransaction) --- title: debug_getRawTransaction full: true _openapi: method: POST route: /ext/bc/C/rpc#debug_getRawTransaction toc: [] structuredData: headings: [] contents: - content: > Returns the bytes of the transaction for the given hash. **⚠️ Note:** This method is NOT available on public RPC endpoints. You must run your own node or use a dedicated node service to access debug methods. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Returns the bytes of the transaction for the given hash. **⚠️ Note:** This method is NOT available on public RPC endpoints. You must run your own node or use a dedicated node service to access debug methods. # eth_accounts (/docs/rpcs/c-chain/eth/eth_accounts) --- title: eth_accounts full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_accounts toc: [] structuredData: headings: [] contents: - content: > Returns a list of addresses owned by the client. **Note:** Public RPC endpoints will return an empty array as they don't manage any accounts. This method only returns accounts when using a node you control with unlocked accounts. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a list of addresses owned by the client. **Note:** Public RPC endpoints will return an empty array as they don't manage any accounts. This method only returns accounts when using a node you control with unlocked accounts. # eth_blockNumber (/docs/rpcs/c-chain/eth/eth_blockNumber) --- title: eth_blockNumber full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_blockNumber toc: [] structuredData: headings: [] contents: - content: Returns the block number of the chain head. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the block number of the chain head. # eth_call (/docs/rpcs/c-chain/eth/eth_call) --- title: eth_call full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_call toc: [] structuredData: headings: [] contents: - content: Executes a call without sending a transaction. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Executes a call without sending a transaction. 
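The stub above describes what `eth_call` does but not the wire format. As an illustrative sketch (not taken from the generated docs), an `eth_call` request is a standard JSON-RPC 2.0 POST body; the target address and calldata below are placeholders:

```javascript
// Build a JSON-RPC 2.0 request body for eth_call.
// The `to` address and `data` calldata are placeholders, not real contracts.
const callRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "eth_call",
  params: [
    {
      to: "0x0000000000000000000000000000000000000000", // target contract (placeholder)
      data: "0x", // ABI-encoded calldata (placeholder)
    },
    "latest", // block tag: "latest", "pending", or a hex block number
  ],
};

// POST it to a C-Chain RPC endpoint, for example:
// fetch("https://api.avax.network/ext/bc/C/rpc", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(callRequest),
// }).then((res) => res.json()).then(console.log);

console.log(JSON.stringify(callRequest));
```

The response's `result` field holds the ABI-encoded return data of the call; no transaction is created and the caller spends no gas.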
# eth_chainId (/docs/rpcs/c-chain/eth/eth_chainId) --- title: eth_chainId full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_chainId toc: [] structuredData: headings: [] contents: - content: Returns the chain ID of the current network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the chain ID of the current network. # eth_coinbase (/docs/rpcs/c-chain/eth/eth_coinbase) --- title: eth_coinbase full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_coinbase toc: [] structuredData: headings: [] contents: - content: > Returns the client coinbase address. **Note:** On public RPC endpoints, this returns the node operator's coinbase address, not yours. This method is mainly useful when running your own node. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the client coinbase address. **Note:** On public RPC endpoints, this returns the node operator's coinbase address, not yours. This method is mainly useful when running your own node. # eth_estimateGas (/docs/rpcs/c-chain/eth/eth_estimateGas) --- title: eth_estimateGas full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_estimateGas toc: [] structuredData: headings: [] contents: - content: Returns the lowest gas limit for a transaction to succeed. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the lowest gas limit for a transaction to succeed. # eth_feeHistory (/docs/rpcs/c-chain/eth/eth_feeHistory) --- title: eth_feeHistory full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_feeHistory toc: [] structuredData: headings: [] contents: - content: Returns the fee market history. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Returns the fee market history. # eth_gasPrice (/docs/rpcs/c-chain/eth/eth_gasPrice) --- title: eth_gasPrice full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_gasPrice toc: [] structuredData: headings: [] contents: - content: Returns a suggestion for a legacy gas price. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a suggestion for a legacy gas price. # eth_getBalance (/docs/rpcs/c-chain/eth/eth_getBalance) --- title: eth_getBalance full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getBalance toc: [] structuredData: headings: [] contents: - content: Returns the balance of the account of given address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the balance of the account of given address. # eth_getBlockByHash (/docs/rpcs/c-chain/eth/eth_getBlockByHash) --- title: eth_getBlockByHash full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getBlockByHash toc: [] structuredData: headings: [] contents: - content: >- Returns the block for a given hash. Second param selects full transactions. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the block for a given hash. Second param selects full transactions. # eth_getBlockByNumber (/docs/rpcs/c-chain/eth/eth_getBlockByNumber) --- title: eth_getBlockByNumber full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getBlockByNumber toc: [] structuredData: headings: [] contents: - content: >- Returns the block for a given number. Second param selects full transactions. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Returns the block for a given number. Second param selects full transactions. # eth_getBlockTransactionCountByHash (/docs/rpcs/c-chain/eth/eth_getBlockTransactionCountByHash) --- title: eth_getBlockTransactionCountByHash full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getBlockTransactionCountByHash toc: [] structuredData: headings: [] contents: - content: >- Returns the number of transactions in a block from a block matching the given block hash. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the number of transactions in a block from a block matching the given block hash. # eth_getBlockTransactionCountByNumber (/docs/rpcs/c-chain/eth/eth_getBlockTransactionCountByNumber) --- title: eth_getBlockTransactionCountByNumber full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getBlockTransactionCountByNumber toc: [] structuredData: headings: [] contents: - content: >- Returns the number of transactions in a block matching the given block number. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the number of transactions in a block matching the given block number. # eth_getCode (/docs/rpcs/c-chain/eth/eth_getCode) --- title: eth_getCode full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getCode toc: [] structuredData: headings: [] contents: - content: Returns code at a given address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns code at a given address. 
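One common use of `eth_getCode` is distinguishing a contract from an externally owned account: contracts return their deployed bytecode, while EOAs return `0x`. A minimal sketch of the request body and result check (the address is a placeholder, and the helper names are ours, not part of any API):

```javascript
// Build a JSON-RPC 2.0 request body for eth_getCode.
// The address passed in below is a placeholder, not a known contract.
function buildGetCodeRequest(address, blockTag = "latest") {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "eth_getCode",
    params: [address, blockTag],
  };
}

// Interpreting the response: "0x" means no code is deployed at the address,
// i.e. it is an externally owned account (or simply empty).
function hasCode(result) {
  return result !== "0x" && result !== "0x0";
}

const req = buildGetCodeRequest("0x0000000000000000000000000000000000000000");
console.log(JSON.stringify(req));
```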
# eth_getFilterChanges (/docs/rpcs/c-chain/eth/eth_getFilterChanges) --- title: eth_getFilterChanges full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getFilterChanges toc: [] structuredData: headings: [] contents: - content: >- Polling method for a filter, which returns an array of logs which occurred since last poll. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Polling method for a filter, which returns an array of logs which occurred since last poll. # eth_getFilterLogs (/docs/rpcs/c-chain/eth/eth_getFilterLogs) --- title: eth_getFilterLogs full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getFilterLogs toc: [] structuredData: headings: [] contents: - content: Returns an array of all logs matching filter with given id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns an array of all logs matching filter with given id. # eth_getLogs (/docs/rpcs/c-chain/eth/eth_getLogs) --- title: eth_getLogs full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getLogs toc: [] structuredData: headings: [] contents: - content: Returns logs matching the given filter object. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns logs matching the given filter object. # eth_getStorageAt (/docs/rpcs/c-chain/eth/eth_getStorageAt) --- title: eth_getStorageAt full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getStorageAt toc: [] structuredData: headings: [] contents: - content: Returns the value from a storage position at a given address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the value from a storage position at a given address. 
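For `eth_getStorageAt`, the parameters are the account address, a 32-byte storage slot, and a block tag; for a simple (non-mapping) state variable, the slot is its declaration index, zero-padded to 32 bytes. A sketch under those assumptions (the address and helper names are placeholders of ours):

```javascript
// For a simple state variable, the storage slot is its declaration index,
// zero-padded to 32 bytes. Mappings and dynamic arrays use keccak-derived
// slots, which are not covered here.
function slotHex(index) {
  return "0x" + index.toString(16).padStart(64, "0");
}

// Build a JSON-RPC 2.0 request body for eth_getStorageAt.
function buildGetStorageRequest(address, slotIndex, blockTag = "latest") {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "eth_getStorageAt",
    params: [address, slotHex(slotIndex), blockTag],
  };
}

// Read slot 0 of a (placeholder) contract address:
const storageReq = buildGetStorageRequest(
  "0x0000000000000000000000000000000000000000",
  0
);
console.log(storageReq.params[1]); // "0x" followed by 64 hex characters
```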
# eth_getTransactionByBlockHashAndIndex (/docs/rpcs/c-chain/eth/eth_getTransactionByBlockHashAndIndex) --- title: eth_getTransactionByBlockHashAndIndex full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getTransactionByBlockHashAndIndex toc: [] structuredData: headings: [] contents: - content: >- Returns information about a transaction by block hash and transaction index position. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns information about a transaction by block hash and transaction index position. # eth_getTransactionByBlockNumberAndIndex (/docs/rpcs/c-chain/eth/eth_getTransactionByBlockNumberAndIndex) --- title: eth_getTransactionByBlockNumberAndIndex full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getTransactionByBlockNumberAndIndex toc: [] structuredData: headings: [] contents: - content: >- Returns information about a transaction by block number and transaction index position. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns information about a transaction by block number and transaction index position. # eth_getTransactionByHash (/docs/rpcs/c-chain/eth/eth_getTransactionByHash) --- title: eth_getTransactionByHash full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getTransactionByHash toc: [] structuredData: headings: [] contents: - content: >- Returns the information about a transaction requested by transaction hash. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the information about a transaction requested by transaction hash. 
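When tracking a submitted transaction, `eth_getTransactionByHash` responds as soon as the transaction is known (even while pending), whereas a receipt only exists once the transaction is included in a block. A hedged sketch of the two request bodies (the hash is taken from an earlier example in these docs and the helper name is ours):

```javascript
// Build JSON-RPC 2.0 request bodies for looking up a transaction
// and its receipt by hash.
function buildTxLookups(txHash) {
  return {
    byHash: {
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getTransactionByHash",
      params: [txHash],
    },
    receipt: {
      jsonrpc: "2.0",
      id: 2,
      method: "eth_getTransactionReceipt",
      params: [txHash],
    },
  };
}

const lookups = buildTxLookups(
  "0x17533aeb5193378b9ff441d61728e7a2ebaf10f61fd5310759451627dfca2e7c"
);

// A null `result` for the receipt while the by-hash lookup returns data
// usually means the transaction is known but not yet included in a block.
console.log(JSON.stringify(lookups.byHash));
```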
# eth_getTransactionCount (/docs/rpcs/c-chain/eth/eth_getTransactionCount) --- title: eth_getTransactionCount full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getTransactionCount toc: [] structuredData: headings: [] contents: - content: Returns the number of transactions sent from an address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the number of transactions sent from an address. # eth_getTransactionReceipt (/docs/rpcs/c-chain/eth/eth_getTransactionReceipt) --- title: eth_getTransactionReceipt full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getTransactionReceipt toc: [] structuredData: headings: [] contents: - content: Returns the receipt of a transaction by transaction hash. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the receipt of a transaction by transaction hash. # eth_maxPriorityFeePerGas (/docs/rpcs/c-chain/eth/eth_maxPriorityFeePerGas) --- title: eth_maxPriorityFeePerGas full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_maxPriorityFeePerGas toc: [] structuredData: headings: [] contents: - content: Returns a suggestion for a tip cap for dynamic fee transactions. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a suggestion for a tip cap for dynamic fee transactions. # eth_newBlockFilter (/docs/rpcs/c-chain/eth/eth_newBlockFilter) --- title: eth_newBlockFilter full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_newBlockFilter toc: [] structuredData: headings: [] contents: - content: > Creates a filter in the node, to notify when a new block arrives. 
**⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Creates a filter in the node, to notify when a new block arrives. **⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. # eth_newFilter (/docs/rpcs/c-chain/eth/eth_newFilter) --- title: eth_newFilter full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_newFilter toc: [] structuredData: headings: [] contents: - content: > Creates a filter object, based on filter options, to notify when the state changes (logs). **⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Creates a filter object, based on filter options, to notify when the state changes (logs). **⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. # eth_newPendingTransactionFilter (/docs/rpcs/c-chain/eth/eth_newPendingTransactionFilter) --- title: eth_newPendingTransactionFilter full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_newPendingTransactionFilter toc: [] structuredData: headings: [] contents: - content: > Creates a filter in the node, to notify when new pending transactions arrive. **⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Creates a filter in the node, to notify when new pending transactions arrive. **⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. # eth_protocolVersion (/docs/rpcs/c-chain/eth/eth_protocolVersion) --- title: eth_protocolVersion full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_protocolVersion toc: [] structuredData: headings: [] contents: - content: Returns the current ethereum protocol version. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the current ethereum protocol version. # eth_sendRawTransaction (/docs/rpcs/c-chain/eth/eth_sendRawTransaction) --- title: eth_sendRawTransaction full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_sendRawTransaction toc: [] structuredData: headings: [] contents: - content: Submits a signed raw transaction. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Submits a signed raw transaction. # eth_sign (/docs/rpcs/c-chain/eth/eth_sign) --- title: eth_sign full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_sign toc: [] structuredData: headings: [] contents: - content: > Signs data with a given address. **⚠️ Security Note:** This method is typically disabled on public RPC endpoints for security reasons. It requires access to private keys and should only be used on nodes you control. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Signs data with a given address. **⚠️ Security Note:** This method is typically disabled on public RPC endpoints for security reasons. 
It requires access to private keys and should only be used on nodes you control. # eth_uninstallFilter (/docs/rpcs/c-chain/eth/eth_uninstallFilter) --- title: eth_uninstallFilter full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_uninstallFilter toc: [] structuredData: headings: [] contents: - content: Uninstalls a filter with given id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Uninstalls a filter with given id. # net_version (/docs/rpcs/c-chain/net/net_version) --- title: net_version full: true _openapi: method: POST route: /ext/bc/C/rpc#net_version toc: [] structuredData: headings: [] contents: - content: Returns the current network ID as a string. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the current network ID as a string. # personal_newAccount (/docs/rpcs/c-chain/personal/personal_newAccount) --- title: personal_newAccount full: true _openapi: method: POST route: /ext/bc/C/rpc#personal_newAccount toc: [] structuredData: headings: [] contents: - content: > Creates a new account with the given password. **⚠️ Security Note:** This method is NOT available on public RPC endpoints for security reasons. Personal methods manage private keys and must only be used on nodes you control. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Creates a new account with the given password. **⚠️ Security Note:** This method is NOT available on public RPC endpoints for security reasons. Personal methods manage private keys and must only be used on nodes you control. 
# personal_unlockAccount (/docs/rpcs/c-chain/personal/personal_unlockAccount) --- title: personal_unlockAccount full: true _openapi: method: POST route: /ext/bc/C/rpc#personal_unlockAccount toc: [] structuredData: headings: [] contents: - content: > Unlocks an account with the given password and optional duration. **⚠️ Security Note:** This method is NOT available on public RPC endpoints for security reasons. Personal methods manage private keys and must only be used on nodes you control. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Unlocks an account with the given password and optional duration. **⚠️ Security Note:** This method is NOT available on public RPC endpoints for security reasons. Personal methods manage private keys and must only be used on nodes you control. # txpool_status (/docs/rpcs/c-chain/txpool/txpool_status) --- title: txpool_status full: true _openapi: method: POST route: /ext/bc/C/rpc#txpool_status toc: [] structuredData: headings: [] contents: - content: > Returns transaction pool status. **⚠️ Note:** This method may be restricted or rate-limited on public RPC endpoints. Consider running your own node for unrestricted access. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns transaction pool status. **⚠️ Note:** This method may be restricted or rate-limited on public RPC endpoints. Consider running your own node for unrestricted access. # web3_clientVersion (/docs/rpcs/c-chain/web3/web3_clientVersion) --- title: web3_clientVersion full: true _openapi: method: POST route: /ext/bc/C/rpc#web3_clientVersion toc: [] structuredData: headings: [] contents: - content: Returns the client version. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. 
*/} Returns the client version.

# Banff Changes (/docs/rpcs/other/guides/banff-changes)

---
title: Banff Changes
description: This document specifies the changes in Avalanche “Banff”, which was released in AvalancheGo v1.9.x.
---

## Block Changes

### Apricot

Apricot allows the following block types with the following content:

- _Standard Blocks_ may contain multiple transactions of the following types:
  - CreateChainTx
  - CreateSubnetTx
  - ImportTx
  - ExportTx
- _Proposal Blocks_ may contain a single transaction of the following types:
  - AddValidatorTx
  - AddDelegatorTx
  - AddSubnetValidatorTx
  - RewardValidatorTx
  - AdvanceTimeTx
- _Options Blocks_, that is, _Commit Blocks_ and _Abort Blocks_, do not contain any transactions.

Each block has a header containing:

- ParentID
- Height

### Banff

Banff allows the following block types with the following content:

- _Standard Blocks_ may contain multiple transactions of the following types:
  - CreateChainTx
  - CreateSubnetTx
  - ImportTx
  - ExportTx
  - AddValidatorTx
  - AddDelegatorTx
  - AddSubnetValidatorTx
  - _RemoveSubnetValidatorTx_
  - _TransformSubnetTx_
  - _AddPermissionlessValidatorTx_
  - _AddPermissionlessDelegatorTx_
- _Proposal Blocks_ may contain a single transaction of the following types:
  - RewardValidatorTx
- _Options Blocks_, that is, _Commit Blocks_ and _Abort Blocks_, do not contain any transactions.

Note that each block has a header containing:

- ParentID
- Height
- _Time_

So the two main differences with respect to Apricot are:

- _AddValidatorTx_, _AddDelegatorTx_, and _AddSubnetValidatorTx_ are included in Standard Blocks rather than Proposal Blocks, so they don't need to be voted on (that is, followed by a Commit/Abort Block).
- New transaction types (_RemoveSubnetValidatorTx_, _TransformSubnetTx_, _AddPermissionlessValidatorTx_, and _AddPermissionlessDelegatorTx_) have been added and are included in Standard Blocks.
- The block timestamp is explicitly serialized into the block header, to allow chain time updates.

### New Transactions[​](#new-transactions "Direct link to heading")

#### RemoveSubnetValidatorTx[​](#removesubnetvalidatortx "Direct link to heading")

```
type RemoveSubnetValidatorTx struct {
    BaseTx `serialize:"true"`

    // The node to remove from the Avalanche L1.
    NodeID ids.NodeID `serialize:"true" json:"nodeID"`

    // The Avalanche L1 to remove the node from.
    Subnet ids.ID `serialize:"true" json:"subnet"`

    // Proves that the issuer has the right to remove the node from the Avalanche L1.
    SubnetAuth verify.Verifiable `serialize:"true" json:"subnetAuthorization"`
}
```

#### TransformSubnetTx[​](#transformsubnettx "Direct link to heading")

```
type TransformSubnetTx struct {
    // Metadata, inputs and outputs
    BaseTx `serialize:"true"`

    // ID of the Subnet to transform
    // Restrictions:
    // - Must not be the Primary Network ID
    Subnet ids.ID `serialize:"true" json:"subnetID"`

    // Asset to use when staking on the Avalanche L1
    // Restrictions:
    // - Must not be the Empty ID
    // - Must not be the AVAX ID
    AssetID ids.ID `serialize:"true" json:"assetID"`

    // Amount to initially specify as the current supply
    // Restrictions:
    // - Must be > 0
    InitialSupply uint64 `serialize:"true" json:"initialSupply"`

    // Amount to specify as the maximum token supply
    // Restrictions:
    // - Must be >= [InitialSupply]
    MaximumSupply uint64 `serialize:"true" json:"maximumSupply"`

    // MinConsumptionRate is the rate to allocate funds if the validator's stake
    // duration is 0
    MinConsumptionRate uint64 `serialize:"true" json:"minConsumptionRate"`

    // MaxConsumptionRate is the rate to allocate funds if the validator's stake
    // duration is equal to the minting period
    // Restrictions:
    // - Must be >= [MinConsumptionRate]
    // - Must be <= [reward.PercentDenominator]
    MaxConsumptionRate uint64 `serialize:"true" json:"maxConsumptionRate"`

    // MinValidatorStake is the minimum amount of funds required to become a
    // validator.
    // Restrictions:
    // - Must be > 0
    // - Must be <= [InitialSupply]
    MinValidatorStake uint64 `serialize:"true" json:"minValidatorStake"`

    // MaxValidatorStake is the maximum amount of funds a single validator can
    // be allocated, including delegated funds.
    // Restrictions:
    // - Must be >= [MinValidatorStake]
    // - Must be <= [MaximumSupply]
    MaxValidatorStake uint64 `serialize:"true" json:"maxValidatorStake"`

    // MinStakeDuration is the minimum number of seconds a staker can stake for.
    // Restrictions:
    // - Must be > 0
    MinStakeDuration uint32 `serialize:"true" json:"minStakeDuration"`

    // MaxStakeDuration is the maximum number of seconds a staker can stake for.
    // Restrictions:
    // - Must be >= [MinStakeDuration]
    // - Must be <= [GlobalMaxStakeDuration]
    MaxStakeDuration uint32 `serialize:"true" json:"maxStakeDuration"`

    // MinDelegationFee is the minimum percentage a validator must charge a
    // delegator for delegating.
    // Restrictions:
    // - Must be <= [reward.PercentDenominator]
    MinDelegationFee uint32 `serialize:"true" json:"minDelegationFee"`

    // MinDelegatorStake is the minimum amount of funds required to become a
    // delegator.
    // Restrictions:
    // - Must be > 0
    MinDelegatorStake uint64 `serialize:"true" json:"minDelegatorStake"`

    // MaxValidatorWeightFactor is the factor which calculates the maximum
    // amount of delegation a validator can receive.
    // Note: a value of 1 effectively disables delegation.
    // Restrictions:
    // - Must be > 0
    MaxValidatorWeightFactor byte `serialize:"true" json:"maxValidatorWeightFactor"`

    // UptimeRequirement is the minimum percentage a validator must be online
    // and responsive to receive a reward.
    // Restrictions:
    // - Must be <= [reward.PercentDenominator]
    UptimeRequirement uint32 `serialize:"true" json:"uptimeRequirement"`

    // Authorizes this transformation
    SubnetAuth verify.Verifiable `serialize:"true" json:"subnetAuthorization"`
}
```

#### AddPermissionlessValidatorTx[​](#addpermissionlessvalidatortx "Direct link to heading")

```
type AddPermissionlessValidatorTx struct {
    // Metadata, inputs and outputs
    BaseTx `serialize:"true"`

    // Describes the validator
    Validator validator.Validator `serialize:"true" json:"validator"`

    // ID of the Avalanche L1 this validator is validating
    Subnet ids.ID `serialize:"true" json:"subnet"`

    // Where to send staked tokens when done validating
    StakeOuts []*avax.TransferableOutput `serialize:"true" json:"stake"`

    // Where to send validation rewards when done validating
    ValidatorRewardsOwner fx.Owner `serialize:"true" json:"validationRewardsOwner"`

    // Where to send delegation rewards when done validating
    DelegatorRewardsOwner fx.Owner `serialize:"true" json:"delegationRewardsOwner"`

    // Fee this validator charges delegators as a percentage, times 10,000
    // For example, if this validator has DelegationShares=300,000 then they
    // take 30% of rewards from delegators
    DelegationShares uint32 `serialize:"true" json:"shares"`
}
```

#### AddPermissionlessDelegatorTx[​](#addpermissionlessdelegatortx "Direct link to heading")

```
type AddPermissionlessDelegatorTx struct {
    // Metadata, inputs and outputs
    BaseTx `serialize:"true"`

    // Describes the validator
    Validator validator.Validator `serialize:"true" json:"validator"`

    // ID of the Avalanche L1 this validator is validating
    Subnet ids.ID `serialize:"true" json:"subnet"`

    // Where to send staked tokens when done validating
    Stake []*avax.TransferableOutput `serialize:"true" json:"stake"`

    // Where to send staking rewards when done validating
    RewardsOwner fx.Owner `serialize:"true" json:"rewardsOwner"`
}
```

#### New TypeIDs[​](#new-typeids "Direct link to heading")

```
ApricotProposalBlock         = 0
ApricotAbortBlock            = 1
ApricotCommitBlock           = 2
ApricotStandardBlock         = 3
ApricotAtomicBlock           = 4
secp256k1fx.TransferInput    = 5
secp256k1fx.MintOutput       = 6
secp256k1fx.TransferOutput   = 7
secp256k1fx.MintOperation    = 8
secp256k1fx.Credential       = 9
secp256k1fx.Input            = 10
secp256k1fx.OutputOwners     = 11
AddValidatorTx               = 12
AddSubnetValidatorTx         = 13
AddDelegatorTx               = 14
CreateChainTx                = 15
CreateSubnetTx               = 16
ImportTx                     = 17
ExportTx                     = 18
AdvanceTimeTx                = 19
RewardValidatorTx            = 20
stakeable.LockIn             = 21
stakeable.LockOut            = 22
RemoveSubnetValidatorTx      = 23
TransformSubnetTx            = 24
AddPermissionlessValidatorTx = 25
AddPermissionlessDelegatorTx = 26
EmptyProofOfPossession       = 27
BLSProofOfPossession         = 28
BanffProposalBlock           = 29
BanffAbortBlock              = 30
BanffCommitBlock             = 31
BanffStandardBlock           = 32
```

# Flow of a Single Blockchain (/docs/rpcs/other/guides/blockchain-flow)

---
title: Flow of a Single Blockchain
---

![](/images/flow1.png)

Intro[​](#intro "Direct link to heading")
-----------------------------------------

The Avalanche network consists of 3 built-in blockchains: the X-Chain, C-Chain, and P-Chain. The X-Chain is used to manage assets and uses the Avalanche consensus protocol. The C-Chain is used to create and interact with smart contracts and uses the Snowman consensus protocol. The P-Chain is used to coordinate validators and staking, and also uses the Snowman consensus protocol.

At the time of writing, the Avalanche network has ~1200 validators. A set of validators makes up an Avalanche L1. Avalanche L1s can validate one or more chains. It is a common misconception that one Avalanche L1 equals one chain; this is disproven by the primary Avalanche L1 of Avalanche, which is made up of the X-Chain, C-Chain, and P-Chain.

A node in the Avalanche network can either be a validator or a non-validator. A validator stakes AVAX tokens and participates in consensus to earn rewards. A non-validator does not participate in consensus or have any AVAX staked, but can be used as an API server.
Both validators and non-validators need to have their own copy of the chain and need to know the current state of the network. At the time of writing, there are ~1200 validators and ~1800 non-validators.

Each blockchain on Avalanche has several components: the virtual machine, database, consensus engine, sender, and handler. These components help the chain run smoothly. Blockchains also interact with the P2P layer and the chain router to send and receive messages.

Peer-to-Peer (P2P)[​](#peer-to-peer-p2p "Direct link to heading")
-----------------------------------------------------------------

### Outbound Messages[​](#outbound-messages "Direct link to heading")

[The `OutboundMsgBuilder` interface](https://github.com/ava-labs/avalanchego/blob/master/message/outbound_msg_builder.go) specifies methods that build messages of type `OutboundMessage`. Nodes communicate with other nodes by sending `OutboundMessage` messages.

All messaging functions in `OutboundMsgBuilder` can be categorized as follows:

- **Handshake** - Nodes need to be on a certain version before they can be accepted into the network.
- **State Sync** - A new node can ask other nodes for the current state of the network. It only syncs the required state for a specific block.
- **Bootstrapping** - Nodes can ask other nodes for blocks to build their own copy of the chain. A node can fetch all blocks from the locally last accepted block to the current last accepted block in the network.
- **Consensus** - Once a node is up to tip, it can participate in consensus! During consensus, a node conducts polls of several different small random samples of the validator set, communicating decisions on whether it has accepted or rejected a block.
- **App** - VMs communicate application-specific messages to other nodes through app messages. A common example is mempool gossiping.

Currently, AvalancheGo implements its own message serialization to communicate.
In the future, AvalancheGo will use protocol buffers to communicate.

### Network[​](#network "Direct link to heading")

[The networking interface](https://github.com/ava-labs/avalanchego/blob/master/network/network.go) is shared across all chains. It implements functions from the `ExternalSender` interface. The two functions it implements are `Send` and `Gossip`. `Send` sends a message of type `OutboundMessage` to a specific set of nodes (specified by an array of `NodeIDs`). `Gossip` sends a message of type `OutboundMessage` to a random group of nodes in an Avalanche L1 (which can include validators and non-validators). Gossiping is used to push transactions across the network. The networking protocol uses TLS to pass messages between peers.

Along with sending and gossiping, the networking library is also responsible for making and maintaining connections. Any node, either a validator or non-validator, will attempt to connect to the primary network.

Router[​](#router "Direct link to heading")
-------------------------------------------

[The `ChainRouter`](https://github.com/ava-labs/avalanchego/blob/master/snow/networking/router/chain_router.go) routes all incoming messages to their respective blockchains using the `ChainID`. It does this by pushing the messages onto the respective chain handler's queue. The `ChainRouter` references all existing chains on the network, such as the X-Chain, C-Chain, P-Chain, and possibly any other chain.

The `ChainRouter` handles timeouts as well. When sending messages on the P2P layer, timeouts are registered on the sender and cleared on the `ChainRouter` side when a response is received. If no response is received, the `ChainRouter` triggers a timeout. Because timeouts are handled on the `ChainRouter` side, the handler can rely on it: timeouts are triggered when peers do not respond, and the `ChainRouter` will still notify the handler of failure cases. The timeout manager within the `ChainRouter` is also adaptive.
If the network is experiencing long latencies, timeouts will be adjusted as well.

Handler[​](#handler "Direct link to heading")
---------------------------------------------

The main function of [the `Handler`](https://github.com/ava-labs/avalanchego/blob/master/snow/networking/handler/handler.go) is to pass messages from the network to the consensus engine. It receives these messages from the `ChainRouter`. It passes messages along by pushing them onto a sync or async queue (depending on the message type). Messages are then popped from the queue, parsed, and routed to the correct function in the consensus engine. This can be one of the following:

- **State sync message (sync queue)**
- **Bootstrapping message (sync queue)**
- **Consensus message (sync queue)**
- **App message (async queue)**

Sender[​](#sender "Direct link to heading")
-------------------------------------------

The main role of [the `sender`](https://github.com/ava-labs/avalanchego/blob/master/snow/networking/sender/sender.go) is to build and send outbound messages. It is actually a very thin wrapper around the normal networking code. The main difference is that the sender registers timeouts and tells the router to expect a response message. The timer starts on the sender side. If there is no response, the sender will send a failed response to the router. If a node is repeatedly unresponsive, that node will get benched and the sender will immediately start marking its messages as failed. If a sufficient portion of the network deems a node benched, that node might not receive rewards (as a validator).

Consensus Engine[​](#consensus-engine "Direct link to heading")
---------------------------------------------------------------

Consensus is defined as getting a group of distributed systems to agree on an outcome. In the case of the Avalanche network, consensus is achieved when validators are in agreement with the state of the blockchain.
The novel consensus algorithm is documented in the [white paper](https://assets.website-files.com/5d80307810123f5ffbb34d6e/6009805681b416f34dcae012_Avalanche%20Consensus%20Whitepaper.pdf). There are two main consensus algorithms: Avalanche and [Snowman](https://github.com/ava-labs/avalanchego/blob/master/snow/consensus/snowman/consensus.go). The engine is responsible for proposing a new block to consensus, repeatedly polling the network for decisions (accept/reject), and communicating that decision to the `Sender`.

Blockchain Creation[​](#blockchain-creation "Direct link to heading")
---------------------------------------------------------------------

[The `Manager`](https://github.com/ava-labs/avalanchego/blob/master/chains/manager.go) is what kick-starts everything in regards to blockchain creation, starting with the P-Chain. Once the P-Chain finishes bootstrapping, it will kick-start the C-Chain, the X-Chain, and any other chains. The `Manager`'s job is not done yet: if a create-chain transaction is seen by a validator, a whole new process to create a chain will be started by the `Manager`. This can happen dynamically, long after the three chains in the Primary Network have been created and bootstrapped.

# Issuing API Calls (/docs/rpcs/other/guides/issuing-api-calls)

---
title: Issuing API Calls
description: This guide explains how to make calls to APIs exposed by Avalanche nodes.
---

Endpoints[​](#endpoints "Direct link to heading")
-------------------------------------------------

An API call is made to an endpoint, which is a URL made up of the base URI (the address and port of the node) and the path of the particular endpoint the API call targets.

### Base URL[​](#base-url "Direct link to heading")

The base of the URL is always:

`[node-ip]:[http-port]`

where

- `node-ip` is the IP address of the node the call is to.
- `http-port` is the port the node listens on for HTTP calls.
This is specified by the [command-line argument](/docs/nodes/configure/configs-flags#http-server) `http-port` (default value `9650`).

For example, if you're making RPC calls on the local node, the base URL might look like this: `127.0.0.1:9650`. If you're making RPC calls to remote nodes, then instead of `127.0.0.1` you should use the public IP of the server where the node is running. Note that by default the node will only accept API calls on the local interface, so you will need to set up the [`http-host`](/docs/nodes/configure/configs-flags#--http-host-string) config flag on the node. Also, you will need to make sure the firewall and/or security policy allows access to the `http-port` from the internet.

When setting up RPC access to a node, make sure you don't leave the `http-port` accessible to everyone! There are malicious actors that scan for nodes with unrestricted access to their RPC port and then spam those nodes with resource-intensive queries, which can knock the node offline. Only allow access to your node's RPC port from known IP addresses!

### Endpoint Path[​](#endpoint-path "Direct link to heading")

Each API's documentation specifies what endpoint path a user should make calls to in order to access the API's methods. So for the Admin API, the endpoint path is `/ext/admin`, for the Info API it is `/ext/info`, and so on. Note that some APIs have additional path components, most notably the chain RPC endpoints, which include the Avalanche L1 chain RPCs. We'll go over those in detail in the next section.

So, by combining the base URL and the endpoint path, we get the complete URL for making RPC calls.
For example, to make a local RPC call on the Info API, the full URL would be:

```
http://127.0.0.1:9650/ext/info
```

Primary Network and Avalanche L1 RPC Calls[​](#primary-network-and-avalanche-l1-rpc-calls "Direct link to heading")
-------------------------------------------------------------------------------------------------------------------

Besides the APIs that are local to the node, like the Admin or Metrics APIs, nodes also expose endpoints for talking to particular chains that are either part of the Primary Network (the X, P, and C chains) or part of any Avalanche L1s the node might be syncing or validating.

### Primary Network Endpoints[​](#primary-network-endpoints "Direct link to heading")

The Primary Network consists of three chains: the X, P, and C chains. As those chains are present on every node, there are also convenient aliases defined that can be used instead of the full blockchainIDs. So, the endpoints look like:

```
/ext/bc/X
/ext/bc/P
/ext/bc/C
```

### C-Chain and Subnet-EVM Endpoints[​](#c-chain-and-subnet-evm-endpoints "Direct link to heading")

The C-Chain and many Avalanche L1s run a version of the Ethereum Virtual Machine (EVM). The EVM exposes its own endpoints, which are also accessible on the node: JSON-RPC and WebSocket.

#### JSON-RPC EVM Endpoints[​](#json-rpc-evm-endpoints "Direct link to heading")

To interact with the C-Chain EVM via JSON-RPC, use the endpoint:

```
/ext/bc/C/rpc
```

To interact with Avalanche L1 instances of the EVM via JSON-RPC, use the endpoint:

```
/ext/bc/[blockchainID]/rpc
```

where `blockchainID` is the ID of the blockchain running the EVM.
So for example, the RPC URL for the DFK Network (an Avalanche L1 that runs the DeFi Kingdoms: Crystalvale game) running on a local node would be:

```
http://127.0.0.1:9650/ext/bc/q2aTwKuyzgs8pynF7UXBZCU7DejbZbZ6EUyHr3JQzYgwNPUPi/rpc
```

Or for the WAGMI Avalanche L1 on the Fuji testnet:

```
http://127.0.0.1:9650/ext/bc/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/rpc
```

#### Websocket EVM Endpoints[​](#websocket-evm-endpoints "Direct link to heading")

To interact with the C-Chain via the websocket endpoint, use:

```
/ext/bc/C/ws
```

To interact with other instances of the EVM via the websocket endpoint, use:

```
/ext/bc/[blockchainID]/ws
```

where `blockchainID` is the ID of the blockchain running the EVM. For example, to interact with the C-Chain's Ethereum APIs via websocket on localhost, you can use:

```
ws://127.0.0.1:9650/ext/bc/C/ws
```

When using the [Public API](/docs/tooling/rpc-providers) or another host that supports HTTPS, use `https://` or `wss://` instead of `http://` or `ws://`. Also, note that the [public API](/docs/tooling/rpc-providers#using-the-public-api-nodes) only supports C-Chain websocket API calls for API methods that don't exist on the C-Chain's HTTP API.

Making a JSON RPC Request[​](#making-a-json-rpc-request "Direct link to heading")
---------------------------------------------------------------------------------

Most of the built-in APIs use the [JSON RPC 2.0](https://www.jsonrpc.org/specification) format to describe their requests and responses. Such APIs include the Platform API and the X-Chain API.

Suppose we want to call the `getTxStatus` method of the [X-Chain API](/docs/api-reference/x-chain/api). The X-Chain API documentation tells us that the endpoint for this API is `/ext/bc/X`.
That means that the endpoint we send our API call to is:

`[node-ip]:[http-port]/ext/bc/X`

The X-Chain API documentation tells us that the signature of `getTxStatus` is:

[`avm.getTxStatus`](/docs/api-reference/x-chain/api#avmgettxstatus)`(txID:bytes) -> (status:string)`

where:

- Argument `txID` is the ID of the transaction we're getting the status of.
- Returned value `status` is the status of the transaction in question.

To call this method, then:

```
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"avm.getTxStatus",
    "params" :{
        "txID":"2QouvFWUbjuySRxeX5xMbNCuAaKWfbk5FeEa2JmoF85RKLk2dD"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```

- `jsonrpc` specifies the version of the JSON RPC protocol. (In practice, it is always 2.0.)
- `method` specifies the service (`avm`) and method (`getTxStatus`) that we want to invoke.
- `params` specifies the arguments to the method.
- `id` is the ID of this request. Request IDs should be unique.

That's it!

### JSON RPC Success Response[​](#json-rpc-success-response "Direct link to heading")

If the call is successful, the response will look like this:

```
{
    "jsonrpc": "2.0",
    "result": {
        "Status": "Accepted"
    },
    "id": 1
}
```

- `id` is the ID of the request that this response corresponds to.
- `result` is the returned values of `getTxStatus`.

### JSON RPC Error Response[​](#json-rpc-error-response "Direct link to heading")

If the API method invoked returns an error, then the response will have a field `error` in place of `result`. Additionally, there is an extra field, `data`, which holds additional information about the error that occurred.
Such a response would look like:

```
{
    "jsonrpc": "2.0",
    "error": {
        "code": -32600,
        "message": "[Some error message here]",
        "data": [Object with additional information about the error]
    },
    "id": 1
}
```

Other API Formats[​](#other-api-formats "Direct link to heading")
-----------------------------------------------------------------

Some APIs may use a standard other than JSON RPC 2.0 to format their requests and responses. Such extensions should specify how to make calls and parse responses to them in their documentation.

Sending and Receiving Bytes[​](#sending-and-receiving-bytes "Direct link to heading")
-------------------------------------------------------------------------------------

Unless otherwise noted, when bytes are sent in an API call/response, they are in hex representation. However, Transaction IDs (TXIDs), ChainIDs, and subnetIDs are in [CB58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) representation, a base-58 encoding with a checksum.

# Transaction Fees (/docs/rpcs/other/guides/txn-fees)

---
title: Transaction Fees
---

In order to prevent spam, transactions on Avalanche require the payment of a transaction fee. The fee is paid in AVAX. **The transaction fee is burned (destroyed forever).**

When you issue a transaction through Avalanche's API, the transaction fee is automatically deducted from one of the addresses you control. The [avalanchego wallet](https://github.com/ava-labs/avalanchego/blob/master/wallet/chain) contains example code written in Golang for building and signing transactions on all three mainnet chains.

X-Chain Fees[​](#fee-schedule)
------------------------------

The X-Chain currently operates under a fixed fee mechanism.
This table shows the X-Chain transaction fee schedule:

```
+----------+---------------------------+--------------------------------+
| Chain    | Transaction Type          | Mainnet Transaction Fee (AVAX) |
+----------+---------------------------+--------------------------------+
| X        | Send                      | 0.001                          |
+----------+---------------------------+--------------------------------+
| X        | Create Asset              | 0.01                           |
+----------+---------------------------+--------------------------------+
| X        | Mint Asset                | 0.001                          |
+----------+---------------------------+--------------------------------+
| X        | Import AVAX               | 0.001                          |
+----------+---------------------------+--------------------------------+
| X        | Export AVAX               | 0.001                          |
+----------+---------------------------+--------------------------------+
```

C-Chain Fees[​](#c-chain-fees)
------------------------------

The Avalanche C-Chain uses an algorithm to determine the "base fee" for a transaction. The base fee increases when network utilization is above the target utilization and decreases when network utilization is below the target.

### Dynamic Fee Transactions[​](#dynamic-fee-transactions)

Transaction fees for non-atomic transactions are based on Ethereum's EIP-1559 style Dynamic Fee Transactions, which consist of a gas fee cap and a gas tip cap. The fee cap specifies the maximum price the transaction is willing to pay per unit of gas. The tip cap (also called the priority fee) specifies the maximum amount above the base fee that the transaction is willing to pay per unit of gas. Therefore, the effective gas price paid by a transaction will be `min(gasFeeCap, baseFee + gasTipCap)`. Unlike in Ethereum, where the priority fee is paid to the miner that produces the block, in Avalanche both the base fee and the priority fee are burned. For legacy transactions, which only specify a single gas price, the gas price serves as both the gas fee cap and the gas tip cap.
Use the [`eth_baseFee`](/docs/api-reference/c-chain/api#eth_basefee) API method to estimate the base fee for the next block. If more blocks are produced between the time that you construct your transaction and when it is included in a block, the base fee could be different from the base fee estimated by the API call, so it is important to treat this value as an estimate.

Next, use the [eth\_maxPriorityFeePerGas](/docs/api-reference/c-chain/api#eth_maxpriorityfeepergas) API call to estimate the priority fee needed to be included in a block. This API call will look at the most recent blocks and see what tips have been paid by recent transactions in order to be included in the block.

Transactions are ordered by the priority fee, then the timestamp (oldest first).

Based on this information, you can specify the `gasFeeCap` and `gasTipCap` to your liking, depending on how you prioritize getting your transaction included as quickly as possible versus minimizing the price paid per unit of gas.

#### Base Fee[​](#base-fee)

The base fee can go as low as 1 nAVAX (Gwei) and has no upper bound. You can use the [`eth_baseFee`](/docs/api-reference/c-chain/api#eth_basefee) and [eth\_maxPriorityFeePerGas](/docs/api-reference/c-chain/api#eth_maxpriorityfeepergas) API methods, or [Snowtrace's C-Chain Gas Tracker](https://snowtrace.io/gastracker), to estimate the gas price to use in your transactions.

#### Further Readings[​](#further-readings)

- [Adjusting Gas Price During High Network Activity](/docs/dapps/advanced-tutorials/manually-adjust-gas-price)
- [Sending Transactions with Dynamic Fees using JavaScript](/docs/dapps/advanced-tutorials/dynamic-gas-fees)

### Atomic Transaction Fees[​](#atomic-transaction-fees)

C-Chain atomic transactions (that is, imports and exports from/to other chains) charge dynamic fees based on the amount of gas used by the transaction and the base fee of the block that includes the atomic transaction.
Gas Used:

```
+---------------------+-------+
| Item                | Gas   |
+---------------------+-------+
| Unsigned Tx Byte    | 1     |
+---------------------+-------+
| Signature           | 1000  |
+---------------------+-------+
| Per Atomic Tx       | 10000 |
+---------------------+-------+
```

Therefore, the gas used by an atomic transaction is `1 * len(unsignedTxBytes) + 1,000 * numSignatures + 10,000`.

The transaction fee additionally takes the base fee into account. Because atomic transactions use units denominated to 9 decimal places, the base fee must be converted to 9 decimal places before calculating the actual fee paid by the transaction. Therefore, the actual fee is `gasUsed * baseFee (converted to 9 decimals)`.

P-Chain Fees[​](#p-chain-fees)
------------------------------

The Avalanche P-Chain utilizes a dynamic fee mechanism to optimize transaction costs and network utilization. This system adapts fees based on gas consumption to maintain a target utilization rate.

### Dimensions of Gas Consumption

Gas consumption is measured across four dimensions:

1. **Bandwidth** The transaction size in bytes.
2. **Reads** The number of state/database reads.
3. **Writes** The number of state/database writes.
4. **Compute** The compute time in microseconds.

The total gas consumed ($G$) by a transaction is:

```math
G = B + 1000R + 1000W + 4C
```

The current fee dimension weight configurations, as well as the parameter configurations of the P-Chain, can be read at any time with the [`platform.getFeeConfig`](/docs/api-reference/p-chain/api#platformgetfeeconfig) API endpoint.

### Fee Adjustment Mechanism

Fees adjust dynamically based on excess gas consumption, the difference between current gas usage and the target gas rate. The exponential adjustment ensures consistent reactivity regardless of the current gas price. Fee changes scale proportionally with excess gas consumption, maintaining fairness and network stability.
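The two gas formulas above — C-Chain atomic transaction gas and the P-Chain's multidimensional gas — can be sketched in Go. This is an illustrative calculation only; the function names are hypothetical, not part of AvalancheGo:

```go
package main

import "fmt"

// atomicTxGas computes C-Chain atomic transaction gas:
// 1 per unsigned tx byte + 1,000 per signature + 10,000 fixed cost.
func atomicTxGas(unsignedTxBytes, numSignatures uint64) uint64 {
	return unsignedTxBytes + 1_000*numSignatures + 10_000
}

// pChainGas computes P-Chain gas G = B + 1000R + 1000W + 4C from the four
// dimensions: bandwidth (bytes), reads, writes, and compute (microseconds).
func pChainGas(bandwidth, reads, writes, compute uint64) uint64 {
	return bandwidth + 1_000*reads + 1_000*writes + 4*compute
}

func main() {
	// A 300-byte atomic tx with 2 signatures: 300 + 2,000 + 10,000 = 12,300 gas.
	fmt.Println(atomicTxGas(300, 2)) // 12300

	// A 500-byte P-Chain tx with 3 reads, 2 writes, and 1,000 µs of compute:
	// 500 + 3,000 + 2,000 + 4,000 = 9,500 gas.
	fmt.Println(pChainGas(500, 3, 2, 1000)) // 9500
}
```

Remember that the P-Chain dimension weights shown here are the ones quoted above; verify the live values via `platform.getFeeConfig` before relying on them.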
The technical specification of this mechanism is documented in [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md#mechanism).

# X-Chain Migration (/docs/rpcs/other/guides/x-chain-migration)

---
title: X-Chain Migration
---

Overview[​](#overview "Direct link to heading")
-----------------------------------------------

This document summarizes all of the changes made to the X-Chain API to support Avalanche Cortina (v1.10.0), which migrates the X-Chain to run Snowman++. In summary, the core transaction submission and confirmation flow is unchanged; however, there are new APIs that must be called to index all transactions.

Transaction Broadcast and Confirmation[​](#transaction-broadcast-and-confirmation "Direct link to heading")
-----------------------------------------------------------------------------------------------------------

The transaction format on the X-Chain does not change in Cortina. This means that wallets that have already integrated with the X-Chain don't need to change how they sign transactions. Additionally, there is no change to the format of the [avm.issueTx](/docs/api-reference/x-chain/api#avmissuetx) or the [avm.getTx](/docs/api-reference/x-chain/api#avmgettx) API. However, the [avm.getTxStatus](/docs/api-reference/x-chain/api#avmgettxstatus) endpoint is now deprecated and its usage should be replaced with [avm.getTx](/docs/api-reference/x-chain/api#avmgettx) (which only returns accepted transactions for AvalancheGo >= v1.9.12). [avm.getTxStatus](/docs/api-reference/x-chain/api#avmgettxstatus) will still work up to and after the Cortina activation if you wish to migrate after the network upgrade has occurred.

Vertex -> Block Indexing[​](#vertex---block-indexing "Direct link to heading")
------------------------------------------------------------------------------

Before Cortina, indexing the X-Chain required polling the `/ext/index/X/vtx` endpoint to fetch new vertices.
During the Cortina activation, a “stop vertex” will be produced using a [new codec version](https://github.com/ava-labs/avalanchego/blob/c27721a8da1397b218ce9e9ec69839b8a30f9860/snow/engine/avalanche/vertex/codec.go#L17-L18) that will contain no transactions. This new vertex type will be the [same format](https://github.com/ava-labs/avalanchego/blob/c27721a8da1397b218ce9e9ec69839b8a30f9860/snow/engine/avalanche/vertex/stateless_vertex.go#L95-L102) as previous vertices.

To ensure historical data can still be accessed in Cortina, the `/ext/index/X/vtx` endpoint will remain accessible even though it will no longer be populated with chain data. The indexes for the X-Chain tx and vtx endpoints will never increase again. The index for X-Chain blocks will increase as new blocks are added.

After Cortina activation, you will need to migrate to using the new `/ext/index/X/block` endpoint (which shares the same semantics as [/ext/index/P/block](/docs/api-reference/index-api#p-chain-blocks)) to continue indexing X-Chain activity. Because X-Chain ordering is deterministic in Cortina, X-Chain blocks across all heights will be consistent across all nodes and will include a timestamp.
Here is an example of iterating over these blocks in Golang:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/ava-labs/avalanchego/indexer"
	"github.com/ava-labs/avalanchego/vms/proposervm/block"
	"github.com/ava-labs/avalanchego/wallet/chain/x"
	"github.com/ava-labs/avalanchego/wallet/subnet/primary"
)

func main() {
	var (
		uri       = fmt.Sprintf("%s/ext/index/X/block", primary.LocalAPIURI)
		client    = indexer.NewClient(uri)
		ctx       = context.Background()
		nextIndex uint64
	)
	for {
		log.Printf("polling for next accepted block")
		container, err := client.GetContainerByIndex(ctx, nextIndex)
		if err != nil {
			time.Sleep(time.Second)
			continue
		}

		proposerVMBlock, err := block.Parse(container.Bytes)
		if err != nil {
			log.Fatalf("failed to parse proposervm block: %s\n", err)
		}

		avmBlockBytes := proposerVMBlock.Block()
		avmBlock, err := x.Parser.ParseBlock(avmBlockBytes)
		if err != nil {
			log.Fatalf("failed to parse avm block: %s\n", err)
		}

		acceptedTxs := avmBlock.Txs()
		log.Printf("accepted block %s with %d transactions", avmBlock.ID(), len(acceptedTxs))
		for _, tx := range acceptedTxs {
			log.Printf("accepted transaction %s", tx.ID())
		}

		nextIndex++
	}
}
```

After Cortina activation, it will also be possible to fetch X-Chain blocks directly without enabling the Index API. You can use the [avm.getBlock](/docs/api-reference/x-chain/api#avmgetblock), [avm.getBlockByHeight](/docs/api-reference/x-chain/api#avmgetblockbyheight), and [avm.getHeight](/docs/api-reference/x-chain/api#avmgetheight) endpoints to do so. This, again, will be similar to the [P-Chain semantics](/docs/api-reference/p-chain/api#platformgetblock).

Deprecated API Calls
--------------------

This long-term deprecation effort will better align usage of AvalancheGo with its purpose: to be a minimal and efficient runtime that supports only what is required to validate the Primary Network and Avalanche L1s.
Integrators should make plans to migrate to tools and services that are better optimized for serving queries over Avalanche Network state and avoid keeping any keys on the node itself. This deprecation ONLY applies to APIs that AvalancheGo exposes over the HTTP port. Transaction types with similar names to these APIs are NOT being deprecated.

- ipcs
  - ipcs.publishBlockchain
  - ipcs.unpublishBlockchain
  - ipcs.getPublishedBlockchains
- keystore
  - keystore.createUser
  - keystore.deleteUser
  - keystore.listUsers
  - keystore.importUser
  - keystore.exportUser
- avm/pubsub
- avm
  - avm.getAddressTxs
  - avm.getBalance
  - avm.getAllBalances
  - avm.createAsset
  - avm.createFixedCapAsset
  - avm.createVariableCapAsset
  - avm.createNFTAsset
  - avm.createAddress
  - avm.listAddresses
  - avm.exportKey
  - avm.importKey
  - avm.mint
  - avm.sendNFT
  - avm.mintNFT
  - avm.import
  - avm.export
  - avm.send
  - avm.sendMultiple
- avm/wallet
  - wallet.issueTx
  - wallet.send
  - wallet.sendMultiple
- platform
  - platform.exportKey
  - platform.importKey
  - platform.getBalance
  - platform.createAddress
  - platform.listAddresses
  - platform.getSubnets
  - platform.addValidator
  - platform.addDelegator
  - platform.addSubnetValidator
  - platform.createSubnet
  - platform.exportAVAX
  - platform.importAVAX
  - platform.createBlockchain
  - platform.getBlockchains
  - platform.getStake
  - platform.getMaxStakeAmount
  - platform.getRewardUTXOs

Cortina FAQ
-----------

### Do I Have to Upgrade my Node?

If you don't upgrade your validator to `v1.10.0` before the Avalanche Mainnet activation date, your node will be marked as offline and other nodes will report your node as having lower uptime, which may jeopardize your staking rewards.

### Is There any Change in Hardware Requirements?

No.
### Will Updating Decrease my Validator's Uptime?

No. As a reminder, you can check your validator's estimated uptime using the [`info.uptime` API call](/docs/api-reference/info-api#infouptime).

### I Think Something Is Wrong. What Should I Do?

First, make sure that you've read the documentation thoroughly and checked the [FAQs](https://support.avax.network/en/). If you don't see an answer to your question, go to our [Discord](https://discord.com/invite/RwXY7P6) server and search for your question. If it has not already been asked, please post it in the appropriate channel.

# Avalanche Network Protocol (/docs/rpcs/other/standards/avalanche-network-protocol)

---
title: Avalanche Network Protocol
---

Overview
--------

The Avalanche network protocol defines the core communication format between Avalanche nodes. It uses the [primitive serialization](/docs/api-reference/standards/serialization-primitives) format for payload packing. `"Containers"` are mentioned extensively in this description; a container is simply a generic term for blocks.

This document describes the protocol for peer-to-peer communication using Protocol Buffers (proto3). The protocol defines a set of messages exchanged between peers in a peer-to-peer network. Each message is represented by the `Message` proto message, which can encapsulate various types of messages, including network messages, state-sync messages, bootstrapping messages, consensus messages, and application messages.

Message
-------

The `Message` proto message is the main container for all peer-to-peer communication. It uses the `oneof` construct to represent different message types. The supported compression algorithms include Gzip and Zstd.
```proto
message Message {
  oneof message {
    bytes compressed_gzip = 1;
    bytes compressed_zstd = 2;
    // ... (other compression algorithms can be added)
    Ping ping = 11;
    Pong pong = 12;
    Version version = 13;
    PeerList peer_list = 14;
    // ... (other message types)
  }
}
```

### Compression

The `compressed_gzip` and `compressed_zstd` fields are used for Gzip and Zstd compression, respectively, of the encapsulated message. These fields are set only if the message type supports compression.

Network Messages
----------------

### Ping

The `Ping` message reports a peer's perceived uptime percentage.

```proto
message Ping {
  uint32 uptime = 1;
  repeated SubnetUptime subnet_uptimes = 2;
}
```

- `uptime`: Uptime percentage on the primary network \[0, 100\].
- `subnet_uptimes`: Uptime percentages on Avalanche L1s.

### Pong

The `Pong` message is sent in response to a `Ping` with the perceived uptime of the peer.

```proto
message Pong {
  uint32 uptime = 1;                        // Deprecated: uptime is now sent in Ping
  repeated SubnetUptime subnet_uptimes = 2; // Deprecated: uptime is now sent in Ping
}
```

### Version

The `Version` message is the first outbound message sent to a peer during the p2p handshake.

```proto
message Version {
  uint32 network_id = 1;
  uint64 my_time = 2;
  bytes ip_addr = 3;
  uint32 ip_port = 4;
  string my_version = 5;
  uint64 my_version_time = 6;
  bytes sig = 7;
  repeated bytes tracked_subnets = 8;
}
```

- `network_id`: Network identifier (e.g., local, testnet, Mainnet).
- `my_time`: Unix timestamp when the `Version` message was created.
- `ip_addr`: IP address of the peer.
- `ip_port`: IP port of the peer.
- `my_version`: Avalanche client version.
- `my_version_time`: Timestamp of the IP.
- `sig`: Signature of the peer IP port pair at a provided timestamp.
- `tracked_subnets`: Avalanche L1s the peer is tracking.

### PeerList

The `PeerList` message contains network-level metadata for a set of validators.

```proto
message PeerList {
  repeated ClaimedIpPort claimed_ip_ports = 1;
}
```

- `claimed_ip_ports`: List of claimed IP and port pairs.

### PeerListAck

The `PeerListAck` message is sent in response to `PeerList` to acknowledge the subset of peers that the peer will attempt to connect to.

```proto
message PeerListAck {
  reserved 1; // deprecated; used to be tx_ids
  repeated PeerAck peer_acks = 2;
}
```

- `peer_acks`: List of acknowledged peers.

State-Sync Messages
-------------------

### GetStateSummaryFrontier

The `GetStateSummaryFrontier` message requests a peer's most recently accepted state summary.

```proto
message GetStateSummaryFrontier {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.

### StateSummaryFrontier

The `StateSummaryFrontier` message is sent in response to a `GetStateSummaryFrontier` request.

```proto
message StateSummaryFrontier {
  bytes chain_id = 1;
  uint32 request_id = 2;
  bytes summary = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `GetStateSummaryFrontier` request.
- `summary`: The requested state summary.

### GetAcceptedStateSummary

The `GetAcceptedStateSummary` message requests a set of state summaries at specified block heights.
```proto
message GetAcceptedStateSummary {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  repeated uint64 heights = 4;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `heights`: Heights being requested.

### AcceptedStateSummary

The `AcceptedStateSummary` message is sent in response to `GetAcceptedStateSummary`.

```proto
message AcceptedStateSummary {
  bytes chain_id = 1;
  uint32 request_id = 2;
  repeated bytes summary_ids = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `GetAcceptedStateSummary` request.
- `summary_ids`: State summary IDs.

Bootstrapping Messages
----------------------

### GetAcceptedFrontier

The `GetAcceptedFrontier` message requests the accepted frontier from a peer.

```proto
message GetAcceptedFrontier {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  EngineType engine_type = 4;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `engine_type`: Consensus type the remote peer should use to handle this message.

### AcceptedFrontier

The `AcceptedFrontier` message contains the remote peer's last accepted frontier.

```proto
message AcceptedFrontier {
  reserved 4; // Until Cortina upgrade is activated
  bytes chain_id = 1;
  uint32 request_id = 2;
  bytes container_id = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `GetAcceptedFrontier` request.
- `container_id`: The ID of the last accepted frontier.
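Every request message above carries a `request_id` that the responder echoes back, plus a `deadline` in nanoseconds bounding how long the sender will wait. The bookkeeping this implies on the sending side can be sketched as follows. This is an illustrative model only, not AvalancheGo's actual implementation; the class and method names are hypothetical:

```python
import time

class OutstandingRequests:
    """Illustrative tracker correlating responses to requests by
    (chain_id, request_id) and expiring them by a nanosecond deadline."""

    def __init__(self):
        self.pending = {}  # (chain_id, request_id) -> local deadline in ns

    def register(self, chain_id: bytes, request_id: int, timeout_ns: int) -> None:
        # Record when we stop waiting; the deadline also travels with the
        # request so the remote peer knows how long the sender will wait.
        self.pending[(chain_id, request_id)] = time.monotonic_ns() + timeout_ns

    def on_response(self, chain_id: bytes, request_id: int) -> bool:
        # A response only matches if the request is still outstanding
        # and the deadline has not passed.
        deadline = self.pending.pop((chain_id, request_id), None)
        if deadline is None:
            return False  # unknown, duplicate, or already-expired request
        return time.monotonic_ns() <= deadline

reqs = OutstandingRequests()
reqs.register(b"X", 7, timeout_ns=2_000_000_000)  # 2 s deadline
print(reqs.on_response(b"X", 7))  # matched in time -> True
print(reqs.on_response(b"X", 7))  # duplicate response -> False
```

Dropping unmatched or late responses is what makes the shared `request_id`/`deadline` fields safe: a slow peer cannot have its stale answer mistaken for a reply to a newer request.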
### GetAccepted

The `GetAccepted` message sends a request with the sender's accepted frontier to a remote peer.

```proto
message GetAccepted {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  repeated bytes container_ids = 4;
  EngineType engine_type = 5;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this message.
- `deadline`: Timeout (ns) for this request.
- `container_ids`: The sender's accepted frontier.
- `engine_type`: Consensus type to handle this message.

### Accepted

The `Accepted` message is sent in response to `GetAccepted`.

```proto
message Accepted {
  reserved 4; // Until Cortina upgrade is activated
  bytes chain_id = 1;
  uint32 request_id = 2;
  repeated bytes container_ids = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `GetAccepted` request.
- `container_ids`: Subset of container IDs from the `GetAccepted` request that the sender has accepted.

### GetAncestors

The `GetAncestors` message requests the ancestors for a given container.

```proto
message GetAncestors {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  bytes container_id = 4;
  EngineType engine_type = 5;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `container_id`: Container for which ancestors are being requested.
- `engine_type`: Consensus type to handle this message.

### Ancestors

The `Ancestors` message is sent in response to `GetAncestors`.

```proto
message Ancestors {
  reserved 4; // Until Cortina upgrade is activated
  bytes chain_id = 1;
  uint32 request_id = 2;
  repeated bytes containers = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `GetAncestors` request.
- `containers`: Ancestry for the requested container.

Consensus Messages
------------------

### Get

The `Get` message requests a container from a remote peer.

```proto
message Get {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  bytes container_id = 4;
  EngineType engine_type = 5;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `container_id`: Container being requested.
- `engine_type`: Consensus type to handle this message.

### Put

The `Put` message is sent in response to `Get` with the requested block.

```proto
message Put {
  bytes chain_id = 1;
  uint32 request_id = 2;
  bytes container = 3;
  EngineType engine_type = 4;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `Get` request.
- `container`: Requested container.
- `engine_type`: Consensus type to handle this message.

### PushQuery

The `PushQuery` message requests the preferences of a remote peer given a container.

```proto
message PushQuery {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  bytes container = 4;
  EngineType engine_type = 5;
  uint64 requested_height = 6;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `container`: Container being gossiped.
- `engine_type`: Consensus type to handle this message.
- `requested_height`: Requesting peer's last accepted height.

### PullQuery

The `PullQuery` message requests the preferences of a remote peer given a container ID.
```proto
message PullQuery {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  bytes container_id = 4;
  EngineType engine_type = 5;
  uint64 requested_height = 6;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `container_id`: Container ID being gossiped.
- `engine_type`: Consensus type to handle this message.
- `requested_height`: Requesting peer's last accepted height.

### Chits

The `Chits` message contains the preferences of a peer in response to a `PushQuery` or `PullQuery` message.

```proto
message Chits {
  bytes chain_id = 1;
  uint32 request_id = 2;
  bytes preferred_id = 3;
  bytes accepted_id = 4;
  bytes preferred_id_at_height = 5;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `PushQuery`/`PullQuery` request.
- `preferred_id`: Currently preferred block.
- `accepted_id`: Last accepted block.
- `preferred_id_at_height`: Currently preferred block at the requested height.

Application Messages
--------------------

### AppRequest

The `AppRequest` message is a VM-defined request.

```proto
message AppRequest {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  bytes app_bytes = 4;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `app_bytes`: Request body.

### AppResponse

The `AppResponse` message is a VM-defined response sent in response to `AppRequest`.

```proto
message AppResponse {
  bytes chain_id = 1;
  uint32 request_id = 2;
  bytes app_bytes = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `AppRequest`.
- `app_bytes`: Response body.
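To make the query flow above concrete: after sending `PushQuery`/`PullQuery` to a sample of peers, a node tallies the `preferred_id` in each `Chits` response and checks whether any block reached a quorum threshold (often called alpha in Avalanche consensus). The sketch below is a simplified illustration of that tally, not AvalancheGo's implementation; the function name and threshold handling are hypothetical:

```python
from collections import Counter

def poll_result(chits_preferred_ids, alpha):
    """Return the block ID preferred by at least `alpha` polled peers,
    or None if no block reached the quorum threshold."""
    counts = Counter(chits_preferred_ids)
    block_id, votes = counts.most_common(1)[0]
    return block_id if votes >= alpha else None

# One poll of k=4 peers: three prefer blkA, one prefers blkB.
votes = [b"blkA", b"blkA", b"blkB", b"blkA"]
print(poll_result(votes, alpha=3))  # b'blkA' reached quorum
print(poll_result(votes, alpha=4))  # None: no block reached quorum
```

In the real protocol, successive successful polls build confidence in a block before it is accepted; this snippet only shows the per-poll tally.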
### AppGossip

The `AppGossip` message is a VM-defined message.

```proto
message AppGossip {
  bytes chain_id = 1;
  bytes app_bytes = 2;
}
```

- `chain_id`: Chain the message is for.
- `app_bytes`: Message body.

# Cryptographic Primitives (/docs/rpcs/other/standards/cryptographic-primitives)

---
title: Cryptographic Primitives
---

Avalanche uses a variety of cryptographic primitives for its different functions. This document summarizes the type and kind of cryptography used at the network and blockchain layers.

## Cryptography in the Network Layer

Avalanche uses Transport Layer Security (TLS) to protect node-to-node communications from eavesdroppers. TLS combines the practicality of public-key cryptography with the efficiency of symmetric-key cryptography. This has resulted in TLS becoming the standard for internet communication. Whereas most classical consensus protocols employ public-key cryptography to prove receipt of messages to third parties, the novel Snow\* consensus family does not require such proofs. This enables Avalanche to employ TLS in authenticating stakers and eliminates the need for costly public-key cryptography for signing network messages.

### TLS Certificates

Avalanche does not rely on any centralized third parties, and in particular, it does not use certificates issued by third-party authenticators. All certificates used within the network layer to identify endpoints are self-signed, thus creating a self-sovereign identity layer. No third parties are ever involved.

### TLS Addresses

To avoid posting the full TLS certificate to the P-Chain, the certificate is first hashed. For consistency, Avalanche employs the same hashing mechanism for the TLS certificates as is used in Bitcoin. Namely, the DER representation of the certificate is hashed with sha256, and the result is then hashed with ripemd160 to yield a 20-byte identifier for stakers.
This 20-byte identifier is represented by "NodeID-" followed by the data's [CB58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) encoded string.

## Cryptography in the Avalanche Virtual Machine

The Avalanche virtual machine uses elliptic curve cryptography, specifically `secp256k1`, for its signatures on the blockchain. A private key is a 32-byte value represented by "PrivateKey-" followed by the data's [CB58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) encoded string.

### Secp256k1 Addresses

Avalanche is not prescriptive about addressing schemes, choosing instead to leave addressing up to each blockchain. The addressing scheme of the X-Chain and the P-Chain relies on secp256k1. Avalanche follows a similar approach as Bitcoin and hashes the ECDSA public key. The 33-byte compressed representation of the public key is hashed with sha256 **once**. The result is then hashed with ripemd160 to yield a 20-byte address.

Avalanche uses the convention `chainID-address` to specify which chain an address exists on. `chainID` may be replaced with an alias of the chain. When transmitting information through external applications, the CB58 convention is required.

### Bech32

Addresses on the X-Chain and P-Chain use the [Bech32](http://support.avalabs.org/en/articles/4587392-what-is-bech32) standard outlined in [BIP 0173](https://en.bitcoin.it/wiki/BIP_0173). There are four parts to a Bech32 address scheme. In order of appearance:

- A human-readable part (HRP). On Mainnet this is `avax`.
- The number `1`, which separates the HRP from the address and error correction code.
- A base-32 encoded string representing the 20-byte address.
- A 6-character base-32 encoded error correction code.

Additionally, an Avalanche address is prefixed with the alias of the chain it exists on, followed by a dash. For example, X-Chain addresses are prefixed with `X-`.
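The CB58 encoding used above for `NodeID-` and `PrivateKey-` strings is base58 (with Bitcoin's alphabet) over the payload plus a 4-byte checksum. A minimal sketch, assuming the checksum is the last four bytes of a single SHA-256 of the payload:

```python
import hashlib

# Bitcoin's base58 alphabet: no 0, O, I, or l, to avoid visual ambiguity.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def cb58_encode(payload: bytes) -> str:
    # Append the last 4 bytes of sha256(payload) as a checksum,
    # then base58-encode the combined bytes.
    data = payload + hashlib.sha256(payload).digest()[-4:]
    num = int.from_bytes(data, "big")
    out = ""
    while num > 0:
        num, rem = divmod(num, 58)
        out = ALPHABET[rem] + out
    # Preserve leading zero bytes as leading '1' characters.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def cb58_decode(s: str) -> bytes:
    num = 0
    for ch in s:
        num = num * 58 + ALPHABET.index(ch)
    raw = num.to_bytes((num.bit_length() + 7) // 8, "big")
    raw = b"\x00" * (len(s) - len(s.lstrip("1"))) + raw
    payload, checksum = raw[:-4], raw[-4:]
    if hashlib.sha256(payload).digest()[-4:] != checksum:
        raise ValueError("bad CB58 checksum")
    return payload

# Round-trip a placeholder 20-byte identifier (not a real NodeID).
node_id = bytes(20)
assert cb58_decode(cb58_encode(node_id)) == node_id
```

The checksum is what distinguishes CB58 from plain base58: a single mistyped character fails the checksum test on decode rather than silently yielding a different identifier.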
The following regular expression matches addresses on the X-Chain, P-Chain and C-Chain for Mainnet, Fuji and localhost. Note that all valid Avalanche addresses will match this regular expression, but some strings that are not valid Avalanche addresses may also match it.

```
^([XPC]|[a-km-zA-HJ-NP-Z1-9]{36,72})-[a-zA-Z]{1,83}1[qpzry9x8gf2tvdw0s3jn54khce6mua7l]{38}$
```

Read more about Avalanche's [addressing scheme](https://support.avalabs.org/en/articles/4596397-what-is-an-address).

For example, the following Bech32 address, `X-avax19rknw8l0grnfunjrzwxlxync6zrlu33y2jxhrg`, is composed like so:

1. HRP: `avax`
2. Separator: `1`
3. Address: `9rknw8l0grnfunjrzwxlxync6zrlu33y`
4. Checksum: `2jxhrg`

Depending on the `networkID`, the encoded addresses will have a distinctive HRP per network:

- 0 - X-`custom`19rknw8l0grnfunjrzwxlxync6zrlu33yeg5dya
- 1 - X-`avax`19rknw8l0grnfunjrzwxlxync6zrlu33y2jxhrg
- 2 - X-`cascade`19rknw8l0grnfunjrzwxlxync6zrlu33ypmtvnh
- 3 - X-`denali`19rknw8l0grnfunjrzwxlxync6zrlu33yhc357h
- 4 - X-`everest`19rknw8l0grnfunjrzwxlxync6zrlu33yn44wty
- 5 - X-`fuji`19rknw8l0grnfunjrzwxlxync6zrlu33yxqzg0h
- 1337 - X-`custom`19rknw8l0grnfunjrzwxlxync6zrlu33yeg5dya
- 12345 - X-`local`19rknw8l0grnfunjrzwxlxync6zrlu33ynpm3qq

Here's the mapping of `networkID` to Bech32 HRP:

```
0: "custom",
1: "avax",
2: "cascade",
3: "denali",
4: "everest",
5: "fuji",
1337: "custom",
12345: "local"
```

### Secp256k1 Recoverable Signatures

Recoverable signatures are stored as the 65-byte **`[R || S || V]`** where **`V`** is 0 or 1 to allow quick public key recoverability. **`S`** must be in the lower half of the possible range to prevent signature malleability. Before signing a message, the message is hashed using sha256.

### Secp256k1 Example

Suppose Rick and Morty are setting up a secure communication channel. Morty creates a new public-private key pair.
Private Key: `0x98cb077f972feb0481f1d894f272c6a1e3c15e272a1658ff716444f465200070`

Public Key (33-byte compressed): `0x02b33c917f2f6103448d7feb42614037d05928433cb25e78f01a825aa829bb3c27`

Because of Rick's infinite wisdom, he doesn't trust himself with carrying around Morty's public key, so he only asks for Morty's address. Morty follows the instructions, SHA256's his public key, and then ripemd160's that result to produce an address.

SHA256(Public Key): `0x28d7670d71667e93ff586f664937f52828e6290068fa2a37782045bffa7b0d2f`

Address: `0xe8777f38c88ca153a6fdc25942176d2bf5491b89`

Morty is quite confused because a public key should be safe to be public knowledge. Rick belches and explains that hashing the public key protects the private key owner from potential future security flaws in elliptic curve cryptography. In the event cryptography is broken and a private key can be derived from a public key, users can transfer their funds to an address that has never signed a transaction before, preventing their funds from being compromised by an attacker. This enables coin owners to be protected while the cryptography is upgraded across the clients.

Later, once Morty has learned more about Rick's backstory, Morty attempts to send Rick a message. Morty knows that Rick will only read the message if he can verify it was from him, so he signs the message with his private key.

Message: `0x68656c702049276d207472617070656420696e206120636f6d7075746572`

Message Hash: `0x912800c29d554fb9cdce579c0abba991165bbbc8bfec9622481d01e0b3e4b7da`

Message Signature: `0xb52aa0535c5c48268d843bd65395623d2462016325a86f09420c81f142578e121d11bd368b88ca6de4179a007e6abe0e8d0be1a6a4485def8f9e02957d3d72da01`

Morty was never seen again.

### Signed Messages

A standard for interoperable generic signed messages based on the Bitcoin Script format and Ethereum format.
```
sign(sha256(length(prefix) + prefix + length(message) + message))
```

The prefix is simply the string `\x1AAvalanche Signed Message:\n`, where `0x1A` is the length of the prefix text, and `length(message)` is an [integer](/docs/api-reference/standards/serialization-primitives#integer) giving the message size.

### Gantt Pre-Image Specification

```
+---------------+-----------+--------------------------+
| prefix        : [26]byte  |                 26 bytes |
+---------------+-----------+--------------------------+
| messageLength : int       |                  4 bytes |
+---------------+-----------+--------------------------+
| message       : []byte    |      size(message) bytes |
+---------------+-----------+--------------------------+
                            |  26 + 4 + size(message)  |
                            +--------------------------+
```

### Example

As an example we will sign the message "Through consensus to the stars":

```
// prefix size: 26 bytes
0x1a

// prefix: Avalanche Signed Message:\n
0x41 0x76 0x61 0x6c 0x61 0x6e 0x63 0x68 0x65 0x20 0x53 0x69 0x67
0x6e 0x65 0x64 0x20 0x4d 0x65 0x73 0x73 0x61 0x67 0x65 0x3a 0x0a

// msg size: 30 bytes
0x00 0x00 0x00 0x1e

// msg: Through consensus to the stars
0x54 0x68 0x72 0x6f 0x75 0x67 0x68 0x20 0x63 0x6f 0x6e 0x73 0x65 0x6e 0x73
0x75 0x73 0x20 0x74 0x6f 0x20 0x74 0x68 0x65 0x20 0x73 0x74 0x61 0x72 0x73
```

After hashing the pre-image with `sha256` and signing it, we return the value [cb58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) encoded: `4Eb2zAHF4JjZFJmp4usSokTGqq9mEGwVMY2WZzzCmu657SNFZhndsiS8TvL32n3bexd8emUwiXs8XqKjhqzvoRFvghnvSN`.

Here's an example using [Core web](https://core.app/tools/signing-tools/sign/). A full guide on how to sign messages with Core web can be found [here](https://support.avax.network/en/articles/7206948-core-web-how-do-i-use-the-signing-tools).

![Sign message](/images/cryptography1.png)

## Cryptography in the Ethereum Virtual Machine

Avalanche nodes support the full Ethereum Virtual Machine (EVM) and precisely duplicate all of the cryptographic constructs used in Ethereum.
This includes the Keccak hash function and the other mechanisms used for cryptographic security in the EVM.

## Cryptography in Other Virtual Machines

Since Avalanche is an extensible platform, we expect that people will add additional cryptographic primitives to the system over time.

# Serialization Primitives (/docs/rpcs/other/standards/serialization-primitives)

---
title: Serialization Primitives
---

Avalanche uses a simple, uniform, and elegant representation for all internal data. This document describes how primitive types are encoded on the Avalanche platform. Transactions are encoded in terms of these basic primitive types.

Byte
----

Bytes are packed as-is into the message payload.

Example:

```
Packing:
    0x01
Results in:
    [0x01]
```

Short
-----

Shorts are packed in BigEndian format into the message payload.

Example:

```
Packing:
    0x0102
Results in:
    [0x01, 0x02]
```

Integer
-------

Integers are 32-bit values packed in BigEndian format into the message payload.

Example:

```
Packing:
    0x01020304
Results in:
    [0x01, 0x02, 0x03, 0x04]
```

Long Integers
-------------

Long integers are 64-bit values packed in BigEndian format into the message payload.

Example:

```
Packing:
    0x0102030405060708
Results in:
    [0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]
```

IP Addresses
------------

IP addresses are represented in 16-byte IPv6 format, with the port appended into the message payload as a Short. IPv4 addresses are mapped into IPv6 format with a 12-byte prefix (ten `0x00` bytes followed by `0xff 0xff`), as the example below shows.
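The packing rules above can be sketched with Python's `struct` and `ipaddress` modules. This is an illustration of the byte layout only, not an official codec; the function names are ours:

```python
import ipaddress
import struct

def pack_short(v: int) -> bytes:
    return struct.pack(">H", v)   # 16-bit BigEndian

def pack_int(v: int) -> bytes:
    return struct.pack(">I", v)   # 32-bit BigEndian

def pack_long(v: int) -> bytes:
    return struct.pack(">Q", v)   # 64-bit BigEndian

def pack_variable_bytes(data: bytes) -> bytes:
    # Variable-length byte arrays carry a 4-byte Integer length prefix.
    return pack_int(len(data)) + data

def pack_ip(host: str, port: int) -> bytes:
    # IPs are always packed as 16 bytes of IPv6; an IPv4 address becomes an
    # IPv4-mapped IPv6 address (ten 0x00 bytes, then 0xff 0xff, then the
    # 4 IPv4 bytes). The port follows as a Short.
    ip = ipaddress.ip_address(host)
    if ip.version == 4:
        ip = ipaddress.IPv6Address(f"::ffff:{host}")
    return ip.packed + pack_short(port)

# Matches the Integer example: 0x01020304 -> [0x01, 0x02, 0x03, 0x04].
assert pack_int(0x01020304) == bytes([0x01, 0x02, 0x03, 0x04])
# Matches the IPv4 example below: "127.0.0.1:9650".
assert pack_ip("127.0.0.1", 9650).hex() == (
    "00000000000000000000ffff7f000001" + "25b2"
)
```

The `>` in each format string selects BigEndian byte order, which is the key property these primitives all share.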
IPv4 example:

```
Packing:
    "127.0.0.1:9650"
Results in:
    [
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
        0x00, 0x00, 0xff, 0xff, 0x7f, 0x00, 0x00, 0x01,
        0x25, 0xb2,
    ]
```

IPv6 example:

```
Packing:
    "[2001:0db8:ac10:fe01::]:12345"
Results in:
    [
        0x20, 0x01, 0x0d, 0xb8, 0xac, 0x10, 0xfe, 0x01,
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
        0x30, 0x39,
    ]
```

Fixed-Length Array
------------------

Fixed-length arrays, whose length is known ahead of time and by context, are packed in order.

Byte array example:

```
Packing:
    [0x01, 0x02]
Results in:
    [0x01, 0x02]
```

Integer array example:

```
Packing:
    [0x03040506]
Results in:
    [0x03, 0x04, 0x05, 0x06]
```

Variable Length Array
---------------------

The length of the array is prefixed in Integer format, followed by the packing of the array contents in Fixed-Length Array format.

Byte array example:

```
Packing:
    [0x01, 0x02]
Results in:
    [0x00, 0x00, 0x00, 0x02, 0x01, 0x02]
```

Int array example:

```
Packing:
    [0x03040506]
Results in:
    [0x00, 0x00, 0x00, 0x01, 0x03, 0x04, 0x05, 0x06]
```

String
------

A String is packed similarly to a variable-length byte array; however, the length prefix is a Short rather than an Integer. Strings are encoded in UTF-8 format.

Example:

```
Packing:
    "Avax"
Results in:
    [0x00, 0x04, 0x41, 0x76, 0x61, 0x78]
```

# platform.getBalance (/docs/rpcs/p-chain/balances-&-utxos/platform_getBalance)

---
title: platform.getBalance
full: true
_openapi:
  method: POST
  route: /ext/bc/P#platform.getBalance
  toc: []
  structuredData:
    headings: []
    contents:
      - content: GetBalance gets the balance of an address
---

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again.
*/}

GetBalance gets the balance of an address

# platform.getUTXOs (/docs/rpcs/p-chain/balances-&-utxos/platform_getUTXOs)

---
title: platform.getUTXOs
full: true
_openapi:
  method: POST
  route: /ext/bc/P#platform.getUTXOs
  toc: []
  structuredData:
    headings: []
    contents:
      - content: GetUTXOs returns the UTXOs controlled by the given addresses
---

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

GetUTXOs returns the UTXOs controlled by the given addresses

# platform.getBlockchainStatus (/docs/rpcs/p-chain/blockchains/platform_getBlockchainStatus)

---
title: platform.getBlockchainStatus
full: true
_openapi:
  method: POST
  route: /ext/bc/P#platform.getBlockchainStatus
  toc: []
  structuredData:
    headings: []
    contents:
      - content: >-
          GetBlockchainStatus gets the status of a blockchain with the ID
          [args.BlockchainID].
---

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

GetBlockchainStatus gets the status of a blockchain with the ID [args.BlockchainID].

# platform.getBlockchains (/docs/rpcs/p-chain/blockchains/platform_getBlockchains)

---
title: platform.getBlockchains
full: true
_openapi:
  method: POST
  route: /ext/bc/P#platform.getBlockchains
  toc: []
  structuredData:
    headings: []
    contents:
      - content: GetBlockchains returns all of the blockchains that exist
---

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again.
*/} GetBlockchains returns all of the blockchains that exist # platform.validatedBy (/docs/rpcs/p-chain/blockchains/platform_validatedBy) --- title: platform.validatedBy full: true _openapi: method: POST route: /ext/bc/P#platform.validatedBy toc: [] structuredData: headings: [] contents: - content: >- ValidatedBy returns the ID of the Subnet that validates [args.BlockchainID] --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} ValidatedBy returns the ID of the Subnet that validates [args.BlockchainID] # platform.validates (/docs/rpcs/p-chain/blockchains/platform_validates) --- title: platform.validates full: true _openapi: method: POST route: /ext/bc/P#platform.validates toc: [] structuredData: headings: [] contents: - content: >- Validates returns the IDs of the blockchains validated by [args.SubnetID] --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Validates returns the IDs of the blockchains validated by [args.SubnetID] # platform.getBlock (/docs/rpcs/p-chain/blocks/platform_getBlock) --- title: platform.getBlock full: true _openapi: method: POST route: /ext/bc/P#platform.getBlock toc: [] structuredData: headings: [] contents: - content: Calls the platform.getBlock method --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Calls the platform.getBlock method # platform.getBlockByHeight (/docs/rpcs/p-chain/blocks/platform_getBlockByHeight) --- title: platform.getBlockByHeight full: true _openapi: method: POST route: /ext/bc/P#platform.getBlockByHeight toc: [] structuredData: headings: [] contents: - content: GetBlockByHeight returns the block at the given height. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} GetBlockByHeight returns the block at the given height. # platform.getCurrentSupply (/docs/rpcs/p-chain/chain-info/platform_getCurrentSupply) --- title: platform.getCurrentSupply full: true _openapi: method: POST route: /ext/bc/P#platform.getCurrentSupply toc: [] structuredData: headings: [] contents: - content: >- GetCurrentSupply returns an upper bound on the supply of AVAX in the system --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetCurrentSupply returns an upper bound on the supply of AVAX in the system # platform.getHeight (/docs/rpcs/p-chain/chain-info/platform_getHeight) --- title: platform.getHeight full: true _openapi: method: POST route: /ext/bc/P#platform.getHeight toc: [] structuredData: headings: [] contents: - content: GetHeight returns the height of the last accepted block --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetHeight returns the height of the last accepted block # platform.getProposedHeight (/docs/rpcs/p-chain/chain-info/platform_getProposedHeight) --- title: platform.getProposedHeight full: true _openapi: method: POST route: /ext/bc/P#platform.getProposedHeight toc: [] structuredData: headings: [] contents: - content: GetProposedHeight returns the current ProposerVM height --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetProposedHeight returns the current ProposerVM height # platform.getTimestamp (/docs/rpcs/p-chain/chain-info/platform_getTimestamp) --- title: platform.getTimestamp full: true _openapi: method: POST route: /ext/bc/P#platform.getTimestamp toc: [] structuredData: headings: [] contents: - content: GetTimestamp returns the current timestamp on chain. 
--- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetTimestamp returns the current timestamp on chain. # platform.getFeeConfig (/docs/rpcs/p-chain/fees/platform_getFeeConfig) --- title: platform.getFeeConfig full: true _openapi: method: POST route: /ext/bc/P#platform.getFeeConfig toc: [] structuredData: headings: [] contents: - content: GetFeeConfig returns the dynamic fee config of the chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetFeeConfig returns the dynamic fee config of the chain. # platform.getFeeState (/docs/rpcs/p-chain/fees/platform_getFeeState) --- title: platform.getFeeState full: true _openapi: method: POST route: /ext/bc/P#platform.getFeeState toc: [] structuredData: headings: [] contents: - content: GetFeeState returns the current fee state of the chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetFeeState returns the current fee state of the chain. # platform.getValidatorFeeConfig (/docs/rpcs/p-chain/fees/platform_getValidatorFeeConfig) --- title: platform.getValidatorFeeConfig full: true _openapi: method: POST route: /ext/bc/P#platform.getValidatorFeeConfig toc: [] structuredData: headings: [] contents: - content: GetValidatorFeeConfig returns the validator fee config of the chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetValidatorFeeConfig returns the validator fee config of the chain. 
# platform.getValidatorFeeState (/docs/rpcs/p-chain/fees/platform_getValidatorFeeState) --- title: platform.getValidatorFeeState full: true _openapi: method: POST route: /ext/bc/P#platform.getValidatorFeeState toc: [] structuredData: headings: [] contents: - content: >- GetValidatorFeeState returns the current validator fee state of the chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetValidatorFeeState returns the current validator fee state of the chain. # platform.getRewardUTXOs (/docs/rpcs/p-chain/rewards/platform_getRewardUTXOs) --- title: platform.getRewardUTXOs full: true _openapi: method: POST route: /ext/bc/P#platform.getRewardUTXOs toc: [] structuredData: headings: [] contents: - content: >- GetRewardUTXOs returns the UTXOs that were rewarded after the provided transaction's staking period ended. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetRewardUTXOs returns the UTXOs that were rewarded after the provided transaction's staking period ended. # platform.getMinStake (/docs/rpcs/p-chain/staking/platform_getMinStake) --- title: platform.getMinStake full: true _openapi: method: POST route: /ext/bc/P#platform.getMinStake toc: [] structuredData: headings: [] contents: - content: GetMinStake returns the minimum staking amount in nAVAX. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetMinStake returns the minimum staking amount in nAVAX. 
# platform.getStake (/docs/rpcs/p-chain/staking/platform_getStake) --- title: platform.getStake full: true _openapi: method: POST route: /ext/bc/P#platform.getStake toc: [] structuredData: headings: [] contents: - content: >- GetStake returns the amount of nAVAX that [args.Addresses] have cumulatively staked on the Primary Network. This method assumes that each stake output has only one owner. This method assumes only AVAX can be staked. This method only concerns itself with the Primary Network, not subnets. TODO: Improve the performance of this method by maintaining this data in a data structure rather than re-calculating it by iterating over stakers --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetStake returns the amount of nAVAX that [args.Addresses] have cumulatively staked on the Primary Network. This method assumes that each stake output has only one owner. This method assumes only AVAX can be staked. This method only concerns itself with the Primary Network, not subnets. TODO: Improve the performance of this method by maintaining this data in a data structure rather than re-calculating it by iterating over stakers # platform.getTotalStake (/docs/rpcs/p-chain/staking/platform_getTotalStake) --- title: platform.getTotalStake full: true _openapi: method: POST route: /ext/bc/P#platform.getTotalStake toc: [] structuredData: headings: [] contents: - content: GetTotalStake returns the total amount staked on the Primary Network --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again.
*/} GetTotalStake returns the total amount staked on the Primary Network # platform.getStakingAssetID (/docs/rpcs/p-chain/subnets/platform_getStakingAssetID) --- title: platform.getStakingAssetID full: true _openapi: method: POST route: /ext/bc/P#platform.getStakingAssetID toc: [] structuredData: headings: [] contents: - content: >- GetStakingAssetID returns the assetID of the token used to stake on the provided subnet --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetStakingAssetID returns the assetID of the token used to stake on the provided subnet # platform.getSubnet (/docs/rpcs/p-chain/subnets/platform_getSubnet) --- title: platform.getSubnet full: true _openapi: method: POST route: /ext/bc/P#platform.getSubnet toc: [] structuredData: headings: [] contents: - content: Calls the platform.getSubnet method --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Calls the platform.getSubnet method # platform.getSubnets (/docs/rpcs/p-chain/subnets/platform_getSubnets) --- title: platform.getSubnets full: true _openapi: method: POST route: /ext/bc/P#platform.getSubnets toc: [] structuredData: headings: [] contents: - content: >- GetSubnets returns the subnets whose IDs are in [args.IDs]. The response will include the Primary Network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetSubnets returns the subnets whose IDs are in [args.IDs]. The response will include the Primary Network. # platform.getTx (/docs/rpcs/p-chain/transactions/platform_getTx) --- title: platform.getTx full: true _openapi: method: POST route: /ext/bc/P#platform.getTx toc: [] structuredData: headings: [] contents: - content: Calls the platform.getTx method --- {/* This file was generated by Fumadocs.
Do not edit this file directly. Any changes should be made by running the generation command again. */} Calls the platform.getTx method # platform.getTxStatus (/docs/rpcs/p-chain/transactions/platform_getTxStatus) --- title: platform.getTxStatus full: true _openapi: method: POST route: /ext/bc/P#platform.getTxStatus toc: [] structuredData: headings: [] contents: - content: GetTxStatus gets a tx's status --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetTxStatus gets a tx's status # platform.issueTx (/docs/rpcs/p-chain/transactions/platform_issueTx) --- title: platform.issueTx full: true _openapi: method: POST route: /ext/bc/P#platform.issueTx toc: [] structuredData: headings: [] contents: - content: Calls the platform.issueTx method --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Calls the platform.issueTx method # platform.getCurrentValidators (/docs/rpcs/p-chain/validators/platform_getCurrentValidators) --- title: platform.getCurrentValidators full: true _openapi: method: POST route: /ext/bc/P#platform.getCurrentValidators toc: [] structuredData: headings: [] contents: - content: >- GetCurrentValidators returns the current validators. If a single nodeID is provided, full delegator information is also returned. Otherwise, only the delegators' count and total weight are returned. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetCurrentValidators returns the current validators. If a single nodeID is provided, full delegator information is also returned. Otherwise, only the delegators' count and total weight are returned.
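All of the P-Chain methods in this reference share one calling convention: a JSON-RPC 2.0 POST to the `/ext/bc/P` route shown in each page's frontmatter. A minimal sketch, using `platform.getCurrentValidators` as the example; the node URL and helper names are assumptions for illustration, not part of any SDK:

```typescript
// Hedged sketch of the P-Chain JSON-RPC calling convention.
// The endpoint URL is an assumption: any Avalanche API node exposing
// the /ext/bc/P route works.
const P_CHAIN_URL = "https://api.avax.network/ext/bc/P";

// Build the standard JSON-RPC 2.0 envelope used by every platform.* method.
function buildRequest(method: string, params: Record<string, unknown> = {}) {
  return { jsonrpc: "2.0", id: 1, method, params };
}

// POST the request and unwrap the result (or surface the RPC error).
async function callPChain(
  method: string,
  params: Record<string, unknown> = {}
) {
  const res = await fetch(P_CHAIN_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(buildRequest(method, params)),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result;
}

// e.g. callPChain("platform.getCurrentValidators", { nodeIDs: [] })
```

Per the description above, passing a single entry in `nodeIDs` returns full delegator details, while an empty list returns only each validator's delegator count and total weight.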
# platform.getL1Validator (/docs/rpcs/p-chain/validators/platform_getL1Validator) --- title: platform.getL1Validator full: true _openapi: method: POST route: /ext/bc/P#platform.getL1Validator toc: [] structuredData: headings: [] contents: - content: GetL1Validator returns the L1 validator if it exists --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetL1Validator returns the L1 validator if it exists # platform.getValidatorsAt (/docs/rpcs/p-chain/validators/platform_getValidatorsAt) --- title: platform.getValidatorsAt full: true _openapi: method: POST route: /ext/bc/P#platform.getValidatorsAt toc: [] structuredData: headings: [] contents: - content: >- GetValidatorsAt returns the weights of the validator set of a provided subnet at the specified height. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetValidatorsAt returns the weights of the validator set of a provided subnet at the specified height. # platform.sampleValidators (/docs/rpcs/p-chain/validators/platform_sampleValidators) --- title: platform.sampleValidators full: true _openapi: method: POST route: /ext/bc/P#platform.sampleValidators toc: [] structuredData: headings: [] contents: - content: SampleValidators returns a sampling of the list of current validators --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} SampleValidators returns a sampling of the list of current validators # avm.getAssetDescription (/docs/rpcs/x-chain/chain/avm_getAssetDescription) --- title: avm.getAssetDescription full: true _openapi: method: POST route: /ext/bc/X#avm.getAssetDescription toc: [] structuredData: headings: [] contents: - content: >- Get information about an asset including name, symbol, and denomination. 
--- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get information about an asset including name, symbol, and denomination. # avm.getBlock (/docs/rpcs/x-chain/chain/avm_getBlock) --- title: avm.getBlock full: true _openapi: method: POST route: /ext/bc/X#avm.getBlock toc: [] structuredData: headings: [] contents: - content: Returns the block with the given id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the block with the given id. # avm.getBlockByHeight (/docs/rpcs/x-chain/chain/avm_getBlockByHeight) --- title: avm.getBlockByHeight full: true _openapi: method: POST route: /ext/bc/X#avm.getBlockByHeight toc: [] structuredData: headings: [] contents: - content: Returns block at the given height. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns block at the given height. # avm.getHeight (/docs/rpcs/x-chain/chain/avm_getHeight) --- title: avm.getHeight full: true _openapi: method: POST route: /ext/bc/X#avm.getHeight toc: [] structuredData: headings: [] contents: - content: Returns the height of the last accepted block. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the height of the last accepted block. # avm.getTx (/docs/rpcs/x-chain/chain/avm_getTx) --- title: avm.getTx full: true _openapi: method: POST route: /ext/bc/X#avm.getTx toc: [] structuredData: headings: [] contents: - content: Returns the specified transaction. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the specified transaction. 
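The X-Chain (AVM) methods in this reference use the same JSON-RPC envelope but POST to the `/ext/bc/X` route. A hedged sketch of `avm.getTx`; the node URL is an assumed public endpoint, and the helper names are illustrative:

```typescript
// Hedged sketch of an X-Chain (AVM) JSON-RPC call; the endpoint URL is
// an assumption (any Avalanche API node exposing /ext/bc/X works).
const X_CHAIN_URL = "https://api.avax.network/ext/bc/X";

// avm.getTx takes a transaction ID plus an encoding for the returned bytes.
function getTxRequest(txID: string, encoding: "hex" | "json" = "hex") {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "avm.getTx",
    params: { txID, encoding },
  };
}

// POST the request and return the transaction from the result.
async function getTx(txID: string) {
  const res = await fetch(X_CHAIN_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(getTxRequest(txID)),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result.tx;
}
```

`encoding` selects the form of the returned transaction: `"hex"` yields the raw serialized bytes, while `"json"` yields a decoded representation.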
# avm.getTxFee (/docs/rpcs/x-chain/chain/avm_getTxFee) --- title: avm.getTxFee full: true _openapi: method: POST route: /ext/bc/X#avm.getTxFee toc: [] structuredData: headings: [] contents: - content: Get the transaction fees of the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get the transaction fees of the network. # avm.getUTXOs (/docs/rpcs/x-chain/chain/avm_getUTXOs) --- title: avm.getUTXOs full: true _openapi: method: POST route: /ext/bc/X#avm.getUTXOs toc: [] structuredData: headings: [] contents: - content: > Gets the UTXOs that reference a given address. If sourceChain is specified, then it will retrieve the atomic UTXOs exported from that chain to the X Chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets the UTXOs that reference a given address. If sourceChain is specified, then it will retrieve the atomic UTXOs exported from that chain to the X Chain. # avm.issueTx (/docs/rpcs/x-chain/chain/avm_issueTx) --- title: avm.issueTx full: true _openapi: method: POST route: /ext/bc/X#avm.issueTx toc: [] structuredData: headings: [] contents: - content: Send a signed transaction to the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Send a signed transaction to the network. # Welcome to the Course (/academy/avacloudapis) --- title: Welcome to the Course description: Learn about AvaCloud APIs and the AvaCloud SDK. updated: 2024-09-03 authors: [owenwahlgren] icon: Smile --- ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-banner/avacloudapis-ThOanH9UQJiAizv9gKtgFufXxRywkQ.jpg) ## Why Take This Course? 
[AvaCloud APIs](https://developers.avacloud.io/introduction), built by [AvaCloud](https://avacloud.io/), provide Web3 developers with multi-chain data from Avalanche’s Primary Network and other Avalanche L1s. With the AvaCloud API, you can easily build products that utilize real-time and historical transaction data, transfer records, native and token balances, and various types of token metadata. ## Course Content - [AvaCloud API Overview](/academy/avacloudapis/02-overview/01-about) - [Environment Setup](/academy/avacloudapis/03-environment-setup/01-avacloud-account) - [Build an ERC-20 Token Balance App](/academy/avacloudapis/04-erc20-token-balance-app/01-overview) - [Build a Wallet Portfolio App](/academy/avacloudapis/05-wallet-portfolio-app/01-overview) - [Build a Basic Block Explorer](/academy/avacloudapis/06-block-explorer-app/01-overview) ## Prerequisites A general understanding of web development is required. While you won't need to write a lot of code, familiarity with key concepts is important. - **TypeScript:** Understanding of common concepts, as all exercises will use TypeScript. - **React:** Basic understanding of React is required, as all exercises will use React. - **NextJS:** Basic understanding of NextJS is required, as all exercises will use NextJS. ## Learning Outcomes By the end of this course, you will: - Be familiar with the AvaCloud API, including the [Data API](https://developers.avacloud.io/data-api/overview) and [Webhooks API](https://developers.avacloud.io/webhooks-api/overview). - Learn how to use the [AvaCloudSDK](https://github.com/ava-labs/avacloud-sdk-typescript) to interact with [AvaCloud APIs](https://developers.avacloud.io/introduction). - Build multiple web apps using the [AvaCloudSDK](https://github.com/ava-labs/avacloud-sdk-typescript).
# Course Completion Certificate (/academy/avalanche-fundamentals/get-certificate) --- title: Course Completion Certificate updated: 2024-09-09 authors: [ashucoder9] icon: BadgeCheck --- import CertificatePage from '@/components/quizzes/certificates'; You've made it to the end of the course. Let's check your progress and get your certificate. Thank you for participating in this course. We hope you found it informative and enjoyable! # Welcome to the Course (/academy/avalanche-fundamentals) --- title: Welcome to the Course description: Learn about the basics of Avalanche. updated: 2024-05-31 authors: [ashucoder9] icon: Smile --- Welcome to Avalanche Fundamentals, an online course introducing you to the exciting world of Avalanche technology! This course will provide you with a comprehensive understanding of the basic concepts that make Avalanche unique. Throughout, you'll learn the key features and benefits of the platform, plus how to build on it. You can also ask our expert instructors questions. By the end of this course, you'll have the knowledge and skills to leverage the power of blockchain technology for your own projects and applications. We're excited to have you join us on this journey and can't wait to see what you'll create with Avalanche! ## Prerequisites This course is for people with some blockchain knowledge. Check out this [guide](/guides/what-is-a-blockchain) to review what a blockchain is. Familiarity with the basic design of modern distributed software systems and common blockchain systems such as Bitcoin and Ethereum is also recommended. You do not need to know how to write code to successfully complete this course. Having these prerequisites will help you better understand the course material and engage in the activities. If you don't know whether you have the necessary foundation for this course, please contact the course instructor for guidance.
## Learning Outcomes By the end of this course, you will: - Understand how Avalanche consensus works and what makes it different. - Understand how Avalanche L1s enable scalability, customizability, and independence. - Understand the Primary Network, a special Avalanche L1, and how to interact with it. - Understand how Virtual Machines enable developers to create more optimized and capable blockchain systems and to tackle completely new use cases unachievable with previous solutions. You can evaluate your own understanding of the material through quizzes and claim a certificate for successful completion at the end. Overall, this course aims to provide a foundational understanding of Avalanche. By completing it, you will be better prepared to take on more advanced courses focused on building on Avalanche. # Course Completion Certificate (/academy/blockchain-fundamentals/get-certificate) --- title: Course Completion Certificate updated: 2025-10-30 authors: [ashucoder9] icon: BadgeCheck --- import CertificatePage from '@/components/quizzes/certificates'; You've made it to the end of the course. Let's check your progress and get your certificate. Thank you for participating in this course. We hope you found it informative and enjoyable! # Welcome to the Course (/academy/blockchain-fundamentals) --- title: Welcome to the Course description: Learn about the basics of blockchain technology. updated: 2025-10-30 authors: [ashucoder9] icon: Smile --- Welcome to Blockchain Fundamentals, an online course introducing you to blockchain! This course will provide you with a comprehensive understanding of the basic concepts of blockchains. By the end of this course, you'll have the knowledge and skills to leverage the power of blockchain technology for your own projects and applications. We're excited to have you join us on this journey and can't wait to see what you'll create with Avalanche!
## Course Content We will cover the following topics in this course: - What is a Blockchain? - Deep Dive into Payments - Cryptography: Signature Schemes - Consensus Mechanisms - Sybil Defense Mechanisms - Smart Contracts - Tokenomics - Virtual Machines ## Prerequisites Anyone can take this course! ## Learning Outcomes By the end of this course, you will have a good understanding of the basic concepts of blockchain technology. You can evaluate your own understanding of the material through quizzes and claim a certificate for successful completion at the end. # Decentralization (/academy/blockchain-fundamentals/xx-decentralization) --- title: Decentralization description: TBD updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- ## Moving from a Centralized Entity to a Collective of Validators Blockchain systems achieve decentralization through a network of validators, sometimes referred to as nodes, miners, or stakers, depending on the underlying consensus mechanism. Validators are responsible for verifying and securing transactions, maintaining the integrity of the blockchain, and ensuring that the system remains decentralized and trustless. Decentralization is achieved by distributing the responsibility of maintaining the network across numerous independent participants, each running the blockchain software, performing computations, and verifying results. Here’s how it works: **Computation Execution**: Each validator independently performs the same computation. For example, let's consider a simple computation: 5 + 3. **Consensus Process**: After performing the computation, validators share their results with each other. They then use a process called consensus to agree on the correct result. You can think of it as an election, where all validators vote on the correct answer.
The consensus process ensures that all validators reach an agreement on the correct result, given that the majority is honest. This agreement is crucial for maintaining the integrity and security of the blockchain. If a validator tries to cheat or provide an incorrect result, the other validators will detect the discrepancy, and the cheating validator will be penalized. # Tokens (/academy/blockchain-fundamentals/xx-tokens) --- title: Tokens description: TBD updated: 2024-05-31 authors: [martineckardt] icon: Notebook --- Tokens are a concept that has existed in societies for a long time. Tokens can be used to represent value. The most common valuable tokens we use every day are fiat currencies, like the US dollar. But there are also other kinds of tokens, such as points or miles in loyalty programs. Today we can also tokenize other assets, such as property titles. ## Fungible Tokens Fungibility means that different tokens can be considered of equal value. Take, for example, two one-dollar bills. Most people will not care if they get one or the other. They both offer the same utility. ## Non-Fungible Tokens Non-fungible tokens are not considered equal. The most prominent use case is Art NFTs. While the tokens may follow a standard for how they can be transferred, they may not be interchangeable. Two pieces of art may have very different values. # Welcome to the Course (/academy/customizing-evm) --- title: Welcome to the Course description: Learn how to customize the Ethereum Virtual Machine. updated: 2024-09-27 authors: [ashucoder9, owenwahlgren] icon: Smile --- import { Step, Steps } from 'fumadocs-ui/components/steps'; ## Why Take This Course? A significant innovation in blockchain is the development of multi-chain systems, such as Avalanche, which offer major improvements in scalability, interoperability, and flexibility.
At the core of these multi-chain systems is the ability to run multiple blockchains powered by different virtual machines simultaneously. Each VM of a chain is optimized for specialized use cases, thereby boosting the network's overall performance. Configuring and modifying the EVM is an efficient way to create a specialized virtual machine, as it allows developers to build upon years of active community work and leverage the extensive ecosystem surrounding the EVM, including wallets, explorers, and development frameworks. ## Course Content ### EVM Basics & Precompiles In the first section of the course we will go through some basic concepts of the EVM, such as the account-based model, keys, addresses, transactions, and blocks. Furthermore, we will explain what a precompile is and show you how to interact with a precompile on the Fuji testnet. ### Development Environment Setup We will explore various ways to set up a development environment for building customized EVMs. You'll learn about GitHub Codespaces and Development Containers (`.devcontainer`). If you choose a local setup, you'll install a code editor, the Go language, configure your shell, and install additional dependencies. ### Hands-On Exercises The following sections contain hands-on exercises where you customize the EVM. The difficulty of the customizations will increase from section to section. 
Find out more about each exercise by clicking its name below: - Learn how the Avalanche Network Runner works - Create your own Avalanche network - Create an EVM blockchain on your network - Connect Core Wallet to your blockchain - Learn what the genesis block is and how the data is structured - Create a custom gas fee structure for your blockchain - Define the initial token allocation at launch - Configure pre-installed precompiles - Create the blockchain and connect to it Learn about the basic building blocks of a precompile by building a precompile for the MD5 hash function: - Generate boilerplate code for the precompile from a Solidity interface - Learn how to unpack inputs and pack outputs into 32-byte arrays - Configure and register your MD5 precompile - Create a new blockchain, connect to it, and interact with your precompile Learn to build more complex precompiles by building a calculator precompile and master the following skills in addition to the previous section: - Unpack multiple inputs and pack multiple outputs into 32-byte arrays - Set a gas cost for your precompile - Add Go tests for your precompile Learn to build stateful precompiles by building a counter precompile and master the following skills in addition to the previous sections: - Read from and write to the EVM state - Define the initial state in the precompileConfig - Add Solidity tests for your precompile Learn to build permissioned precompiles by building a XXX precompile and master the following skills in addition to the previous sections: - Add permissions to only allow certain addresses to interact with the precompile - Define the initial permissions in the precompileConfig - Change the permissions on the fly ## Prerequisites ### Avalanche This course is intended for people with a solid understanding of the basic concepts of Avalanche. You should be familiar with these concepts: 1. **Virtual Machines**: What they are and what VM customization means 2.
**Blockchains**: What the components of a blockchain are and how they interact, specifically how Avalanche L1s leverage precompiles. If some of this is not clear, we strongly recommend taking the [Avalanche Fundamentals](/academy/avalanche-fundamentals) and [Multi-Chain Architecture](/academy/multi-chain-architecture) courses first. ### Coding You will need a general understanding of software development. You won't have to write a lot of code, but you will have to be able to understand some. Therefore, we recommend: 1. **Solidity**: Basic knowledge, familiarity with types and interfaces. Familiarity with Foundry will help in advanced sections. 2. **Go**: You don't necessarily need to know Go, but you should have some experience with an advanced object-oriented and typed language (C, C++, Java, TypeScript). 3. **Testing**: It will help you in later sections if you are generally familiar with the concept of unit testing. ## Learning Outcomes By the end of this course, students will be able to: - Understand what precompiles are and when to use them. - Understand how developing precompiles allows developers to create more optimized and capable blockchain systems, enabling them to address entirely new use cases that were previously unattainable. - Apply the knowledge gained in the course by building multiple precompiles. ![](/wolfie/wolfie-hack.png) # Welcome to the course (/academy/encrypted-erc) --- title: Welcome to the course description: Learn about the encrypted-ERC (eERC) token standard from AvaCloud. updated: 2025-07-21 authors: [alejandro99so] icon: Lock --- Welcome to the **eERC Token Standard** course! This course is designed to give you a deep understanding of encrypted ERC (eERC), a privacy-preserving, ERC-20-like token standard developed by AvaCloud, enabling confidential transactions on EVM-compatible blockchains.
By the end of this course, you will understand the privacy limitations of traditional token standards, how eERC solves them, and how to create and integrate your own private tokens. ## Course Content ### 1. What is a Digital Asset? - **Standards**: ERC-20, ERC-721, ERC-1155 - **Current Uses**: How companies, DeFi protocols, and projects are using them today. - **Limitations**: Challenges companies face when trying to adopt these standards (scalability, privacy, compliance). ### 2. Real Privacy - **How Private is the Blockchain?**: Transparency vs confidentiality. - **Compliance**: Why regulatory alignment matters for privacy tokens. - **Necessities Solved with Privacy**: Situations where privacy is essential (e.g., sensitive transactions, enterprise use). ### 3. Encrypted Tokens - **What Kind of Privacy Does eERC Provide?**: Hidden balances, confidential transfers. - **Comparison: ERC-20 vs eERC**: Feature-by-feature breakdown. - **Technology Behind eERC**: Zero-knowledge proofs and homomorphic encryption. ### 4. Usability of eERC - **Standalone vs Converter Contract**: When to use each mode. - **Use Cases**: From DeFi to regulated financial institutions. ### 5. eERC Contracts Flow - **Step-by-Step**: Creating your own eERC token and considerations. - **Kinds of ZKProof to Use**: Selection criteria based on use case. - **User Flow**: How to interact with eERC as an end user. --- ## Prerequisites Before starting this course, you should have: ### Blockchain Fundamentals Familiarity with EVM-based blockchains and ERC token standards. ### Development Tools Basic knowledge of Solidity and comfort using TypeScript will help, especially when working with the Privacy functions. --- ## Learning Outcomes By the end of this course, you will: - Understand the **purpose and innovation** behind the eERC standard. - Compare **traditional ERC standards** with encrypted ERC for privacy and compliance. 
- Identify the **key benefits, architecture, and technical underpinnings** of eERC. - Differentiate between **Standalone** and **Converter** implementation modes. - Recognize **real-world use cases** where privacy and auditability are essential. - Create and deploy your own **eERC token** following a step-by-step process. - Understand how to **interact with eERC as a user** in different scenarios. # Course Completion Certificate (/academy/icm-chainlink/certificate) --- title: Course Completion Certificate updated: 2024-10-11 authors: [owenwahlgren] icon: BadgeCheck --- import CertificatePage from '@/components/quizzes/certificates'; You've made it to the end of the course. Let's check your progress and get your certificate. Thank you for participating in this course. We hope you found it informative and enjoyable! # Welcome to the course (/academy/icm-chainlink) --- title: Welcome to the course description: Enable Chainlink services on L1 networks that do not have direct Chainlink support. updated: 2024-05-31 authors: [martineckardt] icon: Smile --- In this course, you will learn how to integrate your own blockchain with the Chainlink services that are deployed on the C-Chain. To do this, we will use Avalanche Interchain Messaging and Chainlink VRF. ## Why Take This Course? As the blockchain ecosystem grows, developers require robust tools to build secure, scalable, and innovative applications. Chainlink provides a suite of decentralized services, including VRF (Verifiable Random Function), Automation, Chainlink Functions, and CCIP (Cross-Chain Interoperability Protocol), to empower developers in creating high-performance, trustless applications. These services are not integrated by default on Avalanche L1 chains. However, Avalanche’s Interchain Messaging (ICM) enables developers to consume Chainlink services, typically available on the C-Chain, directly within their own L1 environments. 
This course equips you with the knowledge to integrate all Chainlink services into your Avalanche L1 blockchain. You will start with Chainlink VRF and progressively expand to advanced features like Automation for task scheduling, Functions for off-chain data computations, and CCIP for bridging data between Ethereum and Avalanche. We suggest revisiting this course periodically, as we will keep adding new content. ## Course Content ### Introduction Overview of Chainlink services and their role in decentralized applications. Importance of integrating these tools into Avalanche L1 chains. ### Chainlink VRF Explanation of VRF and its use cases. Deploying VRF on Avalanche L1 and verifying randomness. ### Chainlink Functions Overview of off-chain computation and API integrations. Implementing Functions for real-time data feeds. ### Chainlink Automation Using Automation for task scheduling. Examples include automated smart contracts and dynamic pricing. ### Chainlink CCIP Bridging data and assets between Ethereum and Avalanche L1. Practical applications such as cross-chain governance and token transfers. ## Prerequisites Avalanche Interchain Messaging (ICM): A solid understanding of Avalanche’s ICM protocol is essential. You should have completed the ICM course, which covers: - The fundamentals of cross-chain communication in Avalanche - Message formats and flows - Security techniques for interchain messaging Blockchain and Development Knowledge - Solidity: Familiarity with the language’s key concepts, especially those related to smart contract interactions and randomness. - Oracles: While we will cover the basic components of Chainlink's VRF, having a general sense of how oracles work is desirable. If any of this is unclear, we strongly recommend taking the Avalanche Interchain Messaging course first. ## Learning Outcomes By the end of this course, students will: - Understand the mechanics of Chainlink VRF and other Chainlink services and their significance in decentralized applications. 
- Gain proficiency in setting up Avalanche L1 chains with Chainlink services enabled. - Integrate Chainlink VRF with Avalanche ICM for secure cross-chain randomness. - Use Chainlink Functions for off-chain data integration with on-chain logic. - Automate smart contract operations with Chainlink Automation - Communicate cross-chain with ecosystems outside Avalanche through Chainlink's CCIP - Apply best practices to ensure reliability, security, and cost-efficiency in their applications. This course prepares developers to leverage Chainlink’s powerful suite of tools for building next-generation blockchain solutions on Avalanche L1. # Course Completion Certificate (/academy/interchain-messaging/certificate) --- title: Course Completion Certificate updated: 2024-10-11 authors: [owenwahlgren] icon: BadgeCheck --- import CertificatePage from '@/components/quizzes/certificates'; You've made it to the end of the course. Let's check your progress and get your certificate. Thank you for participating in this course. We hope you found it informative and enjoyable! # Welcome to the course (/academy/interchain-messaging) --- title: Welcome to the course description: Learn about Interchain Messaging, the interoperability protocol of Avalanche. updated: 2025-05-13 authors: [martineckardt, nicolasarnedo] icon: Smile --- In this course, you will learn how to build cross-L1 Solidity dApps with Interchain Messaging and Avalanche Warp Messaging. ## Why Take This Course? A significant innovation in blockchain is the development of multi-chain systems, like Avalanche, which provide a significant improvement in scalability, interoperability, and flexibility. At the core of these multi-chain systems is the ability to run multiple blockchains that communicate. Each chain's VM is optimized for specialized use cases, thereby boosting the network's overall performance. Cross-chain communication is a crucial building block of multi-chain systems. 
Utilizing Interchain Messaging and Avalanche Warp Messaging is an incredibly easy way to build cross-L1 dApps, since developers can build on top of an extensive and audited development framework. ## Course Content Below you can find a 30-minute recording of a presentation about Avalanche Warp Messaging and Teleporter. This summarizes the content of the first chapters: ### Interoperability In the first section, we cover some basic concepts of interoperability in multi-chain systems. You will learn about examples of interoperability between blockchains and the terms "source," "destination," and "message." ### Avalanche Interchain Messaging In this section, we learn what Avalanche Interchain Messaging is and what is abstracted away from the general dApp developer. You will also build your first cross-L1 dApps. ### Securing Cross-Chain Communication In this section, we look at techniques to secure cross-chain communication. We dive into signature schemes, multi-signature schemes, and the BLS multi-signature scheme. ### Avalanche Interchain Messaging Protocol Avalanche blockchains can natively interoperate between one another using AWM. You will learn about the AWM message format and how the message flow works. ## Prerequisites ### Avalanche This course is meant for people with a solid understanding of the basic concepts of Avalanche. You should be familiar with these concepts: - **Virtual Machines:** What they are and what VM customization means - **Avalanche L1s & Blockchains:** What the difference between a VM, Blockchain, and an Avalanche L1 is If any of this is unclear, we strongly recommend taking the Avalanche Fundamentals and Multi-Chain Architecture courses first. ### Software Development You will need a general understanding of Software Development. You won't have to write a lot of code, but you will have to understand some. Therefore, we recommend: - **Solidity:** Familiarity with most concepts of the language. 
All exercises will mainly consist of writing cross-subnet dApps in Solidity. - **Docker:** The advanced exercises in the latter part of the course will occur in contained environments for easy setup. It will help if you're generally familiar with the concept of containerization, docker, and docker compose. - **Testing:** Having some experience or familiarity with unit testing is ideal. ## Learning Outcomes By the end of this course, students will: - Understand the challenges of cross-chain communication - Know what separates Avalanche Warp Messaging from other cross-chain communication protocols - Understand the differences between Avalanche Warp Messaging and Teleporter - Apply their knowledge by building cross-Avalanche L1 dApps, such as asset bridges Overall, this course aims to provide an advanced understanding of Teleporter. By completing this course, students will be better prepared to build advanced cross-Avalanche L1 blockchain applications. # Dapps and L1s (/academy/l1-native-tokenomics/dappVsL1) --- title: Dapps and L1s description: Listing tokenomics and infrastructure requirements to understand whether creating an L1 is necessary for your business case updated: 2025-08-21 authors: [nicolasarnedo] icon: Microscope --- So far you have learnt a lot of the basics around blockchains, how they work, architecture, consensus mechanisms, smart contracts, and also Avalanche-specific concepts. Before reading through this course however, now is a good moment to reflect on whether your use case requires its own L1 and native tokenomics. In order to do this, there are several economic factors which will dictate if we should deploy an application on a public blockchain (Avalanche C-Chain, Ethereum) or create our own custom Layer 1 (L1) blockchain. Let's look into them... ### Transaction Volume and Value If your application processes a high volume of low-value transactions, a custom L1 may be more cost-effective in the long run. 
Conversely, if your application handles only a few high-value transactions, the cost savings may not justify the complexity of running a separate L1. ### Fee Stability On a public blockchain, you are at the mercy of network-wide fee fluctuations. With a custom L1, you can ensure fee stability, which is crucial for budgeting and long-term planning. ### Data Control and Customization You can control who interacts with your application equally well on a public chain and on your own L1. However, if you want more control over the data, such as who can deploy smart contracts, isolating your data for compliance/KYC reasons, or who can validate transactions, then you will absolutely need an L1. If you do have to create an L1, the primary cost of running it is the infrastructure required to support the validator set. Once you establish an appropriate number of validators based on your security requirements, the operational costs become mostly fixed. These costs are independent of the number and complexity of the transactions processed on your chain, providing predictable financial outlays as your application scales. ### Token Location Choose based on where your security and token economics must live. If your token must be native to the protocol (gas token, staking/slashing, fee burn/distribution, protocol‑level issuance or supply rules) and you need validator/permissioning guarantees, a custom L1 is appropriate. If your token’s utility is application‑level (access, rewards, governance without base‑fee mechanics) and you benefit from public‑chain security and liquidity, a dApp with an ERC‑20 on a public chain is the simpler path. Decide whether the required guarantees are protocol‑layer (L1) or app‑layer (public chain). Launching a blockchain application on a public permissionless blockchain or spinning up your own L1 on Avalanche is a decision that depends heavily on your application's transaction profile and economic model. 
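The trade-off described above, variable per-transaction fees on a public chain versus a mostly fixed infrastructure cost on a dedicated L1, can be sketched as a simple break-even comparison. All figures below are hypothetical placeholders for illustration, not real fee or infrastructure data:

```typescript
// Hypothetical cost model: public-chain cost scales with volume,
// while a dedicated L1's validator infrastructure cost is roughly fixed.
interface Scenario {
  monthlyTxCount: number;    // transactions per month
  publicChainFeeUsd: number; // average fee per transaction on a public chain
  l1MonthlyInfraUsd: number; // fixed validator infrastructure cost per month
}

function monthlyPublicChainCost(s: Scenario): number {
  return s.monthlyTxCount * s.publicChainFeeUsd;
}

function l1IsCheaper(s: Scenario): boolean {
  return s.l1MonthlyInfraUsd < monthlyPublicChainCost(s);
}

// A gaming app with many micro-transactions (illustrative numbers):
const game: Scenario = {
  monthlyTxCount: 5_000_000,
  publicChainFeeUsd: 0.05,
  l1MonthlyInfraUsd: 10_000,
};

// 5,000,000 * $0.05 = $250,000/month in public-chain fees vs ~$10,000 fixed.
console.log(l1IsCheaper(game)); // true
```

At high volume the fixed L1 cost wins; at low volume the public chain's pay-per-use model is cheaper, mirroring the transaction-profile reasoning above.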
Ultimately, the decision should be driven by a careful analysis of your transaction patterns, user experience goals, what the token utility will be, and the potential for fee volatility in public networks. By weighing these factors, you can make an informed decision that aligns with your application's long-term success. #### *Use-case Example* *In a gaming application where users frequently make micro-transactions, high transaction fees on a public blockchain could be a significant barrier to user retention. By deploying your own L1, you can minimize or even eliminate these fees, thereby enhancing the user experience and driving higher engagement. This is particularly relevant for applications that target mainstream audiences, where a seamless and cost-effective user experience is paramount.* **Live Proof**: [Off The Grid](https://gunzillagames.com/en/) uses a dedicated Avalanche L1 to power a player‑driven economy with on‑chain item ownership (NFTs) and currency ($GUNZ). # Welcome to the Course (/academy/l1-native-tokenomics) --- title: Welcome to the Course description: Learn about the L1 Native Tokenomics updated: 2025-08-21 authors: [nicolasarnedo] icon: Smile --- Welcome to the **L1 Native Tokenomics** course! This course is designed to give you a deep understanding of how to create and manage native tokens on your own Avalanche L1 blockchain. By the end of this course, you will have practical skills in designing tokenomics, configuring native token allocation, and leveraging precompiles to create powerful token economies. ## Prerequisites Before starting this course, you should have completed the [Blockchain Fundamentals](/academy/blockchain-fundamentals) and [Avalanche Fundamentals](/academy/avalanche-fundamentals) courses of the [Avalanche Developers Academy](https://build.avax.network/academy#:~:text=Choose%20Your%20Learning%20Path) learning tree. 
## Learning Outcomes By the end of this course, you will: - **Understand Token Fundamentals**: Gain deep insights into what tokens are, their differences, and the implications of creating native tokens versus ERC20 tokens. - **Master Native Token Creation**: Learn how to create custom native tokens and understand when you need them versus when ERC20 tokens are sufficient. - **Leverage Precompiles**: Understand how to use the Native Minter and Fee Config Precompiles to create powerful tokenomics. - **Design Token Distribution**: Create effective vesting schedules, bonding curves, and airdrop strategies for your native tokens. - **Implement Governance**: Develop governance structures including DAOs and quadratic voting models for decentralized decision-making. But before diving into the technical implementation, we've included two essential think pieces to help you make informed decisions: **Essential Reading**: [Dapp vs L1](/academy/l1-native-tokenomics/dapp-vs-l1) - A critical analysis to determine whether your use case actually requires its own L1 with native tokenomics, or if deploying on an existing chain would be more appropriate. **Highly Recommended**: [Token Ownership](/academy/l1-native-tokenomics/token-ownership) - While not strictly necessary for implementation, this deep dive into the philosophical and practical aspects of token ownership will give you a much richer understanding of the underlying concepts at play in tokenomics design. # Token Ownership (/academy/l1-native-tokenomics/token-ownership) --- title: Token Ownership description: Everything you need to know about token ownership on blockchains updated: 2025-08-21 authors: [nicolasarnedo] icon: Key --- ## A Short History and Why It Matters Token ownership means recording who controls a digital asset on a public ledger, enforced by cryptography and consensus rather than by legal registries alone. 
It unlocks portable, programmable, and verifiable property rights for anything that can be represented in bits. ### From Scarcity to Programmability - Bitcoin (2009) solved digital scarcity and established provable, bearer‑style ownership “by code.” - Ethereum (2015) generalized ownership with smart contracts, enabling tokens: programmable representations of value, access, or rights. The 2014–2018 wave of token sales showed how ownership could be distributed globally without intermediaries—sometimes as utility access, sometimes (mistakenly) as implied equity. The lesson: tokens coordinate, but value capture must be designed, not assumed. ### What “Ownership” Means On‑Chain On blockchains, ownership is: - Verifiable: anyone can audit balances and provenance. - Portable: assets move across apps and wallets with private keys. - Programmable: rights can encode governance, access, royalties, staking, or fee distribution. - Composable: tokens integrate with DeFi, marketplaces, and tooling by common standards. ### Token Classes at a Glance Classification of crypto assets At a high level: - Fungible tokens (e.g., ERC‑20) track interchangeable units (balances, payments, governance power). - Non‑fungible tokens (e.g., ERC‑721) track unique items and their provenance (collectibles, credentials, rights). - Hybrids and multi‑tokens (e.g., ERC‑1155) combine both models for efficient gaming/commerce flows. ### Why Tokenized Ownership Works Tokens drastically reduce friction to distribute and align ownership. Like equity, they can motivate long‑term contribution; unlike equity, they are programmable (e.g., automatic reward streams, on‑chain voting) and readily tradable. But not every token confers the same rights: designs differ in value capture (fee burns, staking rewards, access discounts), transferability, and governance—and must consider regulation. ### Looking Ahead As you move into native tokens and ERC‑20s next, keep this lens: token ownership is a coordination primitive. 
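As a toy illustration of ownership enforced by code rather than by a registry, the sketch below models a minimal fungible-token ledger (an ERC-20-style balance map) in plain TypeScript. The class and method names are illustrative, not a real smart contract or library API:

```typescript
// Minimal fungible-token ledger sketch: verifiable balances and a
// transfer rule enforced by code, not by a legal registry.
class TokenLedger {
  private balances = new Map<string, bigint>();

  constructor(initialHolder: string, initialSupply: bigint) {
    this.balances.set(initialHolder, initialSupply);
  }

  // Anyone can audit a holder's balance (the "verifiable" property).
  balanceOf(owner: string): bigint {
    return this.balances.get(owner) ?? 0n;
  }

  // Ownership changes hands only if the sender controls enough units.
  transfer(from: string, to: string, amount: bigint): boolean {
    const fromBal = this.balanceOf(from);
    if (amount <= 0n || fromBal < amount) return false;
    this.balances.set(from, fromBal - amount);
    this.balances.set(to, this.balanceOf(to) + amount);
    return true;
  }
}

const ledger = new TokenLedger("alice", 1_000n);
ledger.transfer("alice", "bob", 250n);
console.log(ledger.balanceOf("bob")); // 250n
```

A real ERC-20 adds allowances, events, and metadata on top of this same balance-map core, but the ownership primitive is exactly this check-then-update rule.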
Good designs clearly define what is owned (rights), how value accrues (mechanics), and how ownership changes hands (standards and controls). With those pillars, tokens become reliable building blocks for open, interoperable economies. # Course Completion Certificate (/academy/interchain-token-transfer/certificate) --- title: Course Completion Certificate updated: 2024-10-11 authors: [owenwahlgren] icon: BadgeCheck --- import CertificatePage from '@/components/quizzes/certificates'; You've made it to the end of the course. Let's check your progress and get your certificate. Thank you for participating in this course. We hope you found it informative and enjoyable! # Welcome to the Course (/academy/interchain-token-transfer) --- title: Welcome to the Course description: Learn about sending assets to other L1s with Avalanche Interchain Token Transfer. updated: 2024-05-31 authors: [ashucoder9] icon: Smile --- In this course, you will learn how to transfer assets across multiple Avalanche blockchains with Avalanche Interchain Token Transfer (ICTT). ## Why Take This Course? A significant innovation in blockchain is the development of multi-chain systems, like Avalanche, which provide a significant improvement in scalability, interoperability, and flexibility. At the core of these multi-chain systems is the ability to run multiple blockchains that communicate. Each chain's VM is optimized for specialized use cases, thereby boosting the network's overall performance. Cross-chain communication is a crucial building block of multi-chain systems. Utilizing Avalanche Interchain Messaging and Interchain Token Transfer is an incredibly easy way to build cross-Avalanche L1 dApps, since developers can build on top of an extensive, audited development framework. Transferring tokens between multiple chains is a common use case in multi-chain systems. 
This course will help you understand how to transfer assets between multiple Avalanche blockchains using the Avalanche Interchain Token Transfer protocol. This course focuses on using the existing testnet chains (Fuji C-Chain, Echo, and Dispatch). If you want to create your own L1 blockchain, please refer to the [Creating an L1](/academy/avalanche-fundamentals/04-creating-an-l1/01-creating-an-l1) course. Since Interchain Token Transfer relies on Interchain Messaging (ICM), you must run your own relayer to enable cross-chain communication. Follow the [Running a Relayer](/academy/interchain-messaging/09-running-a-relayer/01-relayer-introduction) course to set up your relayer. ## Course Content ### Getting Started with Interchain Token Transfer In this section, you will learn how to use our Interchain Token Transfer toolbox to perform cross-chain operations. We'll guide you through the process of using our user-friendly interface to deploy contracts, create bridges, and transfer assets across the testnet chains (Fuji C-Chain, Echo, and Dispatch). ### Tokens and Token Types In this section, you will learn about the different types of tokens that can be transferred between Avalanche blockchains. We will cover ERC-20 and native tokens and how to deploy and transfer them using our toolbox. Furthermore, you will learn what wrapped native tokens are and how they can be used to transfer assets between chains. ### Token Bridging Next we will talk about the high level concepts of token bridging and demonstrate how to use our toolbox to create and manage bridge contracts for cross-chain transfers between the testnet chains. ### Interchain Token Transfer Architecture In this chapter we will look at the design of Avalanche Interchain Token Transfer. You will learn about the file structure of the contracts and the concepts of the token home and token remote. 
### ERC-20 to ERC-20 Bridge Implementation You will learn how to use our toolbox to deploy ERC-20 tokens and create bridges to transfer them between the testnet chains. ### Multi-Chain Token Operations Here you will learn about the concept of multi-hops and how to use our toolbox to bridge tokens between multiple testnet chains. ### Native to ERC-20 Bridge Implementation In this chapter you will learn how to use our toolbox to bridge a native token as an ERC-20 token to another testnet chain. ### Send and Call Operations In this chapter you will learn how to use our toolbox to call smart contracts with the tokens after sending them to another testnet chain. ### Cross-Chain Token Swaps In this chapter you will learn how to perform cross-chain token swaps between the testnet chains using our toolbox. ## Prerequisites ### Avalanche Knowledge This course is intended for people with knowledge of cross-chain communication protocols and a solid understanding of the basic concepts of Avalanche. You should be familiar with these concepts: 1. Avalanche Architecture: Be familiar with Avalanche blockchains. 2. Interchain Messaging: Know how to communicate between two Avalanche blockchains with ICM. If some of this is not clear, we strongly recommend taking the Avalanche Fundamentals, Multi-Chain Architecture, and Interchain Messaging courses first. ### Software Development You will need a general understanding of how to use Web3 applications. We recommend: 1. Basic understanding of how to use Core Wallet - Download from [core.app](https://core.app) 2. Test tokens for development - **Recommended:** Create a [Builder Hub account](https://build.avax.network/login) and connect your wallet to receive testnet tokens automatically - **Alternative:** Use external faucets like [core.app/tools/testnet-faucet](https://core.app/tools/testnet-faucet/?subnet=c&token=c) with code `avalanche-academy` 3. Understanding of token standards (ERC-20, etc.) 
## Learning Outcomes By the end of this course, you will: - Understand what Avalanche Interchain Token Transfer is and when to use it. - Understand the different options for transferring assets between multiple chains. - Be able to deploy tokens and create bridges using our toolbox. - Be able to perform cross-chain token transfers between testnet chains using our toolbox. - Apply the knowledge gained in the course by enabling assets to be transferred between multiple Avalanche blockchains. # Course Completion Certificate (/academy/multi-chain-architecture/certificate) --- title: Course Completion Certificate updated: 2024-10-11 authors: [owenwahlgren] icon: BadgeCheck --- import CertificatePage from '@/components/quizzes/certificates'; You've made it to the end of the course. Let's check your progress and get your certificate. Thank you for participating in this course. We hope you found it informative and enjoyable! # Welcome to the Course (/academy/multi-chain-architecture) --- title: Welcome to the Course description: Learn about the Multi-Chain Architecture. updated: 2024-06-28 authors: [usmaneth] icon: Smile --- Are you ready to dive deeper into the fascinating world of Avalanche? This course will equip you with a comprehensive understanding of Avalanche's Multi Blockchain architecture, along with the practical skills required to create and manage custom blockchains. Ideal for people familiar with the basics of Avalanche, this course will unlock the potential of creating custom blockchains. We will explore the various components and key concepts of the Custom Blockchain architecture, empowering you to fully leverage its benefits. Throughout this course, you will gain hands-on experience by utilizing the Avalanche Command Line Interface (Avalanche CLI) to create your own custom blockchains and run them locally. You will explore different variations and configurations to learn the versatility and advantages of custom blockchains. 
Additionally, you will interact with the local instances using the Core Wallet, further enhancing your practical skills. Let's get started exploring the power of Avalanche Custom Blockchains. ## Presentation Below you can find a recording of a presentation about the Multi-Chain Architecture of Avalanche. ## Prerequisites This course is for people who understand the Avalanche basics. If you are not familiar with Avalanche Consensus, Virtual Machines, blockchains in Avalanche, or Avalanche Custom Blockchains, we strongly recommend taking our [Avalanche Fundamentals](/academy/avalanche-fundamentals) course first. ## Learning Outcomes In this course you will learn: - The benefits of Avalanche Custom Blockchains - How to use Avalanche CLI to create and run custom blockchains locally - How to interact with Avalanche L1s using Core Wallet # Course Completion Certificate (/academy/permissioned-l1s/certificate) --- title: Course Completion Certificate description: Get your completion certificate for the Permissioned L1s course updated: 2025-03-19 authors: [nicolasarnedo] icon: BadgeCheck --- import CertificatePage from '@/components/quizzes/certificates'; You've made it to the end of the course! Let's check your progress and get your certificate. Thank you for participating in this course. We hope you found it informative and enjoyable! # Welcome to the Course (/academy/permissioned-l1s) --- title: Welcome to the Course description: Learn about L1 Validator Management for Permissioned Blockchains updated: 2025-07-15 authors: [nicolasarnedo] icon: Smile --- ## Permissioned L1s Welcome to the **Permissioned L1s** course! This course is designed to give you a deep understanding of configuring, launching and maintaining permissioned L1s on Avalanche. By the end of this course, you will have practical skills in deploying an L1 and managing the validator set in a Proof of Authority (PoA) network. 
## Video ### What You'll Learn This comprehensive course will walk you through: - **Introduction** - P-Chain review, how Validator Manager Contracts use ICM & commonly used Proxy Patterns - **Proof of Authority** - Understanding permissioning types for blockchains, what Proof of Authority is and the Validator Manager contract structure - **Create an L1** - Creating a Subnet, diving deep into the Transparent Proxy pattern, understanding genesis pre-deployed contracts and creating your L1 (recommended to first have created an L1 in the [Avalanche Fundamentals course](/academy/avalanche-fundamentals)) - **Validator Manager Deployment** - Deploying and configuring the Validator Manager Contract (VMC) on your new L1 - **Validator Manager Operations** - Adding, changing weights and removing validators - **Coming Soon... Multi-Sig Setup for PoA** - Implementing secure multi-signature governance with Safe/Ash wallets - **Coming Soon... Private L1s** - Configuring validator-only access and RPC node restrictions ### Let's Get Started! Each section builds upon previous knowledge, so we recommend completing the course in order. Happy learning and building! # Course Completion Certificate (/academy/permissionless-l1s/certificate) --- title: Course Completion Certificate description: Get your completion certificate for the Permissionless L1s course updated: 2025-03-19 authors: [nicolasarnedo] icon: BadgeCheck --- import CertificatePage from '@/components/quizzes/certificates'; You've made it to the end of the course! Let's check your progress and get your certificate. Thank you for participating in this course. We hope you found it informative and enjoyable! # Welcome to the Course (/academy/permissionless-l1s) --- title: Welcome to the Course description: Learn about L1 Validator Management for Permissionless Blockchains updated: 2025-01-15 authors: [nicolasarnedo] icon: Smile --- ## Permissionless L1s Welcome to the **Permissionless L1s** course! 
This course is designed to give you a deep understanding of configuring, launching and maintaining permissionless L1s on Avalanche. By the end of this course, you will have practical skills in deploying an L1 with Proof of Stake (PoS) consensus and managing the validator set in a permissionless network. ### Prerequisites Before starting this course, we recommend completing: - [L1 Native Tokenomics](/academy/l1-native-tokenomics) - Understanding tokenomics fundamentals - [Permissioned L1s](/academy/permissioned-l1s) - Foundation in L1 validator management If you just completed these recently, you can go ahead and skip the [Review chapter](). If not, we recommend giving it a read, it will cover: - P-Chain fundamentals - Multi-Chain Architecture - Permissioned L1s - Native Tokenomics ### What You'll Learn This comprehensive course will walk you through: - **Proof of Stake** - Understanding PoS consensus, staking token selection, and liquid staking considerations - **Necessary Precompiles** - Native Minter and Reward Manager precompiles for tokenomics - **Basic Setup** - Deploying Validator Manager Contract, upgrading proxy, and initializing validator configuration - **Staking Manager Setup** - Deploying and configuring staking managers for native token staking - **Staking Manager Operations** - Adding, changing weights, removing validators, and handling delegation - **Node Licenses** - Understanding validator licensing and real-world examples ### Let's Get Started! Each section builds upon previous knowledge, so we recommend completing the course in order. Happy learning and building! # Welcome to the course (/academy/solidity-foundry) --- title: Welcome to the course description: Learn the basics about programming Smart Contracts in Solidity for an EVM Blockchain updated: 2024-05-31 authors: [Andrea Vargas] icon: Smile --- In this course, you will learn how to build Solidity dApps on Avalanche. ## Why Take This Course? 
A significant innovation in blockchain is the development of multi-chain systems, like Avalanche, which offer major improvements in scalability, interoperability, and flexibility. While a blockchain on Avalanche can be run with any VM, the most prominent choice currently is the Ethereum Virtual Machine (EVM). Users can deploy their own logic in the form of smart contracts to the EVM. These smart contracts can be written in Solidity. Learning Solidity can enable you to leverage the features of blockchain for your dApp. ## Course Content ### Smart Contracts In the first section, we will look at what smart contracts are, their basic structure and how they work. ### Hello World Part I In this section we will look at primitive types (strings, integers, booleans, ...) and functions, as well as the Solidity file structure. ### Hello World Part II We will look at control flows (if & else), data structures and constructors. Furthermore, we will learn about inheriting from other contracts, as well as modifiers and events. ### Contract Standardization You will learn how contracts can be standardized and how inheritance and interfaces can help us to do so. ## Prerequisites ### Blockchain / Web3 This course is meant for people with some experience in web3. You should be familiar with these concepts: - Wallet: What they are and how to create one - dApp: What a decentralized application is and how to interact with one If any of this is unclear, we strongly recommend taking the Avalanche Fundamentals and Subnet Architecture courses first, which give a gentle introduction to these topics from a user's standpoint. ### Software Development You will need a general understanding of Software Development. Therefore, we recommend: - Programming: Familiarity with the most basic concepts of programming, such as variables, control flows (if, else) and loops. All exercises will consist of writing small contracts in Solidity.
- IDE: It will help if you're generally familiar with the concept of an integrated development environment. We will be leveraging Remix. ## Learning Outcomes By the end of this course, students will: - Deploy and interact with contracts using Foundry - Get familiar with the ERC20 and ERC721 token standards - Understand important concepts such as inheritance, modifiers, and events - Apply their knowledge by building their own smart contracts # Features (/academy/avacloudapis/02-overview/01-about) --- title: Features description: Learn about the features of the AvaCloud API. updated: 2024-09-03 authors: [owenwahlgren] icon: Book --- ## What is the Data API? The Data API provides web3 application developers with multi-chain data related to Avalanche’s primary network, Avalanche L1s, and Ethereum. With the Data API, you can easily build products that leverage real-time and historical transaction and transfer history, native and token balances, and various types of token metadata. ### Data API Features - **Extensive L1 Support**: Gain access to data from more than 100 L1s across both mainnet and testnet. If an L1 is listed on the [Avalanche Explorer](https://subnets.avax.network/stats/), you can query its data using the Data API. - **Transactions and UTXOs**: Easily retrieve details related to transactions, UTXOs, and token transfers from Avalanche EVMs, Ethereum, and Avalanche’s Primary Network (P-Chain, X-Chain and C-Chain). - **Blocks**: Retrieve the latest blocks and block details. - **Balances**: Fetch balances of native, ERC-20, ERC-721, and ERC-1155 tokens along with relevant metadata. - **Tokens**: Augment your user experience with asset details. - **Staking**: Get staking-related data for active and historical validations.
This API delivers comprehensive metrics and analytics, enabling you to seamlessly integrate historical data on transactions, gas consumption, throughput, staking, and more into your applications. The Metrics API, along with the Data API, is the driving force behind every graph you see on the [Avalanche Explorer](https://subnets.avax.network/stats/). From transaction trends to staking insights, the visualizations and data presented are all powered by these APIs, offering real-time and historical insights that are essential for building sophisticated, data-driven blockchain products. ### Metrics API Features - **Chain Throughput**: Retrieve detailed metrics on gas consumption, Transactions Per Second (TPS), and gas prices, including rolling windows of data for granular analysis. - **Cumulative Metrics**: Access cumulative data on addresses, contracts, deployers, and transaction counts, providing insights into network growth over time. - **Staking Information**: Obtain staking-related data, including the number of validators and delegators, along with their respective weights, across different L1s. - **Blockchains and L1s**: Get information about supported blockchains, including EVM Chain IDs, blockchain IDs, and L1 associations, facilitating multi-chain analytics. - **Composite Queries**: Perform advanced queries by combining different metric types and conditions, enabling detailed and customizable data retrieval. ## What is the AvaCloudSDK? The [AvaCloud SDK](https://developers.avacloud.io/avacloud-sdk/getting-started) provides web3 application developers with multi-chain data related to Avalanche’s primary network, Avalanche L1s, and Ethereum. With the Data API, you can easily build products that leverage real-time and historical transaction and transfer history, native and token balances, and various types of token metadata. The SDK is currently available in TypeScript, with more languages coming soon.
If you are interested in a language that is not listed, please reach out to us in the [`#dev-tools`](https://discord.com/login?redirect_to=%2Flogin%3Fredirect_to%3D%252Fchannels%252F578992315641626624%252F1280920394236297257) channel in the [Avalanche Discord](https://discord.com/invite/avax). # APIs vs RPCs (/academy/avacloudapis/02-overview/02-apis-vs-rpc) --- title: APIs vs RPCs description: Learn how the AvaCloud Data API differs from RPC calls updated: 2024-09-03 authors: [owenwahlgren] icon: Book --- Blockchain RPCs and APIs both facilitate interactions with a network, but they differ significantly in how they operate. ### RPCs **Blockchain RPCs** allow you to communicate directly with a blockchain node, performing tasks like querying data or submitting transactions. These are low-level, synchronous calls, requiring a deep understanding of the blockchain's structure and specific commands. To get a more comprehensive understanding of Ethereum's JSON-RPC API, you can refer to the [official Ethereum documentation](https://ethereum.org/en/developers/docs/apis/json-rpc/). ### APIs **Blockchain APIs**, like the AvaCloud Data API, abstract away much of the complexity. They offer higher-level, user-friendly endpoints that streamline interactions, making it easier to build and manage blockchain applications without needing in-depth knowledge of the underlying blockchain protocols. To get a more comprehensive understanding of the AvaCloud Data API, you can refer to the [official AvaCloud Data API documentation](https://developers.avacloud.io/data-api/overview). ### Example Use Case For example, querying a user's ERC-20 portfolio using an RPC involves a series of complex calls to retrieve and parse raw blockchain data. Using just RPCs, you would need to: 1. Query every block on the network for transaction logs. 2. Parse each transaction log to identify ERC-20 token transfers. 3. Extract the ERC-20 token contract address. 4. 
For each ERC-20 token contract, query the user's address to get the balance. 5. Parse and aggregate the data to present the user's portfolio. While it may seem simple in theory, this process can be time-consuming and error-prone, especially when dealing with multiple blockchains. With the AvaCloud Data API, you could simply use a dedicated endpoint such as: ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/balances:listErc20 \ --header 'x-glacier-api-key: ' ``` to get a neatly formatted response with the user's ERC-20 portfolio, significantly reducing development time and complexity. ```json { "nextPageToken": "", "erc20TokenBalances": [ { "address": "0x71C7656EC7ab88b098defB751B7401B5f6d8976F", "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/5VHupNKwnDYJvqMENeV7iJ/fdd6326b7a82c8388e4ee9d4be7062d4/avalanche-avax-logo.svg", "ercType": "ERC-20", "price": { "currencyCode": "usd", "value": "42.42" }, "chainId": "43114", "balance": "2000000000000000000", "balanceValue": { "currencyCode": "usd", "value": "42.42" } } ] } ``` # Data API (/academy/avacloudapis/02-overview/03-dataapi-endpoints) --- title: Data API description: Learn about AvaCloud Data API. updated: 2024-09-03 authors: [owenwahlgren] icon: Book --- ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/glacier-data-api-7knGxPQ6gpsJcZehfeZVYcRUAr6u1l.png) The AvaCloud Data API provides a comprehensive set of endpoints to interact with Avalanche networks. These endpoints allow you to query information about blocks, transactions, assets, and more. Below are some of the key endpoints available in the Data API. A more comprehensive list of Data API endpoints can be found [here](https://developers.avacloud.io/data-api/overview). 
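The endpoints in the reference below all share the same URL shape, with path parameters in braces. As a small illustration, a request to the `balances:listErc20` endpoint from the curl example above can be assembled like this in TypeScript (the helper name is ours, not part of any SDK; the API key value is a placeholder you must supply):

```typescript
// Hypothetical helper (not from the AvaCloud SDK): builds the URL and
// headers for the listErc20 endpoint shown in the curl example above.
function erc20BalancesRequest(chainId: string, address: string, apiKey: string) {
  const base = "https://glacier-api.avax.network/v1";
  return {
    url: `${base}/chains/${chainId}/addresses/${address}/balances:listErc20`,
    headers: { "x-glacier-api-key": apiKey },
  };
}

// Placeholder API key; pass the request to fetch() or any HTTP client.
const req = erc20BalancesRequest(
  "43114",
  "0x71C7656EC7ab88b098defB751B7401B5f6d8976F",
  "YOUR_API_KEY"
);
console.log(req.url);
```

The same pattern applies to every endpoint below: substitute the path parameters and attach your key in the `x-glacier-api-key` header.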
## Data API Reference ### EVM Endpoints - [List All Chains](https://developers.avacloud.io/data-api/evm-chains/list-chains) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains \ --header 'accept: application/json' ``` - [Get Chain Information](https://developers.avacloud.io/data-api/evm-chains/get-chain-information) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId} \ --header 'accept: application/json' ``` - [List Latest Blocks](https://developers.avacloud.io/data-api/evm-chains/list-latest-blocks) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/blocks \ --header 'accept: application/json' ``` - [Get Block Information](https://developers.avacloud.io/data-api/evm-chains/getblock) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/blocks/{blockId} \ --header 'accept: application/json' ``` - [Get Deployment Transaction](https://developers.avacloud.io/data-api/evm-transactions/get-deployment-transaction) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/contracts/{address}/transactions:getDeployment \ --header 'accept: application/json' ``` - [List Deployed Contracts](https://developers.avacloud.io/data-api/evm-transactions/list-deployed-contracts) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/contracts/{address}/deployments \ --header 'accept: application/json' ``` - [List ERC Transfers](https://developers.avacloud.io/data-api/evm-transactions/list-erc-transfers) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/tokens/{address}/transfers \ --header 'accept: application/json' ``` - [List Transactions](https://developers.avacloud.io/data-api/evm-transactions/list-transactions) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/transactions \ --header 'accept: 
application/json' ``` - [List Native Transactions](https://developers.avacloud.io/data-api/evm-transactions/list-native-transactions) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/transactions:listNative \ --header 'accept: application/json' ``` - [List ERC-20 Transfers](https://developers.avacloud.io/data-api/evm-transactions/list-erc-20-transfers) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/transactions:listErc20 \ --header 'accept: application/json' ``` - [List ERC-721 Transfers](https://developers.avacloud.io/data-api/evm-transactions/list-erc-721-transfers) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/transactions:listErc721 \ --header 'accept: application/json' ``` - [List ERC-1155 Transfers](https://developers.avacloud.io/data-api/evm-transactions/list-erc-1155-transfers) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/transactions:listErc1155 \ --header 'accept: application/json' ``` - [List Internal Transactions](https://developers.avacloud.io/data-api/evm-transactions/list-internal-transactions) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/transactions:listInternals \ --header 'accept: application/json' ``` - [Get Transaction](https://developers.avacloud.io/data-api/evm-transactions/get-transaction) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/transactions/{txHash} \ --header 'accept: application/json' ``` - [List Transactions For a Block](https://developers.avacloud.io/data-api/evm-transactions/list-transactions-for-a-block) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/blocks/{blockId}/transactions \ --header 'accept: application/json' ``` - [List Latest 
Transactions](https://developers.avacloud.io/data-api/evm-transactions/list-latest-transactions) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/transactions \ --header 'accept: application/json' ``` - [Get Native Token Balance](https://developers.avacloud.io/data-api/evm-balances/get-native-token-balance) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/balances:getNative \ --header 'accept: application/json' ``` - [List ERC-20 Balances](https://developers.avacloud.io/data-api/evm-balances/list-erc-20-balances) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/balances:listErc20 \ --header 'accept: application/json' ``` - [List ERC-721 Balances](https://developers.avacloud.io/data-api/evm-balances/list-erc-721-balances) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/balances:listErc721 \ --header 'accept: application/json' ``` - [List ERC-1155 Balances](https://developers.avacloud.io/data-api/evm-balances/list-erc-1155-balances) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/balances:listErc1155 \ --header 'accept: application/json' ``` - [List Collectible (ERC-721 and ERC-1155) Balances](https://developers.avacloud.io/data-api/evm-balances/list-collectible-erc-721erc-1155-balances) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address}/balances:listCollectibles \ --header 'accept: application/json' ``` - [Get Contract Metadata](https://developers.avacloud.io/data-api/evm-contracts/get-contract-metadata) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/addresses/{address} \ --header 'accept: application/json' ``` - [Reindex NFT 
Metadata](https://developers.avacloud.io/data-api/nfts/reindex-nft-metadata) ```bash curl --request POST \ --url https://glacier-api.avax.network/v1/chains/{chainId}/nfts/collections/{address}/tokens/{tokenId}:reindex \ --header 'accept: application/json' ``` - [List Tokens](https://developers.avacloud.io/data-api/nfts/list-tokens) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/nfts/collections/{address}/tokens \ --header 'accept: application/json' ``` - [Get Token Details](https://developers.avacloud.io/data-api/nfts/get-token-details) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/chains/{chainId}/nfts/collections/{address}/tokens/{tokenId} \ --header 'accept: application/json' ``` ### Avalanche Primary Network Endpoints - [Get Chain Interactions for Addresses](https://developers.avacloud.io/data-api/primary-network/get-chain-interactions-for-addresses) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/{network}/addresses:listChainIds \ --header 'accept: application/json' ``` - [Get Network Details](https://developers.avacloud.io/data-api/primary-network/get-network-details) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/{network} \ --header 'accept: application/json' ``` - [List Blockchains](https://developers.avacloud.io/data-api/primary-network/list-blockchains) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/{network}/blockchains \ --header 'accept: application/json' ``` - [List Subnets](https://developers.avacloud.io/data-api/primary-network/list-subnets) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/subnets \ --header 'accept: application/json' ``` - [Get Subnet Details by ID](https://developers.avacloud.io/data-api/primary-network/get-subnet-details-by-id) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/subnets/{subnetId}
\ --header 'accept: application/json' ``` - [List Validators](https://developers.avacloud.io/data-api/primary-network/list-validators) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/validators \ --header 'accept: application/json' ``` - [Get Single Validator Details](https://developers.avacloud.io/data-api/primary-network/get-single-validator-details) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/validators/{nodeId} \ --header 'accept: application/json' ``` - [List Delegators](https://developers.avacloud.io/data-api/primary-network/list-delegators) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/delegators \ --header 'accept: application/json' ``` - [Get Block](https://developers.avacloud.io/data-api/primary-network-blocks/get-block) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/blockchains/11111111111111111111111111111111LpoYY/blocks/{blockId} \ --header 'accept: application/json' ``` - [List Blocks Proposed by Node](https://developers.avacloud.io/data-api/primary-network-blocks/list-blocks-proposed-by-node) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/blockchains/11111111111111111111111111111111LpoYY/nodes/{nodeId}/blocks \ --header 'accept: application/json' ``` - [List Latest Blocks](https://developers.avacloud.io/data-api/primary-network-blocks/list-latest-blocks) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/blockchains/11111111111111111111111111111111LpoYY/blocks \ --header 'accept: application/json' ``` - [List Vertices](https://developers.avacloud.io/data-api/primary-network-vertices/list-vertices) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/blockchains/2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM/vertices \ --header 'accept: application/json' ``` - [Get 
Vertex](https://developers.avacloud.io/data-api/primary-network-vertices/get-vertex) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/blockchains/2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM/vertices/{vertexHash} \ --header 'accept: application/json' ``` - [List Vertices by Height](https://developers.avacloud.io/data-api/primary-network-vertices/list-vertices-by-height) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/blockchains/2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM/vertices:listByHeight \ --header 'accept: application/json' ``` - [Get Transaction](https://developers.avacloud.io/data-api/primary-network-transactions/get-transaction) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/{network}/blockchains/{blockchainId}/transactions/{txHash} \ --header 'accept: application/json' ``` - [List Latest Transactions](https://developers.avacloud.io/data-api/primary-network-transactions/list-latest-transactions) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/{network}/blockchains/{blockchainId}/transactions \ --header 'accept: application/json' ``` - [List Staking Transactions](https://developers.avacloud.io/data-api/primary-network-transactions/list-staking-transactions) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/blockchains/11111111111111111111111111111111LpoYY/transactions:listStaking \ --header 'accept: application/json' ``` - [List Asset Transactions](https://developers.avacloud.io/data-api/primary-network-transactions/list-asset-transactions) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/blockchains/2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM/assets/{assetId}/transactions \ --header 'accept: application/json' ``` - [List UTXOs](https://developers.avacloud.io/data-api/primary-network-utxos/list-utxos) ```bash curl
--request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/blockchains/11111111111111111111111111111111LpoYY/utxos \ --header 'accept: application/json' ``` - [Get Balances](https://developers.avacloud.io/data-api/primary-network-balances/get-balances) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/blockchains/11111111111111111111111111111111LpoYY/balances \ --header 'accept: application/json' ``` - [List Pending Rewards](https://developers.avacloud.io/data-api/primary-network-rewards/list-pending-rewards) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/rewards:listPending \ --header 'accept: application/json' ``` - [List Historical Rewards](https://developers.avacloud.io/data-api/primary-network-rewards/list-historical-rewards) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/rewards \ --header 'accept: application/json' ``` - [Get Asset Details](https://developers.avacloud.io/data-api/primary-network/get-asset-details) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/networks/mainnet/blockchains/2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM/assets/{assetId} \ --header 'accept: application/json' ``` # Webhooks API (/academy/avacloudapis/02-overview/04-webhooks) --- title: Webhooks API description: Learn about AvaCloud Webhooks API. updated: 2024-09-03 authors: [owenwahlgren] icon: Book --- ## What is a Webhook? A webhook is a communication mechanism to provide applications with real-time information. It delivers data to other applications as it happens, meaning you get data immediately, unlike typical APIs where you would need to poll for data to get it in "real-time". This makes webhooks much more efficient for both providers and consumers. Webhooks work by registering a URL to send notifications once certain events occur. 
You can create receiver endpoints on your server in any programming language, and each will have an associated URL. When an event occurs, the webhook delivers a notification object to that URL; this object contains all the relevant information about what just happened, including the type of event and the data associated with that event. ## What are AvaCloud Webhooks? With the Webhooks API, you can monitor real-time events on the Avalanche C-Chain and L1s. For example, you can monitor smart contract events, track NFT transfers, and observe wallet-to-wallet transactions. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/glacier-webhooks-xpLeH8o7slcnvOdWsIorDJSUeNlz04.png) ### Key Features: - **Real-time notifications:** Receive immediate updates on specified on-chain activities without polling. - **Customizable:** Specify the desired event type to listen for, customizing notifications according to individual requirements. - **Secure:** Employ shared secrets and signature-based verification to guarantee that notifications originate from a trusted source. - **Broad Coverage:** Support for C-Chain mainnet, testnet, and L1s within the Avalanche ecosystem, ensuring wide-ranging monitoring capabilities. ### Use Cases: - **NFT Marketplace Transactions:** Get alerts for NFT minting, transfers, auctions, bids, sales, and other interactions within NFT marketplaces. - **Wallet Notifications:** Receive alerts when an address performs actions such as sending, receiving, swapping, or burning assets. - **DeFi Activities:** Receive notifications for various DeFi activities such as liquidity provisioning, yield farming, borrowing, lending, and liquidations.
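The shared-secret verification mentioned above usually boils down to recomputing an HMAC over the raw request body and comparing it to the signature the webhook sends. The sketch below illustrates that pattern in TypeScript using Node's `crypto` module; the exact header name and encoding AvaCloud uses are specified in the Webhooks docs, so treat this as a generic sketch rather than the official scheme:

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Generic HMAC-SHA256 signature check for a webhook payload.
// Assumption: hex-encoded HMAC of the raw body — consult the AvaCloud
// Webhooks docs for the exact encoding and header your receiver gets.
function isSignatureValid(rawBody: string, sharedSecret: string, signature: string): boolean {
  const expected = createHmac("sha256", sharedSecret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so guard first
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Your receiver endpoint would run this check on every incoming notification before trusting its contents, rejecting anything that does not match the secret generated via the Webhooks API.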
## Webhooks API Reference [Create a new webhook](https://developers.avacloud.io/webhooks-api/webhooks/create-a-webhook) ```bash curl --request POST \ --url https://glacier-api.avax.network/v1/webhooks \ --header 'accept: application/json' \ --header 'content-type: application/json' \ --data '{ "eventType": "address_activity" }' ``` [Lists webhooks for the user.](https://developers.avacloud.io/webhooks-api/webhooks/list-webhooks) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/webhooks \ --header 'accept: application/json' ``` [Retrieves a webhook by ID.](https://developers.avacloud.io/webhooks-api/webhooks/get-a-webhook-by-id) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/webhooks/{id} \ --header 'accept: application/json' ``` [Deactivates a webhook by ID.](https://developers.avacloud.io/webhooks-api/webhooks/deactivate-a-webhook) ```bash curl --request DELETE \ --url https://glacier-api.avax.network/v1/webhooks/{id} \ --header 'accept: application/json' ``` [Updates an existing webhook.](https://developers.avacloud.io/webhooks-api/webhooks/update-a-webhook) ```bash curl --request PATCH \ --url https://glacier-api.avax.network/v1/webhooks/{id} \ --header 'accept: application/json' \ --header 'content-type: application/json' ``` [Generates a new shared secret.](https://developers.avacloud.io/webhooks-api/webhooks/generate-a-shared-secret) ```bash curl --request POST \ --url https://glacier-api.avax.network/v1/webhooks:generateOrRotateSharedSecret \ --header 'accept: application/json' ``` [Get a previously generated shared secret.](https://developers.avacloud.io/webhooks-api/webhooks/get-a-shared-secret) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/webhooks:getSharedSecret \ --header 'accept: application/json' ``` [Add addresses to webhook.](https://developers.avacloud.io/webhooks-api/webhooks/add-addresses-to-webhook) ```bash curl --request PATCH \ --url
https://glacier-api.avax.network/v1/webhooks/{id}/addresses \ --header 'accept: application/json' \ --header 'content-type: application/json' ``` [Remove addresses from webhook.](https://developers.avacloud.io/webhooks-api/webhooks/remove-addresses-from-webhook) ```bash curl --request DELETE \ --url https://glacier-api.avax.network/v1/webhooks/{id}/addresses \ --header 'accept: application/json' \ --header 'content-type: application/json' ``` [List addresses by webhook.](https://developers.avacloud.io/webhooks-api/webhooks/list-adresses-by-webhook) ```bash curl --request GET \ --url https://glacier-api.avax.network/v1/webhooks/{id}/addresses \ --header 'accept: application/json' ``` # Create API Key (/academy/avacloudapis/03-environment-setup/01-avacloud-account) --- title: Create API Key description: Get an API key for AvaCloud updated: 2024-09-03 authors: [owenwahlgren] icon: Book --- import { Step, Steps } from 'fumadocs-ui/components/steps'; ## Get an AvaCloud API Key In order to utilize your account's rate limits, you will need to make API requests with an API key. You can generate API Keys from the [AvaCloud portal](https://app.avacloud.io/glacier-api/). Create an account on [AvaCloud](https://avacloud.io). Click on the `Web3 Data API` [tab](https://app.avacloud.io/glacier-api/). ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/create-api-key-gfBR9O5rSJilgNzws8JWyHXctbsXox.png) Click `+ Add API Key` and create a new name for your API key. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/name-api-key-NuNRHpnSFC3dLvLYdbztDTlAf8Ax6k.png) **Save your API key** in a secure location. We will need it in the next step.
![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/grab-api-key-KN2CdMQVWrHPKjPY4vxS4H6pIZVJmW.png) After generating the API key, the AvaCloud page should look like this: ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/final-api-key-5ydE6hNLZajRH3hNEAYE5IFFmNBdPX.png) Once you've created and retrieved the key, you will be able to make authenticated queries by passing in your API key in the `x-glacier-api-key` header of your HTTP request. An example curl request to the Data API: ```bash curl -H "Content-Type: application/json" -H "x-glacier-api-key: " \ "https://glacier-api.avax.network/v1/chains" ``` # Setup AvaCloudSDK Starter Kit (/academy/avacloudapis/03-environment-setup/02-setup-starter-kit) --- title: Setup AvaCloudSDK Starter Kit description: Set up the AvaCloudSDK Starter Kit in a GitHub Codespace updated: 2024-09-03 authors: [owenwahlgren] icon: Book --- import Link from 'next/link'; import { cn } from '@/utils/cn'; import { buttonVariants } from '@/components/ui/button.tsx' import { Step, Steps } from 'fumadocs-ui/components/steps'; In this course we will run the AvaCloudSDK Starter Kit in a GitHub Codespace. This is the quickest way to get started. The `AvaCloudSDK Starter Kit` contains everything we need to get started quickly with the AvaCloud API. ### Open the AvaCloudSDK Starter Kit GitHub Repository: **Make sure you are on the `follow-along` branch!** Open AvaCloudSDK Starter Kit ### Create a Codespace [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://github.com/codespaces/new?hide_repo_select=true&ref=follow-along&repo=851861669&machine=standardLinux32gb) The Codespace will open in a new tab. Wait a few minutes until it's fully built.
### Verify everything is working correctly Open the terminal with `` Ctrl + ` `` or by opening it through the menu: ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/terminal-08TThSFqM438J1Jd5E1bh1OyMYAXWg.png) ### Set the `AVACLOUD_API_KEY` environment variable Create a `.env` file in the root of the project and add your API key from [the previous step](/academy/avacloudapis/03-environment-setup/01-avacloud-account): ```bash AVACLOUD_API_KEY=ac_rGIKESl9_9DWuLfJJQLSV5nlzbKR7eHxym6XW3XEQJeNBDRxI... ``` ### Start the NextJS app Now we can run the NextJS app in hot reload mode: ```bash yarn dev ``` It should preview within the Codespace: ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/site-running-neUxhhADHqXs34Jrgw1uSJhqokItAC.png) ### Optional: Open Codespace locally in Visual Studio Code You can switch at any time from the browser IDE to Visual Studio Code: ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/vs-code-DNVuxWt4Z4ffoDsszoGDyUuhXvhC8m.png) The first time you switch, you will be asked to install the [Codespaces extension](https://marketplace.visualstudio.com/items?itemName=GitHub.codespaces) and connect VS Code to your GitHub account, if it is not already connected. # Time to Build! (/academy/avacloudapis/03-environment-setup/03-what-we-build) --- title: Time to Build! description: Learn about what we will build in this course updated: 2024-09-03 authors: [owenwahlgren] icon: Book --- ### Finished Setup Once your environment is set up, we can start building some applications using the AvaCloud API and the AvaCloudSDK.
The home page should look like this: ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/preview-L1zktfNDP4fWzdESdhIQ7KxsfnQJj0.png) ### What We Will Build Here is a brief overview of what we will be building in this course: - ERC-20 Balance App - Wallet Portfolio App - Block Explorer App Each of these applications will utilize the Data API to get data from the Avalanche network. We will fetch data such as ERC-20 token balances, NFT balances, recent transactions from an address, block information and much more. # Overview (/academy/avacloudapis/04-erc20-token-balance-app/01-overview) --- title: Overview description: Use the AvaCloud Data API to create a simple web app that displays a user's ERC-20 token balances. updated: 2024-09-13 authors: [owenwahlgren] icon: Book --- In this section we will use the Data API to create a simple web app that displays a user's ERC-20 token portfolio. This app will allow users to input their Avalanche C-Chain address and view a list of their ERC-20 token balances. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/balance-app-9ggCriiHBsggku1bFGF8tL1lyThSx4.png) We will use two endpoints from the Data API to accomplish this: - [`data.evm.blocks.getLatestBlocks`](https://developers.avacloud.io/data-api/evm-blocks/list-latest-blocks) - [`data.evm.balances.listErc20Balances`](https://developers.avacloud.io/data-api/evm-balances/list-erc-20-balances) # Understanding the Code (/academy/avacloudapis/04-erc20-token-balance-app/02-understanding-code) --- title: Understanding the Code description: Before we start coding, let's take a look at the code we will be working with. updated: 2024-09-13 authors: [owenwahlgren] icon: Book --- import { Step, Steps } from 'fumadocs-ui/components/steps'; There will be two main files that we will be working with in this section. 
### `Page.tsx` This is the code that will be rendered on the client side, as distinguished by `"use client";` at the top of the file. It contains the React components that will be displayed to the user and is responsible for making the API calls to our backend, which in turn calls the Data API. It is important to understand that when you use `"use client"` in a NextJS project, the file will be rendered on the client side. This means that the code will be executed in the user's browser and not on the server. This is important to keep in mind when working with sensitive data or when you want to keep your API keys secure. Besides this, we have two main functions that we will be working with in this file: ```tsx title="src/app/balance-app/page.tsx" const handleSetAddress = async () => { // // TODO: Implement handleSetAddress // }; ``` `handleSetAddress` is a simple function that will be called when the user clicks the "Set Address" button. It will ensure the address is valid, then store the inputted address in the React state. We then call `fetchERC20Balances` to get the user's balances. ```tsx title="src/app/balance-app/page.tsx" const fetchERC20Balances = async (address: string) => { // // TODO: Implement fetchERC20Balances // }; ``` `fetchERC20Balances` is a function that will make a call to our backend to get the user's ERC-20 token balances. It will first get the current block height, then call the `listErc20Balances` method on our backend with the user's address and the block height. It will then return the balances as an array of `Erc20TokenBalance` objects. ### `Route.ts` This code will be executed on the server side, as distinguished by `"use server";` at the top of the file. It is responsible for making the API calls to the Data API.
There are a few key components to understand in this file: ```tsx title="src/app/api/balance/route.ts" import { AvaCloudSDK } from "@avalabs/avacloud-sdk"; const avaCloudSDK = new AvaCloudSDK({ apiKey: process.env.AVACLOUD_API_KEY, chainId: "43114", // Avalanche Mainnet network: "mainnet", }); ``` Here we initialize the `AvaCloudSDK` with our AvaCloud API key and the chainId of `43114` for the Avalanche Mainnet. This will allow us to make calls to the Data API. ```tsx title="src/app/api/balance/route.ts" export async function GET(request: Request) { const { searchParams } = new URL(request.url) const method = searchParams.get('method') try { let result switch (method) { case 'getBlockHeight': result = await getBlockHeight() break case 'listErc20Balances': const address: string = searchParams.get('address')! const blockNumber: string = searchParams.get('blockNumber')! result = await listErc20Balances(address, blockNumber); break default: return NextResponse.json({ error: 'Invalid method' }, { status: 400 }) } return NextResponse.json(result) } catch (error) { return NextResponse.json({ error: 'Internal Server Error' }, { status: 500 }) } } ``` Here we define the internal API methods for our backend. We have two methods that we will be working with in this section: `getBlockHeight` and `listErc20Balances`. We create both of these methods internally, then forward the request to the Data API. We then return the result to the client. ```tsx title="src/app/api/balance/route.ts" async function getBlockHeight() { // // TODO: Implement getBlockHeight // } async function listErc20Balances(address: string, blockNumber: string) { // // TODO: Implement listErc20Balances // } ``` In the next section, we will implement the `listErc20Balances` and `getBlockHeight` functions to call the Data API through the AvaCloudSDK.
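Several of the implementations in the next section drain the SDK's paged results with a `for await` loop. The pattern can be sketched in isolation with a mock paged source (the generic `Page` shape and the `mockPages` generator below are illustrative stand-ins, not the SDK's actual types):

```typescript
// Illustrative shape of a paged response; the SDK's real result
// types differ per endpoint (e.g. `erc20TokenBalances` instead of `items`).
interface Page<T> {
  result: { items: T[] };
}

// Drain an async-iterable sequence of pages into one flat array,
// the same `for await` pattern used with the SDK's list methods.
async function collectPages<T>(pages: AsyncIterable<Page<T>>): Promise<T[]> {
  const all: T[] = [];
  for await (const page of pages) {
    all.push(...page.result.items);
  }
  return all;
}

// A mock paged source standing in for an SDK call.
async function* mockPages(): AsyncGenerator<Page<number>> {
  yield { result: { items: [1, 2] } };
  yield { result: { items: [3, 4] } };
}
```

Collecting everything up front keeps the route handlers simple: each one returns a single JSON array to the client instead of streaming pages.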
# Modifying the Code (/academy/avacloudapis/04-erc20-token-balance-app/03-modifying-code) --- title: Modifying the Code description: Let's modify the code to implement the Data API. updated: 2024-09-13 authors: [owenwahlgren] icon: Book --- import { Step, Steps } from 'fumadocs-ui/components/steps'; In this section we will modify the code to implement the Data API. ### Modify Backend `src/app/api/balance/route.ts` First we will implement the `getBlockHeight` function. The goal of this function is to fetch the latest blocks from the Data API and return the number of the most recent block, which is in the first position. Reference the [AvaCloud SDK documentation](https://developers.avacloud.io/data-api/evm-blocks/list-latest-blocks) to see how to fetch the latest blocks. ```tsx title="src/app/api/balance/route.ts" async function getBlockHeight() { const result = await avaCloudSDK.data.evm.blocks.getLatestBlocks({ pageSize: 1, }); return result.result.blocks[0].blockNumber } ``` Next we will implement the `listErc20Balances` function. The goal of this function is to fetch the ERC-20 token balances for a given address at a specific block height. Reference the [AvaCloud SDK documentation](https://developers.avacloud.io/data-api/evm-balances/list-erc-20-balances) to see how to fetch ERC-20 token balances. Note the `Erc20TokenBalance` type that is imported; we will use it to combine paged results. ```tsx title="src/app/api/balance/route.ts" async function listErc20Balances(address: string, blockNumber: string) { const result = await avaCloudSDK.data.evm.balances.listErc20Balances({ blockNumber: blockNumber, pageSize: 10, address: address, }); const balances: Erc20TokenBalance[] = []; for await (const page of result) { balances.push(...page.result.erc20TokenBalances); } return balances } ``` ### Modify Frontend `src/app/balance-app/page.tsx` First we will implement the `fetchERC20Balances` function. The goal of this function is to make a call to our backend to get the user's ERC-20 token balances.
Call our backend first for the most recent block height, then call our `listErc20Balances` method. Finally, return the results. ```tsx title="src/app/balance-app/page.tsx" const fetchERC20Balances = async (address: string) => { const blockResult = await fetch("api/balance?method=getBlockHeight"); const blockNumber = await blockResult.json(); const balanceResult = await fetch("api/balance?method=listErc20Balances&address=" + address + "&blockNumber=" + blockNumber); const balances = await balanceResult.json(); return balances as Erc20TokenBalance[]; }; ``` Next we will implement the `handleSetAddress` function. The goal of this function is to set the address in the state and fetch the ERC-20 token balances for that address using our `fetchERC20Balances` function. First make sure the address is valid, then update the state for `Address` and `Balances`. ```tsx title="src/app/balance-app/page.tsx" const handleSetAddress = async () => { const addressInput = document.getElementById("address") as HTMLInputElement; const address = addressInput.value; const addressPattern = /^0x[a-fA-F0-9]{40}$/; if (addressInput && addressPattern.test(address)) { setAddress(address); setBalances(await fetchERC20Balances(address)); } }; ``` # Final Result (/academy/avacloudapis/04-erc20-token-balance-app/04-final) --- title: Final Result description: The final result of the ERC-20 token balance app. updated: 2024-09-03 authors: [owenwahlgren] icon: Book --- If we implemented the code correctly, we should have a working ERC-20 token balance app that displays the user's token balances.
The app will look like the following: ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/balance-app-init-bIVbTrMxCmFteBLTTcDRHGuVU75Kcd.png) After setting the address field, the app will display the user's ERC-20 token balances: ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/balance-app-9ggCriiHBsggku1bFGF8tL1lyThSx4.png) Notice how the app also displays images for each token. This is done by fetching the token metadata from `listErc20Balances` and displaying the token's logo. # Overview (/academy/avacloudapis/05-wallet-portfolio-app/01-overview) --- title: Overview description: Use the Data API to create a simple web app that displays data and metrics on a connected wallet. updated: 2024-09-13 authors: [owenwahlgren] icon: Book --- In this section we will expand on the ERC-20 Balance App with NFTs (ERC-721) and ERC-1155 tokens. We will also add a connect wallet button to allow users to connect their own wallet and view their own token balances. 
Here is a preview of what we will build: ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/nfts-v4LwE4Ij450GkYThM5JQG7ghKtT1LQ.png) ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/tokens-ILhHQF5XpddcXcg3pbyG6aQ8hbd9XJ.png) We will use a few additional endpoints from the Data API to accomplish this: - [`data.evm.balances.listErc721Balances`](https://developers.avacloud.io/data-api/evm-balances/list-erc-721-balances) - [`data.evm.balances.listErc1155Balances`](https://developers.avacloud.io/data-api/evm-balances/list-erc-1155-balances) - [`data.evm.transactions.listTransactions`](https://developers.avacloud.io/data-api/evm-transactions/list-transactions) # Understanding the Code (/academy/avacloudapis/05-wallet-portfolio-app/02-understanding-code) --- title: Understanding the Code description: Before we start coding, let's take a look at the code we will be working with. updated: 2024-09-13 authors: [owenwahlgren] icon: Book --- import { Step, Steps } from 'fumadocs-ui/components/steps'; There will be two main files that we will be working with in this section. ### `Page.tsx` This is the code that will be rendered on the client side, as distinguished by `"use client";` at the top of the file. It contains the React components that will be displayed to the user and is responsible for making the API calls to our backend, which in turn calls the Data API. It is important to understand that when you `"use client"` in a NextJS project, it will be rendered on the client side. This means that the code will be executed in the user's browser and not on the server. This is important to keep in mind when working with sensitive data or when you want to keep your API keys secure. 
Besides this, we have three main functions that we will be working with in this file: ```tsx title="src/app/basic-wallet/page.tsx" const fetchERC721Balances = async (address: string) => { // // TODO: Implement this! // return [] as Erc721TokenBalance[]; } ``` `fetchERC721Balances` is a function that will make a call to our backend to get the user's ERC-721 token balances. It will call the `listErc721Balances` method on our backend with the user's address. It will then return the balances as an array of `Erc721TokenBalance` objects. ```tsx title="src/app/basic-wallet/page.tsx" const fetchERC1155Balances = async (address: string) => { // // TODO: Implement this! // return [] as Erc1155TokenBalance[]; } ``` `fetchERC1155Balances` is a function that will make a call to our backend to get the user's ERC-1155 token balances. It will call the `listErc1155Balances` method on our backend with the user's address. It will then return the balances as an array of `Erc1155TokenBalance` objects. ```tsx title="src/app/basic-wallet/page.tsx" const fetchRecentTransactions = async (address: string) => { // // TODO: Implement this! // return {} as TransactionDetails; } ``` `fetchRecentTransactions` is a function that will make a call to our backend to get the user's recent transactions for all tokens. It will call the `listRecentTransactions` method on our backend with the user's address. It will then return the transactions as an object of type `TransactionDetails`. ### `Route.ts` This code will be executed on the server side, as distinguished by `"use server";` at the top of the file. It is responsible for making the API calls to the Data API.
There are a few key components to understand in this file: ```tsx title="src/app/api/wallet/route.ts" import { AvaCloudSDK } from "@avalabs/avacloud-sdk"; const avaCloudSDK = new AvaCloudSDK({ apiKey: process.env.AVACLOUD_API_KEY, chainId: "43114", // Avalanche Mainnet network: "mainnet", }); ``` Here we initialize the `AvaCloudSDK` with our AvaCloud API key and the chainId of `43114` for the Avalanche Mainnet. This will allow us to make calls to the Data API. ```tsx title="src/app/api/wallet/route.ts" export async function GET(request: Request) { const { searchParams } = new URL(request.url) const method = searchParams.get('method') let address try { let result switch (method) { case 'listERC721Balances': address = searchParams.get('address')! result = await listERC721Balances(address) break case 'listERC1155Balances': address = searchParams.get('address')! result = await listErc1155Balances(address) break case 'listRecentTransactions': address = searchParams.get('address')! result = await listRecentTransactions(address) break default: return NextResponse.json({ error: 'Invalid method' }, { status: 400 }) } return NextResponse.json(result) } catch (error) { return NextResponse.json({ error: 'Internal Server Error' }, { status: 500 }) } } ``` Here we define the internal API methods for our backend. We have four methods that we will be working with in this section: `getBlockHeight`, `listERC721Balances`, `listErc1155Balances`, and `listRecentTransactions`. We create all of these methods internally, then forward the request to the Data API. We then return the result to the client. ```tsx title="src/app/api/wallet/route.ts" async function getBlockHeight() { // // TODO: Implement this! // return } const listERC721Balances = async (address: string) => { // // TODO: Implement this! // return } const listErc1155Balances = async (address: string) => { // // TODO: Implement this!
// return } const listRecentTransactions = async (address: string) => { // // TODO: Implement this! // return } ``` In the next section, we will implement these functions to call the Data API through the AvaCloudSDK. # Modifying the Code (/academy/avacloudapis/05-wallet-portfolio-app/03-modifying-code) --- title: Modifying the Code description: Let's modify the code to implement the Data API. updated: 2024-09-13 authors: [owenwahlgren] icon: Book --- import { Step, Steps } from 'fumadocs-ui/components/steps'; In this section, we will modify the code to implement the Data API. ### Modify Backend `src/app/api/wallet/route.ts` First we will implement the `getBlockHeight` function. This function is the same as the one in the ERC-20 Balance App, but we will repeat it for the sake of the tutorial. The goal of this function is to fetch the latest blocks from the Data API and return the number of the most recent block, which is in the first position. Reference the [AvaCloud SDK documentation](https://developers.avacloud.io/data-api/evm-blocks/list-latest-blocks) to see how to fetch the latest blocks. ```tsx title="src/app/api/wallet/route.ts" async function getBlockHeight() { const result = await avaCloudSDK.data.evm.blocks.getLatestBlocks({ pageSize: 1, }); return result.result.blocks[0].blockNumber } ``` Next we will implement the `listERC721Balances` function. The goal of this function is to fetch the ERC-721 token balances for a given address. Reference the [AvaCloud SDK documentation](https://developers.avacloud.io/data-api/evm-balances/list-erc-721-balances) to see how to fetch ERC-721 token balances. Note the `Erc721TokenBalance` type that is imported; we will use it to combine paged results.
```tsx title="src/app/api/wallet/route.ts" const listERC721Balances = async (address: string) => { const result = await avaCloudSDK.data.evm.balances.listErc721Balances({ pageSize: 10, address: address, }); const balances: Erc721TokenBalance[] = []; for await (const page of result) { balances.push(...page.result.erc721TokenBalances); } return balances } ``` Now we will implement the `listErc1155Balances` function. The goal of this function is to fetch the ERC-1155 token balances for a given address. Reference the [AvaCloud SDK documentation](https://developers.avacloud.io/data-api/evm-balances/list-erc-1155-balances) to see how to fetch ERC-1155 token balances. Note the `Erc1155TokenBalance` type that is imported; we will use it to combine paged results. ```tsx title="src/app/api/wallet/route.ts" const listErc1155Balances = async (address: string) => { const result = await avaCloudSDK.data.evm.balances.listErc1155Balances({ pageSize: 10, address: address, }); const balances: Erc1155TokenBalance[] = []; for await (const page of result) { balances.push(...page.result.erc1155TokenBalances); } return balances } ``` Finally we will implement the `listRecentTransactions` function. The goal of this function is to fetch recent transactions for a given address within a given start and end block. Reference the [AvaCloud SDK documentation](https://developers.avacloud.io/data-api/evm-transactions/list-transactions) to see how to fetch recent transactions. Note the `TransactionDetails` type that is imported; we will use it to combine and sort paged results.
```tsx title="src/app/api/wallet/route.ts" const listRecentTransactions = async (address: string) => { // blockNumber is returned as a string, so convert it before the arithmetic below const blockHeight = Number(await getBlockHeight()) const result = await avaCloudSDK.data.evm.transactions.listTransactions({ pageSize: 10, startBlock: blockHeight - 100000, endBlock: blockHeight, address: address, sortOrder: "desc", }); const transactions: TransactionDetails = { erc20Transfers: [], erc721Transfers: [], erc1155Transfers: [], nativeTransaction: { blockNumber: '', blockTimestamp: 0, blockHash: '', blockIndex: 0, txHash: '', txStatus: '', txType: 0, gasLimit: '', gasUsed: '', gasPrice: '', nonce: '', from: { name: undefined, symbol: undefined, decimals: undefined, logoUri: undefined, address: '' }, to: { name: undefined, symbol: undefined, decimals: undefined, logoUri: undefined, address: '' }, value: '' }, } for await (const page of result) { for (const transaction of page.result.transactions) { if (transaction.erc20Transfers) { if (transactions.erc20Transfers) { transactions.erc20Transfers.push(...transaction.erc20Transfers); } } else if (transaction.erc721Transfers) { if (transactions.erc721Transfers) { transactions.erc721Transfers.push(...transaction.erc721Transfers); } } else if (transaction.erc1155Transfers) { if (transactions.erc1155Transfers) { transactions.erc1155Transfers.push(...transaction.erc1155Transfers); } } } } return transactions } ``` ### Modify Frontend `src/app/basic-wallet/page.tsx` Now we will modify the frontend to make calls to our backend. First we will implement the `fetchERC20Balances` function. The goal of this function is to make a call to our backend to get the user's ERC-20 token balances. Call our backend first for the most recent block height, then call our `listErc20Balances` method. Finally, return the results.
```tsx title="src/app/basic-wallet/page.tsx" const fetchERC20Balances = async (address: string) => { const blockResult = await fetch("api/balance?method=getBlockHeight"); const blockNumber = await blockResult.json(); const balanceResult = await fetch("api/balance?method=listErc20Balances&address=" + address + "&blockNumber=" + blockNumber); const balances = await balanceResult.json(); return balances as Erc20TokenBalance[]; }; ``` Next we will implement the `fetchERC721Balances` function. The goal of this function is to call our `listERC721Balances` function on the backend and return the result as an `Erc721TokenBalance` array. Make a call to our backend, then parse the result as JSON. Return it as `Erc721TokenBalance[]`. ```tsx title="src/app/basic-wallet/page.tsx" const fetchERC721Balances = async (address: string) => { const result = await fetch(`api/wallet?method=listERC721Balances&address=${address}`); const balances = await result.json(); return balances as Erc721TokenBalance[]; } ``` Now we will implement the `fetchERC1155Balances` function. The goal of this function is to call our `listERC1155Balances` function on the backend and return the result as an `Erc1155TokenBalance` array. Make a call to our backend, then parse the result as JSON. Return it as `Erc1155TokenBalance[]`. ```tsx title="src/app/basic-wallet/page.tsx" const fetchERC1155Balances = async (address: string) => { const result = await fetch(`api/wallet?method=listERC1155Balances&address=${address}`); const balances = await result.json(); return balances as Erc1155TokenBalance[]; } ``` Finally we will implement the `fetchRecentTransactions` function. The goal of this function is to call our `listRecentTransactions` function on the backend and return the result as an object of type `TransactionDetails`. Make a call to our backend, then parse the result as JSON. Return it as a `TransactionDetails` object.
```tsx title="src/app/basic-wallet/page.tsx" const fetchRecentTransactions = async (address: string) => { const result = await fetch(`api/wallet?method=listRecentTransactions&address=${address}`); const transactions = await result.json(); return transactions as TransactionDetails; } ``` # Overview (/academy/avacloudapis/06-block-explorer-app/01-overview) --- title: Overview description: Use the Data API to create a simple block explorer. updated: 2024-09-13 authors: [owenwahlgren] icon: Book --- In this section we will build a basic block explorer using the Data API. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avacloudsdk/explorer-Uahzju9LouXAGd61AQvm38Vxtq4cG5.png) We will use a couple additional endpoints from the Data API to accomplish this: - [`data.evm.blocks.getBlockHeight`](https://developers.avacloud.io/data-api/evm-blocks/list-latest-blocks) - [`data.evm.transactions.listLatestTransactions`](https://developers.avacloud.io/data-api/evm-transactions/list-latest-transactions) # Understanding the Code (/academy/avacloudapis/06-block-explorer-app/02-understanding-code) --- title: Understanding the Code description: Before we start coding, let's take a look at the code we will be working with. updated: 2024-10-09 authors: [owenwahlgren] icon: Book --- import { Step, Steps } from 'fumadocs-ui/components/steps'; There will be two main files that we will be working with in this section. ### `Page.tsx` This is the code that will be rendered on the client side, as distinguished by `"use client";` at the top of the file. It contains the React components that will be displayed to the user and is responsible for making the API calls to our backend, which in turn calls the Data API. It is important to understand that when you `"use client"` in a NextJS project, it will be rendered on the client side. This means that the code will be executed in the user's browser and not on the server. 
This is important to keep in mind when working with sensitive data or when you want to keep your API keys secure. Besides this, we have two main functions that we will be working with in this file: ```tsx title="src/app/basic-explorer/page.tsx" const fetchRecentTransactions = async () => { // // TODO: Implement this! // return data as NativeTransaction[] } ``` `fetchRecentTransactions` is a function that will make a call to our backend to get the most recent transactions from the chain. It will call the `getRecentTransactions` method on our backend. It will then return the transactions as an array of `NativeTransaction` objects. ```tsx title="src/app/basic-explorer/page.tsx" const fetchRecentBlocks = async () => { // // TODO: Implement this! // return data as EvmBlock[] } ``` `fetchRecentBlocks` is a function that will make a call to our backend to get the most recent blocks from the chain. It will call the `getRecentBlocks` method on our backend. It will then return the blocks as an array of `EvmBlock` objects. ### `Route.ts` This code will be executed on the server side, as distinguished by `"use server";` at the top of the file. It is responsible for making the API calls to the Data API. There are a few key components to understand in this file: ```tsx title="src/app/api/explorer/route.ts" import { AvaCloudSDK } from "@avalabs/avacloud-sdk"; const avaCloudSDK = new AvaCloudSDK({ apiKey: process.env.AVACLOUD_API_KEY, chainId: "43114", // Avalanche Mainnet network: "mainnet", }); ``` Here we initialize the `AvaCloudSDK` with our AvaCloud API key and the chainId of `43114` for the Avalanche Mainnet. This will allow us to make calls to the Data API.
```tsx title="src/app/api/explorer/route.ts" export async function GET(request: Request) { const { searchParams } = new URL(request.url) const method = searchParams.get('method') try { let result switch (method) { case 'getRecentTransactions': result = await getRecentTransactions() break case 'getRecentBlocks': result = await getRecentBlocks() break default: return NextResponse.json({ error: 'Invalid method' }, { status: 400 }) } return NextResponse.json(result) } catch (error) { return NextResponse.json({ error: 'Internal Server Error' }, { status: 500 }) } } ``` Here we define the internal API methods for our backend. We have two methods that we will be working with in this section: `getRecentBlocks` and `getRecentTransactions`. We create both of these methods internally, then forward the request to the Data API. We then return the result to the client. ```tsx title="src/app/api/explorer/route.ts" const getRecentBlocks = async () => { // // TODO: Implement this! // } const getRecentTransactions = async () => { // // TODO: Implement this! // } ``` In the next section, we will implement these functions to call the Data API through the AvaCloudSDK. # Modifying the Code (/academy/avacloudapis/06-block-explorer-app/03-modifying-code) --- title: Modifying the Code description: Let's modify the code to implement the Data API. updated: 2024-10-09 authors: [owenwahlgren] icon: Book --- import { Step, Steps } from 'fumadocs-ui/components/steps'; In this section we will modify the code to implement the Data API. ### Modify Backend `src/app/api/explorer/route.ts` First we will implement the `getRecentBlocks` function. The goal of this function is to fetch recent blocks, with all of their information, from the Data API. Reference the [AvaCloud SDK documentation](https://developers.avacloud.io/data-api/evm-blocks/list-latest-blocks) to see how to fetch the latest blocks.
```tsx title="src/app/api/explorer/route.ts" const getRecentBlocks = async () => { const result = await avaCloudSDK.data.evm.blocks.getLatestBlocks({ pageSize: 1, }); let count = 0; const blocks: EvmBlock[] = []; for await (const page of result) { if (count === 20) { break; } blocks.push(...page.result.blocks); count++; } return blocks } ``` Next we will implement the `getRecentTransactions` function. The goal of this function is to fetch the latest native transactions from the Data API. Reference the [AvaCloud SDK documentation](https://developers.avacloud.io/data-api/evm-transactions/list-latest-transactions) to see how to fetch the latest transactions. Note the `NativeTransaction` type that is imported; we will use it to combine paged results. ```tsx title="src/app/api/explorer/route.ts" const getRecentTransactions = async () => { const result = await avaCloudSDK.data.evm.transactions.listLatestTransactions({ pageSize: 3, }); let count = 0; const transactions: NativeTransaction[] = []; for await (const page of result) { if (count === 20) { break; } transactions.push(...page.result.transactions); count++; } return transactions; } ``` ### Modify Frontend `src/app/basic-explorer/page.tsx` First we will implement the `fetchRecentTransactions` function. The goal of this function is to make a call to our backend to get recent transactions. Call our backend's `getRecentTransactions` method, then return the results. ```tsx title="src/app/basic-explorer/page.tsx" const fetchRecentTransactions = async () => { const response = await fetch(`/api/explorer?method=getRecentTransactions`) const data = await response.json() return data as NativeTransaction[] } ``` Next we will implement the `fetchRecentBlocks` function. The goal of this function is to make a call to our backend to get recent block information. Call our backend's `getRecentBlocks` method, then return the results.
```tsx title="src/app/basic-explorer/page.tsx" const fetchRecentBlocks = async () => { const response = await fetch(`/api/explorer?method=getRecentBlocks`) const data = await response.json() return data as EvmBlock[] } ``` # Overview (/academy/avacloudapis/07-using-webhooks/01-overview) --- title: Overview description: Coming soon! updated: 2024-09-13 authors: [owenwahlgren] icon: Book --- # Avalanche Consensus (/academy/avalanche-fundamentals/02-avalanche-consensus-intro/01-avalanche-consensus-intro) --- title: Avalanche Consensus description: Learn how blockchains arrive at consensus using different mechanisms. updated: 2024-05-31 authors: [martineckardt] icon: Book --- Avalanche Consensus is a novel consensus protocol used in the Avalanche network. It is leaderless, decentralized, and scalable, and it allows validators to agree on the state of the network. ## What You Will Learn In this section, you will go through the following topics: - **Consensus Mechanisms:** Understand what consensus mechanisms are and how they work - **Snowman Consensus:** Learn about the Snowman consensus protocol - **TPS vs TTF:** Understand the difference between transactions per second (TPS) and time to finality (TTF) # Consensus Mechanisms (/academy/avalanche-fundamentals/02-avalanche-consensus-intro/02-consensus-mechanisms) --- title: Consensus Mechanisms description: Learn how blockchains arrive at consensus using different mechanisms. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- Consensus plays a crucial role in [blockchain networks](/guides/what-is-a-blockchain) by resolving conflicts and ensuring that all validators agree on the current state of the distributed ledger. The main objective of a consensus mechanism is to create a single version of truth that is universally accepted by network participants. Validators can reach consensus by following a set of steps called a consensus protocol.
This way, they collectively decide on the state of the system and all state changes. Different consensus mechanisms take different approaches, but all aim to ensure that validators reach a majority agreement on the network state.

## Ordering through Consensus

Consensus is needed to agree on the order of state changes in a blockchain. This allows the validators to decide between two conflicting states.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/1-aWL9i9BFicUEF7ntMjBFuUejfadAUS.png)

## Double Spending Attack

A Double Spending Attack is when a user attempts to spend more crypto than they own by creating multiple transactions that reference the same funds. Let's look at an example: Alice owns 5 AVAX.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/2-2fU6eqsUiCq76Sy04VEfvD5rLo3eVc.png)

Now Alice issues two transactions at the same time to different validators:

1. Send 5 AVAX to Bob
2. Send 5 AVAX to Charlie

It is important to note that blockchain systems have only a limited notion of time. Even if Alice does not issue these transactions at exactly the same time, validators cannot identify which was issued first.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/3-8GHqtWHuf44Nilsugj6yC9sC0BFsZh.png)

Each transaction in itself is valid. Alice owns 5 AVAX and should be able to send them to whomever she wants. However, she should only be able to send the amount she actually owns.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/4-GeSctGjJKsmXFWvwMpZmYpDcGquZcg.png)

Therefore, validators have to collectively come to consensus on which of the two conflicting transactions will be included in the blockchain first, and therefore be accepted by all validators as the next state.
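To see why agreement on ordering is needed, here is a minimal TypeScript sketch (illustrative only, not Avalanche code) of how two validators that process Alice's conflicting transactions in different orders end up with different states:

```typescript
// Illustrative model: both transactions are individually valid against
// Alice's 5 AVAX balance, so a validator accepts whichever it sees first.

type Tx = { from: string; to: string; amount: number };

function applyIfValid(balances: Map<string, number>, tx: Tx): boolean {
  const balance = balances.get(tx.from) ?? 0;
  if (balance < tx.amount) return false; // reject: insufficient funds
  balances.set(tx.from, balance - tx.amount);
  balances.set(tx.to, (balances.get(tx.to) ?? 0) + tx.amount);
  return true;
}

const toBob: Tx = { from: "Alice", to: "Bob", amount: 5 };
const toCharlie: Tx = { from: "Alice", to: "Charlie", amount: 5 };

// Validator 1 receives the "to Bob" transaction first …
const v1 = new Map([["Alice", 5]]);
applyIfValid(v1, toBob);     // accepted
applyIfValid(v1, toCharlie); // rejected: Alice's balance is already 0

// … while Validator 2 receives the "to Charlie" transaction first.
const v2 = new Map([["Alice", 5]]);
applyIfValid(v2, toCharlie); // accepted
applyIfValid(v2, toBob);     // rejected

// The two validators now disagree on who received the funds.
// Only consensus on a single ordering can resolve this conflict.
```

Without a consensus protocol each validator's state depends on arbitrary network timing, which is exactly the situation Avalanche Consensus is designed to prevent.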
To illustrate, we will color the validators preferring Charlie yellow and the validators preferring Bob blue.

# Snowman Consensus (/academy/avalanche-fundamentals/02-avalanche-consensus-intro/03-snowman-consensus)

---
title: Snowman Consensus
description: Learn about the Avalanche Snowman consensus protocol.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

Protocols in the Avalanche family operate through repeated sub-sampled voting. When a validator is determining whether a block should be accepted, it asks a small, random subset of validators about their preferences. Based on the responses it receives, the validator might change its own preference.

Let's visualize this with an example. You are a validator in a set of validators performing the Avalanche Consensus protocol to agree on whether to send the funds to Charlie (yellow) or to Bob (blue). It is important to understand that none of the validators really cares whether the outcome is yellow or blue, as long as all correctly operating validators decide on the same outcome at the end of the process. The starting preference is chosen randomly by each validator.

## Changing Preference

You start by sampling the current preference of five other nodes, and they reply: 2 prefer yellow (Charlie) and 3 prefer blue (Bob).

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/5-f6WnJHv0VAKRNOSd8gFHbO2Mar4mWl.png)

Avalanche consensus dictates that a validator changes its preference if an α-majority of the sampled validators agree on another option, and goes along with this popular choice. Let's set the alpha value to 3 in our example, meaning that we change our preference when 3 out of 5 sampled nodes have another preference. Since 3 out of 5 have replied with blue (Bob), we change our own preference to Bob.
![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/6-TscORkJchAtrT2rG8rG5CPFPilUR1b.png)

From now on you will respond with blue when another validator queries you for your current preference.

## Consecutive Successes

Avalanche Consensus does not run for a fixed number of rounds, but until a decision threshold is hit. This means the validator keeps sampling until its preference is confirmed for beta (β) consecutive rounds.

Now you query another five validators for their preference. Again, three of the five reply with the preference blue (Bob). Since your current preference is confirmed, you increment a counter of consecutive successes by one. You repeat this sampling process until your preference has been confirmed for 8 consecutive rounds.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/7-KYmL0QWax9tmW7uT8bdqu9R5sNwgSs.png)

## Parameters of Avalanche Consensus

In our example we used fixed values for how many nodes are sampled, how many are needed to change our preference, and how many consecutive rounds of successes we require to finalize our decision. These consensus parameters are formalized in the table below and can be chosen by every node individually to meet their needs.
|Symbol|Name|Range|Explanation|
|---|---|---|---|
|n|Number of Participants|1 to ∞|How many participants take part in the system?|
|k|Sample Size|1 to n|How many of these participants get asked in every round of sub-sampling?|
|α|Quorum Size|1 to k|How many of the asked participants have to have the same preference for me to change my preference?|
|β|Decision Threshold|>= 1|How many times does the quorum have to confirm my preference until I finalize my decision?|

With these parameters we can illustrate the consensus algorithm as pseudo code:

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/8-XzHBlGu1fdnPniQecAkpFfC5JTegVD.png)

## Finalization

In the common case when a transaction has no conflicts, finalization happens very quickly. When conflicts exist, honest validators quickly cluster around one of the conflicting transactions, entering a positive feedback loop until all correct validators prefer that transaction. This leads to the acceptance of non-conflicting transactions and the rejection of conflicting transactions.

Avalanche Consensus guarantees (with high probability based on system parameters) that if any honest validator accepts a transaction, all honest validators will come to the same conclusion.

# Throughput vs. Time to Finality (/academy/avalanche-fundamentals/02-avalanche-consensus-intro/04-tps-vs-ttf)

---
title: Throughput vs. Time to Finality
description: Learn how metrics like throughput and time to finality are different.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

To measure blockchain performance we can use two metrics:

- Throughput: How many transactions are finalized per second, measured in transactions per second (TPS)
- Time to Finality: How long it takes a transaction to go from being submitted to validators to being unchangeable

These metrics are very different. Blockchain builders aspire to high throughput and very short time to finality.
## Highway Analogy

Let's use the analogy of a highway. Each car represents a transaction, and they all travel at the same speed. When you click *send* in your wallet, you're like a car entering the highway. When the transaction is *finalized* and unchangeable, it's like the car reaching its destination.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/9-JT3ORUzCZAILlQJjQaY2nJvFZHEVRL.png)

Throughput can be likened to the number of lanes, determining how many cars can pass along the highway in a given amount of time. The more lanes you have, the more cars can pass through, thus increasing the throughput. In a blockchain network, increasing the block size (analogous to adding more lanes) can increase throughput.

Now, imagine you're driving to a specific destination (finality). The time it takes you to get from your starting point (initiation of the transaction) to your destination (confirmation of the transaction) is analogous to the "time to finality." As soon as a car reaches its destination, it cannot turn back or abort the trip.

Our goal is to build highways with many lanes (high throughput) that bring us to our destination as fast as possible (short time to finality).

### Throughput and Finality of Popular Blockchain Networks

|Network|Throughput|Time to Finality|
|---|---|---|
|Bitcoin|7 TPS|60 min|
|Ethereum|30 TPS|6.4 min|
|Avalanche / Avalanche L1|2500 TPS|~0.8 seconds|

**TPS** → **Transactions per Second**

Returning to the highway analogy, these networks would look like this:

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/10-M9xrguxF39gS0MkB03LHE1DcNDGsIV.png)

# Multi-Chain Architecture (/academy/avalanche-fundamentals/03-multi-chain-architecture-intro/01-multi-chain-architecture)

---
title: Multi-Chain Architecture
description: Learn about the Multi-Chain Architecture.
updated: 2024-05-31
authors: [ashucoder9]
icon: Book
---

Multi-chain systems are a significant innovation that provides greater scalability, customizability, and independence. At the core of multi-chain systems is the ability to run multiple blockchains simultaneously, each optimized for specialized use cases, thereby boosting the network's overall performance.

## What You Will Learn

In this section, you will go through the following topics:

- **Avalanche L1s:** Understand what Avalanche L1s are and how they work
- **Setup Core Wallet:** Learn how to set up the Core wallet browser extension
- **Use Dexalot:** Use your first Avalanche L1

# Avalanche L1s (/academy/avalanche-fundamentals/03-multi-chain-architecture-intro/02-L1)

---
title: Avalanche L1s
description: Learn about the Multi-Chain Architecture.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

> An Avalanche L1 is an independent blockchain with its own set of rules regarding validator membership, tokenomics, and the execution layer. It has its own validator set that comes to consensus and validates the blockchain.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/l1s.png)

These specialized L1 blockchains can be tailored to meet specific needs for a variety of applications, including DeFi, NFTs, gaming, and enterprise solutions.

- They specify their own execution logic.
- They determine their own fee structures.
- They maintain their own state.
- They facilitate their own networking.
- They provide their own security.

This independence ensures that L1s don't have to share execution threads, storage, or networking with other L1 networks or the Primary Network.

## Avalanche Network

The Avalanche network is a network of independent, interoperable L1s. They can communicate with each other and assets can be transferred between them, while each maintains its independence.
As a result, the Avalanche network can:

- Scale up easily
- Achieve higher overall transactions per second (TPS)
- Offer lower transaction costs

By leveraging L1s, Avalanche creates a flexible and scalable ecosystem that can adapt to a wide range of use cases and requirements.

# Features & Benefits of Avalanche L1s (/academy/avalanche-fundamentals/03-multi-chain-architecture-intro/03-benefits)

---
title: Features & Benefits of Avalanche L1s
description: Learn about various benefits of using Avalanche L1s, such as Scalability, Independence, Customizability, Privacy, and Interoperability.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

## Scalability of the Avalanche Network

Every blockchain has a limited capacity for computation and data storage. Therefore, the transactions it can process in a given time frame and the state it can store are limited. In order to scale horizontally and offer more blockspace, we can simply add more blockchains to the Avalanche Network.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/11-KyUqqw9iwrH60hF4wnYPOWRTBOGRhT.png)

We can use the analogy of a highway to visualize this concept. A highway can only handle a certain number of cars at a time. If too many cars try to enter the highway at once, traffic congestion occurs and they have to wait to enter the highway. The idea is similar to building many highways in parallel, creating additional space for cars to drive on. This allows more transactions to be processed in parallel, and therefore increases the overall throughput of the network.

The beauty of this approach is its simplicity. It doesn't require any new untested innovations. The challenge, rather, is optimizing how Avalanche L1s interoperate and making switching from one Avalanche L1 to another easy.

## Independence From Other Avalanche L1s

Separating the ecosystem into different chains makes them independent from one another.
If congestion builds on one chain due to high network activity (e.g. an NFT drop, high volatility in token prices, or a new game launch), other chains are unaffected. One chain's congestion or increasing fees won't impact other chains.

Going back to our highway analogy, we can think of scaling through multiple chains as building many short highways in parallel.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/12-B730ft1bJPU6lxd3G4B7rpdExaNuC6.png)

Imagine a scenario where congestion builds up on one highway. The other highways are not directly affected, and some cars may choose to take a different highway, reducing the traffic bottleneck.

In single-chain systems, like Ethereum, congestion and rising fees affect everyone, including parties that have nothing to do with the cause of the activity increase. While the unified, or monolithic, blockchain design does offer certain advantages (like unified liquidity, fewer bridges, and potential for enhanced user experience via co-location of dApps), it also introduces notable drawbacks. Validators must have powerful, costly machines to support a diverse array of applications, increasing centralization due to the high operational costs of running a node. Lack of modularity can halt all on-chain activities during network disruptions, potentially causing significant financial losses. Additionally, each new dApp competes for the same block space as all others, leading to overcrowding.

## Customizability

The creator of an Avalanche L1 can customize it to meet their needs. This can happen in numerous ways, including introducing a custom gas token, using a different Virtual Machine (e.g. a WASM or Move VM), or limiting access to the Avalanche L1. This is very hard to do in a single-chain system because it would require a majority of users to agree.
![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/13-VegJrRJXmD83CRfgpx2wTbzLkszjBB.png)

In our highway analogy, let's think of some travelers having a very unique requirement: they would like to travel by boat instead of by car. While it would technically be possible to build a water lane into a single highway system, this would be challenging. However, when these custom requirements are met in a dedicated Avalanche L1, it's easy to do.

The ability to choose or create custom Virtual Machines (VMs) offers unprecedented flexibility. Avalanche L1s allow developers to have:

- Multiple VM Support: Unlike single-VM networks like Bitcoin or Ethereum, Avalanche L1s can host multiple blockchain instances with different VMs.
- Ease of Use: Leverage existing VMs like the Subnet-EVM or create entirely new custom VMs using our SDKs to suit your specific needs.
- Network Effects: This flexible architecture creates network effects both within individual blockchains and across different Avalanche L1s and blockchains.

## Enhanced Privacy, Compliance and Access Control

While public blockchains offer transparency, many business scenarios require controlled visibility of transaction data. Developers building an Avalanche L1 can optionally enable:

- Selective Transparency: Private blockchains allow you to limit transaction visibility to authorized participants.
- Data Protection: Implement transaction encryption to safeguard sensitive information.
- Granular Access: Control which data is visible to which participants, supporting "need-to-know" access models.

## Interoperability

Avalanche L1s come with native interoperability and can leverage:

- Cross-Chain Communication: Facilitate seamless interaction between different custom blockchains in the Avalanche network by leveraging Avalanche Interchain Messaging.
- Asset Bridges: Create efficient bridges for asset transfers between your custom blockchain and other networks, such as with Avalanche Interchain Token Transfer.

# Avalanche9000 Upgrade (/academy/avalanche-fundamentals/03-multi-chain-architecture-intro/03a-etna-upgrade)

---
title: Avalanche9000 Upgrade
description: Learn about how the Avalanche9000 upgrade changes the network architecture.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

import EtnaUpgradeMotivation from '@/content/common/multi-chain-architecture/etna-upgrade-motivation.mdx';

# Avalanche L1s vs Layer 2 (/academy/avalanche-fundamentals/03-multi-chain-architecture-intro/04-custom-blockchains-vs-layer-2)

---
title: Avalanche L1s vs Layer 2
description: Comparing different blockchain scaling approaches.
updated: 2024-06-28
authors: [usmaneth]
icon: BookOpen
---

Layer 2 blockchain solutions, such as rollups, are another innovation in the blockchain landscape. Layer 2s aim to enhance the scalability and performance of the Ethereum network. Rollups essentially perform computations off-chain and submit the resulting state changes to the base layer, thereby reducing the computational load on the main Ethereum chain.

While both Avalanche custom blockchains and Layer 2 rollups strive to improve blockchain scalability and performance, they use different methods, and each has its unique advantages and trade-offs.

## Decentralization and Security

Avalanche custom blockchains are part of the base layer itself. Each blockchain in Avalanche maintains its own security, so a compromise in one blockchain doesn't necessarily impact the others.

On the other hand, rollups delegate security to the Ethereum mainnet. As long as the mainnet remains secure, so does the Layer 2 solution, provided the rollup performs properly. However, a security breach on the mainnet can potentially affect all Layer 2 solutions.
## Interoperability and Flexibility

Avalanche's multi-chain structure offers great interoperability and flexibility, as each custom blockchain network can define its own rules and validate multiple blockchains of different virtual machines. This means Avalanche can cater to a vast array of use cases. Conversely, Layer 2 solutions are primarily designed to augment the Ethereum mainnet and might not offer the same flexibility.

## Performance and Cost

Both approaches aim to offer higher transaction throughput and lower fees compared to traditional single-chain systems. Avalanche achieves this through parallel processing across its Avalanche L1s, while rollups offload computation off-chain. However, users of Layer 2 solutions might experience delays when transferring assets back to Layer 1. Furthermore, Layer 2 systems need to checkpoint their activity to the L1, which effectively sets a price floor and couples the price of the L2 gas token to the L1 gas token. In Avalanche, the gas tokens of an Avalanche L1 are completely independent from AVAX.

# Set Up Core Wallet (/academy/avalanche-fundamentals/03-multi-chain-architecture-intro/05-setup-core)

---
title: Set Up Core Wallet
description: Learn how to set up your own Core Wallet.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

import { Step, Steps } from 'fumadocs-ui/components/steps';
import InstallCoreExtension from "@/content/common/core-wallet/install.mdx"
import CreateWallet from "@/content/common/core-wallet/create-wallet.mdx"
import TestnetMode from "@/content/common/core-wallet/testnet-mode.mdx"
import Faucet from "@/content/common/core-wallet/faucet.mdx"

To better grasp the concept of Avalanche L1s, we will interact with a chain. To do that, we will use Core. Core is an all-in-one command center built for multi-chain systems. It supports Avalanche, Bitcoin, Ethereum, and all EVM-compatible blockchains. Core has a browser extension, a web wallet, and a mobile app.
It is optimized for multiple chains and makes navigating between them easy.

### Install Core Extension

### Create a Wallet

### Switch to Testnet

### Get Testnet Tokens

## Managing MetaMask & Core Extensions

If you already had MetaMask installed, you could face some problems. We recommend temporarily disabling MetaMask for the time being.

Open the extensions page in your browser. You can also type `chrome://extensions` in your address bar to open it. Scroll through your list of installed extensions, or use the search bar at the top of the page, to find the MetaMask extension. Once you find MetaMask, you will see a toggle switch next to it. Click this switch to disable the extension. You can re-enable it at any time. When the switch is in the off position, MetaMask is disabled, and you should be all set.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/14-7yGvfoSga3HSD9JF2bokKPBFd3OvdV.png)

# Use Dexalot L1 (/academy/avalanche-fundamentals/03-multi-chain-architecture-intro/06-use-dexalot)

---
title: Use Dexalot L1
description: Get first-hand experience with an Avalanche L1.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

In this activity, we'll interact with our first Avalanche L1. The Dexalot Avalanche L1 aims to replicate the user experience of a centralized exchange (CEX) with greater decentralization and transparency. By using an Avalanche L1, Dexalot can offer cheaper transaction fees than other traditional decentralized exchanges while providing the transparency that centralized exchanges lack.

### Display the Dexalot Avalanche L1 in Core

Make sure your Core Wallet is set to Testnet mode, so we do not use real funds. Open your Core Wallet browser extension and click "Show all Networks." To add Dexalot to your home screen, click the star icon next to its name.
### Bridge Tokens to Dexalot

Head over to the [Dexalot Testnet](https://app.dexalot-test.com/trade), connect your wallet (top right), and click on the `Deposit` button next to the pink icon with your connected wallet. You will have to authenticate your wallet the first time you interact with Dexalot.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/Dexalot1.png)

Once you have clicked the `Deposit` button, select to deposit into the Dexalot L1. Enter 1 AVAX and click Deposit. Confirm the transaction in your wallet.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/Dexalot2.png)

A few moments later, your balances should be updated and you should be able to see that on the `Dashboard` tab:

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/Dexalot3.png)

### Swap Tokens on Dexalot L1

Now head to the `Trade` tab and sell 0.5 AVAX for USDC. You can select the trading pair AVAX/USDC on the top right. The website will prompt your wallet to switch networks to the Dexalot L1. Confirm the switch and then sign the transaction.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/Dexalot3.png)

Now, head back to the `Dashboard` tab and check how the balances have updated.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/Dexalot4.png)

### Withdraw to Fuji C-Chain

From the `Dashboard` tab, withdraw your AVAX and USDC back to the Fuji C-Chain by clicking on the gray "Withdraw" button on the left:

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/Dexalot5.png)

Make sure the network is set to Fuji C-Chain and enter the amount you want to withdraw. Click on the Withdraw button and confirm the transaction in your wallet.
![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/Dexalot6.png)

## What just happened?

On the surface, it might feel like a regular decentralized swap. However, there is a substantial difference. We did not perform our operations on the C-Chain: we bridged our tokens to an Avalanche L1, performed the operations there, and bridged our tokens back to the C-Chain.

The Avalanche L1 (Dexalot) we used was highly specialized for our use case and very cheap. We actually got all of the gas token ALOT we needed airdropped, just for bridging assets over. The only fees we had to pay were for bridging. These costs do not occur per trade, but only when depositing into and withdrawing from the portfolio, so they are negligible over many trades.

# Creating an L1 (/academy/avalanche-fundamentals/04-creating-an-l1/01-creating-an-l1)

---
title: Creating an L1
description: Learn how to create an L1 blockchain in the Avalanche network using the Builder Tooling.
updated: 2024-05-31
authors: [martineckardt]
icon: Book
---

Now that we've gone over what Avalanche L1s are and how interoperability works, you are probably eager to test out the functionality of Avalanche L1s by creating one yourself! In this section, you will learn how to create an L1 blockchain using the Avalanche Builder Tooling. For production deployments you can leverage a [Blockchain-as-a-Service Provider](/integrations#Blockchain%20as%20a%20Service) to outsource the setup and maintenance of the L1 infrastructure.

## Video

You can watch a video of how an L1 is created:

## What You Will Learn

In this section, you will go through the following steps:

### Claim Testnet AVAX Tokens

In the first step we will show you how to set up the Core wallet Chrome extension. After that, we will show how to claim some testnet AVAX tokens from the faucet. Finally, you will learn how to bridge the tokens from the C-Chain to the P-Chain.
### Create the P-Chain Records for a Subnet and Blockchain

Here we will show you how to create the P-Chain records for the Subnet and the blockchain. You will issue a `CreateSubnetTx` transaction on the P-Chain to create a Subnet record. Then you will add a blockchain to the Subnet by issuing a `CreateChainTx` transaction on the P-Chain.

### Set Up a Node to Track the Subnet

In this step you will learn how to set up a node to track the Subnet. We will leverage Docker to set up a node on a cloud server. When the node is up and running, you will connect your wallet to it.

### Convert to an L1

To finish the process, you will convert the Subnet to an L1. This is done by issuing a `ConvertSubnetToL1Tx` transaction on the P-Chain. This will turn your node into an actual validator for the L1.

### Test the L1

In the final step, you will deploy an ERC-20 token on the L1 you launched.

# Create Builder Account (/academy/avalanche-fundamentals/04-creating-an-l1/01a-create-builder-account)

---
title: Create Builder Account
description: Create a Builder Account for easier access.
updated: 2025-03-13
authors: [martineckardt]
icon: CircleUserRound
---

Before you start creating your first L1, we recommend creating an Avalanche Builder Account. This will allow you to easily access many features of Builder Hub. You can complete the course without a Builder Account, but it will be much more convenient to have one.

### Benefits of a Builder Account

- **Faucet**: With a Builder Account, you can claim testnet AVAX on the C-Chain and P-Chain right from the Builder Console, without needing a coupon code or holding a mainnet balance.
- **Free Managed Testnet Infrastructure**: With a Builder Account, we will provide free managed testnet nodes and ICM relayers for your L1. This infrastructure is not suitable for production use, but it is great for testing and development.
# Install Core Wallet (/academy/avalanche-fundamentals/04-creating-an-l1/02-connect-core)

---
title: Install Core Wallet
description: Learn how to install the Core wallet browser extension and create a wallet.
updated: 2025-03-13
authors: [owenwahlgren]
icon: Wallet
---

In this section you will learn how to set up Core wallet and get some testnet AVAX on the P-Chain so you can launch your first L1.

### Download Core Wallet

If you don't have Core Wallet installed already, download the Core Extension for Chrome: [Core Wallet Extension](https://chromewebstore.google.com/detail/core-crypto-wallet-nft-ex/agoakfejjabomempkjlepdflaleeobhb)

Core is the only wallet that supports issuing P-Chain transactions, so it is not possible to use other wallets such as MetaMask or Rabby.

### Create a Wallet in Core

Create a new wallet in Core by clicking `Continue with Google` or by manually creating a new wallet. Congratulations! You have successfully set up your Core wallet.

# Claim Testnet Tokens (/academy/avalanche-fundamentals/04-creating-an-l1/02a-claim-testnet-tokens)

---
title: Claim Testnet Tokens
description: Learn how to connect Core wallet and claim testnet AVAX.
updated: 2025-03-13
authors: [owenwahlgren]
icon: Coins
---

Connect your Core wallet below to claim testnet AVAX on the C-Chain and P-Chain. With a Builder Hub account, **tokens are automatically sent to your wallet** - no manual claims or coupon codes needed!

**Automated Faucet:** Once you connect your wallet with a Builder Hub account, testnet tokens are automatically sent to your wallet when needed. You can also manually request tokens using the buttons above.

## Alternative Faucet

If you don't want to create a [Builder Hub Account](https://build.avax.network/login), you can use the external [Avalanche Faucet](https://core.app/tools/testnet-faucet/?subnet=c&token=c) with coupon code `avalanche-academy` or `avalanche-academy25` to get 2 AVAX on the C-Chain.
After you have claimed some AVAX on the C-Chain, you can bridge it to the P-Chain using this tool:

# Network Architecture (/academy/avalanche-fundamentals/04-creating-an-l1/03-network-architecture)

---
title: Network Architecture
description: Learn about the network architecture of Avalanche and the role of the P-Chain.
updated: 2025-03-13
authors: [owenwahlgren]
icon: BookOpen
---

To create an L1 you need to follow these steps:

1. Create a Subnet record on the P-Chain with the `CreateSubnet` transaction
2. Add one or more chains to the Subnet with the `CreateChain` transaction
3. Convert the Subnet to an L1 and add the initial validators with the `ConvertSubnetToL1` transaction

Let's dive into what that means:

## Validators in a Multi-Chain Network

Validators are nodes of a blockchain that secure the network by validating transactions. Each L1 in the Avalanche network has its own set of validators running the `AvalancheGo` client.

L1 Creation

## Platform Chain

The Platform Chain is the backbone of the native interoperability of the Avalanche network. It is a registry of all validators in the Avalanche network. This includes the validators of the Primary Network (including the C- and P-Chain), as well as all L1 and legacy Subnet validators.

For each validator the P-Chain stores the following information:

- A unique node ID identifying the validator
- A BLS public key
- The stake weight
- The balance of the validator for the continuous interoperability fee

The following graphic shows a simplified data model of the P-Chain:

P-Chain Architecture

Builders can create new L1 blockchains in the Avalanche Network by issuing transactions on the P-Chain. The P-Chain runs a special-purpose virtual machine called the [platformvm](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm). It is not EVM-based, and therefore you need a compatible wallet like Core wallet to interact with it.
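The per-validator information the P-Chain stores could be sketched as a simple data type like this (a conceptual sketch only — the field names and example values are illustrative, not the actual platformvm types):

```typescript
// Conceptual model of a P-Chain validator record (illustrative, not real platformvm code).
interface L1ValidatorRecord {
  nodeID: string;       // unique node ID identifying the validator
  blsPublicKey: string; // the validator's BLS public key
  weight: number;       // the validator's stake weight in its L1
  balance: number;      // balance (in nAVAX) funding the continuous interoperability fee
}

// Hypothetical example entry:
const exampleValidator: L1ValidatorRecord = {
  nodeID: "NodeID-...",   // elided placeholder
  blsPublicKey: "0x...",  // elided placeholder
  weight: 100,
  balance: 1_000_000_000,
};
```

This model also hints at why L1 validators only need to sync the P-Chain: reading these records is enough to always know the current validator set of every chain in the network.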
Creating new records for L1 blockchains on the P-Chain is done by issuing transactions like the `CreateSubnetTx`. The P-Chain is secured by the Primary Network validators. L1 validators sync the P-Chain, meaning they always have the latest view of the validator set of all blockchains in the Avalanche network, but they do not participate in the consensus of the P-Chain.

## Subnets

When the Avalanche network was created, the architecture included the concept of Subnets. Subnets are blockchains validated by a *subset of the Primary Network validators*. Each Primary Network validator can be a member of multiple Subnets, and each Subnet can have multiple validators. Since Primary Network validators have to fulfill the Primary Network staking requirement of `2,000 AVAX`, this was also required of every validator that was validating a Subnet. There was no option to opt out of validating the Primary Network.

Primary Network
Architecture

While the concept of Subnets is still supported in the Avalanche network, it is recommended to launch new blockchains as L1s to take advantage of the benefits outlined earlier. Since the naming of Subnets is deeply enshrined in the Avalanche network, you will still see it sometimes in the code or the transaction names.

# Create a Blockchain (/academy/avalanche-fundamentals/04-creating-an-l1/05-create-blockchain)

---
title: Create a Blockchain
description: Learn how to configure a blockchain and create a record for it on the P-Chain by issuing a CreateChainTx transaction using the Builder Tooling.
updated: 2025-03-13
authors: [owenwahlgren]
icon: SquareMousePointer
---

Now that you have Core wallet set up and some AVAX on the P-Chain, you can create a Subnet. You will do this by issuing a `CreateSubnetTx` transaction. This will create a Subnet that is uniquely identified by the transaction hash of the `CreateSubnetTx` transaction.

The `CreateSubnetTx` transaction only has a single parameter: the owner of the Subnet. The owner can add blockchains to the Subnet and convert it to an L1. With the conversion to an L1 the owner loses these privileges. Therefore, the owner is only relevant during the creation time and does not have to be secured by a multi-sig if an immediate conversion to an L1 is planned. We will just use your P-Chain address as the owner.

Then you will issue the `CreateChainTx` on the P-Chain to create the blockchain record. The `CreateChainTx` transaction has the following parameters:

- `name`: The name of the chain
- `subnetID`: The ID of the Subnet you want to add the chain to
- `vmID`: The ID of the Virtual Machine that will be used to run the chain
- `genesisData`: The genesis configuration of the chain

The Genesis Builder tool allows us to configure many aspects of the blockchain, like permissioning, its tokenomics, and the transaction fee mechanism.
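As a mental model, the four `CreateChainTx` parameters can be represented as a plain object. This is a hypothetical sketch - the field values below are placeholders, not real IDs, and the actual transaction is built and signed by your wallet or tooling:

```typescript
// Hypothetical shape of the CreateChainTx parameters (placeholder values).
interface CreateChainTxParams {
  name: string;        // the name of the chain
  subnetID: string;    // the Subnet the chain is added to (a CreateSubnetTx hash)
  vmID: string;        // the Virtual Machine that will run the chain
  genesisData: string; // the genesis configuration of the chain
}

const params: CreateChainTxParams = {
  name: "My L1 Chain",
  subnetID: "SUBNET_ID_PLACEHOLDER", // tx hash of your CreateSubnetTx
  vmID: "VM_ID_PLACEHOLDER",         // e.g. the ID of the subnetEVM
  genesisData: JSON.stringify({ config: { chainId: 99999 } }), // abbreviated
};
```

The `genesisData` is exactly what the Genesis Builder produces for you in the next step.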
Feel free to browse through the different configuration options, but for now don't change any of the defaults and just click the `View Genesis JSON` button. Then click `Create Chain`.

This will create the P-Chain record for your blockchain and associate it with the Subnet created in the previous step. The blockchain will be uniquely identified by the transaction hash of the `CreateChainTx` transaction.

Congratulations! You have successfully created a blockchain record on the P-Chain. The blockchain does not have any validator nodes yet, so we can't connect our wallet to it or issue any transactions just yet. You will learn how to do that in the next section.

# Set up Validator Nodes (/academy/avalanche-fundamentals/04-creating-an-l1/06-run-a-node)

---
title: Set up Validator Nodes
description: Learn how to deploy validator nodes for your L1 using managed infrastructure or self-hosted Docker deployments.
updated: 2025-03-19
authors: [owenwahlgren]
icon: Terminal
---

import AvalancheGoDocker from "@/components/toolbox/console/layer-1/AvalancheGoDockerL1";
import ToolboxMdxWrapper from "@/components/toolbox/academy/wrapper/ToolboxMdxWrapper.tsx";

Now that the P-Chain records are set up, you can start syncing the node with your Subnet.

## Run Your L1 Node

Use our free managed testnet infrastructure to instantly deploy a node for your L1, with no need for Docker or a cloud provider account with credits. Enjoy the benefits of:

- Instant deployment with one click
- Automatic configuration for your Subnet
- Free for testnet development
- Monitoring and management through the [Builder Console](/console/testnet-infra/nodes)

Managed nodes automatically shut down after 3 days. For production or extended testing, see the self-hosted option below.

## Optional Alternative: Self-Hosted Infrastructure

The free managed testnet nodes are a great option for playing around with Avalanche L1s, but are not intended for running in production environments.
They are shut down automatically after 3 days. If you want to test your production environment, run beyond 3 days, or do anything more complex, you should run nodes on your own infrastructure using Docker:

# Convert a Subnet to an L1 (/academy/avalanche-fundamentals/04-creating-an-l1/07-convert-subnet-l1)

---
title: Convert a Subnet to an L1
description: Learn how to convert a Subnet to an L1 on the P-Chain by issuing a ConvertSubnetToL1Tx transaction using the Builder Tooling.
updated: 2025-03-13
authors: [owenwahlgren]
icon: SquareMousePointer
---

The node you have just launched is tracking the Subnet but is not yet a validator. In fact, since the Subnet does not have any validators yet, it cannot process any transactions. In this step we will do two things at once:

1. Convert the Subnet to an L1
2. Add your node as a validator

Converting a Subnet to an L1 requires the Subnet owner to issue a `ConvertSubnetToL1Tx` transaction on the P-Chain. This transaction establishes a new validator set for your blockchain and transforms it from a Subnet into a sovereign L1 chain. The `ConvertSubnetToL1Tx` transaction is a one-time transaction that can only be executed once by the Subnet owner(s). After the conversion, the Subnet owner loses all privileges and the L1 is controlled by a validator manager contract that manages the validator set. You will learn more about this in the next chapter.

The `ConvertSubnetToL1Tx` has the following parameters:

- **Subnet ID** - The unique identifier of your Subnet
- **Validator Manager Blockchain ID** - The ID of the blockchain where the Validator Manager contract will be deployed.
This blockchain can belong to the L1, be the C-Chain, or belong to any other L1 in the Avalanche network
- **Validator Manager Address** - The address of the contract on that blockchain
- **Validators** - The initial set of validators for the L1

The **Validator Manager Address** is initially set to an OpenZeppelin [TransparentUpgradeableProxy](https://docs.openzeppelin.com/contracts/4.x/api/proxy) contract pre-deployed in the `genesis.json`. After conversion, you'll deploy the actual `ValidatorManager` implementation contract and update the proxy to point to it.

## Conversion Tool

Use the following tool to convert your Subnet to an L1:

# Test your L1 (/academy/avalanche-fundamentals/04-creating-an-l1/08-test-l1)

---
title: Test your L1
description: Test your freshly created L1 by deploying an ERC-20 using the Builder Tooling.
updated: 2025-03-13
authors: [martineckardt]
icon: SquareMousePointer
---

Congratulations! You have successfully created your own L1. Now, let's deploy an ERC-20 token on your L1 and test it.

You have just launched your own blockchain on Avalanche and already deployed your first token on it. In the following chapters you will learn how to tailor your L1 to your application's needs.

# Remove Node (/academy/avalanche-fundamentals/04-creating-an-l1/09-remove-node)

---
title: Remove Node
description: Learn how to stop and remove a node running on Docker.
updated: 2025-03-19
authors: [martineckardt]
icon: Terminal
---

Now that you have tested your L1, you can stop and remove the node. This is useful if you want to free up resources or if you want to start a new node with different parameters.

### Stop Node

To stop the node, you can use the following command:

```bash
docker stop avago
```

The node credentials and the blockchain state are persisted in the `~/.avalanchego` directory. When you restart the node with `docker start avago` it will pick up where it left off.
### Remove Node ```bash docker rm avago ``` This will not remove the state and credentials of the node. To remove these you need to delete the `~/.avalanchego` directory. # Introduction (/academy/avalanche-fundamentals/05-interoperability/01-introduction) --- title: Introduction description: Learn about interoperability in the Avalanche ecosystem and its importance in multichain systems updated: 2024-08-26 authors: [martineckardt, nicolasarnedo] icon: Book --- Now that we know about [Avalanche's Multichain Architecture](/academy/avalanche-fundamentals/03-multi-chain-architecture-intro/01-multi-chain-architecture) it is important we understand how we achieve **native interoperability** between all of these chains. ## What is Interoperability Interoperability refers to the ability of different blockchain networks to communicate, share data, and interact with each other seamlessly. This capability allows assets, information, and functionalities to move between separate blockchain ecosystems without the need for intermediaries. These interactions can take many forms, including: - Asset Bridging (Tokens, NFTs, etc.) - DAO Voting across Chains - Cross-Chain Liquidity Pools - Decentralized Data Feeds - Cross-Chain Smart Contract Calls ## Why Interoperability Matters Without interoperability, blockchains face significant challenges: - **Lack of Liquidity**: New blockchains struggle to attract sufficient liquidity for tokens and financial instruments, limiting their viability and user adoption. - **Limited Developer Adoption**: Developers are hesitant to build on new blockchains without access to existing tools, communities, and infrastructure from established networks. - **Restricted User Access**: Users face barriers entering new blockchains due to lack of direct on-ramp services and limited asset availability. 
## How Avalanche Achieves Interoperability

Avalanche enables native cross-chain communication through a layered approach:

### Avalanche Warp Messaging (AWM)

The foundational protocol that enables L1s to exchange authenticated messages. AWM leverages BLS multi-signatures where validators sign outgoing messages, and these signatures are aggregated for efficient verification on the destination chain.

### Interchain Messaging Contracts (ICM)

Smart contracts that provide a developer-friendly interface on top of AWM. They handle message encoding/decoding, relayer incentives, and cross-chain dApp communication patterns.

### Interchain Token Transfer (ICTT)

Built on Interchain Messaging contracts, ICTT enables asset transfers between L1s by locking tokens on the source chain and minting representations on the destination chain.

## What You'll Learn

In this section, we'll explore:

1. **Interoperability Fundamentals** - Understanding the core concepts and use cases
2. **Securing Cross-Chain Communication** - Cryptographic foundations including BLS signatures
3. **Avalanche's Interoperability Solutions** - Deep dive into AWM and ICTT protocols
4. **Practical Implementation** - How these technologies work together in real applications

Let's begin by understanding the fundamental concepts that make secure cross-chain communication possible.

# Source, Message and Destination (/academy/avalanche-fundamentals/05-interoperability/02-source-message-destination)

---
title: Source, Message and Destination
description: Learn about interoperability and its importance in multichain systems.
updated: 2024-05-31
authors: [martineckardt]
icon: BookOpen
---

Interoperability is achieved by enabling blockchains to pass messages to one another. Each message originates from a source chain and is sent to one or more destination chains. These messages encode arbitrary data that can attest to an event on the source chain, such as the deposit of an asset or the result of a vote.
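As a toy model of this idea, a cross-chain message and the destination chain's verification can be sketched as follows. This is not real AWM code: a real message carries an aggregated BLS signature rather than a list of signer IDs, and the weights and quorum threshold here are invented for illustration:

```typescript
// Toy cross-chain message: the "signature" is a list of signer node IDs,
// standing in for an aggregated BLS signature.
interface CrossChainMessage {
  sourceChainId: string;
  destinationChainId: string;
  payload: string;   // arbitrary encoded data
  signers: string[]; // validators that signed the message
}

// Hypothetical stake weights of the source chain's validators.
const weights: Record<string, number> = { A: 40, B: 35, C: 25 };

// Example quorum: signers must hold at least 67% of total stake weight.
const QUORUM_NUM = 67;
const QUORUM_DEN = 100;

function hasQuorum(msg: CrossChainMessage): boolean {
  const total = Object.values(weights).reduce((a, b) => a + b, 0);
  const signed = msg.signers.reduce((a, id) => a + (weights[id] ?? 0), 0);
  return signed * QUORUM_DEN >= total * QUORUM_NUM;
}

const msg: CrossChainMessage = {
  sourceChainId: "my-l1",
  destinationChainId: "c-chain",
  payload: "0x1234",
  signers: ["A", "B"], // together 75% of the stake weight
};
```

The point of the sketch is the acceptance rule: a message is only accepted if the validators who vouch for it control enough of the source chain's stake.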
![](/common-images/teleporter/source.png) # Source - Origin of communication - Sender calls contract
![](/common-images/teleporter/message.png) # Message - Contains source, destination, and encoded data - Signature guarantees authenticity
![](/common-images/teleporter/destination.png) # Destination - Submission of message as transaction - Verifies signatures
## Source Chain

The source chain in a cross-blockchain communication system refers to the original blockchain where data, assets, or information originate before being communicated to another blockchain. It acts as the starting point for initiating cross-chain interactions. When a user intends to communicate with another blockchain, they utilize protocols or smart contracts to initiate the communication process from the source chain.

## Message

The message is the data structure that will be sent to the destination chain. It contains some metadata, the encoded message, and a signature. The signature attests to the authenticity of the message, meaning that whatever the message claims has actually happened on the source chain.

## Destination Chain

Conversely, the destination chain is the recipient blockchain where the communicated data, assets, or instructions will be received and processed. Validators, nodes, or protocols associated with the cross-blockchain communication system on the destination chain receive and authenticate the information relayed from the source chain. Once validated, the destination chain processes the data or executes the specified instructions.

# ICM, ICM Contracts & ICTT (/academy/avalanche-fundamentals/05-interoperability/03-icm-icmContracts-and-ictt)

---
title: ICM, ICM Contracts & ICTT
description: Learn how Avalanche implements secure interoperability with ICM, ICM Contracts (Teleporter), and ICTT
updated: 2025-07-21
authors: [nicolasarnedo]
icon: BookOpen
---

We can look at Avalanche's native interoperability with a layered approach, each layer guaranteeing that our message is securely sent across systems. In order to understand the fundamentals of cross-chain messaging, here are 5 concepts that we will be covering in this chapter that make it possible:

1. **Secure Signatures** - The foundation that ensures messages can be trusted
2. **ICM (Interchain Messaging)** - Native blockchain feature that enables cross-chain communication
3.
**ICM Contracts** - Developer-friendly tools that make building cross-chain apps easier
4. **ICTT (Interchain Token Transfer)** - Ready-to-use solution for moving tokens between chains *(not always used)*
5. **Relayers** - A service that helps deliver messages between chains by collecting signatures and submitting transactions

![Interchain Messaging Layers](/common-images/avalanche-fundamentals/InterchainMessagingLayers+Relayer.png)

Knowing the components is the first piece of the puzzle; grasping how they interact with each other is the next step.

## Interchain Messaging (ICM)

ICM is the foundation of cross-chain communication on Avalanche. It's built directly into every Avalanche blockchain, making it a native feature rather than an add-on.

### What ICM Does

Think of ICM as a built-in postal system for blockchains:

- **Creates Messages**: Smart contracts can create messages to send to other blockchains
- **Signs Messages**: Validators sign these messages to prove they're authentic
- **Verifies Messages**: Destination blockchains can verify that messages are genuine

### The Warp Precompile

At the heart of ICM is the "warp precompile" - a special smart contract that comes pre-installed on every Avalanche blockchain. Unlike regular smart contracts that developers deploy, this one is built into the blockchain software itself (and is written in Go, unlike most smart contracts, which are written in Solidity).

### Simple Message Flow

![Interchain Messaging Flow](/common-images/avalanche-fundamentals/InterchainMessagingExampleFlow.png)

1. A smart contract creates a message using the warp precompile
2. The blockchain emits an event saying "I have a message to send"
3. Validators sign this message to prove it's legitimate
4. A relayer picks up the message and signatures
5. The relayer delivers everything to the destination blockchain
6.
The destination blockchain verifies the signatures and accepts the message ## ICM Contracts (Teleporter) While ICM provides the basic messaging capability, developers need an easier way to use it. That's where ICM Contracts come in - specifically a contract called "Teleporter". ### What Teleporter Does Teleporter is like a user-friendly interface for ICM: - **Simple Functions**: Instead of complex operations, developers just call `sendCrossChainMessage()` - **Message Management**: Automatically handles encoding, tracking, and delivering messages - **Fee Handling**: Manages payments to relayers for delivering messages - **Security**: Prevents messages from being delivered twice or replayed ### Why Developers Use Teleporter - It's deployed at the same address on every blockchain - It handles all the complex parts of cross-chain messaging - It provides a standard way for contracts to communicate across chains - It makes building cross-chain applications much simpler ## Relayers Relayers are the delivery service of the cross-chain messaging system. They're responsible for physically moving messages from one blockchain to another. ### What Relayers Do 1. **Monitor Blockchains**: Watch for new cross-chain messages 2. **Collect Signatures**: Gather validator signatures that prove a message is valid 3. **Submit Transactions**: Create and submit the transaction on the destination blockchain 4. **Pay Gas Fees**: Cover the transaction costs on the destination chain (and get reimbursed) ### Key Points About Relayers - Anyone can run a relayer - it's permissionless - Relayers need wallets with tokens on destination chains to pay for gas - They can be configured to handle specific routes or all messages - They're incentivized through fee mechanisms built into Teleporter ## ICTT: An Example Cross-Chain Application ICTT (Interchain Token Transfer) is not a core component of cross-chain messaging - it's an application built on top of ICM, Teleporter, and relayers. 
Think of it as one of many possible cross-chain applications. ### What ICTT Does ICTT is a pre-built solution specifically for moving tokens between blockchains: - **Token Home**: Manages the original tokens on their native blockchain - **Token Remote**: Creates wrapped versions of tokens on other blockchains - **Transfer Logic**: Handles locking, minting, burning, and releasing tokens ### Why ICTT is Special While anyone could build their own token bridge using ICM and Teleporter, ICTT provides: - A tested, secure implementation - Standard contracts that work the same way everywhere - Support for both native tokens and ERC-20 tokens - Advanced features like "send and call" for composability Think of ICTT like a pre-built e-commerce platform - you could build your own, but using the existing solution saves time and reduces risk. ## Summary Avalanche's cross-chain messaging system works like a well-coordinated postal service: **The Core Components:** - **ICM**: The built-in messaging system in every blockchain - **ICM Contracts (Teleporter)**: The easy-to-use interface for developers - **Relayers**: The delivery service that moves messages between chains **Why It Works So Well:** - **Trust**: Messages are secured by the same validators that run the blockchains - **Simplicity**: Developers can send messages with just a function call - **Flexibility**: Anyone can build cross-chain applications (like ICTT for tokens) - **Speed**: Messages are delivered in seconds, not minutes or hours This design means developers can focus on building their applications instead of worrying about the complex infrastructure of cross-chain communication. ## What Powers This System? Now that we understand how messages travel from one blockchain to another, you might wonder: what makes this system secure? How do we know a message really came from the source blockchain and hasn't been tampered with? The answer lies in cryptographic signatures - the digital equivalent of a tamper-proof seal. 
In the next sections, we'll explore: - How validators create these digital signatures - Why multiple signatures make the system secure - How signatures can be efficiently combined and verified Understanding these concepts will complete your knowledge of how Avalanche achieves secure, native interoperability. For hands-on practice with cross-chain messaging, check out the [Academy Interchain Messaging](/academy/interchain-messaging) course. # Signature Schemes (/academy/avalanche-fundamentals/05-interoperability/04-signature-schemes) --- title: Signature Schemes description: Learn how Signature Schemes work and enable secure cross-chain communication updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- import SignatureSchemes from "@/content/common/cryptography/signature-schemes.mdx" import defaultMdxComponents from "fumadocs-ui/mdx"; # Use a Signature Scheme (/academy/avalanche-fundamentals/05-interoperability/05-signature-demo) --- title: Use a Signature Scheme description: Hands-on demonstration of BLS signature schemes updated: 2024-05-31 authors: [martineckardt] icon: Terminal --- import SignatureSchemesDemo from "@/content/common/cryptography/signature-schemes-demo.mdx" # Multi-Signature Schemes (/academy/avalanche-fundamentals/05-interoperability/06-multi-signatures) --- title: Multi-Signature Schemes description: Learn how multiple signatures provide Byzantine fault tolerance updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- import MultiSignatureSchemes from "@/content/common/cryptography/multi-signature-schemes.mdx" # Use Multi-Signature Schemes (/academy/avalanche-fundamentals/05-interoperability/07-multi-signature-demo) --- title: Use Multi-Signature Schemes description: Hands-on demonstration of multi-signature schemes with BLS updated: 2024-05-31 authors: [martineckardt] icon: Terminal --- import SignatureSchemesDemo from "@/content/common/cryptography/multi-signature-schemes-demo.mdx" # BLS Signature Aggregation 
(/academy/avalanche-fundamentals/05-interoperability/08-signature-aggregation) --- title: BLS Signature Aggregation description: Learn how signature aggregation enables efficient multi-party verification updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- import SignatureKeyAggregation from "@/content/common/cryptography/signature-key-aggregation.mdx" import defaultMdxComponents from "fumadocs-ui/mdx"; # Use Cases (/academy/avalanche-fundamentals/05-interoperability/09-use-cases) --- title: Use Cases description: Learn about the practical applications of interoperability on Avalanche updated: 2024-08-26 authors: [owenwahlgren] icon: BookOpen --- Interoperability enables various use cases that enhance user experience and expand the Avalanche ecosystem's capabilities. Let's explore the key applications: ## 1. Cross-Chain Token Transfers Tokens can be seamlessly transferred across different L1 blockchains without centralized exchanges: - **Seamless Transfers**: Move tokens (like USDC) between chains with minimal fees and fast transaction times - **Liquidity Access**: Leverage liquidity from various networks for DeFi activities, trading, or payments - **Decentralization**: Maintain control over assets during transfers without third-party intermediaries ## 2. Decentralized Data Feeds Smart contracts can access reliable price feeds and other data from specialized oracle chains: - **Chainlink Integration**: Utilize real-time price data to power DeFi applications - **Trustless Access**: Obtain data directly from decentralized oracles - **Cross-Chain Applications**: Build financial applications that rely on consistent data across networks ## 3. Cross-Chain Token Swaps Direct token swaps between different blockchain networks without centralized exchanges: - **Decentralized Exchange**: Swap assets in a trustless environment - **Multi-Chain Access**: Access tokens from various ecosystems - **Lower Costs**: Avoid high fees of centralized platforms ## 4. 
Cross-Chain NFTs NFTs can be transferred and utilized across different blockchain networks: - **Broad Exposure**: NFTs can be showcased and traded across multiple chains - **Enhanced Utility**: Use NFTs in gaming, art, and virtual worlds across platforms - **Ownership Preservation**: Maintain authenticity and provenance across chains ## 5. Interoperable DeFi Protocols DeFi protocols can interact across chains for enhanced functionality: - **Cross-Chain Yield Farming**: Maximize returns across multiple chains - **Cross-Chain Collateralization**: Use assets from any chain as collateral - **Composability**: Build innovative financial products leveraging multiple networks ## 6. Cross-Chain Governance Decentralized governance that spans multiple blockchains: - **Unified Governance**: Govern multi-chain protocols from a single platform - **Decentralized Voting**: Enable participation from token holders on different chains - **Enhanced Participation**: More diverse and representative decision-making ## Real-World Example: Gaming Consider a gaming ecosystem where: - Game assets (NFTs) are minted on one L1 optimized for low fees - The game logic runs on another L1 optimized for high throughput - Rewards are distributed on a third L1 with established DeFi infrastructure Interoperability makes this seamless for users who can play, trade, and earn across all chains without friction. # Independent Tokenomics (/academy/avalanche-fundamentals/07-independent-tokenomics/01-independent-tokenomics) --- title: Independent Tokenomics description: Quickly recap our past learnings about Avalanche Custom Blockchains. updated: 2024-06-28 authors: [usmaneth] icon: Book --- Avalanche Custom Blockchains offer multiple ways to implement independent tokenomics. This gives developers more control and can enable new business models that would not be economically feasible on single-chain systems. 
The customizations include:

- **Native Token:** Every Avalanche L1 has its own native token used for paying transaction fees.
- **Transaction Fees:** We can configure how the transaction fees should be calculated.
- **Initial Native Token Allocation:** We can specify how the initial token supply is distributed.
- **Native Token Minting Rights:** We can specify if and who can mint more native tokens.
- **Staking Token:** If our Avalanche L1 allows public and permissionless validation, we can define our own logic for how a node can become a validator.

You will learn about all these topics and get hands-on experience configuring the tokenomics of your own custom blockchain.

# Staking Token (/academy/avalanche-fundamentals/07-independent-tokenomics/11-staking-token)

---
title: Staking Token
description: Learn about staking tokens in blockchain networks.
updated: 2024-06-28
authors: [usmaneth]
icon: BookOpen
---

In many networks, such as Ethereum, the same token is used for staking and paying for gas. In the Avalanche network, staking tokens and gas tokens can be separated, since they fulfill different purposes within the blockchain ecosystem.

Staking tokens are used for securing public permissionless Avalanche L1s through a proof-of-stake (PoS) consensus mechanism. Holders of staking tokens can run validators and participate in the consensus process by staking a certain amount of tokens as collateral. Validators propose and validate new blocks of your L1 blockchain. Staking tokens play a crucial role in maintaining network security and incentivizing validator participation.

Not all Avalanche L1s are public and permissionless. Many enterprises choose a Proof-of-Authority setup, where the validators are selected by the enterprise. These blockchains do not have a staking token. You will learn about setting up Proof-of-Stake in the L1 Validator Management course.
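To make the staking idea concrete, here is a toy proof-of-stake sketch. All numbers and rules are invented for illustration; on a real L1 this logic would live in the validator manager contract:

```typescript
// Toy PoS registry: a node becomes a validator by locking at least a
// minimum amount of the L1's own staking token as collateral.
const MIN_STAKE = 1_000n; // hypothetical minimum stake, in staking tokens

const stakes = new Map<string, bigint>();

function stake(nodeId: string, amount: bigint): boolean {
  if (amount < MIN_STAKE) return false; // below the entry threshold
  stakes.set(nodeId, (stakes.get(nodeId) ?? 0n) + amount);
  return true;
}

// A validator's influence in consensus is proportional to its stake.
function votingShare(nodeId: string): number {
  let total = 0n;
  for (const s of stakes.values()) total += s;
  const own = stakes.get(nodeId) ?? 0n;
  return total === 0n ? 0 : Number((own * 10_000n) / total) / 10_000;
}
```

The collateral is what makes participation costly to attack: a validator's weight in consensus grows with the tokens it locks up.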
# Introduction to VM Customization (/academy/avalanche-fundamentals/07b-vm-customization/00-vm-customization)

---
title: Introduction to VM Customization
description: Learn about customizing Virtual Machines.
updated: 2024-05-31
authors: [ashucoder9]
icon: Book
---

For some use cases, it may be necessary to use a customized VM. This is the case if an application cannot be built on a regular EVM on the C-Chain, or if it would result in gas costs too high to be economical for its users or creators. Depending on a builder's needs, Avalanche allows for different VM customizations:

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/32-BJ1Orj9c9p0VxpSTUJlWH1jPvo8nlD.png)

Customizing the EVM on the Ethereum network is difficult, requiring a wide consensus that the proposed change is mutually beneficial for all network participants. This can make customizations for unique use cases challenging, if not impossible. Additionally, Ethereum doesn't give users the option to add different chains.

In Avalanche, every Avalanche L1 has autonomy over its Virtual Machines. Their creators can customize VMs to fit their unique requirements. This is one of the biggest advantages of multi-chain systems.

In this section, we will look at the three ways to customize Virtual Machines at a high level. In later courses, we will dive deeper and learn how to actually customize each.

# VM Configuration (/academy/avalanche-fundamentals/07b-vm-customization/01-configuration)

---
title: VM Configuration
description: Learn about customizing Virtual Machines.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

When building a VM, it is possible to define certain parameters that change the behavior of the VM. In our soda dispenser analogy, these may be the products and prices offered by the dispenser. We might want to have two dispenser blockchains that offer different products and prices.
If the VM is built in a way that it has parameters for the products and prices, it can be easily reused for different use cases.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/33-AibP9P6YHVSun8IyXBAkKAJMeXVLqp.png)

This is a massive advantage over one-chain-fits-all systems, where the parameters have to be a compromise between all network participants. In Avalanche it is possible to have different blockchains with the same VM, but different parameters. Using VM configuration we can easily create EVM chains for different use cases, such as trading cheap gaming NFTs or valuable real estate on-chain. These blockchains may differ in fees (low fees for cheap NFTs, high fees for valuable goods) and security levels (low security for cheap NFTs, high security for valuable goods).

Examples of the configurable parameters of the subnetEVM include:

**txAllowList:** Defines a whitelist of accounts that restricts whose transactions are accepted by the VM.

**contractDeployerAllowList:** Defines a whitelist of accounts that restricts which accounts can deploy contracts on the blockchain.

Using these parameters we can adapt the VM to our requirements without writing a single line of code. This is by far the easiest, but also the least flexible, way to customize a VM to one's requirements.
```json
{
  "config": {
    "chainId": 99999,
    "homesteadBlock": 0,
    "eip150Block": 0,
    "eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0",
    "eip155Block": 0,
    "eip158Block": 0,
    "byzantiumBlock": 0,
    "constantinopleBlock": 0,
    "petersburgBlock": 0,
    "istanbulBlock": 0,
    "muirGlacierBlock": 0,
    "subnetEVMTimestamp": 0,
    "feeConfig": {
      "gasLimit": 20000000,
      "minBaseFee": 1000000000,
      "targetGas": 100000000,
      "baseFeeChangeDenominator": 48,
      "minBlockGasCost": 0,
      "maxBlockGasCost": 10000000,
      "targetBlockRate": 2,
      "blockGasCostStep": 500000
    },
    "contractDeployerAllowListConfig": {
      "blockTimestamp": 0,
      "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
    }
  },
  "alloc": {
    "8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": {
      "balance": "0x52B7D2DCC80CD2E4000000"
    },
    "0x0Fa8EA536Be85F32724D57A37758761B86416123": {
      "balance": "0x52B7D2DCC80CD2E4000000"
    }
  },
  "nonce": "0x0",
  "timestamp": "0x0",
  "extraData": "0x00",
  "gasLimit": "0x1312D00",
  "difficulty": "0x0",
  "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x0000000000000000000000000000000000000000",
  "number": "0x0",
  "gasUsed": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
}
```

You can find more examples of Genesis files [here](https://github.com/ava-labs/subnet-evm/tree/master/tests/precompile/genesis).

# VM Modification (/academy/avalanche-fundamentals/07b-vm-customization/02-modification)

---
title: VM Modification
description: Learn about modifying Virtual Machines.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

The next more powerful way to customize a VM is VM Modification. To meet our requirements, we can modify and extend our existing VM. In our soda dispenser analogy, we might want to customize our soda dispenser to accept card payments. Our current soda dispenser machine simply does not have this feature.
Instead of reinventing the wheel and building a new dispenser with the new feature from the ground up, we simply extend the existing VM using a plugin. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/34-C4A2siQ7sVC3E0NAYfJ60Kr6AJDVQA.png) In Avalanche, creating modified VMs is straightforward. The subnetEVM, for instance, can be customized through the development of plugins as well as precompiles. # VM Creation (/academy/avalanche-fundamentals/07b-vm-customization/03-creation) --- title: VM Creation description: Learn about creating Virtual Machines. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- For some use cases it might be necessary to create entirely new VMs. In our analogy this would be the case if we require completely different features; let's say we want to build a car VM. There is simply no way we can configure or extend a soda dispenser to turn it into a car. Instead, we have to build a new VM from the ground up. These newly created VMs can cater to much more specialized use cases, enabling completely new blockchain applications that were not possible before. VM Creation is by far the most complex, but also the most powerful, way to customize a VM. It requires a deep understanding of the blockchain space and software development skills. Certain basic elements appear in most VMs, such as the notions of transactions, accounts, and authentication. Therefore, Avalanche provides SDKs, such as the HyperSDK, to make the creation of VMs in languages like Go and Rust easier. When creating a VM, there are two very distinct design patterns that vary greatly in their intended use cases. ## Special Purpose VMs Special Purpose VMs are highly optimized for a specific use case. They only allow a very limited set of operations, closely tied to that use case. A popular example is the VM of the Bitcoin blockchain.
Its operations are much more limited than those of the Ethereum Virtual Machine; it only allows transactions related to the transfer of BTC. ## General Purpose VMs General Purpose VMs introduce another layer - commonly known as Smart Contracts or Programs - that can be submitted to the VM, enabling users of the chain to interact with them. This way, users can introduce their own logic to the VM. The most common General Purpose VM is the Ethereum Virtual Machine (EVM). However, some Avalanche L1 creators might want to create a VM that utilizes another language, such as Move, instead of Solidity as its smart contract language. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/35-ZGXXBzP8NzWvLQm5y1giGjRTfATKwO.png) One of the EVM's key features is its capability to execute smart contracts in a deterministic and trustless manner, ensuring that the outcome of their execution is predictable and verifiable by all network participants. Developers can create decentralized applications and deploy them on the Ethereum blockchain. This enables a wide range of use cases including decentralized finance (DeFi), tokenization, supply chain management, and more. The EVM's flexibility, security, and compatibility have made it a fundamental component of the Ethereum ecosystem, powering a vibrant and rapidly evolving ecosystem of decentralized applications and services. # Permissioning (/academy/avalanche-fundamentals/08-permissioning-users/01-permissioning) --- title: Permissioning description: Learn about different ways of permissioning your Avalanche L1 blockchain. updated: 2024-06-28 authors: [usmaneth] icon: Book --- Permissioning your Avalanche L1 is an optional feature of running your own L1 blockchain.
There are many reasons why a blockchain creator might want to permission their network, including: - **Privacy and Confidentiality:** In industries where data privacy is crucial, such as finance or healthcare, permissioned blockchains allow only authorized parties to access sensitive information. This ensures that confidential data remains protected from unauthorized access. - **Regulatory Compliance:** Many industries are subject to strict regulatory requirements regarding data handling and security. Permissioned blockchains enable organizations to implement necessary controls to ensure compliance with regulatory standards without compromising the integrity of the blockchain. - **Cost-efficiency:** Restricting the usage of a blockchain to certain use cases may make it more cost-efficient. Blockchain operators may be able to avoid transaction fee spikes caused by events such as NFT drops by imposing restrictions. These permissions don't necessarily have to be administered by a centralized entity. There could also be a DAO in charge of determining who should be allowed to use the blockchain. In the upcoming lessons, you will learn about the different levels of permissions and how they can be configured for your Avalanche L1. # Compliance (/academy/avalanche-fundamentals/08-permissioning-users/02-compliance) --- title: Compliance description: Learn how you can configure compliance for your Avalanche L1. updated: 2024-06-28 authors: [usmaneth] icon: BookOpen --- For institutions, reputational damage and legal complications could arise if their blockchain systems were inadvertently used for criminal activities. Regulatory bodies might impose penalties on such institutions if they fail to implement sufficient controls to prevent these activities. Most institutions have nuanced compliance requirements. Therefore, many institutions need flexible blockchains that can meet these needs.
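As a concrete sketch of what such controls look like, a transaction allowlist can be switched on in a subnet-evm genesis roughly as follows (the admin address is a placeholder; the field names mirror the `contractDeployerAllowListConfig` example shown earlier, so treat the exact shape as illustrative):

```json
{
  "config": {
    "chainId": 99999,
    "subnetEVMTimestamp": 0,
    "txAllowListConfig": {
      "blockTimestamp": 0,
      "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
    }
  }
}
```

Once active, only addresses granted a role by an admin can issue transactions on the chain.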
## Interacting with Criminal Actors In permissionless systems, the counterparty in a trade is unknown. If a user performs a swap on a decentralized exchange on a permissionless blockchain, such as Ethereum or the Avalanche C-Chain, the origin of the funds received is mostly unknown. To solve this problem, each Avalanche L1 can restrict who can interact with the blockchain, so that only accounts on an allowlist can issue transactions. One could build a blockchain where only accounts that went through a KYC process are permitted. ## Interacting with Illegal Services An additional risk to consider is that businesses may operate within an environment prone to illicit activities, or they may inadvertently become involved in such actions. In a permissionless system, any individual has the capability to deploy any contract, regardless of its compliance with the legal framework. Avalanche L1s can limit who deploys smart contracts on the blockchain. Consequently, those involved in the creation of contracts can be held accountable. This level of control reassures institutions, ensuring that all protocols they engage with are fully compliant with existing laws. Beyond compliance advantages, this approach helps to maintain a blockchain free from an excess of smart contracts that could otherwise congest the system. # Transaction Allowlist (/academy/avalanche-fundamentals/08-permissioning-users/04-tx-allowlist) --- title: Transaction Allowlist description: Learn how to restrict transactions to a specific set of addresses. updated: 2024-06-28 authors: [usmaneth] icon: BookOpen --- import TxAllowList from "@/content/common/evm-precompiles/transaction-allowlist.mdx"; # Activate Transaction Allowlist (/academy/avalanche-fundamentals/08-permissioning-users/05-activate-tx-allowlist) --- title: Activate Transaction Allowlist description: Learn how to activate the transaction allowlist precompile in the genesis.
updated: 2024-06-28 authors: [martineckardt] icon: Terminal --- Avalanche L1s do not implement permissioning by default. To enable permissioning, you need to activate the transaction allowlist precompile in the genesis. This allows only approved addresses to issue transactions on your blockchain. # Contract Deployer Allowlist (/academy/avalanche-fundamentals/08-permissioning-users/06-contract-deployer-allowlist) --- title: Contract Deployer Allowlist description: Learn how to restrict contract deployment to a specific set of addresses. updated: 2024-06-28 authors: [martineckardt] icon: BookOpen --- import ContractDeployerAllowlist from "@/content/common/evm-precompiles/contract-deployer-allowlist.mdx"; # Activate Contract Deployer Allowlist (/academy/avalanche-fundamentals/08-permissioning-users/07-activate-contract-deployer-allowlist) --- title: Activate Contract Deployer Allowlist description: Learn how to activate the contract deployer allowlist precompile in the genesis. updated: 2024-06-28 authors: [martineckardt] icon: Terminal --- Analogous to the transaction allowlist, Avalanche L1s do not implement a contract deployer allowlist by default. To enable permissioning, you need to activate the contract deployer allowlist precompile in the genesis. This allows only approved addresses to deploy contracts on your blockchain. # Permissioning Validators (/academy/avalanche-fundamentals/09-permissioning-validators/01-permissioning-validators) --- title: Permissioning Validators description: Learn about different ways of permissioning your Avalanche L1 blockchain. updated: 2024-06-28 authors: [usmaneth] icon: Book --- The ability to control a blockchain's validator set has many benefits that can significantly enhance the efficiency, security, and governance of the network. It allows blockchain participants to have greater influence over the consensus process and decision-making, leading to a more robust and adaptable ecosystem.
There are two ways to structure the validator set: - **Permissionless Validation:** Anyone can participate in the validation process and be rewarded in tokens to cover their hardware and operating costs. In Avalanche, we call Avalanche L1s that take this approach Elastic Avalanche L1s. - **Permissioned Validation:** Only whitelisted validators can validate the Avalanche L1. A permissioned Avalanche L1 can be turned into an Elastic Avalanche L1 at any time, but not the other way round. There are many reasons why creators might want a permissioned validator set for their Avalanche L1. You will learn in detail about the technical options in the L1 Validator Management course. ## Enterprises and Corporations: Large enterprises and corporations often require blockchain solutions for their internal operations or specific industry use cases. A permissioned validator set allows them to control who can participate in the network and validate transactions. This is particularly relevant in industries with strict regulatory requirements, where entities need to ensure compliance and data privacy. ## Consortiums and Industry Associations: Consortiums or industry associations comprising multiple organizations with shared interests can benefit from a permissioned validator set. These entities often collaborate on initiatives requiring a distributed ledger, such as supply chain management, healthcare data sharing, or financial transactions. By establishing a permissioned validator set, the consortium can ensure that trusted participants from within the consortium validate the transactions. ## Government Agencies: Government entities may find value in launching a blockchain with a permissioned validator set to manage critical infrastructure, public services, or regulatory processes. They can select validators from trusted institutions or stakeholders to maintain control over the network while ensuring compliance with legal and governance requirements.
Examples include land registry systems, voting platforms, or identity management solutions. ## Financial Institutions: Banks, payment processors, and other financial institutions could be interested in a permissioned validator set for blockchain solutions related to payments, remittances, or settlement systems. These institutions often require a level of control over the network to maintain regulatory compliance, prevent money laundering, and ensure adherence to Anti-Money Laundering (AML) regulations. # Private Blockchains (/academy/avalanche-fundamentals/09-permissioning-validators/02-private-blockchains) --- title: Private Blockchains description: Learn about different ways of permissioning your Avalanche L1 blockchain. updated: 2024-06-28 authors: [usmaneth] icon: BookOpen --- On a public blockchain, every transaction and change to the blockchain's state is visible to all. Anyone with internet access can see the transaction data. Block Explorers are one way to conveniently access the data on a chain. Although the identities of participants are often pseudonymous, the transaction data itself is open for anyone to see. However, there are many instances, especially in business or enterprise settings, where such transparency isn't desirable. A company might be conducting a high-value transaction that it doesn't want competitors or others in the market to see. There might also be legal or regulatory reasons for needing to keep transaction data private. In a private, permissioned blockchain, the visibility of transactions is limited to a select group of individuals or entities. This way, sensitive data isn't exposed. Only those who have been granted the necessary permissions can view the transaction data, and these permissions can be tightly controlled and monitored by network administrators. Further, transactions in a private blockchain can be encrypted, adding an extra layer of security. 
Even within the network, details about specific transactions can be hidden from certain participants, based on their level of permission. This allows for a much higher degree of confidentiality, as specific data can be revealed only on a need-to-know basis. In summary, privacy in the context of private, permissioned blockchains isn't just about restricting who can join. It's about having granular control over who can see what data, and when. It's about providing a secure, confidential environment where sensitive transactions can occur without the risk of exposure to unintended parties. This has made private, permissioned blockchains an attractive option for businesses and organizations dealing with sensitive data, high-value transactions, or strict confidentiality requirements. ## Private Blockchains The data of Avalanche L1s is, by default, publicly available. This means that every node can sync and listen to ongoing transactions/blocks in these blockchains, even if they're not validating the network. Nodes may do this for indexing purposes or simply to have access to the current state of the blockchain without relying on third parties. Avalanche L1 validators can choose not to publish data from their blockchains. If a node sets `validatorOnly` to true, the node exchanges messages only with the blockchain's validators. Other peers won't be able to learn the contents of this blockchain from their nodes. # Intain Markets Case Study (/academy/avalanche-fundamentals/09-permissioning-validators/03-case-study-intain-markets) --- title: Intain Markets Case Study description: Learn about Intain's permissioned Avalanche L1. updated: 2024-06-28 authors: [usmaneth] icon: Microscope --- Intain operates a structured finance platform. It has **IntainMARKETS**, a marketplace for tokenized asset-backed securities built as an Avalanche L1.
The digital marketplace automates and integrates functions of key stakeholders in structured finance, including issuer, verification agent, underwriter, rating agency, servicer, trustee, and investor. Rather than replacing trust intermediaries, it integrates them onto a single platform to enable digital issuance and investment with a complete on-chain workflow. All parties work together in a transparent but private environment. ## Permissioned Validators and Tokenomics The **IntainMARKETS L1** also uses other ways to adhere to regulatory requirements. It has U.S.-hosted infrastructure, which allows data to reside domestically, while validators chosen by network participants must also be verified U.S. entities and individuals. The Avalanche L1 economics are not dependent on any public token, and transaction costs are independent of those of the Avalanche C-Chain and other Avalanche L1s. Further Readings: [Intain Launch Announcement](https://medium.com/avalancheavax/intain-launches-avalanche-subnet-to-usher-in-new-era-for-multi-trillion-dollar-securitized-877c7cc1031f) # What is a Blockchain? (/academy/blockchain-fundamentals/02-what-is-a-blockchain/01-what-is-a-blockchain) --- title: What is a Blockchain? description: Understand the high-level structure of this chapter. updated: 2024-09-01 authors: [martineckardt] icon: Book --- Blockchains introduce a lot of new concepts. In this chapter we will approach the concept from a high level and see how it compares with other kinds of computers we are used to: - **Decentralized Computer:** How is a blockchain different from computers we know? - **Decentralized Applications:** What are dApps? - **Use Cases:** When does it make sense to use Blockchain?
# Decentralized Computer (/academy/blockchain-fundamentals/02-what-is-a-blockchain/02-decentralized-computer) --- title: Decentralized Computer description: Explore how different types of computers, including smartphones, PCs, web servers, and blockchains, serve specific functions based on their unique characteristics. Understand the benefits and trade-offs of blockchain as a decentralized computer, ideal for secure transactions but less efficient and more complex than traditional centralized systems. updated: 2024-09-01 authors: [martineckardt] icon: BookOpen --- We interact with various types of computers every day, each designed to serve specific functions based on its unique characteristics. Smartphones, for example, are portable and convenient, offering powerful computing capabilities in a compact form. They are ideal for tasks that require mobility, such as communication, navigation, and media consumption. However, their small screen size and limited processing power compared to larger computers make them less suitable for tasks that require intensive computing power or extensive multitasking. PCs, on the other hand, provide more robust processing power and versatility. They are well-suited for tasks like video editing, gaming, or software development, where performance and the ability to use larger screens and peripherals like a mouse and keyboard are crucial. Yet, they lack the portability of smartphones and often require more space and power to operate. ![](/common-images/blockchain-basics/decentralized-computer/different-kinds-of-computer.png) Web servers represent another type of computer, designed to handle large volumes of data and manage multiple simultaneous requests from users across the internet. They are optimized for reliability, speed, and the ability to run continuously without interruption, making them crucial for hosting websites, online services, and cloud computing platforms. 
While web servers excel at handling complex backend processes and large-scale operations, they are not intended for direct user interaction or tasks requiring a graphical user interface, which is where PCs and smartphones come in. Each type of computer has its strengths and weaknesses, and they often work together within the broader digital ecosystem, each performing tasks that suit their design and capabilities best. The internet, as we know it today, is built on a centralized architecture, where data and services are controlled and maintained by a few large entities like data centers and internet service providers. However, the advent of blockchain technologies has opened up new possibilities for a more decentralized internet, often referred to as Web 3.0 or the decentralized web. ## Blockchains are Decentralized Computers Blockchain can be thought of as a new kind of computer. The key advantage of this new kind of computer is decentralization, meaning that no single entity has control over the entire system. Instead, control is distributed across many participants, which enhances security and transparency, as all transactions are verified by consensus and recorded on an immutable ledger. ![](/common-images/blockchain-basics/decentralized-computer/blockchain-computer.png) However, this decentralization comes with trade-offs. Blockchains are inherently less efficient than traditional centralized systems. The complexity of managing and maintaining a decentralized network, along with slower transaction speeds, can be significant downsides compared to more traditional, centralized computing models. This makes blockchain ideal for certain use cases, like secure financial transactions or decentralized applications, but less suitable for tasks requiring high speed and efficiency. 
# Decentralized Applications (/academy/blockchain-fundamentals/02-what-is-a-blockchain/03-decentralized-applications) --- title: Decentralized Applications description: TBD updated: 2024-09-01 authors: [martineckardt] icon: BookOpen --- Programs and apps are software applications designed to run on different types of computers, such as smartphones, PCs, and web servers. On smartphones, apps are typically designed for specific tasks like messaging, gaming, or social media, optimized for touch interfaces and mobile connectivity. PCs run more complex programs, such as word processors, video editors, or development tools, which take advantage of larger screens, more powerful hardware, and the ability to handle multitasking. Web servers, on the other hand, run server-side applications that power websites, process data, and handle multiple user requests simultaneously. These programs often provide the backend services that apps on smartphones and PCs rely on to function. ![](/common-images/blockchain-basics/decentralized-computer/decentralized-applications.png) Similarly, smart contracts or decentralized applications (dApps) are programs that run on a blockchain. A smart contract is a self-executing program with the terms of the agreement directly written into code, operating without the need for a trusted central authority. dApps are more complex, often consisting of multiple smart contracts that together offer a full application experience on the blockchain. Like traditional apps, these blockchain-based programs can perform a wide range of tasks, from managing digital assets to running decentralized finance (DeFi) protocols. ![](/common-images/blockchain-basics/decentralized-computer/decentralized-applications-connected.png) Different types of applications can interoperate across these platforms, enabling a seamless user experience. For instance, a mobile app can interact with a backend running on a web server to fetch or update data. 
Similarly, the same app could also interact with a smart contract on a blockchain to verify transactions or access decentralized services. This interoperability allows users to benefit from the strengths of each platform, combining the user-friendliness of mobile apps, the processing power of web servers, and the security and transparency of blockchain technology. # Use Cases (/academy/blockchain-fundamentals/02-what-is-a-blockchain/04-use-cases) --- title: Use Cases description: Explore the unique advantages and challenges of blockchain technology in finance, supply chain management, and voting systems. Learn how blockchain enhances security, transparency, and decentralization, while also considering its limitations in efficiency and complexity. updated: 2024-09-01 authors: [martineckardt] icon: BookOpen --- Blockchains have unique properties that make them an excellent fit for certain use cases where security, transparency, and decentralization are paramount: ### Finance Blockchains enable secure, transparent transactions without the need for intermediaries. This is the foundation of cryptocurrencies and decentralized finance (DeFi) platforms. The immutability of blockchain records ensures that financial transactions are tamper-proof and can be audited at any time, making it ideal for high-stakes environments where trust is critical. ### Supply Chain Management Blockchain can provide an immutable record of a product's journey from origin to consumer, increasing transparency and reducing fraud. This helps ensure that every step of the supply chain is accurately tracked and verified. ### Voting Systems Blockchain's transparency and security ensure that votes are accurately counted and cannot be altered. This is crucial for maintaining democratic integrity and building trust in electoral processes. ## Downsides However, the same properties that make blockchains so powerful in these areas also introduce significant challenges. 
Blockchains are inherently less efficient than traditional databases because they require consensus among distributed nodes, which can slow down transaction speeds and increase the complexity of operations. This makes blockchain less suitable for use cases where high speed and efficiency are critical, such as real-time data processing or high-frequency trading. Additionally, the complexity of developing and maintaining blockchain systems can be a barrier, particularly for applications that don't require the levels of security and decentralization that blockchains offer. When deciding whether to use blockchain, consider whether your use case requires the specific advantages of decentralization, transparency, and security. Blockchain is well-suited for applications where trust between parties is a major concern and where an immutable record of transactions is essential. However, if your application demands high throughput, real-time processing, or simplicity, a traditional centralized system might be more appropriate. In short, blockchain is a powerful tool for the right situations, but its use should be carefully considered against the specific needs of your application. # Payments Use Case (/academy/blockchain-fundamentals/03-payments-use-case/01-payments-use-case) --- title: Payments Use Case description: Explore how blockchain technology can revolutionize payments by building decentralized systems. This chapter delves into account balances, transactions, and user interactions, highlighting the benefits of blockchain for security, transparency, and decentralization. Discover how these concepts apply to various use cases beyond payments, including decentralized finance, voting systems, and supply chain management. updated: 2024-09-01 authors: [martineckardt] icon: Book --- In this chapter we will dive deeper into the payments use case. We will explore how blockchain technology can be used to build a decentralized payment system.
We will look at the different components of a payment system, such as account balances, transactions, and user interactions. We will also discuss the benefits of using blockchain technology for payments, such as security, transparency, and decentralization. | Use Case | Data Structures | User Interactions | |----------------------- |------------------ |-------------------- | | Payments | Account Balances | Transfer Funds | | Decentralized Finance | Loans, ... | Borrow, Repay, .. | | Voting Systems | Votes | Vote | | Supply Chain | Shipments | Hand Over, Deliver | | Identity Management | Certificates | Issue, Proof | Many of the concepts we will discuss in this chapter are applicable to other use cases as well. For example, the idea of account balances and transactions is not unique to payments but can be found in other applications like decentralized finance (DeFi) or supply chain management. Similarly, user interactions such as transferring funds or voting can be applied to various use cases, each with its unique requirements and challenges. So while we are discussing this specific use case, try to think about how these concepts could be adapted to other scenarios. This will help you understand the broader implications of blockchain technology and how it can be used to solve a wide range of problems across different industries and domains. # Account Balances & Transfers (/academy/blockchain-fundamentals/03-payments-use-case/02-account-balances-transfers) --- title: Account Balances & Transfers description: Learn about account balances, which reflect the current amount of money or assets in an account after credits and debits. Discover how transfers work by moving funds between accounts, ensuring balance integrity across financial systems. updated: 2024-09-01 authors: [martineckardt] icon: BookOpen --- ## Account Balances Account Balances refer to the current amount of money or assets held in a specific account at a given time. 
This number reflects the total value after all credits (additions) and debits (subtractions) have been accounted for, indicating how much the account holder has available to use or withdraw. For example, in a bank account, the balance represents the amount of money that the account holder can access, whether for spending, saving, or transferring.
![](/common-images/blockchain-basics/payments/account-balances.png)
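In code, the account-balance model above can be sketched as a simple mapping from accounts to running totals of credits and debits (the account names and amounts are made up for illustration):

```python
# Sketch of an account-balance model: a balance is the running total
# of all credits (additions) and debits (subtractions).
balances = {"alice": 0, "bob": 0}

def credit(account: str, amount: int) -> None:
    """Add funds to an account."""
    balances[account] += amount

def debit(account: str, amount: int) -> None:
    """Remove funds, refusing to let the balance go negative."""
    if amount > balances[account]:
        raise ValueError("insufficient funds")
    balances[account] -= amount

credit("alice", 100)      # alice is credited 100
debit("alice", 30)        # alice is debited 30
print(balances["alice"])  # 70 available to spend, save, or transfer
```

A debit that exceeds the available balance is rejected, which is exactly the safeguard the sections below rely on.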
## Transfers A Transfer is the process of moving money or assets from one account to another. During a transfer, the balance of the sending account is reduced by the amount being transferred, while the balance of the receiving account is increased by the same amount. This operation ensures that the total value across all accounts remains constant, maintaining the integrity of the overall financial system. ![](/common-images/blockchain-basics/payments/transfer.png) ## Invalid Transfers An invalid transfer occurs when a transaction request exceeds the available balance in the sending account. In such cases, the transfer cannot be executed, and the system will reject the transaction to prevent overdrafts. As a result, both the sending and receiving account balances remain unchanged, preserving the integrity of the financial records. This safeguard ensures that accounts do not inadvertently fall into negative balances and maintains accurate and reliable financial tracking. ![](/common-images/blockchain-basics/payments/transfer-invalid.png) # Ledger (/academy/blockchain-fundamentals/03-payments-use-case/03-ledger) --- title: Ledger description: Understand the high-level structure of this chapter. updated: 2024-09-01 authors: [martineckardt] icon: BookOpen --- A ledger is a comprehensive record-keeping system that documents all transactions. It maintains a detailed and chronological list of transactions, capturing every transfer across accounts. This ensures that every action is accurately tracked and traceable, providing a clear and complete view of the system's activity. ![](/common-images/blockchain-basics/payments/ledger.png) ## Immutability An immutable ledger means that once a transaction is recorded, it cannot be altered or deleted. This immutability ensures that the historical record of all transactions remains intact and unchangeable, which is crucial for maintaining transparency and trust.
Even if errors are made or transactions are invalid, the ledger preserves the original data, preventing tampering or unauthorized modifications. ## Append-Only An append-only ledger operates by adding new transactions to the end of the record without altering any previous entries. This means that while the ledger grows with each new transaction, past transactions remain unchanged and preserved. Invalid transactions, which fail to execute due to issues like insufficient funds, are still recorded in the ledger to maintain a complete history. To reverse the effects of a transaction, a new transaction must be appended that counteracts the previous one, ensuring the integrity and consistency of the ledger’s overall state. # Signatures (/academy/blockchain-fundamentals/04-signatures/01-signatures) --- title: Signatures description: Learn how digital signatures ensure transaction authenticity and security in blockchain systems through public-key cryptography. Understand the role of private and public keys in creating verifiable, tamper-proof transactions that maintain trust in decentralized networks. updated: 2025-10-29 authors: [martineckardt, katherineavalabs] icon: Book --- Digital signatures are a fundamental component of blockchain technology, providing the cryptographic foundation for secure, authentic, and tamper-proof transactions. Just as handwritten signatures do, digital signatures enable users to prove their identity, authorize transactions, or attest to events. ## What Are Digital Signatures? A **digital signature** is a cryptographic mechanism that ensures the authenticity, integrity, and non-repudiation of digital messages or transactions. Think of it as an electronic "fingerprint" that uniquely binds a transaction to its originator. Unlike physical signatures, digital signatures are mathematically generated and virtually impossible to forge. In blockchain systems, every transaction must be signed by the sender to prove that they authorize the transfer of assets.
This signature provides: - **Authentication**: Confirms the identity of the transaction sender - **Integrity**: Ensures the transaction data hasn't been altered - **Non-repudiation**: Prevents the sender from denying they made the transaction ## How Digital Signatures Work Digital signatures rely on **asymmetric cryptography**, which uses a pair of mathematically related keys: ### Key Pairs 1. **Private Key**: A secret key known only to the owner. This key is used to sign transactions and must be kept secure. If someone gains access to your private key, they can impersonate you and authorize transactions on your behalf. 2. **Public Key**: A publicly shared key derived from the private key. Anyone can use this key to verify signatures created by the corresponding private key. The public key serves as your address or identity on the blockchain. These keys are mathematically linked in a special way: data signed with the private key can be verified using the public key, but the private key cannot be derived from the public key. ### The Signing Process When a user wants to make a transaction on a blockchain: 1. **Create Transaction**: The user creates a transaction specifying details like the recipient's address and the amount to transfer. 2. **Sign Transaction**: The user's wallet software uses their private key to create a unique digital signature for this specific transaction. The signature is generated by applying a cryptographic algorithm (like ECDSA - Elliptic Curve Digital Signature Algorithm) to the transaction data and the private key. 3. **Broadcast**: The transaction, along with the digital signature and the user's public key, is broadcast to the network. 4. **Verification**: Network nodes receive the transaction and use the provided public key to verify the signature. This confirms that: - The transaction was signed by the holder of the private key corresponding to the public key - The transaction data hasn't been modified since it was signed 5. 
**Accept or Reject**: If the signature is valid, the transaction is accepted for inclusion in a block. If invalid, it's rejected.

## Why Digital Signatures Matter

Digital signatures solve several critical problems in decentralized systems:

### Ownership and Authorization

In traditional systems, a bank verifies your identity when you make a transaction. In a decentralized blockchain, there's no central authority to verify identity. Digital signatures provide a mathematical proof that the person initiating a transaction owns the account and authorizes the transfer.

### Tamper Detection

If anyone tries to modify a signed transaction—even by changing a single character—the signature becomes invalid. This makes it immediately obvious that the transaction has been tampered with, and network nodes will reject it.

### Privacy with Accountability

Digital signatures allow you to prove you own an account without revealing your private key. You can transact publicly while keeping your credentials private. Every transaction is traceable to a public key, but the identity behind that key can remain pseudonymous.

### Trustless Verification

Anyone can verify a signature using the public key without needing to trust a third party. This enables the trustless, peer-to-peer nature of blockchain networks where participants don't need to know or trust each other to transact safely.

## Common Signature Schemes

Different blockchains use different cryptographic algorithms for digital signatures:

- **ECDSA (Elliptic Curve Digital Signature Algorithm)**: provides strong security with relatively small key sizes, making it efficient for blockchain applications.
- **EdDSA (Edwards-curve Digital Signature Algorithm)**: an alternative to ECDSA, offering improved performance and security properties.
- **BLS (Boneh-Lynn-Shacham) Signatures**: support signature aggregation, combining many signatures into a single compact signature that saves space when multiple signers are involved.
- **Schnorr Signatures**: enable multi-signature transactions and other complex use cases more efficiently.

## Security Considerations

The security of digital signatures depends entirely on keeping private keys secure:

- **Never share your private key**: Anyone with access to your private key can sign transactions as you
- **Use secure storage**: Store private keys in hardware wallets or secure software wallets
- **Backup carefully**: Lose your private key, and you lose access to your assets forever
- **Beware of phishing**: Never enter your private key on untrusted websites or applications

Digital signatures are the cornerstone of blockchain security, enabling trustless, verifiable transactions in a decentralized environment. Understanding how they work is essential to grasping how blockchains maintain security without central authorities.

# Transaction Ordering through Consensus (/academy/blockchain-fundamentals/04-tx-ordering-through-consensus/01-tx-ordering-through-consensus)

---
title: Transaction Ordering through Consensus
description: Explore how blockchain networks achieve agreement on transaction order through consensus mechanisms. Learn why consistent transaction ordering is critical for preventing double-spending and maintaining a unified ledger state across all network participants.
updated: 2025-10-29
authors: [martineckardt, katherineavalabs]
icon: Book
---

In a blockchain network, thousands of participants may be submitting transactions simultaneously from anywhere in the world. For the system to function correctly, all participants must agree on a single, consistent order of transactions. This agreement is achieved through **consensus mechanisms**: the protocols that enable distributed networks to coordinate and maintain a unified view of the blockchain's state.

## Why Transaction Ordering Matters

Imagine two people trying to spend the same money at the same time. This is known as the **double-spending problem**.
In traditional banking, a central server processes transactions one by one and ensures you can't spend the same dollar twice. But in a decentralized blockchain, there's no central authority to determine which transaction came first. Consider this scenario: - Alice has 10 tokens in her account - She creates two transactions almost simultaneously: - Transaction A: Send 10 tokens to Bob - Transaction B: Send 10 tokens to Charlie - Both transactions are valid on their own, but Alice doesn't have enough tokens to fulfill both The network must decide which transaction to process first. Whichever transaction is ordered first will succeed, and the second will be rejected as invalid (insufficient balance). Without a consensus mechanism to establish a definitive order, different nodes might process these transactions in different orders, leading to inconsistent ledger states across the network. This is why **transaction ordering through consensus** is fundamental to blockchain technology. ## What Is Consensus? **Consensus** is the process by which distributed network participants agree on a single, authoritative version of the truth: in this case, the order and validity of transactions. A consensus mechanism is the set of rules and procedures that enables this agreement. Key objectives of consensus mechanisms: 1. **Agreement**: All honest nodes eventually agree on the same transaction order 2. **Validity**: Only valid transactions (properly signed, sufficient balance, etc.) are included 3. **Termination**: The network reaches consensus in a reasonable time 4. **Integrity**: The agreed-upon order cannot be altered retroactively ## How Consensus Orders Transactions Different consensus mechanisms use different approaches, but they generally follow a similar pattern: ### The Basic Process 1. **Transaction Submission**: Users create and broadcast transactions to the network 2. 
**Transaction Pool**: Each node maintains a pool (or mempool) of unconfirmed transactions it has received
3. **Block Proposal**: Specific nodes (miners, validators, or leaders, depending on the consensus mechanism) are selected or compete to propose the next block of transactions
4. **Block Validation**: Other nodes verify that the proposed block contains valid transactions in a valid order
5. **Block Acceptance**: Through the consensus mechanism, the network agrees to accept or reject the proposed block
6. **Blockchain Extension**: Once accepted, the block is added to the blockchain, and its transaction order becomes permanent
7. **State Update**: All nodes update their local copy of the blockchain state based on the newly confirmed transactions

## Consensus vs. Transaction Ordering

It's important to understand the relationship between these concepts:

- **Consensus Mechanism**: The overall protocol that enables distributed agreement
- **Transaction Ordering**: The specific outcome achieved through consensus—determining the sequence in which transactions are processed

The consensus mechanism is the tool, and transaction ordering is one of the primary goals it achieves. But consensus mechanisms do more than just order transactions; they also:

- Determine who can add new blocks to the chain
- Prevent malicious actors from rewriting history
- Ensure the network remains operational even when some nodes fail or act dishonestly
- Distribute decision-making power across the network

## Why Decentralized Consensus Is Challenging

Achieving consensus in a decentralized network is remarkably difficult because:

1. **No Central Authority**: There's no trusted party to make final decisions
2. **Network Delays**: Messages take time to propagate across the network, so different nodes may see transactions in different orders
3. **Byzantine Actors**: Some participants may be malicious and try to disrupt consensus or manipulate transaction ordering for their benefit
4.
**Asynchrony**: The network doesn't operate in perfect synchronization—nodes may go offline, come back online, or experience varying network conditions # Longest Chain Consensus (/academy/blockchain-fundamentals/04-tx-ordering-through-consensus/02-longest-chain-consensus) --- title: Longest Chain Consensus description: Understand the longest chain rule and how it resolves conflicts in blockchain networks. Learn why accumulated chain weight determines the valid chain and how this mechanism prevents forks from permanently splitting the network. updated: 2025-10-29 authors: [martineckardt, katherineavalabs] icon: BookOpen --- The **Longest Chain Rule** is a fundamental principle used in many blockchain networks to resolve conflicts and maintain a single, unified ledger. It's the mechanism that ensures all nodes eventually agree on which version of the blockchain is the "correct" one, whether the network uses Proof of Work, Proof of Stake, or other mechanisms. ## The Problem: Competing Chains In a distributed network where multiple block producers are working simultaneously, it's possible for two validators to propose valid blocks at nearly the same time. When this happens: - Both validators broadcast their blocks to the network - Different nodes might receive different blocks first - The blockchain temporarily "forks" into two competing versions - Each version is valid on its own, but the network must choose one Without a clear rule for resolving this situation, the network could permanently split into different chains, destroying the consensus that makes blockchain valuable. ## What Is the Longest Chain Rule? The **Longest Chain Rule** states: **when multiple valid versions of the blockchain exist, nodes should accept the chain with the most accumulated weight as the authoritative chain.** "Longest" doesn't necessarily mean the most blocks—it means the chain representing the greatest cumulative weight according to the protocol's rules. 
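A node's fork choice under this rule can be sketched as a one-line comparison. In the sketch below, the `weight` field is a placeholder for whatever metric the protocol actually uses (work, stake, or another validation criterion):

```python
def fork_choice(chains):
    # Each candidate chain is a list of blocks; each block carries a
    # protocol-defined weight. "Longest" means greatest cumulative weight,
    # not necessarily the most blocks.
    return max(chains, key=lambda chain: sum(block["weight"] for block in chain))

# Two competing chains after a fork at block 100:
chain_a = [{"height": 100, "weight": 1},
           {"height": 101, "weight": 1},
           {"height": 102, "weight": 1}]
chain_b = [{"height": 100, "weight": 1},
           {"height": 101, "weight": 1}]

preferred = fork_choice([chain_a, chain_b])  # chain_a has more cumulative weight
```

A node following this rule abandons `chain_b` as soon as it observes the heavier `chain_a`, which is exactly the switch described in the resolution step below.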
The specific weight metric varies by consensus mechanism, but the principle remains the same. ### A Simpler Way to Think About It Think of it as the "preferred" chain—the one that the protocol considers most valid based on its consensus rules. The chain with the most accumulated weight represents the majority's view of the transaction history, whether that majority is measured by computational resources, stake, or other validation criteria. ## How the Longest Chain Rule Works Let's walk through a typical scenario: ### Step 1: The Fork 1. Block 100 is the latest confirmed block 2. Validator Alice proposes Block 101A at the same moment Validator Bob proposes Block 101B 3. Both blocks are valid and properly reference Block 100 4. Alice broadcasts Block 101A; Bob broadcasts Block 101B 5. Nodes closer to Alice add 101A to their chain; nodes closer to Bob add 101B 6. The network now has two competing chains ### Step 2: The Race Both chains continue: - Some validators build on top of Block 101A - Other validators build on top of Block 101B - Each chain is growing independently - Neither is "wrong"—they're both valid according to protocol rules ### Step 3: Resolution Eventually (usually within seconds or minutes): - A validator building on Block 101A proposes Block 102A - Now the chain ending in Block 102A is longer (has more cumulative weight) - Nodes following the 101B chain see the longer chain - According to the longest chain rule, they switch to the longer chain - Block 101B is "orphaned"—it's valid but no longer part of the main chain - Transactions in Block 101B return to the mempool to be included in future blocks ### Step 4: Convergence - All nodes now agree on the chain: Block 100 → Block 101A → Block 102A - The network has re-converged on a single version - Normal operation continues ## Why Does This Work? 
The longest chain rule works because of several key properties: ### Honest Majority Assumption If honest validators control the majority of the network's validation power: - The honest chain will grow faster on average - Eventually, the honest chain will always be longer - The network naturally converges on the honest version ### Self-Reinforcing Mechanism Once one chain gets ahead: - More validators see it as the valid chain - More validators build on top of it - It extends even further ahead - The gap becomes insurmountable for the shorter chain ### Economic Incentives Validators are incentivized to build on the longest chain: - Only blocks in the longest chain earn rewards - Building on a shorter chain wastes resources and validation opportunities - Rational validators immediately switch to the longest chain when they see it ## Implications for Transaction Finality The longest chain rule has important implications for when transactions can be considered "final": ### Confirmations A transaction's **confirmation count** is the number of blocks added after the block containing that transaction: - **0 confirmations**: Transaction is in the mempool but not yet in a block (unconfirmed) - **1 confirmation**: Transaction is in the latest block - **2 confirmations**: One block has been added after the block containing the transaction - **Multiple confirmations**: The more confirmations, the more secure the transaction ### Why Wait for Multiple Confirmations? 
The more confirmations a transaction has, the deeper it is in the blockchain: - To reverse a transaction with 1 confirmation, an attacker needs to produce 2 blocks faster than the honest network - To reverse a transaction with 6 confirmations, an attacker needs to produce 7 blocks faster than the honest network - This becomes exponentially harder with each additional confirmation **Probabilistic Finality**: In systems using the longest chain rule, finality is never 100% absolute—there's always a theoretical possibility of reversal. However, this probability becomes negligibly small after several confirmations. ## When the Longest Chain Rule Can Be Attacked ### The 51% Attack If an attacker controls more than 50% of the network's validation power: 1. **They can create a private chain**: Produce blocks in secret without broadcasting them 2. **Outpace the honest chain**: Because they control the majority, their secret chain grows faster 3. **Release the longer chain**: Suddenly broadcast their chain, which is now longer 4. **Network switches**: Nodes follow the longest chain rule and switch to the attacker's version 5. 
**Transactions reversed**: Transactions in the honest chain are reversed **Why This Rarely Happens**: - Acquiring 51% of validation power is extremely expensive (billions of dollars for major networks) - The attack would crash the cryptocurrency's value, destroying the attacker's investment - The attack is detectable and the community can respond (fork to a new chain, change consensus algorithm) ### Deep Reorgs A **deep reorganization** (reorg) occurs when a long chain is replaced by an even longer competing chain: - **Shallow reorgs** (1-2 blocks): Common and expected, happen naturally due to network delays - **Deep reorgs** (6+ blocks): Extremely rare and usually indicate an attack or major network problem - **Protection**: Waiting for more confirmations protects against all but the most resourced attackers ## Key Takeaways - The longest chain rule resolves temporary forks by having nodes accept the chain with the most accumulated weight - It ensures the network converges on a single, consistent version of transaction history - Transaction finality is probabilistic—more confirmations mean exponentially higher security - The rule works because honest validators control the majority of validation power and have incentives to follow it - 51% attacks are theoretically possible but economically impractical for large, decentralized networks The longest chain rule is a cornerstone of many blockchain consensus mechanisms, transforming a chaotic distributed system into an ordered, synchronized ledger that all participants can trust. It's a beautifully simple solution to a complex problem: letting the network reach agreement through a clear, objective metric rather than requiring explicit coordination. # Transaction Lifecycle (/academy/blockchain-fundamentals/04-tx-ordering-through-consensus/xx-tx-lifecycle) --- title: Transaction Lifecycle description: Follow a blockchain transaction from creation to finalization. 
Learn each step in the journey—from signing with your private key to achieving confirmation deep within the blockchain.
updated: 2025-10-29
authors: [martineckardt, katherineavalabs]
icon: Notebook
---

Understanding the complete lifecycle of a blockchain transaction helps demystify how decentralized systems process payments and maintain security. Let's follow a transaction from start to finish, exploring what happens at each stage.

## Creation

A transaction begins when a user decides to transfer assets. This involves specifying several key pieces of information:

**What's Included**:
- **Sender's address**: Derived from the sender's public key, identifying who is sending the assets
- **Recipient's address**: The destination address where the assets will be sent
- **Amount**: How much cryptocurrency or tokens to transfer
- **Transaction fee**: Payment offered to miners/validators for including the transaction in a block
- **Nonce**: A sequence number that prevents the same transaction from being processed multiple times
- **Additional data**: Optional fields for smart contract interactions or notes

**User Interaction**: The user typically interacts with wallet software (desktop app, browser extension, mobile app, or hardware wallet) that provides a user-friendly interface for creating the transaction. Behind the scenes, the wallet constructs the transaction data structure according to the blockchain's protocol specifications.

**Example**: Alice wants to send 10 AVAX to Bob. She opens her wallet, enters Bob's address (0x742d35Cc...), specifies 10 AVAX, and reviews the estimated transaction fee of 0.01 AVAX. She confirms she wants to proceed.

## Signing

Once the transaction is created, it must be cryptographically signed to prove authorization:

**The Signing Process**:
1. The transaction data is hashed to create a unique fingerprint of the transaction
2. This hash is signed using the sender's **private key**, producing a digital signature
3.
The signature is attached to the transaction along with the sender's public key 4. The combined package (transaction + signature + public key) is now ready for broadcasting **Why Signing Matters**: - **Authentication**: Proves the transaction was authorized by the owner of the private key - **Integrity**: Any modification to the transaction data would invalidate the signature - **Non-repudiation**: The sender cannot later deny creating the transaction **Security Note**: The private key never leaves the wallet and is never transmitted over the network. Only the signature (produced using the private key) is shared. **Example**: Alice's wallet uses her private key to sign the transaction. The wallet computes a unique signature like "0x8f3b2a..." that mathematically binds Alice's authorization to this specific transaction sending 10 AVAX to Bob. ## Broadcasting After signing, the transaction is broadcast to the peer-to-peer network: **How Broadcasting Works**: 1. The wallet sends the signed transaction to one or more nodes it's connected to 2. Each node that receives the transaction performs basic validation 3. If valid, the node forwards the transaction to its peer nodes 4. This process continues until the transaction propagates throughout the network 5. Within seconds, most nodes have received the transaction **Network Topology**: Blockchain networks use a peer-to-peer (P2P) gossip protocol where each node connects to multiple other nodes. When a node receives a new transaction, it shares it with its neighbors, who share it with their neighbors, creating exponential propagation. **What Gets Broadcast**: - The complete transaction data - The digital signature - The sender's public key **Example**: Alice's wallet connects to several Avalanche nodes and sends them her signed transaction. Within 1-2 seconds, the transaction has propagated to thousands of nodes worldwide. Each node now knows that Alice wants to send 10 AVAX to Bob. 
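The exponential propagation described above can be illustrated with a toy simulation. The fan-out of 3 and the uniform random topology below are simplifying assumptions for illustration, not how any particular network is wired:

```python
import random

def gossip_rounds(num_nodes, fanout, seed=0):
    # Toy gossip protocol: each informed node forwards the transaction to
    # `fanout` randomly chosen peers per round, until every node has seen it.
    rng = random.Random(seed)
    informed = {0}  # node 0 is the wallet's entry point into the network
    rounds = 0
    while len(informed) < num_nodes:
        for _ in list(informed):
            for _ in range(fanout):
                informed.add(rng.randrange(num_nodes))
        rounds += 1
    return rounds

# Because the set of informed nodes grows multiplicatively, a network of
# thousands of nodes is covered in a small number of rounds.
rounds_needed = gossip_rounds(num_nodes=2000, fanout=3)
```

The multiplicative growth is why real transactions reach most of a global network within seconds even though each node only talks to a handful of peers.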
## Verification Before accepting a transaction, nodes perform multiple checks: **Validation Checks**: 1. **Signature Verification**: - Use the provided public key to verify the signature - Confirm the transaction was signed by the owner of the sender's address - Reject if signature is invalid or doesn't match the public key 2. **Balance Check**: - Query the current blockchain state to check the sender's balance - Ensure the sender has enough assets to cover both the transfer amount and the transaction fee - Reject if insufficient balance 3. **Nonce Verification** (account-based systems): - Check that the transaction nonce matches the expected sequence number - Prevents replay attacks and ensures transactions are processed in order - Reject if nonce is incorrect 4. **Format Validation**: - Verify the transaction follows the correct data structure - Check that addresses are valid - Ensure the transaction isn't malformed 5. **Double-Spend Prevention**: - Check that the same input hasn't already been spent in another transaction (UTXO systems like Bitcoin) - Compare against pending transactions in the mempool **Two-Stage Validation**: - **Fast validation**: Nodes do lightweight checks when receiving a transaction via broadcast - **Full validation**: Miners/validators do comprehensive checks before including it in a block **What Happens If Verification Fails**: - The node rejects the transaction and doesn't forward it to peers - The transaction dies and doesn't enter the mempool - The sender's wallet may receive an error message - The sender must fix the issue and create a new transaction **Example**: Nodes receive Alice's transaction and verify: (1) The signature is valid using Alice's public key, (2) Alice's account has at least 10.01 AVAX (10 AVAX + 0.01 AVAX fee), (3) The nonce is 47, which is correct for Alice's next transaction, (4) The transaction format is valid. All checks pass, so the transaction enters the mempool. 
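The validation checks above can be sketched as a single function. This is a simplified sketch: the signature check is stubbed out because real nodes delegate it to a cryptographic library (such as an ECDSA implementation), and the field names are illustrative:

```python
def verify_transaction(tx, state, check_signature):
    # Returns a list of failed checks; an empty list means the tx is valid.
    failures = []
    # 1. Signature verification (delegated to a real crypto library in practice)
    if not check_signature(tx):
        failures.append("invalid signature")
    # 2. Balance check: the sender must cover the amount plus the fee
    if state["balances"].get(tx["sender"], 0) < tx["amount"] + tx["fee"]:
        failures.append("insufficient balance")
    # 3. Nonce must match the account's next expected sequence number
    if tx["nonce"] != state["nonces"].get(tx["sender"], 0):
        failures.append("bad nonce")
    # 4. Basic format validation
    if tx["amount"] <= 0 or not tx["recipient"]:
        failures.append("malformed")
    return failures

# Mirroring the example: Alice holds 10.01 AVAX and her next nonce is 47.
state = {"balances": {"alice": 10.01}, "nonces": {"alice": 47}}
tx = {"sender": "alice", "recipient": "bob", "amount": 10, "fee": 0.01, "nonce": 47}
assert verify_transaction(tx, state, check_signature=lambda t: True) == []
```

If any check fails, the node drops the transaction instead of forwarding it, which is why an invalid transaction never makes it into the mempool.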
## Mempool (Transaction Pool) After verification, valid transactions wait in a temporary holding area: **What Is the Mempool**: - Short for "memory pool"—a collection of unconfirmed transactions stored in each node's RAM - Each node maintains its own mempool (they may differ slightly due to network propagation times) - Transactions wait here until a miner or validator includes them in a block **Transaction Ordering in Mempool**: Transactions are typically prioritized by: - **Fee level**: Higher fees get priority (especially important during network congestion) - **Time received**: Older transactions may get preference among same-fee transactions - **Dependencies**: Some transactions must wait for others to be confirmed first **Mempool Dynamics**: - Size fluctuates based on network activity - During high congestion, mempools grow large and low-fee transactions may wait hours or days - Transactions can be evicted if mempool becomes too full (lowest-fee transactions removed first) - Users can often replace pending transactions by resubmitting with higher fees (Replace-By-Fee) **Example**: Alice's transaction sits in the mempool of thousands of nodes. She offered a fee of 0.01 AVAX, which is above the minimum required. The transaction will likely be included in the next block as validators select transactions to process. ## Consensus (Block Inclusion) Validators select transactions from the mempool to include in the next block: **Block Construction**: 1. The validator selects transactions from their mempool 2. Transactions are typically chosen by fee level and arrival time 3. The block has size or gas limits, so not all transactions can fit 4. 
The validator arranges transactions in order and creates a block **Consensus Process**: Different networks use different approaches: In systems with Proof of Work: - Block producers compete to solve a cryptographic puzzle - First to solve it gets to propose their block - Other nodes verify the solution and accept the block if valid - Can take several minutes or even hours per block In systems with Proof of Stake: - Validators are selected (often based on their stake) to propose blocks - Other validators attest that the block is valid - Block is accepted once enough validators attest to it - Can take seconds or minutes depending on implementation **Block Reward**: The validator who successfully adds the block typically receives: - Block reward (newly created cryptocurrency, if applicable) - All transaction fees from transactions in the block **Example**: A validator selects Alice's transaction along with hundreds of others and includes it in a block. Through the consensus process, the network validates and accepts the block. Alice's transaction now has 1 confirmation. 
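Fee-based selection from the mempool can be sketched as a greedy pass. The gas numbers and field names below are made up for illustration, and real validators use more sophisticated strategies:

```python
def build_block(mempool, gas_limit):
    # Greedy sketch: take the highest fee-per-gas transactions first,
    # until adding another transaction would exceed the block's gas limit.
    selected, gas_used = [], 0
    for tx in sorted(mempool, key=lambda t: t["fee"] / t["gas"], reverse=True):
        if gas_used + tx["gas"] <= gas_limit:
            selected.append(tx)
            gas_used += tx["gas"]
    return selected

mempool = [
    {"id": "a", "fee": 0.01, "gas": 21000},
    {"id": "b", "fee": 0.05, "gas": 21000},
    {"id": "c", "fee": 0.02, "gas": 50000},
]
# "b" pays the most per unit of gas, then "a"; "c" no longer fits.
block = build_block(mempool, gas_limit=50000)
```

This is also why fees matter during congestion: a transaction paying less per unit of gas keeps losing this comparison and waits in the mempool.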
## Finalization The final stage is when the transaction becomes permanently part of the blockchain: **Confirmation Count**: - **1 confirmation**: Transaction is in the most recent block - **2 confirmations**: One additional block has been added after the block containing the transaction - **6 confirmations**: Six blocks deep (generally considered secure in Bitcoin) - **12+ confirmations**: Very secure, reversal extremely unlikely **Why Multiple Confirmations Matter**: - The deeper a transaction is in the blockchain, the harder it is to reverse - To reverse a transaction with 6 confirmations, an attacker would need to remine 6 blocks faster than the honest network - This becomes exponentially more difficult and expensive with each confirmation **Finality Types**: * **Probabilistic Finality** (PoW, longest-chain systems): - Never 100% final, but probability of reversal approaches zero - Each additional block makes reversal exponentially more difficult - Standard in Bitcoin, Ethereum (pre-merge), and similar systems * **Absolute Finality** (Some PoS systems): - Once finalized, transactions are mathematically impossible to reverse - Used in systems with finality gadgets or BFT consensus - Provides faster economic certainty but may sacrifice some decentralization **State Update**: Once confirmed, the transaction's effects are reflected in the blockchain state: - Sender's balance decreases by the amount plus fee - Recipient's balance increases by the amount - The nonce is incremented - Smart contract state may be updated (if applicable) **Example**: A few seconds after being included in a block, Alice's transaction is considered final on Avalanche's C-Chain due to its fast-finality consensus. Bob's wallet now shows the 10 AVAX as confirmed and available to spend. Alice's balance shows 10.01 AVAX less than before (10 AVAX transferred + 0.01 AVAX fee). The transaction is complete and irreversible. ## Summary: Complete Lifecycle Visualization ``` 1. 
CREATION User creates transaction ↓ 2. SIGNING Wallet signs with private key ↓ 3. BROADCASTING Propagates across P2P network ↓ 4. VERIFICATION Nodes validate signature, balance, format ↓ 5. MEMPOOL Waits in memory pool ↓ 6. CONSENSUS Included in block by validator ↓ 7. FINALIZATION Gains confirmations, becomes irreversible ``` ## Key Takeaways - **Creation**: User specifies transaction details in wallet software - **Signing**: Private key cryptographically signs the transaction to prove authorization - **Broadcasting**: Transaction propagates through P2P network in seconds - **Verification**: Nodes check signature, balance, and format before accepting - **Mempool**: Validated transactions wait to be included in a block - **Consensus**: Miners/validators include transaction in a block through consensus mechanism - **Finalization**: Additional blocks make transaction increasingly irreversible Understanding this lifecycle helps explain why blockchain transactions take time (consensus process), why fees matter (mempool prioritization), and why multiple confirmations are recommended (finality assurance). Each step serves a crucial purpose in maintaining the security and integrity of the decentralized system. # Sybil Protection (/academy/blockchain-fundamentals/05-sybil-protection/01-sybil-protection) --- title: Sybil Protection description: Understand Sybil attacks and how blockchain networks defend against fake identities attempting to gain disproportionate influence. Learn the crucial distinction between consensus mechanisms that order transactions and Sybil defense mechanisms that prevent identity manipulation. updated: 2025-10-29 authors: [martineckardt, katherineavalabs] icon: Book --- Decentralized networks face a unique challenge: how do you prevent a single malicious actor from creating thousands of fake identities to manipulate the system? 
This threat is known as a **Sybil attack**, and defending against it is critical for maintaining the security and fairness of blockchain networks. ## What Is a Sybil Attack? A **Sybil attack** occurs when a single entity creates multiple fake identities (Sybil nodes) to gain disproportionate influence or control over a network. The name comes from a famous case study of a patient with dissociative identity disorder who had multiple personalities. In a blockchain context, imagine if: - One person could create 1,000 fake "validator" identities - Each identity appears to be a different, independent participant - Together, these fake identities control enough voting power to manipulate the network - The attacker could approve fraudulent transactions, censor legitimate transactions, or even rewrite blockchain history ### Why Sybil Attacks Are Dangerous In a decentralized network, decision-making power is supposed to be distributed among many independent participants. But if one entity can masquerade as many participants, they can: 1. **Manipulate Consensus**: Control which transactions are included in blocks or influence the order of transactions 2. **Launch 51% Attacks**: Gain majority control to double-spend or rewrite history 3. **Censor Transactions**: Prevent specific users or types of transactions from being processed 4. **Corrupt Voting**: In governance systems, vote multiple times on protocol changes 5. **Drain Resources**: Spam the network or monopolize scarce resources The fundamental problem is that in a truly open, permissionless network, creating a new identity is essentially free—you can generate a new public/private key pair instantly at no cost. ## Sybil Defense Mechanisms **Sybil defense mechanisms** (also called Sybil resistance) are strategies designed to make it expensive or impractical for an attacker to create multiple identities. The goal is to ensure that influence in the network is tied to something scarce and difficult to fake. 
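The core problem, free identity creation, can be demonstrated directly: minting a fresh pseudonymous identity costs essentially nothing. In the sketch below, a hash of random bytes stands in for real public/private key-pair generation:

```python
import hashlib
import os

def new_identity():
    # A fresh pseudonymous "address" derived from random bytes —
    # a stand-in for generating a real public/private key pair.
    return hashlib.sha256(os.urandom(32)).hexdigest()[:40]

# One attacker can mint a thousand distinct-looking identities instantly,
# which is why identity count alone can never be a basis for influence.
sybils = {new_identity() for _ in range(1000)}
```

Because each identity is indistinguishable from a legitimate new participant, the defense cannot be "spot the fakes"; it has to make each unit of influence cost something real.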
### Making Identity Creation Costly The most common approach is to attach a real-world cost to each identity or vote in the system: **Resource-Based Defenses:** 1. **Computational Resources**: Require identities to perform expensive computations (Proof of Work) 2. **Financial Resources**: Require identities to stake valuable assets (Proof of Stake) 3. **Physical Resources**: Require identities to control unique hardware or network capabilities 4. **Human Verification**: Require identities to prove they are unique human beings (Proof of Personhood) The key principle is: if creating each identity requires spending something valuable and scarce, an attacker cannot create unlimited identities without unlimited resources. ## The Crucial Difference: Consensus vs. Sybil Defense This is one of the most important distinctions in blockchain technology, and they're often confused because the same mechanisms serve both purposes. Let's clarify: ### Consensus Mechanisms **Purpose**: Achieve agreement among network participants on the state of the blockchain **What They Do**: - Determine the order of transactions - Decide which block to add to the blockchain next - Ensure all honest nodes converge on the same history - Resolve conflicts when multiple blocks are proposed simultaneously **The Question They Answer**: "What is the correct state of the blockchain that we all agree on?" ### Sybil Defense Mechanisms **Purpose**: Prevent a single entity from controlling multiple identities to gain disproportionate influence **What They Do**: - Make it expensive to create new identities - Tie influence to scarce, real-world resources - Ensure each participant's power is limited - Prevent attackers from simulating a large number of network participants **The Question They Answer**: "How do we ensure that each identity in the network represents a truly independent entity?" 
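To see why tying influence to a scarce resource works, consider a toy comparison (illustrative Python; the participant counts and stake amounts are made-up numbers): under one-identity-one-vote, Sybil identities dominate, while under stake-weighted voting, splitting the same stake across many identities gains the attacker nothing.

```python
# Honest participants: 10 identities with 100 stake each.
honest = {f"honest{i}": 100.0 for i in range(10)}
# Attacker: 1,000 Sybil identities, but only 200 total stake split among them.
sybil = {f"sybil{i}": 200.0 / 1000 for i in range(1000)}
everyone = {**honest, **sybil}

def share_by_identity_count(group, population):
    """Influence if every identity counts equally (no Sybil defense)."""
    return len(group) / len(population)

def share_by_stake(group, population):
    """Influence if votes are weighted by a scarce, costly resource (stake)."""
    return sum(group.values()) / sum(population.values())

print(round(share_by_identity_count(sybil, everyone), 2))  # 0.99: Sybils dominate a headcount vote
print(round(share_by_stake(sybil, everyone), 2))           # 0.17: splitting stake gains nothing
```

The attacker's influence under stake-weighting is fixed by their total stake, no matter how many identities they spread it across.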
## Real-World Example: Bitcoin Bitcoin uses Proof of Work: **Consensus Role**: - Miners compete to find valid blocks - The network follows the longest valid chain - This ensures all nodes agree on transaction history **Sybil Defense Role**: - Creating 100 fake miner identities doesn't give you more power - What matters is computational power (hash rate), not number of identities - To control the network, you need 51% of total hash rate, which requires massive investment in hardware and electricity - This makes Sybil attacks prohibitively expensive ## Why Both Are Necessary You need both consensus mechanisms AND Sybil defense for a secure blockchain: **Without Consensus**: Nodes wouldn't agree on the transaction order or blockchain state, leading to fragmentation and inconsistency. **Without Sybil Defense**: An attacker could create unlimited fake identities to manipulate the consensus process, undermining the entire system. Think of it this way: - **Sybil Defense** ensures the participants are legitimate and their influence is fair - **Consensus Mechanisms** use these legitimate participants to agree on the blockchain state Together, they create a system where decentralized coordination is possible without trusted authorities, and where influence is earned through real-world resources rather than granted freely to anyone who creates an account. The choice of Sybil defense mechanism fundamentally shapes the security model, accessibility, and decentralization characteristics of a blockchain network. # Smart Contracts (/academy/blockchain-fundamentals/06-smart-contracts/01-smart-contracts) --- title: Smart Contracts description: Discover how smart contracts revolutionize digital agreements by automatically executing code when conditions are met. Learn how these self-enforcing programs enable decentralized applications, eliminate intermediaries, and create trustless systems on blockchain networks. 
updated: 2025-10-29
authors: [martineckardt, katherineavalabs]
icon: Book
---

Blockchains began as systems for recording and transferring digital currency, but they evolved into something much more powerful: platforms for running code. **Smart contracts** are self-executing programs that run on blockchains, automatically enforcing agreements without intermediaries. They represent one of the most transformative applications of blockchain technology.

## What Are Smart Contracts?

A **smart contract** is a program stored on a blockchain that automatically executes when predetermined conditions are met. Think of it as a digital agreement written in code rather than legal language, where the blockchain automatically enforces the terms without requiring trust in any person or institution.

### Traditional Contracts vs. Smart Contracts

**Traditional Contract:**
- Written in legal language
- Requires trusted intermediaries (lawyers, courts, escrow services) to enforce
- Enforcement can be slow, expensive, and subjective
- Parties must trust the legal system to be fair

**Smart Contract:**
- Written in programming code
- Automatically enforced by the blockchain network
- Execution is immediate and deterministic
- No need to trust intermediaries—the code and blockchain guarantee execution

## How Smart Contracts Work

Smart contracts are deployed on blockchain platforms that support programmable logic, with Ethereum being the most well-known example (and Avalanche being our favorite).

### The Basic Process

1. **Write the Code**: A developer writes a smart contract in a programming language. The contract defines:
   - The conditions that must be met
   - The actions to take when conditions are satisfied
   - The data the contract stores and manages

2. **Deploy to Blockchain**: The compiled contract is deployed to the blockchain by submitting a special transaction.
Once deployed: - The contract gets a unique address on the blockchain - The code becomes permanent and immutable - Anyone can interact with the contract at its address 3. **Interact with the Contract**: Users send transactions to the contract's address to: - Call functions defined in the contract - Send cryptocurrency or tokens to the contract - Read data stored in the contract - Trigger the contract's automatic behaviors 4. **Automatic Execution**: When someone interacts with the contract: - Every node on the network executes the contract code - The contract's logic determines what happens (transfer funds, update data, etc.) - All nodes reach consensus on the outcome - The blockchain records the state changes ### Key Characteristics **Deterministic**: Given the same inputs and blockchain state, the contract always produces the same outputs. There's no randomness or external influence. **Autonomous**: Once deployed, the contract runs exactly as programmed without human intervention. No one can stop it or change how it behaves. **Transparent**: The contract's code and its current state are visible to everyone on the blockchain. Anyone can verify what it does and audit its behavior. **Immutable**: After deployment, the contract's code cannot be changed. This ensures that the rules can't be altered after people start using it. **Trustless**: Users don't need to trust the contract creator or any third party—they only need to trust that the code does what it says (which they can verify) and that the blockchain will execute it correctly. 
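The deterministic, replicated execution described above can be sketched in a few lines. This is a toy model in illustrative Python, not a real EVM: every node replays the same agreed-upon transaction order through identical contract logic and converges on the same state without coordinating.

```python
class ToyContract:
    """A toy 'smart contract': deterministic state transitions, no outside input."""
    def __init__(self):
        self.state = {"total": 0}

    def execute(self, tx):
        # Same transaction + same prior state => same resulting state on every node.
        op, amount = tx
        if op == "add":
            self.state["total"] += amount

# The consensus layer's job (not modeled here) is agreeing on this ordered list.
transactions = [("add", 5), ("add", 3), ("add", 10)]

# Every node independently replays the agreed order...
nodes = [ToyContract() for _ in range(4)]
for node in nodes:
    for tx in transactions:
        node.execute(tx)

# ...and all converge on the identical state.
print(all(node.state == {"total": 18} for node in nodes))  # True
```

Determinism is what makes this possible: if contract execution depended on randomness or local data, the nodes' states would diverge.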
## Real-World Applications Smart contracts enable a vast ecosystem of decentralized applications (dApps) across many domains: ### Decentralized Finance (DeFi) DeFi uses smart contracts to recreate traditional financial services without banks or brokers: - **Lending Protocols**: Automatically lend cryptocurrency and calculate interest in real-time - **Decentralized Exchanges**: Trade tokens directly with others without a centralized exchange - **Stablecoins**: Maintain stable cryptocurrency values through algorithmic smart contracts - **Yield Farming**: Automatically optimize returns across multiple platforms ### Non-Fungible Tokens (NFTs) Smart contracts power the creation, ownership, and trading of unique digital assets: - Prove ownership of digital art, collectibles, or virtual items - Automatically pay royalties to creators on secondary sales - Enable fractional ownership of high-value assets ### Supply Chain Management Track goods as they move through complex supply chains: - Record each step in a product's journey automatically - Verify authenticity and prevent counterfeits - Trigger payments automatically ### Decentralized Autonomous Organizations (DAOs) Organizations governed entirely by smart contracts: - Members vote on proposals using tokens - Approved proposals execute automatically - Treasury funds are managed transparently by code ### Gaming and Virtual Worlds Enable true ownership and interoperability of digital items: - Players truly own their in-game items as tokens - Items can be traded or used across different games - Game economies run on transparent, automated smart contracts ### Insurance Automate claims processing and payouts: - Parametric insurance that pays out automatically based on data (e.g., weather data for crop insurance) - No claims adjusters needed for straightforward cases - Faster payouts with lower overhead ## A Detailed Example: Token Swap Let's walk through how a smart contract enables a trustless cryptocurrency exchange: 
**Scenario**: Alice has 100 AVAX and wants to trade it for Bob's 2,000 USDC. Neither party trusts the other.

**Traditional Solution**: Use a centralized exchange to facilitate the trade. Both parties must trust the exchange to handle their funds honestly and not freeze accounts.

**Smart Contract Solution**:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

contract TokenSwap {
    address public alice;
    address public bob;
    uint256 public avaxAmount;
    uint256 public usdcAmount;
    bool public aliceDeposited;
    bool public bobDeposited;
    uint256 public deadline;
    IERC20 public usdcToken;

    // Alice creates the swap offer
    constructor(address _bob, uint256 _usdcAmount, address _usdcToken) payable {
        alice = msg.sender;
        bob = _bob;
        avaxAmount = msg.value; // Alice deposits AVAX
        usdcAmount = _usdcAmount;
        usdcToken = IERC20(_usdcToken);
        aliceDeposited = true;
        deadline = block.timestamp + 1 hours; // 1 hour to complete
    }

    // Bob deposits his USDC to complete the swap
    function depositUSDC() public {
        require(msg.sender == bob, "Only Bob can deposit");
        require(block.timestamp < deadline, "Swap expired");
        usdcToken.transferFrom(bob, address(this), usdcAmount);
        bobDeposited = true;
        // Automatically execute the swap
        executeSwap();
    }

    // Execute the swap once both have deposited
    function executeSwap() private {
        require(aliceDeposited && bobDeposited, "Both must deposit");
        // Send AVAX to Bob
        payable(bob).transfer(avaxAmount);
        // Send USDC to Alice
        usdcToken.transfer(alice, usdcAmount);
    }

    // If Bob doesn't deposit before deadline, Alice gets her AVAX back
    function refund() public {
        require(block.timestamp >= deadline, "Deadline not reached");
        require(!bobDeposited, "Swap already completed");
        payable(alice).transfer(avaxAmount);
    }
}
```

**How It Works**:

1. Alice creates the swap contract, specifying Bob's address and the amount of USDC she wants
2. Alice sends 100 AVAX to the contract when creating it
3. Bob has 1 hour to deposit 2,000 USDC into the contract
4.
**If Bob deposits**: The contract automatically swaps—Bob gets the AVAX, Alice gets the USDC
5. **If Bob doesn't deposit**: After the deadline, Alice can call `refund()` to get her AVAX back

This is truly trustless! Neither Alice nor Bob can cheat because:

- Alice can't take back her AVAX once Bob deposits (the swap executes automatically)
- Bob can't get the AVAX without depositing his USDC
- If Bob doesn't participate, Alice simply gets her funds back
- No centralized exchange can freeze, confiscate, or misuse the funds; they are held by a contract that can do nothing with them except execute the swap
- The swap is atomic: either both sides happen, or neither happens

## Challenges and Limitations

While powerful, smart contracts face several challenges:

### Security Vulnerabilities

**The Problem**: Bugs in smart contract code can be exploited. Because contracts are immutable, bugs can't be fixed after deployment.

**Famous Example**: The DAO hack in 2016 resulted in $50 million stolen due to a reentrancy vulnerability.

**Solution**: Rigorous testing, professional audits, formal verification, and secure coding patterns.

### Oracle Problem

**The Problem**: Smart contracts can't access external data (weather, stock prices, sports scores) on their own. They need "oracles" to feed them real-world information.

**Challenge**: Oracles reintroduce trust and potential points of failure into the system.

**Solution**: Decentralized oracle networks that aggregate data from multiple sources.

### Legal Uncertainty

**The Problem**: The legal status of smart contracts is unclear in many jurisdictions. What happens when code conflicts with law?

**Challenge**: Dispute resolution and liability are not always clear.

**Evolution**: Legal frameworks are gradually adapting to recognize smart contracts.
### Immutability Double-Edged Sword **Benefit**: No one can change the rules **Risk**: No way to fix bugs or upgrade functionality **Solutions**: Upgradeable contract patterns or designing migration paths into new contracts. ## The Future of Smart Contracts Smart contracts are still evolving, with exciting developments on the horizon: - **New Languages**: Enabling more complex logic and safer programming - **Cross-Chain Contracts**: Contracts that operate across multiple blockchains - **Formal Verification**: Mathematical proofs that contracts behave correctly - **Privacy-Preserving Contracts**: Using zero-knowledge proofs to execute contracts privately Smart contracts represent a fundamental shift in how we create and enforce agreements. By removing intermediaries and automating execution, they enable new forms of trust, coordination, and economic activity that were previously impossible. As the technology matures, smart contracts are poised to reshape industries from finance to governance to entertainment, creating a more automated, transparent, and accessible digital economy. ## Key Takeaways - Smart contracts are self-executing programs on blockchains that automatically enforce agreements - They eliminate the need for trusted intermediaries in many situations - Once deployed, they are immutable, transparent, and execute deterministically - They enable decentralized applications across finance, gaming, supply chain, and more - While powerful, they require careful design to avoid security vulnerabilities Understanding smart contracts is essential for anyone looking to build on or understand modern blockchain platforms. They're the foundation of the decentralized application ecosystem and represent one of blockchain technology's most significant innovations beyond simple currency. 
# Native Tokens (/academy/blockchain-fundamentals/07-independent-tokenomics/01-native-tokens) --- title: Native Tokens description: Learn about native tokens and their role in blockchain ecosystems. updated: 2024-09-03 authors: [0xstt] icon: Book --- A **native token** in a blockchain running the Ethereum Virtual Machine (EVM) refers to the primary digital currency or cryptocurrency native to that blockchain. Native tokens act as the foundation for value transfer and network operation within their respective ecosystems. - **Ethereum**: ETH - **Avalanche C-Chain**: AVAX - **Dexalot**: ALOT - many more... --- ### The Role of Native Tokens Native tokens serve multiple key roles within EVM-based blockchain networks, such as: - **Value Transfer**: Native tokens act as the primary currency for peer-to-peer transactions within the network, enabling value exchange between participants. - **Gas Fees**: Native tokens are used as **gas** to pay for transaction fees, contract deployments, and other network operations. This ensures that resources are allocated efficiently within the network. - **Security**: In Proof-of-Stake (PoS) networks, native tokens are often used for staking to secure the network and validate transactions. - **Governance**: In some cases, native tokens grant holders governance rights, allowing them to participate in decision-making processes that shape the blockchain’s future. --- Native tokens are the backbone of blockchain ecosystems, serving multiple roles that maintain the network's stability, security, and functionality. # ERC-20 Tokens (/academy/blockchain-fundamentals/07-independent-tokenomics/02-erc-20-tokens) --- title: ERC-20 Tokens description: Learn about ERC-20 tokens and their role in blockchain ecosystems. updated: 2024-09-03 authors: [0xstt] icon: Book --- While a blockchain has a single **native token**, the **ERC-20** token standard was developed to allow for the representation of a wide range of assets on EVM-compatible chains. 
**ERC** stands for **Ethereum Request for Comment**, and **20** is the identifier for the specific proposal that defines the standard. ERC-20 tokens are **fungible**, meaning each token is identical to another and can be exchanged on a one-to-one basis. These tokens are created and managed through smart contracts that adhere to the ERC-20 standard, ensuring interoperability between different tokens and decentralized applications (DApps). --- ### ERC-20 Token Architecture At the core of every ERC-20 token is a simple **mapping** of addresses to balances, representing the number of tokens an address holds. ```solidity abstract contract ERC20 is Context, IERC20, IERC20Metadata, IERC20Errors { mapping(address account => uint256) private _balances; //... } ``` Addresses holding ERC-20 tokens can belong to either **Externally Owned Accounts (EOAs)** or **smart contracts**. Both types of accounts can store and transfer ERC-20 tokens, making ERC-20 tokens versatile in decentralized finance (DeFi) and decentralized applications. --- ### Role of ERC-20 Tokens in Blockchain Ecosystems ERC-20 tokens play an essential role in enabling the creation of decentralized applications with various functionalities: - **Tokenized Assets**: ERC-20 tokens can represent anything from digital currencies to tokenized real-world assets. - **DeFi Protocols**: Many DeFi protocols use ERC-20 tokens for lending, staking, and liquidity pools. - **Token Sales**: ICOs (Initial Coin Offerings) and other fundraising models rely heavily on ERC-20 tokens. --- ### The ERC-20 Interface All ERC-20 tokens follow a standard interface to ensure compatibility with decentralized applications (DApps). This allows tokens to be easily transferred, approved for spending, and managed by any DApp that follows the same rules. 
```solidity
interface IERC20 {
    function name() external view returns (string memory);
    function symbol() external view returns (string memory);
    function decimals() external view returns (uint8);
    function totalSupply() external view returns (uint256);
    function balanceOf(address _owner) external view returns (uint256 balance);
    function transfer(address _to, uint256 _value) external returns (bool success);
    function transferFrom(address _from, address _to, uint256 _value) external returns (bool success);
    function approve(address _spender, uint256 _value) external returns (bool success);
    function allowance(address _owner, address _spender) external view returns (uint256 remaining);
}
```

You can review the full **ERC-20 standard** [here](https://eips.ethereum.org/EIPS/eip-20).

---

### Transferring ERC-20 Tokens

To transfer ERC-20 tokens between accounts, you use the `transfer()` function, where the sender specifies the recipient's address and the amount to be transferred. For more complex interactions, such as allowing a smart contract to transfer tokens on behalf of someone else, the ERC-20 standard includes the `approve()` and `transferFrom()` functions.

**`transfer()`**: Transfers tokens from the sender's account to another account, decreasing the sender's balance and increasing the recipient's.

**`approve()`**: Allows the owner of a token balance to approve another account (the **spender**) to withdraw up to a specified amount of tokens. The spender can withdraw from the owner's balance multiple times, as long as the total amount doesn't exceed the approved limit.

**`allowance()`**: Returns the amount that a spender is still allowed to withdraw from an owner's balance.

**`transferFrom()`**: Facilitates the transfer of tokens from one account to another on behalf of the account owner. It is typically used in scenarios where smart contracts need to execute token transfers according to the contract's logic.
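The approve/transferFrom pattern is easier to see with a toy model of the bookkeeping. This is illustrative Python, not the Solidity standard itself; the class name `ERC20Sim` and the account names are invented for the sketch.

```python
class ERC20Sim:
    """Minimal in-memory model of ERC-20 balance and allowance bookkeeping."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.allowances = {}  # (owner, spender) -> remaining approved amount

    def approve(self, owner, spender, amount):
        # Owner authorizes the spender to withdraw up to `amount` in total.
        self.allowances[(owner, spender)] = amount

    def allowance(self, owner, spender):
        return self.allowances.get((owner, spender), 0)

    def transfer_from(self, spender, owner, to, amount):
        # Spender moves tokens out of the owner's balance, within the allowance.
        if self.allowance(owner, spender) < amount or self.balances.get(owner, 0) < amount:
            raise ValueError("insufficient allowance or balance")
        self.allowances[(owner, spender)] -= amount
        self.balances[owner] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

token = ERC20Sim({"alice": 100})
token.approve("alice", "dex", 60)               # Alice lets a DEX spend up to 60
token.transfer_from("dex", "alice", "bob", 25)  # DEX moves 25 on Alice's behalf
print(token.allowance("alice", "dex"))          # 35 left
print(token.balances)                           # {'alice': 75, 'bob': 25}
```

Note that the allowance is decremented on each `transfer_from`, which is exactly why the spender can keep withdrawing until the approved total is exhausted.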
--- ERC-20 tokens revolutionized the blockchain space by enabling the tokenization of assets and simplifying the creation of decentralized applications. Their standardization ensures compatibility across platforms and DApps, making them an integral part of the broader crypto ecosystem. # Deploy and Transfer an ERC-20 Token (/academy/blockchain-fundamentals/07-independent-tokenomics/03-deploy-and-transfer-erc-20-tokens) --- title: Deploy and Transfer an ERC-20 Token description: Learn how to deploy an ERC-20 token and transfer it between accounts. updated: 2024-09-03 authors: [0xstt] icon: Terminal --- In this section, you will follow a step-by-step guide to deploy an ERC-20 token on a blockchain network and transfer it between accounts. This will provide practical experience with token creation, deployment, and handling transactions. ### Objectives: - Deploy an ERC-20 token smart contract. - Interact with your token by transferring it between different accounts. To learn how to deploy an ERC-20 token and interact with your token on a blockchain, follow [this guide](/academy/interchain-token-transfer/03-tokens/08-transfer-an-erc-20-token) to explore the deployment process using the CLI on our Avalanche L1. # Wrapped Native Tokens (/academy/blockchain-fundamentals/07-independent-tokenomics/04-wrapped-tokens) --- title: Wrapped Native Tokens description: Learn about wrapped tokens and their role in blockchain ecosystems. updated: 2024-09-03 authors: [0xstt] icon: Book --- **Wrapped tokens** are blockchain assets that represent a native cryptocurrency (e.g., AVAX, ALOT, ETH) in a tokenized form, typically conforming to the **ERC-20 token standard**. Wrapping a native token allows it to be used in decentralized applications (dApps) and protocols that require ERC-20 tokens. --- ### What Are Wrapped Tokens? Wrapped tokens are created through a process where the native cryptocurrency is locked in a smart contract, and an equivalent amount of the wrapped token is minted. 
These wrapped tokens are backed 1:1 by the underlying native asset, ensuring that the value of the wrapped token mirrors that of the original native cryptocurrency. This **ERC-20 compatibility** is crucial for enabling the native asset to interact with dApps, decentralized exchanges (DEXs), and smart contracts within the EVM ecosystem, where ERC-20 tokens are the standard. --- ### Why Are Wrapped Tokens Important? Wrapped tokens play an essential role in **interoperability** within the EVM ecosystem, facilitating seamless use across decentralized applications and protocols. Some key benefits include: - **Liquidity**: Wrapped tokens increase liquidity in DeFi by enabling users to participate in protocols that require ERC-20 tokens, even when their original asset is a native token. - **Cross-Chain Compatibility**: Wrapped tokens allow assets from one blockchain (e.g., Bitcoin) to be used on another chain, enhancing cross-chain functionality. - **DeFi Integration**: Wrapped tokens are vital in DeFi protocols such as lending, borrowing, staking, and liquidity pools, where ERC-20 tokens are the standard. --- ### Wrapped Token Contract Interface A **wrapped token contract** is typically an implementation of the ERC-20 token standard, with added functions for minting and burning tokens to facilitate the wrapping and unwrapping process. 
Here's a basic contract interface for a wrapped token:

```solidity
interface IWrappedToken {
    function deposit() external payable;
    function withdraw(uint256 amount) external;
    function totalSupply() external view returns (uint256);
    function balanceOf(address account) external view returns (uint256);
    function transfer(address recipient, uint256 amount) external returns (bool);
    function allowance(address owner, address spender) external view returns (uint256);
    function approve(address spender, uint256 amount) external returns (bool);
    function transferFrom(address sender, address recipient, uint256 amount) external returns (bool);
}
```

**`deposit()`**: This function is used to wrap native tokens. When a user calls `deposit()`, they send the native cryptocurrency (e.g., AVAX, ETH) to the contract, which then mints an equivalent amount of the wrapped token.

**`withdraw()`**: This function is used to unwrap the tokens. It burns the specified amount of wrapped tokens and returns the equivalent amount of native cryptocurrency to the user.

# Deploy and Interact with Wrapped Token (/academy/blockchain-fundamentals/07-independent-tokenomics/05-deploy-and-interact-wrapped-tokens)

---
title: Deploy and Interact with Wrapped Token
description: Learn how to deploy and interact with wrapped tokens
updated: 2024-09-03
authors: [0xstt]
icon: Terminal
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

In this section, we will deploy and interact with a wrapped token using **Forge** and **cast** commands. Forge simplifies smart contract deployment, and cast allows you to interact with deployed contracts.

---

#### 1. Write the Wrapped Token Contract

Here is a basic implementation of a wrapped token contract in Solidity:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC20/ERC20.sol";

contract WrappedToken is ERC20 {
    address public nativeTokenHolder;

    constructor() ERC20("Wrapped Token", "WTKN") {}

    function deposit() external payable {
        _mint(msg.sender, msg.value);
        nativeTokenHolder = msg.sender;
    }

    function withdraw(uint256 amount) external {
        _burn(msg.sender, amount);
        payable(msg.sender).transfer(amount);
    }
}
```

### Deploy the Wrapped Token Contract

Deploy the contract using Forge's `create` command:

```bash
forge create --rpc-url myblockchain --private-key $PK WrappedToken.sol:WrappedToken --broadcast
```

### Save the Wrapped Token Address

Save the `Deployed to` address in an environment variable.

```bash
export WRAPPED_TOKEN=
```

### Interacting with the Deployed Contract

Once the contract is deployed, you can interact with it using **cast** commands.

To deposit native tokens and mint wrapped tokens, use the following `cast send` command:

```bash
cast send $WRAPPED_TOKEN "deposit()" --value <amount> --rpc-url myblockchain --private-key $PK
```

- `<amount>`: The amount of native tokens you want to wrap (in wei).

To burn the wrapped tokens and retrieve the equivalent amount of native tokens:

```bash
cast send $WRAPPED_TOKEN "withdraw(uint256)" <amount> --rpc-url myblockchain --private-key $PK
```

- `<amount>`: The number of wrapped tokens to burn and convert back to native tokens.

You can use the following page for [**wei conversions**](https://snowtrace.io/unitconverter).

# Token Decimals (/academy/blockchain-fundamentals/07-independent-tokenomics/06-token-decimals)

---
title: Token Decimals
description: Learn about token decimals and their impact on blockchain applications.
updated: 2024-09-03
authors: [0xstt]
icon: Book
---

**Token decimals** refer to the level of precision used to define a token's smallest unit. When a token contract is created, the number of decimals is specified to determine how divisible the token will be. This can have significant implications for user experience, application development, and overall token functionality.

For example:

- **6 decimals**: A token with 6 decimals means the smallest unit is 0.000001.
- **18 decimals**: A token with 18 decimals means the smallest unit is 0.000000000000000001.

---

### Why Are Token Decimals Important?

Token decimals are critical because they determine how small a fraction of the token can be used in transactions. The choice of decimals directly affects how the token is used in various applications, including decentralized finance (DeFi), payments, and staking.
- **Greater Precision**: More decimals allow for finer divisions of the token, which is useful in systems where very small amounts of a token need to be handled (e.g., high-frequency trading, micro-payments, or rewards distribution).
- **User Perception**: The number of decimals can also influence how users perceive the value of the token. A token with fewer decimals might appear more "whole," making it seem like larger amounts are being used in transactions.

---

**Conclusion**: Token decimals are a fundamental aspect of token design. Choosing the right number of decimals depends on the token's use case, balancing the need for precision with user experience and technical requirements.

# Virtual Machines and Blockchains (/academy/blockchain-fundamentals/08-vms-and-blockchains/01-vms-and-blockchains)

---
title: Virtual Machines and Blockchains
description: Learn about Virtual Machines.
updated: 2024-05-31
authors: [martineckardt]
icon: Book
---

Virtual Machines define the behavior of a blockchain. They can serve as the execution environment for smart contracts and decentralized applications. In this section, you will learn about the Virtual Machines used in Avalanche.

## What You Will Learn

In this section, you will go through the following topics:

- **State Machine:** Understand what a State Machine is and how it works
- **Blockchain:** Learn about the role of VMs in blockchains
- **Variety of VMs:** Learn how Avalanche supports different VMs

# What is a State Machine? (/academy/blockchain-fundamentals/08-vms-and-blockchains/02-state-machine)

---
title: What is a State Machine?
description: Learn about the State Machines in blockchain systems.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

A virtual machine in the context of a blockchain system is like a decentralized computer that can execute a program in a controlled environment. A Virtual Machine (VM) defines the application-level logic of a blockchain.
In technical terms, it specifies the blockchain’s state, state transition function, transactions, and the API through which users can interact with the blockchain. When you write a VM, you don't need to concern yourself with lower-level logic like networking, consensus, and blockchain structure. Avalanche does this behind the scenes, so you can focus on building. The most popular VM in blockchain is the Ethereum Virtual Machine (EVM) used in the Ethereum blockchain and others. It enables the creation and execution of smart contracts. However, blockchain systems are not limited to the EVM and many blockchains operate with different virtual machines today. For instance, Solana and Cardano use completely different Virtual Machines. ## Soda Dispenser: A Simple Machine To better comprehend Virtual Machines, take this analogy of a simple, everyday machine: a soda dispenser. For optimal functionality, this machine must **consistently monitor its state**. In this instance, the state may represent the current balance, total revenue, and the number of cans available per brand. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/28-njnfTjsYzVjFGF671wQLlnRwFU9GoV.png) This dispenser permits several **operations**, such as: - Inserting coins - Selecting a soda flavor User operations trigger **state transitions**. For example, when a user inserts a coin, the balance increases. Selecting a soda brand leads to a decrease of the balance by the soda's price and reduces the quantity of that specific brand by one. Additionally, certain logic might be incorporated to dictate various operational outcomes based on the current state. For example, the machine checks if the balance is sufficient when a user chooses a soda brand. As a result, the outcome could be different in cases where the balance is adequate compared to instances where it isn't. 
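The dispenser analogy maps directly onto code: a state, a set of operations, and transitions whose outcome depends on the current state. Here is a minimal sketch in illustrative Python (the 50-cent price and the flavor stock are assumptions made for the example):

```python
class SodaDispenser:
    """Toy state machine: state (balance, revenue, stock) plus operations."""
    PRICE = 50  # cents per can (assumed for illustration)

    def __init__(self, stock):
        self.balance = 0
        self.revenue = 0
        self.stock = dict(stock)  # flavor -> cans available

    def insert_coin(self, cents):
        # State transition: balance increases.
        self.balance += cents

    def select(self, flavor):
        # The outcome depends on the current state: balance and stock.
        if self.balance < self.PRICE:
            return "insufficient balance"
        if self.stock.get(flavor, 0) == 0:
            return "sold out"
        self.balance -= self.PRICE
        self.revenue += self.PRICE
        self.stock[flavor] -= 1
        return flavor

machine = SodaDispenser({"lemonade": 8})
machine.insert_coin(25)
print(machine.select("lemonade"))  # 'insufficient balance'
machine.insert_coin(25)
print(machine.select("lemonade"))  # 'lemonade'; stock drops to 7, balance back to 0
```

The same sequence of operations applied to any machine built from this blueprint always produces the same final state, which is the reproducibility property discussed next.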
## Advantages of VMs Implementing state machines comes with several advantages: **Clear Interface:** The range of operations provides clarity on how to interact with it. Once familiarized with the interface, you can interact with all soda machines (following the same blueprint) in the same manner. **Reproducibility:** With a Virtual Machine blueprint, one can create multiple identical instances. These behave consistently, implying that if you conduct the same operations in the same sequence on two different machines, the state will remain identical. # Blockchains (/academy/blockchain-fundamentals/08-vms-and-blockchains/03-blockchains) --- title: Blockchains description: Learn about how VMs work in blockchain. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- Let’s look at how VMs work in blockchain. Each validator operates an instance of our hypothetical soda dispenser. So, they have their own instance of a machine running on their server. They do this so they do not have to trust a single party with the operation of a soda dispenser, and to make it easy for everyone to verify the outcome of operations. When a user wishes to execute an operation on this distributed soda dispenser, they transmit the transaction to the network. The validators then reach consensus on the sequence in which transactions are carried out. In Avalanche, validators use Avalanche Consensus, which we talked about earlier. Next, each validator executes the operation on their instance of the VM independently. Because each instance of our Virtual Machine behaves identically, validators maintain a uniform view of the machine’s state, such as the balance and the number of available sodas per flavor. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/28-njnfTjsYzVjFGF671wQLlnRwFU9GoV.png)
Again, because each instance behaves identically, validators maintain a uniform view of the system's state, encompassing balance and available soda quantities. Let’s take the example further. Assume we have 100 validators in our Soda Dispenser Avalanche L1. I submit multiple operations to the validators: - Insert a quarter - A little while later, I Insert another quarter - Choose lemonade Each of the validators executes the operations on their own machine. After the first quarter the state of the machine states that the balance is 25 cents. After the second operation the balance is 50 cents. After choosing lemonade, the balance goes back to zero and the amount of available lemonades decreases from 8 to 7 on each instance running on each validators server. This is the fundamental principle of Blockchains in Avalanche: A collective of validators operate identical virtual machines, come to consensus on the order of transactions, execute them, and have a uniform view on the machine's current state. # Variety of Virtual Machines (/academy/blockchain-fundamentals/08-vms-and-blockchains/04-variety-of-vm) --- title: Variety of Virtual Machines description: Learn about different types of Virtual Machines. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- We can take this concept even further: The same network of validators can run two or more blockchains. These could operate different VMs, like a soda dispenser and a candy dispenser, or identical VMs, like two soda dispensers. When a user wants to issue an operation, they specify which blockchain to interact with. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/30-dZcAyNCtqUOXo4LREKAsLTERgAK4Yp.png) You can think of a Virtual Machine (VM) as a blueprint for a blockchain, where the same VM can create multiple blockchains, each adhering to the same rules but remaining logically independent from the others. 
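The blueprint idea can be sketched in code: one VM definition, multiple blockchain instances, each with fully independent state. This is a hypothetical illustration reusing the soda dispenser analogy:

```python
# One VM "blueprint" (a class) can instantiate several blockchains.
# All instances follow the same rules but keep independent state.

class SodaDispenserVM:
    """Blueprint: defines the rules every instance follows."""
    def __init__(self):
        self.credit = {}  # per-user credit, in cents

    def add_credit(self, user: str, cents: int):
        self.credit[user] = self.credit.get(user, 0) + cents

# Two chains created from the same blueprint
chain_a = SodaDispenserVM()
chain_b = SodaDispenserVM()

# Crediting a user on chain A leaves chain B untouched
chain_a.add_credit("alice", 100)
print(chain_a.credit.get("alice", 0))  # prints: 100
print(chain_b.credit.get("alice", 0))  # prints: 0
```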
Even though two blockchains may use the same virtual machine, they each have an independent state. For example, a user might have a credit of 1 dollar in one soda dispenser while having no credit at all in the other. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/avalanche-fundamentals/31-QO8n2upUS0rKCExySCPex6Yju50Sln.png) The number of blockchains a validator can validate is primarily limited by the additional computational resources required to operate the virtual machines of all the chains. # Regulation (/academy/blockchain-fundamentals/xx-regulation/01-regulation) --- title: Regulation description: TBD updated: 2024-05-31 authors: [martineckardt] icon: Book --- # Proof of Work (/academy/blockchain-fundamentals/xx-tbd-sections/02-proof-of-work) --- title: Proof of Work description: TBD updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- TBD # Proof of Stake (/academy/blockchain-fundamentals/xx-tbd-sections/03-proof-of-stake) --- title: Proof of Stake description: TBD updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- - Carrots and Sticks ## Staking Rewards - Carrot ## Slashing - Stick # Signature Schemes (/academy/blockchain-fundamentals/xx-tbd-sections/xx-signature-schemes) --- title: Signature Schemes description: TBD updated: 2024-05-31 authors: [martineckardt] icon: Notebook --- While transactions on ledgers in the analog world were often authorized by handwritten signatures, that approach does not work for blockchains. To bring the concept of a ledger into the digital age, we leverage cryptography instead.
import SignatureSchemes from "@/content/common/cryptography/signature-schemes.mdx" import defaultMdxComponents from "fumadocs-ui/mdx"; # Origin of the EVM (/academy/customizing-evm/02-intro-to-evm/01-origin-of-evm) --- title: Origin of the EVM description: Learn about the origin of the Ethereum Virtual Machine. updated: 2024-09-27 authors: [ashucoder9, owenwahlgren] icon: BookOpen --- The **Ethereum Virtual Machine (EVM)** is a fundamental component of Ethereum’s infrastructure, responsible for executing smart contracts across the network. The origins of the EVM can be traced back to the creation of the Ethereum blockchain itself, proposed in a whitepaper by **Vitalik Buterin** in late 2013. Buterin and the founding Ethereum team envisioned a platform where developers could build **decentralized applications (dApps)** on a blockchain. To accomplish this, they needed a way to execute arbitrary, complex computations in a secure and decentralized manner. This is how the concept of the EVM was born. The EVM, an integral part of the Ethereum ecosystem, operates as a **quasi-Turing complete machine**. This means it can run nearly any algorithm, given enough resources. It's isolated from the main network, providing a sandboxed environment for smart contract execution. Following the publication of the whitepaper, the Ethereum project moved into development. The first live version of the Ethereum network, including the EVM, launched on **July 30, 2015**. The Ethereum blockchain was designed with the EVM at the protocol level, enabling users to create and execute smart contracts. These smart contracts are essentially programs that autonomously execute the terms of an agreement. They are written in high-level languages like **Solidity** or **Vyper**, which are then compiled down to EVM bytecode for execution on the EVM. 
Since its creation, the EVM has become a crucial component of the Ethereum blockchain and has set a standard for other blockchain platforms that have adopted similar models for executing smart contracts. The advent of the EVM has undoubtedly revolutionized the blockchain world by introducing the concept of **programmable blockchains**. # Accounts, Keys, and Addresses (/academy/customizing-evm/02-intro-to-evm/02-accounts-keys-address) --- title: Accounts, Keys, and Addresses description: Learn about Accounts, Keys, and Addresses in EVM. updated: 2024-09-27 authors: [ashucoder9, owenwahlgren] icon: BookOpen --- ## Accounts In the EVM, there are two types of accounts: 1. **Externally Owned Accounts (EOAs)**: These accounts are controlled by private keys and do not have associated code. They can send transactions by creating and signing them with their private keys. If you're an end user of Ethereum, you're likely using an EOA. 2. **Contract Accounts**: These accounts have associated code (smart contracts). Contract accounts can't initiate transactions on their own; instead, they only perform actions when instructed by an EOA. This could be as simple as a token transfer or a function call within a smart contract. ## Public and Private Keys In Ethereum, as in many blockchain platforms, the **Elliptic Curve Digital Signature Algorithm (ECDSA)** is used to generate a pair of keys: a public key and a private key. - The **private key** is a 32-byte number generated randomly. - The **public key** is derived from the private key using elliptic curve cryptography. It is 64 bytes long, made up of two concatenated 32-byte coordinates. ## Addresses An EVM address is derived from the public key through the following steps: 1. **Keccak-256 Hashing**: The public key is first passed through the Keccak-256 hashing function (the version of SHA-3 used by Ethereum). Keccak-256 outputs a 32-byte hash, written as 64 hexadecimal characters, for example: `025ad33e2479a53b02dc0be66eb0ce272fc0f4358c57a8f0a442410c3d831a` 2.
**Rightmost 20 bytes**: The resulting string is truncated to keep only the last 20 bytes. In hexadecimal, this means retaining the rightmost 40 hex digits (since two hex digits represent one byte), like so: `e66eb0ce272fc0f4358c57a8f0a442410c3d831a` 3. **Adding '0x'**: Finally, "0x" is prefixed to the address for a total of 42 characters. The "0x" indicates that the following characters represent a hexadecimal number, a common convention in Ethereum: `0xe66eb0ce272fc0f4358c57a8f0a442410c3d831a` This process ensures that each Ethereum address uniquely corresponds to a public key. Since the address is a truncated hash of the public key, it's impossible to reverse-engineer the public key from the address. # Transactions and Blocks (/academy/customizing-evm/02-intro-to-evm/03-transactons-and-blocks) --- title: Transactions and Blocks description: Learn about another fundamental aspect of the Ethereum ecosystem - Transactions and Blocks. updated: 2024-09-27 authors: [ashucoder9, owenwahlgren] icon: BookOpen --- ## Transactions In the EVM, a transaction is the smallest unit of work that can be included in a block. It represents a **state transition**, modifying account data and transferring Ether across the network. Transactions can either transfer Ether or invoke smart contracts. Each transaction includes the following components: - **Nonce**: A unique value set by the sender to ensure the transaction is processed only once, preventing replay attacks. - **Gas Price**: The amount the sender is willing to pay per unit of gas, typically measured in `nanoAvax` (1 `nAvax` = 1/(10^9) Avax). It consists of three parts: - **Base Gas Fee**: The minimum `nAvax` required per unit of gas. - **Maximum Priority Gas Fee**: Additional `nAvax` the sender is willing to pay on top of the base fee. - **Maximum Total Gas Fee**: The maximum `nAvax` the sender is willing to spend per unit of gas. 
If the sum of the base gas fee and the priority gas fee exceeds this amount, the gas price paid will be capped at the maximum total gas fee. - **Gas Limit**: The maximum amount of gas units the sender is willing to pay for executing the transaction. If this limit is reached, the transaction fails, preventing indefinite execution. - **To**: The recipient's Ethereum address or the target smart contract. - **V, R, S**: Cryptographic data used to create the sender's signature. ## Blocks A block in the EVM is a bundle of validated transactions, grouped together and linked to the previous block. Each block contains: - **Block Number**: The index of the block, representing its position in a zero-indexed linked list. - **Timestamp**: The time the block was mined. - **Transactions Root**: The Merkle root of all transactions within the block. - **Receipts Root**: The Merkle root of all transaction receipts. - **State Root**: A hash of the entire Ethereum state after executing all transactions in the block. - **Parent Hash**: The hash of the previous (parent) block. - **Gas Limit**: The maximum gas all transactions in the block can consume. - **Gas Used**: The total gas consumed by all transactions in the block. - **Extra Data**: An optional field for including arbitrary data up to 32 bytes. - **Validator**: The address of the node that proposed the block. # Different Versions of EVM (/academy/customizing-evm/02-intro-to-evm/05-different-evm-versions) --- title: Different Versions of EVM description: Learn about the different versions of the Ethereum Virtual Machine in the Avalanche ecosystem. updated: 2024-09-27 authors: [ashucoder9, owenwahlgren] icon: BookOpen --- ## Geth Geth (or [go-ethereum](https://github.com/ethereum/go-ethereum)) is the official implementation of an Ethereum client, written in the Go programming language. It is one of the original and most widely used Ethereum clients today. 
Geth handles transactions, deploys and executes smart contracts, and contains the Ethereum Virtual Machine. ## Coreth [Coreth](https://github.com/ava-labs/coreth) is a fork of Geth maintained by Ava Labs. It implements the EVM for the C-Chain and has been adapted to work with Avalanche Consensus. ## Subnet-EVM [Subnet-EVM](https://github.com/ava-labs/subnet-evm) is a fork of Coreth designed to facilitate launching customized EVM-based blockchains on an Avalanche L1. It differs from Coreth in the following ways: - **Configurable Fees and Gas Limits**: Customizable fees and gas limits can be set in the genesis file. - **Unified Subnet-EVM Hardfork**: All Avalanche hardforks are merged into a single "Subnet-EVM" hardfork. - **Atomic Transactions and Shared Memory Removed**: Support for atomic transactions and shared memory has been removed. - **Multicoin Contracts and State Removed**: Multicoin contracts and state tracking are no longer supported. ## Precompile-EVM Precompile-EVM allows precompiles to be registered with Subnet-EVM without needing to fork the Subnet-EVM codebase. This simplifies common customizations to Subnet-EVM, making them more accessible and easier to maintain. It also streamlines updates for Subnet-EVM. # Set Up Development Environment (/academy/customizing-evm/03-development-env-setup/00-intro) --- title: Set Up Development Environment description: Start by setting up your development environment. updated: 2024-05-31 authors: [ashucoder9] icon: Book --- It's time to set up the development environment necessary for customizing the EVM! While the primary focus of this course is teaching you about precompiles, it's also crucial to configure a development environment that ensures a smooth and efficient workflow. 
In this section, we will cover the following: - Creating a GitHub repository for our customized EVM - Setting up a default test account in your Core Wallet - Exploring different types of development setups - Choosing the setup for developing precompiles Let's get started! 🚀 # Create Codespaces (/academy/customizing-evm/03-development-env-setup/02-create-codespaces) --- title: Create Codespaces description: Learn how to create GitHub Codespaces. updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- ## Open Precompile-EVM Repository in a GitHub Codespace Open a Codespace on the [Precompile-EVM](https://github.com/ava-labs/precompile-evm/tree/avalanche-academy-start) repository on the `avalanche-academy-start` branch by clicking the button below: [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://github.com/codespaces/new?hide_repo_select=true&ref=avalanche-academy-start&repo=605560683&machine=standardLinux32gb) Next, a window opens and the Codespace is built. This may take a few minutes. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/customizing-evm/11-Z5CM89rf9qvVIENBpZF2wlKt3bALSq.png) Once the build is complete, you can access the Codespace through the browser IDE. ## Closing Codespaces Codespaces time out after 30 minutes of inactivity by default, so if you are not too worried about the 30-hour-per-month limit, you can simply close the window or tab. Alternatively, you can open the command prompt (Cmd + Shift + P) and issue the command: **Stop Current Codespace**. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/customizing-evm/12-rESxJrPr97KL08SWhP30BMEjHN7UAS.png) ## Reopen Codespaces You can see your Codespaces at https://github.com/codespaces/, where you can stop, reopen, and delete them.
![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/customizing-evm/13-BHHk6FjT3nu1ATITEUdeJhezVCpd6G.png) # Codespace in VS Code (/academy/customizing-evm/03-development-env-setup/03-codespace-in-vscode) --- title: Codespace in VS Code description: Learn how to set up GitHub Codespaces. updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- ## Switch from Browser to VS Code You can switch at any time from the browser IDE to Visual Studio Code: ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/customizing-evm/14-aVKH0qkDNUpTwxRaB1qSFNuSk55Mfu.png) The first time you switch, you will be asked to install the [Codespaces extension](https://marketplace.visualstudio.com/items?itemName=GitHub.codespaces) and connect VS Code to your GitHub account, if it is not already connected. # Your Own EVM Blockchain (/academy/customizing-evm/04-your-evm-blockchain/00-intro) --- title: Your Own EVM Blockchain description: Learn how to spin up your own EVM blockchain. updated: 2024-09-27 authors: [ashucoder9, owenwahlgren] icon: Book --- In this part of the course, we'll explore how to run your own Avalanche L1 with a custom EVM. Running your own EVM allows you to address specific use cases, showcasing one of the key advantages of multi-chain systems over monolithic blockchains. ## Topics We’ll cover the following topics: - **Avalanche CLI**: Learn how to configure and launch an Avalanche L1 using the Avalanche CLI. - **Token Transfer**: Explore how to perform token transfers with Foundry. This hands-on exercise will solidify your knowledge and allow you to observe how customizations impact EVM performance. ## Learning Objective By the end of this section, you’ll have the skills to effectively run your own Avalanche L1 with a custom EVM blockchain, empowering you to start building your blockchain projects!
# Avalanche CLI (/academy/customizing-evm/04-your-evm-blockchain/01-avalanche-cli) --- title: Avalanche CLI description: Learn about the Avalanche Command-Line Interface tooling. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- ## What is the Avalanche CLI? The Avalanche CLI is a command-line tool that gives developers comprehensive access to Avalanche's functionalities, making it easier to build and test independent blockchains. Each Avalanche network includes the Primary Network, which consists of the Contract (C), Platform (P), and Exchange (X) chains. It's important to note that "Primary Network" refers to a special Avalanche L1 rather than a distinct, standalone network. Your local network operates independently from both the Mainnet and Fuji Testnet. You can even run an Avalanche L1 offline. Local Avalanche networks support, but are not limited to, the following commands: - **Start and Stop a Network**: Easily start or stop a local network. - **Health Check**: Check the health status of each node in the network. - **Create Blockchains**: Spin up new blockchains with custom parameters. Managing a local network with multiple nodes can be complex, but the Avalanche CLI simplifies the process with user-friendly commands. ## Usage The Precompile-EVM repository comes preloaded with the Avalanche CLI and Foundry, so you don’t need to install additional tools when working within Codespaces. Just use the terminal in your Codespace to run Avalanche CLI commands and start building. # Create Your Blockchain (/academy/customizing-evm/04-your-evm-blockchain/02-create-your-blockchain) --- title: Create Your Blockchain description: Learn how to use Avalanche-CLI to spin up your own EVM blockchain. 
updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import CreateDefaultBlockchain from "@/content/common/avalanche-starter-kit/create-default-blockchain.mdx"; import defaultMdxComponents from "fumadocs-ui/mdx"; # Sending Tokens (/academy/customizing-evm/04-your-evm-blockchain/03-sending-tokens) --- title: Sending Tokens description: Learn how to send tokens on your EVM blockchain. updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- To ensure that the blockchain is up and running, let's perform a simple token transfer to a random address, `0x321f6B73b6dFdE5C73731C39Fd9C89c7788D5EBc`, using Foundry: ```bash cast send --rpc-url myblockchain --private-key $PK 0x321f6B73b6dFdE5C73731C39Fd9C89c7788D5EBc --value 1ether ``` To verify that the transaction was successful, check the balance of the address with the following command: ```bash cast balance --rpc-url myblockchain 0x321f6B73b6dFdE5C73731C39Fd9C89c7788D5EBc ``` ```bash 1000000000000000000 ``` Note that `cast balance` reports the balance in wei: the address `0x321f6B73b6dFdE5C73731C39Fd9C89c7788D5EBc` now holds 10^18 wei, i.e., 1 native token. Congratulations! You have successfully sent tokens on your EVM blockchain. 🎉 # EVM Configuration (/academy/customizing-evm/05-genesis-configuration/00-vm-configuration) --- title: EVM Configuration description: Learn about Virtual Machine configuration. updated: 2024-09-27 authors: [ashucoder9] icon: Book --- In this part of the course, we'll explore how to optimize your EVM through chain configuration, tailoring it to fit specific use cases. Customizing EVM configurations is a key advantage of multi-chain systems. ## Exercise In this section, you won’t need to write any Go code. Instead, we’ll adjust values in the JSON file of the genesis block. ## Topics We will cover the following topics: - **Genesis Block**: The foundation of any blockchain. We’ll review its components and how to customize its properties.
- **Fee Configuration**: Learn how to balance validator incentives with user affordability. This is crucial for managing network congestion and discouraging wasteful transactions on public networks. - **Initial Token Allocation**: Understand how to define the initial token distribution in your custom EVM, setting your network up for success. - **Preinstalled Precompiles**: Discover how to configure preinstalled precompiles to leverage features like restricting who can issue transactions or deploy contracts on your chain. Finally, we’ll demonstrate how to run a local EVM with a custom Genesis Block. This exercise will solidify your understanding and allow you to observe the performance impact of your EVM customizations. ## Learning Objective By the end of this section, you'll be able to effectively customize the EVM in Avalanche, unlocking new possibilities for your blockchain projects. Let’s dive into EVM customization and chain configuration together! # Genesis Block (/academy/customizing-evm/05-genesis-configuration/01-genesis-block) --- title: Genesis Block description: Learn about the Genesis Block. updated: 2024-09-27 authors: [ashucoder9] icon: BookOpen --- ## Background Each blockchain begins with a genesis state when it is created. For instance, the Ethereum mainnet genesis block included the addresses and balances from the Ethereum pre-sale, marking the initial distribution of ether. For Subnet-EVM and Precompile-EVM, the genesis block contains additional parameters that allow us to configure the behavior of our customized EVM to meet specific requirements. Since each blockchain has its own genesis block, you can create two blockchains with the same VM but different genesis blocks. 
## Format Here’s an example of a genesis block: ```json { "config": { "chainId": 43214, "homesteadBlock": 0, "eip150Block": 0, "eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0", "eip155Block": 0, "eip158Block": 0, "byzantiumBlock": 0, "constantinopleBlock": 0, "petersburgBlock": 0, "istanbulBlock": 0, "muirGlacierBlock": 0, "subnetEVMTimestamp": 0, "feeConfig": { "gasLimit": 15000000, "minBaseFee": 25000000000, "targetGas": 15000000, "baseFeeChangeDenominator": 36, "minBlockGasCost": 0, "maxBlockGasCost": 1000000, "targetBlockRate": 2, "blockGasCostStep": 200000 }, "allowFeeRecipients": false, "txAllowListConfig": { "blockTimestamp": 0, "adminAddresses": [ "0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC" ] } }, "alloc": { "8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": { "balance": "0x295BE96E64066972000000" } }, "nonce": "0x0", "timestamp": "0x0", "extraData": "0x00", "gasLimit": "0xe4e1c0", "difficulty": "0x0", "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "coinbase": "0x0000000000000000000000000000000000000000", "number": "0x0", "gasUsed": "0x0", "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000" } ``` We will explore the relevant configurable parameters in the upcoming activities. Parameters such as `eip150Block` and `byzantiumBlock` correspond to Ethereum hardfork activations; they are not relevant for most use cases and can typically be left at `0`. # Create Your Genesis File (/academy/customizing-evm/05-genesis-configuration/02-create-your-genesis) --- title: Create Your Genesis File description: Learn how to create your own genesis file.
updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Callout } from 'fumadocs-ui/components/callout'; ## Create File In your Precompile-EVM project, create a file called `evm-configuration-genesis.json` in the directory `tests/precompile/genesis/` and open it. You can use the command below as a shortcut to open the file in VSCode: ```bash code ./tests/precompile/genesis/evm-configuration-genesis.json ``` ## Fill with template Paste the following template in the new file: ```json { "config": { "chainId": , "homesteadBlock": 0, "eip150Block": 0, "eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0", "eip155Block": 0, "eip158Block": 0, "byzantiumBlock": 0, "constantinopleBlock": 0, "petersburgBlock": 0, "istanbulBlock": 0, "muirGlacierBlock": 0, "subnetEVMTimestamp": 0, "feeConfig": { "gasLimit": , "minBaseFee": , "targetGas": , "baseFeeChangeDenominator": 36, "minBlockGasCost": , "maxBlockGasCost": , "targetBlockRate": , "blockGasCostStep": }, "allowFeeRecipients": false }, "alloc": { "": { "balance": "" } }, "nonce": "0x0", "timestamp": "0x0", "extraData": "0x00", "gasLimit": , "difficulty": "0x0", "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "coinbase": "0x0000000000000000000000000000000000000000", "number": "0x0", "gasUsed": "0x0", "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000" } ``` <Callout type="warn"> If you decide to set your own `gasLimit`, set all `gasLimit` keys to the same value! If you set the `gasLimit` keys to different values, you will still be able to deploy a blockchain, but it will halt during initialization. </Callout> # Setup Your ChainID (/academy/customizing-evm/05-genesis-configuration/03-setup-chainid) --- title: Setup Your ChainID description: Learn how to set up the ChainID for your own blockchain. updated: 2024-09-27 authors: [ashucoder9] icon: Terminal --- ## What is ChainID?
The `ChainID` of an EVM blockchain is a unique identifier that distinguishes Ethereum chains from one another. Introduced in Ethereum Improvement Proposal (EIP) 155, this mechanism prevents transaction replay attacks, which could occur when the same transaction is valid across multiple chains. ## Setting the ChainID To set the `ChainID` for your blockchain, set the `chainId` field inside the `config` object of your genesis JSON file to `99999`. If you wish to use a custom `ChainID`, check [chainlist.org](https://chainlist.org) to ensure your proposed `ChainID` is not in use by any other network. Be sure to cross-check with Testnets as well! ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/customizing-evm/28-LNcOo16PhvaDVIb7crVasGVKvs8kR6.png) # Gas Fees and Gas Limit (/academy/customizing-evm/05-genesis-configuration/04-gas-fees-and-limit) --- title: Gas Fees and Gas Limit description: Learn about Gas Fees and Gas Limit in the context of the EVM. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- ## Background In the context of the EVM, gas is a unit that measures the computational effort required to execute specific operations. Each operation performed by a contract or transaction on an EVM chain consumes a certain number of gas units based on its complexity. Operations that require more computational resources cost more gas. The EVM calculates the required gas units automatically, and developers are encouraged to optimize their contract code to reduce gas consumption. The cost of executing a transaction is determined by the gas units consumed and the gas price, calculated as follows: ``` Transaction Cost = Gas Units * Gas Price ``` For EVM Avalanche L1s, gas payment can be configured to better suit the use case of the Avalanche L1. This means that the Avalanche L1 design can decide whether the gas fees are burned, paid to incentivize validators, or used for any other custom behavior.
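To make the cost formula concrete, here is a quick calculation. The 21,000-gas figure is the standard cost of a plain native-token transfer; the 25 nAVAX gas price is an assumption chosen for this example:

```python
# Transaction Cost = Gas Units * Gas Price
# Illustrative numbers: a plain native-token transfer consumes
# 21,000 gas units; assume a gas price of 25 nAVAX per gas unit.

gas_units = 21_000
gas_price_wei = 25 * 10**9          # 25 nAVAX, expressed in wei

cost_wei = gas_units * gas_price_wei
cost_native = cost_wei / 10**18     # the native token has 18 decimals

print(cost_wei)     # prints: 525000000000000
print(cost_native)  # prints: 0.000525
```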
## Purpose The primary goal of setting and enforcing computational costs via gas is to prevent spam and abuse on the network. By requiring users to pay for each computational step, the network deters malicious actors from launching denial-of-service (DoS) attacks, which involve flooding the network with spurious transactions. ## Gas Price and Gas Limit Each transaction specifies the gas price and gas limit: **`Gas Price`:** The gas price is the amount of the Avalanche L1's native token that the sender is willing to spend per unit of gas, typically denoted in `gwei` (1 native token = 1,000,000,000 `gwei`). A well-designed gas mechanism adapts the gas price according to network activity to protect the network from spam. **`Gas Limit`:** The gas limit is the maximum amount of gas the sender is willing to use for the transaction. It was introduced to prevent infinite loops in contract execution. In a Turing-complete language like Solidity (the main programming language in the EVM), it is possible to write a contract with an infinite loop, either accidentally or intentionally. While an infinite loop might be a nuisance in traditional computing, it could cause significant issues in a decentralized blockchain by causing the network to hang as it attempts to process a never-ending transaction. The gas limit prevents this by halting execution once the gas consumed reaches the limit. If a transaction exceeds the gas limit, it fails, but the fee amounting to the gas limit is still paid by the sender. # Gas Fees Configuration (/academy/customizing-evm/05-genesis-configuration/05-gas-fee-configuration) --- title: Gas Fees Configuration description: Learn how to configure gas fees in your EVM blockchain. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- ## Configuration Format The fees are configured in the `chainConfig` in the `feeConfig` field: ```json { "config": { // ...
"feeConfig": { // [!code highlight] "gasLimit": 15000000, "minBaseFee": 25000000000, "targetGas": 15000000, "baseFeeChangeDenominator": 36, "minBlockGasCost": 0, "maxBlockGasCost": 1000000, "targetBlockRate": 2, "blockGasCostStep": 200000 }, "allowFeeRecipients": false }, "alloc": { // ... }, // ... "gasLimit": "0xe4e1c0", // ... } ``` ## Gas Configuration Parameters ### `gasLimit` Sets the maximum amount of gas consumed per block. This restriction caps the computational capacity of a single block and thereby limits the maximum gas usage allowed for any single transaction. For reference, the C-Chain value is set to 15,000,000. You might notice that the `gasLimit` field appears twice. This is because Avalanche introduced its own fee configuration under the `feeConfig` key while maintaining compatibility with the standard EVM configuration. Ensure that both fields represent the same value: one in decimal (`15000000`), one in hexadecimal (`"0xe4e1c0"`). ### `targetBlockRate` Specifies the target rate of block production in seconds. For instance, a target of 2 aims to produce a block every 2 seconds. If blocks are produced faster than this rate, it signals that more blocks are being issued to the network than anticipated, leading to an increase in base fees. For C-Chain, this value is set to 2. ### `minBaseFee` Establishes a lower bound on the EIP-1559 base fee for a block. This minimum base fee effectively sets the minimum gas price for any transaction included in that block. ### `targetGas` Indicates the targeted amount of gas (including block gas cost) to be consumed within a rolling 10-second window. The dynamic fee algorithm adjusts the base fee proportionally based on how actual network activity compares to this target. If network activity exceeds the `targetGas`, the base fee is increased accordingly. ### `baseFeeChangeDenominator` Determines how much to adjust the base fee based on the difference between actual and target utilization.
A larger denominator results in a slower-changing base fee, while a smaller denominator allows for quicker adjustments. For the C-Chain, this value is set to 36, meaning the base fee can change by at most 1/36 of the parent block's base fee per adjustment.

### `minBlockGasCost`

Sets the minimum amount of gas charged for the production of a block. On the C-Chain, this value is set to 0.

### `maxBlockGasCost`

Specifies the maximum amount of gas charged for the production of a block.

### `blockGasCostStep`

Defines how much to increase or decrease the block gas cost based on the time elapsed since the previous block. If a block is produced at the target rate, the block gas cost remains the same as the parent block. If the production rate deviates from the target, the block gas cost is adjusted by the `blockGasCostStep` value for each second faster or slower than the target block rate.

# Configure Gas Fees (/academy/customizing-evm/05-genesis-configuration/06-configuring-gas-fees)

---
title: Configure Gas Fees
description: Learn how to configure gas fees in your EVM blockchain.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

## Benchmarks

You can take the numbers below as a benchmark:

```json
"feeConfig": {
  "gasLimit": 15000000,
  "minBaseFee": 25000000000,
  "targetGas": 15000000,
  "baseFeeChangeDenominator": 36,
  "minBlockGasCost": 0,
  "maxBlockGasCost": 1000000,
  "targetBlockRate": 2,
  "blockGasCostStep": 200000
},
"gasLimit": "0xe4e1c0",
```

## Choose Your Own Values

### `gasLimit`

As previously discussed, the `gasLimit` acts as a restriction on the amount of computation that can be executed in a single block, and hence caps the maximum transaction size, as the entire transaction needs to be processed within the same block. Your choice of `gasLimit` should depend on the specific use case and computational demands of your application.
Here are some considerations: **High Throughput:** If your application requires a high volume of transactions, you might want to allow for more transactions (i.e., more computation) to be packed into a single block. This increases the block capacity and enables handling a higher transaction load. **Heavy Transactions:** Transactions that involve deploying large contracts or executing complex operations tend to consume more gas. If your application involves such operations, ensure that your `gasLimit` accommodates the full execution of these heavy transactions or contract deployments. Be cautious with setting excessively high `gasLimit` values. Larger computational allowances per block can translate to increased hardware requirements for your blockchain. Therefore, align the `gasLimit` with your use case and infrastructure requirements for validators. Avoid setting an unusually large value "just in case." For reference, consider that a typical native token transaction costs around 21,000 gas units, and the C-Chain `gasLimit` is set to 15,000,000. ### Gas Price Parameters The following parameters affect the gas pricing and dynamic adjustments in your network: - **`minBaseFee`**: Sets the minimum base fee per unit of gas, establishing the lower bound for transaction costs in a block. - **`targetGas`**: Defines the target amount of gas to be consumed within a rolling 10-second window. Adjust this based on the expected network activity under normal conditions. - **`baseFeeChangeDenominator`**: Controls how quickly the base fee adjusts in response to fluctuations in gas usage. A smaller denominator allows for faster adjustments. - **`minBlockGasCost`**: Specifies the minimum gas cost for producing a block. - **`maxBlockGasCost`**: Sets the maximum gas cost for producing a block. - **`targetBlockRate`**: Determines the desired block production rate in seconds. Choose this based on your expected block issuance frequency. 
- **`blockGasCostStep`**: Adjusts the block gas cost based on the deviation from the target block rate. A higher step value increases the block gas cost more rapidly with deviations. These parameters should be selected based on the stability of gas usage in your network. Since gas fees are crucial for protecting against spam and abuse, carefully consider how these parameters will adapt to changes in network activity. The goal is to dynamically adjust transaction costs during irregular activity periods while maintaining stability under normal conditions. # Initial Token Allocation (/academy/customizing-evm/05-genesis-configuration/07-initial-token-allocation) --- title: Initial Token Allocation description: Learn about the initial token allocation and configure in your EVM blockchain. updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Callout } from 'fumadocs-ui/components/callout'; ## Background `Alloc` defines the initial balances of addresses at the time of chain creation. This field should be modified according to the specific requirements of each chain. If no genesis allocation is provided, you won't be able to interact with your new chain, as all transactions require a fee to be paid from the sender's balance. Without an initial allocation, there will be no funds available to cover transaction fees. ## Format The `alloc` field expects key-value pairs. Keys must be valid addresses, and the balance field in each value can be either a hexadecimal or decimal number representing the initial balance of the address. ```json { "config": { // ... }, "alloc": { // [!code highlight] "8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": { "balance": "0x295BE96E64066972000000" // 50,000,000 tokens } }, // ... } ``` Keys in the allocation are hex addresses without the canonical 0x prefix. Balances are denominated in `Wei` (10^18 `Wei` = 1 Whole Unit of the native token of the chain) and expressed as hex strings with the canonical 0x prefix. 
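Translating between whole-token amounts and the hex-encoded Wei strings used in the `balance` field is easy to get wrong by hand. The conversion can be sketched in Go (`weiHexForTokens` is a hypothetical helper for illustration, not part of any genesis tooling):

```go
package main

import (
	"fmt"
	"math/big"
)

// weiHexForTokens converts a whole-token amount into the hex-encoded
// Wei string used by the "balance" field (1 token = 10^18 Wei).
func weiHexForTokens(tokens int64) string {
	oneToken := new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil)
	wei := new(big.Int).Mul(big.NewInt(tokens), oneToken)
	return "0x" + wei.Text(16)
}

func main() {
	// 50,000,000 tokens, matching the allocation example above.
	fmt.Println(weiHexForTokens(50_000_000)) // prints: 0x295be96e64066972000000
}
```

The result matches (up to letter case) the `0x295BE96E64066972000000` balance used in the example above.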
Use [this converter](https://www.rapidtables.com/convert/number/hex-to-decimal.html) for translating between decimal and hex numbers.

The default configuration for testing purposes allocates a significant number of tokens to the address `8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC`. The private key for this address (as defined in the `.devcontainer`) is: `56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027`.

Never use this address or private key for anything other than testing on a local test network. The private key is publicly known, and any real funds transferred to this address are likely to be stolen.

## Configure

Allocate tokens to two addresses:

- The well-known test address `8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC`.
- Another test address that you have created (avoid using addresses associated with real funds).

```json
{
  "config": {
    // ...
  },
  "alloc": {
    "8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": { // [!code highlight]
      "balance": "0x295BE96E64066972000000" // 50,000,000 tokens
    },
    "": { // [!code highlight]
      "balance": "0x295BE96E64066972000000" // 50,000,000 tokens
    }
  },
  // ...
}
```

# Build and Run Custom Genesis EVM (/academy/customizing-evm/05-genesis-configuration/08-build-and-run-custom-genesis-blockchain)

---
title: Build and Run Custom Genesis EVM
description: Learn how to build and run your blockchain with a custom genesis file.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

### Build Your Precompile-EVM

There's a simple build script in the Precompile-EVM we can use. First, make sure you are in the root folder of your Precompile-EVM:

```bash
cd $GOPATH/src/github.com/ava-labs/precompile-evm
```

Then run the command to initiate the build script:

```bash
./scripts/build.sh
```

### Create your blockchain configuration

You can run your Precompile-EVM by using the Avalanche CLI.
First, create the configuration for your blockchain:

```bash
avalanche blockchain create myblockchain \
  --custom \
  --vm $VM_PATH \
  --genesis "./.devcontainer/genesis-example.json" \
  --force \
  --sovereign=false \
  --evm-token "TOK" \
  --warp \
  --icm
```

### Launch L1 with your customized EVM

```bash
avalanche blockchain deploy myblockchain --local
```

After about a minute, the blockchain will be created and additional output will appear in the terminal. You'll also see the RPC URL of your blockchain in the terminal.

# What are Precompiles? (/academy/customizing-evm/06-precompiles/01-what-are-precompiles)

---
title: What are Precompiles?
description: Learn about Precompiled Smart Contracts in EVM.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

Precompiled contracts allow the EVM to execute code written in Go, which runs significantly faster and more efficiently than equivalent logic written in Solidity.

## Overview

If you're familiar with Python, you might recognize a similar concept where many Python functions and libraries are implemented in C for efficiency. Python developers can import these precompiled modules and call their functions as if they were written in Python. The main difference is that the modules execute faster and more efficiently.

Precompiles can be called from a Solidity smart contract just like any other contract. The EVM maintains a list of reserved addresses mapped to precompiles. When a smart contract calls a function of a contract at one of these addresses, the EVM executes the precompile written in Go instead of the Solidity contract.
For example, if we map the address `0x030...01` to the SHA256 precompile that hashes its input using the SHA256 hash function, we can call the precompile as follows: ```solidity // SPDX-License-Identifier: MIT pragma solidity >=0.8.0; interface ISHA256 { // Computes the SHA256 hash of value function hashWithSHA256(string memory value) external view returns(bytes32 hash); } contract MyContract { ISHA256 mySHA256Precompile = ISHA256(0x0300000000000000000000000000000000000001); function doSomething() public { bytes32 hash = mySHA256Precompile.hashWithSHA256("test"); } } ``` In the code above, we call the precompile using the defined interface for our SHA256 precompile within `MyContract`. Note that there is no implementation of the precompile in Solidity itself. This will only work if the precompile is implemented in Go and registered at the address `0x030...01`. ### PrecompiledContract Interface When implementing a precompile in the Avalanche L1-EVM, the following function of the `StatefulPrecompiledContract` interface must be implemented in Go: ```go // StatefulPrecompiledContract is the interface for executing a precompiled contract type StatefulPrecompiledContract interface { // Run executes the precompiled contract. Run(accessibleState AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) } ``` We will cover the meaning of "stateful" and the first parameter `accessibleState` later. For now, let's focus on the function that specifies the logic of our precompile, which provides access to the following data: - caller: The address of the account that called the precompile. - addr: The address of the precompile being called. - input: All inputs encoded in a byte array. - suppliedGas: The amount of gas supplied for the precompile call. - readOnly: A boolean flag indicating if the interaction is only reading or modifying the state. 
The precompile implementation must return the following values: - ret: All return values encoded in a byte array. - remainingGas: The amount of gas remaining after execution. - err: If an error occurs, return it. Otherwise, return nil. # Why Precompiles? (/academy/customizing-evm/06-precompiles/02-why-precompiles) --- title: Why Precompiles? description: Learn about why you should utilize Precompiles in your smart contracts. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- Adding precompiles to the EVM offers several significant advantages, which we will outline in this chapter. ## Performance Optimization Precompiles primarily optimize the performance of specific computations. Introducing a new precompile can greatly reduce the computational resources required for certain tasks, thereby enhancing the performance of smart contracts and decentralized applications (DApps) that rely on these tasks. For instance, the SHA256 hash function (0x02) and the RIPEMD160 hash function (0x03) serve as examples of precompiles that significantly boost performance. Implementing these functions within a smart contract would be computationally expensive and slow, whereas as precompiles, they execute quickly and efficiently. ## Security Incorporating a function as a precompile allows developers to leverage libraries that have been thoroughly reviewed and audited, thus reducing the risk of bugs and vulnerabilities, which enhances overall security. For example, the ModExp (0x05) precompile safely performs modular exponentiation, a complex mathematical operation utilized in various cryptographic functions. ## Access to Go Libraries Precompiles are implemented in Go, allowing access to the rich ecosystem of existing Go libraries. This access eliminates the need for reimplementation, which can be labor-intensive and carries the risk of introducing bugs during the translation from Go to Solidity. 
Consider the implementation of the SHA256 hash algorithm to understand the complexity involved in reimplementing it in Solidity.

## Gas Efficiency

Introducing a new precompile to the EVM can enhance gas efficiency for specific computations, thereby lowering execution costs. This makes it feasible to incorporate more complex operations into smart contracts, expanding their functionality without significantly increasing transaction costs.

The identity precompile (0x04), which copies and returns input data, exemplifies this. Though simple, it provides gas efficiency by being faster and cheaper than implementing the same functionality in a standard contract.

## Advanced Features and Functionality

By adding new precompiles to the EVM, developers can introduce advanced features and functionalities, such as complex mathematical calculations, advanced cryptographic operations, and new data structures. This can unlock new possibilities for DApps and smart contracts, enabling them to execute tasks that would otherwise be too computationally demanding or technically challenging.

Precompiles for elliptic curve operations, such as ecadd (0x06), ecmul (0x07), and ecpairing (0x08), enable advanced cryptographic functionality within EVM smart contracts. These precompiles are crucial for implementing zk-SNARKs, a form of zero-knowledge proof, in Ethereum.

## Interoperability

Certain precompiles can enhance the interoperability of the EVM with other blockchains or systems. For instance, precompiles can be utilized to verify proofs from other chains or perform operations compatible with different cryptographic standards.

The BLS12-381 elliptic curve precompiles (specified in EIP-2537) improve EVM interoperability by allowing operations that are compatible with the BLS signature scheme, potentially facilitating inter-blockchain communication.
# Interact with a Precompile (/academy/customizing-evm/06-precompiles/03-interact-wtih-precompile)

---
title: Interact with a Precompile
description: Learn how to interact with a precompile on the C-Chain.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

So let's get to it and interact with a precompile on the C-Chain of your local network. The SHA256 precompile is already available on the C-Chain.

## Call Precompile from Foundry

In this example, we will call the SHA256 precompile to generate a hash of the input string.

**Precompile Address:** `0x0000000000000000000000000000000000000002`

```bash
cast call --rpc-url local-c --private-key $PK 0x0000000000000000000000000000000000000002 "run(string)(bytes32)" "test"
```

You should see a bytes32 hash of the input string `test` as the output:

```bash
0xa770b926e13a31fb823282e9473fd1da9e85afe23690336770c490986ef1b1fc
```

# Overview (/academy/customizing-evm/07-hash-function-precompile/00-intro)

---
title: Overview
description: Learn how to create a Hash Function Precompile.
updated: 2024-05-31
authors: [ashucoder9]
icon: Book
---

import {Step, Steps} from 'fumadocs-ui/components/steps';

## What We're Building

In this section, we'll create a precompile for the MD5 hash function. By utilizing the existing Go library for this hash function, we can avoid reimplementing the algorithm in Solidity and take advantage of Go's superior efficiency.

A hash function is a special type of function that takes input data of any size (such as a letter, a word, the text of a book, or even all combined texts available on the internet) and converts it into a fixed-size output. This output, known as a hash value, represents the original data. The process is often referred to as "hashing."

Hash functions possess several key properties:

- **Deterministic**: Given the same input, a hash function will always produce the same output.
For instance, hashing the word "apple" a thousand times will yield the same hash value each time.
- **Fixed Size**: Regardless of the input data size—whether it's a single character or an entire novel—the hash function always produces an output (the hash value) of a consistent length.
- **Fast Computation**: Hash functions are designed to be quick and efficient, returning a hash value with minimal computational resources and time.
- **Preimage Resistance**: It's computationally infeasible to retrieve the original input data from the hash value alone. This property is especially crucial in cryptographic hash functions for data security.
- **Avalanche Effect**: A small change to the input should cause significant changes in the output, making the new hash value appear uncorrelated with the old one. For example, hashing "apple" and "Apple" should produce completely different hash values.
- **Collision Resistance**: It is highly unlikely for two different pieces of input data to yield the same hash value. However, due to the limited length of hash values, collisions are theoretically possible. A good hash function minimizes the occurrence of such collisions.

Hash functions are utilized in various applications across computer science, including data retrieval, data integrity verification, password storage, and blockchain technology for cryptocurrencies like Bitcoin.

## Reference Implementation

To aid your understanding of building precompiles, we will compare each step with a reference example: a precompile for the SHA256 hash function. Both precompiles are quite similar, featuring a single function that returns the hashed value.

## Overview of Steps

Here's an overview of the steps involved:

1. Create a Solidity interface for the precompile.
2. Generate the ABI.
3. Write the precompile code in Go.
4. Configure and register the precompile.
5. Build and run your customized EVM.
6. Connect Remix to your customized EVM and interact with the precompile.

Let's get started!
# Create an MD5 Solidity Interface (/academy/customizing-evm/07-hash-function-precompile/01-create-solidity-interface)

---
title: Create an MD5 Solidity Interface
description: Learn how to create an MD5 Solidity Interface
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

The first step is defining the interface that will wrap the precompile implementation and that other contracts and users will interact with. In addition to declaring how users can interact with our MD5 precompile, defining the interface will also allow us to utilize a generator script. A precompile consists of many files, and generating boilerplate Go files will make implementing the precompile much easier.

## SHA-256 Precompile Interface

Before defining the MD5 interface, we will first look at the interface of the very similar SHA-256 precompile. This reference implementation is included in the repository we created earlier.

```solidity title="contracts/contracts/interfaces/ISHA256.sol"
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.0;

interface ISHA256 {
    /// Compute the hash of value
    /// @param value the value to be hashed
    /// @return hash the hash of the value
    function hashWithSHA256(string memory value) external view returns(bytes32 hash);
}
```

`ISHA256` contains a single function, `hashWithSHA256`, which takes a string `value` to be hashed and outputs a 32-byte hash.

## Creating the Solidity Interface for the MD5 Precompile

Now it's your turn to define a precompile interface! Create the interface for the MD5 hash function. Start by going into the same directory where `ISHA256.sol` lives (`contracts/contracts/interfaces/`) and create a new file named `IMD5.sol`.
Note that:

→ MD5 returns a 16-byte hash instead of a 32-byte hash

→ Make sure to name all parameters and return values

```solidity title="contracts/contracts/interfaces/IMD5.sol"
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.0;

interface IMD5 {
    function hashWithMD5(string memory value) external view returns (bytes16 hash);
}
```

## Generate the ABI

Now that we have an interface for our precompile, let's create the ABI from our Solidity interface. Open the integrated VS Code terminal (control + \`), and change to the `/contracts` directory:

```bash
cd contracts
```

Run the command to compile the Solidity interface to the ABI:

```bash
npx solc@latest --abi ./contracts/interfaces/IMD5.sol -o ./abis --base-path . --include-path ./node_modules
```

Rename the file:

```bash
mv ./abis/contracts_interfaces_IMD5_sol_IMD5.abi ./abis/IMD5.abi
```

Now, you should have a file called `IMD5.abi` in the folder `/contracts/abis` with the following content:

```json
[
  {
    "inputs": [
      {
        "internalType": "string",
        "name": "value",
        "type": "string"
      }
    ],
    "name": "hashWithMD5",
    "outputs": [
      {
        "internalType": "bytes16",
        "name": "hash",
        "type": "bytes16"
      }
    ],
    "stateMutability": "view",
    "type": "function"
  }
]
```

# Generate the Precompile (/academy/customizing-evm/07-hash-function-precompile/02-generate-the-precompile)

---
title: Generate the Precompile
description: Learn how to generate your precompile.
updated: 2024-09-27
authors: [ashucoder9, owenwahlgren]
icon: Terminal
comments: true
---

import { File, Files, Folder } from 'fumadocs-ui/components/files';

In the last section, we created the ABI for our precompile contract. Now, we'll use the precompile generation script provided by the precompile-evm template to generate boilerplate Go code for the precompile implementation that will be wrapped by the Solidity interface we created in the previous step.

## Running the Generation Script

To start, go to the root directory of your precompile-evm project:

```bash
cd ..
``` Now generate the files necessary for the precompile. ```bash ./scripts/generate_precompile.sh --abi ./contracts/abis/IMD5.abi --type Md5 --pkg md5 --out ./md5 ``` Now you should have a new directory in your root directory called `md5`: ### `contract.go` For the rest of this chapter, we'll work with the `md5/contract.go` file. If you generated the Go files related to your precompile, `contract.go` should look like the code below. Do not be intimidated if much of this code does not make sense to you. We'll cover the different parts and add some code to implement the logic of our MD5 precompile. ```go // Code generated // This file is a generated precompile contract config with stubbed abstract functions. // The file is generated by a template. Please inspect every code and comment in this file before use. package md5 import ( "errors" "fmt" "math/big" "github.com/ava-labs/subnet-evm/accounts/abi" "github.com/ava-labs/subnet-evm/precompile/contract" "github.com/ava-labs/subnet-evm/vmerrs" _ "embed" "github.com/ethereum/go-ethereum/common" ) const ( // Gas costs for each function. These are set to 1 by default. // You should set a gas cost for each function in your contract. // Generally, you should not set gas costs very low as this may cause your network to be vulnerable to DoS attacks. // There are some predefined gas costs in contract/utils.go that you can use. HashWithMD5GasCost uint64 = 1 /* SET A GAS COST HERE */ ) // CUSTOM CODE STARTS HERE // Reference imports to suppress errors from unused imports. This code and any unnecessary imports can be removed. var ( _ = abi.JSON _ = errors.New _ = big.NewInt _ = vmerrs.ErrOutOfGas _ = common.Big0 ) // Singleton StatefulPrecompiledContract and signatures. var ( // Md5RawABI contains the raw ABI of Md5 contract. 
//go:embed contract.abi Md5RawABI string Md5ABI = contract.ParseABI(Md5RawABI) Md5Precompile = createMd5Precompile() ) // UnpackHashWithMD5Input attempts to unpack [input] into the string type argument // assumes that [input] does not include selector (omits first 4 func signature bytes) func UnpackHashWithMD5Input(input []byte) (string, error) { res, err := Md5ABI.UnpackInput("hashWithMD5", input) if err != nil { return "", err } unpacked := *abi.ConvertType(res[0], new(string)).(*string) return unpacked, nil } // PackHashWithMD5 packs [value] of type string into the appropriate arguments for hashWithMD5. // the packed bytes include selector (first 4 func signature bytes). // This function is mostly used for tests. func PackHashWithMD5(value string) ([]byte, error) { return Md5ABI.Pack("hashWithMD5", value) } // PackHashWithMD5Output attempts to pack given hash of type [16]byte // to conform the ABI outputs. func PackHashWithMD5Output(hash [16]byte) ([]byte, error) { return Md5ABI.PackOutput("hashWithMD5", hash) } // UnpackHashWithMD5Output attempts to unpack given [output] into the [16]byte type output // assumes that [output] does not include selector (omits first 4 func signature bytes) func UnpackHashWithMD5Output(output []byte) ([16]byte, error) { res, err := Md5ABI.Unpack("hashWithMD5", output) if err != nil { return [16]byte{}, err } unpacked := *abi.ConvertType(res[0], new([16]byte)).(*[16]byte) return unpacked, nil } func hashWithMD5(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, HashWithMD5GasCost); err != nil { return nil, 0, err } // attempts to unpack [input] into the arguments to the HashWithMD5Input. 
// Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackHashWithMD5Input(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE _ = inputStruct // CUSTOM CODE OPERATES ON INPUT var output [16]byte // CUSTOM CODE FOR AN OUTPUT packedOutput, err := PackHashWithMD5Output(output) if err != nil { return nil, remainingGas, err } // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } // createMd5Precompile returns a StatefulPrecompiledContract with getters and setters for the precompile. func createMd5Precompile() contract.StatefulPrecompiledContract { var functions []*contract.StatefulPrecompileFunction abiFunctionMap := map[string]contract.RunStatefulPrecompileFunc{ "hashWithMD5": hashWithMD5, } for name, function := range abiFunctionMap { method, ok := Md5ABI.Methods[name] if !ok { panic(fmt.Errorf("given method (%s) does not exist in the ABI", name)) } functions = append(functions, contract.NewStatefulPrecompileFunction(method.ID, function)) } // Construct the contract with no fallback function. statefulContract, err := contract.NewStatefulPrecompileContract(nil, functions) if err != nil { panic(err) } return statefulContract } ``` # Packing and Unpacking (/academy/customizing-evm/07-hash-function-precompile/03-unpack-input-pack-output) --- title: Packing and Unpacking description: Learn how to unpack inputs and pack outputs. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- In this first segment of examining the `contract.go` file generated for us, we will go over the packing and unpacking functions in this file. ## The Notion of Packing Those eager to implement the MD5 algorithm might be wondering why we're discussing packing. However, there is good reason to discuss packing, and it comes down to the specification of the `staticcall` function in Solidity. 
We begin by referring to the example of calling the SHA-256 precompiled contract:

```solidity
(bool ok, bytes memory out) = address(2).staticcall(abi.encode(numberToHash));
```

As seen above, the `staticcall` function accepts input in bytes format, generated by `abi.encode`, and returns a boolean value indicating success, along with a bytes format output. Therefore, our precompiled contract should be designed to accept and return data in bytes format, involving the packing and unpacking of values.

Since packing is a deterministic process, there's no concern about data corruption during translation. However, some preprocessing or postprocessing is necessary to ensure the contract functions correctly.

## Unpacking Inputs

In `contract.go`, the function `UnpackHashWithMD5Input` unpacks our data and converts it into a type relevant to us. It takes a byte array as an input and returns a Go string. We will look at more complex precompiles that have multiple functions that may take multiple inputs later.

```go
// UnpackHashWithMD5Input attempts to unpack [input] into the string type argument
// assumes that [input] does not include selector (omits first 4 func signature bytes)
func UnpackHashWithMD5Input(input []byte) (string, error) {
	res, err := Md5ABI.UnpackInput("hashWithMD5", input)
	if err != nil {
		return "", err
	}
	unpacked := *abi.ConvertType(res[0], new(string)).(*string)
	return unpacked, nil
}
```

## Packing Outputs

Ignoring `hashWithMD5` for now, note that whatever value `hashWithMD5` outputs, we will need to postprocess it (i.e. pack it). `PackHashWithMD5Output` does just this, taking in an input of type [16]byte and outputting a byte array which can be returned by `staticcall`.

```go
// PackHashWithMD5Output attempts to pack given hash of type [16]byte
// to conform the ABI outputs.
func PackHashWithMD5Output(hash [16]byte) ([]byte, error) {
	return Md5ABI.PackOutput("hashWithMD5", hash)
}
```

This may seem trivial, but if our Solidity interface defined our function to return a uint or string, the type of our input to this function would differ accordingly.

# Implement the Precompile (/academy/customizing-evm/07-hash-function-precompile/04-implementing-precompile)

---
title: Implement the Precompile
description: Learn how to implement the Precompile.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

Now, we'll implement the logic of the precompile in Go. We'll hash our string using the MD5 algorithm.

## SHA-256 Precompile Implementation

Before defining the logic of our MD5 precompile, let's look at the logic of the function `hashWithSHA256` (located in `sha256/contract.go`), which computes the SHA-256 hash of a string:

```go title="sha256/contract.go"
import (
	"crypto/sha256"
	//...
)

// ...

func hashWithSHA256(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) {
	if remainingGas, err = contract.DeductGas(suppliedGas, HashWithSHA256GasCost); err != nil {
		return nil, 0, err
	}
	// attempts to unpack [input] into the arguments to the HashWithSHA256Input.
	// Assumes that [input] does not include selector
	// You can use unpacked [inputStruct] variable in your code
	inputStruct, err := UnpackHashWithSHA256Input(input)
	if err != nil {
		return nil, remainingGas, err
	}

	// CUSTOM CODE STARTS HERE
	_ = inputStruct // CUSTOM CODE OPERATES ON INPUT
	var output [32]byte // CUSTOM CODE FOR AN OUTPUT
	output = sha256.Sum256([]byte(inputStruct))

	packedOutput, err := PackHashWithSHA256Output(output)
	if err != nil {
		return nil, remainingGas, err
	}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}
```

As you can see, we're performing the following steps:

- Line 1: Importing the `sha256` package from Go's standard library (at the top of the Go file)
- Line 15: Unpacking the input to the variable `inputStruct`. The `Struct` suffix in the name may seem odd here, but you will see why it is used when we work with multiple inputs in a later example
- Line 24: Calling `sha256.Sum256` and assigning its result to the `output` variable
- Line 26: Packing the output into a byte array
- Line 32: Returning the packed output, the remaining gas, and nil, since no error has occurred

## Implementing the MD5 Precompile in `contract.go`

Go ahead and implement `md5/contract.go` for the MD5 precompile. You should only have to write a few lines of code. If you're unsure which function to use, the following documentation might help: [Go Documentation Crypto/md5](https://pkg.go.dev/crypto/md5#Sum)

```go title="md5/contract.go"
import (
	"crypto/md5"
	// ...
)

// ...
func hashWithMD5(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, HashWithMD5GasCost); err != nil { return nil, 0, err } // attempts to unpack [input] into the arguments to the HashWithMD5Input. // Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackHashWithMD5Input(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE _ = inputStruct // CUSTOM CODE OPERATES ON INPUT var output [16]byte // CUSTOM CODE FOR AN OUTPUT output = md5.Sum([]byte(inputStruct)) packedOutput, err := PackHashWithMD5Output(output) if err != nil { return nil, remainingGas, err } // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } ``` To solve this task, we did the following things: - Import the md5 package from the crypto library - Unpack the input into the variable inputStruct (the Struct suffix may seem odd for a single input, but you will see why it is named this way when we handle multiple inputs in a later example) - Call the md5 function and assign its result to the output variable - Pack the output into a byte array - Return the packed output, the remaining gas, and nil, since no error has occurred # ConfigKey, ContractAddress, and Genesis (/academy/customizing-evm/07-hash-function-precompile/05-configkey-and-contractaddr) --- title: ConfigKey, ContractAddress, and Genesis description: Learn how to set ConfigKey, ContractAddress, and update the genesis configuration.
updated: 2024-09-27 authors: [ashucoder9, owenwahlgren] icon: Terminal --- ## Config Key The precompile config key is used to configure the precompile in the `chainConfig`. It's set in the `module.go` file of our precompile. The generator chooses an initial value that should be sufficient for many cases. In our case: ```go title="md5/module.go" // ConfigKey is the key used in json config files to specify this precompile precompileconfig. // must be unique across all precompiles. const ConfigKey = "md5Config" ``` This key is used for each precompile in the genesis configuration to set the activation timestamp of the precompile. ## Contract Address Each precompile has a unique contract address we can use to call it. This is the address we used earlier to instantiate the precompile in our Solidity code, or in Remix when we interacted with the SHA-256 precompile. ```go title="md5/module.go" // ContractAddress is the defined address of the precompile contract. // This should be unique across all precompile contracts. // See precompile/registry/registry.go for registered precompile contracts and more information. var ContractAddress = common.HexToAddress("{ASUITABLEHEXADDRESS}") // SET A SUITABLE HEX ADDRESS HERE ``` The `0x01` range is reserved for precompiles added by Ethereum. The `0x02` range is reserved for precompiles provided by Avalanche. The `0x03` range is reserved for custom precompiles. Let's set our contract address to `0x0300000000000000000000000000000000000002`: ```go title="md5/module.go" var ContractAddress = common.HexToAddress("0x0300000000000000000000000000000000000002") // SET A SUITABLE HEX ADDRESS HERE ``` ## Update Genesis Configuration Now that the `ConfigKey` and `ContractAddress` are set, we need to update the genesis configuration to register the precompile.
```json title=".devcontainer/genesis-example.json" { "config": { "chainId": 99999, "homesteadBlock": 0, "eip150Block": 0, "eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0", "eip155Block": 0, "eip158Block": 0, "byzantiumBlock": 0, "constantinopleBlock": 0, "petersburgBlock": 0, "istanbulBlock": 0, "muirGlacierBlock": 0, "subnetEVMTimestamp": 0, "feeConfig": { "gasLimit": 20000000, "minBaseFee": 1000000000, "targetGas": 100000000, "baseFeeChangeDenominator": 48, "minBlockGasCost": 0, "maxBlockGasCost": 10000000, "targetBlockRate": 2, "blockGasCostStep": 500000 }, "sha256Config": { "blockTimestamp": 0 }, "md5Config": { // [!code highlight:3] "blockTimestamp": 0 } }, "alloc": { "8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": { "balance": "0x52B7D2DCC80CD2E4000000" } }, "nonce": "0x0", "timestamp": "0x0", "extraData": "0x00", "gasLimit": "0x1312D00", "difficulty": "0x0", "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "coinbase": "0x0000000000000000000000000000000000000000", "number": "0x0", "gasUsed": "0x0", "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000" } ``` # Register Your Precompile (/academy/customizing-evm/07-hash-function-precompile/06-register-precompile) --- title: Register Your Precompile description: Learn how to register your precompile. updated: 2024-09-27 authors: [owenwahlgren] icon: Terminal --- The next step in developing our precompile is to **register it with precompile-evm**. Take a look at `plugin/main.go`. You should see the following file: ```go title="plugin/main.go" // (c) 2019-2023, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. 
package main import ( "fmt" "github.com/ava-labs/avalanchego/version" "github.com/ava-labs/subnet-evm/plugin/evm" "github.com/ava-labs/subnet-evm/plugin/runner" // Each precompile generated by the precompilegen tool has a self-registering init function // that registers the precompile with the subnet-evm. Importing the precompile package here // will cause the precompile to be registered with the subnet-evm. // ADD YOUR PRECOMPILE HERE //_ "github.com/ava-labs/precompile-evm/{yourprecompilepkg}" ) const Version = "v0.1.4" func main() { versionString := fmt.Sprintf("Precompile-EVM/%s Avalanche L1-EVM/%s [AvalancheGo=%s, rpcchainvm=%d]", Version, evm.Version, version.Current, version.RPCChainVMProtocol) runner.Run(versionString) } ``` As of now, we do not have any precompiles registered. To register a precompile, simply import the precompile package in the `plugin/main.go` file. ```go title="plugin/main.go" // (c) 2019-2023, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. package main import ( "fmt" "github.com/ava-labs/avalanchego/version" "github.com/ava-labs/subnet-evm/plugin/evm" "github.com/ava-labs/subnet-evm/plugin/runner" // Each precompile generated by the precompilegen tool has a self-registering init function // that registers the precompile with the subnet-evm. Importing the precompile package here // will cause the precompile to be registered with the subnet-evm. _ "github.com/ava-labs/precompile-evm/sha256"// [!code highlight:2] _ "github.com/ava-labs/precompile-evm/md5" ) const Version = "v0.1.4" func main() { versionString := fmt.Sprintf("Precompile-EVM/%s Avalanche L1-EVM/%s [AvalancheGo=%s, rpcchainvm=%d]", Version, evm.Version, version.Current, version.RPCChainVMProtocol) runner.Run(versionString) } ``` # Build and Run (/academy/customizing-evm/07-hash-function-precompile/07-build-and-run) --- title: Build and Run description: Learn how to build and run your custom VM on a local network.
updated: 2024-09-27 authors: [ashucoder9, owenwahlgren] icon: Terminal --- Time to build and run your customized EVM. Follow the steps below to build and run your custom VM on a local network. ### Build Your Custom VM Precompile-EVM includes a simple build script we can use. First, make sure you are in the root folder of your Precompile-EVM: ```bash cd $GOPATH/src/github.com/ava-labs/precompile-evm ``` Then run the command to initiate the build script: ```bash ./scripts/build.sh ``` If you do not see any errors, the build was successful. ### Create Your Blockchain Configuration You can run your customized Precompile-EVM by using the Avalanche CLI. First, create the configuration for your blockchain with your custom VM: ```bash avalanche blockchain create myblockchain \ --custom \ --vm $VM_PATH \ --genesis "./.devcontainer/genesis-example.json" \ --force \ --sovereign=false \ --evm-token "TOK" \ --warp \ --icm ``` ### Launch Your L1 with the Customized EVM Next, launch the Avalanche L1 with your custom VM: ```bash avalanche blockchain deploy myblockchain --local ``` After about a minute, the blockchain should be created and more output will appear in the terminal, including the RPC URL of your blockchain. # Interact with Precompile (/academy/customizing-evm/07-hash-function-precompile/08-interact-with-md5) --- title: Interact with Precompile description: Interact with the MD5 precompile we just deployed. updated: 2024-08-27 authors: [owenwahlgren] icon: Terminal --- ## Call Precompile from Foundry Now we will call our MD5 precompile to generate a bytes16 hash of the input string. **MD5 Precompile Address:** `0x0300000000000000000000000000000000000002` ```bash cast call --rpc-url myblockchain --private-key $PK 0x0300000000000000000000000000000000000002 "hashWithMD5(string)(bytes16)" "test" ``` You should see the bytes16 hash of the input string `test` as the output.
```bash 0x098f6bcd4621d373cade4e832627b4f6 ``` # Overview (/academy/customizing-evm/08-calculator-precompile/00-intro) --- title: Overview description: Learn how to create a Calculator precompile. updated: 2024-05-31 authors: [ashucoder9] icon: Book --- import { Step, Steps } from 'fumadocs-ui/components/steps'; ## Reference Implementation In this section, we will showcase how to build a more complex precompile that offers a selection of simple math operations. Our calculator will support the following operations: 1. Add two numbers and return the result (3 + 5 = 8) 2. Get the next two greater numbers (7 => 8, 9) 3. Repeat a string x times (4, b => bbbb) This is a somewhat odd calculator with limited real-life usability, but the operations have been chosen to demonstrate different scenarios you might face while building precompiles. ## What You Are Building Similar to the Calculator precompile, you will be building a precompile called **CalculatorPlus** which contains the following mathematical functions: - **Powers of Three**: takes an integer base as input; returns the square, cube, and 4th power of the input - **Modulo+**: takes two arguments, the dividend and the divisor; returns how many times the divisor fits into the dividend, and the remainder - **Simplify Fraction**: takes two arguments, the numerator and the denominator; returns the simplified version of the fraction (if the denominator is 0, we return 0) ## Overview of Steps Compared to the process before, we will now also add tests for our precompile. Here's a quick overview of the steps we're going to follow: 1. Create a Solidity interface for the precompile 2. Generate the ABI 3. Write the precompile code in Go 4. Configure and register the precompile 5. Add and run tests 6. Build and run your customized EVM 7. Connect Remix to your customized EVM and interact with the precompile This tutorial will help you understand how to create more complex precompiles. Let's begin!
# Create Solidity Interface (/academy/customizing-evm/08-calculator-precompile/01-create-solidity-interface) --- title: Create Solidity Interface description: Learn how to create the Solidity interface for your calculator precompile. updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- Just like in the MD5 section, we will start by examining the Solidity interface for the Calculator precompile before guiding you through building the **CalculatorPlus** precompile. To start, let's take a look at the Calculator Solidity interface: ```solidity title="contracts/contracts/interfaces/ICalculator.sol" // SPDX-License-Identifier: MIT pragma solidity >=0.8.0; interface ICalculator { function add(uint value1, uint value2) external view returns(uint result); function nextTwo(uint value1) external view returns(uint result1, uint result2); function repeat(uint times, string memory text) external view returns(string memory result); } ``` With this in mind, let's define the CalculatorPlus Solidity interface. Your interface should have the following three functions: 1. `powOfThree`: takes in an unsigned integer base, and returns three unsigned integers named secondPow, thirdPow, fourthPow. 2. `moduloPlus`: takes in unsigned integers dividend and divisor as input, and returns two unsigned integers named multiple and remainder. 3.
`simplFrac`: takes in unsigned integers named numerator and denominator, and returns two unsigned integers named simplNum and simplDenom ```solidity title="solidity/interfaces/ICalculatorPlus.sol" // SPDX-License-Identifier: MIT pragma solidity >=0.8.0; interface ICalculatorPlus { function powOfThree(uint256 base) external view returns(uint256 secondPow, uint256 thirdPow, uint256 fourthPow); function moduloPlus(uint256 dividend, uint256 divisor) external view returns(uint256 multiple, uint256 remainder); function simplFrac(uint256 numerator, uint256 denominator) external view returns(uint256 simplNum, uint256 simplDenom); } ``` ## Generate the ABI Now that we have an interface of our precompile, let's create an ABI of our Solidity interface. Open the terminal (control + \`), change to the `/contracts` directory and run the following command to compile the solidity interface to the ABI: ```bash # Go to the upper contracts directory of your project cd contracts # Compile ICalculatorPlus.sol to ABI npx solc@latest --abi ./contracts/interfaces/ICalculatorPlus.sol -o ./abis --base-path . 
--include-path ./node_modules # Rename using this script or manually mv ./abis/contracts_interfaces_ICalculatorPlus_sol_ICalculatorPlus.abi ./abis/ICalculatorPlus.abi ``` Now you should have a file called `ICalculatorPlus.abi` in the folder `/contracts/abis` with the following content: ```json title="/contracts/abis/ICalculatorPlus.abi" [ { "inputs": [ { "internalType": "uint256", "name": "dividend", "type": "uint256" }, { "internalType": "uint256", "name": "divisor", "type": "uint256" } ], "name": "moduloPlus", "outputs": [ { "internalType": "uint256", "name": "multiple", "type": "uint256" }, { "internalType": "uint256", "name": "remainder", "type": "uint256" } ], "stateMutability": "view", "type": "function" }, { "inputs": [ { "internalType": "uint256", "name": "base", "type": "uint256" } ], "name": "powOfThree", "outputs": [ { "internalType": "uint256", "name": "secondPow", "type": "uint256" }, { "internalType": "uint256", "name": "thirdPow", "type": "uint256" }, { "internalType": "uint256", "name": "fourthPow", "type": "uint256" } ], "stateMutability": "view", "type": "function" }, { "inputs": [ { "internalType": "uint256", "name": "numerator", "type": "uint256" }, { "internalType": "uint256", "name": "denominator", "type": "uint256" } ], "name": "simplFrac", "outputs": [ { "internalType": "uint256", "name": "simplNum", "type": "uint256" }, { "internalType": "uint256", "name": "simplDenom", "type": "uint256" } ], "stateMutability": "view", "type": "function" } ] ``` # Generating the Precompile (/academy/customizing-evm/08-calculator-precompile/02-generating-precompile) --- title: Generating the Precompile description: Learn how to generating the precompile updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- In this step, we will again utilize the precompile generation script to generate all the Go files based on the ABI for your calculator. 
## Run Generation Script Change to the root directory of your precompile-evm project and run the command to generate the go files: ```bash # Change to root cd .. # Generate go files ./scripts/generate_precompile.sh --abi ./contracts/abis/ICalculatorPlus.abi --type Calculatorplus --pkg calculatorplus --out ./calculatorplus ``` Now you should have a new directory called `calculatorplus` in the root directory of your project. If you check out the generated `contract.go` file, you will see right away that it is much longer than the one for our hash function precompile from earlier. This is because our calculator precompile has more functions and parameters. Browse through the code and see if you can spot the new elements: ```go title="contract.go" // Code generated // This file is a generated precompile contract config with stubbed abstract functions. // The file is generated by a template. Please inspect every code and comment in this file before use. package calculatorplus import ( "errors" "fmt" "math/big" "github.com/ava-labs/subnet-evm/accounts/abi" "github.com/ava-labs/subnet-evm/precompile/contract" "github.com/ava-labs/subnet-evm/vmerrs" _ "embed" "github.com/ethereum/go-ethereum/common" ) const ( // Gas costs for each function. These are set to 1 by default. // You should set a gas cost for each function in your contract. // Generally, you should not set gas costs very low as this may cause your network to be vulnerable to DoS attacks. // There are some predefined gas costs in contract/utils.go that you can use. ModuloPlusGasCost uint64 = 1 /* SET A GAS COST HERE */ PowOfThreeGasCost uint64 = 1 /* SET A GAS COST HERE */ SimplFracGasCost uint64 = 1 /* SET A GAS COST HERE */ ) // CUSTOM CODE STARTS HERE // Reference imports to suppress errors from unused imports. This code and any unnecessary imports can be removed. var ( _ = abi.JSON _ = errors.New _ = big.NewInt _ = vmerrs.ErrOutOfGas _ = common.Big0 ) // Singleton StatefulPrecompiledContract and signatures.
var ( // CalculatorplusRawABI contains the raw ABI of Calculatorplus contract. //go:embed contract.abi CalculatorplusRawABI string CalculatorplusABI = contract.ParseABI(CalculatorplusRawABI) CalculatorplusPrecompile = createCalculatorplusPrecompile() ) type ModuloPlusInput struct { Dividend *big.Int Divisor *big.Int } type ModuloPlusOutput struct { Multiple *big.Int Remainder *big.Int } type PowOfThreeOutput struct { SecondPow *big.Int ThirdPow *big.Int FourthPow *big.Int } type SimplFracInput struct { Numerator *big.Int Denominator *big.Int } type SimplFracOutput struct { SimplNum *big.Int SimplDenom *big.Int } // UnpackModuloPlusInput attempts to unpack [input] as ModuloPlusInput // assumes that [input] does not include selector (omits first 4 func signature bytes) func UnpackModuloPlusInput(input []byte) (ModuloPlusInput, error) { inputStruct := ModuloPlusInput{} err := CalculatorplusABI.UnpackInputIntoInterface(&inputStruct, "moduloPlus", input) return inputStruct, err } // PackModuloPlus packs [inputStruct] of type ModuloPlusInput into the appropriate arguments for moduloPlus. func PackModuloPlus(inputStruct ModuloPlusInput) ([]byte, error) { return CalculatorplusABI.Pack("moduloPlus", inputStruct.Dividend, inputStruct.Divisor) } // PackModuloPlusOutput attempts to pack given [outputStruct] of type ModuloPlusOutput // to conform the ABI outputs. 
func PackModuloPlusOutput(outputStruct ModuloPlusOutput) ([]byte, error) { return CalculatorplusABI.PackOutput("moduloPlus", outputStruct.Multiple, outputStruct.Remainder, ) } // UnpackModuloPlusOutput attempts to unpack [output] as ModuloPlusOutput // assumes that [output] does not include selector (omits first 4 func signature bytes) func UnpackModuloPlusOutput(output []byte) (ModuloPlusOutput, error) { outputStruct := ModuloPlusOutput{} err := CalculatorplusABI.UnpackIntoInterface(&outputStruct, "moduloPlus", output) return outputStruct, err } func moduloPlus(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, ModuloPlusGasCost); err != nil { return nil, 0, err } // attempts to unpack [input] into the arguments to the ModuloPlusInput. // Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackModuloPlusInput(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE _ = inputStruct // CUSTOM CODE OPERATES ON INPUT var output ModuloPlusOutput // CUSTOM CODE FOR AN OUTPUT packedOutput, err := PackModuloPlusOutput(output) if err != nil { return nil, remainingGas, err } // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } // UnpackPowOfThreeInput attempts to unpack [input] into the *big.Int type argument // assumes that [input] does not include selector (omits first 4 func signature bytes) func UnpackPowOfThreeInput(input []byte) (*big.Int, error) { res, err := CalculatorplusABI.UnpackInput("powOfThree", input) if err != nil { return new(big.Int), err } unpacked := *abi.ConvertType(res[0], new(*big.Int)).(**big.Int) return unpacked, nil } // PackPowOfThree packs [base] of type *big.Int into the appropriate arguments for powOfThree. 
// the packed bytes include selector (first 4 func signature bytes). // This function is mostly used for tests. func PackPowOfThree(base *big.Int) ([]byte, error) { return CalculatorplusABI.Pack("powOfThree", base) } // PackPowOfThreeOutput attempts to pack given [outputStruct] of type PowOfThreeOutput // to conform the ABI outputs. func PackPowOfThreeOutput(outputStruct PowOfThreeOutput) ([]byte, error) { return CalculatorplusABI.PackOutput("powOfThree", outputStruct.SecondPow, outputStruct.ThirdPow, outputStruct.FourthPow, ) } // UnpackPowOfThreeOutput attempts to unpack [output] as PowOfThreeOutput // assumes that [output] does not include selector (omits first 4 func signature bytes) func UnpackPowOfThreeOutput(output []byte) (PowOfThreeOutput, error) { outputStruct := PowOfThreeOutput{} err := CalculatorplusABI.UnpackIntoInterface(&outputStruct, "powOfThree", output) return outputStruct, err } func powOfThree(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, PowOfThreeGasCost); err != nil { return nil, 0, err } // attempts to unpack [input] into the arguments to the PowOfThreeInput. 
// Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackPowOfThreeInput(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE _ = inputStruct // CUSTOM CODE OPERATES ON INPUT var output PowOfThreeOutput // CUSTOM CODE FOR AN OUTPUT packedOutput, err := PackPowOfThreeOutput(output) if err != nil { return nil, remainingGas, err } // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } // UnpackSimplFracInput attempts to unpack [input] as SimplFracInput // assumes that [input] does not include selector (omits first 4 func signature bytes) func UnpackSimplFracInput(input []byte) (SimplFracInput, error) { inputStruct := SimplFracInput{} err := CalculatorplusABI.UnpackInputIntoInterface(&inputStruct, "simplFrac", input) return inputStruct, err } // PackSimplFrac packs [inputStruct] of type SimplFracInput into the appropriate arguments for simplFrac. func PackSimplFrac(inputStruct SimplFracInput) ([]byte, error) { return CalculatorplusABI.Pack("simplFrac", inputStruct.Numerator, inputStruct.Denominator) } // PackSimplFracOutput attempts to pack given [outputStruct] of type SimplFracOutput // to conform the ABI outputs. 
func PackSimplFracOutput(outputStruct SimplFracOutput) ([]byte, error) { return CalculatorplusABI.PackOutput("simplFrac", outputStruct.SimplNum, outputStruct.SimplDenom, ) } // UnpackSimplFracOutput attempts to unpack [output] as SimplFracOutput // assumes that [output] does not include selector (omits first 4 func signature bytes) func UnpackSimplFracOutput(output []byte) (SimplFracOutput, error) { outputStruct := SimplFracOutput{} err := CalculatorplusABI.UnpackIntoInterface(&outputStruct, "simplFrac", output) return outputStruct, err } func simplFrac(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, SimplFracGasCost); err != nil { return nil, 0, err } // attempts to unpack [input] into the arguments to the SimplFracInput. // Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackSimplFracInput(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE _ = inputStruct // CUSTOM CODE OPERATES ON INPUT var output SimplFracOutput // CUSTOM CODE FOR AN OUTPUT packedOutput, err := PackSimplFracOutput(output) if err != nil { return nil, remainingGas, err } // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } // createCalculatorplusPrecompile returns a StatefulPrecompiledContract with getters and setters for the precompile. 
func createCalculatorplusPrecompile() contract.StatefulPrecompiledContract { var functions []*contract.StatefulPrecompileFunction abiFunctionMap := map[string]contract.RunStatefulPrecompileFunc{ "moduloPlus": moduloPlus, "powOfThree": powOfThree, "simplFrac": simplFrac, } for name, function := range abiFunctionMap { method, ok := CalculatorplusABI.Methods[name] if !ok { panic(fmt.Errorf("given method (%s) does not exist in the ABI", name)) } functions = append(functions, contract.NewStatefulPrecompileFunction(method.ID, function)) } // Construct the contract with no fallback function. statefulContract, err := contract.NewStatefulPrecompileContract(nil, functions) if err != nil { panic(err) } return statefulContract } ``` # Unpacking Multiple Inputs & Packing Multiple Outputs (/academy/customizing-evm/08-calculator-precompile/03-unpacking-and-packing) --- title: Unpacking Multiple Inputs & Packing Multiple Outputs description: Learn how to unpack multiple inputs and pack multiple outputs. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- ## Unpack/Pack Functions For Each Operation As with the MD5 precompile, the generator created the pack/unpack functions for each operation of our Calculator and CalculatorPlus precompiles. However, you might have noticed that the generator also created some structs in each respective `contract.go` file. In the case of the Calculator precompile, we have: ```go title="contract.go" type AddInput struct { Value1 *big.Int Value2 *big.Int } type NextTwoOutput struct { Result1 *big.Int Result2 *big.Int } type RepeatInput struct { Times *big.Int Text string } ``` The generator creates these structs whenever a function defined in the respective Solidity interface has more than one input or more than one output. These multiple values are now stored together via structs.
## Unpacking Multiple Inputs To understand how unpacking works when structs are introduced, let's take a look at the add function in the Calculator precompile, which takes two inputs, and the nextTwo function, which takes only one. ```go // UnpackAddInput attempts to unpack [input] as AddInput // assumes that [input] does not include selector (omits first 4 func signature bytes) func UnpackAddInput(input []byte) (AddInput, error) { inputStruct := AddInput{} err := CalculatorABI.UnpackInputIntoInterface(&inputStruct, "add", input) return inputStruct, err } // UnpackNextTwoInput attempts to unpack [input] into the *big.Int type argument // assumes that [input] does not include selector (omits first 4 func signature bytes) func UnpackNextTwoInput(input []byte) (*big.Int, error) { res, err := CalculatorABI.UnpackInput("nextTwo", input) if err != nil { return big.NewInt(0), err } unpacked := *abi.ConvertType(res[0], new(*big.Int)).(**big.Int) return unpacked, nil } ``` In each case, the unpacking function takes a byte array as input. However, `UnpackAddInput` returns a value of type `AddInput`, a struct containing two `big.Int` values, while `UnpackNextTwoInput` returns a single value of type `big.Int`.
To demonstrate how one could use the output of unpacking functions like `UnpackAddInput`, below is the implementation of the add function of the Calculator precompile: ```go func add(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, AddGasCost); err != nil { return nil, 0, err } // attempts to unpack [input] into the arguments to the AddInput. // Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackAddInput(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE var output *big.Int // CUSTOM CODE FOR AN OUTPUT output = big.NewInt(0).Add(inputStruct.Value1, inputStruct.Value2) packedOutput, err := PackAddOutput(output) if err != nil { return nil, remainingGas, err } // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } ``` ## Packing Multiple Outputs To understand how structs affect the packing of outputs, let's refer to `PackNextTwoOutput` for the Calculator precompile: ```go // PackNextTwoOutput attempts to pack given [outputStruct] of type NextTwoOutput // to conform the ABI outputs. func PackNextTwoOutput(outputStruct NextTwoOutput) ([]byte, error) { return CalculatorABI.PackOutput("nextTwo", outputStruct.Result1, outputStruct.Result2, ) } ``` Notice here that even though we are no longer working with singular types, the generator is still able to pack our struct into bytes. As a result, we are able to return the packed version of any `NextTwoOutput` struct as bytes at the end of `nextTwo`.
# Implementing Precompile (/academy/customizing-evm/08-calculator-precompile/04-implementing-precompile) --- title: Implementing Precompile description: Learn how to implement the precompile in `contract.go` updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- In this section, we will define the logic for our CalculatorPlus precompile; in particular, we want to add the logic for the following three functions: `powOfThree`, `moduloPlus`, and `simplFrac`. For those worried about this section: don't be! Our solution adds only 12 lines of code to `contract.go`. ## Looking at Calculator Before we define the logic of CalculatorPlus, we will first examine the implementation of the Calculator precompile: ```go func add(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, AddGasCost); err != nil { return nil, 0, err } // attempts to unpack [input] into the arguments to the AddInput. // Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackAddInput(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE _ = inputStruct // CUSTOM CODE OPERATES ON INPUT var output *big.Int // CUSTOM CODE FOR AN OUTPUT output = big.NewInt(0).Add(inputStruct.Value1, inputStruct.Value2) packedOutput, err := PackAddOutput(output) if err != nil { return nil, remainingGas, err } // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } // ...
func repeat(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, RepeatGasCost); err != nil { return nil, 0, err } // attempts to unpack [input] into the arguments to the RepeatInput. // Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackRepeatInput(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE _ = inputStruct // CUSTOM CODE OPERATES ON INPUT var output string // CUSTOM CODE FOR AN OUTPUT output = strings.Repeat(inputStruct.Text, int(inputStruct.Times.Int64())) packedOutput, err := PackRepeatOutput(output) if err != nil { return nil, remainingGas, err } // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } // ... func nextTwo(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, NextTwoGasCost); err != nil { return nil, 0, err } // attempts to unpack [input] into the arguments to the NextTwoInput. 
	// Assumes that [input] does not include selector
	// You can use unpacked [inputStruct] variable in your code
	inputStruct, err := UnpackNextTwoInput(input)
	if err != nil {
		return nil, remainingGas, err
	}

	// CUSTOM CODE STARTS HERE
	_ = inputStruct // CUSTOM CODE OPERATES ON INPUT

	var output NextTwoOutput // CUSTOM CODE FOR AN OUTPUT
	output.Result1 = big.NewInt(0).Add(inputStruct, big.NewInt(1))
	output.Result2 = big.NewInt(0).Add(inputStruct, big.NewInt(2))

	packedOutput, err := PackNextTwoOutput(output)
	if err != nil {
		return nil, remainingGas, err
	}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}
```

Although the code snippet above may be long, you might notice that we added only four lines of code to the autogenerated code provided to us by Precompile-EVM! In particular, we only added lines 19, 48, 79, and 80.

In general, note the following:

- Structs vs Singular Values: make sure to keep track of which inputs/outputs are structs and which are singular values like `*big.Int`. As an example, in `nextTwo` we are dealing with a plain `*big.Int` input. However, in `repeat`, we are passed an input of type `RepeatInput`, a struct.
- Documentation: for both Calculator and CalculatorPlus, the `big` package documentation is a great reference: https://pkg.go.dev/math/big

Now that we have looked at the implementation of the Calculator precompile, it's time for you to define the CalculatorPlus precompile!

## Implementing moduloPlus

We start by looking at the starter code for `moduloPlus`:

```go
func moduloPlus(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) {
	if remainingGas, err = contract.DeductGas(suppliedGas, ModuloPlusGasCost); err != nil {
		return nil, 0, err
	}
	// attempts to unpack [input] into the arguments to the ModuloPlusInput.
	// Assumes that [input] does not include selector
	// You can use unpacked [inputStruct] variable in your code
	inputStruct, err := UnpackModuloPlusInput(input)
	if err != nil {
		return nil, remainingGas, err
	}

	// CUSTOM CODE STARTS HERE
	_ = inputStruct // CUSTOM CODE OPERATES ON INPUT

	var output ModuloPlusOutput // CUSTOM CODE FOR AN OUTPUT

	packedOutput, err := PackModuloPlusOutput(output)
	if err != nil {
		return nil, remainingGas, err
	}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}
```

We want to note the following:

- `inputStruct` is the input that we want to work with (i.e. `inputStruct` contains the two numbers that we want to use for the modulo calculation)
- All of our code will go after line 15
- We want the output struct to contain the result of our modulo operation (the struct will contain the multiple and remainder)

With this in mind, try to implement `moduloPlus`.

```go
func moduloPlus(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) {
	if remainingGas, err = contract.DeductGas(suppliedGas, ModuloPlusGasCost); err != nil {
		return nil, 0, err
	}
	// attempts to unpack [input] into the arguments to the ModuloPlusInput.
	// Assumes that [input] does not include selector
	// You can use unpacked [inputStruct] variable in your code
	inputStruct, err := UnpackModuloPlusInput(input)
	if err != nil {
		return nil, remainingGas, err
	}

	// CUSTOM CODE STARTS HERE
	_ = inputStruct // CUSTOM CODE OPERATES ON INPUT

	var output ModuloPlusOutput // CUSTOM CODE FOR AN OUTPUT
	// DivMod sets its receiver to the quotient and its third argument to the
	// remainder; both must be non-nil, so we pass fresh big.Ints rather than
	// the still-nil output fields. (DivMod panics if the divisor is 0.)
	output.Multiple, output.Remainder = big.NewInt(0).DivMod(inputStruct.Dividend, inputStruct.Divisor, big.NewInt(0))

	packedOutput, err := PackModuloPlusOutput(output)
	if err != nil {
		return nil, remainingGas, err
	}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}
```

## Implementing powOfThree

Likewise, for `powOfThree`, we want to define the logic of the function in the custom code section. However, note that while we are working with an output struct, our input is a singular value.

With this in mind, take a crack at implementing `powOfThree`:

```go
func powOfThree(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) {
	if remainingGas, err = contract.DeductGas(suppliedGas, PowOfThreeGasCost); err != nil {
		return nil, 0, err
	}
	// attempts to unpack [input] into the arguments to the PowOfThreeInput.
	// Assumes that [input] does not include selector
	// You can use unpacked [inputStruct] variable in your code
	inputStruct, err := UnpackPowOfThreeInput(input)
	if err != nil {
		return nil, remainingGas, err
	}

	// CUSTOM CODE STARTS HERE
	_ = inputStruct // CUSTOM CODE OPERATES ON INPUT

	var output PowOfThreeOutput // CUSTOM CODE FOR AN OUTPUT
	output.SecondPow = big.NewInt(0).Exp(inputStruct, big.NewInt(2), nil)
	output.ThirdPow = big.NewInt(0).Exp(inputStruct, big.NewInt(3), nil)
	output.FourthPow = big.NewInt(0).Exp(inputStruct, big.NewInt(4), nil)

	packedOutput, err := PackPowOfThreeOutput(output)
	if err != nil {
		return nil, remainingGas, err
	}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}
```

## Implementing simplFrac

For implementing `simplFrac`, note the following:

- The documentation for the `big` package will be of use here
- Remember to take care of the case when the denominator is 0

```go
func simplFrac(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) {
	if remainingGas, err = contract.DeductGas(suppliedGas, SimplFracGasCost); err != nil {
		return nil, 0, err
	}
	// attempts to unpack [input] into the arguments to the SimplFracInput.
// Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackSimplFracInput(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE _ = inputStruct // CUSTOM CODE OPERATES ON INPUT var output SimplFracOutput // CUSTOM CODE FOR AN OUTPUT // If denominator is 0, return both 0 if inputStruct.Denominator.Cmp(big.NewInt(0)) == 0 { output.SimplDenom = big.NewInt(0) output.SimplNum = big.NewInt(0) } else { // First, find common denominator var gcd big.Int gcd.GCD(nil, nil, inputStruct.Numerator, inputStruct.Denominator) // Now, simplify fraction output.SimplNum = big.NewInt(0).Div(inputStruct.Numerator, &gcd) output.SimplDenom = big.NewInt(0).Div(inputStruct.Denominator, &gcd) } packedOutput, err := PackSimplFracOutput(output) if err != nil { return nil, remainingGas, err } // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } ``` ## Final Solution Below is the final solution for the CalculatorPlus precompile: ```go // Code generated // This file is a generated precompile contract config with stubbed abstract functions. // The file is generated by a template. Please inspect every code and comment in this file before use. package calculatorplus import ( "errors" "fmt" "math/big" "github.com/ava-labs/subnet-evm/accounts/abi" "github.com/ava-labs/subnet-evm/precompile/contract" "github.com/ava-labs/subnet-evm/vmerrs" _ "embed" "github.com/ethereum/go-ethereum/common" ) const ( // Gas costs for each function. These are set to 1 by default. // You should set a gas cost for each function in your contract. // Generally, you should not set gas costs very low as this may cause your network to be vulnerable to DoS attacks. // There are some predefined gas costs in contract/utils.go that you can use. 
ModuloPlusGasCost uint64 = 1 /* SET A GAS COST HERE */ PowOfThreeGasCost uint64 = 1 /* SET A GAS COST HERE */ SimplFracGasCost uint64 = 1 /* SET A GAS COST HERE */ ) // CUSTOM CODE STARTS HERE // Reference imports to suppress errors from unused imports. This code and any unnecessary imports can be removed. var ( _ = abi.JSON _ = errors.New _ = big.NewInt _ = vmerrs.ErrOutOfGas _ = common.Big0 ) // Singleton StatefulPrecompiledContract and signatures. var ( // CalculatorplusRawABI contains the raw ABI of Calculatorplus contract. //go:embed contract.abi CalculatorplusRawABI string CalculatorplusABI = contract.ParseABI(CalculatorplusRawABI) CalculatorplusPrecompile = createCalculatorplusPrecompile() ) type ModuloPlusInput struct { Dividend *big.Int Divisor *big.Int } type ModuloPlusOutput struct { Multiple *big.Int Remainder *big.Int } type PowOfThreeOutput struct { SecondPow *big.Int ThirdPow *big.Int FourthPow *big.Int } type SimplFracInput struct { Numerator *big.Int Denominator *big.Int } type SimplFracOutput struct { SimplNum *big.Int SimplDenom *big.Int } // UnpackModuloPlusInput attempts to unpack [input] as ModuloPlusInput // assumes that [input] does not include selector (omits first 4 func signature bytes) func UnpackModuloPlusInput(input []byte) (ModuloPlusInput, error) { inputStruct := ModuloPlusInput{} err := CalculatorplusABI.UnpackInputIntoInterface(&inputStruct, "moduloPlus", input) return inputStruct, err } // PackModuloPlus packs [inputStruct] of type ModuloPlusInput into the appropriate arguments for moduloPlus. func PackModuloPlus(inputStruct ModuloPlusInput) ([]byte, error) { return CalculatorplusABI.Pack("moduloPlus", inputStruct.Dividend, inputStruct.Divisor) } // PackModuloPlusOutput attempts to pack given [outputStruct] of type ModuloPlusOutput // to conform the ABI outputs. 
func PackModuloPlusOutput(outputStruct ModuloPlusOutput) ([]byte, error) {
	return CalculatorplusABI.PackOutput("moduloPlus",
		outputStruct.Multiple,
		outputStruct.Remainder,
	)
}

// UnpackModuloPlusOutput attempts to unpack [output] as ModuloPlusOutput
// assumes that [output] does not include selector (omits first 4 func signature bytes)
func UnpackModuloPlusOutput(output []byte) (ModuloPlusOutput, error) {
	outputStruct := ModuloPlusOutput{}
	err := CalculatorplusABI.UnpackIntoInterface(&outputStruct, "moduloPlus", output)
	return outputStruct, err
}

func moduloPlus(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) {
	if remainingGas, err = contract.DeductGas(suppliedGas, ModuloPlusGasCost); err != nil {
		return nil, 0, err
	}
	// attempts to unpack [input] into the arguments to the ModuloPlusInput.
	// Assumes that [input] does not include selector
	// You can use unpacked [inputStruct] variable in your code
	inputStruct, err := UnpackModuloPlusInput(input)
	if err != nil {
		return nil, remainingGas, err
	}

	// CUSTOM CODE STARTS HERE
	_ = inputStruct // CUSTOM CODE OPERATES ON INPUT

	var output ModuloPlusOutput // CUSTOM CODE FOR AN OUTPUT
	// DivMod requires a non-nil third argument to hold the remainder,
	// so pass a fresh big.Int rather than the still-nil output.Remainder.
	output.Multiple, output.Remainder = big.NewInt(0).DivMod(inputStruct.Dividend, inputStruct.Divisor, big.NewInt(0))

	packedOutput, err := PackModuloPlusOutput(output)
	if err != nil {
		return nil, remainingGas, err
	}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}

// UnpackPowOfThreeInput attempts to unpack [input] into the *big.Int type argument
// assumes that [input] does not include selector (omits first 4 func signature bytes)
func UnpackPowOfThreeInput(input []byte) (*big.Int, error) {
	res, err := CalculatorplusABI.UnpackInput("powOfThree", input)
	if err != nil {
		return new(big.Int), err
	}
	unpacked := *abi.ConvertType(res[0], new(*big.Int)).(**big.Int)
	return unpacked,
nil } // PackPowOfThree packs [base] of type *big.Int into the appropriate arguments for powOfThree. // the packed bytes include selector (first 4 func signature bytes). // This function is mostly used for tests. func PackPowOfThree(base *big.Int) ([]byte, error) { return CalculatorplusABI.Pack("powOfThree", base) } // PackPowOfThreeOutput attempts to pack given [outputStruct] of type PowOfThreeOutput // to conform the ABI outputs. func PackPowOfThreeOutput(outputStruct PowOfThreeOutput) ([]byte, error) { return CalculatorplusABI.PackOutput("powOfThree", outputStruct.SecondPow, outputStruct.ThirdPow, outputStruct.FourthPow, ) } // UnpackPowOfThreeOutput attempts to unpack [output] as PowOfThreeOutput // assumes that [output] does not include selector (omits first 4 func signature bytes) func UnpackPowOfThreeOutput(output []byte) (PowOfThreeOutput, error) { outputStruct := PowOfThreeOutput{} err := CalculatorplusABI.UnpackIntoInterface(&outputStruct, "powOfThree", output) return outputStruct, err } func powOfThree(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, PowOfThreeGasCost); err != nil { return nil, 0, err } // attempts to unpack [input] into the arguments to the PowOfThreeInput. 
// Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackPowOfThreeInput(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE _ = inputStruct // CUSTOM CODE OPERATES ON INPUT var output PowOfThreeOutput // CUSTOM CODE FOR AN OUTPUT output.SecondPow = big.NewInt(0).Exp(inputStruct, big.NewInt(2), nil) output.ThirdPow = big.NewInt(0).Exp(inputStruct, big.NewInt(3), nil) output.FourthPow = big.NewInt(0).Exp(inputStruct, big.NewInt(4), nil) packedOutput, err := PackPowOfThreeOutput(output) if err != nil { return nil, remainingGas, err } // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } // UnpackSimplFracInput attempts to unpack [input] as SimplFracInput // assumes that [input] does not include selector (omits first 4 func signature bytes) func UnpackSimplFracInput(input []byte) (SimplFracInput, error) { inputStruct := SimplFracInput{} err := CalculatorplusABI.UnpackInputIntoInterface(&inputStruct, "simplFrac", input) return inputStruct, err } // PackSimplFrac packs [inputStruct] of type SimplFracInput into the appropriate arguments for simplFrac. func PackSimplFrac(inputStruct SimplFracInput) ([]byte, error) { return CalculatorplusABI.Pack("simplFrac", inputStruct.Numerator, inputStruct.Denominator) } // PackSimplFracOutput attempts to pack given [outputStruct] of type SimplFracOutput // to conform the ABI outputs. 
func PackSimplFracOutput(outputStruct SimplFracOutput) ([]byte, error) { return CalculatorplusABI.PackOutput("simplFrac", outputStruct.SimplNum, outputStruct.SimplDenom, ) } // UnpackSimplFracOutput attempts to unpack [output] as SimplFracOutput // assumes that [output] does not include selector (omits first 4 func signature bytes) func UnpackSimplFracOutput(output []byte) (SimplFracOutput, error) { outputStruct := SimplFracOutput{} err := CalculatorplusABI.UnpackIntoInterface(&outputStruct, "simplFrac", output) return outputStruct, err } func simplFrac(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, SimplFracGasCost); err != nil { return nil, 0, err } // attempts to unpack [input] into the arguments to the SimplFracInput. // Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackSimplFracInput(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE _ = inputStruct // CUSTOM CODE OPERATES ON INPUT var output SimplFracOutput // CUSTOM CODE FOR AN OUTPUT // If denominator is 0, return both 0 if inputStruct.Denominator.Cmp(big.NewInt(0)) == 0 { output.SimplDenom = big.NewInt(0) output.SimplNum = big.NewInt(0) } else { // First, find common denominator var gcd big.Int gcd.GCD(nil, nil, inputStruct.Numerator, inputStruct.Denominator) // Now, simplify fraction output.SimplNum = big.NewInt(0).Div(inputStruct.Numerator, &gcd) output.SimplDenom = big.NewInt(0).Div(inputStruct.Denominator, &gcd) } packedOutput, err := PackSimplFracOutput(output) if err != nil { return nil, remainingGas, err } // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } // createCalculatorplusPrecompile returns a StatefulPrecompiledContract with getters and setters for the 
precompile.
func createCalculatorplusPrecompile() contract.StatefulPrecompiledContract {
	var functions []*contract.StatefulPrecompileFunction

	abiFunctionMap := map[string]contract.RunStatefulPrecompileFunc{
		"moduloPlus": moduloPlus,
		"powOfThree": powOfThree,
		"simplFrac":  simplFrac,
	}

	for name, function := range abiFunctionMap {
		method, ok := CalculatorplusABI.Methods[name]
		if !ok {
			panic(fmt.Errorf("given method (%s) does not exist in the ABI", name))
		}
		functions = append(functions, contract.NewStatefulPrecompileFunction(method.ID, function))
	}

	// Construct the contract with no fallback function.
	statefulContract, err := contract.NewStatefulPrecompileContract(nil, functions)
	if err != nil {
		panic(err)
	}
	return statefulContract
}
```

# Setting the ConfigKey & ContractAddress (/academy/customizing-evm/08-calculator-precompile/05-set-configkey-contractaddr)

---
title: Setting the ConfigKey & ContractAddress
description: Learn how to set the ConfigKey, ContractAddress and register the Precompile.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

## ConfigKey

Just as with the MD5 precompile in the previous section, go to the `module.go` file and set a `ConfigKey`.

## Contract Address

In the same file, set a `ContractAddress`. Choose one that has not been used by other precompiles of earlier sections.

## Registration

Go to the `plugin/main.go` file and register the new precompile.

# Creating Genesis Block with precompileConfig (/academy/customizing-evm/08-calculator-precompile/06-create-genesis-block)

---
title: Creating Genesis Block with precompileConfig
description: Learn how to create a genesis block with precompileConfig.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

In order to create a new blockchain from our customized EVM, we have to define a genesis block, just as we did in an earlier section.
To incorporate the CalculatorPlus precompile as part of the genesis block, we need to create a new genesis file and include a key for the CalculatorPlus precompile in the configuration JSON. This key, in particular, is the `configKey` that we specified in the `module.go` file of the CalculatorPlus precompile in the previous section.

Copy over `Calculator.json` into a new file called `CalculatorPlus.json` and add the necessary key-value pair to the config.

```json title="CalculatorPlus.json"
{
  "config": {
    "chainId": 99999,
    "homesteadBlock": 0,
    "eip150Block": 0,
    "eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0",
    "eip155Block": 0,
    "eip158Block": 0,
    "byzantiumBlock": 0,
    "constantinopleBlock": 0,
    "petersburgBlock": 0,
    "istanbulBlock": 0,
    "muirGlacierBlock": 0,
    "subnetEVMTimestamp": 0,
    "feeConfig": {
      "gasLimit": 20000000,
      "minBaseFee": 1000000000,
      "targetGas": 100000000,
      "baseFeeChangeDenominator": 48,
      "minBlockGasCost": 0,
      "maxBlockGasCost": 10000000,
      "targetBlockRate": 2,
      "blockGasCostStep": 500000
    },
    "sha256Config": {
      "blockTimestamp": 0
    },
    "calculatorConfig": {
      "blockTimestamp": 0
    },
    "calculatorplusConfig": {
      "blockTimestamp": 0
    }
  },
  "alloc": {
    "8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": {
      "balance": "0x52B7D2DCC80CD2E4000000"
    }
  },
  "nonce": "0x0",
  "timestamp": "0x0",
  "extraData": "0x00",
  "gasLimit": "0x1312D00",
  "difficulty": "0x0",
  "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x0000000000000000000000000000000000000000",
  "number": "0x0",
  "gasUsed": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
}
```

# Testing Precompiles via Go (/academy/customizing-evm/08-calculator-precompile/07-testing-precompile)

---
title: Testing Precompiles via Go
description: Learn how to test your Precompiles using Golang.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

> Program testing can be used to show the presence of bugs, but never to show their absence. - Edsger W. Dijkstra

Let's once again examine the functions of the Calculator precompile and what each one does (in layman's terms):

- `add`: computes the sum of two numbers
- `nextTwo`: returns the next two numbers that come after the given input
- `repeat`: repeats a string N times

While we spent a good amount of time in the previous sections examining the Calculator Solidity interface and the Go logic, we forgot to do one thing: *test the functionality of Calculator*.

But why test? For some, it may seem pointless to test such basic functions. However, testing all functions is important because it helps validate our belief that the logic of our functions is what we intended it to be.

In this section, we will focus on utilizing three types of tests for our Calculator precompile:

1. Autogenerated Tests
2. Unit Tests
3. Fuzz Tests

# Modify Autogenerated Tests (/academy/customizing-evm/08-calculator-precompile/08-autogenerated-tests)

---
title: Modify Autogenerated Tests
description: Learn how to modify autogenerated tests in Go.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

The autogenerated tests help us make sure that our precompile throws an error when not enough gas is supplied.

To start, go to `calculator/contract_test.go`, where you will see the following:

```go
// Code generated
// This file is a generated precompile contract test with the skeleton of test functions.
// The file is generated by a template. Please inspect every code and comment in this file before use.
package calculator

import (
	"math/big"
	"testing"

	"github.com/ava-labs/subnet-evm/core/state"
	"github.com/ava-labs/subnet-evm/precompile/testutils"
	"github.com/ava-labs/subnet-evm/vmerrs"
	"github.com/ethereum/go-ethereum/common"
	"github.com/stretchr/testify/require"
)

// These tests are run against the precompile contract directly with
// the given input and expected output. They're just a guide to
// help you write your own tests. These tests are for general cases like
// allowlist, readOnly behaviour, and gas cost. You should write your own
// tests for specific cases.
var (
	tests = map[string]testutils.PrecompileTest{
		"insufficient gas for add should fail": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				// CUSTOM CODE STARTS HERE
				// populate test input here
				testInput := AddInput{}
				input, err := PackAdd(testInput)
				require.NoError(t, err)
				return input
			},
			SuppliedGas: AddGasCost - 1,
			ReadOnly:    false,
			ExpectedErr: vmerrs.ErrOutOfGas.Error(),
		},
		"insufficient gas for nextTwo should fail": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				// CUSTOM CODE STARTS HERE
				// set test input to a value here
				var testInput *big.Int
				input, err := PackNextTwo(testInput)
				require.NoError(t, err)
				return input
			},
			SuppliedGas: NextTwoGasCost - 1,
			ReadOnly:    false,
			ExpectedErr: vmerrs.ErrOutOfGas.Error(),
		},
		"insufficient gas for repeat should fail": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				// CUSTOM CODE STARTS HERE
				// populate test input here
				testInput := RepeatInput{}
				input, err := PackRepeat(testInput)
				require.NoError(t, err)
				return input
			},
			SuppliedGas: RepeatGasCost - 1,
			ReadOnly:    false,
			ExpectedErr: vmerrs.ErrOutOfGas.Error(),
		},
	}
)

// TestCalculatorEmptyRun tests the Run function of the precompile contract.
func TestCalculatorEmptyRun(t *testing.T) {
	// Run tests.
	for name, test := range tests {
		t.Run(name, func(t *testing.T) {
			test.Run(t, Module, state.NewTestStateDB(t))
		})
	}
}

func BenchmarkCalculatorEmpty(b *testing.B) {
	// Benchmark tests.
	for name, test := range tests {
		b.Run(name, func(b *testing.B) {
			test.Bench(b, Module, state.NewTestStateDB(b))
		})
	}
}
```

There is a lot to digest in `contract_test.go`, but the file can be divided into the following three sections:

- Unit Tests
- TestCalculatorEmptyRun
- BenchmarkCalculator

The autogenerated unit tests are stored in the `tests` variable, which is a mapping from strings (the descriptions of the test cases) to individual unit tests (`testutils.PrecompileTest`). Upon inspecting the keys of each pair, you will see that all three autogenerated unit tests are checking the same thing: that `add`, `nextTwo`, and `repeat` fail if not enough gas is provided. Each test expects to fail when supplying too little gas:

```go
{
	// ...
	SuppliedGas: RepeatGasCost - 1,
	ExpectedErr: vmerrs.ErrOutOfGas.Error(),
}
```

As it stands, if you attempt to build and run the test cases, you will not get very far, because there is one step left: passing in arguments for each autogenerated unit test. For each `testInput` variable defined in a unit test, add an argument. Which argument you choose does not matter; it just needs to be a valid argument.
An example of what arguments to put in can be found below:

```go
var (
	tests = map[string]testutils.PrecompileTest{
		"insufficient gas for add should fail": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				// CUSTOM CODE STARTS HERE
				// populate test input here
				testInput := AddInput{big.NewInt(1), big.NewInt(1)}
				input, err := PackAdd(testInput)
				require.NoError(t, err)
				return input
			},
			SuppliedGas: AddGasCost - 1,
			ReadOnly:    false,
			ExpectedErr: vmerrs.ErrOutOfGas.Error(),
		},
		"insufficient gas for nextTwo should fail": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				// CUSTOM CODE STARTS HERE
				// set test input to a value here
				// var testInput *big.Int
				testInput := big.NewInt(1)
				input, err := PackNextTwo(testInput)
				require.NoError(t, err)
				return input
			},
			SuppliedGas: NextTwoGasCost - 1,
			ReadOnly:    false,
			ExpectedErr: vmerrs.ErrOutOfGas.Error(),
		},
		"insufficient gas for repeat should fail": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				// CUSTOM CODE STARTS HERE
				// populate test input here
				testInput := RepeatInput{big.NewInt(1), "EGS"}
				input, err := PackRepeat(testInput)
				require.NoError(t, err)
				return input
			},
			SuppliedGas: RepeatGasCost - 1,
			ReadOnly:    false,
			ExpectedErr: vmerrs.ErrOutOfGas.Error(),
		},
	}
)
```

Now run the tests with the following command from the project root:

```bash
./scripts/build_test.sh
```

# Adding Unit Tests (/academy/customizing-evm/08-calculator-precompile/09-unit-tests)

---
title: Adding Unit Tests
description: Learn how to add unit tests.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

Although the autogenerated unit tests were useful for checking whether the gas requirements of our functions are enforced, we have yet to test the actual logic of our functions. We can add more unit tests by simply adding more mappings to the `tests` variable.
To start, let's define the tests that we want to write for each function within our Calculator precompile:

- `add`: check that **add(1, 2)** returns 3
- `nextTwo`: check that **nextTwo(1)** returns 2, 3
- `repeat`: check that **repeat(2, "EGS")** returns "EGSEGS"

With this in mind, let's add the three unit tests to the `tests` variable! Below is a code excerpt which shows the three unit tests incorporated:

```go
var (
	expectedNextTwoOutcome, _ = PackNextTwoOutput(NextTwoOutput{big.NewInt(2), big.NewInt(3)})
	expectedRepeatOutcome, _  = PackRepeatOutput("EGSEGS")
	expectedAddOutcome        = common.LeftPadBytes(big.NewInt(3).Bytes(), common.HashLength)

	tests = map[string]testutils.PrecompileTest{
		"insufficient gas for add should fail": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				// CUSTOM CODE STARTS HERE
				// populate test input here
				testInput := AddInput{big.NewInt(1), big.NewInt(1)}
				input, err := PackAdd(testInput)
				require.NoError(t, err)
				return input
			},
			SuppliedGas: AddGasCost - 1,
			ReadOnly:    false,
			ExpectedErr: vmerrs.ErrOutOfGas.Error(),
		},
		"insufficient gas for nextTwo should fail": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				// CUSTOM CODE STARTS HERE
				// set test input to a value here
				// var testInput *big.Int
				testInput := big.NewInt(1)
				input, err := PackNextTwo(testInput)
				require.NoError(t, err)
				return input
			},
			SuppliedGas: NextTwoGasCost - 1,
			ReadOnly:    false,
			ExpectedErr: vmerrs.ErrOutOfGas.Error(),
		},
		"insufficient gas for repeat should fail": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				// CUSTOM CODE STARTS HERE
				// populate test input here
				testInput := RepeatInput{big.NewInt(1), "EGS"}
				input, err := PackRepeat(testInput)
				require.NoError(t, err)
				return input
			},
			SuppliedGas: RepeatGasCost - 1,
			ReadOnly:    false,
			ExpectedErr: vmerrs.ErrOutOfGas.Error(),
		},
		"testing add": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				value1 := big.NewInt(1)
				value2 := big.NewInt(2)
				testInput :=
AddInput{value1, value2}
				input, err := PackAdd(testInput)
				require.NoError(t, err)
				return input
			},
			SuppliedGas: AddGasCost,
			ReadOnly:    true,
			ExpectedRes: expectedAddOutcome,
		},
		"testing nextTwo": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				testInput := big.NewInt(1)
				input, err := PackNextTwo(testInput)
				require.NoError(t, err)
				return input
			},
			SuppliedGas: NextTwoGasCost,
			ReadOnly:    true,
			ExpectedRes: expectedNextTwoOutcome,
		},
		"testing repeat": {
			Caller: common.Address{1},
			InputFn: func(t testing.TB) []byte {
				baseString := "EGS"
				timesToRepeat := big.NewInt(2)
				input, err := PackRepeat(RepeatInput{timesToRepeat, baseString})
				require.NoError(t, err)
				return input
			},
			SuppliedGas: RepeatGasCost,
			ReadOnly:    true,
			ExpectedRes: expectedRepeatOutcome,
		},
	}
)
```

# Adding Fuzz Tests (/academy/customizing-evm/08-calculator-precompile/10-fuzz-tests)

---
title: Adding Fuzz Tests
description: Learn how to add Fuzz tests.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

Unit tests are useful when comparing the outcome of a predetermined input with the expected outcome. If we wanted to check that our function works as expected for all given inputs, the only way to show this would be to test every possible input. However, this is not possible for types such as uint256, as it would require us to test 2^256 possible inputs. Even if we tested 1 input per second (or 2, 3, 4, 1000, etc.), this process would outlast the universe itself.

Rather than testing every possible input, we can strengthen our confidence in the expected behavior of a function by testing a random sample of inputs. By testing randomly, we can cover inputs that we wouldn't have originally thought of and see how our function behaves over a range of values. However, given that generating random inputs is not deterministic, we cannot use the `tests` variable itself.
Rather, we will need to leverage the `TestCalculatorRun` function:

```go
// TestCalculatorRun tests the Run function of the precompile contract.
func TestCalculatorRun(t *testing.T) {
	// Run tests.
	for name, test := range tests {
		t.Run(name, func(t *testing.T) {
			test.Run(t, Module, state.NewTestStateDB(t))
		})
	}
}
```

The `TestCalculatorRun` function, by default, iterates through each unit test defined in the `tests` variable and runs those tests. However, we are not limited to using `TestCalculatorRun` just for the unit tests in `tests`. We can expand the functionality of `TestCalculatorRun` by adding our fuzz tests.

In particular, we can define the following logic for `TestCalculatorRun`:

- Iterate N times
- For each iteration, pick two random numbers in the range from 0 to n
- After picking two random numbers, create a unit test with the two random numbers as inputs and execute said unit test

With this logic in mind, below is the code for how we would go about implementing fuzzing in `TestCalculatorRun`:

```go
// TestCalculatorRun tests the Run function of the precompile contract.
func TestCalculatorRun(t *testing.T) {
	// Run tests.
	for name, test := range tests {
		t.Run(name, func(t *testing.T) {
			test.Run(t, Module, state.NewTestStateDB(t))
		})
	}

	// Defining our own test cases here
	N := 1_000
	n := new(big.Int).Exp(big.NewInt(2), big.NewInt(128), nil)

	// Fuzzing N times
	for i := 0; i < N; i++ {
		// Generating the random inputs here
		randomInt1, err := rand.Int(rand.Reader, n)
		require.NoError(t, err)
		randomInt2, err := rand.Int(rand.Reader, n)
		require.NoError(t, err)

		// Expected outcome
		expectedRandOutcome := common.LeftPadBytes(new(big.Int).Add(randomInt1, randomInt2).Bytes(), common.HashLength)

		// Pack add input
		randTestInput := AddInput{randomInt1, randomInt2}
		randInput, err := PackAdd(randTestInput)
		require.NoError(t, err)

		randTest := testutils.PrecompileTest{
			Caller:      common.Address{1},
			Input:       randInput,
			SuppliedGas: AddGasCost,
			ReadOnly:    true,
			ExpectedRes: expectedRandOutcome,
		}

		t.Run("Testing random sum!", func(t *testing.T) {
			randTest.Run(t, Module, state.NewTestStateDB(t))
		})
	}
}
```

# Testing CalculatorPlus (/academy/customizing-evm/08-calculator-precompile/11-test-calculatorplus)

---
title: Testing CalculatorPlus
description: Learn how to test CalculatorPlus precompile.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

import { Callout } from 'fumadocs-ui/components/callout';

Now that we have gone through an example of how to test the Calculator precompile, it is time for you to develop your own tests for CalculatorPlus. Rather than explicitly telling you what test cases to write, we decided to leave that to you. After all, testing is subjective - a test suite that fits one project's security needs might be inadequate for another. However, if you want ideas for what to test, here are some recommendations:

- Unit Tests: create some unit tests involving values that you come up with yourself. Ideally, write 3-5 unit tests for each function.
- Fuzz Tests: test more than 1000 random inputs for each function For both unit and fuzz tests, make sure to account for the case when the denominator is equal to 0 in `simplFrac`. # What are Stateful Precompiles? (/academy/customizing-evm/09-stateful-precompiles/00-intro) --- title: What are Stateful Precompiles? description: Learn about Stateful Precompiles in Avalanche L1 EVM. updated: 2024-05-31 authors: [ashucoder9] icon: Book --- When building the MD5 and Calculator precompiles, we emphasized their behavior. We focused on building precompiles that developers could call in Solidity to perform some algorithm and then simply return a result. However, one aspect that we have yet to explore is the statefulness of precompiles. Simply put, precompiles can store data which is persistent. To understand how this is possible, recall the interface that our precompile needed to implement: ```go // StatefulPrecompiledContract is the interface for executing a precompiled contract type StatefulPrecompiledContract interface { // Run executes the precompiled contract. Run(accessibleState AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) } ``` We also examined all parameters except for the `AccessibleState` parameter. As the name suggests, this parameter lets us access the blockchain's state. Looking at the interface of `AccessibleState`, we have the following: ```go // AccessibleState defines the interface exposed to stateful precompile contracts type AccessibleState interface { GetStateDB() StateDB GetBlockContext() BlockContext GetSnowContext() *snow.Context } ``` Looking closer, we see that `AccessibleState` gives us access to **StateDB**, which is used to store the state of the EVM. However, as we will see throughout this section, `AccessibleState` also gives us access to other useful parameters, such as `BlockContext` and `snow.Context`. 
## StateDB

The parameter we will use the most when it comes to stateful precompiles, **StateDB**, is a key-value mapping that maps:

- **Key**: a tuple consisting of an address and a storage key of the type Hash
- **Value**: any data encoded in a Hash, also called a word in the EVM

A Hash in go-ethereum is a 32-byte array. In this context, we do not refer to hashing in the cryptographic sense. Rather, "hashing a value" means encoding it into a Hash, a 32-byte array usually represented in hexadecimal digits in Ethereum.

```go
const (
	// HashLength is the expected length of the hash
	HashLength = 32
)

// Hash represents 32 bytes of arbitrary data.
type Hash [HashLength]byte

// Example of data encoded in a Hash:
// 0x00000000000000000000000000000000000000000048656c6c6f20576f726c64
```

Below is the interface of StateDB:

```go
// StateDB is the interface for accessing EVM state
type StateDB interface {
	GetState(common.Address, common.Hash) common.Hash
	SetState(common.Address, common.Hash, common.Hash)

	SetNonce(common.Address, uint64)
	GetNonce(common.Address) uint64

	GetBalance(common.Address) *big.Int
	AddBalance(common.Address, *big.Int)

	CreateAccount(common.Address)
	Exist(common.Address) bool

	AddLog(addr common.Address, topics []common.Hash, data []byte, blockNumber uint64)
	GetPredicateStorageSlots(address common.Address) ([]byte, bool)

	Suicide(common.Address) bool
	Finalise(deleteEmptyObjects bool)

	Snapshot() int
	RevertToSnapshot(int)
}
```

As you can see in the interface of the **StateDB**, the two functions for writing to and reading from the EVM state both work with the Hash type.

1. The function **GetState** takes an address and a Hash (the storage key) as inputs and returns the Hash stored at that slot.
2. The function **SetState** takes an address, a Hash (the storage key), and another Hash (the data to be stored) as inputs.
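To make the key-value shape concrete, here is a toy in-memory stand-in for StateDB. All names here are hypothetical and stdlib-only; the real StateDB is backed by the EVM's storage trie, not a Go map.

```go
package main

import "fmt"

const HashLength = 32

// Hash mirrors go-ethereum's 32-byte word type.
type Hash [HashLength]byte

// Address mirrors Ethereum's 20-byte account address.
type Address [20]byte

// slotKey is the (address, storage key) tuple that identifies one slot.
type slotKey struct {
	addr Address
	key  Hash
}

// MockStateDB is a toy stand-in for the EVM's StateDB.
type MockStateDB struct {
	slots map[slotKey]Hash
}

func NewMockStateDB() *MockStateDB {
	return &MockStateDB{slots: make(map[slotKey]Hash)}
}

// GetState returns the word stored at (addr, key); an unset slot
// reads as the zero Hash, just like in the EVM.
func (db *MockStateDB) GetState(addr Address, key Hash) Hash {
	return db.slots[slotKey{addr, key}]
}

// SetState writes a 32-byte word into the slot identified by (addr, key).
func (db *MockStateDB) SetState(addr Address, key Hash, value Hash) {
	db.slots[slotKey{addr, key}] = value
}

func main() {
	db := NewMockStateDB()
	var addr Address
	var key, value Hash
	copy(value[HashLength-5:], "hello") // right-align the data in the word
	db.SetState(addr, key, value)
	fmt.Printf("%x\n", db.GetState(addr, key))
}
```

The map illustrates why every value must first be encoded as a 32-byte word: the slot granularity of the EVM state is exactly one Hash.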
We can also see that the **StateDB** interface lets us read and update account balances of the native token (via GetBalance/AddBalance), check whether an account exists (via Exist), and call some other methods.

## BlockContext

`BlockContext` provides info about the current block. In particular, we can get the current block number and the current block timestamp. Below is the interface of `BlockContext`:

```go
// BlockContext defines an interface that provides information to a stateful precompile about the current block.
// The BlockContext may be provided during both precompile activation and execution.
type BlockContext interface {
	Number() *big.Int
	Timestamp() *big.Int
}
```

An example of how we could leverage `BlockContext` in precompiles: *building a voting precompile that only accepts votes within a certain time frame*.

## snow.Context

`snow.Context` gives us info regarding the environment of the precompile. In particular, `snow.Context` tells us about the state of the network the precompile is hosted on. Below is the interface of `snow.Context`:

```go
// Context is information about the current execution.
// [NetworkID] is the ID of the network this context exists within.
// [ChainID] is the ID of the chain this context exists within.
// [NodeID] is the ID of this node
type Context struct {
	NetworkID uint32
	SubnetID  ids.ID
	ChainID   ids.ID
	NodeID    ids.NodeID
	PublicKey *bls.PublicKey

	XChainID    ids.ID
	CChainID    ids.ID
	AVAXAssetID ids.ID

	Log          logging.Logger
	Lock         sync.RWMutex
	Keystore     keystore.BlockchainKeystore
	SharedMemory atomic.SharedMemory
	BCLookup     ids.AliaserReader
	Metrics      metrics.OptionalGatherer

	WarpSigner warp.Signer

	// snowman++ attributes
	ValidatorState validators.State // interface for P-Chain validators

	// Chain-specific directory where arbitrary data can be written
	ChainDataDir string
}
```

# Interacting with StringStore Precompile (/academy/customizing-evm/09-stateful-precompiles/01-interacting-with-precompile)

---
title: Interacting with StringStore Precompile
description: Learn how to interact with StringStore Stateful precompile.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

Rather than understanding the statefulness of precompiles only in theory, we can also play around with an example of a stateful precompile to learn how they work in practice. In this section, we'll interact with the **StringStore** precompile, a precompiled smart contract that stores a string.

## Checking the Genesis JSON

As part of all Avalanche Academy branches, your Precompile-EVM should include the **StringStore/** folder along with all other relevant files for StringStore to be recognized by Precompile-EVM. This includes a genesis JSON for StringStore.

Go to `tests/precompile/genesis/` and double-check that you have a `StringStore.json` file in said directory. This JSON file will instantiate both the **SHA256** precompile and **StringStore**.

```json title="StringStore.json"
{
  "config": {
    "chainId": 99999,
    // ...
    "stringStoreConfig": {
      "blockTimestamp": 0,
      "defaultString": "Cornell"
    }
  },
  // ...
}
```

## Building the EVM with StringStore Precompile

The **avalanche-academy-start** branch already contains the StringStore and SHA256 precompiles.
If you are working from that branch, you can simply use the built binary from your latest exercise without making any changes. To verify, check whether the **stringstore** directory is in your workspace and whether the precompile is registered in the `plugin/main.go` file. If not, switch to the **avalanche-academy-start** branch and build the VM there.

## Start the Avalanche Network

Use the Avalanche-CLI to start the server and the network. Use the provided genesis file `StringStore.json` mentioned above when you start the network. If all goes well, you will have successfully deployed a blockchain containing both the StringStore and SHA256 precompiles.

## Connecting Core

Similar to previous chapters, navigate to the **Add Network** section in the Core Wallet. You can find the RPC URL in the Avalanche-CLI logs or by executing the command:

`avalanche blockchain list --deployed`

Note: Make sure the RPC URL ends with **/rpc**. The RPC URL should look something like this:

http://127.0.0.1:9650/ext/bc/P9nKPGPoAfFGkdvD3Ac6YxZieaG8ahpbR9xZosrWNPbJCzByu/rpc

Once you have added the blockchain network, switch Core Wallet to your blockchain.

## Interact through Remix

We will now load the Solidity interface that lets us interact with the **StringStore** precompile. To do this, open the link below, which will open a Remix workspace containing the StringStore precompile:

[Workspace](https://remix.ethereum.org/#url=https://github.com/ava-labs/precompile-evm/blob/avalanche-academy-start/contracts/contracts/interfaces/IStringStore.sol&lang=en&optimize=false&runs=200&evmVersion=null&version=soljson-v0.8.26+commit.8a97fa7a.js)

As usual, we will need to compile our Solidity interface.

1. Click the **Solidity** logo on the left sidebar.
2. On the new page, you will see a **Compile IStringStore.sol** button. After clicking the button, a green checkmark should appear next to the Solidity logo.
3. Next, go to the **Environment** tab and select the **Injected Provider** option.
If successful, a text saying **Custom [99999] Network** will appear below. If not, change your network.

4. Enter the precompile address (find it in `precompile/stringstore/module.go`) and click **At Address**.

First, we will call the `getString` function. By default, `getString` will return whatever was specified in the genesis JSON. Since we set our StringStore precompile to store the string **Cornell**, it'll return this value.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/customizing-evm/48-THf1urUZpFdGgScCm6ZhNWVQITtsiq.png)

As you might have noticed, we can also set the string that **StringStore** stores. For example, if we wanted to change the string to Avalanche, we would type Avalanche in the box next to the `setString` method and press the `setString` button; the Remix terminal would then display a message confirming the success of the transaction. If we call `getString` again, we will see that the string has been changed to Avalanche.

Congrats, you've just interacted with the stateful **StringStore** precompile 🎉

In contrast to the precompiles we built earlier, the StringStore precompile has access to the EVM state. This way, we can utilize precompiles not only to perform calculations, but also to persist data to the EVM state. This allows us to move even larger portions of our dApp from the Solidity smart contract layer to the precompile layer.

# Creating Counter Precompile (/academy/customizing-evm/10-stateful-counter-precompile/00-intro)

---
title: Creating Counter Precompile
description: Learn how to create a Stateful Counter Precompile in Avalanche L1 EVM.
updated: 2024-05-31
authors: [ashucoder9]
icon: Book
---

## What We Are Building

It's time to build our first stateful precompile. In particular, we'll build a counter that keeps track of an integer.
Our Counter precompile will have the following logic:

- Users are able to get the current value of the counter
- Users are able to set a new value for the counter
- Users are able to increment the value of the counter

To help you understand, we've provided a reference stateful precompile: **StringStore**. As the name suggests, StringStore stores a string that users can change. This string is stored in the EVM state.

## Overview of Steps

Compared to the process before, we must also add tests for our precompile. Here's a quick overview of the steps we'll follow:

1. Create a Solidity interface for the precompile and generate the ABI
2. Generate the precompile Go boilerplate files
3. Write the precompile code in Go with access to the EVM state
4. Set the initial state in the configurator and register the precompile
5. Add and run tests
6. Build and run your customized EVM
7. Connect Remix to your customized EVM and interact with the counter

This tutorial will help you create more complex precompiles. Let's begin!

# Create Solidity Interface (/academy/customizing-evm/10-stateful-counter-precompile/01-create-solidity-interface)

---
title: Create Solidity Interface
description: Learn how to create a solidity interface for your counter precompile.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

Now, we'll create a Solidity interface for our Stateful Precompile.

## StringStore Solidity Interface

To start, let's look at the interface of the **StringStore** precompile:

```solidity title="contracts/contracts/interfaces/IStringStore.sol"
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.0;

interface IStringStore {
    function getString() external view returns (string memory value);
    function setString(string memory value) external;
}
```

As seen above, we have the following two functions defined:

1. `getString`: retrieves the string from the stateful precompile
2.
`setString`: sets a string in the stateful precompile ## Create Solidity Interface for Counter Create an interface for the counter in the same directory called `ICounter.sol`. Your interface should have the following three functions declared: 1. `getCounter`: returns the counter value from the stateful precompile 2. `incrementCounter`: when called, this function increments the current counter 3. `setCounter`: takes in a value, and sets it as the counter of the stateful precompile For any argument/return value, make sure it is named as `value`. ```solidity // SPDX-License-Identifier: MIT pragma solidity >=0.8.0; interface ICounter { function getCounter() external view returns (uint value); function incrementCounter() external; function setCounter(uint value) external; } ``` ## Generate the ABI Now that we have an interface of our precompile, let's create an ABI of our Solidity interface. Open the terminal (control + \`), change to the `/contracts` directory, and run the command to compile the solidity interface to ABI: ```bash # Move to contracts directory cd contracts # Compile ICounter.sol to ABI npx solc@latest --abi ./contracts/interfaces/ICounter.sol -o ./abis --base-path . --include-path ./node_modules # Rename mv ./abis/contracts_interfaces_ICounter_sol_ICounter.abi ./abis/ICounter.abi ``` # Store Data in EVM State (/academy/customizing-evm/10-stateful-counter-precompile/02-store-data-in-evm) --- title: Store Data in EVM State description: Learn how to store data in the EVM state. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- Like all stateful machines, the EVM provides us with a way to save data. In particular, the EVM exposes a key-value mapping we can leverage to create stateful precompiles. 
The specifics of this mapping are as follows:

- **Key**: a tuple consisting of an address and a storage key of the type Hash
- **Value**: any data encoded in a Hash, also called a word in the EVM

## Storage Slots

Each storage slot is uniquely identified by the combination of an address and a storage key. To keep things organized, smart contracts and precompiles use their own address and a storage key to store data related to them.

Look at the reference implementation `StringStore`. The storage key is defined in the second variable group in `contract.go`:

```go
// Singleton StatefulPrecompiledContract and signatures.
var (
	// StringStoreRawABI contains the raw ABI of StringStore contract.
	//go:embed contract.abi
	StringStoreRawABI string

	StringStoreABI = contract.ParseABI(StringStoreRawABI)

	StringStorePrecompile = createStringStorePrecompile()

	// Key that defines where our string will be stored
	storageKeyHash = common.BytesToHash([]byte("storageKey"))
)
```

We can use any string as our storage key. We convert it to a byte array and then to a 32-byte Hash, usually displayed in hexadecimal representation.

> `common.BytesToHash` does not hash the string; it merely converts it to a 32-byte array represented in hex.

If we store multiple variables in the EVM state, we will define multiple keys here. Since we only use a single storage slot in this example, we can just call it `storageKey`.

At this point, we are not restricted in what state we can access from the precompile. We have access to the entire `stateDB` and can modify the entire EVM state, including data other precompiles saved to state or the balances of accounts. **This is very powerful, but also potentially dangerous**.

## Converting the Value to Hash Type

Since the StateDB only stores data of the type *Hash*, we need to convert all values to the type *Hash* before passing them to the **stateDB**.
### Converting Numbers

To convert numbers of type `big.Int`, we can utilize a function from the common package:

```go
valueHash := common.BigToHash(value)
```

Because `big.Int` already provides a big-endian byte representation, the conversion is straightforward for any value that fits in 32 bytes (larger values are cropped from the left, as the comment warns):

```go
// BigToHash sets byte representation of b to hash.
// If b is larger than len(h), b will be cropped from the left.
func BigToHash(b *big.Int) Hash {
	return BytesToHash(b.Bytes())
}
```

### Converting Strings

Converting strings is more challenging, since they are variable in length. Let's see an example:

```go
input := "Hello World"
```

To start, let's convert input into type bytes:

```go
inputAsBytes := []byte(input)
// [72 101 108 108 111 32 87 111 114 108 100]
```

This is how you would convert input to type bytes in Go. Notice that the comment in the code snippet is the byte representation of input, where each integer represents a byte.

Right now, inputAsBytes is of length 11. We want it to be of length 32. Therefore, we pad `inputAsBytes`, adding however many zeros to the front until `inputAsBytes` has 32 bytes:

```go
inputPadded := common.LeftPadBytes(inputAsBytes, common.HashLength)
// [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 72 101 108 108 111 32 87 111 114 108 100]
```

In Go, we do this using the function common.LeftPadBytes, which takes the byte array to be padded and the desired length. The desired length is supplied with the common.HashLength constant, which has the value 32. As seen in the comment, inputPadded is now of length 32.

In the next step, we convert the bytes to a Hash with the function `common.BytesToHash`:

```go
inputHash := common.BytesToHash(inputPadded)
// 0x00000000000000000000000000000000000000000048656c6c6f20576f726c64
```

## Helper Functions

To make the code reusable and keep the precompile functions clean, it makes sense to create helper functions for converting and storing the data.
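Before looking at the real helpers, here is a stdlib-only sketch of the conversions they perform. `leftPadBytes` and `trimLeftZeroes` reimplement the behavior of go-ethereum's `common.LeftPadBytes` and `common.TrimLeftZeroes` locally so the snippet runs without any dependencies:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

const HashLength = 32

// leftPadBytes mimics common.LeftPadBytes: zero-pad b on the left to length l.
func leftPadBytes(b []byte, l int) []byte {
	if len(b) >= l {
		return b
	}
	padded := make([]byte, l)
	copy(padded[l-len(b):], b)
	return padded
}

// trimLeftZeroes mimics common.TrimLeftZeroes: drop leading zero bytes,
// reversing the padding when the value is read back.
func trimLeftZeroes(b []byte) []byte {
	i := 0
	for i < len(b) && b[i] == 0 {
		i++
	}
	return b[i:]
}

func main() {
	input := "Hello World"
	inputAsBytes := []byte(input)
	inputPadded := leftPadBytes(inputAsBytes, HashLength)

	// The 32-byte word as it would be stored in the EVM state.
	fmt.Println("0x" + hex.EncodeToString(inputPadded))

	// Reading back: trim the zero padding to recover the original string.
	fmt.Println(string(trimLeftZeroes(inputPadded)))
}
```

Note the caveat implied by the trimming: a string that legitimately begins with zero bytes cannot survive this round trip, which is fine for the human-readable strings used here.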
The first one we'll see is StoreString: ```go // StoreString sets the value of the storage key "storageKey" in the contract storage. func StoreString(stateDB contract.StateDB, newValue string) { newValuePadded := common.LeftPadBytes([]byte(newValue), common.HashLength) newValueHash := common.BytesToHash(newValuePadded) stateDB.SetState(ContractAddress, storageKeyHash, newValueHash) } ``` `StoreString` takes in the underlying key-value mapping, known as **StateDB**, and the new value, and updates the current string stored to be the new one. `StoreString` takes care of any type conversions and state management for us. Focusing now on `setString`, defining the logic is relatively easy. All we need to do is pass in **StateDB** and our string to `StoreString`. Thus, we have the following: ```go func setString(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, SetStringGasCost); err != nil { return nil, 0, err } if readOnly { return nil, remainingGas, vmerrs.ErrWriteProtection } // attempts to unpack [input] into the arguments to the SetStringInput. // Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackSetStringInput(input) if err != nil { return nil, remainingGas, err } // CUSTOM CODE STARTS HERE // Get K-V Mapping currentState := accessibleState.GetStateDB() // Set the value StoreString(currentState, inputStruct) // this function does not return an output, leave this one as is packedOutput := []byte{} // Return the packed output and the remaining gas return packedOutput, remainingGas, nil } ``` # Implementing setCounter (/academy/customizing-evm/10-stateful-counter-precompile/03-implement-set-counter) --- title: Implementing setCounter description: Learn how to implement the setCounter method. 
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

Having seen how strings are stored in `StringStore`, it's time for us to store integers with Counter. The thought process for this section can be outlined as follows:

- Define the storage key hash for our counter
- Implement `StoreCounterValue`, a helper function which acts like `StoreString` from the previous section
- Implement `setCounter`

```go title="contract.go"
// storageKeyHash
storageKeyHash = common.BytesToHash([]byte("counterValue"))

// StoreCounterValue sets the value of the storage key in the contract storage.
func StoreCounterValue(stateDB contract.StateDB, value *big.Int) {
	// Convert uint to left padded bytes
	inputPadded := common.LeftPadBytes(value.Bytes(), 32)
	inputHash := common.BytesToHash(inputPadded)

	stateDB.SetState(ContractAddress, storageKeyHash, inputHash)
}

// setCounter sets the counter value in the contract storage.
func setCounter(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) {
	if remainingGas, err = contract.DeductGas(suppliedGas, SetCounterGasCost); err != nil {
		return nil, 0, err
	}
	if readOnly {
		return nil, remainingGas, vmerrs.ErrWriteProtection
	}
	// attempts to unpack [input] into the arguments to the SetCounterInput.
// Assumes that [input] does not include selector
	// You can use unpacked [inputStruct] variable in your code
	inputStruct, err := UnpackSetCounterInput(input)
	if err != nil {
		return nil, remainingGas, err
	}

	// CUSTOM CODE STARTS HERE

	// Get the current state
	currentState := accessibleState.GetStateDB()

	// Set the value
	StoreCounterValue(currentState, inputStruct)

	// this function does not return an output, leave this one as is
	packedOutput := []byte{}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}
```

# Read Data From EVM State (/academy/customizing-evm/10-stateful-counter-precompile/04-read-date-from-evm)

---
title: Read Data From EVM State
description: Learn how to read the data from EVM state.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

In the section about storing data in the EVM, we learned how to store our string in the EVM state. An equally important skill is reading data from the EVM state. In this section, we'll learn how to retrieve our string from the EVM state.

## Defining Helper Function

Just like with setting the string in the EVM state, there are some conversions we'll have to perform on our string. In particular, we'll need to decode the value stored in StateDB to get our original string back. Thus, our helper function can be defined as follows:

```go
// GetString returns the value of the storage key "storageKey" in the contract storage,
// with leading zeroes trimmed.
func GetString(stateDB contract.StateDB) string {
	// Get the value stored at the storage key
	value := stateDB.GetState(ContractAddress, storageKeyHash)
	return string(common.TrimLeftZeroes(value.Bytes()))
}
```

With our helper function defined, we can implement the logic for `getString`. This will consist of retrieving the underlying key-value mapping and then passing it to `GetString`.
```go
func getString(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) {
	if remainingGas, err = contract.DeductGas(suppliedGas, GetStringGasCost); err != nil {
		return nil, 0, err
	}
	// no input provided for this function

	// CUSTOM CODE STARTS HERE
	var output string // CUSTOM CODE FOR AN OUTPUT

	currentState := accessibleState.GetStateDB()
	output = GetString(currentState)

	packedOutput, err := PackGetStringOutput(output)
	if err != nil {
		return nil, remainingGas, err
	}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}
```

# Implementing getCounter & increment (/academy/customizing-evm/10-stateful-counter-precompile/05-implement-getcounter-increment)

---
title: Implementing getCounter & increment
description: Learn how to implement getCounter and increment.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

Having seen how to retrieve strings from the EVM state with the StringStore precompile, we are now ready to implement `getCounter`. Also, now that we are familiar with reading and writing to the EVM state, we can implement increment, which requires both read and write operations.

## Implementing getCounter

For `getCounter`, the following thought process is helpful:

- Create a helper function `GetCounterValue`, which takes in the current StateDB and returns the integer stored at the `storageKeyHash`
- In `getCounter`, get the current StateDB and pass it to `GetCounterValue`

```go
// GetCounterValue gets the value of the storage key in the contract storage.
func GetCounterValue(stateDB contract.StateDB) *big.Int {
	// Get the value
	value := stateDB.GetState(ContractAddress, storageKeyHash)

	// Convert bytes to uint
	return new(big.Int).SetBytes(value.Bytes())
}

func getCounter(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) {
	if remainingGas, err = contract.DeductGas(suppliedGas, GetCounterGasCost); err != nil {
		return nil, 0, err
	}
	// no input provided for this function

	// Get the current state
	currentState := accessibleState.GetStateDB()

	// Get the stored counter value
	value := GetCounterValue(currentState)

	packedOutput, err := PackGetCounterOutput(value)
	if err != nil {
		return nil, remainingGas, err
	}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}
```

## Implementing increment

For increment, the following thought process is helpful:

- Get the current StateDB and pass it to GetCounterValue
- Once you have the current counter, increment it by one
- Store the new counter with StoreCounterValue

```go
func incrementCounter(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) {
	if remainingGas, err = contract.DeductGas(suppliedGas, IncrementCounterGasCost); err != nil {
		return nil, 0, err
	}
	if readOnly {
		return nil, remainingGas, vmerrs.ErrWriteProtection
	}
	// no input provided for this function

	// CUSTOM CODE STARTS HERE

	// Get the current state
	currentState := accessibleState.GetStateDB()

	// Get the value of the counter
	value := GetCounterValue(currentState)

	// Set the value
	StoreCounterValue(currentState, value.Add(value, big.NewInt(1)))

	// this function does not return an output, leave this one as is
	packedOutput := []byte{}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}
```
# Setting Base Gas Fees of Your Precompile (/academy/customizing-evm/10-stateful-counter-precompile/06-setting-base-gasfees)

---
title: Setting Base Gas Fees of Your Precompile
description: Learn how to set the base gas fees of your precompile.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

Gas is used for spam prevention. If our precompile is accessible to everyone on our Avalanche L1, it is important to set the gas costs in a way that lets users economically utilize the precompile in their apps, yet prevents spammers from flooding our blockchain with calls to it.

In this section, we'll modify the default gas costs of our Calculator precompile. By default, all user-defined functions have a gas cost of 1. While this is good news for the average user, letting users call computationally expensive functions almost for free can leave our blockchain vulnerable to Denial-of-Service (DoS) attacks.

We can find the default gas cost values for our functions in `calculator/contract.go`. Following the import statements, you should see:

```go title="calculator/contract.go"
const (
	// Gas costs for each function. These are set to 1 by default.
	// You should set a gas cost for each function in your contract.
	// Generally, you should not set gas costs very low as this may cause your network to be vulnerable to DoS attacks.
	// There are some predefined gas costs in contract/utils.go that you can use.
	AddGasCost     uint64 = 1 /* SET A GAS COST HERE */
	NextTwoGasCost uint64 = 1 /* SET A GAS COST HERE */
	RepeatGasCost  uint64 = 1 /* SET A GAS COST HERE */
)
```

The constants `AddGasCost`, `NextTwoGasCost`, and `RepeatGasCost` are the gas costs of the `add`, `nextTwo`, and `repeat` functions, respectively. As you can see, they are currently set to 1. Changing a gas cost is as easy as changing the value itself. As an example, change the gas costs to 7.
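The enforcement itself happens at the top of every precompile function via `contract.DeductGas`. A minimal stand-in (hypothetical name, stdlib only) clarifies what raising a gas constant actually does:

```go
package main

import (
	"errors"
	"fmt"
)

var errOutOfGas = errors.New("out of gas")

// deductGas sketches what contract.DeductGas does: fail if the caller
// supplied less gas than the function costs, otherwise return the rest.
func deductGas(suppliedGas, cost uint64) (uint64, error) {
	if suppliedGas < cost {
		return 0, errOutOfGas
	}
	return suppliedGas - cost, nil
}

func main() {
	// With AddGasCost raised from 1 to 7, a call supplying 100 gas leaves 93.
	remaining, err := deductGas(100, 7)
	fmt.Println(remaining, err)

	// A caller supplying only 3 gas is rejected before any work happens.
	_, err = deductGas(3, 7)
	fmt.Println(err)
}
```

Because the deduction happens before the function body runs, a higher cost directly limits how many calls an attacker can afford per block.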
# Setting ConfigKey and ContractAddress (/academy/customizing-evm/10-stateful-counter-precompile/07-set-configkey-contractaddr)

---
title: Setting ConfigKey and ContractAddress
description: Learn how to set the ConfigKey and ContractAddress, and Register the Precompile.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

## ConfigKey

Just as with the MD5 precompile in the previous section, go to the `module.go` file and set a ConfigKey.

## Contract Address

In the same file, set a ContractAddress. Choose one that has not been used by the precompiles of earlier sections.

# Initial State (/academy/customizing-evm/10-stateful-counter-precompile/08-initial-state)

---
title: Initial State
description: Setting the Initial State.
updated: 2024-05-31
authors: [ashucoder9]
icon: BookOpen
---

Throughout this chapter, we've covered storing variables in the EVM state. However, all of this was done through the StringStore/Counter Solidity interfaces. An important part of stateful precompiles is the ability to initialize the values stored by our precompiled contracts.

In the next few sections, we'll see how to initialize the values of our precompiled contracts, either by setting them manually in Go or by allowing them to be defined in the `genesis.json` file.

# Defining Default Values via Golang (/academy/customizing-evm/10-stateful-counter-precompile/09-define-default-values-via-go)

---
title: Defining Default Values via Golang
description: Learn how to set the default values of precompiled contracts using Go.
updated: 2024-05-31
authors: [ashucoder9]
icon: Terminal
---

In this section, we'll cover defining default values of precompiled contracts using Go. We'll refer to StringStore for this section.

## Configure Function

To start, go to `stringstore/module.go` and scroll to the end of the file. There, you will find the Configure function:

```go
// Configure configures [state] with the given [cfg] precompileconfig.
// This function is called by the EVM once per precompile contract activation. // You can use this function to set up your precompile contract's initial state, // by using the [cfg] config and [state] stateDB. func (*configurator) Configure(chainConfig precompileconfig.ChainConfig, cfg precompileconfig.Config, state contract.StateDB, blockContext contract.ConfigurationBlockContext) error { config, ok := cfg.(*Config) if !ok { return fmt.Errorf("incorrect config %T: %v", config, config) } // CUSTOM CODE STARTS HERE return nil } ``` Configure handles the initialization of a precompiled contract. We want to use Configure to define the default value for the string we are storing. But how? Configure gives us access to the StateDB (as one of the function parameters) and also lets us call any functions defined in `contract.go`. For example, if we wanted the default string to be "EGS," then we would just have to write one line of code: ```go // Configure configures [state] with the given [cfg] precompileconfig. // This function is called by the EVM once per precompile contract activation. // You can use this function to set up your precompile contract's initial state, // by using the [cfg] config and [state] stateDB. func (*configurator) Configure(chainConfig precompileconfig.ChainConfig, cfg precompileconfig.Config, state contract.StateDB, blockContext contract.ConfigurationBlockContext) error { config, ok := cfg.(*Config) if !ok { return fmt.Errorf("incorrect config %T: %v", config, config) } // CUSTOM CODE STARTS HERE StoreString(state, "EGS") return nil } ``` We have just set a default value for our precompiled contract. # Defining Default Values via Genesis (/academy/customizing-evm/10-stateful-counter-precompile/10-define-default-values-via-genesis) --- title: Defining Default Values via Genesis description: Learn how to set the default values of precompiled contracts using genesis JSON file. 
updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- In the last section, we saw how to initialize the default values of our precompiled contracts via the Configure function. While straightforward, it would be ideal for us (and for developers who don't write Go) to initialize default values via the genesis JSON. Here's how to do just that. ## Modifying config.go The first step is to extract the value from the JSON file. Go to `config.go` and look at the Config struct: ```go // Config implements the precompileconfig.Config interface and // adds specific configuration for StringStore. type Config struct { precompileconfig.Upgrade // CUSTOM CODE STARTS HERE // Add your own custom fields for Config here } ``` Within the Config struct, we find an embedded struct: Upgrade. It is not necessary to study its definition; it is enough to know that it allows the `blockTimestamp` key to be parsed from our `genesis.json` file. ```json "stringStoreConfig" : { "blockTimestamp": 0 } ``` To pass in default values via the genesis JSON, we must complete the following: - Define a key-value pair in the genesis JSON - Update our Config struct so it is able to parse our new key-value pair In the case of StringStore, after defining blockTimestamp, we will add another key-value pair for our default string: ```json "stringStoreConfig" : { "blockTimestamp": 0, "defaultString": "EGS" } ``` Next, we update our Config struct: ```go type Config struct { precompileconfig.Upgrade // CUSTOM CODE STARTS HERE // Add your own custom fields for Config here DefaultString string `json:"defaultString,omitempty"` } ``` In the above snippet, we define a new string field named `DefaultString`. Its value is populated from the `defaultString` key in our genesis JSON; if the key is absent, unmarshalling simply leaves the field at its zero value, the empty string (the `omitempty` option additionally omits the field when an empty config is marshalled back to JSON).
At this point, we have defined our new key-value pair in the genesis JSON and incorporated the logic so that precompile-evm can parse it. However, one step remains: although precompile-evm can parse the new key-value pair, we are not actually using it anywhere. Our last step is to update the `Configure` method in `module.go` so that it reads our new key-value pair. ```go func (*configurator) Configure(chainConfig precompileconfig.ChainConfig, cfg precompileconfig.Config, state contract.StateDB, blockContext contract.ConfigurationBlockContext) error { config, ok := cfg.(*Config) if !ok { return fmt.Errorf("incorrect config %T: %v", config, config) } // CUSTOM CODE STARTS HERE StoreString(state, config.DefaultString) return nil } ``` # Testing Your Precompile (/academy/customizing-evm/10-stateful-counter-precompile/11-testing-precompile-hardhat) --- title: Testing Your Precompile description: Learn how to test your precompile using Hardhat. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- Just like with the Calculator precompile, we want to test the functionality of our precompile to make sure it behaves as intended. Although we can write tests in Go, this is not the only option: we can also leverage Hardhat, a popular smart contract testing framework. In this section, we'll cover writing the smart contract component of our tests and setting up our testing environment so that precompile-evm can execute these tests for us. ## Structure of Testing with Hardhat For `StringStore`, we'll need to complete the following tasks in order to leverage Hardhat: 1. Define the genesis JSON file we want to use for testing 2. Tell precompile-evm which Hardhat script to use to test `StringStore` 3. Write the smart contract that holds the test cases 4. Write the script that executes the Hardhat tests ## Defining the Genesis JSON This is the easiest part of the tutorial: we need to define the genesis state of our testing environment.
For all intents and purposes, you can copy the genesis JSON that you used in the Defining Default Values section and paste it into `precompile-evm/tests/precompile/genesis`, naming it `StringStore.json` (it is important that the genesis file shares the name of your precompile). ## Telling Precompile-EVM What To Test The next step is to modify `suites.go` so that precompile-evm can call the Hardhat tests for StringStore. Go to `precompile-evm/tests/precompile/solidity` and find `suites.go`. Currently, your file should look like the following: ```go title="suites.go" // Copyright (C) 2019-2023, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. // Implements solidity tests. package solidity import ( "context" "time" "github.com/ava-labs/subnet-evm/tests/utils" ginkgo "github.com/onsi/ginkgo/v2" ) var _ = ginkgo.Describe("[Precompiles]", ginkgo.Ordered, func() { utils.RegisterPingTest() // Each ginkgo It node specifies the name of the genesis file (in ./tests/precompile/genesis/) // to use to launch the subnet and the name of the TS test file to run on the subnet (in ./contract-examples/tests/) // ADD YOUR PRECOMPILE HERE /* ginkgo.It("your precompile", ginkgo.Label("Precompile"), ginkgo.Label("YourPrecompile"), func() { ctx, cancel := context.WithTimeout(context.Background(), time.Minute) defer cancel() // Specify the name shared by the genesis file in ./tests/precompile/genesis/{your_precompile}.json // and the test file in ./contracts/tests/{your_precompile}.ts // If you want to use a different test command and genesis path than the defaults, you can // use the utils.RunTestCMD. See utils.RunDefaultHardhatTests for an example. utils.RunDefaultHardhatTests(ctx, "your_precompile") }) */ }) ``` The generated comments already outline the general structure for declaring precompile tests. Let's take this template and use it to declare the tests for StringStore!
```go title="suites.go" // Copyright (C) 2019-2023, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. // Implements solidity tests. package solidity import ( "context" "time" "github.com/ava-labs/subnet-evm/tests/utils" ginkgo "github.com/onsi/ginkgo/v2" ) var _ = ginkgo.Describe("[Precompiles]", ginkgo.Ordered, func() { utils.RegisterPingTest() // Each ginkgo It node specifies the name of the genesis file (in ./tests/precompile/genesis/) // to use to launch the subnet and the name of the TS test file to run on the subnet (in ./contract-examples/tests/) // ADD YOUR PRECOMPILE HERE /* ginkgo.It("your precompile", ginkgo.Label("Precompile"), ginkgo.Label("YourPrecompile"), func() { ctx, cancel := context.WithTimeout(context.Background(), time.Minute) defer cancel() // Specify the name shared by the genesis file in ./tests/precompile/genesis/{your_precompile}.json // and the test file in ./contracts/tests/{your_precompile}.ts // If you want to use a different test command and genesis path than the defaults, you can // use the utils.RunTestCMD. See utils.RunDefaultHardhatTests for an example. utils.RunDefaultHardhatTests(ctx, "your_precompile") }) */ ginkgo.It("StringStore", ginkgo.Label("Precompile"), ginkgo.Label("StringStore"), func() { ctx, cancel := context.WithTimeout(context.Background(), time.Minute) defer cancel() utils.RunDefaultHardhatTests(ctx, "StringStore") }) }) ``` Again, the naming conventions are important: we recommend using the same name as your precompile wherever possible. ## Defining Test Contract In contrast to testing with Go alone, Hardhat allows us to test our precompiles using Solidity. To start, create a new file called `StringStoreTest.sol` in `precompile-evm/contract/contracts`.
We can start defining our file by including the following: ```solidity title="StringStoreTest.sol" // SPDX-License-Identifier: MIT pragma solidity >= 0.8.0; import "ds-test/src/test.sol"; import {IStringStore} from "../contracts/interfaces/IStringStore.sol"; ``` Two things are important to note here. First, we import `test.sol`, a testing file that provides a range of assertion functions (methods of the `DSTest` contract) for checking that our outputs are as expected. Second, we import the interface of the StringStore precompile that we want to test. With this in mind, let's fill in the rest of our test file: ```solidity title="StringStoreTest.sol" contract StringStoreTest is DSTest { IStringStore stringStore = IStringStore(0x0300000000000000000000000000000000000005); function step_getString() public { assertEq(stringStore.getString(), "Cornell"); } function step_getSet() public { string memory newStr = "Apple"; stringStore.setString(newStr); assertEq(stringStore.getString(), newStr); } } ``` In the contract `StringStoreTest`, we inherit the `DSTest` contract so that we can leverage the provided assertion functions. We then declare the variable `stringStore` so that we can call the StringStore precompile directly. Next come the two testing functions.
Briefly looking at the logic of each test function: - `step_getString`: We test that the getString function returns the default string defined in the genesis JSON (in this example, the default string is set to "Cornell") - `step_getSet`: We assign a new string to our precompile and make sure that setString stores it correctly Note two details regarding Solidity test functions in Hardhat: - Any test function that you want to be called must start with the prefix "step_" - The assertion function you use to check your outputs is `assertEq` With our Solidity test contract defined, let's write the actual Hardhat script! ## Writing Your Hardhat Script To start, go to `precompile-evm/contracts/test` and create a new file called `StringStore.ts`. It is important to name your TypeScript file the same name as your precompile. Here are the steps to take when defining our Hardhat script: 1. Specify the address at which our precompile is located 2. Deploy our testing contract so that we can call the test functions 3. Tell Hardhat to execute said test functions To make our lives even easier, Subnet-EVM (a library that Precompile-EVM leverages) has helper functions that call our test functions simply by specifying their names. You can find the helper functions here: https://github.com/ava-labs/subnet-evm/blob/master/contracts/test/utils.ts. In the end, our testing file will look as follows: ```ts title="StringStore.ts" // (c) 2019-2022, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms.
import { ethers } from "hardhat" import { test } from "@avalabs/subnet-evm-contracts" const STRINGSTORE_ADDRESS = "0x0300000000000000000000000000000000000005" describe("StringStoreTest", function() { this.timeout("30s") beforeEach("Setup DS-Test", async function () { return ethers.getContractFactory("StringStoreTest").then(factory => factory.deploy()) .then(contract => { this.testContract = contract return contract.deployed().then(() => contract) }) }) test("Testing get function", "step_getString") test("Testing get and set function", "step_getSet") }) ``` ## Running Your Hardhat Test To run your Hardhat tests, first change to the root directory of precompile-evm and run the following command: ```bash ./scripts/build.sh ``` Afterward, run the following command: ```bash GINKGO_LABEL_FILTER=StringStore ./scripts/run_ginkgo.sh ``` The variable `GINKGO_LABEL_FILTER` tells precompile-evm which test suite from `suites.go` to execute; here we run the `StringStore` suite. If you have multiple precompiles to test, set `GINKGO_LABEL_FILTER` to "Precompile" instead.
You should see something like the following: ```bash title="Combined output" StringStoreTest ✓ Testing get function (4126ms) ✓ Testing get and set function (4070ms) 2 passing (12s) < Exit [It] StringStore - /Users/Rodrigo.Villar/go/src/github.com/ava-labs/precompile-evm/tests/precompile/solidity/suites.go:40 @ 07/19/23 15:37:03.574 (16.134s) • [16.134 seconds] ------------------------------ [AfterSuite] /Users/Rodrigo.Villar/go/pkg/mod/github.com/ava-labs/subnet-evm@v0.5.2/tests/utils/command.go:85 > Enter [AfterSuite] TOP-LEVEL - /Users/Rodrigo.Villar/go/pkg/mod/github.com/ava-labs/subnet-evm@v0.5.2/tests/utils/command.go:85 @ 07/19/23 15:37:03.575 < Exit [AfterSuite] TOP-LEVEL - /Users/Rodrigo.Villar/go/pkg/mod/github.com/ava-labs/subnet-evm@v0.5.2/tests/utils/command.go:85 @ 07/19/23 15:37:03.575 (0s) [AfterSuite] PASSED [0.000 seconds] ------------------------------ Ran 2 of 3 Specs in 50.822 seconds SUCCESS! -- 2 Passed | 0 Failed | 0 Pending | 1 Skipped PASS ``` Congrats, you can now write Hardhat tests for your stateful precompiles! # Build Your Precompile (/academy/customizing-evm/10-stateful-counter-precompile/12-build-your-precompile) --- title: Build Your Precompile description: Learn how to build and run your precompile as a custom EVM blockchain. updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- ## Build Your Custom VM There's a simple build script in the Precompile-EVM we can utilize to build. First, make sure you are in the root folder of your Precompile-EVM: ```bash cd $GOPATH/src/github.com/ava-labs/precompile-evm ``` Then run the command to initiate the build script: ```bash ./scripts/build.sh ``` If you do not see any errors, the build was successful. ## Run a Local Network with Your Custom VM You can run your customized Precompile-EVM by using the Avalanche CLI. First, create the configuration for your blockchain with your custom VM.
```bash avalanche blockchain create myblockchain --custom --vm $AVALANCHEGO_PLUGIN_PATH/srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy --genesis ./.devcontainer/genesis-example.json ``` Make sure you replace the binary name with the actual binary name of your Precompile-EVM, and the genesis path with the actual path to your genesis file. Next, launch the Avalanche L1 with your custom VM: ```bash avalanche blockchain deploy myblockchain ``` After about a minute, the blockchain should be created and additional output will appear in the terminal, including the RPC URL of your blockchain. # Standards (/academy/encrypted-erc/01-what-is-a-digital-asset/01-digital-assets-overview) --- title: Standards description: Learn about ERC-20, ERC-721, and ERC-1155 in the Avalanche ecosystem, starting from the concept of digital assets. updated: 2025-09-02 authors: [alejandro99so] icon: Coins --- A **digital asset** is any representation of value, rights, or property stored and transacted on a blockchain. These can range from cryptocurrencies and stablecoins to in-game items, tokenized real estate, and certificates. On Avalanche, most digital assets are created and managed on the **C-Chain**, which is fully compatible with the **EVM (Ethereum Virtual Machine)**. This compatibility allows Avalanche to use the same token standards as the broader EVM ecosystem while adding **high throughput, sub-second finality, and low fees**. Because of this, developers building on Avalanche, whether deploying to the **C-Chain** on **mainnet** or **Fuji** (testnet), use **standards** to ensure their tokens are interoperable with wallets, marketplaces, and decentralized applications. These standards define the rules for how tokens behave, how they are transferred, and how they are recognized across different platforms.
The three most widely used token standards in the EVM ecosystem, and therefore in Avalanche, are: --- ## ERC-20: Fungible Token Standard The ERC-20 standard defines fungible tokens where each unit is identical. **Key Characteristics** - Fungible: Each token has the same value as any other token of the same contract. - Divisible: Commonly 18 decimal places for microtransactions. - Highly interoperable with all EVM-compatible tools and dApps. - Extremely low-cost transfers on Avalanche due to low gas fees. **Common Functions** - `transfer(address to, uint amount)` - `approve(address spender, uint amount)` - `transferFrom(address from, address to, uint amount)` - `balanceOf(address account)` --- ## ERC-721: Non-Fungible Token Standard The ERC-721 standard defines unique, non-fungible tokens (NFTs). **Key Characteristics** - Each token has a unique `tokenId`. - Supports metadata for traits, images, descriptions. - Provides immutable proof of ownership. **Common Functions** - `ownerOf(uint256 tokenId)` - `safeTransferFrom(address from, address to, uint256 tokenId)` - `tokenURI(uint256 tokenId)` --- ## ERC-1155: Multi-Token Standard The ERC-1155 standard supports multiple token types (fungible, non-fungible, and semi-fungible) in one contract. **Key Characteristics** - Can store and manage different asset types in a single contract. - Supports batch operations to save gas. - Flexible for gaming and metaverse applications. 
**Common Functions** - `safeTransferFrom(address from, address to, uint256 id, uint256 amount, bytes data)` - `safeBatchTransferFrom(address from, address to, uint256[] ids, uint256[] amounts, bytes data)` - `balanceOf(address account, uint256 id)` --- ## Standards Comparison Table (Avalanche Context) | Feature | ERC-20 | ERC-721 | ERC-1155 | |---------|--------|---------|----------| | Token Type | Fungible | Non-fungible | Fungible + Non-fungible | | Uniqueness | No | Yes | Optional per token ID | | Metadata | No | Yes (`tokenURI`) | Yes | | Batch Transfers | No | No | Yes | | Typical Use Cases on Avalanche | Stablecoins, governance, utilities | NFTs, in-game items, certificates | Gaming, collectibles, asset baskets | | Gas Efficiency on Avalanche | High | Moderate | High (batch) | | Privacy | None | None | None | | Interoperability | Very high | High | Medium–High | --- These standards are the foundation for asset creation and interaction in Avalanche’s C-Chain Mainnet and Fuji (Testnet). They allow developers to build interoperable applications and ensure that tokens can be used across different wallets, marketplaces, and decentralized protocols. In the **next section**, we will explore the **current uses of these standards in blockchain**, with real examples from Avalanche’s ecosystem, including DeFi platforms, NFT marketplaces, gaming L1, and tokenized real-world assets. # Current Uses in Blockchain (/academy/encrypted-erc/01-what-is-a-digital-asset/02-current-uses) --- title: Current Uses in Blockchain description: Explore how ERC-20, ERC-721, and ERC-1155 standards are applied in the Avalanche ecosystem. updated: 2025-09-02 authors: [alejandro99so] icon: Activity --- In the previous section, we covered the main **token standards** used on Avalanche: ERC-20, ERC-721, and ERC-1155. Now, let’s explore **how these standards are applied today** in the Avalanche ecosystem across the C-Chain **Mainnet**, **Fuji (Testnet)**, and **custom L1s**. 
--- ## ERC-20: Fungible Token Use Cases **On Avalanche C-Chain Mainnet** - **Stablecoins**: USDT: Widely used for DeFi protocols, payments, and trading pairs. USDC: Popular for lending, liquidity provision, and cross-chain settlements. - **Native Avalanche Tokens**: WAVAX: Wrapped AVAX used for liquidity pools, lending markets, and DEX operations. - **Governance Tokens**: JOE: Token powering governance and incentives in Trader Joe. QI: Governance token for Benqi lending and liquid staking platform. **On L1 Deployments** - **Custom Ecosystem Tokens**: L1 projects can issue ERC-20 tokens for payments, in-game economies, or DeFi services, with full control over tokenomics. **On Avalanche C-Chain Fuji (Testnet)** - Developers test token logic, minting, and burning before deploying to C-Chain or an L1, avoiding the cost of mainnet errors. --- ## ERC-721: Non-Fungible Token Use Cases **On Avalanche C-Chain Mainnet** - **NFT Marketplaces**: Kalao: Marketplace for art, gaming, and utility NFTs. NFTrade: Cross-chain NFT marketplace supporting Avalanche. - **In-Game Unique Assets**: Gaming NFTs: Rare weapons, characters, and skins in Avalanche C-Chain-native games. **On L1 Deployments** - **Project-Specific Collections**: L1s can issue NFTs for branding, game achievements, or membership systems, keeping transactions isolated from C-Chain congestion. - **Sports & Entertainment**: FIFA+ Collect: Official FIFA digital collectibles platform on L1 Avalanche, featuring historic moments, iconic goals, and tournament highlights. **On Avalanche C-Chain Fuji (Testnet)** - Used to test metadata integration, marketplace listings, and batch minting. --- ## ERC-1155: Multi-Token Standard Use Cases **On Avalanche C-Chain Mainnet** - **Gaming Projects**: Crabada: Combines fungible resources (minerals, tokens) with unique NFT characters in one contract. - **Collectible Platforms**: Multi-series collections: Managed without deploying new contracts for each. 
**On L1 Deployments** - **Metaverse Economies**: L1s can host both in-game currencies and unique items in one ERC-1155 contract, benefiting from low latency and high scalability. **On Avalanche C-Chain Fuji (Testnet)** - Testing batch transfers, multi-asset minting, and marketplace integrations before L1 or C-Chain Mainnet launch. --- ## Observations from Avalanche’s Current Uses - **Low Fees Enable Microtransactions**: The cost-efficiency of Avalanche makes even small-value ERC-20 and ERC-1155 transfers practical. - **L1s Enable Asset Isolation**: Projects with heavy transaction loads can launch their own L1 to keep asset operations unaffected by C-Chain activity. - **Fuji Testnet is a Critical Step**: Almost every major Avalanche project tests token standards on Fuji before going live. - **Bridging Expands Liquidity**: Avalanche supports assets from other EVM networks, extending utility and user reach. --- In the **next section**, we will analyze **the limits companies face when using these standards**, focusing on challenges like **privacy, compliance, and scalability** and set the stage for introducing **privacy-focused solutions like eERC**. # Limitations of Current Standards (/academy/encrypted-erc/01-what-is-a-digital-asset/03-limitations) --- title: Limitations of Current Standards description: Challenges companies face when using ERC-20, ERC-721, and ERC-1155 on Avalanche. updated: 2025-09-02 authors: [alejandro99so] icon: Book --- The ERC-20, ERC-721, and ERC-1155 standards have become the backbone of asset creation and exchange on Avalanche’s C-Chain, Fuji Testnet, and custom L1s. However, while they ensure interoperability and simplicity, they also present **important limitations** that companies and developers must address when building real-world solutions. --- ## 1. Privacy Limitations All three standards: ERC-20, ERC-721, and ERC-1155, operate on **transparent ledgers**. - **Balances are public**: Anyone can view wallet holdings. 
- **Transactions are public**: Every transfer amount and counterparty is visible. - **Asset ownership is public**: For ERC-721 and ERC-1155, ownership history and metadata are fully transparent. This transparency is useful for auditability but problematic for: - Regulated financial institutions. - Enterprises managing sensitive transactions. - Use cases requiring trade secrecy or personal data protection. --- ## 2. Compliance Challenges Avalanche provides speed and low costs, but these standards **do not include compliance tools by default**. - No built-in **selective disclosure** for regulators or auditors. - No native mechanism to **restrict transactions** based on jurisdiction. - Compliance features must be built as **custom layers**, increasing development cost and complexity. --- ## 3. Scalability Concerns While Avalanche’s architecture solves many scaling issues, token standards still introduce some constraints: - **ERC-20 and ERC-721**: Each transfer is a separate transaction, which can be inefficient for high-volume operations. - **ERC-721**: One contract per collection increases deployment and maintenance overhead. - **ERC-1155**: More efficient, but not all dApps and marketplaces fully support it yet. --- ## 4. Asset Management Limitations - **Single-purpose contracts**: ERC-20 can only handle one fungible token per contract. - **No native multi-asset governance**: ERC-721 collections and ERC-20 tokens require separate governance or management systems. - **Limited metadata security**: Metadata for NFTs is often stored off-chain and can be altered unless proper safeguards are in place. 
--- ## Summary Table — Key Limitations on EVM Blockchains | Standard | Privacy | Compliance | Scalability | Asset Management | |----------|---------|------------|-------------|------------------| | ERC-20 | No privacy, public balances & transfers | No built-in compliance | One transfer per transaction | Single token per contract | | ERC-721 | No privacy, public ownership & metadata | No built-in compliance | One contract per collection | Separate governance per collection | | ERC-1155 | No privacy, public ownership & metadata | No built-in compliance | Batch transfers supported, but partial ecosystem support | Multi-asset capable but complex | --- These limitations do not prevent successful deployments, but they **create barriers for industries that require privacy, compliance-readiness, and advanced asset control**. In the **next section**, we will explore **Real Privacy**: how much privacy blockchain truly offers, why compliance is critical, and how privacy-focused standards like **eERC** can address these challenges in the Avalanche ecosystem. # How Private is the Blockchain? (/academy/encrypted-erc/02-real-privacy/01-how-private-is-the-blockchain) --- title: How Private is the Blockchain? description: Understanding the level of privacy in Avalanche and the difference between transparency and confidentiality. updated: 2025-09-02 authors: [alejandro99so] icon: Eye --- One of the most common misconceptions about blockchain is that it is **private**. In reality, on Avalanche, whether you operate on the C-Chain **Mainnet**, **Fuji (Testnet)**, or a custom L1, the blockchain is **transparent by design**. Every transaction is recorded in a public ledger and can be inspected by anyone with the right tools. --- ## Transparency in Avalanche - **Every transaction is public**: Transfers, mints, burns, and contract interactions are visible in block explorers like [SnowTrace](https://snowtrace.io). 
- **Balances are public**: Anyone can check the token holdings of any address by querying the blockchain. - **Ownership history is public**: For NFTs (ERC-721 and ERC-1155), all past owners are permanently recorded. This transparency is valuable for: - **Auditability**: Anyone can verify that a transaction took place and check the exact amounts. - **Trustless systems**: No need for a central authority to confirm balances or transaction histories. - **Security monitoring**: Easier to detect suspicious activity or exploits in real time. --- ## Pseudonymity vs Anonymity Avalanche, like other EVM-compatible blockchains, is **pseudonymous**, not truly anonymous. - **Pseudonymity**: Users are identified by their public address (e.g., `0x1234...abcd`), but not by their real name. - **Linkability**: Once an address is tied to an identity (via KYC, exchange deposits, or off-chain information), all past and future activity from that address can be analyzed. - **Behavioral profiling**: Patterns in interactions (like dApps used, tokens traded, or NFTs purchased) can reveal insights about the owner. --- ## Confidentiality Limitations While transparency is useful for verification, it creates significant limitations for certain use cases: - **Business transactions**: Competitors can see volumes, counterparties, and frequency of payments. - **Personal privacy**: Anyone can track your activity and build a profile of your blockchain behavior. - **Regulated industries**: Some sectors require confidentiality for compliance (e.g., healthcare, finance). --- ## Example on Avalanche C-Chain If a company pays 50 employees using an ERC-20 token on Avalanche C-Chain: - The **amount**, **date**, and **recipient address** of each payment will be public. - A competitor or third party could track salary patterns, hiring trends, or even link addresses to individuals. 
--- Avalanche’s transparent design ensures trust and auditability, but **it does not provide native confidentiality** for balances, transaction amounts, or metadata. In the **next section**, we will explore **Compliance**, why privacy alone is not enough, and how companies must align with regulatory requirements when designing blockchain solutions. # Compliance (/academy/encrypted-erc/02-real-privacy/02-compliance) --- title: Compliance description: Why regulatory alignment matters for privacy tokens on Avalanche. updated: 2025-09-02 authors: [alejandro99so] icon: Scale --- Privacy is not the only requirement for enterprise-grade blockchain solutions; **compliance** is equally critical. On Avalanche, whether you deploy to C-Chain **Mainnet**, **Fuji (Testnet)**, or a custom L1, tokens must often meet **regulatory obligations** to operate legally in certain jurisdictions or industries. --- ## Why Compliance Matters In finance, healthcare, gaming, and government, compliance is not optional; it is enforced through regulations like: - **KYC/AML** (Know Your Customer / Anti-Money Laundering): Identifying and verifying participants to prevent illegal activity. - **Jurisdictional restrictions**: Blocking access from certain regions due to sanctions or laws. - **Data protection laws**: GDPR (EU), HIPAA (US), and other frameworks require handling sensitive data in specific ways. --- ## Challenges with Current Token Standards Standard ERC-20, ERC-721, and ERC-1155 implementations on Avalanche offer **no built-in compliance tools**: - No **whitelisting** or **blacklisting** capabilities at the protocol level. - No **selective disclosure**: you cannot show transaction details only to authorized parties while hiding them from the public. - No **transaction-level restrictions** to enforce rules automatically. As a result, compliance must be built as a **custom layer**, which: - Increases development and auditing costs.
- Introduces potential security risks if not implemented correctly.
- Creates inconsistency between projects, making interoperability harder.

---

## The Compliance–Privacy Balance

True privacy on Avalanche must still allow **authorized oversight**:

- **Auditor access**: Regulators, tax authorities, or compliance officers should be able to review specific transactions without making them public.
- **Selective decryption**: Only authorized entities can view transaction amounts and counterparties.
- **Revocable permissions**: The ability to update who has access without redeploying the entire token contract.

---

## Example in the Avalanche Ecosystem

Imagine an **L1** created for a private lending network:

- Loans are issued in a **privacy-enabled token** to protect borrower data.
- Regulators need to verify the amounts, interest rates, and repayment schedules.
- Without compliance features, the project must either make all data public (losing privacy) or create a parallel off-chain audit system (increasing complexity).

---

In the **next section**, we will explore **Necessities Solved with Privacy**: real-world scenarios where privacy is not just a "nice to have," but a **critical requirement** for operations, security, and compliance.

# Necessities Solved with Privacy (/academy/encrypted-erc/02-real-privacy/03-necessities-solved-with-privacy)

---
title: Necessities Solved with Privacy
description: Real-world scenarios where privacy on Avalanche is essential.
updated: 2025-09-02
authors: [alejandro99so]
icon: Lock
---

Privacy on Avalanche is not only about hiding information; it's about **unlocking use cases that would otherwise be impossible** in a fully transparent blockchain environment. By combining confidentiality with auditability, privacy-enabled tokens can meet both **business** and **regulatory** requirements.

---

## 1. Sensitive Transactions

Certain payments and transfers must remain confidential to protect business interests and personal security:

- **Enterprise payments**: Salaries, supplier payments, and investment deals that should not be visible to competitors.
- **Strategic asset moves**: Acquisitions, treasury management, or cross-company settlements.
- **Personal financial data**: Protecting individuals from profiling or targeted attacks based on on-chain activity.

---

## 2. Enterprise Use

Organizations building on Avalanche, whether on **C-Chain** or a dedicated **L1**, may require privacy for:

- **Internal token economies**: Reward points, access tokens, or employee incentives where transaction history should remain private.
- **Corporate NFT strategies**: Membership passes, certifications, or asset ownership records that could reveal strategic information.
- **Supply chain confidentiality**: Tracking goods on-chain without exposing trade volumes and partner relationships.

---

## 3. Regulated Financial Products

Financial institutions can benefit from privacy without sacrificing compliance:

- **Tokenized securities or bonds**: Keeping investor details and trade sizes private while allowing regulator access.
- **Private lending protocols**: Concealing borrower data while enabling repayment verification.
- **Confidential DeFi**: Yield farming or liquidity provision without revealing exact positions to the public.

---

## Why Privacy is a Business Enabler

Without privacy, many organizations avoid blockchain entirely due to:

- **Risk of data exposure**: Competitors analyzing public activity to gain market advantage.
- **Regulatory conflicts**: Inability to comply with privacy laws while using transparent ledgers.
- **Security concerns**: Public transaction data can make users targets for scams or hacks.

By enabling **selective transparency**, where only authorized entities can see sensitive details, Avalanche projects can:

- Attract enterprise and institutional adoption.
- Expand into regulated markets.
- Create safer, more competitive ecosystems.

---

In the **next section**, we will introduce **Encrypted Tokens**, starting with **eERC (Encrypted ERC)**, Avalanche's privacy-focused token standard that brings **encrypted balances, private transfers, and compliance-ready auditability** to the EVM ecosystem.

# What Kind of Privacy Does eERC Provide? (/academy/encrypted-erc/03-encrypted-tokens/01-privacity-eerc)

---
title: What Kind of Privacy Does eERC Provide?
description: Understanding the privacy features of eERC in the Avalanche ecosystem.
updated: 2025-09-02
authors: [alejandro99so]
icon: Key
---

In the previous section, we explored why privacy is a critical enabler for blockchain adoption in industries like finance, enterprise, and gaming. Now we will focus on **eERC (Encrypted ERC)**, Avalanche's encrypted token standard, and see the exact kind of privacy it delivers on C-Chain **Mainnet**, **Fuji (Testnet)**, and **custom L1 deployments**.

---

## Encrypted Balances

- Balances are **fully encrypted** on-chain.
- Only the token holder (and authorized auditor) can see the actual balance.
- Prevents competitors, analysts, or malicious actors from tracking your holdings.

---

## Private Transaction Amounts

- Transfer amounts are encrypted and not visible on public explorers like SnowTrace.
- Observers can see that a transfer occurred but not **how much** was sent.
- Uses **zero-knowledge proofs** to validate transactions without revealing amounts.

---

## Auditor-Ready Privacy

- Built-in **Auditability Module** allows authorized parties to decrypt specific transactions.
- Access can be **revoked or rotated** without redeploying the contract.
- Enables compliance with financial regulations while maintaining user privacy.

---

## EVM Compatibility

- Works seamlessly on Avalanche C-Chain Mainnet, Fuji (Testnet), and custom L1s without protocol changes.
- Fully compatible with existing EVM smart contract tooling.
- Developers can integrate using the **EncryptedERC Repository**.

---

## Flexible Deployment Modes

- **Standalone Mode**: Fully private token from creation, with optional hidden total supply.
- **Converter Mode**: Wraps an existing ERC-20 into an encrypted token, enabling private transfers while preserving the ability to unwrap it back to ERC-20.

---

## Example Use Cases

**Confidential DeFi**
- Private yield farming, liquidity provision, and staking allocations.

**Enterprise Payments**
- Payroll systems where salaries are encrypted but verifiable to auditors.
- Supplier payments that conceal exact deal sizes.

**Private Asset Management**
- Managing tokenized real-world assets without revealing portfolio composition.
- In-game transactions in private gaming economies.

---

In the **next section**, we will compare **ERC-20 vs eERC** to see exactly how these standards differ in privacy, compliance, and deployment options.

# ERC-20 vs eERC (/academy/encrypted-erc/03-encrypted-tokens/02-comparison)

---
title: ERC-20 vs eERC
description: Comparing traditional ERC-20 tokens with eERC in the Avalanche ecosystem.
updated: 2025-09-02
authors: [alejandro99so]
icon: Table
---

In the previous section, we learned what kind of privacy **eERC** offers on Avalanche. Now, let's compare it directly to **ERC-20**, the most widely used fungible token standard in the EVM ecosystem.

---

## Key Differences

| Feature | ERC-20 | eERC (Encrypted ERC) |
|---------|--------|----------------------|
| **Balance Visibility** | Public: anyone can see wallet balances on-chain. | Private: balances are encrypted, visible only to the holder and authorized auditors. |
| **Transaction Amount Visibility** | Public: amounts are visible in every transfer. | Private: transfer amounts are encrypted and hidden from the public. |
| **Privacy Technology** | None: all data is transparent. | Zero-knowledge proofs (zk-SNARKs) + homomorphic encryption. |
| **Compliance Options** | Requires custom solutions for compliance and auditability. | Built-in Auditability Module allows selective decryption for authorized auditors. |
| **Blockchain Compatibility** | Works on any EVM-compatible chain. | Works on any EVM-compatible chain, including Avalanche C-Chain Mainnet, Fuji (Testnet), and custom L1s. |
| **Gas Efficiency** | Very efficient for transfers and approvals. | Slightly higher cost due to encryption and proof verification, but optimized for Avalanche's low fees. |
| **Modes Available** | Single implementation. | Standalone Mode and Converter Mode for different privacy needs. |
| **Total Supply Visibility** | Public by default. | Concealed in Standalone Mode (optional in Converter Mode). |
| **Use Cases** | Public tokens, open DeFi, governance tokens. | Confidential DeFi, enterprise payments, private RWA, regulated financial use. |

---

## Summary

While **ERC-20** remains the foundation for most tokens in the EVM ecosystem, it offers no privacy features. **eERC**, on the other hand, is designed for scenarios where confidentiality, compliance, and selective transparency are required, making it ideal for enterprise, regulated markets, and advanced DeFi protocols on Avalanche.

---

In the **next section**, we will explore the **Technology Behind eERC**, understanding the cryptographic tools, on-chain components, and modes of operation that make this privacy possible.

# Technology Behind eERC (/academy/encrypted-erc/03-encrypted-tokens/03-technology-behind)

---
title: Technology Behind eERC
description: Understanding the cryptographic tools and architecture that power eERC on Avalanche.
updated: 2025-09-02
authors: [alejandro99so]
icon: Cpu
---

The **eERC (Encrypted ERC)** standard combines modern cryptography with Avalanche's high-performance infrastructure to deliver **confidential transactions** while remaining compatible with the EVM ecosystem.
In this section, we will break down the key technologies and architecture that make this possible.

---

## Core Privacy Technologies

### 1. Zero-Knowledge Proofs (zk-SNARKs)

- Used to **prove** that a transaction is valid without revealing its details.
- Ensures the sender has enough balance and the transaction respects the rules, without disclosing amounts or balances.
- Highly gas-efficient when deployed on Avalanche due to its low base fees.

### 2. Homomorphic Encryption

- Allows operations on encrypted values **without decrypting them**.
- Enables balance updates and validations while keeping amounts hidden.
- Balances remain encrypted on-chain at all times.

### 3. BabyJubJub Elliptic Curve

- A **twisted Edwards elliptic curve** optimized for zk-SNARKs.
- Offers fast signature verification inside zero-knowledge circuits.
- Used in eERC for **efficient key generation and proof validation**.

### 4. ElGamal Encryption

- Public-key encryption scheme adapted for elliptic curve cryptography.
- Used to encrypt token balances and transfer amounts on BabyJubJub.
- Supports **homomorphic addition**, so encrypted balances can be updated without decryption.

### 5. Poseidon Ciphertext Hashing

- **Poseidon hash** is a ZK-friendly hashing algorithm designed for use inside proof circuits.
- Optimized for minimal gas cost and high efficiency in zk-SNARK verification.
- In eERC, used for commitments and ciphertext integrity checks.

---

## Auditability Module

- Built directly into the eERC architecture.
- Enables **selective decryption** of transactions for authorized auditors or regulators.
- Permissions can be **rotated or revoked** without redeploying the contract.
- Maintains **compliance-readiness** without compromising user privacy.

---

## On-Chain Components

1. **Registrar Contract**
   - Manages the list of authorized auditors.
   - Handles public keys for encryption and decryption permissions.
2. **EncryptedUserBalances Contract**
   - Stores all encrypted balances for token holders.
   - Updates balances through zk-SNARK validated operations.
3. **AuditorManager Contract**
   - Controls access and permissions for auditors.
   - Integrates with the Auditability Module.
4. **Proof Verification Circuits**
   - Specialized zk circuits that verify transactions meet the protocol's rules.
   - Operate without revealing transaction details.
   - Optimized for BabyJubJub, Poseidon hashing, and ElGamal operations.

---

## Modes of Operation

### Standalone Mode

- Token is **fully private** from the moment it is created.
- Total supply can be hidden from the public.
- Ideal for enterprise deployments or regulated private markets.

### Converter Mode

- Wraps an existing ERC-20 token into eERC form for private transfers.
- Allows unwrapping back into the original ERC-20 token.
- Perfect for projects that want optional privacy without changing their base asset.

---

## Developer Integration

- Provides the contracts, circuits, and scripts needed to integrate eERC.
- Compatible with Avalanche C-Chain Mainnet, Fuji (Testnet), and custom L1s.
- Does not require modifying the underlying blockchain protocol.

---

In the **next section**, we will cover **Usability of eERC**, focusing on Standalone vs Converter modes, real-world use cases, and how developers and users interact with encrypted tokens on Avalanche.

# Deployment Modes of eERC (/academy/encrypted-erc/04-usability-eerc/01-deployments-mode)

---
title: Deployment Modes of eERC
description: Understanding Standalone and Converter deployment modes of eERC in the Avalanche ecosystem.
updated: 2025-09-02
authors: [alejandro99so]
icon: Split
---

The **eERC** standard offers two deployment modes to fit different project needs: **Standalone Mode** and **Converter Mode**. Both deliver encrypted balances and private transfers, but they differ in how the token is created and used.
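Before looking at each mode, it helps to see the homomorphic-addition property both modes rely on. The toy sketch below uses exponential ElGamal over a tiny prime field with made-up parameters (`p = 467`, secret key `153`) — not the BabyJubJub curve or key sizes eERC actually uses — to show why two encrypted amounts can be combined into an encryption of their sum without ever decrypting:

```typescript
// Toy exponential ElGamal over Z_p*. All parameters are tiny, illustrative
// values; real eERC uses ElGamal over the BabyJubJub curve.
const p = 467n; // small prime modulus
const g = 2n;   // group element used as the base
const x = 153n; // secret key (held by the balance owner)

// Square-and-multiply modular exponentiation.
function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  let b = base % mod;
  let e = exp;
  while (e > 0n) {
    if (e & 1n) result = (result * b) % mod;
    b = (b * b) % mod;
    e >>= 1n;
  }
  return result;
}

const h = modPow(g, x, p); // public key

type Ciphertext = [bigint, bigint];

// Enc(m) = (g^r, g^m * h^r): the amount m is hidden in the exponent.
function encrypt(m: bigint, r: bigint): Ciphertext {
  return [modPow(g, r, p), (modPow(g, m, p) * modPow(h, r, p)) % p];
}

// Multiplying ciphertexts adds the hidden amounts: Enc(a) * Enc(b) = Enc(a + b).
function addCiphertexts(a: Ciphertext, b: Ciphertext): Ciphertext {
  return [(a[0] * b[0]) % p, (a[1] * b[1]) % p];
}

// Decrypt by stripping h^r with the secret key, then brute-forcing the
// small exponent (only feasible because amounts here are tiny).
function decrypt([c1, c2]: Ciphertext): bigint {
  const gm = (c2 * modPow(c1, p - 1n - x, p)) % p; // c2 * c1^(-x)
  for (let m = 0n; m < 100n; m++) {
    if (modPow(g, m, p) === gm) return m;
  }
  throw new Error("amount outside brute-force range");
}

const deposit = encrypt(7n, 5n);   // encrypt amount 7 with randomness 5
const transfer = encrypt(12n, 9n); // encrypt amount 12 with randomness 9
const newBalance = addCiphertexts(deposit, transfer); // still encrypted
console.log(decrypt(newBalance)); // 19n
```

In eERC, this same property lets the contract fold an encrypted transfer amount into an encrypted balance on-chain, while a zk-SNARK proves the hidden amounts are well-formed.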
---

## Standalone Mode

- Token is **private from creation**: all balances and amounts are encrypted from the first mint.
- Uses **ElGamal encryption** over the **BabyJubJub** curve for privacy.
- Transaction proofs use **Poseidon hashing** for efficiency in zk-SNARKs.
- **Optional hidden total supply** to prevent public insight into circulation.
- Best suited for:
  - Enterprise payment systems.
  - Regulated financial assets with selective auditor access.
  - Private gaming economies on Avalanche L1s.

---

## Converter Mode

- Wraps an **existing ERC-20 token** into an encrypted form.
- Allows private transfers using the eERC format while preserving the option to unwrap back to the public ERC-20.
- Uses the same encryption and proof flow as Standalone Mode.
- Best suited for:
  - Adding privacy to an already deployed public token.
  - Hybrid systems where some transactions are public and others private.
  - Bridging between public liquidity pools and private markets.

---

In the **next section**, we will explore **Use Cases of eERC**, including examples from Avalanche C-Chain Mainnet, Fuji (Testnet), and custom L1 deployments, along with a step-by-step view of a private transfer flow.

# Use Cases of eERC (/academy/encrypted-erc/04-usability-eerc/02-use-cases)

---
title: Use Cases of eERC
description: Real-world applications of eERC on Avalanche and the flow of a private transfer.
updated: 2025-09-02
authors: [alejandro99so]
icon: Rocket
---

The **eERC** standard can be deployed across Avalanche C-Chain **Mainnet**, **Fuji (Testnet)**, and **custom L1s**, enabling privacy in a variety of scenarios.

---

## Avalanche C-Chain

- **Private DeFi operations**: Farming, lending, and trading without exposing transaction sizes.
- **Confidential payments**: Salaries and supplier payments visible only to participants and authorized auditors.

---

## Custom L1 Deployments

- **Tokenized real-world assets (RWA)**: Privacy-enabled trading with compliance auditability.
- **Private gaming economies**: Fungible and non-fungible in-game assets with encrypted transfers.

---

## Step-by-Step: Private Transfer Flow (Standalone Mode)

1. **Transaction Creation**
   - The user's wallet prepares the transfer.
   - Amount is encrypted using **ElGamal** over **BabyJubJub**.
   - A **Poseidon hash** of the ciphertext is generated for proof validation.
2. **Proof Generation**
   - User creates a **zk-SNARK** proving the transaction's validity without revealing details.
3. **On-Chain Verification**
   - Smart contract verifies the proof in a zk-friendly circuit optimized for BabyJubJub and Poseidon.
   - Encrypted balances in the `EncryptedUserBalances` contract are updated.
4. **Auditor Access (Optional)**
   - Authorized auditors can decrypt specific transactions if enabled in the **Registrar Contract**.
5. **Finalization**
   - Transaction is recorded on Avalanche with encrypted values only.
   - Explorers show the transfer occurred, but not the amount or updated balances.

---

In the **next section**, we will examine the **eERC Contracts Flow**, covering how to create an eERC, configure auditor access, choose zk-proof types, and interact with it as a user.

# eERC Contracts Flow (/academy/encrypted-erc/05-eerc-contracts-flow/01-step-by-step)

---
title: eERC Contracts Flow
description: Step-by-step process to create, configure, and use eERC on Avalanche.
updated: 2025-09-02
authors: [alejandro99so]
icon: GitBranch
---

The **eERC (Encrypted ERC)** standard on Avalanche provides privacy-preserving tokens while remaining EVM-compatible. This section walks through the full lifecycle of an eERC, from deployment to interaction, including key considerations for developers.

---

## 1. Creating Your eERC

### Step 1: Select Deployment Mode

- **Standalone Mode**: Fully private token from creation, optional hidden total supply.
- **Converter Mode**: Wrap an existing ERC-20 into an encrypted format using a TokenTracker.

### Step 2: Deploy the Core Contracts

- **encryptedERC**: Stores encrypted balances.
- **Registrar Contract**: Manages auditor keys and permissions.
- **Proof Verifier**: Validates zk-SNARK proofs for transfers.

### Step 3: Configure Auditor Access (Optional)

- Register auditors in the **Registrar Contract**.
- Provide them with decryption permissions for compliance scenarios.

---

## 2. About the zk-Proof Type

In the current **EncryptedERC** implementation, the proving system is **not chosen by the end-user**. By default, the official Avalanche eERC contracts use **Groth16** for proof generation and verification.

If a developer wants to change the proving system (e.g., to PLONK or Halo2), they must:

1. Modify the Circom circuits.
2. Regenerate the zk verifiers.
3. Deploy new contract versions using those verifiers.

| Proof System | Why it Matters | Current Status in eERC |
|--------------|---------------|------------------------|
| **Groth16** | Fast verification, small proof size, low gas cost | ✅ Default in official repo |
| **PLONK** | Universal setup, flexibility for new circuits | ❌ Requires custom build |
| **Halo2** | No trusted setup, recursion support | ❌ Requires major changes |

> **Avalanche Tip:** Groth16 is optimal for most L1 and C-Chain deployments due to its low gas cost and existing integration in the eERC stack.

---

## 3. Encryption & Hashing Details

- **Curve**: BabyJubJub for zk-friendliness.
- **Encryption**: ElGamal for balances and transfer amounts.
- **Hashing**: Poseidon ciphertext hashing for proof inputs and Merkle trees.

---

## 4. Backend Deployment Flow

### Step 1: Set Up Environment

```bash
git clone https://github.com/alejandro99so/eerc-backend-converter.git
cd eerc-backend-converter
```

### Step 2: Configure `.env` variables in Hardhat

- This step is very important because you need those private keys to run `zkit` files.
- Create a `.env` file and add the following variables:
  - `RPC_URL=YOUR_RPC_URL` (if you are using Fuji C-Chain: https://api.avax-test.network/ext/bc/C/rpc)
  - `PRIVATE_KEY=PRIVATE_KEY_1`
  - `PRIVATE_KEY_2=PRIVATE_KEY_2`

### Step 3: Install dependencies and create `zkit` files

```bash
npm install
```

### Step 4: Choose your way (Converter) and deploy the main contracts

```bash
npm run converter:init
npm run converter:core
```

### Step 5: Register Users

```bash
npm run converter:register
```

- Change `WALLET_NUMBER = 1;` to `WALLET_NUMBER = 2;` in the `scripts/converter/03_register-user.ts` file to register PRIVATE_KEY_2, then run the script again:

```bash
npm run converter:register
```

### Step 6: Set auditor

```bash
npm run converter:auditor
```

### Step 7: Get faucet tokens for PRIVATE_KEY_2

- Let's check the current balance of PRIVATE_KEY_2:

```bash
npm run converter:balance
```

- Let's get faucet tokens for PRIVATE_KEY_2:

```bash
npm run converter:faucet
```

### Step 8: Deposit tokens

#### For PRIVATE_KEY_1

- Change `WALLET_NUMBER = 2;` to `WALLET_NUMBER = 1;` in the `scripts/converter/08_check_balance.ts` file to read the balance of PRIVATE_KEY_1.
- Let's check the current balance:

```bash
npm run converter:balance
```

- Let's deposit some ERC-20 tokens to get eERC-20 tokens:

```bash
npm run converter:deposit
```

- Let's check the new balance:

```bash
npm run converter:balance
```

#### For PRIVATE_KEY_2

- Change `WALLET_NUMBER = 1;` to `WALLET_NUMBER = 2;` in the `scripts/converter/08_check_balance.ts` file to read the balance of PRIVATE_KEY_2.
- Let's check the current balance:

```bash
npm run converter:balance
```

- Change `WALLET_NUMBER = 1;` to `WALLET_NUMBER = 2;` in the `scripts/converter/06_deposittx.ts` file to deposit tokens from PRIVATE_KEY_2.
- Let's deposit some ERC-20 tokens to get eERC-20 tokens:

```bash
npm run converter:deposit
```

- Let's check the new balance:

```bash
npm run converter:balance
```

### Step 9: Transfer tokens

- Let's transfer encrypted tokens from PRIVATE_KEY_1 to PRIVATE_KEY_2:

```bash
npm run converter:transfer
```

### Step 10: Verify the transfer and withdraw tokens

#### For PRIVATE_KEY_1

- Change `WALLET_NUMBER = 2;` to `WALLET_NUMBER = 1;` in the `scripts/converter/08_check_balance.ts` file to read the balance of PRIVATE_KEY_1.
- Let's check the new balance after transferring tokens:

```bash
npm run converter:balance
```

#### For PRIVATE_KEY_2

- Let's withdraw encrypted tokens (eERC-20) from PRIVATE_KEY_2 to get plain ERC-20 tokens back:

```bash
npm run converter:withdraw
```

- Change `WALLET_NUMBER = 1;` to `WALLET_NUMBER = 2;` in the `scripts/converter/08_check_balance.ts` file to read the balance of PRIVATE_KEY_2.
- Let's check the new balance of PRIVATE_KEY_2 after withdrawing:

```bash
npm run converter:balance
```

### Step 11: Choose your way (Standalone)

- Follow the Standalone flow on your own, and reach out to us at https://t.me/avalancheacademy if you get stuck.

## 5. Considerations for Production

- **Gas Costs**: Proof verification adds overhead; optimize circuits.
- **Auditor Key Rotation**: Plan for secure revocation and re-assignment.
- **Compliance**: Align privacy settings with jurisdictional requirements.

# Introduction (/academy/icm-chainlink/01-access-chainlink-vrf-services/01-introduction)

---
title: Introduction
description: Learn how to use ICM to access Chainlink VRF services on L1 networks that do not have direct Chainlink support.
updated: 2024-10-21
authors: [0xstt]
icon: Book
---

As decentralized applications (dApps) expand across multiple blockchains, some Layer 1 (L1) networks lack direct support from essential services like **Chainlink VRF (Verifiable Random Functions)**. This presents a significant challenge for developers who rely on verifiable randomness for use cases such as gaming, lotteries, NFT minting, and other decentralized functions that require unbiased, unpredictable random numbers.
The challenge arises because not every L1 network has integrated with Chainlink, meaning developers on those chains are left without native access to VRF services. Without verifiable randomness, critical aspects of dApps, such as fairness and security, can be compromised.

### Why ICM is Necessary

To address this gap, **Interchain Messaging (ICM)** provides a solution by allowing L1 networks that don't have direct Chainlink support to still access these services. Through ICM, any blockchain can request VRF outputs from a Chainlink-supported network (e.g., Fuji) and receive the results securely on its own L1.

This cross-chain solution unlocks the ability to use Chainlink VRF on unsupported networks, bypassing the need for native integration and ensuring that dApp developers can continue building secure and fair decentralized applications.

In the following sections, we will explore how to use ICM to access Chainlink VRF services across different chains by deploying two key smart contracts: one on the Chainlink-supported network and another on the target L1.

# Understanding the Flow (/academy/icm-chainlink/01-access-chainlink-vrf-services/02-understanding-the-flow)

---
title: Understanding the Flow
description: Learn how requests for random words flow across chains using ICM.
updated: 2024-10-21
authors: [0xstt]
icon: BookOpen
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

This section explains how random words are requested and fulfilled using Chainlink VRF across two different blockchains through **Interchain Messaging (ICM)**. The entire process involves several key components: the decentralized application (DApp), `CrossChainVRFConsumer`, `CrossChainVRFWrapper`, and `TeleporterMessenger`.
Let's walk through the flow step by step:

### DApp Submits Request for Random Words

The DApp, running on an L1 without direct Chainlink support, initiates a request for verifiable randomness by interacting with the `CrossChainVRFConsumer` contract deployed on its chain.

### `CrossChainVRFConsumer` Sends Cross-Chain Message

Once the DApp submits the request, the `CrossChainVRFConsumer` prepares a message containing the necessary VRF request parameters (such as `keyHash`, request confirmations, gas limit, etc.). This message is sent across chains via the `TeleporterMessenger` to the `CrossChainVRFWrapper` on the Chainlink-supported network (e.g., Fuji).

### `TeleporterMessenger` Receives Message & Calls `CrossChainVRFWrapper`

1. On the Chainlink-supported network, the `TeleporterMessenger` receives the cross-chain message sent by the `CrossChainVRFConsumer`. It passes the message to the `CrossChainVRFWrapper`, which is authorized to handle VRF requests.
2. The `CrossChainVRFWrapper` contract, deployed on the supported network, sends the request to **Chainlink VRF** for random words, using the parameters received in the message (e.g., subscription ID, callback gas limit, etc.).

### Chainlink VRF Fulfills Random Words Request

1. The **Chainlink VRF** fulfills the request and returns the random words to the `CrossChainVRFWrapper` contract by invoking its callback function.
2. Once the random words are received, the `CrossChainVRFWrapper` encodes the fulfilled random words and sends them back as a cross-chain message to the `CrossChainVRFConsumer` on the original L1.

### `TeleporterMessenger` Returns Random Words to `CrossChainVRFConsumer`

The `TeleporterMessenger` on the original L1 receives the message containing the random words and passes it to the `CrossChainVRFConsumer`.
### `CrossChainVRFConsumer` Returns Random Words to the DApp

Finally, the `CrossChainVRFConsumer` processes the random words and sends them to the DApp that originally requested them, completing the flow.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/interchain-messaging/cross-chain-vrf-NiG2EHgc4ulWBUx9Ekx0DZxTGdoHXk.png)

This end-to-end process demonstrates how decentralized applications on unsupported L1 networks can request verifiable randomness from Chainlink VRF, leveraging **ICM** to handle cross-chain communication.

# Orchestrating VRF Requests Over Multiple Chains (Wrapper) (/academy/icm-chainlink/01-access-chainlink-vrf-services/03-orchestrating-vrf-requests)

---
title: Orchestrating VRF Requests Over Multiple Chains (Wrapper)
description: Learn how the CrossChainVRFWrapper contract on a Chainlink-supported L1 handles cross-chain requests for VRF.
updated: 2024-10-21
authors: [0xstt]
icon: BookOpen
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

The `CrossChainVRFWrapper` contract plays a critical role in handling requests for randomness from an unsupported L1. It is deployed on a **Chainlink-supported network** (like Avalanche Fuji) and serves as the intermediary that interacts with Chainlink VRF to request random words.

Here's how it functions as the provider for VRF services:

## Receives Cross-Chain Messages

When the `CrossChainVRFConsumer` on the unsupported L1 initiates a request for randomness, it sends a cross-chain message via the `TeleporterMessenger` to the `CrossChainVRFWrapper` on the supported network.

The `CrossChainVRFWrapper` verifies that the request came from an authorized address. This is essential to ensure that only verified consumers can request randomness. Upon receiving a valid request, the `CrossChainVRFWrapper` calls **Chainlink VRF** and sends the request with the required parameters, such as the number of random words, request confirmations, and gas limits.
The `CrossChainVRFWrapper` keeps track of all pending requests using a mapping, associating each request ID with its destination (the L1 where the `CrossChainVRFConsumer` resides). This ensures that the random words are returned to the correct destination once fulfilled.

```solidity
function receiveTeleporterMessage(
    bytes32 originChainID,
    address originSenderAddress,
    bytes calldata message
) external {
    require(msg.sender == address(teleporterMessenger), "Caller is not the TeleporterMessenger");
    // Verify that the origin sender address is authorized
    require(authorizedSubscriptions[originSenderAddress].isAuthorized, "Origin sender is not authorized");
    uint256 subscriptionId = authorizedSubscriptions[originSenderAddress].subscriptionId;
    // Verify that the subscription ID belongs to the correct owner
    (,,,, address[] memory consumers) = s_vrfCoordinator.getSubscription(subscriptionId);
    // Check wrapper contract is a consumer of the subscription
    bool isConsumer = false;
    for (uint256 i = 0; i < consumers.length; i++) {
        if (consumers[i] == address(this)) {
            isConsumer = true;
            break;
        }
    }
    require(isConsumer, "Contract is not a consumer of this subscription");
    // Decode message to get the VRF parameters
    CrossChainRequest memory vrfMessage = abi.decode(message, (CrossChainRequest));
    // Request random words
    VRFV2PlusClient.RandomWordsRequest memory req = VRFV2PlusClient.RandomWordsRequest({
        keyHash: vrfMessage.keyHash,
        subId: subscriptionId,
        requestConfirmations: vrfMessage.requestConfirmations,
        callbackGasLimit: vrfMessage.callbackGasLimit,
        numWords: vrfMessage.numWords,
        extraArgs: VRFV2PlusClient._argsToBytes(VRFV2PlusClient.ExtraArgsV1({nativePayment: vrfMessage.nativePayment}))
    });
    uint256 requestId = s_vrfCoordinator.requestRandomWords(req);
    pendingRequests[requestId] = CrossChainReceiver({
        destinationBlockchainId: originChainID,
        destinationAddress: originSenderAddress
    });
}
```

## Handles Callback from Chainlink VRF

When **Chainlink VRF** fulfills the randomness request,
the `CrossChainVRFWrapper` receives the random words through a callback function. This ensures that the request has been successfully processed. After receiving the random words, the `CrossChainVRFWrapper` encodes the result and sends it back as a cross-chain message to the `CrossChainVRFConsumer` on the unsupported L1. This is done using the `TeleporterMessenger`.

```solidity
function fulfillRandomWords(uint256 requestId, uint256[] calldata randomWords) internal override {
    require(pendingRequests[requestId].destinationAddress != address(0), "Invalid request ID");
    // Create CrossChainResponse struct
    CrossChainResponse memory crossChainResponse = CrossChainResponse({
        requestId: requestId,
        randomWords: randomWords
    });
    bytes memory encodedMessage = abi.encode(crossChainResponse);
    // Send cross chain message using ITeleporterMessenger interface
    TeleporterMessageInput memory messageInput = TeleporterMessageInput({
        destinationBlockchainID: pendingRequests[requestId].destinationBlockchainId,
        destinationAddress: pendingRequests[requestId].destinationAddress,
        feeInfo: TeleporterFeeInfo({ feeTokenAddress: address(0), amount: 0 }),
        requiredGasLimit: 100000,
        allowedRelayerAddresses: new address[](0),
        message: encodedMessage
    });
    teleporterMessenger.sendCrossChainMessage(messageInput);
    delete pendingRequests[requestId];
}
```

In summary, the `CrossChainVRFWrapper` contract acts as the **bridge** between the unsupported L1 and Chainlink's VRF services, ensuring that random words are requested, fulfilled, and delivered back across chains efficiently.

# Bringing Chainlink VRF to Unsupported L1s (Consumer) (/academy/icm-chainlink/01-access-chainlink-vrf-services/04-bring-vrf-to-unsupported-l1)

---
title: Bringing Chainlink VRF to Unsupported L1s (Consumer)
description: Learn how to request VRF from an unsupported L1 using CrossChainVRFConsumer.
updated: 2024-10-21 authors: [0xstt] icon: BookOpen --- import { Step, Steps } from 'fumadocs-ui/components/steps'; The `CrossChainVRFConsumer` contract enables DApps on an unsupported L1 to request random words from Chainlink VRF using a cross-chain communication mechanism. Since Chainlink does not natively support all blockchains, this setup allows developers to access Chainlink's VRF service even on networks that don’t have direct support. ## Requesting Random Words The `CrossChainVRFConsumer` contract sends a cross-chain message to the `CrossChainVRFWrapper` on a Chainlink-supported L1, requesting random words. This request is sent using `TeleporterMessenger`, which handles cross-chain communication. ```solidity function requestRandomWords( bytes32 keyHash, uint16 requestConfirmations, uint32 callbackGasLimit, uint32 numWords, bool nativePayment, uint32 requiredGasLimit ) external { // Create CrossChainRequest struct CrossChainRequest memory crossChainRequest = CrossChainRequest({ keyHash: keyHash, requestConfirmations: requestConfirmations, callbackGasLimit: callbackGasLimit, numWords: numWords, nativePayment: nativePayment }); // Send Teleporter message bytes memory encodedMessage = abi.encode(crossChainRequest); TeleporterMessageInput memory messageInput = TeleporterMessageInput({ destinationBlockchainID: DATASOURCE_BLOCKCHAIN_ID, destinationAddress: vrfRequesterContract, feeInfo: TeleporterFeeInfo({ feeTokenAddress: address(0), amount: 0 }), requiredGasLimit: requiredGasLimit, allowedRelayerAddresses: new address[](0), message: encodedMessage }); teleporterMessenger.sendCrossChainMessage(messageInput); } ``` ## Processing the Request Once the request is received by the `CrossChainVRFWrapper`, it interacts with the Chainlink VRF Coordinator to request the random words on behalf of the consumer on the unsupported L1. 
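To make the `abi.encode(crossChainRequest)` payload concrete, here is a small pure-Python sketch of the byte layout (illustrative only — the field names come from the Solidity snippet above, and real tooling would use an ABI library rather than hand-packing). All five fields of `CrossChainRequest` are static types, so each occupies exactly one 32-byte slot:

```python
# Illustrative sketch: byte layout of abi.encode(CrossChainRequest).
# Every static field is left-padded into its own 32-byte slot.

def encode_cross_chain_request(key_hash: bytes, request_confirmations: int,
                               callback_gas_limit: int, num_words: int,
                               native_payment: bool) -> bytes:
    assert len(key_hash) == 32                      # bytes32 fills its slot as-is
    slots = [
        key_hash,                                   # keyHash (bytes32)
        request_confirmations.to_bytes(32, "big"),  # requestConfirmations (uint16)
        callback_gas_limit.to_bytes(32, "big"),     # callbackGasLimit (uint32)
        num_words.to_bytes(32, "big"),              # numWords (uint32)
        int(native_payment).to_bytes(32, "big"),    # nativePayment (bool as 0/1)
    ]
    return b"".join(slots)

payload = encode_cross_chain_request(b"\x11" * 32, 3, 200_000, 2, False)
assert len(payload) == 5 * 32  # five slots, 160 bytes total
```

This is the `message` that travels inside the Teleporter payload; the wrapper on the other side recovers the struct with a single `abi.decode(message, (CrossChainRequest))`.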
## Receiving Random Words Once Chainlink fulfills the request, the `CrossChainVRFWrapper` sends the random words back to the `CrossChainVRFConsumer` via a cross-chain message, enabling the DApp on the unsupported L1 to use them. ```solidity function receiveTeleporterMessage( bytes32 originChainID, address originSenderAddress, bytes calldata message ) external { require(originChainID == DATASOURCE_BLOCKCHAIN_ID, "Invalid originChainID"); require(msg.sender == address(teleporterMessenger), "Caller is not the TeleporterMessenger"); require(originSenderAddress == vrfRequesterContract, "Invalid sender"); // Decode the message to get the request ID and random words CrossChainResponse memory response = abi.decode(message, (CrossChainResponse)); // Fulfill the request by calling the internal function fulfillRandomWords(response.requestId, response.randomWords); } function fulfillRandomWords(uint256 requestId, uint256[] memory randomWords) internal { // Logic to handle the fulfillment of random words // Implement your custom logic here // Emit event for received random words emit RandomWordsReceived(requestId); } ``` # Deploy Wrapper (/academy/icm-chainlink/01-access-chainlink-vrf-services/05-deploy-vrf-wrapper) --- title: Deploy Wrapper description: Learn how to deploy the CrossChainVRFWrapper contract to handle cross-chain VRF requests. updated: 2024-10-21 authors: [0xstt] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Once you have set up your Chainlink VRF subscription and have your LINK tokens ready, the next step is to deploy the **CrossChainVRFWrapper** contract. This contract will act as the bridge between your unsupported L1 and the Chainlink VRF network on a supported L1, enabling cross-chain requests for random words. ## Prerequisites Before deployment, make sure you have: - A valid **Chainlink VRF Subscription ID** (see previous section for details). 
- The `TeleporterMessenger` contract address on your supported L1 (e.g., Avalanche Fuji). ## Deploy the Contract Using the `forge create` command, deploy the `CrossChainVRFWrapper` contract to the supported L1 (e.g., Avalanche Fuji). ```bash forge create --rpc-url <rpc_url> --private-key <private_key> --broadcast --constructor-args <teleporter_messenger_address> src/CrossChainVRFWrapper.sol:CrossChainVRFWrapper ``` Replace the following: - `<rpc_url>`: The RPC URL for the L1. - `<private_key>`: The private key for the account used to deploy the contract. - `<teleporter_messenger_address>`: The address of the deployed `TeleporterMessenger` contract. After deployment, save the `Deployed to` address in an environment variable for future use. ```bash export VRF_WRAPPER=<deployed_contract_address>
``` ## Verify the Deployment After deploying the contract, verify that the `CrossChainVRFWrapper` has been successfully deployed by checking its address on a block explorer. ## Configure Authorized Subscriptions Once deployed, the `CrossChainVRFWrapper` contract needs to be configured with authorized subscriptions to process requests for random words. - Call the `addAuthorizedAddress` function to authorize a specific address with a given subscription ID. - This ensures that only authorized addresses can request random words via the wrapper. ```bash cast send --rpc-url <rpc_url> --private-key <private_key> $VRF_WRAPPER "addAuthorizedAddress(address caller, uint256 subscriptionId)" <caller_address> <subscription_id> ``` Replace the following: - `<caller_address>`: The address that will be authorized to request random words. - `<subscription_id>`: The ID of your Chainlink VRF subscription. # Deploy Consumer (/academy/icm-chainlink/01-access-chainlink-vrf-services/06-deploy-vrf-consumer) --- title: Deploy Consumer description: Learn how to deploy the CrossChainVRFConsumer contract on any L1 that does not support Chainlink VRF. updated: 2024-10-21 authors: [0xstt] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Now that the `CrossChainVRFWrapper` is deployed on a Chainlink-supported L1, it’s time to deploy the `CrossChainVRFConsumer` contract on the L1 where Chainlink VRF is not supported. This contract will handle requests for random words and interact with the **TeleporterMessenger** to communicate with the supported L1. ## Prerequisites Make sure you have: - The `TeleporterMessenger` contract address on the unsupported L1. - The deployed `CrossChainVRFWrapper` on the supported L1. `($VRF_WRAPPER)` ## Deploy the Contract Use the following command to deploy the `CrossChainVRFConsumer` contract on your unsupported L1. ```bash forge create --rpc-url <rpc_url> --private-key <private_key> --broadcast --constructor-args <teleporter_messenger_address> $VRF_WRAPPER src/CrossChainVRFConsumer.sol:CrossChainVRFConsumer ``` Replace the following: - `<rpc_url>`: The RPC URL for the L1.
- `<private_key>`: The private key for the account used to deploy the contract. - `<teleporter_messenger_address>`: The address of the `TeleporterMessenger` contract on your unsupported L1. After deployment, save the `Deployed to` address in an environment variable for future use. ```bash export VRF_CONSUMER=<deployed_contract_address>
``` ## Verify the Deployment Once the `CrossChainVRFConsumer` contract is deployed, verify the contract’s address and confirm that it has been successfully deployed on your L1 using a block explorer. # Create Chainlink VRF Subscription (/academy/icm-chainlink/01-access-chainlink-vrf-services/07-create-vrf-subscription) --- title: Create Chainlink VRF Subscription description: Learn how to create a Chainlink VRF subscription to enable cross-chain randomness requests. updated: 2024-10-21 authors: [0xstt] icon: BookOpen --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Before you can request random words using Chainlink VRF, you need to set up a **Chainlink VRF subscription**. This subscription allows you to fund requests for randomness and manage the list of consumers that are authorized to use the VRF service. ## Access Chainlink's VRF Subscription Manager To create a subscription, go to the Chainlink VRF Subscription Manager on the network where you plan to request VRF (e.g., Avalanche Fuji). You can access the manager here: [VRF | Subscription Management for Fuji](https://vrf.chain.link/fuji). ![](/common-images/chainlink-vrf/visit-subscription-management.png) ## Create a New Subscription Once on the subscription manager, create a new subscription. The subscription will have a unique ID, which you'll need to reference in your `CrossChainVRFWrapper` contract. This ID is used to track your random word requests and the balance associated with them. ![](/common-images/chainlink-vrf/create-subscription.png) ## Fund the Subscription After creating the subscription, you need to fund it with LINK tokens. These tokens are used to pay for the randomness requests made through Chainlink VRF. Make sure your subscription has enough funds to cover your requests. 
You can get testnet LINK tokens from the Fuji Faucet: [Chainlink Faucet for Fuji](https://faucets.chain.link/fuji) ![](/common-images/chainlink-vrf/faucet.png) ![](/common-images/chainlink-vrf/add-funds.png) ## Add Consumers After funding your subscription, add the `CrossChainVRFWrapper` contract `($VRF_WRAPPER)` as a consumer. This step authorizes the contract to make randomness requests on behalf of your subscription. You can add other consumers, such as other contracts or addresses, depending on your use case. ![](/common-images/chainlink-vrf/add-consumer.png) ## Save Subscription ID After completing these steps, save your subscription ID. You will need this ID when configuring the `CrossChainVRFWrapper` contract to request random words. ```bash export VRF_SUBSCRIPTION_ID=<subscription_id> ``` --- # Request Random Words (/academy/icm-chainlink/01-access-chainlink-vrf-services/08-request-random-words) --- title: Request Random Words description: Learn how to request random words using CrossChainVRFConsumer. updated: 2024-10-21 authors: [0xstt] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; In this section, you will learn how to request random words from Chainlink VRF using both the `CrossChainVRFWrapper` and `CrossChainVRFConsumer` contracts. ## Authorize an Address to Request Random Words After deploying the `CrossChainVRFWrapper` contract, the first step is to authorize an address to make requests for random words. This ensures that only specific addresses linked to a subscription can request VRF. - Use the `addAuthorizedAddress` function to authorize a specific address with a given subscription ID. This step allows the address to make random word requests through the wrapper.
```bash cast send --rpc-url <rpc_url> --private-key <private_key> $VRF_WRAPPER "addAuthorizedAddress(address caller, uint256 subscriptionId)" $VRF_CONSUMER $VRF_SUBSCRIPTION_ID ``` ## Request Random Words from `CrossChainVRFConsumer` Once the address is authorized, the next step is to send a request for random words from the `CrossChainVRFConsumer` contract on the unsupported L1. This request is then sent to the `CrossChainVRFWrapper` via a cross-chain message. ```bash cast send --rpc-url <rpc_url> --private-key <private_key> $VRF_CONSUMER "requestRandomWords(bytes32 keyHash, uint16 requestConfirmations, uint32 callbackGasLimit, uint32 numWords, bool nativePayment, uint32 requiredGasLimit)" <key_hash> <request_confirmations> <callback_gas_limit> <num_words> <native_payment> <required_gas_limit> ``` Replace the placeholders with: - `<key_hash>`: The VRF key hash used for random word generation. - `<request_confirmations>`: Number of confirmations required for the request. - `<callback_gas_limit>`: The gas limit for the VRF callback function. - `<num_words>`: The number of random words requested. - `<native_payment>`: Indicates whether the payment will be made in the native token. - `<required_gas_limit>`: The gas limit required for the cross-chain message to be processed. # Introduction (/academy/icm-chainlink/02-access-chainlink-functions/01-introduction) --- title: Introduction description: Learn how to use ICM to access Chainlink Functions on L1 networks that do not have direct Chainlink support. updated: 2024-10-21 authors: [Andrea] icon: Book --- TBD # Introduction (/academy/icm-chainlink/03-access-chainlink-automation/01-introduction) --- title: Introduction description: Learn how to use ICM to access Chainlink Automation on L1 networks that do not have direct Chainlink support. updated: 2024-10-21 authors: [Andrea] icon: Book --- TBD # Introduction (/academy/icm-chainlink/04-ccip-icm/01-introduction) --- title: Introduction description: Learn how to use ICM to access Chainlink CCIP on L1 networks that do not have direct Chainlink support.
updated: 2024-10-21 authors: [Andrea] icon: Book --- TBD # Interoperability between Blockchains (/academy/interchain-messaging/02-interoperability/01-interopability-between-blockchains) --- title: Interoperability between Blockchains description: Learn about interoperability and its importance in multichain systems updated: 2024-05-31 authors: [martineckardt] icon: Book --- Interoperability between blockchains refers to the ability of different blockchain networks to communicate and interact with one another in a seamless and coordinated manner. It allows these separate blockchain systems to share data, assets, or functionalities, enabling them to work together as if they were part of a single unified network. These interactions can take many forms, including: - Asset Bridging (Tokens, NFTs, etc.) - DAO Voting across Chains - Cross-Chain Liquidity Pools As you will see soon, these different interactions are based on the same fundamentals of blockchains exchanging messages. You can learn about some of them in the recording of this panel: ## What you will learn In this section, you will explore the following topics: - **Source, Message, and Destination:** What is a message and what are the roles of the source and destination blockchain in interoperability? - **Multi-Chain Systems:** A quick recap of multi-chain systems - **Interoperability in Multi-Chain Systems:** Learn about the importance of interoperability to multi-chain systems # Source, Message and Destination (/academy/interchain-messaging/02-interoperability/02-source-message-destination) --- title: Source, Message and Destination description: Learn about interoperability and its importance in multichain systems. updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- Interoperability is achieved by enabling blockchains to pass messages to one another. Each message originates from a source chain and is sent to one or more destination chains.
These messages encode arbitrary data that can attest to some event on the source chain, such as the deposit of an asset or the result of a vote.
![](/common-images/teleporter/source.png) # Source - Origin of communication - Sender calls contract
![](/common-images/teleporter/message.png) # Message - Contains source, destination, and encoded data - Signature guarantees authenticity
![](/common-images/teleporter/destination.png) # Destination - Submission of message as transaction - Verifies signatures
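The three roles above can be captured in a minimal data model. This is a conceptual sketch only — the field names are illustrative (they loosely mirror the Teleporter message fields introduced later in this course), not an actual protocol definition:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CrossChainMessage:
    """Conceptual model of an interchain message (illustrative only)."""
    source_chain_id: str       # where the message originates
    destination_chain_id: str  # where it will be delivered
    destination_address: str   # contract that receives it
    payload: bytes             # encoded application data
    signature: bytes           # attests the payload is authentic on the source chain

# A hypothetical message from the Fuji C-Chain to the Dispatch L1
msg = CrossChainMessage("fuji-c-chain", "dispatch", "0xReceiver...",
                        payload=b"\x00" * 32, signature=b"\x01" * 64)
assert msg.destination_chain_id == "dispatch"
```

The sections below unpack each of these roles in turn.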
## Source Chain The source chain in a cross-blockchain communication system refers to the original blockchain where data, assets, or information originate before being communicated to another blockchain. It acts as the starting point for initiating cross-chain interactions. When a user intends to communicate with another blockchain, they utilize protocols or smart contracts to initiate the communication process from the source chain. ## Message The message is the data structure that will be sent to the destination chain. It contains some metadata, the encoded message, and a signature. The signature attests to the authenticity of the message, meaning that whatever the message claims has actually happened on the source chain. ## Destination Chain Conversely, the destination chain is the recipient blockchain where the communicated data, assets, or instructions will be received and processed. Validators, nodes, or protocols associated with the cross-blockchain communication system on the destination chain receive and authenticate the information relayed from the source chain. Once validated, the destination chain processes the data or executes the specified instructions. # Recap of Multi-Chain Networks (/academy/interchain-messaging/02-interoperability/03-multi-chain-networks) --- title: Recap of Multi-Chain Networks description: Learn about interoperability and its importance in multichain systems updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/l1s.png) Avalanche is a multi-chain network, meaning the network has multiple chains being validated in parallel, while other networks, such as Bitcoin and Ethereum, have only a single chain. This feature provides greater scalability, independence, and customizability. Each blockchain is optimized for specialized use cases, boosting the network's overall performance.
## Transactions per Second (TPS) and Time to Finality (TTF) To measure the performance of a blockchain, we can use two primary metrics: time to finality (measured in seconds) and throughput (measured in transactions per second, TPS). To illustrate, we can think of a highway as an analogy. ![](/common-images/consensus/tps-vs-ttf.png) Having a short time to finality is like a short highway, taking a direct route from origin (submitting a transaction) to destination (finalization of the transaction) without any unnecessary detours. A finalized transaction is irreversibly written to the ledger. Once a transaction is final, users can be sure it has executed. Having high transaction throughput is like having many lanes on the highway, meaning many cars (transactions) can use it at the same time. When building blockchain systems, we want to achieve high throughput. ## Scaling For The Masses Different blockchain networks use different scaling approaches, such as Layer 2s, including roll-ups. Networks aim to maximize throughput. Multi-chain systems have a simple but incredibly effective way of scaling. By running independent chains in parallel, the overall network can reach a massive combined throughput. ![](/common-images/multi-chain-architecture/combined-throughput.png) While roll-ups may enable very high throughput on a single chain, they can never outcompete the combined throughput of a multi-chain system. This is because there's no limit to the number of chains that can run in parallel. # Interoperability in Multi-Chain Systems (/academy/interchain-messaging/02-interoperability/04-interoperability-in-multi-chain-systems) --- title: Interoperability in Multi-Chain Systems description: Learn about interoperability and its importance in multichain systems updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- Interoperability is crucial for multi-chain systems, as the price for easy scalability is fragmentation.
However, interoperability is equally important to all blockchain systems. Even though more dApps share the same chain in single-chain systems, many of them still require interaction with other chains for liquidity or governance. Therefore, coming up with secure and efficient interoperability solutions is crucial for the entire blockchain space. ![](/common-images/multi-chain-architecture/vm-variety.png) Avalanche, as a multi-chain system, provides flexibility on the execution and application layer but also offers a standardized messaging protocol. The necessary cryptographic algorithms are implemented in AvalancheGo, enabling cross-Avalanche L1 communication out of the box, even when the application and execution layers of the chains may be completely different. The VMs can utilize these modules to sign and verify messages, making interoperability between Avalanche L1s much easier than between unrelated chains from different networks. # Finality Importance in Interoperable Systems (/academy/interchain-messaging/02-interoperability/05-finality-and-interoperability) --- title: Finality Importance in Interoperable Systems description: Learn about interoperability and its importance in multichain systems updated: 2024-06-10 authors: [andyvargtz] icon: BookOpen --- Finality is crucial for building cross-chain applications and interoperable systems because it ensures that transactions on the source chain are confirmed and immutable; therefore, whatever action is triggered on the destination chain can be securely executed, knowing that the source chain won't revert the state that induced that cross-chain message. # Trusted Third Parties (/academy/interchain-messaging/02-interoperability/06-trusted-third-parties) --- title: Trusted Third Parties description: Learn about the challenges of cross-chain communication.
updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- # "Cross-chain communication is impossible without a trusted third party, contrary to common beliefs in the blockchain community" - A. Zamyatin et al. in [SoK: Communication Across Distributed Ledgers](https://eprint.iacr.org/2019/1128.pdf) Signature schemes give us the tool to ensure the authenticity of messages, but who will send these messages? Can we build a cross-chain communication protocol that does not introduce additional trust assumptions? The paper [SoK: Communication Across Distributed Ledgers](https://eprint.iacr.org/2019/1128.pdf) looks into this question and states that a CCC protocol must have the following attributes: - **Effectiveness:** all relevant transactions are posted on their respective chains at the desired time - **Atomicity:** either all transactions were posted on their respective chains at the desired time, or no transactions were posted at all - **Timeliness:** all transactions will eventually be posted on their respective chains Is it possible to create a cross-chain communication protocol that adheres to the three attributes mentioned above? Can a correct CCC protocol that is trustless exist? In SoK, the researchers come to the conclusion that such a trustless, correct CCC protocol is impossible. Using computational complexity theory, the trustless, correct CCC protocol problem can be reduced to the Fair Exchange problem, a problem that has been proven to have no solution. We see this also in practice. Many bridges and cross-chain communications often rely on some centralized infrastructure that observes the source chain and then delivers the message to the destination chain. Their users need to trust this setup, often involving a set of third-party bridge validators. However, there might be a smart trick we can use. We'll explore this in the next section about Avalanche Warp Messaging.
# Avalanche Starter Kit (/academy/interchain-messaging/03-avalanche-starter-kit/01-avalanche-starter-kit) --- title: Avalanche Starter Kit description: Environment Setup updated: 2024-05-31 authors: [martineckardt] icon: Book --- The Avalanche Starter Kit provides a pre-configured GitHub Codespace environment with everything you need to develop cross-chain applications on Avalanche. This environment comes with: - Foundry for smart contract development - Docker support for running the ICM relayer - Required VS Code extensions - All necessary development tools ## What You Will Learn In this section, you will: - Launch your own GitHub Codespace - Set up your Core wallet and get test tokens - Configure your development environment - Learn basic Foundry commands for contract interaction By the end of this section, you'll have a fully configured environment ready for cross-chain development with Interchain Messaging. # Initial Setup (/academy/interchain-messaging/03-avalanche-starter-kit/02-set-up) --- title: Initial Setup description: Environment and Wallet Setup updated: 2024-05-31 authors: [martineckardt] icon: Terminal --- import Link from 'next/link'; import { cn } from '@/utils/cn'; import { buttonVariants } from '@/components/ui/button.tsx' import { Step, Steps } from 'fumadocs-ui/components/steps'; The Avalanche Starter Kit contains everything you need to get started quickly with Avalanche. The kit provides a self-contained environment with Foundry, so you can follow the course without needing to install anything beyond launching the environment. In this course we will run the Avalanche Starter Kit in a hosted environment on GitHub. This is the quickest way to get started.
### Create a Codespace The quickest way to get started is using GitHub Codespaces, which provides a pre-configured development environment: [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://github.com/codespaces/new?hide_repo_select=true&ref=interchain-messaging&repo=769886086&machine=standardLinux32gb) Wait a few minutes for the environment to build. You must be logged into GitHub to use Codespaces. Alternatively, you can clone the repository: Open Avalanche Starter Kit ### Install Dependencies Once your Codespace is ready, install the project dependencies: ```bash forge install # Optional: Mute Foundry nightly warning export FOUNDRY_DISABLE_NIGHTLY_WARNING=1 ``` ### Set Up Core Wallet 1. Install Core wallet: - Visit [Core wallet website](https://core.app/download) - Download and install for your OS - Create a new wallet or import existing one 2. Get your [wallet credentials](https://support.avax.network/en/articles/8832783-core-extension-how-do-i-export-my-private-key): - Open Core wallet - Click your account name (top right) - Select "View Private Key" - Enter your password - Copy your: - Account address (0x...) - Private key (0x...) ### Configure Environment 1. Create your environment file: ```bash cp .env.example .env ``` 2. 
Add your Core wallet credentials to `.env`: ```bash # Your private key for signing transactions (for testing only, never use in production) PK=your_private_key_here # Your funded address (the address derived from your private key) FUNDED_ADDRESS=your_funded_address_here # Teleporter and chain configuration # Fuji C-Chain, Dispatch & Echo have the same address, but this might not be the case for other L1s TELEPORTER_REGISTRY_FUJI_DISPATCH=0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228 TELEPORTER_REGISTRY_FUJI_C_CHAIN=0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228 # Blockchain IDs pre-filled FUJI_DISPATCH_BLOCKCHAIN_ID_HEX=0x9f3be606497285d0ffbb5ac9ba24aa60346a9b1812479ed66cb329f394a4b1c7 FUJI_C_CHAIN_BLOCKCHAIN_ID_HEX=0x31233cae135e3974afa396e90f465aa28027de5f97f729238c310d2ed2f71902 # Incentivize a Relayer FEE_TOKEN_ADDRESS=your_example_erc20_address ``` 3. Load the environment: ```bash source .env ``` 4. Verify your configuration: ```bash # Should show your wallet address echo $FUNDED_ADDRESS ``` # Close and Reopen Codespace (/academy/interchain-messaging/03-avalanche-starter-kit/03-close-and-reopen-codespace) --- title: Close and Reopen Codespace description: Environment Setup updated: 2024-05-31 authors: [martineckardt] icon: Terminal --- import CloseAndReopen from "@/content/common/codespaces/close-and-reopen-codespace.mdx"; # Getting Test Tokens (/academy/interchain-messaging/03-avalanche-starter-kit/04-networks) --- title: Getting Test Tokens description: Fund your wallet for development updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- Before you can deploy contracts or send transactions, you'll need test tokens on both the Fuji C-Chain and Dispatch networks. ## Getting Test Tokens 1. Open Core wallet and switch to Fuji Testnet: - Click the network selector (top of window) - Select "Fuji (C-Chain)" 2.
Get test tokens: - **Recommended:** Create a [Builder Hub account](https://build.avax.network/login) and connect your wallet to receive tokens automatically on C-Chain and Dispatch - **Alternative:** Use external faucets: - Fuji C-Chain: [Faucet](https://core.app/tools/testnet-faucet/?subnet=c&token=c) with code **avalanche-academy** - Dispatch: [Faucet](https://core.app/tools/testnet-faucet/?subnet=dispatch&token=dispatch) 3. Verify your balances inside the Codespace; you should have **2 AVAX** and **2 DIS**: ```bash # Check balance on Fuji C-Chain cast balance $FUNDED_ADDRESS --rpc-url fuji-c # Check balance on Dispatch cast balance $FUNDED_ADDRESS --rpc-url fuji-dispatch ``` You can also check your balance from the Core extension; don't forget to toggle **testnet mode** inside the advanced settings. ## Available Networks For this course, we'll be using two testnet networks: ### Fuji C-Chain - RPC URL: https://api.avax-test.network/ext/bc/C/rpc - Chain ID: 43113 - Native Token: AVAX - Teleporter Messenger: 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf - Teleporter Registry: 0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228 ### Dispatch Testnet - RPC URL: https://subnets.avax.network/dispatch/testnet/rpc - Chain ID: 779672 - Native Token: DIS - Teleporter Messenger: 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf - Teleporter Registry: 0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228 Both networks are configured with the same Teleporter Messenger and Registry addresses for seamless cross-chain communication. # Contract Development with Foundry (/academy/interchain-messaging/03-avalanche-starter-kit/05-foundry-quickstart) --- title: Contract Development with Foundry description: Deploy and interact with smart contracts updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- Now that your environment is set up and funded, let's understand the key Foundry commands we'll be using throughout this tutorial.
## Understanding Foundry Commands ### forge create The `forge create` command is used for deploying smart contracts. Key aspects: - Requires an RPC URL to specify which network to deploy to - Needs a private key for transaction signing - Can handle constructor arguments if your contract needs them - Returns the deployed contract's address - Supports contract verification on block explorers ### cast send `cast send` is used for state-changing interactions with contracts. This includes: - Sending transactions that modify contract state - Calling functions that require gas - Approving token transfers - Executing contract functions with parameters - Requires private key for transaction signing ### cast call `cast call` is for read-only operations that don't modify state: - Reading contract state variables - Calling view/pure functions - Doesn't require gas or transaction signing - Returns function results immediately - Useful for verifying contract state ## Development Best Practices 1. Always test with small transactions first 2. Save contract addresses in environment variables 3. Double-check network configurations and addresses 4. Monitor transactions in block explorers 5. Keep private keys secure and never commit them ## Learn More For more detailed information: - [Foundry Book](https://book.getfoundry.sh/) - [Avalanche Documentation](https://docs.avax.network/) - [Teleporter Documentation](https://docs.avax.network/build/cross-chain/teleporter/overview) # ICM Basics (/academy/interchain-messaging/04-icm-basics/01-icm-basics) --- title: ICM Basics description: Learn about the Avalanche Interchain Messaging basics. updated: 2024-05-31 authors: [martineckardt] icon: Book --- Teleporter enables you to send messages from one blockchain to another blockchain in the Avalanche network by simply calling the `sendCrossChainMessage` on the `TeleporterMessenger` contract. 
This invokes a smart contract on another Avalanche L1, where the called contract implements the `ITeleporterReceiver` interface to receive messages on the destination Avalanche L1. We will look at what happens under the hood in a later chapter. ![](/common-images/teleporter/teleporter-source-destination.png) ## What You Will Learn In this section, you will go through the following topics: - **Recap: Bytes, Encoding, and Decoding:** Understand how data is encoded and decoded in bytes - **Sending messages:** Write your first simple sender contract - **Receiving messages:** Write your first simple receiver contract ## Exercise You will apply your learned knowledge by building your first Cross-Chain application! This application will be deployed on two chains: the Fuji testnet (C-Chain) and the Dispatch test L1. Therefore, you will deploy two contracts: - **Sender on Fuji C-Chain:** Send a message with a simple string - **Receiver on Dispatch test L1:** Receives the message and saves the most recent string At the end of the section, you will be adapting the example to build an 'Adder'. This application is similar to the example above, but the message will include a number. The receiving contract keeps track of the sum of all sent numbers. # Recap of Bytes, Encoding and Decoding (/academy/interchain-messaging/04-icm-basics/02-recap-bytes-encoding-decoding) --- title: Recap of Bytes, Encoding and Decoding description: Recap message encoding/decoding. updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- When we send an interchain message, we are just sending a single value of type bytes. This value can contain multiple values of various types (e.g. string, uint & address) using encoding and decoding. In this section we will recap these topics: - **Bytes:** What is this data type and why are we using it? - **Encoding:** How do we turn values into bytes? - **Decoding:** How can we turn the bytes back into the values? 
We will use these concepts extensively for the messages we send going forward, so make sure you read this carefully.

## Bytes

In Solidity, the bytes data type serves as a versatile container for encoding multiple values of different types into a single unified value. You can think of it as a flexible box that can hold various forms of data, such as integers, text, images, or any binary information.

The bytes data type allows you to mix and match these different types of data and put them all in one box. This makes it a pragmatic choice for encapsulating diverse data: everything is efficiently packaged into a single unit for transfer, and recipients can decode and extract the individual components as needed.

## Encoding & Decoding

In Solidity we can use the functions `abi.encode()` and `abi.decode()` for encoding and decoding. These are part of the Solidity language, so we do not need to import anything to use them.

![Encoding and Decoding](/common-images/solidity/encoding-decoding.png)

### Encoding

When we encode data, we convert it into a byte array. This is useful when we want to send data from one contract to another. We can encode multiple values into a single byte array, which can then be decoded by the receiving contract.

```solidity
string memory someString = "test";
uint someNumber = 42;

bytes memory message = abi.encode(someString, someNumber);
```

This single `message` value can now be used as the payload of a Teleporter message.

### Decoding

When we decode data, we convert it from a byte array back into its original form. This is useful when we receive data from another contract. We can decode the byte array into its original values.

```solidity
(
    string memory someString,
    uint someNumber
) = abi.decode(message, (string, uint));
```

This will give us back the original values that we encoded before. Note that we need to know the types of the encoded values and their order to decode them correctly.
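Putting both halves together, the encode/decode pair is lossless. The following minimal sketch (the contract and function names are illustrative, not part of the course repository) shows a full round trip in a single pure function:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.18;

// Illustrative sketch only: encode two values into bytes, then decode them back.
contract EncodingRecap {
    function roundTrip() external pure returns (string memory, uint256) {
        // Pack a string and a number into a single bytes value.
        bytes memory message = abi.encode("test", uint256(42));

        // Unpack, providing the types in the same order they were encoded.
        (string memory someString, uint256 someNumber) =
            abi.decode(message, (string, uint256));

        return (someString, someNumber);
    }
}
```

Calling `roundTrip()` returns the original values `("test", 42)`, since decoding simply reverses the encoding.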
# Sending a Message (/academy/interchain-messaging/04-icm-basics/03-sending-a-message)

---
title: Sending a Message
description: Learn to send messages with Avalanche Interchain Messaging.
updated: 2024-05-31
authors: [martineckardt]
icon: BookOpen
---

Sending a message is nothing more than a simple contract call to the Interchain Messaging messenger contract. The dApp on the source L1 has to call the `sendCrossChainMessage` function of the Interchain Messaging contract. The Interchain Messaging contract implements the `ITeleporterMessenger` interface below. Note that the dApp itself does not have to implement the interface.

```solidity title="/lib/icm-contracts/contracts/teleporter/ITeleporterMessenger.sol"
pragma solidity 0.8.18;

struct TeleporterMessageInput {
    bytes32 destinationBlockchainID;
    address destinationAddress;
    TeleporterFeeInfo feeInfo;
    uint256 requiredGasLimit;
    address[] allowedRelayerAddresses;
    bytes message;
}

struct TeleporterFeeInfo {
    address feeTokenAddress;
    uint256 amount;
}

/**
 * @dev Interface that describes functionalities for a cross chain messenger.
 */
interface ITeleporterMessenger {
    /**
     * @dev Emitted when sending an interchain message cross chain.
     */
    event SendCrossChainMessage(
        uint256 indexed messageID,
        bytes32 indexed destinationBlockchainID,
        TeleporterMessage message,
        TeleporterFeeInfo feeInfo
    );

    /**
     * @dev Called by transactions to initiate the sending of a cross L1 message.
     */
    function sendCrossChainMessage(TeleporterMessageInput calldata messageInput)
        external
        returns (uint256);
}
```

The `sendCrossChainMessage` function takes a `TeleporterMessageInput` struct as input. It bundles multiple values that will be included in the payload of the Warp message:

- **`destinationBlockchainID`:** The blockchain ID in hex where the contract that should receive the message is deployed. This is not the EVM chain ID you may know from adding a network to a wallet, but the blockchain ID on the P-Chain.
The blockchain ID is the transaction ID of the P-Chain transaction that created the blockchain, e.g.: 0xd7cdc6f08b167595d1577e24838113a88b1005b471a6c430d79c48b4c89cfc53
- **`destinationAddress`:** The address of the contract that should receive the message
- **`feeInfo`:** A struct consisting of the address of an ERC20 contract in which the fee is paid, as well as the amount of tokens to be paid as an incentive for the relayer. We will look at this later in more detail.
- **`requiredGasLimit`:** The amount of gas the delivery of the message requires. If the relayer provides the required gas, the message will be considered delivered whether or not its execution succeeds, such that the relayer can claim their fee reward.
- **`allowedRelayerAddresses`:** An array of addresses of allowed relayers. An empty allowed relayers list means anyone is allowed to deliver the message. We will look at this later in more detail.
- **`message`:** The message to be sent as bytes. The message can contain multiple encoded values. DApps using Interchain Messaging are responsible for defining the exact format of this payload in a way that can be decoded on the receiving end. The message can hold multiple values that are encoded in a single bytes object. For example, applications may encode multiple method parameters on the sending side, then decode this data in the contract implementing the `receiveTeleporterMessage` function and call another contract with the parameters from there.

# Sender Contract (/academy/interchain-messaging/04-icm-basics/04-create-sender-contract)

---
title: Sender Contract
description: Create a contract to send messages with Teleporter.
updated: 2024-05-31
authors: [martineckardt]
icon: Terminal
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

Let's start by deploying our sender contract on the C-Chain. It will be responsible for calling the `TeleporterMessenger` contract, encoding our message and sending it to the destination chain.
### Read the Sender Contract

The following contract is located inside the `contracts/interchain-messaging/send-receive` directory. Read through the contract below and understand what is happening:

```solidity title="contracts/interchain-messaging/send-receive/senderOnCChain.sol"
// (c) 2023, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.

// SPDX-License-Identifier: Ecosystem

pragma solidity ^0.8.18;

import "@teleporter/ITeleporterMessenger.sol"; // [!code highlight]

contract SenderOnCChain {
    ITeleporterMessenger public immutable messenger =
        ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf); // [!code highlight]

    /**
     * @dev Sends a message to another chain.
     */
    function sendMessage(address destinationAddress, string calldata message) external {
        messenger.sendCrossChainMessage( // [!code highlight]
            TeleporterMessageInput({
                // BlockchainID of Dispatch L1
                destinationBlockchainID: 0x9f3be606497285d0ffbb5ac9ba24aa60346a9b1812479ed66cb329f394a4b1c7, // [!code highlight]
                destinationAddress: destinationAddress,
                feeInfo: TeleporterFeeInfo({feeTokenAddress: address(0), amount: 0}),
                requiredGasLimit: 100000,
                allowedRelayerAddresses: new address[](0),
                message: abi.encode(message)
            })
        );
    }
}
```

The key things to understand:

- **Importing ITeleporterMessenger (Line 8):** We are importing the `ITeleporterMessenger` interface we looked at in the previous activity.
- **Defining the messenger contract (Line 12):** We are defining a `messenger` contract instance using the imported interface. It is important to note that our cross-chain dApp is not implementing the interface itself, but initializes a contract using that interface.
- **Sending the message (Line 21):** We are sending the message by calling the function of our `messenger`. As input we are defining a `TeleporterMessageInput`. The `destinationBlockchainID` should be set to the Dispatch test L1's blockchain ID.
We will need to provide the address of the receiving contract on the Dispatch test L1 as a parameter to the function, since we have not deployed it yet and don't know the address at this time. - **No fees (Line 25):** In this exercise we are not providing any fees to the relayer for relaying the message. This is only possible since the relayer we are running here is configured to pick up any message even if it does not provide any rewards. - **Encoding the Message (Line 31):** The `TeleporterMessageInput` defines a message as an array of bytes. For now we will just simply encode the string with `abi.encode()`. In the future activities, you will see how we can encode multiple values of any type in that message. - **Hardcoded destinationBlockchainId:** For this course, we are using Dispatch, but normally you will have to replace the `destinationBlockchainID` with whatever chain you want to send a message to. ### Deploy Sender Contract To deploy a contract using Foundry use the following command: ```bash forge create --rpc-url fuji-c --private-key $PK --broadcast contracts/interchain-messaging/send-receive/senderOnCChain.sol:SenderOnCChain ``` ``` [⠊] Compiling... [⠒] Compiling 2 files with Solc 0.8.18 [⠢] Solc 0.8.18 finished in 81.53ms Compiler run successful! Deployer: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC // [$FUNDED_ADDRESS] Deployed to: 0x5DB9A7629912EBF95876228C24A848de0bfB43A9 // [$SENDER_ADDRESS] Transaction hash: 0xcde7873e9e3c68fb00a2ad6644dceb64a01a41941da46de5a0f559d6d70a1638 ``` ### Save Sender Address Then save the sender contract address in an environment variable: ```bash export SENDER_ADDRESS={your-sender-address} ``` # Receiving a Message (/academy/interchain-messaging/04-icm-basics/05-receiving-a-message) --- title: Receiving a Message description: Learn to receive messages with Avalanche Interchain Messaging. 
updated: 2024-05-31
authors: [martineckardt]
icon: BookOpen
---

To receive a message, we need to enable our cross-L1 dApps to be called by the Interchain Messaging contract.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/teleporter/teleporter-source-destination-with-relayer-SvUFYGP77XxLjoyWqf7IpC85Ssxmmo.png)

The Interchain Messaging contract does not know our contract and what functions it has. Therefore, our dApp on the destination L1 has to implement the `ITeleporterReceiver` interface. It is very straightforward and only requires a single method for receiving messages, which is then called by the Interchain Messaging contract:

```solidity
pragma solidity 0.8.18;

/**
 * @dev Interface that cross-chain applications must implement to receive messages from Teleporter.
 */
interface ITeleporterReceiver {
    /**
     * @dev Called by TeleporterMessenger on the receiving chain.
     *
     * @param originChainID is provided by the TeleporterMessenger contract.
     * @param originSenderAddress is provided by the TeleporterMessenger contract.
     * @param message is the TeleporterMessage payload set by the sender.
     */
    function receiveTeleporterMessage(
        bytes32 originChainID,
        address originSenderAddress,
        bytes calldata message
    ) external;
}
```

The function `receiveTeleporterMessage` has three parameters:

- **`originChainID`**: The chain ID where the message originates from, meaning where the user or contract called the `sendCrossChainMessage` function of the Interchain Messaging contract
- **`originSenderAddress`**: The address of the user or contract that called the `sendCrossChainMessage` function of the Interchain Messaging contract on the origin L1
- **`message`**: The message encoded in bytes

An example contract that can receive Interchain Messaging messages and store them in a mapping could look like this:

```solidity
pragma solidity 0.8.18;

import "https://github.com/ava-labs/teleporter/blob/main/contracts/src/Teleporter/ITeleporterMessenger.sol";
import "https://github.com/ava-labs/teleporter/blob/main/contracts/src/Teleporter/ITeleporterReceiver.sol";

contract MessageReceiver is ITeleporterReceiver {
    // Messages sent to this contract.
    struct Message {
        address sender;
        string message;
    }

    mapping(bytes32 => Message) private _messages;

    ITeleporterMessenger public immutable teleporterMessenger;

    // Errors
    error Unauthorized();

    constructor(address teleporterMessengerAddress) {
        teleporterMessenger = ITeleporterMessenger(teleporterMessengerAddress);
    }

    /**
     * @dev See {ITeleporterReceiver-receiveTeleporterMessage}.
     *
     * Receives a message from another chain.
     */
    function receiveTeleporterMessage(
        bytes32 originChainID,
        address originSenderAddress,
        bytes calldata message
    ) external {
        // Only the Interchain Messaging receiver can deliver a message.
        if (msg.sender != address(teleporterMessenger)) {
            revert Unauthorized();
        }

        string memory messageString = abi.decode(message, (string));
        _messages[originChainID] = Message(originSenderAddress, messageString);
    }
}
```

This contract stores the last `Message` and its sender for each chain it has received messages from.
When it is instantiated, the address of the Interchain Messaging contract is supplied to the constructor. The contract implements the `ITeleporterReceiver` interface and therefore also implements the `receiveTeleporterMessage` function.

# Receiver Contract (/academy/interchain-messaging/04-icm-basics/06-create-receiver-contract)

---
title: Receiver Contract
description: Create a contract to receive messages with Teleporter.
updated: 2024-05-31
authors: [martineckardt]
icon: Terminal
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

Now it's time to deploy our receiver contract to our L1. It will implement the callback for the `TeleporterMessenger` contract when the message is received, decoding our message and storing the last received string.

### Read the Receiver Contract

The following contract is located inside the `contracts/interchain-messaging/send-receive` directory. Read through the contract below and understand what is happening:

```solidity title="contracts/interchain-messaging/send-receive/receiverOnDispatch.sol"
// (c) 2023, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.

// SPDX-License-Identifier: Ecosystem

pragma solidity ^0.8.18;

import "@teleporter/ITeleporterMessenger.sol";
import "@teleporter/ITeleporterReceiver.sol";

contract ReceiverOnDispatch is ITeleporterReceiver {
    ITeleporterMessenger public immutable messenger =
        ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf);

    string public lastMessage;

    function receiveTeleporterMessage(bytes32, address, bytes calldata message) external {
        // Only the Interchain Messaging receiver can deliver a message.
        require(msg.sender == address(messenger), "ReceiverOnDispatch: unauthorized TeleporterMessenger");

        // Store the message.
        lastMessage = abi.decode(message, (string));
    }
}
```

**The key things to understand:**

- **Importing Interchain Messaging contracts (L8, L9):** We are importing the `ITeleporterMessenger` and `ITeleporterReceiver` interfaces we looked at in the previous activity.
- **Inheriting from ITeleporterReceiver (L11):** We are inheriting the interface that requires us to implement the `receiveTeleporterMessage()` function.
- **Defining the lastMessage variable (L14):** Setting the `lastMessage` variable as `public` makes it readable from the outside without the need for a `getValue` function.
- **Implementing receiveTeleporterMessage (L16):** We implement the function that will be called when the message is received. The `require` statement ensures that only the Interchain Messaging contract can call it, so no other account can spoof message delivery.
- **Decode message (L21):** We decode the message using `abi.decode(message, (string))`, which takes the byte array as the first input and a tuple of the types of the encoded data as the second. Since our message only contains a single value, the tuple only has one entry.

### Deploy Receiver Contract

To deploy a contract using Foundry, use the following command:

```bash
forge create --rpc-url fuji-dispatch --private-key $PK contracts/interchain-messaging/send-receive/receiverOnDispatch.sol:ReceiverOnDispatch --broadcast
```

```
[⠊] Compiling...
[⠢] Compiling 2 files with Solc 0.8.18
[⠆] Solc 0.8.18 finished in 158.51ms
Compiler run successful!
Deployer: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC // [$FUNDED_ADDRESS]
Deployed to: 0x52C84043CD9c865236f11d9Fc9F56aa003c1f922 // [$RECEIVER_ADDRESS]
Transaction hash: 0x48a1ffaa8aa8011f842147a908ff35a1ebfb75a5a07eb37ae96a4cc8d7feafd7
```

### Save Receiver Contract Address

Then save the receiver contract address in an environment variable:

```bash
export RECEIVER_ADDRESS={your-receiver-address}
```

# Send a Message (/academy/interchain-messaging/04-icm-basics/07-send-a-message)

---
title: Send a Message
description: Send your first Cross-Chain message with Teleporter.
updated: 2024-05-31
authors: [martineckardt]
icon: Terminal
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

The final step is to send the message between the two chains. Assuming everything went smoothly so far, the sender and receiver contracts are deployed on the Fuji C-Chain and the Dispatch test L1, respectively.

### Send a Message

```bash
cast send --rpc-url fuji-c --private-key $PK $SENDER_ADDRESS "sendMessage(address,string)" $RECEIVER_ADDRESS "Hello"
```

A relayer will now take the message and deliver it to the destination.

### Verify Message was Received

Now let's check if the "Hello" string sent from the C-Chain was actually stored in the `lastMessage` variable on the Dispatch test L1. Run the following command:

```bash
cast call --rpc-url fuji-dispatch $RECEIVER_ADDRESS "lastMessage()(string)"
```

If the output says `Hello` then the message was successfully delivered and processed. Congratulations 🎉 You have just sent your first cross-chain message. This will be one of many to come!

# Adapt the contract (/academy/interchain-messaging/04-icm-basics/08-adapting-the-contract-exercise)

---
title: Adapt the contract
description: Assignment - Modify the Cross-Chain Messenger.
updated: 2024-05-31
authors: [martineckardt]
icon: Terminal
---

To deepen your understanding of the concepts, you will now change the provided contracts.
Instead of sending a string, send a number and add up the numbers on the destination chain.

Once you have finished the assignment, you can review the solution at `/contracts/interchain-messaging/send-receive-assignment-solution` in the [Avalanche-Starter-Kit](https://github.com/ava-labs/avalanche-starter-kit/tree/interchain-messaging/contracts/interchain-messaging/send-receive-assignment-solution)

# Two-Way Communication (/academy/interchain-messaging/05-two-way-communication/01-two-way-communication)

---
title: Two-Way Communication
description: Learn about roundtrip messages.
updated: 2024-05-31
authors: [martineckardt]
icon: Book
---

When we send messages with Interchain Messaging, we can check if a message has been delivered, but we do not get any feedback on whether the message was processed correctly, nor any return value. To achieve this, we need to send a message back from the receiver to the original sender.

![](/common-images/teleporter/two-way-communication.png)

## What You Will Learn

In this section, you will go through the following topics:

- **Sender & Receiver:** Understand how we need to change the contracts to send a message back
- **Send & Track:** Send a roundtrip message and follow the logs
- **Adapt the example:** Take what you learned and adapt the example

## Exercises

You will apply your learned knowledge by building your first Cross-Chain application! This application will be deployed on two chains: the Fuji testnet (C-Chain) and the Dispatch test L1.

Therefore, you will deploy two contracts:

- **Sender on Fuji C-Chain:** Sends a message with a simple string
- **Receiver on Dispatch test L1:** Receives the message, adds something to the string and sends it back

At the end of the section, you will adapt the example to build a cross-chain Avalanche L1 dApp that sends a number to a contract on another Avalanche L1 and then receives the result of a mathematical operation.
This application is similar to the example above, but the message will include a number. The receiving contract will perform a mathematical operation. # Sender Contract (/academy/interchain-messaging/05-two-way-communication/02-sender-contract) --- title: Sender Contract description: Adapt the sender contract to also receive messages updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- The sender contract has now two tasks: - **Send a message to the receiver**: Same as in the last example - **Receive a message back from the receiver**: Now our sender contract needs to be able to receive a message back from the receiver. Therefore, we need to change the sender contract to be able to receive a message back. We will need to implement the same interface `ITeleporterReceiver` as the receiver contract and implement the `receiveTeleporterMessage` function. ```solidity title="contracts/interchain-messaging/send-roundtrip/senderOnCChain.sol" // (c) 2023, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. // SPDX-License-Identifier: Ecosystem pragma solidity ^0.8.18; import "@teleporter/ITeleporterMessenger.sol"; import "@teleporter/ITeleporterReceiver.sol"; // [!code highlight] contract SenderOnCChain is ITeleporterReceiver { // [!code highlight] ITeleporterMessenger public immutable messenger = ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf); string public roundtripMessage; /** * @dev Sends a message to another chain. 
*/ function sendMessage(address destinationAddress) external { messenger.sendCrossChainMessage( TeleporterMessageInput({ // BlockchainID of Dispatch L1 destinationBlockchainID: 0x9f3be606497285d0ffbb5ac9ba24aa60346a9b1812479ed66cb329f394a4b1c7, destinationAddress: destinationAddress, feeInfo: TeleporterFeeInfo({feeTokenAddress: address(0), amount: 0}), requiredGasLimit: 100000, allowedRelayerAddresses: new address[](0), message: abi.encode("Hello") }) ); } function receiveTeleporterMessage(bytes32, address, bytes calldata message) external { // [!code highlight:7] // Only the Interchain Messaging receiver can deliver a message. require(msg.sender == address(messenger), "SenderOnCChain: unauthorized TeleporterMessenger"); // Store the message. roundtripMessage = abi.decode(message, (string)); } } ``` # Create the Sender Contract (/academy/interchain-messaging/05-two-way-communication/03-create-sender-contract) --- title: Create the Sender Contract description: Deploy roundtrip sender contract updated: 2024-05-31 authors: [martineckardt] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Go ahead and deploy the sender contract on the C-Chain. ### Deploy the Sender Contract ```bash forge create --rpc-url fuji-c --private-key $PK contracts/interchain-messaging/send-roundtrip/senderOnCChain.sol:SenderOnCChain --broadcast ``` ``` [⠊] Compiling... [⠢] Compiling 1 files with Solc 0.8.18 [⠆] Solc 0.8.18 finished in 165.79ms Compiler run successful! 
Deployer: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC // [$FUNDED_ADDRESS] Deployed to: 0xA4cD3b0Eb6E5Ab5d8CE4065BcCD70040ADAB1F00 // [$SENDER_ADDRESS] Transaction hash: 0x4f41cf829fbc525b64d9773c41dc9fabb3b93dfd03bf6c1568dcc4a4c6bdeb1a ``` ### Update the Sender Address Overwrite the `SENDER_ADDRESS` environment variable with the new address: ```bash export SENDER_ADDRESS={your-sender-address} ``` # Receiver Contract (/academy/interchain-messaging/05-two-way-communication/04-receiver-contract) --- title: Receiver Contract description: Adapt the receiver contract to send messages back updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- The receiver contract now has two tasks: - **Receive a message from the sender**: Same as in the [last example](/academy/interchain-messaging/04-icm-basics/05-receiving-a-message) - **Send a message back to the sender**: Now our receiver contract needs to be able to send a message back to the sender. Therefore, we need to change the receiver contract to be able to send a message back. We will need to instantiate a `TeleporterMessenger` and call the `sendCrossChainMessage()` function. ```solidity title="contracts/interchain-messaging/send-roundtrip/receiverOnDispatch.sol" // (c) 2023, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. // SPDX-License-Identifier: Ecosystem pragma solidity ^0.8.18; import "@teleporter/ITeleporterMessenger.sol"; // [!code highlight] import "@teleporter/ITeleporterReceiver.sol"; contract ReceiverOnDispatch is ITeleporterReceiver { ITeleporterMessenger public immutable messenger = ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf); // [!code highlight] function receiveTeleporterMessage(bytes32 sourceBlockchainID, address originSenderAddress, bytes calldata message) external { // Only the Interchain Messaging receiver can deliver a message. 
require(msg.sender == address(messenger), "ReceiverOnDispatch: unauthorized TeleporterMessenger"); // Send Roundtrip message back to sender string memory response = string.concat(abi.decode(message, (string)), " World!"); messenger.sendCrossChainMessage( // [!code highlight:9] TeleporterMessageInput({ // Blockchain ID of C-Chain destinationBlockchainID: sourceBlockchainID, destinationAddress: originSenderAddress, feeInfo: TeleporterFeeInfo({feeTokenAddress: address(0), amount: 0}), requiredGasLimit: 100000, allowedRelayerAddresses: new address[](0), message: abi.encode(response) }) ); } } ``` # Create the Receiver Contract (/academy/interchain-messaging/05-two-way-communication/05-create-the-receiver-contract) --- title: Create the Receiver Contract description: Deploy roundtrip receiver contract updated: 2024-05-31 authors: [martineckardt] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Go ahead and deploy the receiver contract: ### Deploy the Receiver Contract ```bash forge create --rpc-url fuji-dispatch --private-key $PK contracts/interchain-messaging/send-roundtrip/receiverOnDispatch.sol:ReceiverOnDispatch --broadcast ``` ``` [⠊] Compiling... [⠢] Compiling 1 files with Solc 0.8.18 [⠆] Solc 0.8.18 finished in 101.71ms Compiler run successful! 
Deployer: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC // [$FUNDED_ADDRESS] Deployed to: 0x17aB05351fC94a1a67Bf3f56DdbB941aE6c63E25 // [$RECEIVER_ADDRESS] Transaction hash: 0x11df9e14bdae4af60a618a900faa955d75e0ce7dd0f2ccc1ea6a764c149ef805 ``` ### Update the Receiver Address Update the `RECEIVER_ADDRESS` environment variable with the new address: ```bash export RECEIVER_ADDRESS={your-receiver-address} ``` # Send a Roundtrip Message (/academy/interchain-messaging/05-two-way-communication/06-send-a-roundtrip-message) --- title: Send a Roundtrip Message description: Send roundtrip message updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Alright, now let's send the message: ### Send the Message: ```bash cast send --rpc-url fuji-c --private-key $PK $SENDER_ADDRESS "sendMessage(address)" $RECEIVER_ADDRESS ``` ### Verify Message Receipt To check whether the message has been received, we can call the `roundtripMessage()` function on the sender contract. ```bash cast call --rpc-url fuji-c $SENDER_ADDRESS "roundtripMessage()(string)" ``` If successful, you should see the following output: ```bash "Hello World!" ``` {/* TODO: Add instructions for setting up and running the relayer in Docker before this section ### Check the Relayer Logs To understand in more detail what happened, check out the relayer logs: ```bash avalanche interchain relayer logs ``` */} # Adapt the Contracts (/academy/interchain-messaging/05-two-way-communication/07-adapt-the-contracts) --- title: Adapt the Contracts description: Assignment - Modify the roundtrip messenger dApp updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- Taking what you've learned in the previous chapter, adapt the sender to send a number instead of a string. The receiver should now multiply the number by 2 and send it back to the sender. The sender saves the result in a public variable. 
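The adapted receiver could take roughly the following shape. This is a hedged sketch, not the reference solution: the contract name is made up, and the messenger address and gas limit are simply copied from the earlier examples.

```solidity
// SPDX-License-Identifier: Ecosystem
pragma solidity ^0.8.18;

import "@teleporter/ITeleporterMessenger.sol";
import "@teleporter/ITeleporterReceiver.sol";

// Sketch: decode a number, multiply it by 2, and send the result back
// to the chain and contract the message came from.
contract MultiplierReceiverSketch is ITeleporterReceiver {
    ITeleporterMessenger public immutable messenger =
        ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf);

    function receiveTeleporterMessage(
        bytes32 sourceBlockchainID,
        address originSenderAddress,
        bytes calldata message
    ) external {
        // Only the Interchain Messaging contract can deliver a message.
        require(msg.sender == address(messenger), "unauthorized TeleporterMessenger");

        // Decode the incoming number and compute the result.
        uint256 result = abi.decode(message, (uint256)) * 2;

        // Reply to the original sender on the source chain.
        messenger.sendCrossChainMessage(
            TeleporterMessageInput({
                destinationBlockchainID: sourceBlockchainID,
                destinationAddress: originSenderAddress,
                feeInfo: TeleporterFeeInfo({feeTokenAddress: address(0), amount: 0}),
                requiredGasLimit: 100000,
                allowedRelayerAddresses: new address[](0),
                message: abi.encode(result)
            })
        );
    }
}
```

On the sender side, you would only need to swap `abi.encode("Hello")` for an encoded `uint256` and decode a `uint256` into a public variable in its own `receiveTeleporterMessage`.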
# Invoking Functions (/academy/interchain-messaging/06-invoking-functions/01-invoking-functions)

---
title: Invoking Functions
description: Learn how to invoke functions on the destination chain.
updated: 2024-05-31
authors: [martineckardt]
icon: Book
---

Well done, you successfully learned how to send simple data back and forth between blockchains. In this section we will go further and invoke a smart contract function on the destination chain.

![](/common-images/teleporter/invoking-functions.png)

## What You Will Learn

In this section, you will go through the following topics:

- **Encoding:** Most smart contract functions have multiple parameters. So the first thing we will learn is how to encode multiple values in the message
- **Specify which function to call:** Most smart contracts have multiple functions. We will learn how to specify which function to call and encode different parameters depending on which function is being called

## Exercise

In this section you will apply the learned knowledge by building a cross-chain dApp where multiple functions are called on the destination chain.

- Build a Calculator that takes two parameters and adds them up
- Modify the Calculator to support more functions that can be called from the source Avalanche L1

# Encoding of multiple Values (/academy/interchain-messaging/06-invoking-functions/02-encoding-multiple-values)

---
title: Encoding of multiple Values
description: Learn how to encode multiple function parameters
updated: 2024-05-31
authors: [martineckardt]
icon: BookOpen
---

In this section, we will learn how to pack multiple values into a single message.
![](/common-images/teleporter/message-multiple-parameters.png)

We can use `abi.encode()` to encode multiple values into a single byte array:

```solidity
bytes memory message = abi.encode(
    someString,
    someNumber,
    someAddress
);
```

Here we can see how the function `abi.encode()` is used to turn multiple values (`someString`, `someNumber` & `someAddress`) of various types (`string`, `uint` & `address`) into a single value of type `bytes` called `message`. This byte array can then be sent to the destination chain using Teleporter.

```solidity
function sendMessage(address destinationAddress) external returns (uint256 messageID) {
    string memory someString = "test";
    uint someNumber = 43;
    address someAddress = address(0);

    bytes memory message = abi.encode( // [!code highlight:4]
        someString,
        someNumber,
        someAddress
    );

    return teleporterMessenger.sendCrossChainMessage(
        TeleporterMessageInput({
            destinationBlockchainID: destinationBlockchainID,
            destinationAddress: destinationAddress,
            feeInfo: TeleporterFeeInfo({
                feeTokenAddress: feeContractAddress,
                amount: adjustedFeeAmount
            }),
            requiredGasLimit: requiredGasLimit,
            allowedRelayerAddresses: new address[](0),
            message: message // [!code highlight]
        })
    );
}
```

The receiving contract can then decode the byte array back into its original values:

```solidity
function receiveTeleporterMessage(
    bytes32 originChainID,
    address originSenderAddress,
    bytes calldata message
) external {
    // Only the Interchain Messaging receiver can deliver a message.
    if (msg.sender != address(teleporterMessenger)) {
        revert Unauthorized();
    }

    // Decoding the function parameters // [!code highlight:6]
    (
        string memory someString,
        uint256 someNumber,
        address someAddress
    ) = abi.decode(message, (string, uint256, address));

    // Calling the internal function
    _someFunction(someString, someNumber, someAddress); // [!code highlight]
}

function _someFunction(string memory someString, uint256 someNumber, address someAddress) private {
    // Do something
}
```

Here we are using `abi.decode()` to unpack the three values (`someString`, `someNumber` & `someAddress`) from the parameter `message` of type `bytes`. As you can see, we need to provide the message as well as the types of the values encoded in it. It is important to note that the types must be listed in the same order as the values were encoded:

```solidity
(
    string memory someString,
    uint someNumber,
    address someAddress
) = abi.decode(message, (string, uint, address)); // [!code highlight]
```

# Create Simple Calculator Sender (/academy/interchain-messaging/06-invoking-functions/03-create-simple-calculator-sender)

---
title: Create Simple Calculator Sender
description: Create a contract that sends multiple parameters of a function
updated: 2024-05-31
authors: [martineckardt]
icon: Terminal
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

Now, our goal is to create a dApp that works as a cross-Avalanche L1 calculator, receiving multiple parameters and using those for a calculation. For now our calculator will only have the `Sum` function.

### Create Sender Contract

On the Fuji C-Chain, we need to create the sender part of our cross-chain calculator. It will send two numbers (`uint`) to the receiver contract on the Dispatch test L1.
```solidity title="contracts/interchain-messaging/invoking-functions/SimpleCalculatorSenderOnCChain.sol" pragma solidity ^0.8.18; import "@teleporter/ITeleporterMessenger.sol"; contract SimpleCalculatorSenderOnCChain { ITeleporterMessenger public immutable teleporterMessenger = ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf); function sendAddMessage(address destinationAddress, uint256 num1, uint256 num2) external { teleporterMessenger.sendCrossChainMessage( TeleporterMessageInput({ // BlockchainID of Dispatch L1 destinationBlockchainID: 0x9f3be606497285d0ffbb5ac9ba24aa60346a9b1812479ed66cb329f394a4b1c7, destinationAddress: destinationAddress, feeInfo: TeleporterFeeInfo({feeTokenAddress: address(0), amount: 0}), requiredGasLimit: 100000, allowedRelayerAddresses: new address[](0), message: encodeAddData(num1, num2) }) ); } //Encode helpers function encodeAddData(uint256 a, uint256 b) public pure returns (bytes memory) { bytes memory paramsData = abi.encode(a, b); // [!code highlight] return paramsData; } } ``` To increase the readability of the code, we have created a helper function `encodeAddData` that encodes the two numbers into a single byte array. ### Deploy the Sender Contract Deploy the sender contract on the Fuji C-Chain: ```bash forge create --rpc-url fuji-c --private-key $PK contracts/interchain-messaging/invoking-functions/SimpleCalculatorSenderOnCChain.sol:SimpleCalculatorSenderOnCChain --broadcast ``` ``` [⠊] Compiling... [⠒] Compiling 1 files with Solc 0.8.18 [⠢] Solc 0.8.18 finished in 84.20ms Compiler run successful! 
Deployer: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC // [$FUNDED_ADDRESS] Deployed to: 0x789a5FDac2b37FCD290fb2924382297A6AE65860 // [$SENDER_ADDRESS] Transaction hash: 0xd6cb47dd4ff38e4447711ebbec8b646b93f492f48e80b51719f860984cc25413 ``` ### Save the Sender Contract Address Overwrite the `SENDER_ADDRESS` environment variable with the new address: ```bash export SENDER_ADDRESS={your-sender-address} ``` # Create Simple Calculator Receiver (/academy/interchain-messaging/06-invoking-functions/04-create-simple-calulcator-receiver) --- title: Create Simple Calculator Receiver description: Create a contract that receives multiple parameters and executes a function updated: 2024-05-31 authors: [martineckardt] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; On the Dispatch test L1, we need to create the receiver part of our cross-chain calculator. It will receive two numbers and store the result. ### Create Receiver Contract ```solidity title="contracts/interchain-messaging/invoking-functions/SimpleCalculatorReceiverOnDispatch.sol" pragma solidity ^0.8.18; import "@teleporter/ITeleporterMessenger.sol"; import "@teleporter/ITeleporterReceiver.sol"; contract SimpleCalculatorReceiverOnDispatch is ITeleporterReceiver { ITeleporterMessenger public immutable teleporterMessenger = ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf); uint256 public result_num; function receiveTeleporterMessage(bytes32, address, bytes calldata message) external { // Only the Interchain Messaging receiver can deliver a message.
require( msg.sender == address(teleporterMessenger), "CalculatorReceiverOnDispatch: unauthorized TeleporterMessenger" ); (uint256 a, uint256 b) = abi.decode(message, (uint256, uint256)); // [!code highlight:2] _calculatorAdd(a, b); } function _calculatorAdd(uint256 _num1, uint256 _num2) internal { result_num = _num1 + _num2; } } ``` ### Deploy the Receiver Contract Deploy the receiver contract on the Dispatch test L1: ```bash forge create --rpc-url fuji-dispatch --private-key $PK contracts/interchain-messaging/invoking-functions/SimpleCalculatorReceiverOnDispatch.sol:SimpleCalculatorReceiverOnDispatch --broadcast ``` ``` [⠊] Compiling... [⠒] Compiling 1 files with Solc 0.8.18 [⠢] Solc 0.8.18 finished in 44.12ms Compiler run successful! Deployer: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC // [$FUNDED_ADDRESS] Deployed to: 0x5DB9A7629912EBF95876228C24A848de0bfB43A9 // [$RECEIVER_ADDRESS] Transaction hash: 0x2d40c53b493556463a28c458e40bc455a248df69a10679bef84145974b7424f3 ``` ### Save the Receiver Contract Address Overwrite the `RECEIVER_ADDRESS` environment variable with the new address: ```bash export RECEIVER_ADDRESS={your-receiver-address} ``` # Call simple Calculator (/academy/interchain-messaging/06-invoking-functions/05-call-simple-calculator) --- title: Call simple Calculator description: Execute a function in one chain from another chain updated: 2024-05-31 authors: [martineckardt] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Alright, now let's call our calculator: ### Call the Sender Contract ```bash cast send --rpc-url fuji-c --private-key $PK $SENDER_ADDRESS "sendAddMessage(address, uint256, uint256)" $RECEIVER_ADDRESS 2 3 ``` We need to call the `sendAddMessage` function on the sender contract with the address of the receiver contract and the two numbers we want to add. If you didn't export the sender and receiver addresses, you will get an execution reverted error. For this example, we will add 2 + 3.
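The message the sender emits is just the ABI encoding of the two numbers. As a rough off-chain sketch (plain Python, no web3 libraries — for illustration only), `abi.encode(uint256, uint256)` produces two 32-byte big-endian words, and the receiver's `abi.decode` simply splits them back apart:

```python
def encode_add_message(a: int, b: int) -> bytes:
    # abi.encode(uint256, uint256): two 32-byte big-endian words
    return a.to_bytes(32, "big") + b.to_bytes(32, "big")

def decode_add_message(message: bytes) -> tuple[int, int]:
    # Mirror of abi.decode(message, (uint256, uint256)) on the receiver
    assert len(message) == 64, "expected exactly two 32-byte words"
    return int.from_bytes(message[:32], "big"), int.from_bytes(message[32:], "big")

message = encode_add_message(2, 3)
a, b = decode_add_message(message)
print(a + b)  # 5, matching result_num on the receiver
```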
### Verify Message Receipt To check whether the message has been received, we can call the `result_num()` function on the `Receiver` contract. ```bash cast call --rpc-url fuji-dispatch $RECEIVER_ADDRESS "result_num()(uint)" ``` ```bash 5 ``` 2 + 3 = 5. Our cross-chain calculation was successful! # Encoding the Function Name (/academy/interchain-messaging/06-invoking-functions/06-encoding-function-name) --- title: Encoding the Function Name description: Work with multiple functions in a single Cross-Chain dApp updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Great work. We can now pack multiple values into a message. In this section, we will learn how to encode the function name into the message and, depending on the function, its parameters. Let's imagine a more advanced calculator that not only has a single `add` function but also a `concatenate` function that joins two strings. For the add function we need to encode two numbers. For the concatenate function we need to encode two strings. How can we go about this? The concept is simple: it's like packing an envelope inside another envelope: ![](/common-images/teleporter/message-function-call.png) ## Encoding the Function Name and Parameters The first step is to create a `CalculatorAction` enum that specifies the different functions that can be called on the calculator.
```solidity title="contracts/interchain-messaging/invoking-functions/CalculatorActions.sol" pragma solidity ^0.8.18; enum CalculatorAction { add, concatenate } ``` In the next step we can add this to our `encode helpers` in the sender contract: ```solidity title="contracts/interchain-messaging/invoking-functions/CalculatorSenderOnCChain.sol" pragma solidity ^0.8.18; import "@teleporter/ITeleporterMessenger.sol"; import "./CalculatorActions.sol"; // [!code highlight] contract CalculatorSenderOnCChain { ITeleporterMessenger public immutable teleporterMessenger = ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf); function sendAddMessage(address destinationAddress, uint256 num1, uint256 num2) external { teleporterMessenger.sendCrossChainMessage( TeleporterMessageInput({ // BlockchainID of Dispatch L1 destinationBlockchainID: 0x9f3be606497285d0ffbb5ac9ba24aa60346a9b1812479ed66cb329f394a4b1c7, destinationAddress: destinationAddress, feeInfo: TeleporterFeeInfo({feeTokenAddress: address(0), amount: 0}), requiredGasLimit: 100000, allowedRelayerAddresses: new address[](0), message: encodeAddData(num1, num2) // [!code highlight] }) ); } function sendConcatenateMessage(address destinationAddress, string memory text1, string memory text2) external { teleporterMessenger.sendCrossChainMessage( TeleporterMessageInput({ // BlockchainID of Dispatch L1 destinationBlockchainID: 0x9f3be606497285d0ffbb5ac9ba24aa60346a9b1812479ed66cb329f394a4b1c7, destinationAddress: destinationAddress, feeInfo: TeleporterFeeInfo({feeTokenAddress: address(0), amount: 0}), requiredGasLimit: 100000, allowedRelayerAddresses: new address[](0), message: encodeConcatenateData(text1, text2) // [!code highlight] }) ); } // Encode helpers function encodeAddData(uint256 a, uint256 b) public pure returns (bytes memory) { bytes memory paramsData = abi.encode(a, b); // [!code highlight:2] return abi.encode(CalculatorAction.add, paramsData); } function encodeConcatenateData(string memory a, string 
memory b) public pure returns (bytes memory) { bytes memory paramsData = abi.encode(a, b); // [!code highlight:2] return abi.encode(CalculatorAction.concatenate, paramsData); } } ``` As you can see, we call `abi.encode` twice in the encode helpers: first to encode the function parameters, and then to encode the action identifier together with the byte array containing those parameters. ## Decode the Function Name and Parameters Let's now look at the receiver: ```solidity title="contracts/interchain-messaging/invoking-functions/CalculatorReceiverOnDispatch.sol" pragma solidity ^0.8.18; import "@teleporter/ITeleporterMessenger.sol"; import "@teleporter/ITeleporterReceiver.sol"; import "./CalculatorActions.sol"; contract CalculatorReceiverOnDispatch is ITeleporterReceiver { ITeleporterMessenger public immutable teleporterMessenger = ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf); uint256 public result_num; string public result_string; function receiveTeleporterMessage(bytes32, address, bytes calldata message) external { // Only the Interchain Messaging receiver can deliver a message. require( msg.sender == address(teleporterMessenger), "CalculatorReceiverOnDispatch: unauthorized TeleporterMessenger" ); // Decoding the Action type: // [!code highlight:2] (CalculatorAction actionType, bytes memory paramsData) = abi.decode(message, (CalculatorAction, bytes)); // Route to the appropriate function. // [!code highlight:10] if (actionType == CalculatorAction.add) { (uint256 a, uint256 b) = abi.decode(paramsData, (uint256, uint256)); _calculatorAdd(a, b); } else if (actionType == CalculatorAction.concatenate)
{ (string memory text1, string memory text2) = abi.decode(paramsData, (string, string)); _calculatorConcatenateStrings(text1, text2); } else { revert("CalculatorReceiverOnDispatch: invalid action"); } } function _calculatorAdd(uint256 _num1, uint256 _num2) internal { result_num = _num1 + _num2; } function _calculatorConcatenateStrings(string memory str1, string memory str2) internal { bytes memory str1Bytes = bytes(str1); bytes memory str2Bytes = bytes(str2); bytes memory combined = new bytes(str1Bytes.length + str2Bytes.length + 1); for (uint256 i = 0; i < str1Bytes.length; i++) { combined[i] = str1Bytes[i]; } combined[str1Bytes.length] = " "; for (uint256 i = 0; i < str2Bytes.length; i++) { combined[str1Bytes.length + i + 1] = str2Bytes[i]; } result_string = string(combined); } } ``` You can see that we first decode the `CalculatorAction` enum: ```solidity // Decoding the Action type: (CalculatorAction actionType, bytes memory paramsData) = abi.decode(message, (CalculatorAction, bytes)); ``` Then, based on the action, we decide how to unpack the parameters: ```solidity // Route to the appropriate function. if (actionType == CalculatorAction.add) { (uint256 a, uint256 b) = abi.decode(paramsData, (uint256, uint256)); // [!code highlight] _calculatorAdd(a, b); } else if (actionType == CalculatorAction.concatenate) { (string memory text1, string memory text2) = abi.decode(paramsData, (string, string)); // [!code highlight] _calculatorConcatenateStrings(text1, text2); } else { revert("CalculatorReceiverOnDispatch: invalid action"); } ``` For the `add` function we decode two numbers, and for the `concatenate` function we decode two strings. After decoding, we call the appropriate internal function. ## Try it Out Deploy the sender and receiver contracts and try out the `add` and `concatenate` functions.
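Before deploying, it can help to see the two-layer ("envelope in envelope") encoding concretely. Below is a minimal plain-Python sketch of the ABI layout that `abi.encode(CalculatorAction.add, paramsData)` produces — hand-rolled for illustration; in practice an ABI library would do this:

```python
def word(x: int) -> bytes:
    """One 32-byte big-endian ABI word."""
    return x.to_bytes(32, "big")

def encode_params(a: int, b: int) -> bytes:
    # Inner envelope: abi.encode(uint256, uint256)
    return word(a) + word(b)

def encode_action_message(action: int, params: bytes) -> bytes:
    # Outer envelope: abi.encode(enum, bytes).
    # Head: the action word, then the offset (0x40) to the dynamic bytes data.
    head = word(action) + word(0x40)
    # Tail: length word, then the data padded to a 32-byte boundary.
    padded = params + b"\x00" * (-len(params) % 32)
    return head + word(len(params)) + padded

ADD = 0  # CalculatorAction.add
message = encode_action_message(ADD, encode_params(2, 3))
print(len(message))  # 160 bytes: two head words + length word + 64-byte payload
```

The receiver reverses this in two steps: the first word selects the action, and the inner payload is then decoded according to that action's parameter types.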
### Deploy the Sender Contract ```bash forge create --rpc-url fuji-c --private-key $PK contracts/interchain-messaging/invoking-functions/CalculatorSenderOnCChain.sol:CalculatorSenderOnCChain --broadcast ``` ``` [⠃] Compiling... [⠆] Compiling 2 files with Solc 0.8.18 [⠰] Solc 0.8.18 finished in 240.23ms Compiler run successful! Deployer: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC // [$FUNDED_ADDRESS] Deployed to: 0x8B3BC4270BE2abbB25BC04717830bd1Cc493a461 // [$SENDER_ADDRESS] Transaction hash: 0xf9cce28a714764bb265bba7522bfd10d620fa0cb0f5dae26de2ac773b0a878ee ``` ### Save the Sender Address: ```bash export SENDER_ADDRESS={your-sender-address} ``` ### Deploy the Receiver Contract: ```bash forge create --rpc-url fuji-dispatch --private-key $PK contracts/interchain-messaging/invoking-functions/CalculatorReceiverOnDispatch.sol:CalculatorReceiverOnDispatch --broadcast ``` ``` [⠊] Compiling... [⠢] Compiling 1 files with Solc 0.8.18 [⠆] Solc 0.8.18 finished in 148.40ms Compiler run successful! Deployer: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC // [$FUNDED_ADDRESS] Deployed to: 0x5DB9A7629912EBF95876228C24A848de0bfB43A9 // [$RECEIVER_ADDRESS] Transaction hash: 0xa8efb88abfef486d2caba30cb4146b1dc56a0ee88c7fb4c46adccdf1414ae39e ``` ### Save the Receiver address: ```bash export RECEIVER_ADDRESS={your-receiver-address} ``` ### Call the Functions Now you can call the `sendAddMessage` and `sendConcatenateMessage` functions on the sender contract and see the results on the receiver contract. 
```bash cast send --rpc-url fuji-c --private-key $PK $SENDER_ADDRESS "sendAddMessage(address, uint256, uint256)" $RECEIVER_ADDRESS 1 2 ``` ### Verify the Result: ```bash cast call --rpc-url fuji-dispatch $RECEIVER_ADDRESS "result_num()(uint)" ``` # Extend the Calculator (/academy/interchain-messaging/06-invoking-functions/07-extend-calculator) --- title: Extend the Calculator description: Assignment- Extend calculator functionality updated: 2024-05-31 authors: [martineckardt] icon: Terminal --- Now let's add a third feature to the calculator: adding three numbers. We want you to practice handling different numbers of parameters depending on the action to perform, so your task will be to add a `tripleSum` function that sums up three values of type `uint256`. You will need to: - Add the `tripleSum` action to the enum file - Add a new encode helper and send function to the sender contract - Add a new route to the receiver and implement the `tripleSum` function Check out the solution under `contracts/interchain-messaging/invoking-functions` in the Avalanche-Starter-Kit on the **interchain-messaging** branch. # Interchain Messaging Registry (/academy/interchain-messaging/07-icm-registry/01-icm-registry) --- title: Interchain Messaging Registry description: Learn about the Interchain Messaging Registry to manage multiple Interchain Messaging versions. updated: 2024-05-31 authors: [martineckardt] icon: Book --- When sending a message from the source chain, we call the Interchain Messaging contract. The TeleporterMessenger contract is non-upgradable: once a version of the contract is deployed, it cannot be changed. This is intended to prevent any changes to the deployed contract that could introduce bugs or vulnerabilities. However, new versions of the TeleporterMessenger contract may still need to be deployed in the future. So far, we have used a fixed address for the TeleporterMessenger in all of our contracts; it is hardcoded.
This is not a good practice, as it makes the contract less flexible and harder to upgrade. ```solidity contract SenderOnCChain { ITeleporterMessenger public immutable teleporterMessenger = ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf); // [!code highlight] function sendMessage( address destinationAddress, string calldata message ) external { teleporterMessenger.sendCrossChainMessage( // ... ); } } ``` While we could manually update the TeleporterMessenger address in our contract by hand, the TeleporterRegistry provides an easy way to always use the latest version of the TeleporterMessenger. ## What you will learn In this section, you will explore the following topics: - How the Interchain Messaging Registry works - How to retrieve the latest version of the TeleporterMessenger on a blockchain # How the ICM Registry works (/academy/interchain-messaging/07-icm-registry/02-how-the-icm-registry-works) --- title: How the ICM Registry works description: Learn about the different functionality available in the Interchain Messaging Registry. updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- TeleporterRegistry keeps track of TeleporterMessenger contract versions. Cross-Avalanche L1 dApps can request the latest or a specific version of the TeleporterMessenger: Internally, the TeleporterRegistry maintains a mapping of TeleporterMessenger contract versions to their addresses. Each registry's mapping of version to contract address is independent of the registries on other blockchains, and each chain decides on its own registry mapping entries. So the contract registered as version 4 on one chain does not have to match version 4 on another chain. ```solidity title="/lib/icm-contracts/contracts/teleporter/registry/TeleporterRegistry.sol" contract TeleporterRegistry { // .... // The latest protocol version. 0 means no protocol version has been added, and isn't a valid version.
uint256 public latestVersion; // Mappings that keep track of the protocol version and corresponding contract address. mapping(uint256 version => address protocolAddress) private _versionToAddress; mapping(address protocolAddress => uint256 version) private _addressToVersion; // ... } ``` In the `TeleporterRegistry` contract, the `latestVersion` state variable returns the highest version number that has been registered in the registry. The `getLatestTeleporter` function returns the `ITeleporterMessenger` that is registered with the corresponding version. Version zero is an invalid version, and is used to indicate that a `TeleporterMessenger` contract has not been registered yet. ```solidity title="/lib/icm-contracts/contracts/teleporter/registry/TeleporterRegistry.sol" contract TeleporterRegistry { // .... // Mappings that keep track of the protocol version and corresponding contract address. mapping(uint256 version => address protocolAddress) private _versionToAddress; // The latest protocol version. 0 means no protocol version has been added, and isn't a valid version. uint256 public latestVersion; function getLatestTeleporter() external view returns (ITeleporterMessenger) { return ITeleporterMessenger(getAddressFromVersion(latestVersion)); } function getAddressFromVersion(uint256 version) public view returns (address) { require(version != 0, "TeleporterRegistry: zero version"); address protocolAddress = _versionToAddress[version]; require(protocolAddress != address(0), "TeleporterRegistry: version not found"); return protocolAddress; } // ... } ``` If a cross-Avalanche L1 dApp prefers a specific version, it can also call the `getAddressFromVersion` function directly: ```solidity title="/lib/icm-contracts/contracts/teleporter/registry/TeleporterRegistry.sol" contract TeleporterRegistry { // Mappings that keep track of the protocol version and corresponding contract address. mapping(uint256 version => address protocolAddress) private _versionToAddress; // ....
function getAddressFromVersion(uint256 version) public view returns (address) { require(version != 0, "TeleporterRegistry: zero version"); address protocolAddress = _versionToAddress[version]; require(protocolAddress != address(0), "TeleporterRegistry: version not found"); return protocolAddress; } // ... } ``` If you are interested in the entire implementation, check it out [here](https://github.com/ava-labs/teleporter/blob/main/contracts/teleporter/registry/TeleporterRegistry.sol). # Interact with the ICM Registry (/academy/interchain-messaging/07-icm-registry/03-interact-with-the-registry) --- title: Interact with the ICM Registry description: Retrieve latest Interchain Messaging version using the Registry. updated: 2024-05-31 authors: [martineckardt] icon: Terminal --- The ICM Registry is already deployed on Fuji C-Chain, Dispatch, and Echo networks. You can use it to keep track of the latest ICM messenger version and its address. ### Retrieve Latest ICM Messenger Version Let's interact with the Registry by getting the latest version of the Interchain Messaging Messenger deployed on Fuji C-Chain: ```bash cast call --rpc-url fuji-c $TELEPORTER_REGISTRY_FUJI_C_CHAIN "latestVersion()(uint256)" ``` The response should be something like this: ```bash 1 ``` # Retrieving Interchain Messenger from the Registry (/academy/interchain-messaging/07-icm-registry/04-retrieving-icm-from-registry) --- title: Retrieving Interchain Messenger from the Registry description: Use Registry in a Cross-Chain dApp. updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- Let's now integrate the registry into a smart contract. 
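The registry's lookup rules — version zero is invalid, unregistered versions revert, and `getLatestTeleporter` is just a lookup of `latestVersion` — can be modeled in a few lines. This is a conceptual Python sketch of the Solidity behavior shown earlier, not real registry code:

```python
class TeleporterRegistryModel:
    """Minimal model of TeleporterRegistry's version bookkeeping."""

    def __init__(self):
        self.latest_version = 0          # 0 == nothing registered yet
        self._version_to_address = {}

    def add_protocol_version(self, version: int, address: str):
        # Register a messenger address; the highest version becomes the latest.
        self._version_to_address[version] = address
        self.latest_version = max(self.latest_version, version)

    def get_address_from_version(self, version: int) -> str:
        # Mirrors the two require() checks in the Solidity contract.
        if version == 0:
            raise ValueError("TeleporterRegistry: zero version")
        address = self._version_to_address.get(version)
        if address is None:
            raise ValueError("TeleporterRegistry: version not found")
        return address

    def get_latest_teleporter(self) -> str:
        return self.get_address_from_version(self.latest_version)

registry = TeleporterRegistryModel()
registry.add_protocol_version(1, "0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf")
print(registry.get_latest_teleporter())
```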
Let's go back to the very simple string sending contract from the beginning: ```solidity title="contracts/interchain-messaging/registry/SenderOnCChainWithRegistry.sol" pragma solidity ^0.8.18; import "@teleporter/upgrades/TeleporterRegistry.sol"; // [!code highlight] import "@teleporter/ITeleporterMessenger.sol"; contract SenderOnCChain { // The Interchain Messaging registry contract manages different Interchain Messaging contract versions. // [!code highlight:3] TeleporterRegistry public immutable teleporterRegistry = TeleporterRegistry(0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228); /** * @dev Sends a message to another chain. */ function sendMessage(address destinationAddress, string calldata message) external { ITeleporterMessenger messenger = teleporterRegistry.getLatestTeleporter(); // [!code highlight] messenger.sendCrossChainMessage( TeleporterMessageInput({ // BlockchainID of Dispatch L1 destinationBlockchainID: 0x9f3be606497285d0ffbb5ac9ba24aa60346a9b1812479ed66cb329f394a4b1c7, destinationAddress: destinationAddress, feeInfo: TeleporterFeeInfo({feeTokenAddress: address(0), amount: 0}), requiredGasLimit: 100000, allowedRelayerAddresses: new address[](0), message: abi.encode(message) }) ); } } ``` The key things to understand: - We import `TeleporterRegistry.sol` in addition to the messenger interface - We have a variable for the registry address instead of the messenger address - Before sending the message, we get the latest messenger version from the registry # Verify if Sender is Interchain Messaging (/academy/interchain-messaging/07-icm-registry/05-verify-sender-is-icm) --- title: Verify if Sender is Interchain Messaging description: Safety verification of the message sender updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- We can also leverage the registry to check if `msg.sender` is a registered Interchain Messaging contract.
Previously, we hardcoded this check in the contract: ```solidity import "@teleporter/ITeleporterMessenger.sol"; import "@teleporter/ITeleporterReceiver.sol"; contract ReceiverOnDispatch is ITeleporterReceiver { ITeleporterMessenger public immutable messenger = ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf); string public lastMessage; function receiveTeleporterMessage(bytes32, address, bytes calldata message) external { // Only the Interchain Messaging receiver can deliver a message. // [!code highlight:2] require(msg.sender == address(messenger), "ReceiverOnDispatch: unauthorized TeleporterMessenger"); // Store the message. lastMessage = abi.decode(message, (string)); } } ``` If a new Interchain Messaging contract version were deployed and used for sending messages, we would have to update the contract. Instead, we can use the registry to check if the sender is a registered Interchain Messaging contract. ```solidity pragma solidity ^0.8.18; import "@teleporter/upgrades/TeleporterRegistry.sol"; import "@teleporter/ITeleporterMessenger.sol"; import "@teleporter/ITeleporterReceiver.sol"; contract ReceiverOnDispatchWithRegistry is ITeleporterReceiver { // The Interchain Messaging registry contract manages different Interchain Messaging contract versions. TeleporterRegistry public immutable teleporterRegistry = // [!code highlight:2] TeleporterRegistry(0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228); string public lastMessage; function receiveTeleporterMessage(bytes32, address, bytes calldata message) external { // Only an Interchain Messaging Messenger registered in the registry can deliver a message. // [!code highlight:3] // Function throws an error if msg.sender is not registered. teleporterRegistry.getVersionFromAddress(msg.sender); // Store the message.
lastMessage = abi.decode(message, (string)); } } ``` # Avalanche Warp Messaging (/academy/interchain-messaging/08-avalanche-warp-messaging/01-avalanche-warp-messaging) --- title: Avalanche Warp Messaging description: Learn about Avalanche Warp Messaging, the Cross-Chain communication primitives on Avalanche. updated: 2024-05-31 authors: [martineckardt] icon: Book --- Avalanche Warp Messaging (AWM) was developed to make all Avalanche blockchains natively interoperable. Ideally, this solution would introduce as few trust assumptions as possible, meaning the blockchains would not depend on a third party or intermediary that could introduce centralization risk or single points of failure. AWM allows Avalanche L1s to communicate with one another via authenticated messages by providing signing and verification primitives in AvalancheGo. These are used by the blockchain VMs to sign outgoing messages and verify incoming messages. Any Virtual Machine (VM) on Avalanche can integrate AWM to send and receive messages across Avalanche L1s. AWM is therefore not EVM-specific, but a general cross-Avalanche L1 communication protocol. ## What you will learn In this section, you will go through the following topics: - **P-Chain:** What is the role of the P-Chain in Avalanche Warp Messaging? - **Warp Message Format:** What does a Warp message consist of? - **Warp Message Signing:** How does AWM utilize the multi-signature scheme BLS to create aggregated signatures? - **Warp Message Signature Verification:** How are the signatures of warp messages verified? - **Trust Assumptions:** What trust assumptions does AWM introduce? # Recap P-Chain (/academy/interchain-messaging/08-avalanche-warp-messaging/02-p-chain) --- title: Recap P-Chain description: Recap of the P-Chain's main functionality of handling validator operations.
updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- import PChain from "@/content/common/primary-network/p-chain.mdx"; # Warp Message Format (/academy/interchain-messaging/08-avalanche-warp-messaging/03-warp-message-format) --- title: Warp Message Format description: Learn about the structure of a Warp Message. updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- Warp Messages have a minimal message format that contains only the data absolutely necessary for transport: - **NetworkID:** The ID of the Network (Mainnet, Fuji Test Network, Local Test Network). This is included as a security mechanism, so messages from the Fuji Test Network cannot be replayed on the Mainnet. - **SourceChainID:** The chainID where the message originates from. This is not the EVM chain ID you may know from adding a network to a wallet, but the blockchain ID on the P-Chain. The P-Chain uses the transaction ID of the transaction that created the blockchain on the P-Chain as the chain ID, e.g.: 0xd7cdc6f08b167595d1577e24838113a88b1005b471a6c430d79c48b4c89cfc53 - **SourceAddress:** The address of the message sender - **Payload:** The payload of the message Every Warp message can be uniquely identified by its ID: the SHA256 hash of the serialized message (NetworkID, SourceChainID, SourceAddress & Payload). The fields above actually constitute an UnsignedMessage; no aggregated signature has been attached to it yet. We will examine this in the following sections. # AWM Relayer (/academy/interchain-messaging/08-avalanche-warp-messaging/04-awm-relayer) --- title: AWM Relayer description: Learn the basic AWM role and configuration. updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- The AWM Relayer is an application run independently of the blockchain clients. Anyone can run their own AWM Relayer to create a communication channel between two Avalanche L1s, and there can be multiple relayers for the same communication channel.
There is an open source implementation anyone can use [here](https://github.com/ava-labs/awm-relayer). [AvaCloud](https://www.avacloud.io) also offers its own hosted relayers. ![AWM Relayer data flow](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/teleporter/teleporter-source-destination-with-relayer-SvUFYGP77XxLjoyWqf7IpC85Ssxmmo.png) The AWM Relayer is responsible for picking up messages on the source Avalanche L1, aggregating BLS signatures from a sufficiently large share of the source Avalanche L1's validators, and submitting a transaction on the destination Avalanche L1. It is up to the cross-chain application to decide whether it allows any relayer to be part of the delivery process or restricts participation to certain relayers. ## AWM Relayer Config The AWM Relayer can be configured so that it is reusable for different networks (Mainnet, Fuji Testnet, Local) and communication channels between Avalanche L1s. The following configurations are available: ```json title="https://github.com/ava-labs/awm-relayer/blob/main/sample-relayer-config.json" { "info-api": { "base-url": "https://api.avax-test.network" }, "p-chain-api": { "base-url": "https://api.avax-test.network" }, "source-blockchains": [ { "subnet-id": "11111111111111111111111111111111LpoYY", "blockchain-id": "yH8D7ThNJkxmtkuv2jgBa4P1Rn3Qpr4pPr7QYNfcdoS6k6HWp", "vm": "evm", "rpc-endpoint": { "base-url": "https://api.avax-test.network/ext/bc/C/rpc" }, "ws-endpoint": { "base-url": "wss://api.avax-test.network/ext/bc/C/ws" }, "message-contracts": { "0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf": { "message-format": "teleporter", "settings": { "reward-address": "0x5072..."
} } } } ], "destination-blockchains": [ { "subnet-id": "7WtoAMPhrmh5KosDUsFL9yTcvw7YSxiKHPpdfs4JsgW47oZT5", "blockchain-id": "2D8RG4UpSXbPbvPCAWppNJyqTG2i2CAXSkTgmTBBvs7GKNZjsY", "vm": "evm", "rpc-endpoint": { "base-url": "https://subnets.avax.network/dispatch/testnet/rpc" }, "account-private-key": "0x7493..." } ] } ``` - **Info API URL:** An RPC endpoint where the AWM Relayer can access the Info API to retrieve the information about the network - **P-Chain API URL:** An RPC endpoint where the AWM Relayer can access the P-Chain to retrieve the validators of the source Avalanche L1 - **Source Avalanche L1 Configs:** An array of configurations for the source Avalanche L1s where the messages are picked up from - **Destination Avalanche L1 Configs:** An array of configurations for the destination Avalanche L1s where the messages are delivered to # Dataflow (/academy/interchain-messaging/08-avalanche-warp-messaging/05-dataflow) --- title: Dataflow description: Learn the complete flow of a Cross-Chain message. updated: 2024-05-31 authors: [martineckardt] icon: BookOpen --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Looking at the bigger picture, the data flow of an interchain message can be described as follows: ### Message Initialization Cross-Chain dApp initiating the message calls the Interchain Messaging Contract on the source Avalanche L1 ### Warp Precompile The Interchain Messaging contracts interacts with the AWM precompile of the EVM ### Message Relaying An AWM Relayer relays the message to the destination Chain. It periodically checks the source Avalanche L1 for outgoing messages and delivers these by calling the Interchain Messaging contract on the destination Avalanche L1. ### Signature Verification The Interchain Messaging contract on the destination chain interacts with AWM to verify the signature of the message and whether it has been signed by a sufficiently large stake share of the source Avalanche L1's validator set. 
### Destination Contract Call

The Interchain Messaging contract then calls the destination dApp contract

### Message Processing

The dApp decodes the message payload and processes it accordingly.

As you noticed in the `Teleporter Basics` chapter, most of the cross-chain communication has been abstracted away from the dApp developer. The only interfaces for them are:

- **Sending a message:** Simply calling the Interchain Messaging contract on the source chain
- **Receiving a message:** Being able to be called by the Interchain Messaging contract on the destination chain

To start, we will assume there is always an AWM Relayer to deliver our messages without any incentives. Let's dive deeper into the sending and receiving of messages in the next sections.

# Message Pickup (/academy/interchain-messaging/08-avalanche-warp-messaging/06-message-pickup)

---
title: Message Pickup
description: Learn about the role of the Relayer when picking up the message.
updated: 2024-05-31
authors: [martineckardt]
icon: BookOpen
---

When the AWM Relayer detects a new message, it collects the signatures of that message from the validators of the source Avalanche L1. Each validator uses the BLS private key corresponding to its BLS public key (registered on the P-Chain) to create a signature for the unsigned warp message we learned about earlier (in the Warp Message Format).

![](/common-images/awm/relayer-pickup.png)

The AWM Relayer does not necessarily need to collect the signatures of all validators. It only needs to ensure that the validators behind the collected signatures represent a sufficiently large share of the total stake of the Avalanche L1. It is up to the receiving Avalanche L1 to determine what the threshold is.

## New Outgoing Messages

How does an AWM Relayer learn about new messages? The AWM Relayer has two options to check for outgoing warp messages on the source Avalanche L1:

- **Polling:** The AWM Relayer periodically checks the source chain for new outgoing messages.
- **Notification:** The AWM Relayer is triggered whenever a new outgoing message is detected by a node.

## AWM Relayer Source Avalanche L1 Configuration

For the source Avalanche L1s, the AWM Relayer offers the following configuration:

```json
"source-blockchains": [
  {
    "subnet-id": "11111111111111111111111111111111LpoYY",
    "blockchain-id": "yH8D7ThNJkxmtkuv2jgBa4P1Rn3Qpr4pPr7QYNfcdoS6k6HWp",
    "vm": "evm",
    "rpc-endpoint": {
      "base-url": "https://api.avax-test.network/ext/bc/C/rpc"
    },
    "ws-endpoint": {
      "base-url": "wss://api.avax-test.network/ext/bc/C/ws"
    },
    "message-contracts": {
      "0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf": {
        "message-format": "teleporter",
        "settings": {
          "reward-address": "0x5072..."
        }
      }
    }
  }
],
```

- **subnet-id:** The ID of the source Avalanche L1
- **blockchain-id:** The chain ID of the source Avalanche L1's blockchain that should be checked for outgoing messages
- **vm:** A string specifying the virtual machine of the source Avalanche L1's blockchain. Currently, only the EVM is supported, but this field has been added in anticipation of communication between blockchains powered by different virtual machines in the future.
- **rpc-endpoint:** The host of a node that can be queried for outgoing messages
- **ws-endpoint:** The WebSocket endpoint of a node that can be queried for outgoing messages
- **message-contracts:** A map with the address of the Interchain Messaging contract as key and the following config parameters as values:
  - **message-format:** A string specifying the format. Currently, only the format "teleporter" exists, but this field has been added in anticipation of multiple formats in the future
  - **settings:** A dictionary with settings for this Interchain Messaging contract
    - **reward-address:** The address rewards are paid out to

# Message Delivery (/academy/interchain-messaging/08-avalanche-warp-messaging/07-message-delivery)

---
title: Message Delivery
description: Learn about the role of the Relayer for delivering the message.
updated: 2024-05-31
authors: [martineckardt]
icon: BookOpen
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

Next, the AWM Relayer delivers the message with the aggregated signature by simply submitting a transaction to the destination Avalanche L1. The transaction is propagated through the validator set of the destination blockchain just as any other regular transaction would be. In order to submit the transaction, the AWM Relayer needs access to the private key of a wallet holding the gas token of the destination blockchain to sign the transaction.

![](/common-images/awm/relayer-delivery.png)

When the validators verify the transaction, they perform the following steps:

### Query the P-Chain

The verifying validators of the destination L1 each query the validator set of the source Avalanche L1, with its stake weights and BLS public keys.

### Verify Sufficient Stake Weight

First, the verifying validators check whether the combined stake weight of the validators that have signed the warp message is large enough to accept the message (e.g. validators representing 50% of the total stake have signed the message). The required stake weight to accept a message from a source chain can be set arbitrarily by the destination chain. An Avalanche L1 may require 50% for chain A and 90% for chain B. If the stake weight of the validators behind the aggregate signature is insufficient, the message is rejected.

### Aggregate BLS Public Keys

The verifying validators of the destination L1 aggregate the BLS public keys of the source L1 validators that signed the message.

### Verify BLS Multi-Signature

The aggregated signature (created by the AWM Relayer) is now verified using the aggregated BLS public key (aggregated by the destination L1 validators). If the verification fails (e.g. the AWM Relayer modified the message), the message is rejected.

### Message Execution

If the verification is successful, the message is executed.
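These verification steps can be sketched in pseudocode. The following Python sketch is purely illustrative: the validator set, stake weights, and the 67% threshold are made-up values, and the actual BLS pairing check is stubbed out with a comment.

```python
# Illustrative sketch of the destination-side verification steps above.
# Validator data and the 67% threshold are made-up example values.

def verify_warp_message(signers, validators, threshold_num=67, threshold_den=100):
    """signers: set of validator IDs that signed the message.
    validators: dict mapping validator ID -> (stake_weight, bls_public_key)."""
    total_weight = sum(weight for weight, _ in validators.values())
    signed_weight = sum(validators[v][0] for v in signers)

    # Step 2: reject if the signers' combined stake weight is below the threshold
    if signed_weight * threshold_den < total_weight * threshold_num:
        return False

    # Step 3: aggregate the BLS public keys of the signers (a tuple stands in
    # for real elliptic-curve point addition here)
    aggregate_key = tuple(sorted(validators[v][1] for v in signers))

    # Step 4 (stubbed): a real implementation runs a BLS pairing check of the
    # aggregated signature against aggregate_key; here we only assert it exists.
    return len(aggregate_key) > 0

validators = {"A": (40, "pkA"), "B": (35, "pkB"), "C": (25, "pkC")}
print(verify_warp_message({"A", "B"}, validators))  # 75% of stake signed -> True
print(verify_warp_message({"C"}, validators))       # 25% of stake signed -> False
```

Note how the threshold comparison uses integer cross-multiplication rather than floating-point division, mirroring how consensus code typically avoids rounding ambiguity.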
## Why doesn't the AWM Relayer also aggregate the BLS Public Keys?

The BLS public key aggregation has to be done by every validator of the destination L1. Some may ask why we don't perform this action off-chain through the AWM Relayer and attach the result to the message (similar to the BLS signature aggregation). Even though this would make the verification much faster, there is a security issue here: How can we be sure that the aggregated public key actually represents the public keys of the signing validators? How do we know that the AWM Relayer did not just create a bunch of public keys and an aggregated signature of these? The only way to verify that is to aggregate the public keys of the signing validators and compare the result to the one sent by the AWM Relayer. Therefore, it is pointless to attach it to the message.

# Load Considerations (/academy/interchain-messaging/08-avalanche-warp-messaging/08-load-considerations)

---
title: Load Considerations
description: Learn which chains experience a load increase when sending a Cross-Chain message.
updated: 2024-05-31
authors: [martineckardt]
icon: BookOpen
---

Let's take a moment and determine where AWM takes up load:

- **Source Avalanche L1:** Validators sign the message
- **Relayer:** Signature aggregation
- **Destination Avalanche L1:** Validators verify the message using the BLS public key registry on the P-Chain

There is no record of the network communication, the signatures, or the message itself on the P-Chain. It solely functions as a registry of the BLS public keys that the destination Avalanche L1 validators read from. Since they each already validate the Primary Network, they are already tracking the P-Chain. Thus, sending messages from one Avalanche L1 to another does not put any load on the P-Chain or any other part of the Primary Network. This is a crucial feature, as there is no bottleneck or dependency between Avalanche L1s for cross-chain communication.
# Trust Assumptions (/academy/interchain-messaging/08-avalanche-warp-messaging/09-trust-assumption-of-awm)

---
title: Trust Assumptions
description: Learn where the trust lies when using AWM.
updated: 2024-05-31
authors: [martineckardt]
icon: BookOpen
---

Avalanche Warp Messaging uses a trick: the trusted third party is the validator set of the source L1. We are already trusting them, or else we wouldn't communicate with that L1.

The only thing that needs to be ensured is that there is always an AWM Relayer doing the signature aggregation and delivery of the warp message. Since the relayer has to pay gas on the destination chain, the sender needs to make sure it has enough incentive to do so. Either those incentives come from the sender itself, or from the receiving L1 if it has an inherent interest in onboarding users to its chain. You will learn how users can incentivize AWM Relayers with ERC20 tokens with Interchain Messaging in the following chapters. Generally, if there were no relayer available, anyone could run an AWM Relayer and deliver their own messages.

# Relayer Introduction (/academy/interchain-messaging/09-running-a-relayer/01-relayer-introduction)

---
title: Relayer Introduction
description: Learn how to run and manage an AWM Relayer.
updated: 2024-05-31
authors: [martineckardt]
icon: Book
---

As you now know, Interchain Messaging requires a running relayer to deliver the underlying warp messages. So far, we have just assumed a relayer is running. However, in a production environment, you will either need to run a relayer yourself or have someone else run it for you.

![AWM Relayer data flow](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/teleporter/teleporter-source-destination-with-relayer-SvUFYGP77XxLjoyWqf7IpC85Ssxmmo.png)

In this section you will learn how to run a relayer for Avalanche Warp Messaging.
As discussed in the section on AWM, the relayer is responsible for checking for outgoing warp messages on the source L1, aggregating the signatures, and delivering them with a transaction on the destination L1.

This chapter utilizes the reference relayer implementation created by Ava Labs. In this case, the relayer is a standalone application written in Go. There may be different implementations in the future that are adapted to special use cases. Anyone can build and run a relayer.

## What you will learn

In this section, you will learn how to:

- **Configure the Relayer:** Create and understand the relayer configuration file that defines which blockchains to monitor and how to interact with them
- **Set Up the Relayer Wallet:** Learn about the relayer's wallet requirements and how to fund it with testnet tokens
- **Run the Relayer:** Deploy and operate the ICM relayer using Docker
- **Monitor Message Delivery:** Track and verify cross-chain message delivery through relayer logs and blockchain transactions

# Configuration Breakdown (/academy/interchain-messaging/09-running-a-relayer/02-relayer-configuration)

---
title: Configuration Breakdown
description: Understand the Relayer Configuration
updated: 2024-05-31
authors: [martineckardt]
icon: BookOpen
---

A relayer is configured to relay messages between certain source and destination L1s.
Therefore, the configuration consists of three parts:

- **General Config:** Configuration independent of the source and destination L1s, concerning the network as a whole
- **Source Blockchain Configs:** All parameters necessary for the relayer to pick up the messages
- **Destination Blockchain Configs:** All parameters necessary to deliver the messages

Let's walk through an example JSON to understand the different values you will configure with the Toolbox in the next section:

## General Config

The general config contains, among others, the following parameters:

```json
{
  "p-chain-api": {
    "base-url": "http://127.0.0.1:9650",
    "query-parameters": {},
    "http-headers": null
  },
  "info-api": {
    "base-url": "http://127.0.0.1:9650",
    "query-parameters": {},
    "http-headers": null
  },
  // ...
}
```

- **info-api:** The URL of the [Info API](/docs/api-reference/info-api) node to which the relayer will connect to receive information like the NetworkID.
- **p-chain-api:** The URL of the Avalanche [P-Chain API](/docs/api-reference/p-chain/api) node to which the relayer will connect to query the validator sets.

## Source Blockchain Configs

Next is the configuration for our source blockchain. This is the configuration of the blockchain where messages will be initiated and picked up. The relayer will aggregate the signatures of the validators of that L1.

```json
{
  // General Config ...
  "source-blockchains": [
    {
      "subnet-id": "11111111111111111111111111111111LpoYY",
      "blockchain-id": "epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku",
      "vm": "evm",
      "rpc-endpoint": {
        "base-url": "http://127.0.0.1:9650/ext/bc/epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku/rpc",
        "query-parameters": null,
        "http-headers": null
      },
      "ws-endpoint": {
        "base-url": "ws://127.0.0.1:9650/ext/bc/epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku/ws",
        "query-parameters": null,
        "http-headers": null
      },
      "message-contracts": {
        "0x0000000000000000000000000000000000000000": {
          "message-format": "off-chain-registry",
          "settings": {
            "teleporter-registry-address": "0x17aB05351fC94a1a67Bf3f56DdbB941aE6c63E25"
          }
        },
        "0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf": {
          "message-format": "teleporter",
          "settings": {
            "reward-address": "0xbAE6Ff34d6Da45128C1ddFEDA008e55A328f5665"
          }
        }
      }
    }
  ]
  // Destination Blockchains
}
```

- **subnet-id:** The Subnet ID of the network that the source blockchain is part of. In this example, this is the Subnet ID of the Primary Network.
- **blockchain-id:** The blockchain ID of the source blockchain that should be monitored for outgoing messages.
- **vm:** A string specifying the virtual machine of the source L1's blockchain. Currently, only the EVM is supported, but this field has been added in anticipation of communication between blockchains powered by different virtual machines in the future.
- **rpc-endpoint:** An API config containing:
  - **base-url:** RPC endpoint of the source L1's API node.
  - **query-parameters:** Additional query parameters to include in the API requests
  - **http-headers:** Additional HTTP headers to include in the API requests
- **ws-endpoint:** An API config containing:
  - **base-url:** The WebSocket endpoint of the source L1's API node
  - **query-parameters:** Additional query parameters to include in the API requests
  - **http-headers:** Additional HTTP headers to include in the API requests
- **message-contracts:** A map of contract addresses to the config options of the protocol (e.g. Teleporter) at that address. Each message protocol config consists of a unique message-format name and the raw JSON settings. `0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf` is the address of the TeleporterMessenger on the Fuji C-Chain.
  - **message-format:** Should be set to `teleporter`. Additional message formats next to Interchain Messaging may be developed in the future
  - **settings > reward-address:** The address that will be rewarded if an L1 incentivizes relayers to send messages on its behalf. This is the address of the wallet you will create for this relayer.
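Putting these fields together, a source-blockchain entry can also be assembled programmatically, which is handy when generating configs for many chains. The following Python sketch reuses the example values from the config above; the helper function and the required-field set are illustrative, not part of the relayer's actual tooling:

```python
import json

# Illustrative helper that builds a source-blockchain entry like the one above.
# REQUIRED is an assumption for this sketch, not the relayer's full schema.
REQUIRED = {"subnet-id", "blockchain-id", "vm", "rpc-endpoint", "ws-endpoint", "message-contracts"}

def make_source_config(subnet_id, blockchain_id, rpc_url, ws_url,
                       teleporter_address, reward_address):
    return {
        "subnet-id": subnet_id,
        "blockchain-id": blockchain_id,
        "vm": "evm",  # currently the only supported VM
        "rpc-endpoint": {"base-url": rpc_url},
        "ws-endpoint": {"base-url": ws_url},
        "message-contracts": {
            teleporter_address: {
                "message-format": "teleporter",
                "settings": {"reward-address": reward_address},
            }
        },
    }

cfg = make_source_config(
    "11111111111111111111111111111111LpoYY",
    "epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku",
    "http://127.0.0.1:9650/ext/bc/epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku/rpc",
    "ws://127.0.0.1:9650/ext/bc/epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku/ws",
    "0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf",
    "0xbAE6Ff34d6Da45128C1ddFEDA008e55A328f5665",
)
assert REQUIRED <= cfg.keys()  # sanity check: all required fields are present
print(json.dumps(cfg, indent=2))
```

A small check like the `assert` above catches a missing field before the relayer rejects the config at startup.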
## Destination Blockchains

Next is the configuration for our destination blockchain. This is the configuration of the blockchain where messages will be delivered. The relayer submits the delivering transactions to this blockchain, signed by the account configured below.

```json
{
  "destination-blockchains": [
    {
      "subnet-id": "11111111111111111111111111111111LpoYY",
      "blockchain-id": "epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku",
      "vm": "evm",
      "rpc-endpoint": {
        "base-url": "http://127.0.0.1:9650/ext/bc/epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku/rpc",
        "query-parameters": null,
        "http-headers": null
      },
      "kms-key-id": "",
      "kms-aws-region": "",
      "account-private-key": "6dc6ba26b9b17f82b7b44fc316857a35ff201613072d500231ce3f2ee235bc16"
    }
  ]
}
```

For each destination L1, the relayer has the following config parameters:

- **subnet-id:** The ID of the L1 that the destination blockchain is part of.
- **blockchain-id:** The blockchain ID of the destination blockchain.
- **vm:** A string specifying the virtual machine of the destination L1's blockchain. Currently, only the EVM is supported, but this field has been added in anticipation of communication between blockchains powered by different virtual machines in the future.
- **rpc-endpoint:** The RPC endpoint of the destination L1's API node. Used in favor of api-node-host, api-node-port, and encrypt-connection when constructing the endpoint.
- **kms-key-id:** The ID of the KMS key to use for signing transactions on the destination blockchain. Only one of account-private-key or kms-key-id should be provided. If kms-key-id is provided, then kms-aws-region is required. Please note that the private key in KMS should be exclusive to the relayer.
- **kms-aws-region:** The AWS region in which the KMS key is located. Required if kms-key-id is provided.
- **account-private-key:** A private key for a wallet that holds gas tokens on the destination L1.
This is required so the relayer can sign the transaction that delivers the warp message. If you have been following the course up to this point with the Avalanche Starter Kit, the Avalanche-CLI will provide you with a relayer already funded to perform transactions between the local C-Chain and your own chain.

Some configuration fields have been omitted for the purpose of this exercise. If you are interested in reading the extensive list of relayer configurations, you can visit the awm-relayer GitHub repository [here](https://github.com/ava-labs/awm-relayer/tree/main?tab=readme-ov-file#configuration).

## Two-Way Messaging

The relayer can be configured to support two-way messaging between multiple Layer 1 blockchains. This means you can use a single relayer instance to handle communication in both directions between your chains.

To enable two-way communication:

- Add both chains to the `source-blockchains` array to listen for messages on both sides
- Add both chains to the `destination-blockchains` array to deliver messages to both sides
- Ensure the relayer account has enough funds on both chains for transaction fees

The ICM Relayer component in the Toolbox automatically handles this configuration for you, making it easy to set up two-way messaging between your chains.

# Configure & Run Relayer (/academy/interchain-messaging/09-running-a-relayer/03-configure-and-run-relayer)

---
title: Configure & Run Relayer
description: Configure and run an ICM relayer using Toolbox
updated: 2024-05-31
authors: [martineckardt]
icon: Terminal
---

The ICM Relayer is responsible for delivering messages between chains. Using the Toolbox interface, you can easily configure and manage your relayer instance.

## Relayer Setup

The Toolbox interface provides a simple way to configure and set up your relayer. Follow these steps to run your relayer:

1. Select both **Dispatch and C-Chain** as source and destination networks
2.
Fund the relayer address with enough native tokens on each chain
3. Copy and run the configuration command
4. Start the relayer using the provided Docker command in your **interchain-messaging Codespaces terminal**

To check whether the relayer is running, you can run this command:

```bash
docker logs -f relayer
```

The output should look like this:

```bash
{"level":"info","timestamp":"2024-07-11T01:38:40.093Z","logger":"awm-relayer","caller":"main/main.go:94","msg":"Initializing awm-relayer"}
{"level":"info","timestamp":"2024-07-11T01:38:40.093Z","logger":"awm-relayer","caller":"main/main.go:99","msg":"Set config options."}
{"level":"info","timestamp":"2024-07-11T01:38:40.093Z","logger":"awm-relayer","caller":"main/main.go:102","msg":"Initializing destination clients"}
{"level":"info","timestamp":"2024-07-11T01:38:40.094Z","logger":"awm-relayer","caller":"evm/destination_client.go:105","msg":"Initialized destination client","blockchainID":"epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku","evmChainID":"43112","nonce":0}
{"level":"info","timestamp":"2024-07-11T01:38:40.096Z","logger":"awm-relayer","caller":"evm/destination_client.go:105","msg":"Initialized destination client","blockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ","evmChainID":"123","nonce":0}
{"level":"info","timestamp":"2024-07-11T01:38:40.096Z","logger":"awm-relayer","caller":"main/main.go:121","msg":"Initializing app request network"}
{"level":"info","timestamp":"2024-07-11T01:38:41.757Z","logger":"awm-relayer","caller":"main/main.go:440","msg":"starting metrics server...","port":9091}
{"level":"info","timestamp":"2024-07-11T01:38:41.757Z","logger":"awm-relayer","caller":"main/main.go:373","msg":"Creating application relayers","originBlockchainID":"epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku"}
{"level":"info","timestamp":"2024-07-11T01:38:41.758Z","logger":"awm-relayer","caller":"main/main.go:373","msg":"Creating application
relayers","originBlockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ"} {"level":"info","timestamp":"2024-07-11T01:38:41.758Z","logger":"awm-relayer","caller":"checkpoint/checkpoint.go:40","msg":"Creating checkpoint manager","relayerID":"0x96ba68805e937e7695ffb1c956dab0aff39ec5529fd037d8168eb6d5587866fe","startingHeight":4} {"level":"info","timestamp":"2024-07-11T01:38:41.758Z","logger":"awm-relayer","caller":"checkpoint/checkpoint.go:40","msg":"Creating checkpoint manager","relayerID":"0xf8b2da5b262b41a2ee2d2b46e61e68c95ec7fb8f6b0abbe4101f75f8281b85aa","startingHeight":4} {"level":"info","timestamp":"2024-07-11T01:38:41.758Z","logger":"awm-relayer","caller":"main/main.go:283","msg":"Created application relayers","blockchainID":"epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku"} {"level":"info","timestamp":"2024-07-11T01:38:41.759Z","logger":"awm-relayer","caller":"relayer/listener.go:127","msg":"Creating relayer","subnetID":"11111111111111111111111111111111LpoYY","subnetIDHex":"0000000000000000000000000000000000000000000000000000000000000000","blockchainID":"epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku","blockchainIDHex":"55e1fcfdde01f9f6d4c16fa2ed89ce65a8669120a86f321eef121891cab61241"} {"level":"info","timestamp":"2024-07-11T01:38:41.760Z","logger":"awm-relayer","caller":"checkpoint/checkpoint.go:40","msg":"Creating checkpoint manager","relayerID":"0xe73a74b50d44b1b216bf806bc7e230d5305e544d35d41c2313cda26766ccd8ec","startingHeight":4} {"level":"info","timestamp":"2024-07-11T01:38:41.760Z","logger":"awm-relayer","caller":"checkpoint/checkpoint.go:40","msg":"Creating checkpoint manager","relayerID":"0x66f44659b15f9d819caf71978931df3d5d7925b5d800cc4ef8153f12151401f6","startingHeight":4} {"level":"info","timestamp":"2024-07-11T01:38:41.760Z","logger":"awm-relayer","caller":"main/main.go:283","msg":"Created application relayers","blockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ"} 
{"level":"info","timestamp":"2024-07-11T01:38:41.772Z","logger":"awm-relayer","caller":"relayer/listener.go:127","msg":"Creating relayer","subnetID":"26eqgD4Kt1MvTKXC9BDjEwBAfhcBcHCKj2EXjR2UuFpSWoAHhw","subnetIDHex":"9087d2f383171f73819baa49c3be63a5a6e2b492114dfc3556e42eedd182050a","blockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ","blockchainIDHex":"52f2c4d51ef13a5781babe42c1b916e98fc88fc72919b20527782c939c8be71d"} {"level":"info","timestamp":"2024-07-11T01:38:41.782Z","logger":"awm-relayer","caller":"evm/subscriber.go:131","msg":"Successfully subscribed","blockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ"} {"level":"info","timestamp":"2024-07-11T01:38:41.782Z","logger":"awm-relayer","caller":"evm/subscriber.go:131","msg":"Successfully subscribed","blockchainID":"epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku"} {"level":"info","timestamp":"2024-07-11T01:38:41.782Z","logger":"awm-relayer","caller":"relayer/listener.go:166","msg":"processed-missed-blocks set to false, starting processing from chain head","blockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ"} {"level":"info","timestamp":"2024-07-11T01:38:41.783Z","logger":"awm-relayer","caller":"main/main.go:335","msg":"Created listener","blockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ"} {"level":"info","timestamp":"2024-07-11T01:38:41.783Z","logger":"awm-relayer","caller":"main/main.go:348","msg":"Listener initialized. 
Listening for messages to relay.","originBlockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ"} {"level":"info","timestamp":"2024-07-11T01:38:41.782Z","logger":"awm-relayer","caller":"relayer/listener.go:166","msg":"processed-missed-blocks set to false, starting processing from chain head","blockchainID":"epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku"} {"level":"info","timestamp":"2024-07-11T01:38:41.783Z","logger":"awm-relayer","caller":"main/main.go:335","msg":"Created listener","blockchainID":"epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku"} {"level":"info","timestamp":"2024-07-11T01:38:41.783Z","logger":"awm-relayer","caller":"main/main.go:348","msg":"Listener initialized. Listening for messages to relay.","originBlockchainID":"epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku"} {"level":"info","timestamp":"2024-07-11T01:43:59.040Z","logger":"awm-relayer","caller":"relayer/listener.go:406","msg":"Unpacked warp message","sourceBlockchainID":"epm5fG6Pn1Y5rBHdTe36aZYeLqpXugreyHLZB5dV81rVTs7Ku","originSenderAddress":"0x5DB9A7629912EBF95876228C24A848de0bfB43A9","destinationBlockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ","destinationAddress":"0x52C84043CD9c865236f11d9Fc9F56aa003c1f922","warpMessageID":"2ZV5u7WPVozo386A3d8T5B81vAKvcSDdNsggpmHFvihy5Yut1S"} {"level":"info","timestamp":"2024-07-11T01:43:59.052Z","logger":"awm-relayer","caller":"relayer/application_relayer.go:320","msg":"Fetching aggregate signature from the source chain validators via AppRequest"} {"level":"info","timestamp":"2024-07-11T01:43:59.060Z","logger":"awm-relayer","caller":"relayer/application_relayer.go:461","msg":"Created signed message.","destinationBlockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ"} {"level":"info","timestamp":"2024-07-11T01:43:59.060Z","logger":"awm-relayer","caller":"teleporter/message_handler.go:187","msg":"Sending message to destination 
chain","destinationBlockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ","warpMessageID":"2ZV5u7WPVozo386A3d8T5B81vAKvcSDdNsggpmHFvihy5Yut1S","teleporterMessageID":"Hqr8jHV4NjTf4sbCoGqNJEuntm8QoWnDXdivFNLizAzFxRxm3"}
{"level":"info","timestamp":"2024-07-11T01:43:59.063Z","logger":"awm-relayer","caller":"evm/destination_client.go:192","msg":"Sent transaction","txID":"0x291bc3a3784dd79871cf4f95ba8f3fb3519e25c73848ee27bc5ebd7e3fe178d5","nonce":0}
{"level":"info","timestamp":"2024-07-11T01:43:59.266Z","logger":"awm-relayer","caller":"teleporter/message_handler.go:250","msg":"Delivered message to destination chain","destinationBlockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ","warpMessageID":"2ZV5u7WPVozo386A3d8T5B81vAKvcSDdNsggpmHFvihy5Yut1S","teleporterMessageID":"Hqr8jHV4NjTf4sbCoGqNJEuntm8QoWnDXdivFNLizAzFxRxm3","txHash":"0x291bc3a3784dd79871cf4f95ba8f3fb3519e25c73848ee27bc5ebd7e3fe178d5"}
{"level":"info","timestamp":"2024-07-11T01:43:59.266Z","logger":"awm-relayer","caller":"relayer/application_relayer.go:246","msg":"Finished relaying message to destination chain","destinationBlockchainID":"dXobeMTZB89Fy6E987aaRBDz2j8mVparqRRfvvmQDsH8haNJJ"}
```

# Restricting a Relayer (/academy/interchain-messaging/10-restricting-the-relayer/01-restricting-the-relayer)

---
title: Restricting a Relayer
description: Learn how to restrict which relayers may deliver your messages.
updated: 2024-05-31
authors: [martineckardt]
icon: Book
---

In this segment, we'll delve into the mechanisms of restricting relayers for transmitting messages across Avalanche L1s. While the security of AWM and Interchain Messaging doesn't rely at all on the relayer, there might be some reasons why you would want to restrict the delivery of the message to certain relayers. The most obvious one is when you run your own relayer and don't want to incentivize any others to ensure your message gets to the destination.
In the following sections you will learn how to run your own relayer so you can restrict message delivery to it.

## What You Will Learn

In this section, you will go through the following topics:

- **Allowed Relayers:** Learn how to restrict the relayer for transmitting messages across Avalanche L1s
- **Dynamically update the allowed Relayers:** Understand how to update the allowed relayers dynamically

# Allowed Relayer (/academy/interchain-messaging/10-restricting-the-relayer/02-allowed-relayers)

---
title: Allowed Relayer
description: Learn how to specify which relayers may pick up your messages.
updated: 2024-05-31
authors: [martineckardt]
icon: BookOpen
---

In this segment, we'll delve into the mechanisms of restricting relayers for transmitting messages across Avalanche L1s. While the security of AWM and Interchain Messaging doesn't rely at all on the relayer, there might be some reasons why you would want to restrict the delivery of the message to certain relayers. The most obvious one is when you run your own relayer and don't want to incentivize any others to ensure your message gets to the destination. In the following sections you will learn how to run your own relayer so you can restrict message delivery to it.

Let's look at the `TeleporterMessageInput` structure included when we call `sendCrossChainMessage` one more time.

```solidity
struct TeleporterMessageInput {
    bytes32 destinationBlockchainID;
    address destinationAddress;
    TeleporterFeeInfo feeInfo;
    uint256 requiredGasLimit;
    address[] allowedRelayerAddresses;
    bytes message;
}

interface ITeleporterMessenger {
    function sendCrossChainMessage(TeleporterMessageInput calldata messageInput)
        external
        returns (bytes32);
}
```

As you can see, the `TeleporterMessageInput` allows you to specify an array of addresses for the allowed relayers. Previously we have set that to an empty array, which means that any relayer can pick up the message.
```solidity
messenger.sendCrossChainMessage(
    TeleporterMessageInput({
        // BlockchainID of Dispatch L1
        destinationBlockchainID: 0x9f3be606497285d0ffbb5ac9ba24aa60346a9b1812479ed66cb329f394a4b1c7,
        destinationAddress: destinationAddress,
        feeInfo: TeleporterFeeInfo({feeTokenAddress: address(0), amount: 0}),
        requiredGasLimit: 100000,
        allowedRelayerAddresses: new address[](0), // [!code highlight]
        message: abi.encode(message)
    })
);
```

If you want to restrict the message to a certain relayer, you can simply add the address of the relayer to the array. A relayer is identified by the reward address it associates with each message it delivers. Note that a Solidity array literal like `[0x321f...]` has a fixed-size type, so the dynamic `address[]` argument is built explicitly:

```solidity
address[] memory allowedRelayers = new address[](1);
allowedRelayers[0] = 0x321f6B73b6dFdE5C73731C39Fd9C89c7788D5EBc;

messenger.sendCrossChainMessage(
    TeleporterMessageInput({
        // BlockchainID of Dispatch L1
        destinationBlockchainID: 0x9f3be606497285d0ffbb5ac9ba24aa60346a9b1812479ed66cb329f394a4b1c7,
        destinationAddress: destinationAddress,
        feeInfo: TeleporterFeeInfo({feeTokenAddress: address(0), amount: 0}),
        requiredGasLimit: 100000,
        allowedRelayerAddresses: allowedRelayers, // [!code highlight]
        message: abi.encode(message)
    })
);
```

# Incentivizing a Relayer (/academy/interchain-messaging/11-incentivizing-a-relayer/01-incentivizing-a-relayer)

---
title: Incentivizing a Relayer
description: Learn how to add Relayer Incentives.
updated: 2024-06-09
authors: [andyvargtz]
icon: Book
---

You have already mastered how to send messages from one chain to another, but we have been working under the assumption that all messages will be picked up by relayers. That is not always true, and external relayers might want something in exchange for their services.

Relayer fees are an optional way to incentivize relayers to deliver an Interchain Messaging message to its destination chain. They are not strictly necessary, and may be omitted if a relayer is willing to relay messages with no fee, such as with a self-hosted relayer.
If the relayer is not self-hosted, though, users will need to incentivize it, since the relayer pays the transaction fees on the destination chain.

## What you will learn

- What the data flow for fees is
- How to calculate an adequate incentive amount
- How to implement those incentives

## Exercises

In this section you will apply what you have learned by calculating and adding fees to our basic send-receive contracts used in the [Interchain Messaging Basics chapter](/academy/interchain-messaging/04-icm-basics/01-icm-basics).

# Fee Data Flow (/academy/interchain-messaging/11-incentivizing-a-relayer/02-fee-data-flow)

---
title: Fee Data Flow
description: Learn the logic behind the fee incentivization.
updated: 2024-06-09
authors: [andyvargtz]
icon: BookOpen
---

The basic idea behind incentivizing an external AWM Relayer to relay our message is to deposit an amount of an ERC-20 token into the Interchain Messaging contract as a reward for whichever relayer delivers our message. Let's see how the fees are transferred, from the user at the very beginning of sending a cross-chain message up to the point where the relayer is able to claim the fee.

![AWM Relayer with fees data flow](/common-images/teleporter/teleporter-fees-receipt.png)

We are using the following emoji guide to show the actors of each action:

**👩 User**
**📜 Cross-Chain dApp**
**👾 TeleporterMessenger**
**🛸 Relayer**

### Previous to Sending the Message:

- 👩 The message sender (user) needs to approve the Cross-Avalanche L1 dApp as a spender on the ERC20 contract used for the fee.

### Sending the Message:

- 👩 User sends a message.
- 📜 Fee funds (previously approved) are transferred from the user's wallet to the Cross-Chain dApp.
- 📜 The Cross-Chain dApp approves the Interchain Messaging contract to spend the ERC20 fee tokens.
- 👾 Fee tokens are transferred into the Interchain Messaging contract.
- 👾 Interchain Messaging assigns a unique ID to the message and keeps track of the fee info.
- 🛸 The Relayer can now pick up and deliver the message.

### Message Delivered in the Destination Chain:

- 👾 Interchain Messaging on the Destination Chain generates a receipt with the `messageID` and the address of the Relayer who delivered the message.
- The receipt is sent back to the Source Chain.
- 🛸 Relayers can send the receipts back to the Source Chain themselves, or wait until a new (unrelated) message is sent from the former Destination Chain back to the original Source Chain; each message includes up to 5 pending receipts.

### Once the Message is Received in the Source Chain:

- 👾 Interchain Messaging marks the `messageID` as received and increases the Relayer's redeemable reward.
- 🛸 The Relayer can now claim its unlocked rewards.

As you can see, there are just a few actions you as a cross-chain dApp developer need to implement so your dApp works smoothly with fees; everything else is done by the Interchain Messaging contract and the Relayer. If you run your own relayer, you don't have to attach any fees.

# Determining the Fee (/academy/interchain-messaging/11-incentivizing-a-relayer/03-determining-the-fee-amount)

---
title: Determining the Fee
description: Calculate a fair incentive amount.
updated: 2024-06-09
authors: [andyvargtz]
icon: BookOpen
---

## Economic Considerations for Relayers

### Determining Relayer Incentives

The first question to address is: How much should you pay a Relayer to ensure they are incentivized to deliver the message? The answer is straightforward but requires some analysis. The incentive should at least cover the expenses the Relayer will incur by delivering your message. As mentioned in previous lessons, a message will be considered delivered on the destination chain if the Relayer provides at least the `requiredGasLimit` stated in the message, regardless of whether the execution succeeds or fails.
Thus, the associated cost in $USD (or another asset like AVAX) that the Relayer will face to deliver your message is:

*Cost = requiredGasLimit * gas_price_in_native_token * native_token_price*

While the income the Relayer will be paid for delivering your message is:

*Income = Fee_Amount * ERC20_Price*

Excluding the Relayer's hosting costs, any fee amount where *Cost < Income* will be profitable for the Relayer.

Different types of messages or actions available in a cross-chain dApp may require different amounts of gas to deliver successfully. For instance, in a cross-chain ERC20 minter, creating the token contract is more expensive than minting new tokens, simply because creation requires deploying a new contract. Thus, it makes sense to reward the Relayer with a higher amount for delivering a creation message compared to a minting message (assuming both `requiredGasLimit` values are set accordingly).

### Example Calculation

Let's calculate the minimum fee that could incentivize a Relayer to pick up and deliver a message. To simplify, consider the following assumptions:

- We are sending a message from C-Chain to Dispatch.
- ERC20 incentives will be paid in TLP.
- DIS is the fee token on Dispatch.
- Assume *DIS_price = TLP_price*.
- The Relayer will only take your message if it can make at least 10% profit.
- The `requiredGasLimit` for sending our message is 10,000 gas units.
- Gas price in Dispatch = 50 nDIS (1 nDIS = 10^-9 DIS).

### Calculating the Fee

The cost of relaying this message will be:

*Cost = 10,000 * 50 * DIS_price*

The Relayer will take your message only if it can make a 10% profit:

*Income = 1.1 * Cost*

Therefore:

*TLP_price * Amount = 1.1 * 10,000 * 50 * DIS_price*

Knowing that *TLP_price = DIS_price*:

*Amount = 1.1 * 10,000 * 50*

Then:

*Amount = 550,000 nTLP*

*Amount = 550,000,000,000,000 weiTLP*

All amounts in Solidity need to be declared in wei, so use a unit converter to convert from nTLP (10^-9 TLP) to wei (10^-18 TLP).
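You can sanity-check this arithmetic in a shell, using the example's numbers. The 10% margin is applied as a multiply-by-11, divide-by-10 so everything stays in integer math:

```shell
# Fee example from above: requiredGasLimit * gas price (nDIS), priced 1:1 in nTLP
REQUIRED_GAS_LIMIT=10000
GAS_PRICE_NDIS=50
# Apply the 10% relayer profit margin (x 1.1) using integer math
FEE_NTLP=$(( REQUIRED_GAS_LIMIT * GAS_PRICE_NDIS * 11 / 10 ))
# Convert nTLP (10^-9 TLP) to wei (10^-18 TLP): multiply by 10^9
FEE_WEI=$(( FEE_NTLP * 10**9 ))
echo "$FEE_NTLP nTLP = $FEE_WEI weiTLP"
```

This prints `550000 nTLP = 550000000000000 weiTLP`, matching the amount derived above.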
# Setting Incentives (/academy/interchain-messaging/11-incentivizing-a-relayer/04-setting-incentives)

---
title: Setting Incentives
description: Learn where incentives details need to be included
updated: 2024-06-09
authors: [andyvargtz]
icon: BookOpen
---

As we studied in previous lessons, the `sendCrossChainMessage` function takes a `TeleporterMessageInput` struct as input. The `feeInfo` field of the `TeleporterMessageInput` is a `TeleporterFeeInfo` struct consisting of the address of the ERC20 contract in which the fee will be paid and the amount of tokens used to incentivize the relayer. This amount needs to be set in wei units. Let's take a look at it:

```solidity
// (c) 2023, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.
// SPDX-License-Identifier: Ecosystem
pragma solidity 0.8.18;

struct TeleporterMessageInput {
    bytes32 destinationBlockchainID;
    address destinationAddress;
    TeleporterFeeInfo feeInfo; // [!code highlight]
    uint256 requiredGasLimit;
    address[] allowedRelayerAddresses;
    bytes message;
}

struct TeleporterFeeInfo {
    address feeTokenAddress; // [!code highlight]
    uint256 amount; // [!code highlight]
}

interface ITeleporterMessenger {
    function sendCrossChainMessage(TeleporterMessageInput calldata messageInput)
        external
        returns (bytes32);
}
```

As you can see, the `TeleporterMessageInput` now needs to include the address of the ERC20 token that will pay for the incentives and the amount, all as part of the `TeleporterFeeInfo` struct.

# Deploy Fee Token Contract (/academy/interchain-messaging/11-incentivizing-a-relayer/05-deploy-fee-token-contract)

---
title: Deploy Fee Token Contract
description: Deploy an ERC20 to work as the Fee Token for incentives.
updated: 2024-06-09
authors: [andyvargtz]
icon: Terminal
---

To add incentives for relayers, we need an ERC20 token on the source chain (Fuji C-Chain). We'll deploy a simple ERC20 token that will be used to pay relayers for delivering our cross-chain messages.
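The deployment happens through the embedded interface, but for reference, such a fee token can be as simple as a stock OpenZeppelin ERC20 that mints the whole supply to the deployer. The `FeeToken` name and symbol here are illustrative, not necessarily the exact contract the interface deploys:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.18;

import {ERC20} from "@openzeppelin/contracts@4/token/ERC20/ERC20.sol";

// Illustrative fee token: mints 1,000,000 tokens (18 decimals) to the
// deployer, matching the balance checked in the next step.
contract FeeToken is ERC20 {
    constructor() ERC20("Fee Token", "FEE") {
        _mint(msg.sender, 1_000_000 * 10 ** decimals());
    }
}
```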
### Save the Token Address

After deploying the token through the interface above, save the token address to use it in the next steps:

```bash
export FEE_TOKEN_ADDRESS=
```

Set the value to the address shown in the interface after deployment.

### Verify Your Token Balance

You can check that you received the initial supply:

```bash
cast call --rpc-url fuji-c \
  $FEE_TOKEN_ADDRESS \
  "balanceOf(address)(uint256)" \
  $FUNDED_ADDRESS
```

The balance should show 1,000,000 * 10^18 (1,000,000 tokens with 18 decimals). These tokens will be used in the next sections to incentivize relayers to deliver your cross-chain messages.

# Incentivize an AWM relayer (/academy/interchain-messaging/11-incentivizing-a-relayer/06-incentivize-an-awm-relayer)

---
title: Incentivize an AWM relayer
description: Add incentives to the basic send-receive contracts.
updated: 2024-06-09
authors: [andyvargtz]
icon: Terminal
---

## Add Incentives to our Sender Contract

Now that we have an ERC20 token deployed and some of those tokens in our account to incentivize our relayer, let's include the incentive in our Sender contract.

```solidity title="contracts/interchain-messaging/incentivize-relayer/senderWithFees.sol"
// (c) 2023, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.
// SPDX-License-Identifier: Ecosystem
pragma solidity ^0.8.18;

import "@teleporter/ITeleporterMessenger.sol";
import {IERC20} from "@openzeppelin/contracts@4/token/ERC20/IERC20.sol";

contract SenderWithFeesOnCChain {
    ITeleporterMessenger public immutable messenger =
        ITeleporterMessenger(0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf);

    /**
     * @dev Sends a message to another chain.
     */
    function sendMessage(address destinationAddress, string calldata message, address feeAddress) external {
        IERC20 feeContract = IERC20(feeAddress);
        uint256 feeAmount = 500000000000000;
        feeContract.transferFrom(msg.sender, address(this), feeAmount);
        feeContract.approve(address(messenger), feeAmount);

        messenger.sendCrossChainMessage(
            TeleporterMessageInput({
                // BlockchainID of Dispatch L1
                destinationBlockchainID: 0x9f3be606497285d0ffbb5ac9ba24aa60346a9b1812479ed66cb329f394a4b1c7,
                destinationAddress: destinationAddress,
                feeInfo: TeleporterFeeInfo({feeTokenAddress: feeAddress, amount: feeAmount}),
                requiredGasLimit: 100000,
                allowedRelayerAddresses: new address[](0),
                message: abi.encode(message)
            })
        );
    }
}
```

Notice that our contract implementing fees now needs to include the functionality we described in the fee flow section:

- Import `IERC20` and create the fee contract interface instance
- 📜 The Cross-Avalanche L1 dApp transfers the ERC20 fee amount from the user into the dApp contract's control.
- 📜 The Cross-Avalanche L1 dApp approves the Interchain Messaging contract to spend these ERC20 tokens.
- Include the `TeleporterFeeInfo` struct in the message.

# Interaction Flow With Fees (/academy/interchain-messaging/11-incentivizing-a-relayer/07-interaction-flow-with-fees)

---
title: Interaction Flow With Fees
description: Go through the complete deployment flow and send an incentivized message.
updated: 2024-06-09
authors: [andyvargtz]
icon: Terminal
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

Now that we have all the Smart Contracts 📜 set up, we can start sending incentivized cross-chain messages.
Now let's complete the flow:

- Deploy the sender contract on **C-Chain**
- Deploy the receiver contract on **Dispatch**
- Approve the sender contract to spend ERC20 funds from the message sender address
- Send the message

### Deploy Sender Contract

To deploy the sender contract run:

```bash
forge create --rpc-url fuji-c --private-key $PK contracts/interchain-messaging/incentivize-relayer/senderWithFees.sol:SenderWithFeesOnCChain --broadcast
```

```bash {6}
[⠊] Compiling...
[⠒] Compiling 2 files with Solc 0.8.18
[⠢] Solc 0.8.18 finished in 92.98ms
Compiler run successful!
Deployer: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC
Deployed to: 0xa4DfF80B4a1D748BF28BC4A271eD834689Ea3407
Transaction hash: 0xb0ff20527818fcc0836944a13ddb3b336b95047d434bb52f6c064ab407950166
```

### Save the Sender Contract Address

```bash
export SENDER_ADDRESS=0xa4DfF80B4a1D748BF28BC4A271eD834689Ea3407
```

### Deploy Receiver Contract

Now, deploy the receiver contract.

```bash
forge create --rpc-url fuji-dispatch --private-key $PK contracts/interchain-messaging/incentivize-relayer/receiverWithFees.sol:ReceiverOnDispatch --broadcast
```

```bash {6}
[⠊] Compiling...
[⠢] Compiling 1 files with Solc 0.8.18
[⠆] Solc 0.8.18 finished in 105.36ms
Compiler run successful!
Deployer: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC
Deployed to: 0xe336d36FacA76840407e6836d26119E1EcE0A2b4
Transaction hash: 0x379f0a4b875effc9b80a33f039d9a49404514e7aabaef4490f57b56fb4818f65
```

### Save the Receiver Contract Address

```bash
export RECEIVER_ADDRESS=0xe336d36FacA76840407e6836d26119E1EcE0A2b4
```

### Approve ERC20 Token Expense

👩 The Message Sender (User) needs to approve the Cross-Avalanche L1 dApp as a spender on the ERC20 contract used for the fee. We'll approve 0.0005 tokens (500000000000000 wei) from your total supply of 1,000,000 tokens, which matches the fee amount set in the sender contract.
```bash cast send --rpc-url fuji-c --private-key $PK $FEE_TOKEN_ADDRESS "approve(address,uint256)" $SENDER_ADDRESS 500000000000000 ``` ### Send an incentivized message ```bash cast send --rpc-url fuji-c --private-key $PK $SENDER_ADDRESS "sendMessage(address, string, address)" $RECEIVER_ADDRESS "Hello" $FEE_TOKEN_ADDRESS ``` ### Verify message was received on Avalanche L1 ```bash cast call --rpc-url fuji-dispatch $RECEIVER_ADDRESS "lastMessage()(string)" ``` ```bash "Hello" ``` **Success!** The message was received on the destination chain while incentivizing the Relayer. # Introduction (/academy/interchain-messaging/13-access-chainlink-vrf-services/01-introduction) --- title: Introduction description: Learn how to use ICM to access Chainlink VRF services on L1 networks that do not have direct Chainlink support. updated: 2024-10-21 authors: [0xstt] icon: Book --- As decentralized applications (dApps) expand across multiple blockchains, some Layer 1 (L1) networks lack direct support from essential services like **Chainlink VRF (Verifiable Random Functions)**. This presents a significant challenge for developers who rely on verifiable randomness for use cases such as gaming, lotteries, NFT minting, and other decentralized functions that require unbiased, unpredictable random numbers. The challenge arises because not every L1 network has integrated with Chainlink, meaning developers on those chains are left without native access to VRF services. Without verifiable randomness, critical aspects of dApps, such as fairness and security, can be compromised. ### Why ICM is Necessary To address this gap, **Interchain Messaging (ICM)** provides a solution by allowing L1 networks that don’t have direct Chainlink support to still access these services. Through ICM, any blockchain can request VRF outputs from a Chainlink-supported network (e.g., Fuji) and receive the results securely on their own L1. 
This cross-chain solution unlocks the ability to use Chainlink VRF on unsupported networks, bypassing the need for native integration and ensuring that dApp developers can continue building secure and fair decentralized applications. In the following sections, we will explore how to use ICM to access Chainlink VRF services across different chains by deploying two key smart contracts: one on the Chainlink-supported network and another on the target L1. # Understanding the Flow (/academy/interchain-messaging/13-access-chainlink-vrf-services/02-understanding-the-flow) --- title: Understanding the Flow description: Learn how requests for random words flow across chains using ICM. updated: 2024-10-21 authors: [0xstt] icon: BookOpen --- import { Step, Steps } from 'fumadocs-ui/components/steps'; This section explains how random words are requested and fulfilled using Chainlink VRF across two different blockchains through **Interchain Messaging (ICM)**. The entire process involves several key components: the decentralized application (DApp), `CrossChainVRFConsumer`, `CrossChainVRFWrapper`, and `TeleporterMessenger`. Let’s walk through the flow step by step: ### DApp Submits Request for Random Words The DApp, running on an L1 without direct Chainlink support, initiates a request for verifiable randomness by interacting with the `CrossChainVRFConsumer` contract deployed on its chain. ### `CrossChainVRFConsumer` Sends Cross-Chain Message Once the DApp submits the request, the `CrossChainVRFConsumer` prepares a message containing the necessary VRF request parameters (such as `keyHash`, `request confirmations`, `gas limit`, etc.). This message is sent across chains via the `TeleporterMessenger` to the `CrossChainVRFWrapper` on the Chainlink-supported network (e.g., Fuji). ### `TeleporterMessenger` Receives Message & Calls `CrossChainVRFWrapper` 1. On the Chainlink-supported network, the `TeleporterMessenger` receives the cross-chain message sent by the `CrossChainVRFConsumer`. 
It passes the message to the `CrossChainVRFWrapper`, which is authorized to handle VRF requests. 2. The `CrossChainVRFWrapper` contract, deployed on the supported network, sends the request to **Chainlink VRF** for random words, using the parameters received in the message (e.g., `subscription ID`, `callback gas limit`, etc.). ### Chainlink VRF Fulfills Random Words Request 1. The **Chainlink VRF** fulfills the request and returns the random words to the `CrossChainVRFWrapper` contract by invoking its callback function. 2. Once the random words are received, the `CrossChainVRFWrapper` encodes the fulfilled random words and sends them back as a cross-chain message to the `CrossChainVRFConsumer` on the original L1. ### `TeleporterMessenger` Returns Random Words to `CrossChainVRFConsumer` The `TeleporterMessenger` on the original L1 receives the message containing the random words and passes it to the `CrossChainVRFConsumer`. ### `CrossChainVRFConsumer` Returns Random Words to the DApp Finally, the `CrossChainVRFConsumer` processes the random words and sends them to the DApp that originally requested them, completing the flow. ![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/interchain-messaging/cross-chain-vrf-NiG2EHgc4ulWBUx9Ekx0DZxTGdoHXk.png) This end-to-end process demonstrates how decentralized applications on unsupported L1 networks can request verifiable randomness from Chainlink VRF, leveraging **ICM** to handle cross-chain communication. # Orchestrating VRF Requests Over Multiple Chains (Wrapper) (/academy/interchain-messaging/13-access-chainlink-vrf-services/03-orchestrating-vrf-requests) --- title: Orchestrating VRF Requests Over Multiple Chains (Wrapper) description: Learn how the CrossChainVRFWrapper contract on a Chainlink-supported L1 handles cross-chain requests for VRF. 
updated: 2024-10-21
authors: [0xstt]
icon: BookOpen
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

The `CrossChainVRFWrapper` contract plays a critical role in handling requests for randomness from an unsupported L1. It is deployed on a **Chainlink-supported network** (like Avalanche Fuji) and serves as the intermediary that interacts with Chainlink VRF to request random words. Here's how it functions as the provider for VRF services:

## Receives Cross-Chain Messages

When the `CrossChainVRFConsumer` on the unsupported L1 initiates a request for randomness, it sends a cross-chain message via the `TeleporterMessenger` to the `CrossChainVRFWrapper` on the supported network.

The `CrossChainVRFWrapper` verifies that the request came from an authorized address. This is essential to ensure that only verified consumers can request randomness.

Upon receiving a valid request, the `CrossChainVRFWrapper` calls **Chainlink VRF** and sends the request with the required parameters, such as the number of random words, request confirmations, and gas limits.

The `CrossChainVRFWrapper` keeps track of all pending requests using a mapping, associating each request ID with its destination (the L1 where the `CrossChainVRFConsumer` resides). This ensures that the random words are returned to the correct destination once fulfilled.
```solidity function receiveTeleporterMessage( bytes32 originChainID, address originSenderAddress, bytes calldata message ) external { require(msg.sender == address(teleporterMessenger), "Caller is not the TeleporterMessenger"); // Verify that the origin sender address is authorized require(authorizedSubscriptions[originSenderAddress].isAuthorized, "Origin sender is not authorized"); uint256 subscriptionId = authorizedSubscriptions[originSenderAddress].subscriptionId; // Verify that the subscription ID belongs to the correct owner (,,,, address[] memory consumers) = s_vrfCoordinator.getSubscription(subscriptionId); // Check wrapper contract is a consumer of the subscription bool isConsumer = false; for (uint256 i = 0; i < consumers.length; i++) { if (consumers[i] == address(this)) { isConsumer = true; break; } } require(isConsumer, "Contract is not a consumer of this subscription"); // Decode message to get the VRF parameters CrossChainRequest memory vrfMessage = abi.decode(message, (CrossChainRequest)); // Request random words VRFV2PlusClient.RandomWordsRequest memory req = VRFV2PlusClient.RandomWordsRequest({ keyHash: vrfMessage.keyHash, subId: subscriptionId, requestConfirmations: vrfMessage.requestConfirmations, callbackGasLimit: vrfMessage.callbackGasLimit, numWords: vrfMessage.numWords, extraArgs: VRFV2PlusClient._argsToBytes(VRFV2PlusClient.ExtraArgsV1({nativePayment: vrfMessage.nativePayment})) }); uint256 requestId = s_vrfCoordinator.requestRandomWords(req); pendingRequests[requestId] = CrossChainReceiver({ destinationBlockchainId: originChainID, destinationAddress: originSenderAddress }); } ``` ## Handles Callback from Chainlink VRF When **Chainlink VRF** fulfills the randomness request, the `CrossChainVRFWrapper` receives the random words through a callback function. This ensures that the request has been successfully processed. 
After receiving the random words, the `CrossChainVRFWrapper` encodes the result and sends it back as a cross-chain message to the `CrossChainVRFConsumer` on the unsupported L1. This is done using the `TeleporterMessenger`. ```solidity function fulfillRandomWords(uint256 requestId, uint256[] calldata randomWords) internal override { require(pendingRequests[requestId].destinationAddress != address(0), "Invalid request ID"); // Create CrossChainResponse struct CrossChainResponse memory crossChainResponse = CrossChainResponse({ requestId: requestId, randomWords: randomWords }); bytes memory encodedMessage = abi.encode(crossChainResponse); // Send cross chain message using ITeleporterMessenger interface TeleporterMessageInput memory messageInput = TeleporterMessageInput({ destinationBlockchainID: pendingRequests[requestId].destinationBlockchainId, destinationAddress: pendingRequests[requestId].destinationAddress, feeInfo: TeleporterFeeInfo({ feeTokenAddress: address(0), amount: 0 }), requiredGasLimit: 100000, allowedRelayerAddresses: new address[](0), message: encodedMessage }); teleporterMessenger.sendCrossChainMessage(messageInput); delete pendingRequests[requestId]; } ``` In summary, the `CrossChainVRFWrapper` contract acts as the **bridge** between the unsupported L1 and Chainlink’s VRF services, ensuring that random words are requested, fulfilled, and delivered back across chains efficiently. # Bringing Chainlink VRF to Unsupported L1s (Consumer) (/academy/interchain-messaging/13-access-chainlink-vrf-services/04-bring-vrf-to-unsupported-l1) --- title: Bringing Chainlink VRF to Unsupported L1s (Consumer) description: Learn how to request VRF from an unsupported L1 using CrossChainVRFConsumer. updated: 2024-10-21 authors: [0xstt] icon: BookOpen --- import { Step, Steps } from 'fumadocs-ui/components/steps'; The `CrossChainVRFConsumer` contract enables DApps on an unsupported L1 to request random words from Chainlink VRF using a cross-chain communication mechanism. 
Since Chainlink does not natively support all blockchains, this setup allows developers to access Chainlink's VRF service even on networks that don’t have direct support. ## Requesting Random Words The `CrossChainVRFConsumer` contract sends a cross-chain message to the `CrossChainVRFWrapper` on a Chainlink-supported L1, requesting random words. This request is sent using `TeleporterMessenger`, which handles cross-chain communication. ```solidity function requestRandomWords( bytes32 keyHash, uint16 requestConfirmations, uint32 callbackGasLimit, uint32 numWords, bool nativePayment, uint32 requiredGasLimit ) external { // Create CrossChainRequest struct CrossChainRequest memory crossChainRequest = CrossChainRequest({ keyHash: keyHash, requestConfirmations: requestConfirmations, callbackGasLimit: callbackGasLimit, numWords: numWords, nativePayment: nativePayment }); // Send Teleporter message bytes memory encodedMessage = abi.encode(crossChainRequest); TeleporterMessageInput memory messageInput = TeleporterMessageInput({ destinationBlockchainID: DATASOURCE_BLOCKCHAIN_ID, destinationAddress: vrfRequesterContract, feeInfo: TeleporterFeeInfo({ feeTokenAddress: address(0), amount: 0 }), requiredGasLimit: requiredGasLimit, allowedRelayerAddresses: new address[](0), message: encodedMessage }); teleporterMessenger.sendCrossChainMessage(messageInput); } ``` ## Processing the Request Once the request is received by the `CrossChainVRFWrapper`, it interacts with the Chainlink VRF Coordinator to request the random words on behalf of the consumer on the unsupported L1. ## Receiving Random Words Once Chainlink fulfills the request, the `CrossChainVRFWrapper` sends the random words back to the `CrossChainVRFConsumer` via a cross-chain message, enabling the DApp on the unsupported L1 to use them. 
```solidity function receiveTeleporterMessage( bytes32 originChainID, address originSenderAddress, bytes calldata message ) external { require(originChainID == DATASOURCE_BLOCKCHAIN_ID, "Invalid originChainID"); require(msg.sender == address(teleporterMessenger), "Caller is not the TeleporterMessenger"); require(originSenderAddress == vrfRequesterContract, "Invalid sender"); // Decode the message to get the request ID and random words CrossChainResponse memory response = abi.decode(message, (CrossChainResponse)); // Fulfill the request by calling the internal function fulfillRandomWords(response.requestId, response.randomWords); } function fulfillRandomWords(uint256 requestId, uint256[] memory randomWords) internal { // Logic to handle the fulfillment of random words // Implement your custom logic here // Emit event for received random words emit RandomWordsReceived(requestId); } ``` # Deploy Wrapper (/academy/interchain-messaging/13-access-chainlink-vrf-services/05-deploy-vrf-wrapper) --- title: Deploy Wrapper description: Learn how to deploy the CrossChainVRFWrapper contract to handle cross-chain VRF requests. updated: 2024-10-21 authors: [0xstt] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Once you have set up your Chainlink VRF subscription and have your LINK tokens ready, the next step is to deploy the **CrossChainVRFWrapper** contract. This contract will act as the bridge between your unsupported L1 and the Chainlink VRF network on a supported L1, enabling cross-chain requests for random words. ## Prerequisites Before deployment, make sure you have: - A valid **Chainlink VRF Subscription ID** (see previous section for details). - The `TeleporterMessenger` contract address on your supported L1 (e.g., Avalanche Fuji). ## Deploy the Contract Using the `forge create` command, deploy the `CrossChainVRFWrapper` contract to the supported L1 (e.g., Avalanche Fuji). 
```bash
forge create --rpc-url <RPC_URL> --private-key <PRIVATE_KEY> --broadcast --constructor-args <TELEPORTER_MESSENGER_ADDRESS> src/CrossChainVRFWrapper.sol:CrossChainVRFWrapper
```

Replace the following:

- `<RPC_URL>`: The RPC URL for the L1.
- `<PRIVATE_KEY>`: The private key for the account used to deploy the contract.
- `<TELEPORTER_MESSENGER_ADDRESS>`: The address of the deployed `TeleporterMessenger` contract.

After deployment, save the `Deployed to` address in an environment variable for future use.

```bash
export VRF_WRAPPER=
```

## Verify the Deployment

After deploying the contract, verify that the `CrossChainVRFWrapper` has been successfully deployed by checking its address on a block explorer.

## Configure Authorized Subscriptions

Once deployed, the `CrossChainVRFWrapper` contract needs to be configured with authorized subscriptions to process requests for random words.

- Call the `addAuthorizedAddress` function to authorize a specific address with a given subscription ID.
- This ensures that only authorized addresses can request random words via the wrapper.

```bash
cast send --rpc-url <RPC_URL> --private-key <PRIVATE_KEY> $VRF_WRAPPER "addAuthorizedAddress(address caller, uint256 subscriptionId)" <CALLER_ADDRESS> <SUBSCRIPTION_ID>
```

Replace the following:

- `<CALLER_ADDRESS>`: The address that will be authorized to request random words.
- `<SUBSCRIPTION_ID>`: The ID of your Chainlink VRF subscription.

# Deploy Consumer (/academy/interchain-messaging/13-access-chainlink-vrf-services/06-deploy-vrf-consumer)

---
title: Deploy Consumer
description: Learn how to deploy the CrossChainVRFConsumer contract on any L1 that does not support Chainlink VRF.
updated: 2024-10-21
authors: [0xstt]
icon: Terminal
---

import { Step, Steps } from 'fumadocs-ui/components/steps';

Now that the `CrossChainVRFWrapper` is deployed on a Chainlink-supported L1, it's time to deploy the `CrossChainVRFConsumer` contract on the L1 where Chainlink VRF is not supported. This contract will handle requests for random words and interact with the **TeleporterMessenger** to communicate with the supported L1.

## Prerequisites

Make sure you have:

- The `TeleporterMessenger` contract address on the unsupported L1.
- The deployed `CrossChainVRFWrapper` on the supported L1. `($VRF_WRAPPER)`

## Deploy the Contract

Use the following command to deploy the `CrossChainVRFConsumer` contract on your unsupported L1.

```bash
forge create --rpc-url <RPC_URL> --private-key <PRIVATE_KEY> --broadcast --constructor-args <TELEPORTER_MESSENGER_ADDRESS> $VRF_WRAPPER src/CrossChainVRFConsumer.sol:CrossChainVRFConsumer
```

Replace the following:

- `<RPC_URL>`: The RPC URL for the L1.
- `<PRIVATE_KEY>`: The private key for the account used to deploy the contract.
- `<TELEPORTER_MESSENGER_ADDRESS>`: The address of the `TeleporterMessenger` contract on your unsupported L1.

After deployment, save the `Deployed to` address in an environment variable for future use.

```bash
export VRF_CONSUMER=
``` ## Verify the Deployment Once the `CrossChainVRFConsumer` contract is deployed, verify the contract’s address and confirm that it has been successfully deployed on your L1 using a block explorer. # Create Chainlink VRF Subscription (/academy/interchain-messaging/13-access-chainlink-vrf-services/07-create-vrf-subscription) --- title: Create Chainlink VRF Subscription description: Learn how to create a Chainlink VRF subscription to enable cross-chain randomness requests. updated: 2024-10-21 authors: [0xstt] icon: BookOpen --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Before you can request random words using Chainlink VRF, you need to set up a **Chainlink VRF subscription**. This subscription allows you to fund requests for randomness and manage the list of consumers that are authorized to use the VRF service. ## Access Chainlink's VRF Subscription Manager To create a subscription, go to the Chainlink VRF Subscription Manager on the network where you plan to request VRF (e.g., Avalanche Fuji). You can access the manager here: [VRF | Subscription Management for Fuji](https://vrf.chain.link/fuji). ![](/common-images/chainlink-vrf/visit-subscription-management.png) ## Create a New Subscription Once on the subscription manager, create a new subscription. The subscription will have a unique ID, which you'll need to reference in your `CrossChainVRFWrapper` contract. This ID is used to track your random word requests and the balance associated with them. ![](/common-images/chainlink-vrf/create-subscription.png) ## Fund the Subscription After creating the subscription, you need to fund it with LINK tokens. These tokens are used to pay for the randomness requests made through Chainlink VRF. Make sure your subscription has enough funds to cover your requests. 
You can get testnet LINK tokens from the Fuji Faucet: [Chainlink Faucet for Fuji](https://faucets.chain.link/fuji) ![](/common-images/chainlink-vrf/faucet.png) ![](/common-images/chainlink-vrf/add-funds.png) ## Add Consumers After funding your subscription, add the `CrossChainVRFWrapper` contract (`$VRF_WRAPPER`) as a consumer. This step authorizes the contract to make randomness requests on behalf of your subscription. You can add other consumers, such as other contracts or addresses, depending on your use case. ![](/common-images/chainlink-vrf/add-consumer.png) ## Save Subscription ID After completing these steps, save your subscription ID. You will need this ID when configuring the `CrossChainVRFWrapper` contract to request random words. ```bash export VRF_SUBSCRIPTION_ID= ``` --- # Request Random Words (/academy/interchain-messaging/13-access-chainlink-vrf-services/08-request-random-words) --- title: Request Random Words description: Learn how to request random words using CrossChainVRFConsumer. updated: 2024-10-21 authors: [0xstt] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; In this section, you will learn how to request random words from Chainlink VRF using both the `CrossChainVRFWrapper` and `CrossChainVRFConsumer` contracts. ## Authorize an Address to Request Random Words After deploying the `CrossChainVRFWrapper` contract, the first step is to authorize an address to make requests for random words. This ensures that only specific addresses linked to a subscription can request VRF. - Use the `addAuthorizedAddress` function to authorize a specific address with a given subscription ID. This step allows the address to make random word requests through the wrapper. 
```bash cast send --rpc-url --private-key $VRF_WRAPPER "addAuthorizedAddress(address caller, uint256 subscriptionId)" $VRF_CONSUMER $VRF_SUBSCRIPTION_ID ``` ## Request Random Words from `CrossChainVRFConsumer` Once the address is authorized, the next step is to send a request for random words from the `CrossChainVRFConsumer` contract on the unsupported L1. This request is then sent to the `CrossChainVRFWrapper` via a cross-chain message. ```bash cast send --rpc-url --private-key $VRF_CONSUMER "requestRandomWords(bytes32 keyHash, uint16 requestConfirmations, uint32 callbackGasLimit, uint32 numWords, bool nativePayment, uint32 requiredGasLimit)" ``` Replace the placeholders with: - ``: The VRF key hash used for random word generation. - ``: Number of confirmations required for the request. - ``: The gas limit for the VRF callback function. - ``: The number of random words requested. - ``: Indicates whether the payment will be made in the native token. - ``: The gas limit required for the cross-chain message to be processed. 
# Introduction (/academy/l1-native-tokenomics/01-tokens-fundamentals/01-introduction) --- title: Introduction description: Chapter outline and learning goals updated: 2025-08-21 authors: [nicolasarnedo] icon: Book --- ## What You Will Learn This foundational section covers the essential concepts of blockchain tokens: - **Token Fundamentals**: How tokens represent ownership and value - **Token Design**: Technical parameters that shape token behavior - **Native Tokens**: Primary blockchain currencies and their roles - **ERC-20 Tokens**: Standardized smart contract tokens for DeFi - **Practical Skills**: Deploy and transfer ERC-20 tokens ## Learning Goals By the end of this section, you will understand: - What native and ERC-20 tokens are and how they work - How to design tokens with appropriate technical parameters - How to deploy and transfer ERC-20 tokens The next section will cover the key differences between native and ERC-20 tokens, including wrapped tokens. Let's get started! # Token Design (/academy/l1-native-tokenomics/01-tokens-fundamentals/02-token-design) --- title: Token Design description: Technical parameters that shape how a token behaves on-chain. updated: 2024-09-03 authors: [0xstt] icon: BookOpen --- Designing a token is also a technical exercise: the choices you make in the contract (or at the protocol level for native tokens) determine precision, issuance, transfer rules, permissions, gas usage, and composability. Below are the core levers to consider when specifying a token. ### Decimals (Precision) Defines the smallest unit and affects UX, pricing, rounding, and integrations. - 6 decimals → smallest unit is 0.000001 - 18 decimals → smallest unit is 0.000000000000000001 (common on EVM) Implementation detail: ERC‑20 exposes `decimals()`; native tokens inherit precision from the chain/protocol. --- ### Total Supply and Issuance Model Specify whether supply is fixed at deployment or elastic over time. 
- Fixed supply: immutable cap set in constructor/genesis - Elastic supply: emissions/vesting/mint/burn mechanisms adjust supply - Programmatic schedules vs role‑gated actions --- ### Minting and Burning Functions Control how supply can change post‑deployment. - `mint(address to, uint256 amount)`: who can call (owner, `MINTER_ROLE`)? caps? - `burn(amount)` / `burnFrom(address,amount)`: opt‑in user burn vs admin burn - Event emissions (`Transfer` from/to zero address) for indexers --- ### Transfer Mechanics Define how balances move and what hooks apply. - Plain ERC‑20 `transfer/transferFrom` - Fee/tax on transfer (discouraged for DeFi composability) - Blacklist/whitelist or trading windows (be careful: centralization + DEX issues) - Pausable transfers (circuit breakers via `Pausable`) ### Allowance and Approvals Permissions for third‑party spenders. - Standard `approve/allowance/transferFrom` - EIP‑2612 Permit (gasless approvals via signatures) - Race‑condition safe patterns (set to 0 then new amount) --- ### Access Control and Roles Who can mint, pause, upgrade, or change parameters. - `Ownable` vs `AccessControl` (role‑based: `MINTER_ROLE`, `PAUSER_ROLE`) - Timelocks and multisig admins for safer governance changes --- Choosing these parameters up front ensures your token remains secure, composable, and easy to integrate while matching the precision, control, and lifecycle you intend. # Native Tokens (/academy/l1-native-tokenomics/01-tokens-fundamentals/03-native-tokens) --- title: Native Tokens description: Learn about native tokens and their role in blockchain ecosystems. updated: 2024-09-03 authors: [0xstt] icon: BookOpen --- A **native token** in a blockchain running the Ethereum Virtual Machine (EVM) is that blockchain's primary digital currency. Native tokens act as the foundation for value transfer and network operation within their respective ecosystems. 
- **Ethereum**: ETH - **Avalanche C-Chain**: AVAX - **Dexalot**: ALOT - many more... --- ### The Role of Native Tokens Native tokens serve multiple key roles within EVM-based blockchain networks, such as: - **Value Transfer**: Native tokens act as the primary currency for peer-to-peer transactions within the network, enabling value exchange between participants. - **Gas Fees**: Native tokens are used as **gas** to pay for transaction fees, contract deployments, and other network operations. This ensures that resources are allocated efficiently within the network. - **Security**: In Proof-of-Stake (PoS) networks, native tokens are often used for staking to secure the network and validate transactions. - **Governance**: In some cases, native tokens grant holders governance rights, allowing them to participate in decision-making processes that shape the blockchain’s future. - **Configurability**: On Avalanche L1s, networks may designate different assets for gas and staking, separating user fee UX from validator economics. --- Native tokens are the backbone of blockchain ecosystems, serving multiple roles that maintain the network's stability, security, and functionality. # Transfer Native Tokens (/academy/l1-native-tokenomics/01-tokens-fundamentals/04-transfer-native-token) --- title: Transfer Native Tokens description: Learn how to transfer native tokens using Core Wallet updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; In this exercise, you will learn how to transfer native tokens (AVAX) between accounts using Core Wallet. We'll demonstrate this using the Fuji testnet. ## Open Core Wallet First, make sure you have Core Wallet installed. If you don't have Core Wallet installed already, click the Download Core Wallet button below. 
## Switch to Fuji Testnet ## Get Test AVAX Before transferring tokens, ensure you have some test AVAX: **Option 1 (Recommended):** Create a [Builder Hub account](https://build.avax.network/login) and connect your wallet to receive testnet AVAX automatically on the C-Chain. **Option 2:** Use the external [Avalanche Faucet](https://core.app/tools/testnet-faucet/?subnet=c&token=c) with coupon code `avalanche-academy25` to claim AVAX tokens on the C-Chain of the Fuji testnet. ## Transfer AVAX To transfer AVAX to another address: 1. Click the "Send" button in Core Wallet 2. Enter the recipient's address 3. Enter the amount of AVAX to send 4. Review the transaction details 5. Click "Send" to confirm ## Verify the Transfer To verify the transfer was successful: 1. Check the transaction status in Core Wallet 2. View the transaction details on the [Fuji Explorer](https://testnet.snowtrace.io) 3. The recipient can check their balance in their Core Wallet Remember that you need to have enough AVAX in your wallet to cover both the transfer amount and the gas fees. Always double-check the recipient address before sending to avoid any loss of funds. Now that you know how to transfer native tokens, you can proceed to learn about transferring ERC-20 tokens in the next section. # ERC-20 Tokens (/academy/l1-native-tokenomics/01-tokens-fundamentals/05-erc20) --- title: ERC-20 Tokens description: Learn about ERC-20 tokens and their role in blockchain ecosystems. updated: 2024-09-03 authors: [0xstt] icon: BookOpen --- While a blockchain has a single **native token**, the **ERC-20** token standard was developed to allow for the representation of a wide range of assets on EVM-compatible chains. **ERC** stands for **Ethereum Request for Comment**, and **20** is the identifier for the specific proposal that defines the standard. ERC-20 tokens are **fungible**, meaning each token is identical to another and can be exchanged on a one-to-one basis. 
These tokens are created and managed through smart contracts that adhere to the ERC-20 standard, ensuring interoperability between different tokens and decentralized applications (DApps). --- ### ERC-20 Token Architecture At the core of every ERC-20 token is a simple **mapping** of addresses to balances, representing the number of tokens an address holds. ```solidity abstract contract ERC20 is Context, IERC20, IERC20Metadata, IERC20Errors { mapping(address account => uint256) private _balances; //... } ``` Addresses holding ERC-20 tokens can belong to either **Externally Owned Accounts (EOAs)** or **smart contracts**. Both types of accounts can store and transfer ERC-20 tokens, making ERC-20 tokens versatile in decentralized finance (DeFi) and decentralized applications. --- ### Role of ERC-20 Tokens in Blockchain Ecosystems ERC-20 tokens play an essential role in enabling the creation of decentralized applications with various functionalities: - **Tokenized Assets**: ERC-20 tokens can represent anything from digital currencies to tokenized real-world assets. - **DeFi Protocols**: Many DeFi protocols use ERC-20 tokens for lending, staking, and liquidity pools. - **Token Sales**: ICOs (Initial Coin Offerings) and other fundraising models rely heavily on ERC-20 tokens. --- ### The ERC-20 Interface All ERC-20 tokens follow a standard interface to ensure compatibility with decentralized applications (DApps). This allows tokens to be easily transferred, approved for spending, and managed by any DApp that follows the same rules. 
```solidity
interface IERC20 {
    function name() external view returns (string memory);
    function symbol() external view returns (string memory);
    function decimals() external view returns (uint8);
    function totalSupply() external view returns (uint256);
    function balanceOf(address _owner) external view returns (uint256 balance);
    function transfer(address _to, uint256 _value) external returns (bool success);
    function transferFrom(address _from, address _to, uint256 _value) external returns (bool success);
    function approve(address _spender, uint256 _value) external returns (bool success);
    function allowance(address _owner, address _spender) external view returns (uint256 remaining);
}
```
You can review the full **ERC-20 standard** [here](https://eips.ethereum.org/EIPS/eip-20). --- ### Transferring ERC-20 Tokens To transfer ERC-20 tokens between accounts, you use the `transfer()` function, where the sender specifies the recipient’s address and the amount to be transferred. For more complex interactions, such as allowing a smart contract to transfer tokens on behalf of someone else, the ERC-20 standard includes the `approve()` and `transferFrom()` functions. **`transfer()`**: Transfers tokens from the sender’s account to another account, decreasing the sender's balance and increasing the recipient’s. **`approve()`**: This function allows the owner of a token balance to approve another account (the **spender**) to withdraw up to a specified amount of tokens. The spender can withdraw from the owner's balance multiple times, as long as the total amount doesn’t exceed the approved limit. **`allowance()`**: The `allowance()` function returns the amount that a spender is still allowed to withdraw from an owner's balance. **`transferFrom()`**: This function facilitates the transfer of tokens from one account to another on behalf of the account owner. It is typically used in scenarios where smart contracts need to execute token transfers according to the contract's logic. 
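As a mental model, the balance and allowance bookkeeping behind these functions can be sketched in a few lines. The following is an illustrative Python model, not the Solidity contract itself; the `Token` class and the account names are hypothetical:

```python
# Minimal Python model of ERC-20 balance/allowance bookkeeping (illustrative only).

class Token:
    def __init__(self, initial_holder: str, supply: int):
        self.balances = {initial_holder: supply}   # address -> balance
        self.allowances = {}                       # (owner, spender) -> remaining allowance

    def transfer(self, sender: str, to: str, value: int) -> bool:
        if self.balances.get(sender, 0) < value:
            return False                           # insufficient balance
        self.balances[sender] -= value
        self.balances[to] = self.balances.get(to, 0) + value
        return True

    def approve(self, owner: str, spender: str, value: int) -> bool:
        self.allowances[(owner, spender)] = value  # overwrites any previous allowance
        return True

    def allowance(self, owner: str, spender: str) -> int:
        return self.allowances.get((owner, spender), 0)

    def transfer_from(self, spender: str, owner: str, to: str, value: int) -> bool:
        if self.allowance(owner, spender) < value:
            return False                           # exceeds the approved limit
        if not self.transfer(owner, to, value):
            return False
        self.allowances[(owner, spender)] -= value # spend part of the allowance
        return True

token = Token("alice", 1000)
token.approve("alice", "dex", 300)                 # alice lets "dex" pull up to 300
token.transfer_from("dex", "alice", "bob", 200)    # dex moves 200 of alice's tokens to bob
print(token.balances["bob"])                       # 200
print(token.allowance("alice", "dex"))             # 100 remaining
```

Note how `transfer_from` succeeds only while the cumulative amount stays within the approved limit, which is exactly the multi-withdrawal behavior described above.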
--- ERC-20 tokens revolutionized the blockchain space by enabling the tokenization of assets and simplifying the creation of decentralized applications. Their standardization ensures compatibility across platforms and DApps, making them an integral part of the broader crypto ecosystem. # Deploy an ERC-20 (/academy/l1-native-tokenomics/01-tokens-fundamentals/06-deploy-erc20) --- title: Deploy an ERC-20 description: Transfer an ERC-20 token between accounts updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; ### Deploy ERC-20 Let's deploy a basic demo ERC20 token on the Fuji testnet using our toolbox. ### Add Token to Core Wallet After deploying your ERC-20 token: 1. Open Core Wallet 2. Go to the Tokens tab 3. Click "Manage" 4. Click "+ Add Custom Token" 5. Enter the token contract address from the deployment 6. The token symbol and decimals should be automatically detected 7. Click "Add Token" Make sure you're connected to the Fuji testnet when adding the token. The token will only be visible on the network where it was deployed. ### Transfer ERC-20 Token To transfer your ERC-20 token: 1. In Core Wallet, select your token from the token list 2. Click "Send" 3. Enter the recipient's address 4. Enter the amount to transfer 5. Review the transaction details 6. Click "Send" to confirm Always double-check the recipient address before sending. ERC-20 transfers cannot be reversed once confirmed. ### Verify Transfer To verify the transfer was successful: 1. Check the transaction status in Core Wallet 2. View the transaction details on the [Fuji Explorer](https://testnet.snowtrace.io) 3. The recipient can add the token to their Core Wallet to see their balance If the recipient doesn't see the token in their wallet, they'll need to add it using the token contract address, just like you did in step 2. 
Now that you know how to deploy and transfer ERC-20 tokens, you can proceed to learn about token bridging in the next section. # Key Differences (/academy/l1-native-tokenomics/01b-native-vs-erc20/08-native-and-erc20-tokens) --- title: Key Differences description: Learn the key differences between native tokens and ERC20 tokens on Avalanche L1s updated: 2025-01-15 authors: [nicolasarnedo] icon: Book --- Before moving forward, and now that we have seen both, it's crucial to understand the fundamental differences between native tokens and ERC20 tokens. Each has distinct characteristics that affect how they function within your blockchain ecosystem. ## Key Differences

| Feature | ERC20 | Native |
|---------|-------|--------|
| **Defined By** | Smart Contract Deployer | Protocol Level |
| **Balance Storage** | Mapping in contract storage<br/>`mapping(address => balance)` | EVM Account state<br/>`account.balance` |
| **Transfer Logic** | Handled by contract functions<br/>`transfer()`, `transferFrom()` | Handled by EVM opcodes<br/>`CALL`, `SELFDESTRUCT` |
| **Smart Contract Awareness** | Built-in hooks for DeFi compatibility<br/>Allows programmatic approvals<br/>(e.g., Uniswap, Aave) | No callable functions such as<br/>`approve()`, `transferFrom()` |
| **Receiving in Functions** | No special modifier needed<br/>`function deposit(token, amount)` | Requires `payable` modifier<br/>`function deposit() payable` |

## The Protocol Limitation One critical limitation of native tokens is that **protocols cannot "pull" native tokens from your wallet using `transferFrom()`**. This fundamental difference impacts how DeFi protocols interact with native tokens. ## Real-World Example: Aave Lending Let's look at how supplying ETH (a native token) to Aave works: 1. **Wrap it into WETH** - Convert native ETH to ERC20 WETH 2. **Call `approve()`** - Give Aave permission to spend your WETH 3. **Aave calls `transferFrom()`** - Aave pulls the WETH from your wallet 4. **You receive aTokens** - Start earning interest and can borrow against your deposit ## Why ERC20 Standardization Matters The main advantage of ERC20 tokens is **standardization**. Without it: - You could build a `deposit()` payable function for native tokens - But then you'd have two different logics for depositing tokens - This creates complexity and potential security issues # Wrapped Native Tokens (/academy/l1-native-tokenomics/01b-native-vs-erc20/09-wrapped-tokens) --- title: Wrapped Native Tokens description: Learn about wrapped tokens and their role in blockchain ecosystems. updated: 2025-09-15 authors: [0xstt] icon: BookOpen --- Now that we understand the key differences between native tokens and ERC20s, we can explore **native token wrapping** - what it is and why it's necessary for blockchain ecosystems. **Wrapped tokens** are blockchain assets that represent a native cryptocurrency (e.g., AVAX, ALOT, ETH) in a tokenized form, typically conforming to the **ERC-20 token standard**. Wrapping a native token allows it to be used in decentralized applications (dApps) and protocols that require ERC-20 tokens. --- ### What Are Wrapped Tokens? Wrapped tokens are created through a process where the native cryptocurrency is locked in a smart contract, and an equivalent amount of the wrapped token is minted. 
These wrapped tokens are backed 1:1 by the underlying native asset, ensuring that the value of the wrapped token mirrors that of the original native cryptocurrency. This **ERC-20 compatibility** is crucial for enabling the native asset to interact with dApps, decentralized exchanges (DEXs), and smart contracts within the EVM ecosystem, where ERC-20 tokens are the standard. As we learned in the previous section, this solves the fundamental limitation where protocols cannot "pull" native tokens using `transferFrom()`. --- ### Why Are Wrapped Tokens Important? Wrapped tokens play an essential role in **interoperability** within the EVM ecosystem, facilitating seamless use across decentralized applications and protocols. They directly address the limitations we discussed about native tokens: - **DeFi Integration**: Wrapped tokens enable native assets to participate in DeFi protocols that require ERC-20 tokens. Remember our Aave example? Native ETH needs to be wrapped to WETH before it can be supplied to lending protocols. - **Protocol Compatibility**: They solve the `transferFrom()` limitation by allowing protocols to "pull" tokens from user wallets, which is impossible with native tokens. - **Liquidity**: Wrapped tokens increase liquidity in DeFi by enabling users to participate in protocols that require ERC-20 tokens, even when their original asset is a native token. - **Cross-Chain Compatibility**: Wrapped tokens allow assets from one blockchain (e.g., Bitcoin) to be used on another chain, enhancing cross-chain functionality. --- ### Wrapped Token Contract Interface A **wrapped token contract** is typically an implementation of the ERC-20 token standard, with added functions for minting and burning tokens to facilitate the wrapping and unwrapping process. 
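The lock-and-mint mechanics can be sketched as plain accounting before looking at any contract code. This is an illustrative Python model (the `WrappedToken` class is hypothetical; a real wrapped token such as WAVAX or WETH is a Solidity contract):

```python
# Illustrative Python model of a wrapped native token's lock-and-mint accounting.
# Hypothetical sketch only; a real wrapped token is implemented as a Solidity contract.

class WrappedToken:
    def __init__(self):
        self.locked_native = 0   # native tokens held (locked) by the contract
        self.balances = {}       # ERC-20 style wrapped-token balances

    def deposit(self, account: str, native_amount: int) -> None:
        """Wrap: lock native tokens, mint the same amount of wrapped tokens."""
        self.locked_native += native_amount
        self.balances[account] = self.balances.get(account, 0) + native_amount

    def withdraw(self, account: str, amount: int) -> int:
        """Unwrap: burn wrapped tokens, release the same amount of native tokens."""
        if self.balances.get(account, 0) < amount:
            raise ValueError("insufficient wrapped balance")
        self.balances[account] -= amount
        self.locked_native -= amount
        return amount            # native tokens returned to the account

    def total_supply(self) -> int:
        return sum(self.balances.values())

w = WrappedToken()
w.deposit("alice", 5)
w.deposit("bob", 3)
w.withdraw("alice", 2)
# 1:1 backing invariant: wrapped supply always equals the native tokens locked
print(w.total_supply(), w.locked_native)   # 6 6
```

Whatever sequence of wraps and unwraps occurs, the wrapped supply and the locked native balance move in lockstep, which is the 1:1 backing described above.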
Here's a basic contract interface for a wrapped token:

```solidity
interface IWrappedToken {
    function deposit() external payable;
    function withdraw(uint256 amount) external;
    function totalSupply() external view returns (uint256);
    function balanceOf(address account) external view returns (uint256);
    function transfer(address recipient, uint256 amount) external returns (bool);
    function allowance(address owner, address spender) external view returns (uint256);
    function approve(address spender, uint256 amount) external returns (bool);
    function transferFrom(address sender, address recipient, uint256 amount) external returns (bool);
}
```

**`deposit()`**: This function is used to wrap native tokens. When a user calls `deposit()`, they send the native cryptocurrency (e.g., AVAX, ETH) to the contract, which then mints an equivalent amount of the wrapped token. **`withdraw()`**: This function is used to unwrap the tokens. It burns the specified amount of wrapped tokens and returns the equivalent amount of native cryptocurrency to the user. # Deploy a Wrapped Token (/academy/l1-native-tokenomics/01b-native-vs-erc20/10-deploy-wrapped-tokens) --- title: Deploy a Wrapped Token description: Learn how to deploy and interact with wrapped tokens updated: 2024-09-03 authors: [0xstt] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; In this section, we will deploy and interact with a wrapped token using the Dev Console. --- #### 1. Deploy Wrapped Native Token with the Dev Console Use the Dev Console to easily deploy a wrapped native token without any setup or command line tools. The Dev Console provides a simple interface to deploy wrapped native tokens directly from your browser. This eliminates the need to: - Set up a local development environment - Install and configure Foundry - Manage private keys manually - Write deployment scripts You can deploy a wrapped native token in just a few clicks using the integrated deployment tool below. 
### Deploy Your Wrapped Native Token Use the integrated Dev Console below to deploy your wrapped native token: ### Note Your Wrapped Token Address After deployment, the Dev Console will display your wrapped token address. You can copy this address for use in other tools and interactions. The address will be automatically saved in the Dev Console session for easy reference across different tools. ### Interacting with Your Wrapped Token Once deployed, you can interact with your wrapped native token using various methods: **Using the Dev Console:** - The Dev Console provides additional tools for interacting with ERC-20 tokens - Look for token interaction tools in the ICTT section - These tools provide a user-friendly interface without requiring command-line knowledge **Using Wallets:** - Import the token address into MetaMask or other EVM-compatible wallets - The wrapped token will appear as a standard ERC-20 token - You can send, receive, and view balances directly in your wallet **Key Functions:** - **Deposit/Wrap**: Convert native tokens to wrapped tokens (increases your wrapped token balance) - **Withdraw/Unwrap**: Convert wrapped tokens back to native tokens (decreases your wrapped token balance) The wrapped token maintains a 1:1 ratio with the native token, making it perfect for DeFi integrations and cross-chain applications. # Introduction (/academy/l1-native-tokenomics/02-custom-tokens/01-introduction) --- title: Introduction description: Learn about custom native token of your Avalanche L1 blockchain. updated: 2025-09-12 authors: [nicolasarnedo] icon: Book --- ## What is Independent Tokenomics? Avalanche Custom Blockchains offer multiple ways to implement independent tokenomics. This gives developers more control and can enable new business models that would not be economically feasible on single-chain systems. 
The customizations you'll learn about include: - **Native Token:** Every Avalanche L1 has its own native token used for paying transaction fees - **Transaction Fees:** Configure how transaction fees should be calculated - **Initial Native Token Allocation:** Specify how the initial token supply is distributed - **Native Token Minting Rights:** Define whether more native tokens can be minted, and by whom - **Staking Token:** For public and permissionless validation, define your logic for how a node can become a validator You'll get hands-on experience configuring the tokenomics of your own custom blockchain throughout this course. For financial institutions, having their own blockchain with a custom native token can facilitate cheaper transactions compared to traditional banking systems. This flexibility can enable real-time payments, cross-border transactions, and micropayments, fostering financial inclusion and innovation. For gaming platforms, having their own blockchain with a custom native token can revolutionize in-game economies and user experiences. They can design their tokenomics to incentivize gameplay, reward loyal players, and monetize virtual assets. # Custom Token vs ERC20 (/academy/l1-native-tokenomics/02-custom-tokens/02-custom-native-vs-erc20-native) --- title: Custom Token vs ERC20 description: Understanding your options for native tokens on Avalanche L1s updated: 2025-01-15 authors: [nicolasarnedo] icon: BookOpen --- Every Avalanche L1 blockchain has its own native token used for paying for transactions (gas). This isolates the fee to use the blockchain from any activity on other chains in the ecosystem. For example, if there is high usage of the primary network, it does not affect the fees on another Avalanche L1. 
![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/multi-chain-architecture/gas-comparison-6CHcnngJHLqW9ZluUv60SzpiIdUjo2.png) Having their own blockchain with a custom native token in the Avalanche network can open up a wide range of new use cases for companies, ranging from financial institutions to gaming platforms. By having control over their gas token, these companies can revolutionize the blockchain landscape and enable innovative applications. When designing your Avalanche L1, you have two options for what will be the native token of your chain. In this section we outline which option creates the right tokenomics for your use case. ## Option 1 - *Custom Native Token* Creating a **custom native token** gives you complete control over your blockchain's gas token. This approach offers several advantages: - **Custom Tokenomics**: Design token distribution, inflation, and utility to match your project's needs - **Business Model Flexibility**: Avoid the high costs of sponsoring transaction fees on other chains. You can make the native token intentionally valueless to deliver gas-free experiences, eliminate fee sponsorship burdens, and create Web2-like UX that removes barriers for mainstream users - **Fee Control**: Set predictable, isolated transaction costs independent of other chains' activity and token price volatility - **Economic Design**: Create token utility, reward mechanisms, and incentive structures that align with your application's goals ## Option 2 - *Existing ERC20 as Native Token* You can also tie your native token's value to an **existing ERC20 token** (like USDC). This approach offers: - **Price Stability** (if you choose a stablecoin): This ensures predictable gas costs for users; if gas costs 0.001 AVAX, that same amount of AVAX might be worth more in November than in January. 
During moments of high congestion on your stablecoin-based L1, the gas price will still increase, but the value of the underlying asset used to pay it remains constant. - **Familiar Asset**: Users already understand and trust the underlying asset ## L1 Native Tokenomics Decision Outline This **L1 Native Tokenomics** course focuses on implementing and creating an L1 with a **custom native token**. You'll learn how to: - Create a custom token - Assign initial allocation - Use the pre-deployed Wrapped Native Token contract - Configure native minting rights - Configure the fee structure for transactions *Using existing ERC20 tokens* or **another chain's native token** as your L1's native token will be covered in the **Cross-Chain L1 Native** course. # Native and Staking Tokens (/academy/l1-native-tokenomics/02-custom-tokens/03-native-vs-staking) --- title: Native and Staking Tokens description: Understanding your options for native tokens on Avalanche L1s updated: 2025-01-15 authors: [nicolasarnedo] icon: BookOpen --- It's important to understand that your **native token and staking token don't necessarily have to be the same**: - **Native Token**: Used for paying transaction fees (gas) - **Staking Token**: Used for validator rewards and network security This separation allows for more flexible tokenomics design and can optimize different aspects of your blockchain's economy. 
**Validator management and staking tokenomics** will be covered in: - [Permissioned L1s](/academy/permissioned-l1s) course: Learn about validator manager deployment options - [Permissionless L1s](/academy/permissionless-l1s) course: Learn about staking tokenomics and validator economics # Token Symbol (/academy/l1-native-tokenomics/02-custom-tokens/04-token-symbol) --- title: Token Symbol description: Understanding token symbol conventions and their importance in blockchain ecosystems updated: 2025-08-21 authors: [nicolasarnedo] icon: BookOpen --- When creating a custom native token for your Avalanche L1, choosing the right token symbol is more important than it might initially seem. A token symbol serves as the shorthand identifier for your token across wallets, exchanges, and applications. ## Symbol Conventions Token symbols typically follow these conventions: - **Length**: 3-5 characters (ETH, AVAX, USDC, BTC) - **Format**: All uppercase letters - **Uniqueness**: Should be distinct from existing popular tokens - **Relevance**: Often relates to the project name or purpose ## Native Token Symbol in Code In Solidity and other EVM-based smart contracts, there's an important convention to understand. The native token is always referred to as **"ether"** in the code, regardless of what the actual token symbol is on your blockchain. For example: ```solidity // This always refers to the native token, whether it's ETH, AVAX, or your custom token msg.value // Amount of native token sent address.balance // Native token balance payable(recipient).transfer(1 ether) // Transfer 1 unit of native token ``` This convention exists because Solidity was originally designed for Ethereum, where the native token is Ether. The keyword "ether" became a unit denomination in the language itself. 
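The unit math behind this convention is worth spelling out: on any EVM chain, balances and `msg.value` are integers denominated in the smallest unit (wei), and Solidity's `ether` literal is simply shorthand for 10^18 of those units, regardless of the chain's display symbol. A quick sketch (in Python, for illustration):

```python
# The EVM stores native-token amounts as integers in the smallest unit ("wei"),
# and Solidity's `1 ether` literal is just 10**18 of those units,
# whether the chain's display symbol is ETH, AVAX, or a custom token.

WEI_PER_ETHER = 10**18  # what Solidity's `ether` unit denotes
GWEI = 10**9            # another common denomination (often used for gas prices)

def to_base_units(amount_whole_tokens, decimals: int = 18) -> int:
    """Convert a human-readable amount to integer base units."""
    return int(amount_whole_tokens * 10**decimals)

# `payable(recipient).transfer(1 ether)` moves this many base units:
print(to_base_units(1))   # 1000000000000000000
# A 25 gwei gas price, expressed in base units:
print(25 * GWEI)          # 25000000000
```

The same conversion applies to your custom L1: a transfer of "1 GAME" in a wallet is a transfer of 10^18 base units at the EVM level.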
## Custom L1 Considerations When launching your Avalanche L1: - Your chosen symbol (e.g., "GAME", "DEFI") is what users see in wallets and explorers - But in smart contract code, you'll still use "ether" as the unit - This separation between display symbol and code convention helps maintain compatibility ## Best Practices 1. **Research existing symbols** to avoid conflicts 2. **Keep it memorable** and easy to type 3. **Consider your brand** - the symbol becomes part of your identity 4. **Check trademark considerations** for your chosen symbol Your token symbol is often the first thing users encounter, so choose wisely! # Create Custom Token (/academy/l1-native-tokenomics/02-custom-tokens/05-create-custom-token) --- title: Create Custom Token description: Create a Subnet and set your blockchain's token name and symbol updated: 2025-08-21 authors: [nicolasarnedo] icon: Terminal --- In this course, we'll use the [Developer Console](/console) to create a new L1. While you may have already created an L1 in the Avalanche Fundamentals course, we'll now walk through the process again—this time explaining in detail each tokenomics configuration option available. The goal of this first step is simple: create your Subnet and set your blockchain's token name and token symbol. ## Create Subnet and Choose Token Name Use the tool below to create a Subnet and then add a blockchain to it. In the Genesis configuration, only set the token name and token symbol for now (you can ignore the other sections). **Token Name**: The token name is purely cosmetic and not recorded anywhere in the blockchain code (except for the wrapped native token contract). It's mainly used for display purposes in wallets and explorers. Don't hit **create chain** yet; we will configure other tokenomics (supply, allocation, fees, etc.) in later steps of this course. 
# Native Token Allocation (/academy/l1-native-tokenomics/02-custom-tokens/06-native-token-allocation) --- title: Native Token Allocation description: Initial token allocation of a blockchain refers to the distribution of native tokens when a new blockchain network is launched. updated: 2024-06-28 authors: [usmaneth] icon: BookOpen --- The creator of an Avalanche L1 can pre-allocate the native tokens during the chain genesis event. The initial token allocation of a blockchain refers to the distribution of tokens or digital assets when a new blockchain network is launched. This allocation is a critical aspect of the network's design, as it determines how tokens are distributed among various stakeholders and participants. There are several considerations that go into the initial token allocation: - **Founders and Development Team:** Often, a portion of the initial token supply is allocated to the founders and development team as a reward for their efforts in creating the blockchain protocol. This allocation serves as an incentive for them to continue developing and maintaining the network. - **Early Investors and Backers:** Another portion of the token supply may be allocated to early investors and backers who provided funding or support during the project's early stages. These individuals or entities take on early risk and are rewarded with tokens that may increase in value as the network grows. - **Community:** A significant portion of tokens may be allocated to the community through mechanisms such as a token sale, airdrops, or other distribution methods. This ensures widespread ownership and participation in the network, fostering decentralization and security. - **Reserve:** Some tokens may be allocated to a reserve or treasury to fund ongoing development, marketing, and ecosystem growth initiatives. 
This reserve can be managed by a decentralized autonomous organization (DAO) or a designated entity responsible for allocating funds in the best interest of the network.

Overall, the initial token allocation of a blockchain is a complex and crucial aspect of its design, shaping the network's distribution of ownership, governance, and incentives from the very beginning. Transparent and equitable token allocation mechanisms are essential for fostering trust and confidence in the network among its stakeholders.

# Configure Native Token Allocation (/academy/l1-native-tokenomics/02-custom-tokens/07-configure-token-allocation)

---
title: Configure Native Token Allocation
description: Learn how to configure your blockchain native token allocation.
updated: 2024-06-28
authors: [usmaneth]
icon: Terminal
---

Alright, so let's set the initial token allocation. When prompted, select the **addresses and the amount of your token they will have from the genesis**.

# Introduction (/academy/l1-native-tokenomics/03-precompiles/01-introduction)

---
title: Introduction
description: Chapter overview and core blockchain concepts review
updated: 2025-01-15
authors: [nicolasarnedo]
icon: Book
---

In this section we are going to quickly cover what Precompiles are, why we need them, and how they are implemented. The reason we cover this in the L1 Native Tokenomics course is that in the next two sections we will be working with two precompiles that heavily influence the tokenomics of a chain: the [Native Minter](/l1-native-tokenomics/04-native-minter) and [Fee Config](/l1-native-tokenomics/05-fee-config) Precompiles.

Specifically, we are going to look into **Stateful Precompiles**. If you wish to learn about Precompiles in more detail and how you can use them to configure your L1, please check out the [Customizing the VM](/customizing-evm) course!
## Core Concepts Review (Optional)

Before we look more closely into precompiles, it is important to revisit and fully understand some core concepts about how blockchains and virtual machines work.

### Deterministic State Transitions

Blockchains rely on a **deterministic machine (the EVM)** to calculate transitions from one state to the next. Every transaction modifies the blockchain state in a predictable, reproducible way—ensuring all nodes reach consensus on the new state.

![State Transitions](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/Precompiles_State.png)

### EVM as a Configurable Specification

Think of the EVM like a class in Object-Oriented Programming: it's a specification that defines behavior. Each blockchain—Ethereum, Avalanche C-Chain, or your custom L1—is an **instance** of this EVM class, with its own configuration.

![EVM Instances](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/Precompiles_OOP.png)

**Through Avalanche, you can configure your EVM instance with precompiles that live at the execution level** (we will dive deeper into this later), giving you control over core blockchain functionality like native token minting and transaction fee configuration.

# Stateful Precompiles (/academy/l1-native-tokenomics/03-precompiles/02-precompiles)

---
title: Stateful Precompiles
description: Understanding Stateful Precompiles and their role in blockchain optimization
updated: 2025-01-15
authors: [nicolasarnedo]
icon: BookOpen
---

The notion of precompiled contracts, or "*Precompiles*" as we will call them, is inherited from the Ethereum community.
The EVM, while very robust and efficient at handling thousands of simple transactions like token minting and swapping, has a series of limitations:

### Lack of Libraries

Solidity is a relatively young language, so it doesn't have as many powerful, battle-tested libraries as languages like *Go*, *TypeScript* or *Rust*.

### Expensive Computation

Solidity code is compiled into EVM opcodes—low-level instructions designed for simple operations like addition, storage reads, and basic logic. When performing complex mathematical operations (like elliptic curve cryptography or modular exponentiation), these simple opcodes must be chained together, requiring many steps and frequent use of expensive operations like `SSTORE` (storage writes), which cost 20,000 gas or more. This makes advanced cryptography impractically expensive in pure Solidity.

## Basic Precompiles

Given these limitations, many developer teams and communities have proposed implementing well-known or frequently used operations as programs written in more efficient languages (Go/Rust), bypassing the limitations of the blockchain's smart contract language. These "programs" are then wrapped in an interface (Solidity/Move) and deployed to a predefined address so they can be called from other smart contracts in the application layer.

*Precompiles structure*

Instances of these Precompiles exist across most major blockchain platforms:

- Ethereum has standard precompiles at fixed addresses
- Solana uses "Native Programs" written in Rust
- Aptos and Sui have "Native Functions" exposed through Move modules

For Avalanche, these precompiles are especially important because you are the **owner** of your L1. Rather than having to go through several community approvals (the case for single-chain environments like Ethereum and Solana) and executing a network upgrade to implement them, you can directly define them in the genesis block!
(*or add them as well through a network upgrade*) ### Common Precompiles Here are some of the most widely used precompiles across EVM-compatible blockchains: **Cryptographic Operations:** - `ecRecover` (0x01): Recover Ethereum address from ECDSA signature - `SHA256` (0x02): SHA-256 hash function ## Stateful Precompiles The basic precompiles we've discussed so far (like ecRecover and SHA256) are **stateless**—they perform computations and return results without modifying the blockchain state. However, Avalanche introduces a more powerful concept: **Stateful Precompiles**. Stateful precompiles go a step further by **injecting state access** through the `PrecompileAccessibleState` interface. This gives them the ability to: - Read and modify account balances - Read and write to contract storage - Access the full EVM state during execution This state access makes stateful precompiles far more powerful than their stateless counterparts. Instead of just performing calculations, they can directly manipulate blockchain state at the protocol level—enabling use cases like native token minting, fee configuration, and transaction/contract deployer permissioning. Through stateful precompiles, Avalanche provides a level of EVM customization that goes beyond what's possible with the original precompile interface, making it ideal for building highly customized L1 blockchains. # Protocol Integration (/academy/l1-native-tokenomics/03-precompiles/03-precompile-architecture) --- title: Protocol Integration description: How precompiles integrate with the VM stack and optimize execution updated: 2025-01-15 authors: [nicolasarnedo] icon: BookOpen --- Blockchains are organized into three distinct layers, each with specific responsibilities: ![Blockchain Layers](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/Precompiles_Layers1.png) 1. **Application Layer**: Where smart contracts and user applications run 2. 
**Execution Layer**: Where the EVM processes transactions and executes code 3. **Consensus Layer**: Where validators agree on the blockchain state Precompiles live in the **Execution Layer**, directly integrated into the EVM itself: ![Precompiles in Execution Layer](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/Precompiles_Layers2.png) ## Why Aren't Precompiles in the Application Layer? You might wonder: if precompiles can be called like smart contracts, why aren't they stored with other smart contracts in the application layer? The answer comes down to **performance and access**: **1. Direct State Access** Smart contracts in the application layer must go through the EVM interpreter to access blockchain state. Precompiles, living in the execution layer, have **direct access** to the EVM state—they can read and modify balances, storage, and consensus data without the overhead of bytecode interpretation. **2. Native Code Execution** Application layer contracts are compiled to EVM bytecode and executed opcode-by-opcode. Precompiles are written in Go (Avalanche's client language) and execute as **native code**, making them orders of magnitude faster for complex operations. **3. Protocol-Level Operations** Precompiles need to perform operations that regular smart contracts cannot or should not do—like minting native tokens, configuring transaction fees, or managing validator permissions. These are **protocol-level concerns** that require execution layer access. **4. Security Boundaries** By keeping precompiles in the execution layer, we maintain a clear security boundary. User-deployed contracts can't interfere with protocol operations, and protocol operations benefit from the stability and security of the client software itself. This architectural separation is what makes stateful precompiles so powerful: they bridge the gap between user applications and the blockchain protocol, enabling customizations that would be impossible in the application layer alone. 
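The point that precompiles can be called like smart contracts can be made concrete with a minimal sketch, shown below for the stateless SHA-256 precompile at its fixed address `0x02` (the contract and function names here are illustrative, not part of any standard):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PrecompileCaller {
    // Calls the SHA-256 precompile living at the fixed address 0x02.
    function sha256ViaPrecompile(bytes memory data) public view returns (bytes32 digest) {
        (bool ok, bytes memory result) = address(0x02).staticcall(data);
        require(ok, "precompile call failed");
        digest = abi.decode(result, (bytes32));
    }

    // Equivalent using the built-in, which targets the same precompile.
    function sha256ViaBuiltin(bytes memory data) public pure returns (bytes32) {
        return sha256(data);
    }
}
```

Both functions return the same digest: the `sha256()` built-in is simply Solidity's sugar over a call to the precompile at `0x02`, executed as native code in the client rather than as EVM bytecode.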
# Introduction (/academy/l1-native-tokenomics/04-native-minter/01-introduction)

---
title: Introduction
description: Understanding the Native Minter Precompile and token supply management
updated: 2025-08-21
authors: [nicolasarnedo]
icon: Book
---

In the previous section, we learned about precompiles and how they enable protocol-level customization of your L1. Now we'll put that knowledge into practice with the **Native Minter Precompile**—one of the most powerful tools for managing your blockchain's tokenomics.

## What You'll Learn

This section covers:

- **Native Token Supply**: Understanding hard-capped vs. flexible token supplies
- **Native Minter Precompile**: How to enable and control native token minting
- **Minting Rights Management**: Configuring who can mint new tokens

## Why This Matters

The Native Minter Precompile determines whether your blockchain has a fixed token supply (like Bitcoin's 21M cap) or can mint additional tokens as needed. This decision has major implications:

- **Valueless Gas Tokens**: Enable gas-free user experiences by minting tokens as needed
- **Controlled Inflation**: Manage token supply for economic incentives
- **Protocol Flexibility**: Adapt your tokenomics as your ecosystem grows

By the end of this section, you'll understand how to configure native token minting rights for your L1's specific use case. Let's dive in!

# Native Token Minting Rights (/academy/l1-native-tokenomics/04-native-minter/02-native-token-minting-rights)

---
title: Native Token Minting Rights
description: Learn how to manage the native token minting rights.
updated: 2024-06-28
authors: [usmaneth]
icon: BookOpen
---

Many chains have a hard-capped supply for their gas token. The Avalanche C-Chain uses AVAX as its gas token, and there will only ever be 720M AVAX. This may not be optimal for enterprise use cases, especially if they want to **create an experience with a valueless gas token**.
Therefore, they have the option to mint more native tokens at any time. This can be done through the **Native Minter Precompile**.

If a community running an Avalanche L1 wants to hard-cap the native token, that is perfectly doable as well. They just keep the native minter capabilities deactivated (which is the default).

import NativeMinter from "@/content/common/evm-precompiles/native-minter.mdx";
import defaultMdxComponents from "fumadocs-ui/mdx";

In the upcoming chapter, we will only use an admin address in order to keep the exercise simple.

# Activate Native Minter (/academy/l1-native-tokenomics/04-native-minter/03-activate-native-minter)

---
title: Activate Native Minter
description: Learn how to activate the native minter precompile.
updated: 2024-06-28
authors: [usmaneth]
icon: Terminal
---

When prompted whether you want to allow minting of new native tokens, select **Allow minting additional tokens**.

Remember, we're still configuring the genesis for our L1—**don't hit "Create Chain" yet!** We'll continue adding more tokenomics configurations in the following sections and finally create the chain in the Genesis Breakdown section where you'll see all your configurations come together.

# Introduction (/academy/l1-native-tokenomics/05-fee-config/01-introduction)

---
title: Introduction
description: Let's recap our knowledge on transaction fees in blockchains
updated: 2024-06-28
authors: [usmaneth]
icon: BookOpen
---

import TransactionFees from '@/content/common/multi-chain-architecture/transaction-fees.mdx';

# Transaction Fees (/academy/l1-native-tokenomics/05-fee-config/02-transaction-fees)

---
title: Transaction Fees
description: Learn about L1 blockchain gas fees.
updated: 2024-06-28
authors: [usmaneth]
icon: BookOpen
---

When creating an Avalanche L1, we can configure not only our custom native token, but also how the transaction fees (also known as gas fees) are determined.
This allows Avalanche L1s to define the desired or maximal throughput of the blockchain differently.

![](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/multi-chain-architecture/subnet-fee-comparison-RDdq99JjKbXqvCIQBnreWAwwjzY2jn.png)

To adjust the fee dynamics we have the following parameters:

**Gas Limit:** The total amount of gas that can be used in a single block. This impacts how much computation happens in one block. The value also represents the maximum amount of gas a single transaction can use (which would result in a single-transaction block).

**Target Block Rate:** The network aims to produce a new block every `targetBlockRate` seconds. If the network starts producing blocks faster than this, base fees are increased accordingly; if it starts producing blocks slower than this, base fees are decreased accordingly.

**Min. Base Fee:** The minimum base fee that can be used by a transaction and the cost to do native token transfers. No matter how simple a transaction is, it will cost at least the minimum base fee.

**Target Gas:** The targeted amount of gas (including block gas cost) to consume within a rolling 10-second window.

**Base Fee Change Denominator:** The transaction base fee changes with demand for block space. If the parent block used more gas than its target, the base fee increases; if it used less gas than its target, the base fee decreases. The exact formula is beyond the scope of this section, but setting this value larger means the base fee will change more gradually, while setting it smaller will make the base fee change more rapidly.

**Min. Block Gas Cost:** Minimum gas cost a block should cover.

**Max. Block Gas Cost:** Maximum gas cost a block should cover.

**Block Gas Cost Step:** This value determines the block gas cost change rate depending on the targetBlockRate.
If the parent block is produced at the targetBlockRate, the block gas cost will stay the same. If the parent block is produced at a slower rate, the block gas cost will decrease.

# Activate Fee Config (/academy/l1-native-tokenomics/05-fee-config/03-activate-fee-config)

---
title: Activate Fee Config
description: Learn how to configure your blockchain gas fees.
updated: 2024-06-28
authors: [usmaneth]
icon: Terminal
---

To set your custom fee configuration, select **Customize fee config**. Then toggle the **Reward Manager** and hit the *Add Wallet* button to include your connected wallet in the allowlist for this Precompile.

Remember, we're still configuring the genesis for our L1—**don't hit "Create Chain" yet!** We'll continue adding more configurations in the following sections and finally create the chain in the Genesis Breakdown section where you'll see all your configurations come together.

# Initial Allocation (/academy/l1-native-tokenomics/07-token-distribution/01-initial-allocation)

---
title: Initial Allocation
description: Considerations for the initial token allocation on an Avalanche L1
updated: 2024-09-03
authors: [0xstt]
icon: Book
---

The creator of an Avalanche L1 can pre-allocate native tokens during the chain genesis event. This **initial token allocation** refers to the distribution of tokens or digital assets when a new blockchain network is launched, which is a crucial aspect of the network's design.

> **Note:** Initial token allocation determines how tokens are distributed among stakeholders and participants.

### Key Considerations for Initial Token Allocation

There are several factors to consider when allocating tokens:

#### Founders and Development Team

A portion of the initial token supply is often allocated to the founders and the development team as a reward for their efforts in creating the blockchain protocol. This serves as an incentive for continued development and network maintenance.
#### Early Investors and Backers Another portion of the token supply may be allocated to early investors and backers who provided funding or support during the project's early stages. These entities take on early risk and are rewarded with tokens that may increase in value as the network grows. #### Community A significant portion of tokens may be allocated to the community through mechanisms such as token sales, airdrops, or other distribution methods. This ensures widespread ownership and participation in the network, fostering decentralization and security. #### Reserve Some tokens may be allocated to a reserve or treasury to fund ongoing development, marketing, and ecosystem growth initiatives. This reserve can be managed by a DAO or another entity tasked with allocating funds for the network's benefit. > **Tip:** Transparent and equitable token allocation mechanisms are essential for fostering trust and confidence in the network among its stakeholders. # Vesting Schedules (/academy/l1-native-tokenomics/07-token-distribution/02-vesting-schedules) --- title: Vesting Schedules description: Learn about vesting schedules and how to implement them in Solidity. updated: 2024-10-08 authors: [owenwahlgren] icon: Book --- **Vesting schedules** are mechanisms used in blockchain projects to release tokens to team members, investors, or other stakeholders over a specified period. This approach helps align incentives, prevent immediate sell-offs, and promote long-term commitment to the project. --- ## Understanding Vesting Schedules A vesting schedule dictates how and when tokens are released to a recipient. Common elements include: - **Cliff Period**: An initial period during which no tokens are vested. - **Vesting Period**: The total duration over which tokens are gradually released. - **Release Interval**: The frequency at which tokens are released (e.g., monthly, quarterly). - **Total Allocation**: The total number of tokens to be vested. 
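These elements combine into a simple piecewise formula for linear vesting with a cliff (matching the common implementation where vesting accrues from `start` but nothing is claimable before `cliff`):

```
vested(t) = 0                                      if t < cliff
vested(t) = totalTokens * (t - start) / duration   if cliff <= t < start + duration
vested(t) = totalTokens                            if t >= start + duration

releasable(t) = vested(t) - released
```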
### Types of Vesting Schedules

1. **Linear Vesting**: Tokens are released uniformly over the vesting period.
2. **Graded Vesting**: Tokens vest in portions at different intervals.
3. **Cliff Vesting**: All or a significant portion of tokens vest after the cliff period.

---

## Implementing Vesting Schedules in Solidity

Smart contracts can automate vesting schedules, ensuring transparency and trustlessness. Below is an example of a simple Solidity contract implementing a linear vesting schedule.

### Example Solidity Contract

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract TokenVesting {
    address public beneficiary;
    uint256 public start;
    uint256 public duration;
    uint256 public cliff;
    uint256 public totalTokens;
    uint256 public released;
    IERC20 public token;

    constructor(
        address _token,
        address _beneficiary,
        uint256 _start,
        uint256 _cliffDuration,
        uint256 _duration,
        uint256 _totalTokens
    ) {
        require(_beneficiary != address(0), "Invalid beneficiary");
        require(_cliffDuration <= _duration, "Cliff longer than duration");
        require(_duration > 0, "Duration must be > 0");

        token = IERC20(_token);
        beneficiary = _beneficiary;
        start = _start;
        cliff = _start + _cliffDuration;
        duration = _duration;
        totalTokens = _totalTokens;
    }

    function release() public {
        require(block.timestamp >= cliff, "Cliff period not reached");
        uint256 unreleased = releasableAmount();
        require(unreleased > 0, "No tokens to release");

        released += unreleased;
        token.transfer(beneficiary, unreleased);
    }

    function releasableAmount() public view returns (uint256) {
        return vestedAmount() - released;
    }

    function vestedAmount() public view returns (uint256) {
        if (block.timestamp < cliff) {
            return 0;
        } else if (block.timestamp >= start + duration) {
            return totalTokens;
        } else {
            return (totalTokens * (block.timestamp - start)) / duration;
        }
    }
}

interface IERC20 {
    function transfer(address recipient, uint256 amount) external returns (bool);
}
```

# Bonding Curves
(/academy/l1-native-tokenomics/07-token-distribution/03-bonding-curves) --- title: Bonding Curves description: Learn about bonding curves and their role in token economics. updated: 2024-10-08 authors: [owenwahlgren] icon: Book --- **Bonding curves** are mathematical formulas used in blockchain and token economics to define the relationship between a token's price and its supply. They provide a mechanism for automated price discovery and liquidity, enabling decentralized issuance and trading of tokens without relying on traditional market makers or exchanges. --- ## Understanding Bonding Curves A bonding curve is a continuous token model where the price of a token is determined by a predefined mathematical function based on the total supply in circulation. As more tokens are purchased and the supply increases, the price per token rises according to the curve. Conversely, selling tokens decreases the supply and lowers the price. ### Key Concepts - **Automated Market Maker (AMM)**: A system that provides liquidity and facilitates trading by automatically adjusting prices based on supply and demand. - **Price Function**: A mathematical formula that defines how the token price changes with supply. - **Liquidity Pool**: A reserve of tokens used to facilitate buying and selling without requiring counterparties. --- ## How Bonding Curves Work ### Price Functions The bonding curve relies on a price function `P(S)`, where: - `P` is the price per token. - `S` is the current supply of tokens. Common price functions include linear, exponential, and sigmoid curves. #### Linear Bonding Curve A simple linear function: ``` P(S) = a * S + b ``` - `a` and `b` are constants defining the slope and intercept. #### Exponential Bonding Curve An exponential function: ``` P(S) = e^(k * S) ``` - `e` is the base of the natural logarithm. - `k` is a constant determining the rate of price increase. 
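Taking the linear curve as an example, the total cost of a purchase that moves the supply from `S1` to `S2` is the area under the price function over that interval:

```
Cost(S1, S2) = integral of P(S) dS over [S1, S2]
             = (a / 2) * (S2^2 - S1^2) + b * (S2 - S1)
```

Selling works the same way in reverse: the proceeds of reducing supply from `S2` back to `S1` equal the same area.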
### Buying and Selling Tokens - **Buying Tokens**: To purchase tokens, a user pays an amount calculated by integrating the price function over the desired increase in supply. - **Selling Tokens**: To sell tokens, a user receives an amount calculated by integrating the price function over the desired decrease in supply. --- ## Applications of Bonding Curves ### Token Launch Mechanisms Bonding curves enable projects to launch tokens without initial liquidity or listing on exchanges. - **Continuous Token Issuance**: Tokens can be minted on-demand as users buy them. - **Fair Price Discovery**: Prices adjust automatically based on demand. ### Decentralized Finance (DeFi) Used in AMMs and liquidity pools to facilitate decentralized trading. - **Uniswap**: Utilizes bonding curves for token swaps. - **Balancer**: Manages portfolios using bonding curves. ### Fundraising and DAOs Facilitate fundraising by allowing investors to buy tokens that represent shares or voting rights. - **Continuous Organizations**: Organizations that continuously raise funds through token sales governed by bonding curves. - **DAO Membership**: Tokens purchased via bonding curves grant access and voting power. --- ## Implementing Bonding Curves in Smart Contracts ### Solidity Example Below is a simplified example of implementing a linear bonding curve in Solidity. 
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract BondingCurve {
    uint256 public totalSupply;
    uint256 public constant a = 1e18; // Slope
    uint256 public constant b = 1e18; // Intercept

    mapping(address => uint256) public balances;

    function buy() external payable {
        uint256 tokensToMint = calculateTokensToMint(msg.value);
        balances[msg.sender] += tokensToMint;
        totalSupply += tokensToMint;
    }

    function sell(uint256 tokenAmount) external {
        require(balances[msg.sender] >= tokenAmount, "Insufficient balance");
        uint256 ethToReturn = calculateEthToReturn(tokenAmount);
        balances[msg.sender] -= tokenAmount;
        totalSupply -= tokenAmount;
        payable(msg.sender).transfer(ethToReturn);
    }

    function calculatePrice(uint256 supply) public pure returns (uint256) {
        return a * supply + b;
    }

    function calculateTokensToMint(uint256 ethAmount) public view returns (uint256) {
        // Simplified calculation for demonstration purposes
        uint256 tokens = ethAmount / calculatePrice(totalSupply);
        return tokens;
    }

    function calculateEthToReturn(uint256 tokenAmount) public view returns (uint256) {
        // Simplified calculation for demonstration purposes
        uint256 ethAmount = tokenAmount * calculatePrice(totalSupply);
        return ethAmount;
    }
}
```

### Explanation

- **`buy()`**: Users send ETH to buy tokens. The number of tokens minted is calculated based on the bonding curve.
- **`sell()`**: Users can sell their tokens back for ETH. The amount of ETH returned is calculated based on the current supply.
- **`calculatePrice()`**: Determines the price per token based on the total supply.

This is a simplified example and not suitable for production. Proper handling of fixed-point arithmetic and security considerations is necessary.

### Considerations and Risks

**Price Volatility**

- **Speculation**: Prices can become volatile due to speculative trading.
- **Market Manipulation**: Large trades may influence prices significantly.
**Smart Contract Risks** - **Security Vulnerabilities**: Bugs in the contract can lead to loss of funds. - **Complexity**: Implementing accurate bonding curves requires careful mathematical and technical design. **Liquidity Concerns** - **Slippage**: Large trades may experience significant price slippage. - **Liquidity Pools**: Adequate reserves are necessary to handle buy and sell orders. ## Conclusion Bonding curves offer a powerful tool for automated price discovery and token issuance in decentralized networks like Avalanche. They enable projects to create self-sustaining economies with built-in liquidity and dynamic pricing. Understanding bonding curves is essential for developers and stakeholders involved in tokenomics and decentralized finance. By carefully designing bonding curve parameters and smart contracts, projects can align incentives, promote fair participation, and foster sustainable growth within their ecosystems. # Airdrops (/academy/l1-native-tokenomics/07-token-distribution/04-airdrop) --- title: Airdrops description: Learn about token airdrops and Core's airdrop service in the Avalanche ecosystem. updated: 2024-10-08 authors: [owenwahlgren] icon: Book --- **Token airdrops** are a popular method in the blockchain ecosystem for distributing tokens to users, promoting projects, and incentivizing community engagement. In the Avalanche network, airdrops can help bootstrap new projects, reward loyal participants, and increase the distribution of tokens to a wider audience. --- ## Understanding Airdrops Airdrops involve sending tokens directly to users' wallets, typically for free or in exchange for simple tasks like participating in network activities, holding a specific token, or interacting with a smart contract. They serve multiple purposes: - **Marketing and Promotion**: Raise awareness about a new project or token. - **User Acquisition**: Attract new users to a platform or service. - **Community Rewards**: Reward loyal users and early adopters. 
- **Decentralization**: Distribute tokens widely to enhance network decentralization. --- ## Core Wallet's Airdrop Tool [Core](https://core.app/), Avalanche's native ecosystem wallet and portfolio, has integrated a new promotions and airdrop tool developed by Ava Labs. This tool leverages on-chain data sources for seamless token distribution, allowing any builder to strategically reward their loyal community members. ### Key Features - **On-Chain Data Integration**: Utilizes indexed on-chain data to identify and reward true users based on their interactions with contracts. - **User-Friendly Interface**: Provides a straightforward, free method to use on-chain data for airdrops and promotions. - **Custom Queries**: Allows builders to locate community members who have interacted with their contracts using the query builder. - **Manual Address Addition**: Enables manual addition or upload of addresses for token distribution. ### How It Works Ava Labs indexes on-chain data, enabling Core's airdrop tool to quickly analyze contract-level activity. Daily aggregation of C-Chain token balances is conducted at the close of each day (UTC time), collecting balances of native tokens, ERC20, and ERC721 tokens. Users can leverage this data to pinpoint addresses that meet specific criteria based on token activity. At launch, the tool supports Avalanche C-Chain and a subset of ERC20 tokens, with more tokens to be added in the future. #### Query Builder - **Filter Options**: Filter based on holding duration, current holdings, minimum holding duration, or customized date ranges. - **Multiple Queries**: Conduct multiple queries in a single search to refine the target audience. #### Airdrop Distribution Airdrops are distributed through contracts deployed on-chain. Core's airdrop tool leverages a deployed version of ThirdWeb's audited Airdrop contract and Ava Labs’ internal security team. At launch, airdrops will be distributed evenly among all selected wallets. 
--- ## Benefits of Using Core's Airdrop Tool - **Efficient Distribution**: Streamlines token distribution and community engagement with its intuitive interface. - **Cost-Free**: Provides a free method for builders to conduct airdrops and promotions. - **Strategic Rewards**: Allows for targeted airdrops based on specific on-chain activities. - **Security**: Utilizes audited contracts and security measures to protect token distributions. --- ## Real-World Example: Core Memecoin Month To showcase the power of the new airdrop tool, Core rewarded all participants from their **Core Memecoin Month**, which ran from April 1 to May 6. Participants received an evenly distributed reward in the same memecoin they used during the event. For example, if a participant qualified by following instructions during COQ’s memecoin week, they were airdropped COQ coins. --- ## Considerations - **Regulatory Compliance**: Ensure that airdrops comply with relevant laws and regulations in your jurisdiction. - **User Privacy**: Be mindful of user privacy and data protection when handling wallet addresses. - **Security Best Practices**: Utilize audited contracts and follow security best practices to protect against vulnerabilities. --- ## Conclusion Airdrops are a valuable tool for projects in the Avalanche ecosystem to engage with their community, distribute tokens, and promote network growth. Core's new airdrop tool simplifies this process by leveraging on-chain data and providing an intuitive interface for builders and developers. By utilizing Core's airdrop service, projects can efficiently and securely reward their loyal community members, fostering stronger relationships and encouraging active participation in the Avalanche network. --- # Introduction (/academy/l1-native-tokenomics/08-governance/01-introduction) --- title: Introduction description: An overview of governance use cases in Avalanche Layer 1 tokenomics, focusing on token-based voting and permissioning mechanisms. 
updated: 2024-09-03
authors: [owenwahlgren]
icon: Book
---

Governance in Avalanche Layer 1 networks centers on two fundamental concepts: **token-based voting** and **permissioning through precompiles**. These mechanisms empower the community to manage critical aspects of network functionality and access control. This chapter explores various governance models within the Avalanche ecosystem, including Decentralized Autonomous Organizations (DAOs), quadratic voting, and real-world governance use cases.

### Key Aspects of Governance in Avalanche

#### Token-Based Voting

Token holders of a Layer 1 can play a pivotal role in the governance of the network. They have the authority to vote on proposals that can affect both technical parameters and broader policy decisions. This system ensures that individuals with a stake in the network have a direct influence over its future development.

#### Permissioning Through Precompiles

Governance on Avalanche L1 networks also involves managing precompiles, specifically precompiles with the [AllowList interface](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#allowlist-interface). Precompiles are specialized smart contracts that provide functionalities not available through standard contracts. The AllowList interface is part of multiple precompiles and can govern, among other functionality, who can deploy contracts or send transactions on the network. Through governance, the community can vote to add or remove addresses from the AllowList, controlling who has permission to perform specific actions on the network.

### Examples of Governance Use Cases for an Avalanche L1

1. **AllowList Precompile Governance**
- **Adding or Removing Addresses**
A common governance activity involves proposals to modify the AllowList for a specific precompile (e.g., [NativeTokenMinter](/docs/avalanche-l1s/upgrade/customize-avalanche-l1#minting-native-coins)).
These proposals may focus on adding or removing addresses, thereby determining who is authorized to interact with the precompile, for example deploying contracts or sending transactions on specific L1s. Token holders can vote on these proposals to ensure that only trusted entities are granted these permissions. 2. **DAOs and L1 Governance** - **Decentralized Autonomous Organizations (DAOs)** In the Avalanche ecosystem, DAOs can govern specific L1s by making critical decisions on upgrades, permissioning, and operational rules. These organizations typically use token-based voting to reach consensus and effectively manage their resources. 3. **Quadratic Voting** - **Promoting Fair Decision-Making** Quadratic voting could be employed in a Layer 1's governance to promote more democratic decision-making. This method allocates voting power more equitably among participants, allowing for nuanced decisions. It helps balance the influence between larger and smaller token holders, ensuring that the governance process is fair and representative. ### Importance of Governance in Avalanche L1 Tokenomics **Permission and Access Control** Governance over precompiles with the AllowList interface is crucial for decentralizing control over key network functions. It ensures that only community-approved entities can access sensitive parts of the network, enhancing security and trust. **Balancing Influence** Mechanisms such as quadratic voting help balance power between large and small token holders. This leads to fairer decision-making processes and fosters a more inclusive community. **Community-Driven Development** Token-based governance empowers the community to propose and vote on changes that directly influence the network’s future. This includes decisions related to L1 management, technical upgrades, and the addition or removal of participants. 
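The AllowList permissioning discussed in this chapter can be illustrated with a minimal in-memory sketch. The role values mirror the precompile's convention (None, Enabled, Admin), but the `AllowList` class below is a hypothetical illustration of the access rules, not the on-chain precompile itself:

```typescript
// Roles used by the AllowList interface: None (no access),
// Enabled (may use the gated function), Admin (may also modify the list).
const Role = { None: 0, Enabled: 1, Admin: 2 } as const;
type Role = (typeof Role)[keyof typeof Role];

class AllowList {
  private roles = new Map<string, Role>();

  constructor(initialAdmin: string) {
    this.roles.set(initialAdmin, Role.Admin);
  }

  readAllowList(addr: string): Role {
    return this.roles.get(addr) ?? Role.None;
  }

  // Only admins may change roles — mirrors the precompile's access check.
  setRole(caller: string, target: string, role: Role): void {
    if (this.readAllowList(caller) !== Role.Admin) {
      throw new Error("caller is not an admin");
    }
    this.roles.set(target, role);
  }
}

// A passed governance proposal would translate into calls like these
// (both addresses are illustrative placeholders):
const list = new AllowList("governance-timelock");
list.setRole("governance-timelock", "trusted-minter", Role.Enabled);
```

In practice a governance contract (for example, a DAO's timelock) would hold the Admin role, so that role changes can only happen after a successful vote.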
In the following sections, we will delve deeper into these governance models and examine how they function in practice within Avalanche L1 networks. # Governance Models (/academy/l1-native-tokenomics/08-governance/02-governance-models) --- title: Governance Models description: Learn about different governance models in blockchain networks. updated: 2024-10-08 authors: [owenwahlgren] icon: Book --- In blockchain networks like Avalanche and other EVM-compatible platforms, governance models are essential for guiding how decisions are made, who holds the authority, and how changes are implemented within the system. Effective governance ensures that the network evolves in a manner that reflects the community’s interests while promoting sustainability, security, and innovation. ## Understanding Governance Models Governance models can be broadly categorized into three types: ### 1. On-Chain Governance On-chain governance involves decision-making processes encoded directly into the blockchain protocol. Participants use the network’s native tokens to vote on proposals, and outcomes are automatically enforced by smart contracts. **Advantages**: - Transparency - Automation - Immutability **Disadvantages**: - Potential for low voter participation - Risk of governance attacks if tokens are concentrated ### 2. Off-Chain Governance Off-chain governance relies on discussions and decisions made outside the blockchain, often through forums, social media, or community meetings. While final implementations may involve on-chain actions, deliberation and consensus-building occur off-chain. **Advantages**: - Flexibility - Ability to consider qualitative factors **Disadvantages**: - Less transparency - Potential for slower decision-making ### 3. Hybrid Governance Hybrid governance combines on-chain and off-chain elements, aiming to balance the advantages of both models. Initial discussions and proposal formulations happen off-chain, while voting and implementation are conducted on-chain. 
**Advantages**: - Improved flexibility - Enhanced efficiency **Disadvantages**: - Complexity in coordination - Potential disconnect between off-chain discussions and on-chain actions ## Key Components of Governance Models Effective governance models comprise several key components: ### 1. Proposal Mechanism A structured process for submitting and discussing proposals is essential. - **Proposal Submission**: Guidelines for who can submit proposals and how they should be formatted. - **Discussion Period**: Allocated time for the community to debate and provide feedback. - **Amendments**: Allowing modifications based on community input before voting. ### 2. Voting System The method by which votes are cast and counted significantly impacts the governance model’s effectiveness. - **Token-Based Voting**: Votes are weighted based on the number of tokens held. - **Quadratic Voting**: Voting power increases quadratically with the number of tokens, promoting fairness. - **Delegated Voting**: Token holders can delegate their voting power to representatives. ### 3. Execution and Enforcement Ensuring that approved proposals are implemented correctly is crucial. - **Automatic Execution**: Smart contracts automatically enforce decisions without human intervention. - **Manual Execution**: Designated entities carry out decisions, which may introduce delays or risks. ## Governance Models in the Avalanche Ecosystem Avalanche supports flexible governance models that can be customized for different Layer 1 (L1) chains and projects. ### L1-Specific Governance Each L1 can implement its own governance model tailored to its needs. - **DeFi L1**: Might prioritize rapid, on-chain governance to adapt quickly to market changes. - **Enterprise L1**: Might opt for a hybrid model to incorporate regulatory compliance considerations. 
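Two of the voting systems listed above — token-based weighting and delegation — can be combined in a minimal tally sketch. All names are illustrative, and delegation here is single-hop (chains of delegation are not followed), a simplification real governance contracts handle more carefully:

```typescript
type Vote = "for" | "against";

interface Voter {
  address: string;
  balance: bigint;     // governance-token balance = voting weight
  delegateTo?: string; // optional delegation target (single hop)
}

// Resolve delegations, then sum token weight per choice.
function tally(
  voters: Voter[],
  votes: Map<string, Vote>,
): { for: bigint; against: bigint } {
  // Effective weight: own balance plus balances delegated to you.
  const weight = new Map<string, bigint>();
  for (const v of voters) {
    const beneficiary = v.delegateTo ?? v.address;
    weight.set(beneficiary, (weight.get(beneficiary) ?? 0n) + v.balance);
  }
  const result = { for: 0n, against: 0n };
  for (const [addr, vote] of votes) {
    result[vote] += weight.get(addr) ?? 0n;
  }
  return result;
}
```

Note that a voter who delegates casts no vote of their own in this sketch; their weight counts only through their delegate, which is also how snapshot-based delegation typically behaves.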
### DAO-Based Governance Decentralized Autonomous Organizations (DAOs) on Avalanche often adopt on-chain governance models with token-based or quadratic voting systems. - **Transparency**: All decisions are recorded on the blockchain for auditing. - **Automation**: Leverage smart contracts to automate proposal submission, voting, and execution. - **Efficiency**: Enhance efficiency and trust within the community. ### Permissioned Governance Some L1s may require permissioned governance models, where only authorized participants can vote or propose changes. - **Private Networks**: Suitable for applications requiring compliance with specific regulations. - **Controlled Participation**: Ensures that only vetted members influence the network. ## Factors Influencing Governance Model Choice Several factors influence the selection of a governance model: ### Community Size and Diversity - **Engagement**: Larger communities may benefit from models that promote inclusivity. - **Power Distribution**: Models like quadratic voting help mitigate power concentration. ### Regulatory Environment - **Compliance**: Networks in regulated industries may need governance structures that allow for auditing. - **Authority Control**: May require oversight by recognized authorities. ### Network Goals and Values - **Decentralization**: Networks prioritizing decentralization might avoid models with central points of control. - **Security and Scalability**: The chosen model should align with the network’s primary objectives. ## Challenges in Governance Models Implementing governance models comes with challenges that need addressing: ### Voter Apathy - **Low Participation**: Can undermine the legitimacy of decisions. - **Encouraging Engagement**: Use incentives or simplify processes to boost involvement. ### Governance Attacks - **Malicious Actors**: May acquire significant voting power to push harmful proposals. 
- **Mitigation Strategies**: Implement time locks, quorum requirements, and reputation systems. ### Complexity and Usability - **Deterring Participation**: Complex models may discourage users. - **User-Friendly Design**: Ensure mechanisms are accessible to encourage broad involvement. ## The Evolution of Governance Models Governance models are continually evolving to address new challenges and integrate innovative ideas. ### Enhancing Participation - **Tools and Interfaces**: Develop user-friendly platforms for easier participation. ### Improving Security - **Safeguards**: Implement measures against attacks to ensure robustness. ### Adapting to Community Needs - **Feedback Integration**: Allow models to evolve based on community input and changing requirements. ## Conclusion Governance models are foundational to the success of blockchain networks like Avalanche. They shape decision-making processes, power distribution, and the network’s ability to adapt over time. By designing and implementing effective governance structures, the community can ensure the network remains secure, innovative, and aligned with participant interests. Understanding these models enables stakeholders to contribute meaningfully to the network’s evolution. As the ecosystem grows, collaborative and well-structured governance will be key to navigating challenges and seizing opportunities in the decentralized landscape. # DAOs (/academy/l1-native-tokenomics/08-governance/03-daos) --- title: DAOs description: Learn about the role and function of DAOs in the Avalanche ecosystem. updated: 2024-10-07 authors: [owenwahlgren] icon: Book --- In the Avalanche ecosystem and other EVM-compatible networks, Decentralized Autonomous Organizations (DAOs) are pivotal for governance. DAOs operate through smart contracts on the blockchain, enabling decentralized management without centralized leadership. 
They empower communities to collectively make decisions on protocol upgrades, fund allocations, and other critical governance matters. ### How DAOs Function DAOs in the Avalanche network govern specific L1s or projects using token-based voting mechanisms. Members hold governance tokens granting them the right to propose initiatives and vote on various issues. The decision-making process is transparent and recorded on the blockchain, ensuring accountability and trust among participants. For example, a DAO might oversee an L1 dedicated to decentralized finance (DeFi) applications. DAO members can vote on proposals related to L1 upgrades, fee structures, or partnership integrations. By distributing governance tokens to participants, the DAO ensures that contributors have a say in the network’s direction. Please reference OpenZeppelin's Governance contracts for more information on how DAOs can be implemented: [OpenZeppelin Contracts](https://docs.openzeppelin.com/contracts/4.x/api/governance). ### Benefits of DAOs in Governance - **Decentralization**: Eliminates the need for centralized authorities, reducing risks of single points of failure and corruption. Decision-making power is distributed among token holders, aligning actions with the community’s interests. - **Transparency**: All proposals, votes, and decisions are recorded on the blockchain, providing an immutable and transparent governance record. This openness fosters trust and encourages active participation. - **Efficiency**: Smart contracts automate many governance aspects, such as vote tallying and proposal execution. Automation reduces human error and accelerates decision-making. - **Inclusivity**: By allowing anyone with governance tokens to participate, DAOs promote inclusivity and democratize organizational control. Broad participation can lead to diverse perspectives and innovative solutions. ### DAOs and Tokenomics DAOs significantly impact the tokenomics of Avalanche and other EVM networks. 
Governance tokens often have utility beyond voting rights, including staking rewards or fee reductions. This dual utility incentivizes holding and using the tokens, contributing to the network’s economic health.

Additionally, DAOs can manage treasuries funded by network fees or token issuance. These funds can be allocated to development initiatives, marketing efforts, or community grants, as decided by DAO members. Financial autonomy enables DAOs to invest directly in the growth and sustainability of their networks.

### Challenges and Considerations

While DAOs offer numerous benefits, they also face challenges:

- **Governance Participation**: Ensuring active participation can be difficult, as some token holders may be passive investors. Low voter turnout can lead to decisions not reflecting the broader community’s interests.
- **Security Risks**: Smart contract vulnerabilities pose significant risks. Exploits or bugs in governance contracts can lead to loss of funds or manipulation of voting outcomes.
- **Regulatory Uncertainty**: The legal status of DAOs is evolving in many jurisdictions. Regulatory actions could impact their operation and recognition, affecting functionality.

### Real-World Examples in the Avalanche Ecosystem

- **Avalanche Ambassador DAO**: A community DAO of Avalanche ambassadors that partners with proven projects from the Avalanche ecosystem to support the Avalanche community. These projects can create exclusive paid bounties for ambassadors to complete. Ambassadors get opportunities to contribute and collaborate with these projects, and the projects get access to a global network of ambassadors.
- **DeFi Protocol DAOs**: Projects building on Avalanche establish DAOs to manage protocol parameters like interest rates, collateral requirements, and reward distributions.
- **Community DAOs**: Focus on community initiatives, funding educational content, organizing events, and promoting Avalanche technologies.
The Avalanche Ambassador DAO is an example of a community-driven initiative.

### The Role of DAOs in Future Governance

As the Avalanche network grows, DAOs will likely become increasingly important in governance. They offer a scalable and flexible framework for managing complex decentralized systems. By empowering communities to self-govern, DAOs contribute to the resilience, adaptability, and inclusivity of blockchain networks.

In the context of Avalanche Layer 1 (L1) tokenomics, DAOs exemplify how decentralized governance models can effectively manage resources, drive innovation, and align diverse stakeholder interests. Understanding and participating in DAOs allows individuals to influence the network’s evolution and contribute to its long-term success.

# Quadratic Voting (/academy/l1-native-tokenomics/08-governance/04-quadratic-voting)
---
title: Quadratic Voting
description: Learn how quadratic voting balances influence between large and small token holders.
updated: 2024-10-07
authors: [owenwahlgren]
icon: Book
---

### Quadratic Voting

Quadratic voting is a governance mechanism designed to achieve a more equitable and democratic decision-making process by allowing participants to express the intensity of their preferences. In the context of the Avalanche network and other EVM-compatible platforms, quadratic voting helps balance the influence between large and small token holders, mitigating the dominance of wealthy participants and promoting fairness.

#### How Quadratic Voting Works

In traditional token-based voting systems, each token typically equates to one vote, which can lead to disproportionate influence by large token holders. Quadratic voting addresses this imbalance by making the cost of casting multiple votes increase quadratically. This means that the cost of an individual's votes is the square of the number of votes they cast. For example:

- Casting 1 vote costs 1 credit.
- Casting 2 votes costs 4 credits.
- Casting 3 votes costs 9 credits.
- Casting 4 votes costs 16 credits.
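The schedule above is simply cost = votes². As a minimal sketch (function names are illustrative):

```typescript
// Quadratic voting: casting n votes on one issue costs n^2 credits.
function quadraticCost(votes: number): number {
  return votes * votes;
}

// Given a credit budget, the maximum number of votes a participant
// can cast on a single issue is floor(sqrt(budget)).
function maxVotes(credits: number): number {
  return Math.floor(Math.sqrt(credits));
}
```

So a participant with 100 credits can cast at most 10 votes on one issue, while spreading those credits across several issues yields more total votes — which is exactly the preference-intensity trade-off the mechanism is designed to create.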
This system ensures that casting a large number of votes becomes progressively more expensive, discouraging any single participant from overwhelming the voting process. It allows individuals to allocate their voting power more strategically, focusing on issues they care about most. #### Implementation in Avalanche Governance Quadratic voting can be integrated into an L1's governance models to enhance democratic participation. By adjusting the voting mechanisms in DAOs or other governance frameworks to support quadratic voting, the network can: - **Promote Inclusivity**: Smaller token holders gain a more significant voice in governance decisions, ensuring that the network's direction reflects a broader range of perspectives. - **Encourage Thoughtful Voting**: Participants must consider the cost of casting multiple votes, leading to more deliberate and meaningful voting behavior. - **Reduce Plutocracy Risks**: By limiting the disproportionate influence of large token holders, quadratic voting helps prevent the concentration of power and promotes a healthier governance ecosystem. #### Benefits of Quadratic Voting - **Fair Representation**: Balances the influence among participants, providing a fairer representation of the community's preferences. - **Preference Intensity**: Allows participants to express how strongly they feel about specific issues, not just whether they support or oppose them. - **Enhanced Collaboration**: Encourages dialogue and cooperation among stakeholders, as consensus becomes more valuable than sheer voting power. #### Challenges and Considerations While quadratic voting offers significant advantages, there are challenges to consider: - **Sybil Attacks**: Participants might attempt to create multiple identities to circumvent the quadratic cost. Implementing identity verification or staking mechanisms can mitigate this risk. 
- **Complexity**: The quadratic voting system is more complex than traditional voting methods, which may require additional education and user-friendly interfaces to ensure broad adoption. - **Economic Barriers**: Although quadratic costs balance influence, participants with more resources still have an advantage. Careful calibration of the cost functions and possible subsidies for smaller holders can help address this issue. #### Real-World Examples in the Avalanche Ecosystem Quadratic voting is gaining traction in decentralized governance. While its adoption in Avalanche is still emerging, projects can look to examples like: - **Gitcoin Grants**: Uses quadratic funding, a concept related to quadratic voting, to allocate funds to public goods based on community support. - **Democracy Earth**: An organization advocating for decentralized, blockchain-based governance systems utilizing quadratic voting to enhance democratic processes. By learning from these implementations, Avalanche projects can design governance models that incorporate quadratic voting to achieve more equitable outcomes. ### The Role of Quadratic Voting in Avalanche Governance Quadratic voting represents a significant step towards more democratic and fair governance within the Avalanche ecosystem. By empowering participants to express the strength of their preferences without allowing wealth to dominate the decision-making process, quadratic voting aligns with the principles of decentralization and community-driven development. Incorporating quadratic voting into Avalanche's governance structures can: - **Strengthen Community Engagement**: Encourages broader participation by making individual votes more impactful. - **Improve Decision Quality**: Reflects the true intensity of community sentiments, leading to decisions that better serve the network's interests. 
- **Enhance Network Resilience**: Diversifies governance influence, reducing the risk of centralized control and increasing the network's adaptability. As an L1 continues to evolve, embracing innovative governance mechanisms like quadratic voting will be essential for fostering a vibrant, inclusive, and sustainable ecosystem. # Governance 2.0 (/academy/l1-native-tokenomics/08-governance/05-governance-20) --- title: Governance 2.0 description: Learn about the evolution of blockchain governance models. updated: 2024-10-07 authors: [owenwahlgren] icon: Book --- Governance 2.0 represents the evolution of blockchain governance models, aiming to address the limitations of earlier systems and enhance the efficiency, inclusivity, and adaptability of decentralized networks like Avalanche. By integrating innovative mechanisms and technologies, Governance 2.0 seeks to create more robust and responsive governance structures that better align with the needs of the community. --- ## Key Features of Governance 2.0 ### Dynamic Participation Incentives Governance 2.0 models implement incentive structures that reward active engagement. This may include: - **Staking Rewards**: Earning rewards for participating in voting processes. - **Reputation Systems**: Recognizing valuable contributions to the network. - **Aligned Tokenomics**: Aligning individual incentives with the network's success. ### Flexible Governance Frameworks These models prioritize adaptability, allowing governance structures to evolve as the network grows and community needs change. Modular governance components can be updated or replaced without overhauling the entire system, enabling continuous improvement. ### Enhanced Transparency and Accountability Advanced smart contract capabilities provide greater transparency in decision-making processes. All proposals, votes, and outcomes are recorded on the blockchain, allowing participants to audit and verify governance activities, which fosters trust and accountability. 
### Improved Security Measures Governance 2.0 incorporates robust safeguards against attacks and exploits, including: - **Time-Locked Contracts**: Preventing sudden changes without community consensus. - **Multi-Signature Requirements**: Ensuring multiple approvals for executing decisions. - **On-Chain Audits**: Detecting and addressing vulnerabilities promptly. ### Integration of Off-Chain Inputs By bridging on-chain governance with off-chain data and processes, Governance 2.0 allows for more informed decision-making. Oracle systems can feed relevant external information into governance proposals, enhancing the quality and relevance of decisions. --- ## Governance 2.0 in the Avalanche Ecosystem Avalanche is at the forefront of implementing Governance 2.0 principles, leveraging its high-performance infrastructure and customizable L1s. Key developments include: ### Customizable L1 Governance Avalanche allows each L1 to adopt its own Governance 2.0 model, tailoring governance mechanisms to the specific needs of its community. This flexibility enables experimentation with innovative governance features and the adoption of best practices across the ecosystem. ### Advanced Voting Mechanisms The network is exploring and implementing advanced voting systems like: - **Conviction Voting**: Allowing participants to express the strength of their support. - **Holographic Consensus**: Balancing proposal visibility with decision-making efficiency. These mechanisms aim to improve the expressiveness and effectiveness of governance decisions. ### Interoperability and Cross-Chain Governance Governance 2.0 on Avalanche supports interoperability, enabling coordinated governance across multiple L1s and chains. This interconnectedness enhances collaboration and resource sharing, strengthening the overall ecosystem. 
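The time-locked contracts listed under the security measures above can be sketched as a queue that refuses execution before a delay has elapsed. This is a simplified illustration of the idea, not a production timelock:

```typescript
// Minimal timelock: actions are queued with an ETA and can only be
// executed once the delay has passed, giving the community time to
// review (or exit) before a governance decision takes effect.
class Timelock {
  private queue = new Map<string, number>(); // action id -> eta (unix seconds)
  private delaySeconds: number;

  constructor(delaySeconds: number) {
    this.delaySeconds = delaySeconds;
  }

  schedule(actionId: string, now: number): void {
    this.queue.set(actionId, now + this.delaySeconds);
  }

  execute(actionId: string, now: number): void {
    const eta = this.queue.get(actionId);
    if (eta === undefined) throw new Error("action not scheduled");
    if (now < eta) throw new Error("timelock not expired");
    this.queue.delete(actionId);
    // ...perform the queued action here...
  }
}
```

Real timelock contracts (such as those used with on-chain governors) add cancellation, per-action payloads, and grace periods on top of this core delay check.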
--- ## Benefits of Governance 2.0 ### Increased Engagement By providing meaningful incentives and more accessible governance tools, Governance 2.0 encourages higher levels of participation from a diverse range of stakeholders. This leads to decisions that better reflect the community's collective will. ### Adaptive Decision-Making The flexible nature of Governance 2.0 allows networks to adapt quickly to changing conditions. Whether responding to technological advancements, regulatory shifts, or evolving user needs, these governance models enable timely and effective responses. ### Strengthened Network Resilience Enhanced security measures and dynamic governance structures reduce vulnerabilities. By mitigating risks of centralization and governance attacks, Governance 2.0 contributes to a more robust and secure network. ### Alignment of Interests Integrating mechanisms that align individual incentives with the network's success fosters a cooperative environment. Participants are more likely to act in the network's best interest when their own success is directly tied to it. --- ## Challenges and Considerations While Governance 2.0 offers significant advancements, it also presents challenges: ### Complexity Management More sophisticated governance models can be complex, potentially creating barriers to understanding and participation. Simplifying user interfaces and providing educational resources are essential to ensure broad community engagement. ### Regulatory Compliance As governance models become more advanced, they may face increased regulatory scrutiny. Balancing compliance with the principles of decentralization requires careful design and ongoing dialogue with regulatory bodies. ### Scalability Implementing advanced governance features at scale demands efficient infrastructure. Avalanche's high-throughput capabilities position it well to address scalability concerns, but continuous optimization is necessary as the network grows. 
--- ## Future Directions The evolution of Governance 2.0 is an ongoing process. Future developments may include: ### Integration of Artificial Intelligence Utilizing AI to analyze proposals and predict outcomes could enhance decision-making processes, providing insights that inform more strategic governance actions. ### Cross-Network Governance As interoperability between different blockchain networks increases, Governance 2.0 models may facilitate governance that spans multiple platforms, promoting broader collaboration. ### Enhanced Privacy Features Balancing transparency with privacy, future governance models might incorporate zero-knowledge proofs or other cryptographic techniques to protect participant identities while ensuring accountability. --- ## Conclusion Governance 2.0 represents a significant advancement in how decentralized networks like Avalanche manage decision-making processes. By embracing innovation, enhancing inclusivity, and prioritizing adaptability, these governance models address the shortcomings of earlier systems and set the stage for a more resilient and dynamic blockchain ecosystem. As your Layer 1 network continues to grow and evolve, implementing Governance 2.0 principles will be crucial for maintaining its competitive edge and ensuring that it meets the needs of its diverse and expanding community. Engaging with these advanced governance models allows participants to directly influence the network's trajectory and contribute to its long-term success. In the following sections, we will explore specific case studies and practical implementations of Governance 2.0 within the Avalanche ecosystem, highlighting its impact on network governance and community engagement. 
# Getting Started with Interchain Token Transfer (/academy/interchain-token-transfer/02-getting-started/01-introduction)
---
title: Getting Started with Interchain Token Transfer
description: Learn how to use our Interchain Token Transfer toolbox to transfer assets across Avalanche chains.
updated: 2024-05-31
authors: [ashucoder9, 0xstt]
icon: Smile
---

# Getting Started with Interchain Token Transfer

In this section, we'll help you get started with our Interchain Token Transfer toolbox. We've made it easier than ever to transfer assets across Avalanche chains by providing a user-friendly interface that handles all the complexity for you.

## Prerequisites

Before you begin, make sure you have:

1. A modern web browser with Core Wallet installed
   - Download Core Wallet from [core.app](https://core.app)
   - Create or import your wallet
   - Add the following testnet chains to your wallet:
     - Fuji C-Chain
     - Echo
     - Dispatch
2. Test tokens for development
   - **Recommended:** Create a [Builder Hub account](https://build.avax.network/login) and connect your wallet to receive testnet tokens automatically
   - **Alternative:** Get test AVAX from external faucets like [core.app/tools/testnet-faucet](https://core.app/tools/testnet-faucet/?subnet=c&token=c) with code `avalanche-academy`
3. Basic understanding of how to use Core Wallet
   - Adding networks
   - Managing accounts
   - Approving transactions
4. An understanding of Avalanche networks and how to connect to them

This course focuses on using the existing testnet chains (Fuji C-Chain, Echo, and Dispatch). If you want to create your own L1 blockchain, please refer to the [Creating an L1](/academy/avalanche-fundamentals/04-creating-an-l1/01-creating-an-l1) course.

Since Interchain Token Transfer relies on Interchain Messaging (ICM), you must run your own relayer to enable cross-chain communication. Follow the [Running a Relayer](/academy/interchain-messaging/10-running-a-relayer/01-running-a-relayer) course to set up your relayer.
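If you prefer adding the testnet chains programmatically (for example in a viem/wagmi setup) rather than through Core's UI, the parameters look roughly like this. The Fuji values are the public defaults and the Echo values match the chain definition shown earlier in these docs; treat anything else as an assumption to verify against the official chain list:

```typescript
// Plain chain-parameter objects — the same fields Core Wallet asks for
// when adding a custom network.
const fuji = {
  id: 43113,
  name: "Avalanche Fuji C-Chain",
  nativeCurrency: { name: "Avalanche", symbol: "AVAX", decimals: 18 },
  rpcUrl: "https://api.avax-test.network/ext/bc/C/rpc",
};

const echo = {
  id: 173750,
  name: "Echo L1",
  nativeCurrency: { name: "Ech", symbol: "ECH", decimals: 18 },
  rpcUrl: "https://subnets.avax.network/echo/testnet/rpc",
};
// Dispatch follows the same pattern; take its chain ID and RPC URL
// from the official Avalanche testnet chain list.
```

These objects map directly onto viem's `defineChain` fields if you later script your setup instead of clicking through the wallet.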
## Using the Interchain Token Transfer Toolbox Our Interchain Token Transfer toolbox is a comprehensive web application that provides a complete suite of tools for cross-chain operations. Here's how to get started: 1. Visit the [Interchain Token Transfer Toolbox](https://toolbox.avax.network/interchain-transfer) 2. Connect your wallet: - Click the "Connect Wallet" button - Select Core Wallet - Approve the connection request 3. Explore the toolbox features: a. Deploy Contracts: - Deploy new ERC-20 tokens - Create bridge contracts - Set up token bridges b. Transfer Assets: - Select the source and destination chains (Fuji C-Chain, Echo, or Dispatch) - Choose the token and amount - Execute cross-chain transfers c. Test Transfers: - Use the test transfer feature to verify your setup - Monitor transfer status - View transaction history ## Features Our toolbox provides several features to help you with cross-chain operations: - Contract Deployment: - ERC-20 token deployment - Bridge contract creation - Token bridge setup - Asset Transfers: - Support for testnet chains (Fuji C-Chain, Echo, Dispatch) - ERC-20 token transfers - Native token transfers - Cross-chain token swaps - Testing and Monitoring: - Test transfer functionality - Transaction history - Real-time status updates - Cross-chain transaction tracking ## Next Steps Now that you know how to use the Interchain Token Transfer toolbox, you can: 1. Learn about different token types in the next section 2. Explore token bridging concepts 3. Try out different types of cross-chain transfers 4. Deploy your own tokens and bridges 5. Test your cross-chain setup Remember that our toolbox handles all the complex interactions with the Interchain Token Transfer protocol, so you can focus on your cross-chain operations without worrying about the technical details. 
# Tokens (/academy/interchain-token-transfer/03-tokens/01-tokens) --- title: Tokens description: Learn about tokens updated: 2024-05-31 authors: [martineckardt] icon: Book --- ## What You Will Learn In this section, you will go through the following topics: - **What is a token**: Learn how tokens serve to represent ownership. - **Types of tokens**: Learn about ERC20 tokens, Native tokens and Wrapped tokens. - **Transfers**: Learn the differences in transferring the different types of tokens between accounts or with smart contracts. # Native Tokens (/academy/interchain-token-transfer/03-tokens/02-native-tokens) --- title: Native Tokens description: Understand the native token role in a blockchain system updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- A native token in a blockchain running the Ethereum Virtual Machine (EVM) refers to the primary digital currency or cryptocurrency native to the EVM blockchain. Every EVM layer 1 chain has its own native token: - Ethereum: ETH - Avalanche C-Chain: AVAX - Dexalot: ALOT - many more... The native token serves as both a means of value transfer within the EVM network and as the gas token for executing transfers or smart contracts. Therefore, native tokens play a crucial role in the EVM chain by enabling participants to interact with and utilize the platform's decentralized features, serving as the foundational unit of value. # Transfer Native Tokens (/academy/interchain-token-transfer/03-tokens/03-transfer-native-tokens) --- title: Transfer Native Tokens description: Learn how to transfer native tokens using Core Wallet updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; In this exercise, you will learn how to transfer native tokens (AVAX) between accounts using Core Wallet. We'll demonstrate this using the Fuji testnet. ## Open Core Wallet First, make sure you have Core Wallet installed.
If you don't have Core Wallet installed already, click the Download Core Wallet button below. ## Switch to Fuji Testnet ## Get Test AVAX Before transferring tokens, ensure you have some test AVAX: **Option 1 (Recommended):** Create a [Builder Hub account](https://build.avax.network/login) and connect your wallet to receive testnet AVAX automatically on the C-Chain. **Option 2:** Use the external [Avalanche Faucet](https://core.app/tools/testnet-faucet/?subnet=c&token=c) with coupon code `avalanche-academy25` to claim AVAX tokens on the C-Chain of the Fuji testnet. ## Transfer AVAX To transfer AVAX to another address: 1. Click the "Send" button in Core Wallet 2. Enter the recipient's address 3. Enter the amount of AVAX to send 4. Review the transaction details 5. Click "Send" to confirm ## Verify the Transfer To verify the transfer was successful: 1. Check the transaction status in Core Wallet 2. View the transaction details on the [Fuji Explorer](https://testnet.snowtrace.io) 3. The recipient can check their balance in their Core Wallet Remember that you need to have enough AVAX in your wallet to cover both the transfer amount and the gas fees. Always double-check the recipient address before sending to avoid any loss of funds. Now that you know how to transfer native tokens, you can proceed to learn about transferring ERC-20 tokens in the next section. # Transfers and Smart Contract (/academy/interchain-token-transfer/03-tokens/05-transfers-in-smart-contracts) --- title: Transfers and Smart Contract description: Transfer native tokens to smart contracts updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Now that you know how to transfer native assets between accounts, let's see how to transfer those funds to the control of a smart contract. Payable smart contract functions are essential for this process, as they allow the contract to receive native tokens.
A function marked as `payable` in Solidity signifies that the function can accept and process incoming funds. This is particularly useful for contracts that need to handle funds, such as financial dApps, automated market makers (AMMs), or contracts that transfer funds from one chain to another, as Avalanche Interchain Token Transfer does. ## Payable functions The `payable` keyword in Solidity is used to designate functions that are capable of receiving native blockchain tokens. When a function is marked as payable, it allows the contract to accept incoming funds and execute specific logic based on the amount received. Without the payable modifier, any attempt to send native tokens to a contract would result in an error, preventing the transaction from being processed. A function defined as payable serves multiple purposes. First, it signals to users and other contracts that the function is intended to handle incoming payments. This adds a layer of transparency and ensures that the contract’s behavior is clear and predictable. Second, it allows the function to access special variables such as `msg.value`, which holds the amount of native token sent with the call. This can be used within the function to implement various logic, such as recording payments, issuing tokens in return, or triggering other contract behaviors. Here's an example of a simple payable function in Solidity: ```solidity pragma solidity ^0.8.0; contract PayableExample { uint public amountReceived = 0; function receiveNative() public payable { amountReceived += msg.value; } } ``` In this example, the `receiveNative` function is marked as `payable`, enabling it to accept the native token. The `msg.value` variable is used to add the amount of native token sent to the `amountReceived` state variable. This is a basic illustration, but it underscores how the `payable` keyword facilitates the handling of native token transfers within smart contracts.
### Create the working directory to store new contracts ```bash mkdir -p src/my-contracts ``` ### Copy the above example contract into `/src/my-contracts/PayableExample.sol` ### Deploy Contract with Payable Function ```bash forge create --rpc-url myblockchain --private-key $PK src/my-contracts/PayableExample.sol:PayableExample --broadcast ``` ### Save the Contract Address ```bash export PAYABLE_CONTRACT=0x... ``` ### Check `amountReceived` ```bash cast call --rpc-url myblockchain $PAYABLE_CONTRACT "amountReceived()(uint)" ``` ### Transfer Native Token to the Contract Now let's use `cast` to transfer funds to our PayableExample contract: ```bash cast send --rpc-url myblockchain --private-key $PK $PAYABLE_CONTRACT "receiveNative()" --value 100000 ``` ### Check `amountReceived` ```bash cast call --rpc-url myblockchain $PAYABLE_CONTRACT "amountReceived()(uint)" ``` Well done! You transferred native tokens to a smart contract. It's important to understand the distinction between native tokens and ERC-20 tokens. Native tokens, such as `AVAX` or `ETH`, are transferred directly to payable functions within the smart contract. This process is straightforward and involves sending the tokens to the contract's address, invoking the payable function. On the other hand, ERC-20 tokens require a different approach due to their standardized interface. We will cover that in the following sections. # ERC-20 Tokens (/academy/interchain-token-transfer/03-tokens/07-erc-20-tokens) --- title: ERC-20 Tokens description: Learn about the ERC-20 standard for tokens updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- There is only a single native token on a chain. However, to represent a wide range of assets on EVM chains, the ERC-20 token standard was developed. "ERC" stands for Ethereum Request for Comment, and "20" is the proposal identifier. ERC-20 tokens are fungible, meaning each token is identical and can be exchanged on a one-to-one basis.
These tokens are typically created and managed through smart contracts that adhere to the ERC-20 standard, which defines a set of rules and functions that ensure interoperability between different tokens and decentralized applications (dApps). At the base of an ERC-20 token smart contract, there is a simple mapping of addresses to numbers, which represent the amount of tokens an address holds (balance). ```solidity abstract contract ERC20 is Context, IERC20, IERC20Metadata, IERC20Errors { mapping(address account => uint256) private _balances; //... } ``` These addresses may belong to externally owned accounts (EOAs) or contract accounts. Both of these can hold and transfer ERC-20 tokens. ERC-20 tokens can represent various assets, from digital currencies to tokenized assets, and they play a crucial role in the crypto ecosystem by enabling the creation of decentralized applications with diverse functionalities, such as token sales, decentralized finance (DeFi) protocols, and more. ## Interface To maintain compatibility with dApps designed to work with the ERC20 standard, all ERC20 tokens implement the following interface: ```solidity interface IERC20 { function name() external view returns (string memory); function symbol() external view returns (string memory); function decimals() external view returns (uint8); function totalSupply() external view returns (uint256); function balanceOf(address _owner) external view returns (uint256 balance); function transfer(address _to, uint256 _value) external returns (bool success); function transferFrom(address _from, address _to, uint256 _value) external returns (bool success); function approve(address _spender, uint256 _value) external returns (bool success); function allowance(address _owner, address _spender) external view returns (uint256 remaining); } ``` Click [here](https://eips.ethereum.org/EIPS/eip-20) to review the ERC20 standard in full detail.
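To make the semantics of this interface concrete, here is a minimal Python sketch of the balance and allowance bookkeeping behind it. This is an illustrative model with hypothetical names, not the Solidity implementation:

```python
class ERC20Model:
    """Toy model of ERC-20 accounting: a balance mapping plus allowances."""

    def __init__(self, supply, owner):
        self.balances = {owner: supply}   # address -> balance
        self.allowances = {}              # (owner, spender) -> remaining amount

    def balance_of(self, who):
        return self.balances.get(who, 0)

    def transfer(self, sender, to, value):
        assert self.balance_of(sender) >= value, "insufficient balance"
        self.balances[sender] -= value
        self.balances[to] = self.balance_of(to) + value
        return True

    def approve(self, owner, spender, value):
        # The owner grants the spender the right to move up to `value` tokens.
        self.allowances[(owner, spender)] = value
        return True

    def allowance(self, owner, spender):
        return self.allowances.get((owner, spender), 0)

    def transfer_from(self, spender, frm, to, value):
        # The spender moves tokens out of `frm`, consuming the allowance.
        assert self.allowance(frm, spender) >= value, "allowance exceeded"
        self.allowances[(frm, spender)] -= value
        return self.transfer(frm, to, value)

token = ERC20Model(supply=100, owner="alice")
token.approve("alice", "bob", 30)
token.transfer_from("bob", "alice", "carol", 10)
print(token.balance_of("carol"))        # 10
print(token.allowance("alice", "bob"))  # 20
```

Note how `transfer_from` only succeeds within the limit previously granted via `approve`; this is the same two-step pattern used later when a smart contract pulls tokens on your behalf.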
## Transfer Tokens Transferring ERC-20 tokens between accounts involves calling the `transfer` function, specifying the recipient’s address and the amount to transfer. To facilitate more complex interactions, such as those involving smart contracts, the ERC-20 standard includes the `approve()` and `transferFrom()` functions. We will see how to use these so smart contracts can spend funds according to their code. ### `approve()` Since all balances are just a mapping of amounts to addresses, the owner of an address can approve another account as a spender (`_spender`) of their funds, up to a limit determined by the approved amount (`_value`). The spender will then be able to withdraw from the owner's account multiple times, up to that approved value. ### `allowance()` Returns the amount which `_spender` is still allowed to withdraw from `_owner`. ### `transferFrom()` Transfers `_value` tokens from address `_from` to address `_to`. # Deploy and Transfer an ERC-20 Token (/academy/interchain-token-transfer/03-tokens/08-transfer-an-erc-20-token) --- title: Deploy and Transfer an ERC-20 Token description: Transfer an ERC-20 token between accounts updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; ### Deploy ERC-20 Let's deploy a basic demo ERC20 token on the Fuji testnet using our toolbox. ### Add Token to Core Wallet After deploying your ERC-20 token: 1. Open Core Wallet 2. Go to the Tokens tab 3. Click "Manage" 4. Click "+ Add Custom Token" 5. Enter the token contract address from the deployment 6. The token symbol and decimals should be automatically detected 7. Click "Add Token" Make sure you're connected to the Fuji testnet when adding the token. The token will only be visible on the network where it was deployed. ### Transfer ERC-20 Token To transfer your ERC-20 token: 1. In Core Wallet, select your token from the token list 2. Click "Send" 3. Enter the recipient's address 4.
Enter the amount to transfer 5. Review the transaction details 6. Click "Send" to confirm Always double-check the recipient address before sending. ERC-20 transfers cannot be reversed once confirmed. ### Verify Transfer To verify the transfer was successful: 1. Check the transaction status in Core Wallet 2. View the transaction details on the [Fuji Explorer](https://testnet.snowtrace.io) 3. The recipient can add the token to their Core Wallet to see their balance If the recipient doesn't see the token in their wallet, they'll need to add it using the token contract address, just like you did in step 2. Now that you know how to deploy and transfer ERC-20 tokens, you can proceed to learn about token bridging in the next section. # ERC-20 and Smart Contracts (/academy/interchain-token-transfer/03-tokens/09-transfer-erc20-to-sc) --- title: ERC-20 and Smart Contracts description: Transfer an ERC-20 Token to a smart contracts updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; Transferring ERC-20 tokens to a smart contract involves a few steps, including setting an allowance and then using the transferFrom function to move the tokens. This process ensures that the smart contract can only withdraw the amount of tokens you've explicitly approved. First, let's look at the necessary code to achieve this. We'll use the same basic ERC-20 token contract that we used previously. 
## Create Smart Contract Receiving an ERC20 We need a smart contract that will receive the tokens: ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; import "@openzeppelin/contracts@4.8.1/token/ERC20/IERC20.sol"; contract TokenReceiver { IERC20 public token; constructor(address tokenAddress) { token = IERC20(tokenAddress); } function receiveTokens(address from, uint256 amount) public { require(token.transferFrom(from, address(this), amount), "Transfer failed"); } } ``` In this contract, the `receiveTokens` function allows the contract to receive tokens from a specified address. It uses the `transferFrom` function of the ERC-20 token standard. Copy this code into a new file `src/my-contracts/TokenReceiver.sol`. ### Deploy ERC20 Receiver Let's deploy this ERC20 receiver contract: ```bash forge create --rpc-url myblockchain --private-key $PK --broadcast --constructor-args $ERC20_CONTRACT_L1 src/my-contracts/TokenReceiver.sol:TokenReceiver ``` ``` [⠊] Compiling... No files changed, compilation skipped Deployer: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC Deployed to: 0x4Ac1d98D9cEF99EC6546dEd4Bd550b0b287aaD6D Transaction hash: 0x31b45c9df0a823254fd51863e217801da809cc77915a7b2901ea348c74aa0cfe ``` ### Save Receiver Address The address `0x4Ac1d98D9cEF99EC6546dEd4Bd550b0b287aaD6D` is your receiver contract address. ```bash export ERC20_RECEIVER_L1=0x... ``` ### Approve Token Expense Now, to send tokens to that contract, the receiver contract needs to be allowed to take funds on behalf of the user. Therefore, we need to approve our receiver contract as a spender on the TOK token contract.
```bash cast send $ERC20_CONTRACT_L1 --private-key $PK "approve(address,uint256)" $ERC20_RECEIVER_L1 20ether --rpc-url myblockchain ``` ### Transfer Tokens to Smart Contract Finally, let's transfer tokens to this contract: ```bash cast send $ERC20_RECEIVER_L1 --private-key $PK "receiveTokens(address,uint256)" $EWOQ 2ether --rpc-url myblockchain ``` ### Confirm Transfer ```bash cast call $ERC20_CONTRACT_L1 "balanceOf(address)(uint256)" $ERC20_RECEIVER_L1 --rpc-url myblockchain ``` # Wrapped Native Tokens (/academy/interchain-token-transfer/03-tokens/10-wrapped-native-tokens) --- title: Wrapped Native Tokens description: Represent native assets as ERC-20 tokens updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- Wrapped native tokens are blockchain assets that represent a native cryptocurrency (AVAX, ALOT, ETH, etc.) in a tokenized form that conforms to a specific token standard, typically ERC-20. This wrapping process involves locking the native cryptocurrency in a smart contract and minting an equivalent amount of the wrapped token. Wrapped tokens retain the value of the underlying native asset by backing those ERC20 representations 1:1 with the native token while gaining compatibility with the ERC-20 token standard. This compatibility is crucial for interoperability within the EVM ecosystem, enabling the use of wrapped tokens in decentralized applications (dApps), decentralized exchanges (DEXs), and smart contracts that require ERC-20 tokens, including the **Avalanche Interchain Token Transfer dApp**. The process of wrapping and unwrapping tokens ensures that users can seamlessly switch between the native asset and its wrapped counterpart, facilitating smooth integration across various DeFi protocols and enhancing liquidity and usability within the blockchain ecosystem.
Examples of Wrapped Tokens: - wAVAX - wETH - BTC.b (this one is not only wrapped but also bridged to the Avalanche ecosystem, represented as an ERC20 to gain compatibility with the Avalanche C-Chain ecosystem) # Create a Wrapped Native Token (/academy/interchain-token-transfer/03-tokens/11-create-a-wrapped-native-token) --- title: Create a Wrapped Native Token description: Wrap the native asset of your blockchain updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- Creating a wrapped token contract for NAT (myblockchain's native token) involves writing a smart contract that adheres to the ERC-20 standard. This allows NAT to be used in dApps without the need to implement special handling for native assets. Here’s a simple example of such a contract written in Solidity: ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; contract WrappedNAT { string public name = "Wrapped NAT"; string public symbol = "WNAT"; uint8 public decimals = 18; uint256 public totalSupply; mapping(address => uint256) private balances; mapping(address => mapping(address => uint256)) private allowances; receive() external payable { deposit(); } function deposit() public payable { balances[msg.sender] += msg.value; totalSupply += msg.value; } function withdraw(uint256 amount) public { require(balances[msg.sender] >= amount, "Insufficient balance"); balances[msg.sender] -= amount; totalSupply -= amount; payable(msg.sender).transfer(amount); } function balanceOf(address account) public view returns (uint256) { return balances[account]; } function transfer(address to, uint256 amount) public returns (bool) { require(balances[msg.sender] >= amount, "Insufficient balance"); balances[msg.sender] -= amount; balances[to] += amount; return true; } function approve(address spender, uint256 amount) public returns (bool) { allowances[msg.sender][spender] = amount; return true; } function allowance(address owner, address spender) public view returns (uint256) { return
allowances[owner][spender]; } function transferFrom(address from, address to, uint256 amount) public returns (bool) { require(balances[from] >= amount, "Insufficient balance"); require(allowances[from][msg.sender] >= amount, "Allowance exceeded"); balances[from] -= amount; allowances[from][msg.sender] -= amount; balances[to] += amount; return true; } } ``` # Token Bridging (/academy/interchain-token-transfer/04-token-bridging/01-token-bridging) --- title: Token Bridging description: Learn about asset transfer between chains updated: 2024-05-31 authors: [ashucoder9] icon: Book --- Asset bridging is a crucial concept in blockchain interoperability, enabling assets to be transferred across different blockchain networks. This process is essential for creating seamless experiences in multi-chain ecosystems. ## What You Will Learn In this section, you will go through the following topics: - **Bridging assets:** you will understand why bridging assets is important. - **Bridge safety:** learn about the most common hacks in bridges and how to prevent them. # Bridge Architecture (/academy/interchain-token-transfer/04-token-bridging/02-bridge-architecture) --- title: Bridge Architecture description: Get familiar with different bridge mechanisms updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- Asset bridging refers to the mechanism that allows assets, such as cryptocurrencies or tokens, to move from one blockchain to another. This is particularly important in ecosystems where multiple blockchains are used for different purposes, and users want to leverage assets across these chains. Bridging effectively extends the usability of assets beyond their native blockchain, facilitating greater liquidity and integration across various platforms. ## Bridge Mechanisms ### Lock & Mint Locking: The asset is locked in a smart contract on the source blockchain. 
For example, if you want to bridge AVAX from Avalanche's C-Chain to Ethereum, you would lock your AVAX in a smart contract on Avalanche. Minting: Simultaneously, a cross-chain message or event notifies Ethereum that an equivalent amount of a wrapped token (e.g., Wrapped AVAX) must be minted on the target blockchain and sent to the user’s address. ### Burn & Release Burning: When transferring assets back to the source blockchain, the wrapped tokens are burned or destroyed on the target blockchain. Releasing: The original assets are then released from the smart contract on the source blockchain back to the user's address. ### Custodians Some bridges use a custodian or a centralized party to manage the assets. This party locks the asset on one blockchain and releases it on another, relying on trust and security measures. ### Cross-Chain Communication Advanced bridges such as Avalanche Interchain Token Transfer utilize native cross-chain communication protocols to facilitate transactions between blockchains without requiring intermediaries. These protocols ensure that the asset's state and ownership are synchronized across different chains. ## Why Bridging - Enhanced Liquidity: Bridging increases the liquidity of assets by allowing them to be used across different DeFi platforms and blockchain networks. This enhances trading opportunities and financial activities. - Interoperability: It fosters interoperability between different blockchains, enabling users to access a broader range of services and applications. - Flexibility: Users can move assets to chains with lower fees, faster transaction times, or better functionalities, optimizing their experience and strategies.
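The lock-and-mint mechanism described above can be sketched as a toy Python model (hypothetical names, not a real bridge implementation). The key invariant is that the wrapped supply minted on the target chain always equals the collateral locked on the source chain:

```python
class LockAndMintBridge:
    """Toy model of a lock-and-mint bridge between two chains."""

    def __init__(self):
        self.locked = 0    # collateral held by the contract on the source chain
        self.wrapped = {}  # wrapped-token balances on the target chain

    def bridge_out(self, user, amount):
        # Lock on the source chain, mint the wrapped token on the target chain.
        self.locked += amount
        self.wrapped[user] = self.wrapped.get(user, 0) + amount

    def bridge_back(self, user, amount):
        # Burn the wrapped token on the target chain, release collateral.
        assert self.wrapped.get(user, 0) >= amount, "insufficient wrapped balance"
        self.wrapped[user] -= amount
        self.locked -= amount

bridge = LockAndMintBridge()
bridge.bridge_out("alice", 50)
bridge.bridge_back("alice", 20)
print(bridge.locked)                  # 30
print(sum(bridge.wrapped.values()))   # 30, matching the locked collateral
```

If this invariant is ever broken (for example, by minting without locking), the wrapped token is no longer fully backed, which is exactly the failure mode behind several real bridge exploits discussed later.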
# Use a Demo Bridge (/academy/interchain-token-transfer/04-token-bridging/03-use-a-demo-bridge) --- title: Use a Demo Bridge description: Interact with the ohmywarp bridge updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- This guide will walk you through the process of using a bridge between 2 Avalanche blockchains, providing a step-by-step approach to ensure a smooth and secure experience. Go to [ohmywarp.com](https://ohmywarp.com) and connect any web3 [wallet](https://core.app). Make sure your wallet has at least some AVAX on the Fuji Testnet. **Getting testnet AVAX:** - **Recommended:** Create a [Builder Hub account](https://build.avax.network/login) and connect your wallet to receive testnet AVAX automatically - **Alternative:** Use the [external faucet](https://core.app/tools/testnet-faucet/?subnet=c&token=c) with coupon code `avalanche-academy` Mint some `TLP` token on `C-Chain` in the Mint tab. This is an ERC20 deployed on Fuji's C-chain. Finally bridge some `TLP` to `Dispatch`. You can confirm the transfer in the Mint tab. # Bridge Hacks (/academy/interchain-token-transfer/04-token-bridging/04-bridge-hacks) --- title: Bridge Hacks description: Learn about the most common bridge hacks. updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- ## What are Bridge Hacks? Asset bridges are vital for blockchain interoperability but have been the target of significant security breaches. The complexities of bridging assets between different blockchains can create vulnerabilities that malicious actors exploit. Here’s an overview of some of the most common types of bridge hacks and security issues: ### Smart Contract Exploits Smart contract exploits involve vulnerabilities in the bridge's code that can be exploited by attackers to steal assets. Common issues include: **Reentrancy Attacks**: Exploiting a contract’s ability to call itself, allowing attackers to repeatedly withdraw funds before the contract’s state is updated. 
Example: The DAO hack in 2016 utilized reentrancy to drain millions of dollars of ETH from a smart contract. **Arithmetic Errors**: Bugs related to integer overflows or underflows that can lead to unintended behavior. Example: In 2020, the **Value DeFi** exploit used an arithmetic bug to drain $6 million from the protocol. **Logic Flaws**: Errors in the contract's logic that can be exploited to bypass security controls. Example: In the case of the bZx exploit, an attacker exploited a logic flaw to manipulate the price of assets and steal funds. ### Centralized Bridge Attacks Centralized bridges rely on a single custodian or entity to manage the assets. If the custodian is compromised, it can lead to: 1. **Theft of Assets**: Direct theft from the custodian's reserves if their security is breached. Example: The Poly Network hack in 2021 involved a vulnerability that allowed attackers to exploit the bridge's central control mechanisms and steal over $600 million. Although most of the stolen funds were later returned, it highlighted significant risks in centralized bridges. 2. **Mismanagement or Fraud**: The custodian could mismanage funds or engage in fraudulent activities. Example: Various cases of mismanagement or fraud in smaller, less established bridges where the custodian's integrity is in question. ### Governance Attacks Governance attacks target the mechanisms by which decisions are made in decentralized bridges: **Vote Manipulation**: Attacking the governance process to make changes that benefit the attacker. Example: In some DeFi protocols, attackers have manipulated governance votes to gain control or access to assets. **51% Attacks**: Gaining control over a majority of the governance or network nodes to disrupt or exploit the bridge. Example: While more common in blockchains, similar attacks can occur in decentralized bridges with weak governance structures.
### Cross-Chain Communication Vulnerabilities Cross-chain communication vulnerabilities involve weaknesses in the protocols or mechanisms that facilitate communication between different blockchains: **Data Manipulation**: Exploiting the data transmitted between chains to falsify transactions or asset states. Example: In 2022, the Wormhole bridge was exploited due to a vulnerability in its data verification process, leading to a loss of over $320 million. **Consensus Issues**: Problems with how different chains agree on the state of assets or transactions can lead to discrepancies and exploitation. Example: Misalignment between chains can lead to double-spending or other issues if not properly managed. ### Phishing and Social Engineering Phishing and social engineering attacks target users or administrators rather than the bridge’s technical infrastructure: **Phishing**: Attacking users to steal their private keys or credentials to access funds. Example: Users may be tricked into entering their private keys on fraudulent websites pretending to be bridge interfaces. **Social Engineering**: Manipulating individuals involved in the bridge’s operations to gain unauthorized access or influence. Example: Administrators may be tricked into giving away critical access or credentials. ## Mitigation Strategies 1. **Audits and Code Reviews**: Regularly audit smart contracts and bridge code to identify and fix vulnerabilities before they can be exploited. 2. **Security Best Practices**: Implement security best practices, including proper error handling, using well-tested libraries, and following coding standards. 3. **Decentralization**: Where possible, use decentralized bridging solutions to reduce the risk associated with centralized custodians. 4. **Governance Safeguards**: Strengthen governance mechanisms and ensure a robust process for decision-making and voting. 5. 
**User Education**: Educate users about phishing and social engineering threats to reduce the risk of these types of attacks. 6. **Insurance and Compensation**: Use insurance mechanisms or compensation funds to mitigate the impact of potential losses from breaches. By understanding and addressing these common bridge hacks, developers and users can better protect their assets and improve the overall security of cross-chain bridging solutions. # Avalanche Interchain Token Transfer (/academy/interchain-token-transfer/05-avalanche-interchain-token-transfer/01-avalanche-interchain-token-transfer) --- title: Avalanche Interchain Token Transfer description: Learn how to transfer assets between Avalanche L1s updated: 2024-05-31 authors: [ashucoder9] icon: Book --- The Avalanche Interchain Token Transfer is an application that allows users to transfer tokens between Avalanche L1s. The bridge is a set of smart contracts that are deployed across multiple Avalanche L1s, and leverages [Interchain Messaging](https://github.com/ava-labs/teleporter) for cross-chain communication. ## What You Will Learn In this chapter you will learn: - **Bridge Design:** How the bridge is designed on a high level. - **File Structure:** The structure of the bridge contracts and their dependencies. - **Token Home:** What the token home is and how it works. - **Token Remote:** What the token remote is and how it works. # Interchain Token Transfer Design (/academy/interchain-token-transfer/05-avalanche-interchain-token-transfer/02-bridge-design) --- title: Interchain Token Transfer Design description: Get familiar with the ICTT design updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- Each token transferrer instance consists of one "home" contract and at least one but possibly many "remote" contracts. Each home contract instance manages one asset to be transferred out to `TokenRemote` instances. The home contract lives on the Avalanche L1 where the asset to be transferred exists.
A transfer involves locking the asset as collateral on the home Avalanche L1 and minting a representation of the asset on the remote Avalanche L1. The remote contracts, each with a single specified home contract, live on other Avalanche L1s that want to import the asset transferred by their specified home. The token transferrers are designed to be permissionless: anyone can register compatible `TokenRemote` instances to transfer tokens from the `TokenHome` instance to that new `TokenRemote` instance. The home contract keeps track of token balances transferred to each `TokenRemote` instance and handles returning the original tokens to the user when assets are transferred back to the `TokenHome` instance. `TokenRemote` instances are registered with their home contract via an Interchain Messaging message upon creation. Home contract instances specify the asset to be transferred as either an ERC20 token or the native token, and they allow for transferring the token to any registered `TokenRemote` instances. The token representation on the remote chain can also either be an ERC20 or native token, allowing users to have any combination of ERC20 and native tokens between home and remote chains: - `ERC20` -> `ERC20` - `ERC20` -> `Native` - `Native` -> `ERC20` - `Native` -> `Native` The remote tokens are designed to have compatibility with the token transferrer on the home chain by default, and they allow custom logic to be implemented in addition. For example, developers can inherit and extend the `ERC20TokenRemote` contract to add additional functionality, such as custom minting, burning, or transfer logic.
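As a rough mental model (hypothetical names, not the actual ICTT contracts), the home contract's per-remote accounting, including the multi-hop case, can be sketched in Python like this:

```python
class TokenHomeModel:
    """Toy model of a TokenHome tracking collateral per registered remote."""

    def __init__(self):
        self.remotes = set()  # registered TokenRemote instances
        self.balances = {}    # remote -> amount currently transferred to it

    def register_remote(self, remote):
        # In ICTT, registration happens via an Interchain Messaging message.
        self.remotes.add(remote)

    def send(self, remote, amount):
        # Lock tokens on the home chain and credit the destination remote.
        assert remote in self.remotes, "remote not registered"
        self.balances[remote] = self.balances.get(remote, 0) + amount

    def receive_back(self, remote, amount):
        # Tokens returned from a remote: debit it and unlock the collateral.
        assert self.balances.get(remote, 0) >= amount, "insufficient balance"
        self.balances[remote] -= amount

    def multi_hop(self, from_remote, to_remote, amount):
        # Collateral is already locked; only the per-remote accounting moves.
        self.receive_back(from_remote, amount)
        self.send(to_remote, amount)

home = TokenHomeModel()
home.register_remote("echo")
home.register_remote("dispatch")
home.send("echo", 100)
home.multi_hop("echo", "dispatch", 40)
print(home.balances["echo"], home.balances["dispatch"])  # 60 40
```

Note that a transfer to an unregistered remote fails in this model, mirroring how the real home contract refuses to send tokens to remotes that have not completed registration.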
# File Structure (/academy/interchain-token-transfer/05-avalanche-interchain-token-transfer/03-file-structure) --- title: File Structure description: Understand the components of the ICTT structure updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- ## Contract Structure The ERC20 and native token transferrers built on top of Interchain Messaging are composed of interfaces and abstract contracts that make them extendable to new implementations in the future. ### `ITokenTransferrer` Interface that defines the events token transfer contract implementations must emit. Also defines the message types and formats of messages between all implementations. ### `IERC20TokenTransferrer` and `INativeTokenTransferrer` Interfaces that define the external functions for interacting with token transfer contract implementations of each type. ERC20 and native token transferrer interfaces vary from each other in that the native token transferrer functions are `payable` and do not take an explicit amount parameter (it is implied by `msg.value`), while the ERC20 token transferrer functions are not `payable` and require the explicit amount parameter. Otherwise, they include the same functions. # Token Home (/academy/interchain-token-transfer/05-avalanche-interchain-token-transfer/04-token-home) --- title: Token Home description: Learn about the Token Home, the asset origin component of the ICTT updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- ## `TokenHome` An abstract implementation of `ITokenTransferrer` for a token transfer contract on the "home" chain with the asset to be transferred. Each `TokenHome` instance supports transferring exactly one token type (ERC20 or native) on its chain to arbitrarily many "remote" instances on other chains. It handles locking tokens to be sent to `TokenRemote` instances, as well as receiving token transfer messages to either redeem tokens it holds as collateral (i.e. unlock) or route them to other `TokenRemote` instances (i.e. 
"multi-hop"). In the case of a multi-hop transfer, the `TokenHome` already has the collateral locked from when the tokens were originally transferred to the first `TokenRemote` instance, so it simply updates the accounting of the transferred balances to each respective `TokenRemote` instance. Remote contracts must first be registered with a `TokenHome` instance before the home contract will allow for sending tokens to them. This is to prevent tokens from being transferred to invalid remote addresses. Anyone is able to deploy and register remote contracts, which may have been modified from this repository. It is the responsibility of the users of the home contract to independently evaluate each remote for its security and correctness. ### `ERC20TokenHome` A concrete implementation of `TokenHome` and `IERC20TokenTransferrer` that handles the locking and releasing of an ERC20 token. ### `NativeTokenHome` A concrete implementation of `TokenHome` and `INativeTokenTransferrer` that handles the locking and release of the native EVM asset. # Token Remote (/academy/interchain-token-transfer/05-avalanche-interchain-token-transfer/05-token-remote) --- title: Token Remote description: Learn about the Token Remote, the asset destination component of the ICTT updated: 2024-05-31 authors: [ashucoder9] icon: BookOpen --- ## `TokenRemote` An abstract implementation of `ITokenTransferrer` for a token transfer contract on a "remote" chain that receives transferred assets from a specific `TokenHome` instance. Each `TokenRemote` instance has a single `TokenHome` instance that it receives token transfers from to mint tokens. It also handles sending messages (and correspondingly burning tokens) to route tokens back to other chains (either its `TokenHome`, or other `TokenRemote` instances). Once deployed, a `TokenRemote` instance must be registered with its specified `TokenHome` contract. 
This is done by calling `registerWithHome` on the remote contract, which will send an Interchain Messaging message to the home contract with the information to register. All messages sent by `TokenRemote` instances are sent to the specified `TokenHome` contract, whether they are to redeem the collateral from the `TokenHome` instance or route the tokens to another `TokenRemote` instance. Routing tokens from one `TokenRemote` instance to another is referred to as a "multi-hop", where the tokens are first sent back to their `TokenHome` contract to update its accounting, and then automatically routed on to their intended destination `TokenRemote` instance. `TokenRemote` contracts allow for scaling token amounts, which should be used when the remote asset has a higher or lower denomination than the home asset, such as allowing an ERC20 home asset with a denomination of 6 to be used as the native EVM asset on a remote chain (with a denomination of 18). ### `ERC20TokenRemote` A concrete implementation of `TokenRemote`, `IERC20TokenTransferrer`, and `IERC20` that handles the minting and burning of an ERC20 asset. Note that the `ERC20TokenRemote` contract is an ERC20 implementation itself, which is why it takes the `tokenName`, `tokenSymbol`, and `tokenDecimals` in its constructor. All of the ERC20 interface implementations are inherited from the standard OpenZeppelin ERC20 implementation and can be overridden in other implementations if desired. ### `NativeTokenRemote` A concrete implementation of `TokenRemote`, `INativeTokenTransferrer`, and `IWrappedNativeToken` that handles the minting and burning of the native EVM asset on its chain using the native minter precompile. Deployments of this contract must be permitted to mint native coins in the chain's configuration. Note that the `NativeTokenRemote` is also an implementation of `IWrappedNativeToken` itself, which is why the `nativeAssetSymbol` must be provided in its constructor.
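The amount scaling can be sketched as follows. This is an illustrative TypeScript model; the multiplier value and direction flag are assumptions modeled on the described behavior, not the contracts' actual storage layout:

```typescript
// Illustrative sketch of TokenRemote amount scaling (names are
// assumptions for illustration, not the real contract fields).
function scaleHomeToRemote(amount: bigint, multiplier: bigint, multiplyOnRemote: boolean): bigint {
  return multiplyOnRemote ? amount * multiplier : amount / multiplier;
}

function scaleRemoteToHome(amount: bigint, multiplier: bigint, multiplyOnRemote: boolean): bigint {
  // inverse direction; integer division truncates, so remote amounts
  // that are not whole multiples of the multiplier lose dust going back
  return multiplyOnRemote ? amount / multiplier : amount * multiplier;
}

// 1.5 tokens of a 6-decimal home ERC20 used as an 18-decimal native
// asset on the remote chain: the multiplier bridges the 12-decimal gap
const multiplier = 10n ** 12n; // 10^(18 - 6)
const remoteAmount = scaleHomeToRemote(1_500_000n, multiplier, true);
// remoteAmount is 1.5 * 10^18 base units on the remote chain
```

Scaling up on the remote and back down on the home keeps the home collateral accounting in the home asset's own denomination.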
`NativeTokenRemote` instances always have a denomination of 18, which is the denomination of the native asset of EVM chains. We will cover Native Token Remote in depth later in the course. # ERC-20 to ERC-20 Bridge (/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/01-erc-20-to-erc-20-bridge) --- title: ERC-20 to ERC-20 Bridge description: Transfer ERC-20 tokens between Avalanche L1s updated: 2024-05-31 authors: [ashucoder9] icon: Book --- import { Step, Steps } from 'fumadocs-ui/components/steps'; import Link from 'next/link'; import { buttonVariants } from '@/components/ui/button.tsx' ## Transfer an ERC-20 Token → Echo as an ERC-20 Token This chapter will show you how to send an ERC-20 Token from C-Chain to Echo using Interchain Messaging and Toolbox. This guide is conducted on the Fuji testnet, where we'll bridge tokens from C-Chain to Echo. **All Avalanche Interchain Token Transfer contracts and interfaces implemented in this chapter are maintained in the [`avalanche-interchain-token-transfer`](https://github.com/ava-labs/avalanche-interchain-token-transfer/tree/main/contracts/src) repository.** Deep dives on each template interface can be found [here](https://github.com/ava-labs/avalanche-interchain-token-transfer/blob/main/contracts/README.md). _Disclaimer: The avalanche-interchain-token-transfer contracts used in this tutorial are under active development and are not yet intended for production deployments. Use at your own risk._ ## What we will do 1. Deploy an ERC-20 Contract on C-Chain 2. Deploy the Interchain Token Transferrer Contracts on C-Chain and Echo 3. Register the Remote Token contract with the Home Transferrer contract 4.
Add Collateral and Start Sending Tokens # Deploy an ERC-20 (/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/02-deploy-erc-20-token) --- title: Deploy an ERC-20 description: Deploy the asset to transfer updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; ### Deploy an ERC-20 Token We've already deployed an ERC-20 token in the [Transfer an ERC-20 Token](/academy/interchain-token-transfer/03-tokens/08-transfer-an-erc-20-token) section. If you've completed that step, you can use the same token for this bridge. If you haven't deployed a token yet, make sure you're connected to the Fuji testnet since we're covering this guide from Fuji to Echo. You can deploy the original ERC-20 token below: Make sure to save the deployed token address as you'll need it for the next steps. You can find it in the deployment confirmation or by checking your Core Wallet's token list. ### Add Token to Core Wallet If you haven't already added the token to your Core Wallet: 1. Go to the Tokens tab 2. Click "Manage" 3. Click "+ Add Custom Token" 4. Enter the token contract address 5. The token symbol and decimals should be automatically detected 6. Click "Add Token" Remember that you need to be connected to the Fuji testnet to see and interact with your token. ### Verify Token Balance To verify your token balance: 1. In Core Wallet, select your token from the token list 2. The balance should be displayed 3. You can also check the token balance on the [Fuji Explorer](https://testnet.snowtrace.io) by entering your address If you deployed the example token, you should see a balance of 10,000,000,000 tokens in your wallet. 
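If you would rather verify the raw `balanceOf()` value from the explorer, remember that it is denominated in base units. Below is a minimal helper, assuming the example token's 18 decimals (viem's `formatUnits` provides the same conversion):

```typescript
// Minimal formatUnits-style helper to interpret a raw balanceOf() value
// (viem's formatUnits does the same; shown here to make the math explicit)
function formatUnits(raw: bigint, decimals: number): string {
  const base = 10n ** BigInt(decimals);
  const whole = raw / base;
  // pad the remainder to full width, then drop trailing zeros
  const frac = (raw % base).toString().padStart(decimals, "0").replace(/0+$/, "");
  return frac ? `${whole}.${frac}` : whole.toString();
}

// the example token mints 10,000,000,000 tokens, assuming 18 decimals
const raw = 10_000_000_000n * 10n ** 18n;
formatUnits(raw, 18); // "10000000000"
```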
# Deploy a Home Contract (/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/03-deploy-home) --- title: Deploy a Home Contract description: Deploy the Token Home on C-Chain updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; We will deploy two Avalanche Interchain Token Transfer contracts. One on the source chain (which is C-Chain in our case) and another on the destination chain (Echo in our case). ### Deploy ERC20Home Contract Since we're covering an ERC20 -> ERC20 bridge, make sure to set the Transferrer type to "ERC20" and input the ERC20 token address you previously deployed in the token address field. Then deploy the ERC20Home contract on the Fuji testnet using our toolbox: Make sure you have: 1. Deployed your ERC-20 token (from [Deploy ERC-20 Token](/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/02-deploy-erc-20-token)) 2. Enough test AVAX for gas fees ### Save the ERC-20 Home Address After deployment, you'll need to save the contract address for future steps. You can find it in the deployment confirmation in the toolbox. Keep this address handy as you'll need it for the next steps in the bridging process. # Deploy a Remote Contract (/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/04-deploy-remote) --- title: Deploy a Remote Contract description: Deploy the Token Remote on your own blockchain updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; To ensure the wrapped token is bridged into the destination chain (in this case, Echo) you'll need to deploy a _remote_ contract that implements the `IERC20Bridge` interface and inherits from `TeleporterTokenRemote`. In order for the bridged tokens to have all the normal functionality of a locally deployed ERC20 token, this remote contract must also inherit the properties of a standard `ERC20` contract.
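Conceptually, a remote deployment binds together identifiers for the home chain with the ERC-20 metadata of the wrapped representation. The field names below are illustrative assumptions, not the actual constructor argument names (the toolbox fills the real ones in for you):

```typescript
// Illustrative shape of what an ERC20TokenRemote deployment wires
// together. Field names are assumptions for illustration; consult the
// ICTT repository for the authoritative constructor arguments.
interface TokenRemoteSettings {
  teleporterRegistryAddress: string; // ICM registry on the remote chain
  tokenHomeBlockchainID: string;     // blockchain ID of the home chain
  tokenHomeAddress: string;          // the ERC20Home deployed previously
  tokenHomeDecimals: number;         // decimals of the home ERC-20
}

// ERC20TokenRemote is itself an ERC-20, so it also needs token metadata
interface WrappedTokenMetadata {
  tokenName: string;
  tokenSymbol: string;
  tokenDecimals: number;
}

// hypothetical placeholder values, for shape only
const settings: TokenRemoteSettings = {
  teleporterRegistryAddress: "0x0000000000000000000000000000000000000000",
  tokenHomeBlockchainID: "0x0000000000000000000000000000000000000000000000000000000000000000",
  tokenHomeAddress: "0x0000000000000000000000000000000000000000",
  tokenHomeDecimals: 18,
};

const metadata: WrappedTokenMetadata = {
  tokenName: "Example Token",
  tokenSymbol: "EXT",
  tokenDecimals: 18,
};
```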
### Get Test ECH Before deploying the remote contract, ensure you have some test tokens on Echo: - **Recommended:** Create a [Builder Hub account](https://build.avax.network/login) and connect your wallet to receive testnet tokens automatically on Echo - **Alternative:** Use the [external faucet](https://core.app/tools/testnet-faucet/?subnet=echo&token=echo) to claim tokens ### Deploy the Remote Contract Now we'll deploy the ERC20TokenRemote contract to the Echo chain. Use our toolbox: Make sure you have: 1. Deployed your ERC-20 token (from [Deploy ERC-20 Token](/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/02-deploy-erc-20-token)) 2. Deployed your ERC20Home contract (from [Deploy Home](/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/03-deploy-home)) 3. Enough test ECH for gas fees ### Save the Remote Contract Address After deployment, you'll need to save the contract address for future steps. You can find it in the deployment confirmation in the toolbox. Keep this address handy as you'll need it for the next steps in the bridging process. # Register Remote Bridge (/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/05-register-remote) --- title: Register Remote Bridge description: Register the remote bridge with the home bridge updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; After deploying both the home and remote bridge contracts, you need to register the remote bridge with the home bridge. This registration process informs the Home Bridge about your destination blockchain and bridge settings. ### Register Remote Bridge To register your remote bridge with the home bridge, use our toolbox: Make sure you have: 1. Deployed your ERC-20 token (from [Deploy ERC-20 Token](/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/02-deploy-erc-20-token)) 2.
Deployed your ERC20Home contract (from [Deploy Home](/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/03-deploy-home)) 3. Deployed your ERC20TokenRemote contract (from [Deploy Remote](/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/04-deploy-remote)) 4. Enough test ECH for gas fees ### Verify Registration After registration, you can verify the process was successful by looking for the registration event in the transaction logs. You can find the registration confirmation in the toolbox. The registration process is a one-time setup that establishes the connection between your home and remote bridges. Once completed, you can proceed with token transfers. # Transfer Tokens (/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/06-transfer-tokens) --- title: Transfer Tokens description: Transfer an ERC20 from C-Chain to your own blockchain updated: 2024-05-31 authors: [ashucoder9] icon: Terminal --- import { Step, Steps } from 'fumadocs-ui/components/steps'; ### Transfer the Token Cross-chain Now that all the bridge contracts have been deployed and registered, you can transfer tokens between chains using our toolbox: Make sure you have: 1. Deployed your ERC-20 token (from [Deploy ERC-20 Token](/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/02-deploy-erc-20-token)) 2. Deployed your ERC20Home contract (from [Deploy Home](/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/03-deploy-home)) 3. Deployed your ERC20TokenRemote contract (from [Deploy Remote](/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/04-deploy-remote)) 4. Registered your remote bridge with the home bridge (from [Register Remote](/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/05-register-remote)) 5. Enough test AVAX for gas fees ### Verify the Transfer After initiating the transfer, you can verify it was successful by: 1. Checking the transaction status in the toolbox 2.
Checking the transaction status in Core Wallet 3. Viewing the transaction details on the [Fuji Explorer](https://testnet.snowtrace.io) 4. Checking your token balance in Core Wallet on the destination chain The transfer process may take a few moments to complete as it involves cross-chain communication. You can track the progress through the transaction hash. # Integrate ICTT with Core (/academy/interchain-token-transfer/06-erc-20-to-erc-20-bridge/07-avacloud-and-core-bridge) --- title: Integrate ICTT with Core description: Learn how to integrate ICTT bridges into the Core Bridge through AvaCloud. updated: 2024-05-31 icon: Book authors: [owenwahlgren] --- ## Integrate Interchain Token Transfers (ICTT) Into Core ICTT bridges deployed through [**AvaCloud**](https://avacloud.io/) will automatically integrate into the [**Core Bridge**](https://core.app/en/bridge). This ensures that any bridges created through AvaCloud are available immediately and do not need extra review. However, **ICTT bridges** deployed outside of AvaCloud (by third-party developers or other methods) will need to be submitted for manual review. Developers will need to provide: 1. **Token Bridge Contract Address(es)**: The bridge contract(s) on the L1. 2. **Layer 1 Information**: Name and other key details of the associated L1 blockchain. The Core team will review this information to ensure the bridge meets security and compliance standards. This review preserves network reliability while still allowing flexibility for third-party deployments. ### [Community Submission Form](https://forms.gle/jdcKcYWu26CsY6jRA)