Scaling Bandwidth with Near-Optimal Data Transfer

This post outlines the Autonomys Network’s bandwidth scaling designs for near-optimal data transfer, and briefly explains how the Autonomys Labs Research team evaluated and selected these approaches.

Scaling Computation & Bandwidth

In blockchain design, sharding is essential for achieving two critical scaling goals:

  1. Computation: Autonomys tackles computation scaling through the use of domains and domain operators. Domains are similar to Ethereum Layer-2s, while domain operators act as decentralized sequencers. Implementing decentralized sequencing from the very beginning was an integral part of our network design philosophy.
  2. Bandwidth: Autonomys addresses bandwidth scaling by leveraging verifiable erasure coding — a cutting-edge method that ensures data transfer at near-optimal efficiency. Below is a brief explanation of how it works and why it matters.

Scaling Bandwidth for Large-System Data Transmission

Imagine a blockchain network with 1,000,000 nodes, of which only 20% are reliable — staying connected and up-to-date, maintaining liveness, and contributing to the system. To manage data efficiently in this context, Autonomys applies erasure coding to domain bundles. This involves three steps, illustrated in the code sketch that follows the list:

  1. Encoding the Data: The original data is transformed into slightly more than 500 coded chunks.
  2. Distribution: These coded chunks are distributed across nodes, one chunk per node.
  3. Recovery: Any 100 coded chunks are sufficient to reconstruct the original data (as the size of the original data is equal to the total size of 100 coded chunks).
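
To make the three steps concrete, here is a minimal, self-contained sketch of a Reed-Solomon-style erasure code over a prime field: the bundle is treated as 100 field elements, encoded into 505 polynomial evaluations, and any 100 of those evaluations reconstruct the original by interpolation. The field, chunk counts, and polynomial construction are illustrative assumptions for exposition, not the encoding used in the Autonomys implementation; the "verifiable" part of verifiable erasure coding, typically achieved with cryptographic commitments to the coded chunks, is also omitted here.

```python
import random

P = 2**61 - 1   # prime modulus; data symbols and chunks are elements of GF(P)
K = 100         # any K coded chunks reconstruct the bundle
N = 505         # slightly more than 5 * K chunks are produced

def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first) mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def encode(data):
    """Step 1: interpret the K data symbols as coefficients of a polynomial of
    degree < K and evaluate it at N distinct points; each (point, value) pair
    is one coded chunk."""
    chunks = []
    for x in range(1, N + 1):
        y = 0
        for c in reversed(data):        # Horner evaluation mod P
            y = (y * x + c) % P
        chunks.append((x, y))
    return chunks

def decode(chunks):
    """Step 3: recover the K data symbols from any K coded chunks by Lagrange
    interpolation of the polynomial's coefficients mod P."""
    assert len(chunks) == K
    xs = [x for x, _ in chunks]
    coeffs = [0] * K
    for i, (xi, yi) in enumerate(chunks):
        basis = [1]                     # numerator of the i-th Lagrange basis
        denom = 1
        for j, xj in enumerate(xs):
            if j != i:
                basis = poly_mul(basis, [(-xj) % P, 1])
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P   # division via modular inverse
        for d, c in enumerate(basis):
            coeffs[d] = (coeffs[d] + c * scale) % P
    return coeffs

# Round trip: encode a bundle, keep only 100 randomly chosen chunks
# (step 2's distribution means different nodes hold different chunks),
# and reconstruct the original from the survivors.
bundle = [random.randrange(P) for _ in range(K)]
survivors = random.sample(encode(bundle), K)
assert decode(survivors) == bundle
print("reconstructed the bundle from", K, "of", N, "chunks")
```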

By distributing more than 500 chunks across the network, we ensure that at least 100 reliable nodes (drawn from the 20% that stay online) each receive a chunk and can collectively recover the bundle. If fewer than 500 chunks are distributed, fewer than 100 reliable nodes are likely to receive one, leaving too few chunks to reconstruct the bundle. This approach minimizes data transfer while maintaining robustness, achieving a near-optimal solution.
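
The argument above is a counting argument: with 20% of nodes reliable, roughly one in five distributed chunks lands on a reliable node, so the number of chunks must exceed 500 for at least 100 reliable recipients to be expected. The small script below, which models each recipient as independently reliable with probability 0.2 (a reasonable approximation when chunks go to distinct nodes drawn from a pool of 1,000,000), shows how the recovery probability climbs toward 1 as the number of distributed chunks rises above 500. The specific chunk counts are illustrative, not protocol parameters.

```python
from math import comb

RELIABLE = 0.20   # assumed fraction of reliable nodes
K = 100           # chunks needed to reconstruct a bundle

def recovery_probability(n_chunks, p=RELIABLE, k=K):
    """P(at least k of n_chunks land on reliable nodes), treating each
    recipient as independently reliable with probability p (binomial model)."""
    missed = sum(comb(n_chunks, i) * p**i * (1 - p)**(n_chunks - i)
                 for i in range(k))     # P(fewer than k reliable recipients)
    return 1 - missed

for n in (505, 525, 550, 600, 700):
    print(f"{n:>4} chunks distributed -> recovery probability {recovery_probability(n):.4f}")
```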

Autonomys’ Erasure Coding vs. Legacy Sharding

Comparing Autonomys’ erasure coding design with existing bandwidth scaling methods helps explain its significance.

In approaches without erasure coding, 1,000,000 nodes might be divided into 200 shards — each with 5,000 nodes — handling different bundles. However, this would require data transfer at a scale 1,000x greater than our design, as each shard would have ~1,000 reliable nodes, despite only one being needed to retain a bundle. Increasing the number of shards (e.g., to 20,000) might appear to offer a solution, but practical constraints, including dynamic shard assignment to mitigate adaptive adversaries, make this unfeasible. Most existing blockchains therefore cap shards at ~200. Even with erasure coding, dividing 1,000,000 nodes into 200 shards leads to data transfer levels that are still 1,000x higher than Autonomys’ design, as each shard has ~1,000 reliable nodes, and only one is needed to retain a coded chunk.
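
One way to read the 1,000x figure is as a redundancy ratio: how many reliable nodes end up holding a given piece of data versus how many are actually needed to keep it recoverable. The sketch below restates that accounting with the example numbers above; these are back-of-the-envelope figures for illustration, not protocol measurements.

```python
TOTAL_NODES = 1_000_000
RELIABLE_FRACTION = 0.20
SHARDS = 200
NODES_PER_SHARD = TOTAL_NODES // SHARDS                         # 5,000 nodes per shard
RELIABLE_PER_SHARD = int(NODES_PER_SHARD * RELIABLE_FRACTION)   # ~1,000 reliable nodes

# Legacy sharding, no erasure coding: every reliable node in the shard ends up
# holding the full bundle, although a single one would suffice to retain it.
legacy_redundancy = RELIABLE_PER_SHARD / 1

# 200 shards with erasure coding: each coded chunk is still replicated across a
# whole shard, so ~1,000 reliable nodes hold a chunk that one node could retain.
sharded_ec_redundancy = RELIABLE_PER_SHARD / 1

# Autonomys design: ~100 reliable nodes receive distinct chunks, and all ~100 of
# those chunks are needed to reconstruct the bundle, so no copies are wasted.
autonomys_redundancy = 100 / 100

print(f"legacy sharding      : {legacy_redundancy:.0f}x redundancy")
print(f"200 shards + erasure : {sharded_ec_redundancy:.0f}x redundancy")
print(f"Autonomys design     : {autonomys_redundancy:.0f}x redundancy")
```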

Autonomys’ near-optimal data transfer design — where the number of shards is unbounded — delivers throughput up to 1,000x higher than traditional scaling approaches. For every new bundle, a new shard is generated, consisting of more than 500 honest nodes. We eliminate the need for dynamic shard assignment by allowing for overlapping shards, and rely on mining and farming mechanisms to counter adaptive adversaries.

Conclusion

Autonomys’ innovative approach to scaling bandwidth through verifiable erasure coding represents a paradigm shift in blockchain technology. By combining minimal data transfer with robust recovery guarantees, Autonomys is setting a new standard for throughput and efficiency. Our design doesn’t just look good on paper — it’s built for the demands of the AI3.0 future.

Stay tuned as we continue to push the boundaries of what’s possible in web3 x AI scalability.

About Autonomys

The Autonomys Network — the foundation layer for AI3.0 — is a hyper-scalable decentralized AI (deAI) infrastructure stack encompassing high-throughput permanent distributed storage, data availability and access, and modular execution. Our deAI ecosystem provides all the essential components to build and deploy secure super dApps (AI-powered dApps) and on-chain agents, equipping them with advanced AI capabilities for dynamic and autonomous functionality.

X | LinkedIn | Discord | Telegram | Blog | Docs | GitHub | Forum | YouTube