Cerberus [ /ˈsɜːrbərəs/ ] is a family of deterministic, Byzantine Fault Tolerant (BFT) consensus protocols that use state provisioning to enable global consensus between shard groups on the correct ordering of software events.



Whitepaper: https://assets.website-files.com/6053f7fca5bf627283b582c2/608811e3f5d21f235392fee1_Cerberus-Whitepaper-v1.01.pdf
Peer Review: https://escholarship.org/uc/item/6h427354
Sybil Protection: https://radixdlt.notion.site/fc3306406d144989a939225006134cd9
State Model: https://radixdlt.notion.site/Sharding-6b6dc10c1f83403d8dfb0559dc925eb9
Fault Tolerance: Byzantine Fault Tolerant
Execution Environment: https://radixdlt.notion.site/Radix-Engine-1cd101cd52264df9806d63da555d5828 (v2)
Validator Node Cap: n/a


The Cerberus protocol was first described in a March 2020 paper by Florian Cäsar, Dan Hughes, Josh Primero and Stephen Thornton, and has been designed specifically for Radix’s multi-shard architecture. Cerberus represents the sixth iteration of the core technology underlying Radix. It addresses the "Weak Atom" problem that existed in the previous version of Radix's protocol, Tempo.

An evaluation of Cerberus was published in June 2023 in the Journal of Systems Research under the title ‘Cerberus: Minimalistic Multi-shard Byzantine-resilient Transaction Processing.’ The authors’ analysis highlighted the ability of Cerberus to minimize coordination costs for multi-shard consensus.

RDX Works, the core developers of the Radix network, have signaled that a sharded version of Cerberus will be implemented in the Xi’an mainnet upgrade.


Cerberus is a consensus protocol optimized for Radix's multi-shard architecture. It aims to enable scalability, security, and decentralization.

Cerberus partitions transaction processing and state management into groups called shards. Each shard runs a local Byzantine Fault Tolerant (BFT) consensus algorithm to order transactions and maintain local state. This sharding technique is necessary to overcome the scalability limitations of traditional fully replicated networks in which every node must process every transaction. However, multi-shard transactions that span across shards require atomic commitment protocols to maintain consistency.
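The partitioning described above can be sketched by deterministically mapping each state object to a shard. The names and the shard count below are hypothetical, purely for illustration; they are not Radix's actual shard-space layout:

```python
import hashlib

NUM_SHARDS = 4  # hypothetical shard count, for illustration only

def shard_for(object_id: str) -> int:
    """Deterministically map a state object to a shard by hashing its ID."""
    digest = hashlib.sha256(object_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def shards_touched(object_ids) -> set:
    """A transaction whose inputs map to more than one shard is a
    multi-shard transaction and needs atomic commitment across shards."""
    return {shard_for(oid) for oid in object_ids}
```

Because the mapping is a pure function of the object ID, every node agrees on which shard holds a given object without any coordination.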

A key innovation of Cerberus is its cross-shard communication mechanism, which coordinates multi-shard transactions with low overhead. For this, Cerberus uses UTXO-based sharding: in the UTXO model, data must be destroyed when it is modified and recreated in a new state. This prevents double spending across shards and avoids the need for global transaction ordering.

By leveraging UTXO transaction semantics and transaction atomicity, Cerberus minimizes the coordination required for multi-shard commitment, making cross-shard ordering unnecessary. Cerberus can also process UTXO transactions concurrently within the same shard without explicit locking or blocking, and coordinates multi-shard transactions through an efficient inter-shard communication method between disjoint validator sets.

These properties allow Cerberus to reduce communication and computation costs compared to existing protocols such as AHL, ByShard, Caper, Chainspace, RingBFT, and SharPer. Parallel processing across shards enables linear scalability as the number of shards increases.
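The destroy-on-modify semantics of the UTXO model can be illustrated with a toy object store. This is a sketch, not the Radix Engine's actual substate model:

```python
class UTXOStore:
    """Toy single-shard store: objects are immutable, so a modification
    destroys the old object and creates a successor. (Illustrative only.)"""

    def __init__(self):
        self.live = {}          # object_id -> value
        self.destroyed = set()  # IDs that can never be spent again

    def create(self, object_id, value):
        if object_id in self.live or object_id in self.destroyed:
            raise ValueError("object IDs are never reused")
        self.live[object_id] = value

    def spend(self, object_id):
        """Consume an input: it moves irrevocably from live to destroyed,
        so a second spend of the same object is detected as a conflict."""
        if object_id in self.destroyed:
            raise ValueError("double spend: object already destroyed")
        value = self.live.pop(object_id)
        self.destroyed.add(object_id)
        return value

store = UTXOStore()
store.create("coin:1", 10)
value = store.spend("coin:1")   # modifying = destroying the old state...
store.create("coin:2", value)   # ...and creating a successor object
```

Because a destroyed object can never become live again, a shard can reject a conflicting spend locally, without agreeing on a global transaction order with other shards.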

At the execution layer, applications can be split by the Radix Engine into independent components compatible with Cerberus sharding.


Cerberus is an active multi-decree (three-phase commit) consensus mechanism, requiring a supermajority for state commitment and capable of processing transactions in parallel. By comparison, Bitcoin is a single-decree mechanism that trades deterministic finality for probabilistic finality in exchange for a simpler communication protocol.

Cerberus consists of three variants:

  1. Core: The base variant, ideal for general use.
  2. Optimistic: Optimized for scenarios with high trust.
  3. Resilient or Pessimistic: Designed for use in less trustworthy environments.

1. Core Cerberus

Core Cerberus (CCerberus) is designed to take maximum advantage of the UTXO transaction properties to minimize coordination costs. CCerberus operates under the assumption that all clients are well-behaved and will never approve conflicting transactions.

The key insight in CCerberus is that the order of transaction execution does not matter for correctness as long as the atomicity of each transaction is ensured. CCerberus enforces transaction atomicity using a three-step approach:

  1. Local inputs: Each shard checks if it has the necessary local inputs for a transaction.
    1. When a client sends a transaction request to a shard, the shard first checks if it has all the necessary input objects specified in the transaction.
    2. The shard uses a consensus protocol to create an ordered log of transactions. Consensus is necessary to get consistent results on object availability across replicas.
    3. If the shard has all the specified input objects available, it adds the transaction to its local ordered log with a "pledge" to use those inputs if needed.
  2. Cross-shard exchange: The shards exchange availability of local inputs via a single communication round.
    1. The shard that received the client's transaction request broadcasts the transaction with its local input object status to all other shards involved in the transaction.
    2. Cross-shard broadcast uses a "cluster sending" primitive that prevents equivocation and ensures reliable delivery.
    3. The shard waits to receive input object status from all other relevant shards before proceeding.
  3. Decide outcome: If all shards have indicated input availability, the transaction is committed; otherwise it is aborted.
    1. If all shards reported input object availability, the transaction is committed by constructing the output objects; otherwise it is aborted.
    2. Shards defer the actual commit/abort until the consensus sequence order allows it.
    3. A single consensus step within each shard is sufficient to totally order the commit or abort of the transaction.
    4. Clients are notified of the commit or abort outcome from enough replicas to withstand Byzantine faults.
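The three steps above can be sketched as a single-process simulation. The names `Shard`, `pledge`, and `run_transaction` are hypothetical, and a plain Python list stands in for the BFT-ordered log that each shard's disjoint validator set would actually maintain:

```python
class Shard:
    """Toy shard. In CCerberus each shard is a disjoint validator set
    running its own BFT consensus; a list stands in for its ordered log."""

    def __init__(self, shard_id, live_objects):
        self.shard_id = shard_id
        self.live = set(live_objects)  # input objects this shard holds
        self.log = []                  # locally ordered transaction log

    def pledge(self, tx_id, local_inputs):
        """Step 1: check local input availability and pledge the inputs."""
        ok = all(obj in self.live for obj in local_inputs)
        if ok:
            self.log.append((tx_id, "pledged", tuple(local_inputs)))
        return ok

    def finalize(self, tx_id, commit, local_inputs):
        """Step 3 (per shard): on commit, destroy the pledged inputs;
        on abort, leave them untouched so they can be pledged again."""
        if commit:
            self.live -= set(local_inputs)
        self.log.append((tx_id, "commit" if commit else "abort"))

def run_transaction(tx_id, inputs_by_shard, shards):
    # Step 2 stand-in: one exchange round gathers every shard's pledge.
    pledges = {sid: shards[sid].pledge(tx_id, ins)
               for sid, ins in inputs_by_shard.items()}
    commit = all(pledges.values())   # step 3: unanimous pledges => commit
    for sid, ins in inputs_by_shard.items():
        shards[sid].finalize(tx_id, commit, ins)
    return commit
```

For example, a transaction whose inputs span two shards commits only while both inputs are still live; replaying a spent input causes the whole transaction to abort on every involved shard, which is exactly the atomicity the three steps guarantee.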