Achieving >300k TPS
March 7, 2026

Why the Protocol Makes Performance Possible

Before any engineering optimizations, Pod's protocol itself sets the foundation. Pod does not perform strong consensus (or total global ordering) before processing transactions. Validators can process transactions in different orders and still converge to the same state—a property known as eventual consistency. This means there is no waiting for coordination across the network before execution begins. The protocol creates the conditions for high performance; the engineering is what actually gets you there.

The Actor Model: Concurrency Without Locks

Pod's execution pipeline is built around the actor model. Each stage of processing—signature verification, execution, disk persistence, signing, and network broadcast—runs as an independent actor with its own thread and its own memory. Actors communicate by passing messages through channels rather than sharing memory. The state itself is maintained entirely in memory, local to the execution actor's thread. When two sequential transactions touch the same account, that account's data is already in cache. There are no database queries, no disk reads during normal execution.

The practical consequence is that Pod avoids mutexes and locks entirely. Channel sends still involve atomic operations, but these are significantly cheaper than acquiring a lock. Because each actor owns its memory, data stays close to the CPU cache for that thread, reducing cache misses. And because each stage is independent, the pipeline can process a new batch the moment a stage finishes—without waiting for downstream stages to complete.

Actors can also be scaled horizontally. If a particular stage becomes a bottleneck, more workers can be added for that actor without restructuring the rest of the system.
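The staged pipeline described above can be sketched with threads and channels from the standard library. This is a minimal illustration, not Pod's implementation: the stage names, handlers, and transaction fields are hypothetical, and only two of the five stages are shown. Each stage owns its state (here, `balances` lives only in the execute stage's thread) and hands batches downstream through a queue.

```python
import queue
import threading

def make_stage(handler, inbox, outbox):
    """Run `handler` on each batch from `inbox`; forward results to `outbox`."""
    def loop():
        while True:
            batch = inbox.get()
            if batch is None:              # shutdown sentinel
                if outbox is not None:
                    outbox.put(None)
                return
            result = handler(batch)
            if outbox is not None:
                outbox.put(result)
    t = threading.Thread(target=loop)
    t.start()
    return t

# Hypothetical stage handlers; state is owned by a single thread, never shared.
balances = {}                              # touched only by the execute stage

def verify(batch):
    return [tx for tx in batch if tx["sig_ok"]]

def execute(batch):
    for tx in batch:
        balances[tx["to"]] = balances.get(tx["to"], 0) + tx["amount"]
    return batch

q_verify, q_execute, q_done = queue.Queue(), queue.Queue(), queue.Queue()
stages = [
    make_stage(verify, q_verify, q_execute),
    make_stage(execute, q_execute, q_done),
]

q_verify.put([
    {"to": "alice", "amount": 5, "sig_ok": True},
    {"to": "bob", "amount": 3, "sig_ok": False},   # dropped at verify
])
q_verify.put(None)                         # shut the pipeline down
for t in stages:
    t.join()
```

No lock is ever taken: the only synchronization is the queue handoff between stages, which is exactly the property the actor model buys.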

Batching: Amortizing the Costly Operations

One of the most impactful optimizations in Pod is batching at every stage of the pipeline. When transactions arrive from the network, they are grouped into batches and sent through the pipeline together. This reduces the number of channel sends between actors, which directly reduces the number of synchronization operations—the primary bottleneck in multithreaded systems.

Batching has its largest effect at the signing stage and when writing to the append log (see below). Signing a transaction is expensive. Hashing is not. Rather than signing each transaction individually, Pod hashes each attestation, then hashes those hashes together and signs once over the entire batch. Verifying the batch is equally efficient for every other node in the network. This benefit compounds across the validator set: every node that receives a vote batch spends less time on verification than it would processing individual signatures.
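The hash-then-sign-once pattern can be sketched as follows. This is a simplified stand-in, not Pod's actual scheme: an HMAC plays the role of the expensive signature, and the two-level hashing is shown in its simplest form (a flat hash of hashes rather than any particular tree structure).

```python
import hashlib
import hmac

SIGNING_KEY = b"validator-secret"          # stand-in for a real signing key

def sign(digest: bytes) -> bytes:
    # HMAC stands in for the expensive signature; the point is it runs once.
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def sign_batch(attestations):
    # Hash each attestation individually (cheap) ...
    leaf_hashes = [hashlib.sha256(a).digest() for a in attestations]
    # ... then hash the hashes together and sign once over the combined digest.
    combined = hashlib.sha256(b"".join(leaf_hashes)).digest()
    return leaf_hashes, sign(combined)

def verify_batch(leaf_hashes, signature) -> bool:
    combined = hashlib.sha256(b"".join(leaf_hashes)).digest()
    return hmac.compare_digest(signature, sign(combined))

leaves, sig = sign_batch([b"att-1", b"att-2", b"att-3"])
```

However many attestations are in the batch, there is exactly one signing operation on the sender and one verification on each receiver; only the cheap hashing scales with batch size.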

The Append Log: Fast Persistence in the Hot Path

Rather than writing to a database, Pod uses an append log for persistence. The append log is a binary file that stays open on disk. When a batch of transactions is processed, the inputs of the system (time, transaction data, vote data, etc.) are appended to the end of the file. Nothing is updated or overwritten.

This approach has two advantages. First, append operations are fast and straightforward—multiple workers can write using multiple files, potentially spread across different disks. Second, since Pod stores the inputs to the system rather than derived state, the log is fully deterministic. If a node restarts, it replays the log in order and reconstructs the exact same state without needing a separate database.
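The log-and-replay idea can be sketched in a few lines. The record layout here is invented for illustration (a fixed-width binary record of time, account, and amount); the real log format is Pod's own. What the sketch preserves is the two properties above: writes are pure appends, and state is a deterministic function of the logged inputs.

```python
import os
import struct
import tempfile

LOG_PATH = os.path.join(tempfile.mkdtemp(), "inputs.log")   # hypothetical path

# Hypothetical fixed-width record: timestamp, 8-byte account, signed amount.
RECORD = struct.Struct(">Q8sq")

def append_input(time: int, account: bytes, amount: int) -> None:
    """Append one system input to the end of the log; never overwrite."""
    with open(LOG_PATH, "ab") as f:
        f.write(RECORD.pack(time, account.ljust(8, b"\0"), amount))

def replay() -> dict:
    """Rebuild state from scratch by re-applying every logged input in order."""
    state = {}
    with open(LOG_PATH, "rb") as f:
        while chunk := f.read(RECORD.size):
            _, account, amount = RECORD.unpack(chunk)
            account = account.rstrip(b"\0")
            state[account] = state.get(account, 0) + amount
    return state

append_input(1, b"alice", 5)
append_input(2, b"alice", 2)
append_input(3, b"bob", 1)
```

Because only inputs are stored, a restarted node needs no database snapshot: replaying the same log always produces the same state.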

UDP Networking

Pod sends votes and other messages over UDP rather than TCP. With UDP, Pod can pack data tightly into a single packet sized to fit within the MTU, avoiding fragmentation entirely, and then send it without expecting an acknowledgment. Data is serialized as raw bytes using a custom, minimal encoding—no JSON, no external serialization libraries. The format is designed so that encoding and decoding are trivial operations.
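A minimal fixed-layout encoding of this kind might look like the following. The field layout (message type, transaction hash, timestamp) and the 1472-byte payload limit are illustrative assumptions, not Pod's wire format; the point is that packing and unpacking are a handful of byte copies with zero framing overhead.

```python
import struct

MTU_PAYLOAD = 1472          # typical Ethernet MTU minus IP/UDP headers (assumed)

# Hypothetical vote layout: message type, 32-byte tx hash, timestamp.
VOTE = struct.Struct(">B32sQ")

def encode_votes(votes) -> bytes:
    """Pack votes tightly into one datagram; refuse to exceed the MTU."""
    payload = b"".join(VOTE.pack(1, tx_hash, ts) for tx_hash, ts in votes)
    assert len(payload) <= MTU_PAYLOAD, "batch too large for one packet"
    return payload

def decode_votes(payload: bytes):
    """Decoding is a linear scan over fixed-width records—no parsing."""
    return [VOTE.unpack_from(payload, off)
            for off in range(0, len(payload), VOTE.size)]

votes = [(bytes([i]) * 32, 1000 + i) for i in range(3)]
wire = encode_votes(votes)
```

Each record costs exactly `VOTE.size` bytes on the wire, so the sender knows ahead of time how many votes fit in one unfragmented packet.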

On the receive side, Pod pulls multiple messages from the network at once via kernel functions. The packets are placed directly into pre-allocated buffers and handed straight to decoding, without any extra copies. A separate worker handles decoding while the receiver immediately starts pulling the next batch. This zero-copy, zero-allocation design keeps packets moving out of the kernel buffer as fast as possible, reducing the risk of the kernel dropping packets under load.
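The pre-allocated-buffer part of this design can be demonstrated over loopback with `socket.recv_into`, which has the kernel copy each datagram straight into a buffer the application already owns. This sketch is a simplification: Python's stdlib has no batched receive equivalent to `recvmmsg`, so packets are pulled one at a time here, and the hand-off to a separate decoding worker is omitted.

```python
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(2)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Pre-allocated, MTU-sized buffers: no per-packet allocation on the hot path.
buffers = [bytearray(1472) for _ in range(4)]

tx.sendto(b"vote-batch-1", rx.getsockname())
tx.sendto(b"vote-batch-2", rx.getsockname())

received = []
for buf in buffers[:2]:
    n = rx.recv_into(buf)          # kernel fills our buffer directly
    received.append(bytes(buf[:n]))  # a decoder would read the bytes in place

tx.close()
rx.close()
```

Draining the socket into ready buffers as quickly as possible is what keeps the kernel's receive queue from filling up and dropping datagrams under load.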

None of this is accidental. Every decision — from protocol design to how packets leave the kernel — removes a specific bottleneck. The result is a system where performance isn't bolted on. It's built in.