Section III: Ethereum Scaling and Layer 2 Solutions
The Rollup Revolution
Recall from the EVM section that every full node must process every transaction. This design choice, essential for decentralization and security, constrains throughput to what a typical node on consumer hardware can verify and propagate within a 12-second slot. The result is on the order of a few dozen simple transactions per second, far too slow for mainstream adoption. A single popular application can congest the entire network, sending gas fees soaring to hundreds of dollars per transaction during periods of extreme demand.
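The arithmetic behind that throughput ceiling can be sketched directly. The figures below are illustrative assumptions: a 15 million gas target per block (the limit is roughly double that), 21,000 gas for a simple ETH transfer, and one block per 12-second slot. Real blocks mix in far heavier contract calls, so observed throughput sits well below this pure-transfer ceiling.

```python
# Back-of-envelope L1 throughput estimate.
# Assumed figures: 15M gas target per block, 21,000 gas per simple
# ETH transfer, one block per 12-second slot.
GAS_TARGET_PER_BLOCK = 15_000_000
GAS_PER_SIMPLE_TRANSFER = 21_000
SLOT_SECONDS = 12

transfers_per_block = GAS_TARGET_PER_BLOCK // GAS_PER_SIMPLE_TRANSFER
tps_ceiling = transfers_per_block / SLOT_SECONDS

# Pure-transfer ceiling: ~714 transfers per block, ~60 TPS sustained.
# Typical mixed blocks average several times more gas per transaction,
# pushing real-world throughput down into the dozens.
```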
The solution cannot be to simply "make blocks bigger" or "process transactions faster." Raising gas limits or shortening block times would increase bandwidth, CPU, and storage requirements, quietly pushing slower nodes off the network. This would concentrate validation in the hands of fewer, more powerful operators and undermine decentralization. Ethereum's core developers therefore prioritize keeping node requirements low enough that anyone with reasonably affordable consumer hardware and a decent internet connection can participate in securing the network.
Rollups address this constraint by moving most computation off Layer 1 while anchoring security to Ethereum. Transactions execute on a separate Layer 2 chain that runs much faster and cheaper because it doesn't require every Ethereum node to re-run every step. The rollup then posts compressed transaction data (and, depending on the design, proofs or fraud-detection mechanisms) back to Layer 1, which provides data availability, dispute resolution, and final settlement.
This security inheritance only works fully when data availability lives on Ethereum itself. For a rollup to be truly secure, anyone must be able to reconstruct its state from data posted to Layer 1. If transaction data disappears or becomes unavailable, users can't prove they own their funds or challenge invalid state transitions. Rollups that use external data availability (called validiums, since they validate transactions but store data elsewhere) sacrifice this guarantee and require additional trust assumptions.
A common criticism of the rollup scaling approach is that L2s extract value from Ethereum by launching their own tokens, pulling investor attention and capital away from ETH. The concern breaks down into two main issues. First, users end up speculating on L2 tokens rather than ETH itself. Second, revenue from sequencers (the entities that order and batch transactions on L2s) and transaction fees gets captured at the rollup level instead of flowing back to Ethereum's base layer.
However, rollups that post their data to Ethereum still generate L1 fees and contribute to ETH's deflationary burn mechanism, especially as L2 usage grows. The choice of gas token matters here: some rollups denominate user gas in a native L2 token, others in ETH, but in all cases sequencers ultimately need to acquire ETH to pay for L1 data availability. This forces part of the system's revenue back into ETH demand. Additionally, factors like sequencer decentralization and how tightly a rollup's economics couple to Ethereum's settlement layer all determine whether value flows back to ETH holders or gets mostly captured at the L2 level.
The rollup ecosystem has evolved into two primary approaches, each making different compromises:
Optimistic Rollups: Trust but Verify
Optimistic rollups, exemplified by Arbitrum and Optimism, embrace an "innocent until proven guilty" philosophy. They optimistically assume all transactions are valid and immediately post new state updates to Layer 1. This assumption allows for fast execution and low costs, but it comes with an important caveat: a challenge period of roughly seven days during which anyone can submit a fraud proof if they detect invalid transactions.
This security model balances speed against finality. While users enjoy fast, cheap transactions on the rollup itself, withdrawing funds back to mainnet requires patience. The seven-day waiting period ensures that any fraudulent activity can be detected and reversed, but it means that optimistic rollups aren't ideal for users who need immediate access to their funds on Layer 1.
However, the market has responded to this friction with third-party fast withdrawal services. Liquidity providers like Hop Protocol and Across Protocol will front users their funds on Layer 1 immediately, charging a fee for the convenience. These providers assume the risk during the challenge period. If a fraud proof invalidates the transaction, they bear the loss. Users who need speed pay a premium; those willing to wait can use the canonical bridge and avoid the extra fee.
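A liquidity provider's fee has to cover the cost of its capital being locked for the challenge window, plus a loading for fraud risk and gas overhead. The sketch below illustrates that pricing logic; every number in it (capital cost rate, risk premium, gas overhead) is an invented assumption, not a real protocol's fee schedule.

```python
# Illustrative pricing of a fast withdrawal by a liquidity provider.
# All figures below are assumptions chosen for the example.
WITHDRAWAL_AMOUNT_ETH = 10.0
CHALLENGE_PERIOD_DAYS = 7
ANNUAL_CAPITAL_COST = 0.10      # assumed opportunity cost of locked capital
FRAUD_RISK_PREMIUM = 0.0005     # assumed per-withdrawal risk loading
FIXED_GAS_OVERHEAD_ETH = 0.002  # assumed L1 gas to front and later reclaim funds

# Capital is locked for the 7-day challenge window, so charge a
# pro-rata share of the annual capital cost, plus risk and gas.
capital_cost = WITHDRAWAL_AMOUNT_ETH * ANNUAL_CAPITAL_COST * CHALLENGE_PERIOD_DAYS / 365
fee = capital_cost + WITHDRAWAL_AMOUNT_ETH * FRAUD_RISK_PREMIUM + FIXED_GAS_OVERHEAD_ETH
user_receives = WITHDRAWAL_AMOUNT_ETH - fee
```

Under these assumptions the fee comes out to a fraction of a percent of the withdrawal, which is why "pay a small premium for speed" is an attractive trade for most users.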
ZK Rollups: Mathematical Certainty
ZK rollups, including Starknet, zkSync, and Scroll, take an entirely different approach. Instead of assuming validity and waiting for challenges, they use validity proofs (advanced cryptographic techniques that mathematically prove the correctness of every batch of transactions). These rollups first commit transaction data to Layer 1, then submit a proof that validates the entire batch.
These proofs allow a rollup to demonstrate that thousands of transactions were processed correctly without requiring Layer 1 to re-execute them. The proof provides strong cryptographic certainty about the validity of an entire batch (though, like all cryptography, this relies on certain mathematical assumptions being sound).
Different ZK rollups use different proof systems, each with distinct properties. Scroll uses pure SNARKs, generating tiny proofs of just a few hundred bytes that minimize L1 costs, but requiring a trusted setup where initial parameters must be securely generated and destroyed, like destroying the mold after casting a master key so nobody can secretly mint more keys later. Starknet uses STARKs, producing much larger proofs of hundreds of kilobytes, but offering stronger security properties: no trusted setup, transparency, and better resistance to potential future quantum computers. zkSync takes a hybrid approach, generating STARK proofs internally for security, then wrapping them in a SNARK for cost-efficient on-chain verification. This still requires a trusted setup for the SNARK wrapper.
The advantage over optimistic rollups is compelling. ZK rollups avoid the week-long withdrawal delays that plague optimistic systems. Once a validity proof is verified on Layer 1, users can access their funds without any challenge period (though they still wait for proof generation and verification, which typically takes minutes to hours depending on system load). This certainty comes at a cost, however: the cryptographic machinery required to generate these proofs is more complex and computationally intensive than optimistic approaches.
Additional Rollup Considerations
Beyond the core differences between optimistic and ZK approaches, several other dimensions matter when evaluating rollups.
In practice, most rollups currently rely on centralized sequencers to achieve the fast confirmations users expect. Unlike Ethereum mainnet, which distributes block proposal across thousands of validators, these rollups use a single entity to order transactions and produce blocks. While this represents a temporary engineering choice rather than a permanent design, it introduces potential censorship risks and single points of failure. Leading rollups are actively developing solutions to eliminate sequencer centralization while preserving performance. Shared sequencing networks distribute ordering responsibility across multiple parties, creating redundancy without sacrificing speed. Sequencer rotation systems periodically change which entity handles transaction ordering, preventing long-term control by any single party. Inclusion lists require sequencers to include certain transactions within specified timeframes, making censorship more difficult. Preconfirmations allow sequencers to make soft commitments about transaction inclusion before formal consensus, improving user experience while maintaining reversion options through slashing mechanisms and dispute windows.
Proof systems continue to evolve in maturity and coverage. Many ZK-rollups still operate with "training wheels" (additional security mechanisms that can pause or override the system during early phases while the technology matures). Optimistic rollups depend on robust fault proof systems that are still being refined and battle-tested. Fee structures combine L2 execution costs with L1 data availability and inclusion fees. Additionally, rollups operate in different data availability modes. True rollups post all data to Ethereum, while validiums use external data availability or hybrid approaches that trade cost savings against stronger trust requirements.
Not all rollups prioritize decentralization equally. Some projects deliberately embrace centralized architectures to achieve consumer-app-like responsiveness. MegaETH, for example, uses a single active sequencer and 10-millisecond “miniblocks” to target millisecond-level latency and on the order of 100,000 transactions per second. This design accepts risks like single points of failure and potential censorship in exchange for high speed. Such approaches reveal the inherent tensions in blockchain design: decentralization, security, and performance exist in constant competition, with different applications requiring different balances.
Solving the Data Availability Challenge
With rollups defined, the dominant cost driver becomes data availability. Before March 2024, rollups had to store their data permanently in Ethereum's expensive execution layer, and data availability costs accounted for 80–95% of total rollup fees.
EIP-4844, implemented in the Dencun upgrade, fundamentally changed these economics by introducing blob-carrying transactions. Blobs are large packets of data (about 128 KB each) with their own fee market; they live temporarily on Ethereum's consensus layer (roughly 18 days) before being automatically pruned, establishing a separate, much cheaper data market designed specifically for rollups.
The system maintains security through KZG commitments, which are cryptographic fingerprints that uniquely identify each blob's contents. Imagine rollups renting billboard space on mainnet: they paste a huge poster (the blob) that stays up for roughly 18 days, then the city takes it down. The city keeps only a sealed, signed thumbnail that uniquely commits to the poster (the KZG commitment). Later, anyone can verify a specific square of that poster with a tiny receipt (a proof) without the city storing the full poster forever.
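The "sealed thumbnail and tiny receipt" idea can be sketched in code. To be clear about the simplification: EIP-4844 actually uses KZG polynomial commitments over the BLS12-381 curve, which have special algebraic properties. The Merkle-tree stand-in below only illustrates the shape of the interaction (commit to a blob once, later verify any single chunk against the small commitment with a short proof), not KZG itself.

```python
# Simplified stand-in for blob commitments. The real system uses KZG
# polynomial commitments; this Merkle sketch only shows the idea of
# checking one chunk of a blob against a small, fixed-size commitment.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    """The 'sealed thumbnail': one 32-byte commitment to all chunks."""
    layer = [h(c) for c in chunks]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate last node on odd layers
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_proof(chunks, index):
    """The 'tiny receipt': sibling hashes along one chunk's path."""
    layer = [h(c) for c in chunks]
    proof = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        sibling = index ^ 1
        proof.append((layer[sibling], sibling < index))
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return proof

def verify_chunk(root, chunk, proof):
    """Check one chunk against the commitment without the full blob."""
    node = h(chunk)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

blob = [f"chunk-{i}".encode() for i in range(8)]  # toy 8-chunk blob
commitment = merkle_root(blob)
proof = merkle_proof(blob, 5)
assert verify_chunk(commitment, blob[5], proof)
```

The key property carries over to the real system: the verifier keeps only the commitment, and each verification needs a proof logarithmic (here) or constant-size (KZG) in the blob, never the blob itself.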
Through this design, Ethereum created two separate fee markets: blob space operates with its own base fee mechanism (similar to regular gas pricing), while normal transaction fees continue unchanged. With Pectra, EIP-7691 raised blob limits (target 3→6, max 6→9 per block) and made blob fees fall more aggressively when blobs are underused, further reducing costs for rollups while keeping prices responsive without overshooting.
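The blob fee market's pricing rule can be shown concretely. EIP-4844 computes the blob base fee as an exponential of accumulated "excess blob gas," evaluated in integer arithmetic by a Taylor-series helper called fake_exponential. The constants below are the EIP-4844 values (EIP-7691 later retuned the update fraction alongside the higher blob targets).

```python
# Blob base fee per EIP-4844:
#   base_fee ≈ MIN_BLOB_BASE_FEE * e^(excess_blob_gas / UPDATE_FRACTION),
# computed with integers via a Taylor-series approximation.
MIN_BLOB_BASE_FEE = 1                      # wei, per EIP-4844
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477  # EIP-4844 value; retuned by EIP-7691
GAS_PER_BLOB = 131_072                     # one blob is 128 KB of data

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# Each block, excess_blob_gas grows when usage exceeds the target and
# shrinks when it falls short, so sustained demand above target raises
# the fee exponentially while slack demand lets it decay back toward
# the 1-wei floor.
```

This is the mechanism behind the "separate fee market": blob prices respond only to blob demand, independent of regular gas prices.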
This design is the first step toward full danksharding, Ethereum's long-term vision for massive data availability scaling. KZG commitments enable nodes to verify blob integrity while remaining forward-compatible with future upgrades that will let even resource-constrained devices verify data availability by checking only small samples rather than downloading everything.
Alternative Data Availability Solutions
For applications requiring even lower costs than Ethereum's blobs provide, several alternative Data Availability (DA) layers have emerged, each making different security compromises to achieve its cost reduction. Understanding these design choices is essential for evaluating which rollups to use.
Celestia represents the most ambitious alternative. It's a specialized blockchain that provides consensus and data availability only, without execution. Its key innovation is Data Availability Sampling, which allows even devices with limited resources to gain high confidence that full block data was published by checking only small, random pieces rather than downloading everything. The system also lets different rollups efficiently prove their data was included without downloading irrelevant information from other rollups. Security relies on validators and an honest majority of independent samplers, with full nodes able to produce fraud proofs if data is incorrectly encoded.
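The confidence math behind Data Availability Sampling is worth making explicit. With 2x erasure coding, hiding any part of the data forces an attacker to withhold a large fraction of the extended block; the sketch below uses the simplifying assumption that at least half must be withheld. Each uniformly random sample then lands on withheld data with probability at least 1/2, so the chance that an unavailable block survives k successful samples shrinks geometrically.

```python
# Simplified DAS confidence bound. Assumption: an attacker hiding any
# data must withhold at least half of the erasure-coded block, so each
# random sample independently exposes the attack with probability >= 1/2.
def false_availability_bound(samples: int, hidden_fraction: float = 0.5) -> float:
    """Upper bound on P(unavailable block passes all samples)."""
    return (1.0 - hidden_fraction) ** samples

# A light client that draws a few dozen samples gets overwhelming
# confidence without ever downloading the full block.
for k in (10, 20, 30):
    print(f"{k} samples -> bound {false_availability_bound(k):.2e}")
```

This geometric decay is why even resource-constrained devices can meaningfully contribute to data availability guarantees.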
EigenDA leverages Ethereum's restaking ecosystem (described in Section IV) to provide high-throughput data availability. A disperser coordinates the encoding and distribution of data across operators who attest to its availability. Throughput can be high, but security depends on the value restaked by operators and the specific quorum assumptions of each deployment.
Validium and committee-based systems take a different approach entirely, keeping data off-chain under the control of a committee or bonded set of operators. This can be cheaper than on-chain alternatives but weakens security guarantees since data availability isn't enforced by Layer 1 protocol rules.
Many rollups operate in hybrid modes, posting state commitments to Ethereum while using external data availability for the bulk of their data, or switching between different DA providers based on market conditions.
The data availability landscape continues to evolve rapidly, with new solutions emerging and existing ones improving their efficiency and security models. As rollups mature and user adoption grows, the choice of data availability solution will likely become as important as the choice of consensus mechanism itself.