
Multidimensional Fee Markets at 1M TPS

Written by Eclipse Labs
Published on July 10, 2025

Imagine an airline that charges passengers solely based on the number of seats, regardless of their luggage weight. The airline would be inefficient because passengers can bring heavy luggage at no additional cost. This is exactly how most blockchains handle transaction fees.

By flattening all the different, non-fungible network resources (like computation, bandwidth, and storage) into a single "gas" fee, blockchains create a single measure that doesn’t accurately represent the resources a transaction actually consumes. This inevitably leads to underutilization of network capacity, possibly by a factor of hundreds.

In a bid to improve the SVM’s existing multi-dimensional fee market implementation, we looked at the research and previous attempts, and we came to a number of conclusions regarding the utility and practicality of multidimensional fee markets on high-throughput blockchains. 

Key Takeaways

  1. Local fee markets, which are a limited implementation of multi-dimensional fee markets, are necessary for scaling high-throughput blockchains.
  2. Implementing multi-dimensional fee markets involves two main challenges: a "fee update" problem and a "block-packing" problem.
    The vast majority of existing work has tackled the fee update problem, which we’ve found to be the simpler of the two issues with implementing multi-dimensional fee markets.
  3. The other problem, choosing the highest-utility transactions subject to a set of constraints (like the gas limit), is the true bottleneck. It is a multi-dimensional knapsack problem, which is NP-hard.
  4. Our experiments show that even with a state-of-the-art C++ Constraint Programming solver, computing perfect solutions for a two-resource fee market requires at least three orders of magnitude more time than desired.
  5. Our conclusion is that, given the intractability of finding perfect solutions, dimensionality reduction and heuristic solutions are the best way forward for low-latency, high-throughput blockchains.

The Cost of Poor Metering

A study from Flashbots recently presented at EthCC showed that on EVM L2s, as much as 80% of gas is consumed by spam transactions that pay as little as 2.9% of the total gas fees.

Figure 1: Fees paid vs. gas used on EVM Rollups | Source

These spam transactions can frequently be categorized as either "on-chain searching" or "probabilistic searching" (p-searching).

On-chain searching exploits transactions’ atomicity. Searcher bots spam transactions that read the current chain state and either execute or revert based on it, rather than attempting to capture pre-calculated MEV. In p-searching, bots take advantage of the poor market structure (FCFS instead of auctions) and spam transactions with low base fees in hopes of capturing MEV opportunities cheaply. 

Some spam transactions employ both strategies, but both revolve around spamming transactions that read the chain state during execution and execute or revert based on it, instead of paying a high priority fee to capture MEV the searcher is confident exists.

Our investigations show that on-chain and p-searching are harmful to the network because of poor resource metering and market structure.
On most chains, transactions are charged only for resources used within the VM, which is a small amount compared to the total resources they consume. They require compute for signature verification, and network bandwidth for propagation, among other things, but are only charged for compute.

For the resources that are metered, like memory bandwidth, converting them to unified compute units (gas/CUs) is inefficient because the resources are non-fungible. This necessitates periodically repricing opcodes to improve scalability or prevent attacks, an approach with limited effectiveness, as covered by Vitalik here:

The "default" way we have handled such a scenario is re-pricing: make storage reading more expensive to reduce the per-block maximum to something safer. However, we have already done this many times, and it would make too many applications too expensive to do this again.

We find that MEV bots exploit these inefficiencies: they keep compute usage low, spam seemingly high-utility (gwei/gas) transactions, revert early, and avoid paying for most of the resources they consume. This trend is exacerbated on the EVM because it issues gas refunds.

A particularly egregious example is this transaction that paid a fee of $0.02 to capture an arbitrage of $0.12. But before this successful attempt, there were around 350 failed transactions that reverted quickly and only paid for the gas consumed.

The MEV spam problem isn’t solely borne out of the lack of multi-dimensional resource metering, but it crystallizes how wide-reaching the consequences of poor resource metering can be. While the SVM is an improvement over the EVM in terms of mitigating spam due to the absence of gas refunds, there's still considerable room for improvement through proper multi-dimensional fee markets.

A Brief Introduction to Multi-dimensional Resource Pricing

Most blockchains charge operations based on their use of a single resource. While this simplifies UX and block packing, it is an inaccurate abstraction of the real world. Each operation uses different amounts of non-fungible network resources, and flattening them creates a false equivalence that leads to underutilization of network capacity.

For example, imagine a blockchain with two resources, gas and blobs, that can handle up to 36M gas and 6 blobs per block. The best linear[1] constraint (a single-dimensional gas limit) that captures both constraints is $\text{resource consumption} = \text{gas} + 6\text{M} \times \text{no. of blobs} \leq 36\text{M}$

When a set of operations uses half the maximum compute (18M gas) and half the maximum number of blobs (3 blobs), this single-dimensional constraint counts the block as full when half of the network’s capacity is unused. And choosing a different constraint either leads to unsafe blocks or unnecessarily restricts capacity.
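To make the arithmetic concrete: filling half of each resource already saturates the flattened constraint,

$$18\text{M gas} + 6\text{M} \times 3\ \text{blobs} = 36\text{M} \leq 36\text{M},$$

so the block reads as "full" even though only 50% of the gas capacity and 50% of the blob capacity are actually used.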

Figure 2: Effect of a single-dimensional constraint on a multi-dimensional resource system | Source

In general, when there are k distinct resources, collapsing them into a single (safe) constraint can incur a loss of up to a factor of k, e.g., if there are three resources flattened into one, there are cases where you only use a third of the network capacity (and incur a loss of ⅔).

In summary, the worst-case inefficiency of substituting a single-dimensional constraint for a multi-dimensional one increases proportionally with the number of resources.

What Exactly is a Resource?

A resource is anything whose consumption the network can track. Each non-fungible[2] CPU instruction can be treated as a resource. And this is not just complexity for complexity’s sake; the throughput (Instructions Per Cycle) of a CPU depends on the mix and sequence of instructions. For example, a modern CPU can fetch data from memory while executing other instructions that don’t depend on that data.

Compressing instructions down to a single cycle count, as is done today, incurs the same single-dimensional loss we discussed above.

This implies that the number of resources, and by extension the worst-case loss from using a single-dimensional gas measure to represent them, is on the order of hundreds!

That’s the true (theoretical) unlock of multi-dimensional fee markets; they allow complete utilization of the network’s resources and protect against DDoS attacks due to mispricing.

History of Multi-dimensional Resource Pricing on the SVM

Contrary to popular opinion, Solana (not Ethereum) was the first chain to implement multi-dimensional resource pricing. Their implementation is referred to as local fee markets; in this model, each account is a resource and is metered separately.

We extensively covered the scalability benefits of local fee markets in a previous blog post. But a quick summary is that because each piece of state is independently metered, local fee markets allow the network to set a gas limit that is representative of the physical execution cores. Without local fee markets, the network must be conservative when setting the gas limit to prevent a long serial chain, like an NFT mint, from stalling the network.

It might be somewhat confusing to think of local fee markets as multi-dimensional fee markets because the resource is the same–compute. But enforcing per-state metering and constraints is the same as any other multi-dimensional resource pricing, and the conservative gas limit problem for non-local fee markets is the same as the inefficiency of a single constraint described above.

There have been attempts to expand Solana’s multidimensional fee market implementation, the most prominent being Chili Peppers by the Firedancer team. Chili Peppers proposes adding a new resource that meters memory bandwidth the same way Compute Units meter CPU usage. It defines a network-wide cache that separates accounts into hot accounts (already in RAM) and cold accounts (on disk); the number of memory-bandwidth units a transaction consumes depends on how much data it loads and where that data is loaded from.

In the authors’ words: “For Solana (or any blockchain), treating all state as equivalent (regardless of its usage patterns) means that either total state size will be limited by the size of RAM, or the throughput of the network will be limited to the bandwidth of disks. Actual usage patterns (and expectations for future usage patterns as the network grows) show that a relatively small amount of the total state is accessed frequently, and most of the state is accessed infrequently. This usage pattern allows a hot/cold tiered state design to allow the total state size available from disk, while achieving the throughput available from RAM.”

Chili Peppers shows how abstractions don’t always map to the real world and tangibly explains how that results in throughput loss. Unfortunately, Chili Peppers has not seen any activity in a while, and the pull request (PR) is closed as of this writing. This further inspired us to explore advancements in multidimensional resource pricing and to overcome the shortcomings identified in earlier models.

Deconstructing the Multi-dimensional Fee Market Problem

Diamandis et al. (2022) showed that the multi-dimensional resource pricing problem can be modeled as an optimization problem that decomposes into two sub-problems: the mostly solved “fee update rule” problem and the NP-hard “packing problem”.

Fee Update Problem

The fee update rule answers the question of how to price the resources to maximize network utility. There have been a number of proposed solutions:

  • Multi-Dimensional First Price Auctions (MD-FPAs): MD-FPAs are simple; each transaction bids an amount for each resource, and the block producer chooses the transactions with the highest utility/cost ratio. But due to the challenges single-dimensional FPAs faced, multi-dimensional FPAs have not been given as much attention. However, recent research (Goren and Garimidi, 2024) suggests that they are promising.
  • Multi-dimensional EIP-1559: this update rule proposes a per-resource EIP-1559 update rule. The base price of a resource in block n + 1 is set based on the price and utilization of that resource in block n, and a learning rate.
  • Diamandis et al. (2022) formally proved that per-resource EIP-1559 belongs to a class of update rules that effectively solve the fee update rule problem.
    Using techniques from convex optimization and duality theory, they showed that simple multi-dimensional EIP-1559 and the entire class of similar update rules eventually converge at the optimal prices. Their work also suggested that, while great, the naive EIP-1559 approach can be improved with some form of memory.
  • More recent work (Aptos Labs, 2024) suggests that the per-resource EIP-1559 update rule significantly exacerbates an inefficiency observed in the single-dimensional EIP-1559: transactions are priced out because of a rapid drop in demand, even when there is room for them in the block. In the authors’ words: “Events that were rare in a single-dimension (1D) model are now exceedingly more likely, leading to underutilized capacity and a negative impact on user experience.”

    To mitigate this problem, the authors propose a dot product inequality that averages out single-dimensional volatility. The equation below describes the inclusion function for transactions: 

$$ \text{include}(tx) = \begin{cases} 1 & \text{if } \vec{\text{BIDS}} \cdot \vec{\text{LIMITS}} \geq \vec{\text{BASE FEES}} \cdot \vec{\text{LIMITS}} \\ 0 & \text{otherwise} \end{cases}$$

where:

$\vec{\text{BIDS}}$ is the vector of bids by the transaction,

$\vec{\text{LIMITS}}$ is the vector of resource limits of the transaction, and

$\vec{\text{BASE FEES}}$ is the vector of base fees for the resources at that block.

This modification to the inclusion criteria ensures that transactions are included, provided their bids are reasonable across all dimensions rather than exact.
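As a minimal sketch of this inclusion rule (the vectors and numbers below are illustrative, not taken from the Aptos implementation):

import numpy as np

def include_transaction(bids, limits, base_fees):
    # Dot-product inclusion rule: include the transaction if its bids, weighted
    # by its own resource limits, cover the base fees weighted the same way.
    return float(np.dot(bids, limits)) >= float(np.dot(base_fees, limits))

# Illustrative two-resource example (compute, network):
bids = np.array([12.0, 5.0])       # per-unit bids offered by the transaction
limits = np.array([100.0, 40.0])   # per-resource limits declared by the transaction
base_fees = np.array([10.0, 8.0])  # current per-resource base fees

print(include_transaction(bids, limits, base_fees))  # True: 1400 >= 1320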

Given the evolution of the research and our permissioned sequencer set, we planned to simply extend our work on SIMD-253 to the multi-dimensional context. However, we soon realized that the update rule was the least of our problems.

The Block Packing Problem

The second sub-problem is the packing problem.

Block producers (largely validators) are tasked with building blocks that maximize network utility, subject to resource utilization constraints. This is a 0-1 knapsack problem, well known to be NP-hard. This means that as the pool of transactions to choose from grows, finding the optimal selection of transactions becomes exponentially harder.

In the single-dimensional case, the problem is (almost) optimally solved by “greedily” selecting the transactions with the highest utility/capacity ratio (Gwei/gas or lamports/CU), but in the multidimensional context, research suggests that the greedy approach falls short.
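For intuition, here is a minimal sketch of the single-dimensional greedy rule (the transaction tuples are hypothetical); it sorts by fee density and fills the block, which works well with one constraint but degrades once several resource constraints must be respected at once:

def greedy_pack(transactions, gas_limit):
    # Sort by fee density (fee per unit of gas), highest first, then fill greedily.
    ranked = sorted(transactions, key=lambda tx: tx[0] / tx[1], reverse=True)
    block, used = [], 0
    for fee, cost in ranked:
        if used + cost <= gas_limit:
            block.append((fee, cost))
            used += cost
    return block

# (fee, gas_cost) pairs; values are illustrative.
txs = [(100, 10), (90, 9), (50, 1), (30, 5)]
print(greedy_pack(txs, gas_limit=15))  # [(50, 1), (100, 10)]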

This forced us to evaluate the feasibility of finding optimal solutions to the block packing problem at our GigaCompute target.

The Experiment

To quantify the difficulty of solving the NP-hard problem, we tested the performance of a state-of-the-art solver in practical conditions. 

To start, we investigated the feasibility of solving the multi-dimensional block packing problem in a low-capacity environment like Ethereum mainnet. 

Multi-dimensional Fee Markets on a Low-Throughput Network

We analyzed Ethereum mainnet blocks between 22873945 and 22874944 and found that the blocks had an average of 201 transactions and a maximum of 493 transactions per block.

Setup

Based on this, we conducted a simple experiment:

  • We chose 500 transactions as the block size.
  • We created a two-dimensional model with two resources: compute and network (bandwidth).
  • We designed four transaction archetypes to simulate a real-world mempool[3]: simple sends, DeFi swaps, oracle updates, and random, resource-intensive transactions, each with different resource costs and fee profiles.
  • We tasked a best-in-class solver (Google's OR-Tools knapsack solver) with packing the most valuable block of 500 transactions possible, using the configuration below.
resource_types: ["compute", "network"]
target_block_size: 500

transaction_archetypes:
  simple_send:
    distribution_pct: 0.10
    resources:
      compute: {min_value: 10, max_value: 20}
      network: {min_value: 20, max_value: 30}
    fee_config:
      min_fee: 10
      max_fee: 20

  defi_swap:
    distribution_pct: 0.80
    resources:
      compute: {min_value: 80, max_value: 120}
      network: {min_value: 40, max_value: 60}
    fee_config:
      lognormal_mean: 4.5
      lognormal_sigma: 0.8

  oracle_update:
    distribution_pct: 0.05
    resources:
      compute: {min_value: 5, max_value: 10}
      network: {min_value: 30, max_value: 40}
    fee_config:
      min_fee: 100
      max_fee: 120

  random_gunk:
    distribution_pct: 0.05
    resources:
      compute: {min_value: 1, max_value: 150}
      network: {min_value: 1, max_value: 150}
    fee_config:
      min_fee: 1
      max_fee: 100

We ran this simulation thirty times for each mempool size, from 501 to 10,000 transactions, and measured the time it took the solver to find the perfect solution.

experiment:
  mempool_sizes: [501, 600, 750, 1000, 1500, 2000, 3000, 5000, 10000]
  repeats: 30
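
For illustration, here is a minimal sketch of how a single run could be posed to a solver, using OR-Tools CP-SAT (the post references both the OR-Tools knapsack solver and CP-SAT; the transaction generator, capacities, and units below are simplifying assumptions, not the exact harness):

import random
from ortools.sat.python import cp_model

def make_txs(n):
    # Simplified generator loosely following the defi_swap archetype above.
    return [(random.randint(20, 400),     # fee (stand-in for the lognormal fee)
             random.randint(80, 120),     # compute cost
             random.randint(40, 60))      # network cost
            for _ in range(n)]

def pack_block(txs, compute_cap, network_cap, time_limit_s=None):
    model = cp_model.CpModel()
    x = [model.NewBoolVar(f"tx_{i}") for i in range(len(txs))]
    # Per-resource capacity constraints: this is the multi-dimensional knapsack.
    model.Add(sum(x[i] * txs[i][1] for i in range(len(txs))) <= compute_cap)
    model.Add(sum(x[i] * txs[i][2] for i in range(len(txs))) <= network_cap)
    # Objective: maximize total fees collected in the block.
    model.Maximize(sum(x[i] * txs[i][0] for i in range(len(txs))))
    solver = cp_model.CpSolver()
    if time_limit_s is not None:
        solver.parameters.max_time_in_seconds = time_limit_s
    status = solver.Solve(model)
    chosen = [i for i in range(len(txs)) if solver.Value(x[i])]
    return status, chosen

status, chosen = pack_block(make_txs(1000), compute_cap=50_000, network_cap=25_000)
print(len(chosen), "transactions packed, optimal:", status == cp_model.OPTIMAL)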

Results

In the Ethereum-like environment, the CP-SAT solver can compute optimal solutions for mempools four times larger than the block size within the current Ethereum mainnet execution window (4 seconds).

This suggests that calculating optimal solutions might be entirely feasible in an Ethereum-like environment.

Figure 3: Solver performance for two resources and a block size of 500.

As the mempool gets larger, we find that solving time increases significantly and is around 14 seconds when the mempool is 20 times larger than the target block size (10,000 transactions). While not ideal, these results suggest that the multi-dimensional knapsack problem is not intractable on Ethereum mainnet.

Next, we explored the performance of the solver on a high-throughput network that closely resembles GigaSVM Eclipse.

Multi-dimensional Fee Markets on a High Throughput Network

Our GigaCompute target entails a throughput of at least 1M TPS. Given this, each component of our transaction pipeline must be capable of processing over 1M TPS with a latency of 1-2 ms. This translates to around 1,000 transactions/ms.

Setup

Based on this, we extended the previous experiments. We increased the target block size to 1,000 and increased the mempool size appropriately.

mempool_sizes: [1001, 1200, 1500, 2000, 3000, 5000, 10000]

Results

Aside from the initial simulation, where the mempool capacity is approximately the block capacity, we found that the time required to find the optimal solution scaled almost linearly with the size of the mempool. Even in the best test case, when the mempool is only 20% larger than the block, computing an optimal solution for a 1,000-transaction block takes ~3.5 seconds.

Figure 4: Solver time at different mempool sizes for a 1k-sized block

As shown in Figure 4, when the mempool is ten times larger than the block (10k txs), computing an exact solution takes as much as 10 seconds! This is at least three orders of magnitude (1,000x) slower than desired. This led us to investigate the feasibility of heuristic (good enough) solutions. So we ran the same experiment with time constraints:

solver_times: [10, 5, 3, 1, 0.1, 0.01, 0.001, 0.0001]
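
Reusing the pack_block and make_txs sketch from the setup above, the time budgets would translate into per-run solver limits along these lines (again an assumption about the harness, with the budgets interpreted as seconds):

# Re-run the packing under each time budget.
for limit in [10, 5, 3, 1, 0.1, 0.01, 0.001, 0.0001]:
    status, chosen = pack_block(make_txs(10_000), compute_cap=50_000,
                                network_cap=25_000, time_limit_s=limit)
    print(limit, "s:", "optimal" if status == cp_model.OPTIMAL else "not optimal",
          len(chosen), "txs")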

With the time constraints in place, even in the best test case (mempool size approximately equal to block capacity), computing an approximate solution in less than 3ms per batch proved impossible. The heuristic solver was slightly faster than the exact solver, but still far too slow.

| Mempool Size (Transactions) | Optimal Solution Time | Minimum Time for Any Feasible Solution |
| --- | --- | --- |
| 1,200 | 3.2 seconds | 2.0 seconds |
| 1,500 | 4.0 seconds | 2.8 seconds |
| 2,000 | 5.0 seconds | 4.2 seconds |
| 3,000 | 6.0 seconds | 5.4 seconds |
| 5,000 | 6.9 seconds | > 5 seconds |
| 10,000 | 10.7 seconds | > 10 seconds |

We repeated the experiment with varying conditions and found some interesting data points, like the fact that the size of the mempool relative to the block size matters more than the number of dimensions.

But regardless of the setup, we found that in the high-performance context, computing a heuristic or exact solution proved to be 1,000 to 10,000 times slower than required. This aligns with the results seen in Kiayias et al. (2025).

Given the significant gap between our desired result and the performance of a well-optimized solver, we concluded that computing exact solutions to the block-packing problem is intractable at the ultra-low latency and high throughput that we’re building Eclipse for.

In the following section, we discuss the next steps we are taking to tackle this problem.

Next Steps

Given that finding an optimal solution at 1M TPS is intractable, heuristics are the only viable approach. Fortunately, we are not starting from scratch. Our path forward involves three key areas of research. We leave the discussion of our findings on these strategies to future communication.

Strategy 1: Doubling Down on Heuristics

There is an extensive body of academic and practical research on heuristic solvers for the knapsack problem, which we can adapt for our specific domain. We are actively exploring these to build a custom, high-performance solver.

Strategy 2: Exploring Update Rules that Shrink the Mempool

An important data point from our experiments is that the relative size of the mempool to the block is a crucial factor in the performance of the solver. We’re exploring update rules that can minimize the size of the mempool without worsening the UX.

Strategy 3: Making the Problem Simpler with Dimensionality Reduction

The speed of solving a knapsack problem is also highly dependent on the number of dimensions. We established above that a blockchain can have 100s of resources. This inspires the exploration of optimal dimensionality reduction: instead of tracking 100 different resources, intelligently group them into a few "meta-resources" that capture the most important information.

The pioneers of this approach, Lavee et al. (2025), describe a process to efficiently reduce the number of dimensions. We will briefly cover the fundamental principles.

Definition 1: Gas Measure

Given a set of primitive operations (e.g., VM opcodes), a gas measure is a non-negative vector $g = (g_i)$. The gas measure is said to represent an operation-resource matrix $W = (w_{ij})$ with resource capacities $B=(B_j)$ if for any non-negative vector $x=(x_i)$:

$$\sum_i x_i g_i \le 1 \quad\implies\quad \forall\ j:\sum_i x_i w_{ij} \le B_j$$

Explanation: This defines a "gas measure" as a single set of costs ($g$) for operations. If a block's total gas cost stays within the set limit (here, $\le 1$), then all underlying, individual resource limits are respected.

Definition 1.1: Optimal Gas Measure

For an operation-resource matrix $W=(w_{ij})$ with resource capacities $B=(B_{j})$, the optimal gas measure $g_i$ for operation $i$ is defined as $g_{i}=\max_{j} w_{ij}/B_{j}$.

Explanation: This provides the formula for the most efficient single-dimensional gas measure; the cost for an operation is set by its highest fractional usage of any single resource.

For example, consider a network with two resources $A$ and $B$ with capacities 10 and 100, respectively. The optimal gas measure for an operation $o$ with resource utilization $(1, 5)$ is $\max(1/10, 5/100) = 0.1$.

This gas measure allows you to pack the maximum number of any operation that can fit within the multi-dimensional constraints into the single-dimensional block. It optimally compresses multiple dimensions into a single dimension while maintaining safety.
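
A small sketch of Definition 1.1 applied to the example above (the second operation is hypothetical, added only to show the shape of the computation):

# Optimal gas measure: g_i = max_j w_ij / B_j (Definition 1.1).
W = [
    [1.0, 5.0],    # operation o from the example: uses 1 unit of A and 5 of B
    [4.0, 20.0],   # a second, hypothetical operation
]
B = [10.0, 100.0]  # capacities of resources A and B

g = [max(w / b for w, b in zip(row, B)) for row in W]
print(g)  # [0.1, 0.4] -- operation o is priced at 0.1, matching the worked example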

Definition 2: α-approximate representation

Mathematical Statement: A gas measure $g$ that represents an operation-resource matrix $W=(w_{ij})$ with resource capacities $B=(B_{j})$ is called an $\alpha$-approximate representation $(\alpha\ge1)$, if for any non-negative vector $x=(x_{i})$ we have that: $$\Bigl(\forall j: \sum_{i}x_{i}\,w_{ij}\le B_{j}\Bigr) \implies \sum_{i}x_{i}\,g_{i}\le\alpha$$

Explanation: This definition quantifies the inefficiency of using a single gas measure. The factor $\alpha$ represents the maximum possible "overestimation" of resource usage by the single gas rule compared to the true, multi-dimensional constraints.
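
For intuition, take the gas-and-blobs example from earlier with its optimal gas measure (a unit of gas costs $1/36\text{M}$, a blob costs $1/6$). A block that fills both resources exactly to capacity satisfies every multi-dimensional constraint, yet its total gas is

$$36\text{M} \cdot \tfrac{1}{36\text{M}} + 6 \cdot \tfrac{1}{6} = 2,$$

so this measure is only a 2-approximate representation, matching the factor-of-$k$ loss (here $k=2$) described earlier.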

Theorem 1: Single-Dimensional Approximability

Mathematical Statement: The single-dimensional approximability of an operation-resource matrix $W=(w_{ij})$ with resource capacities $B=(B_{j})$ is exactly the reciprocal of the value of the zero-sum game with utilities $u_{ij}=w_{ij}/(B_{j}\cdot g_{i})$, where $g$ is the gas measure achieving this approximation (and the row player, who chooses $i$, is the minimizer).

Explanation: This theorem provides a precise method to calculate the worst-case capacity loss ($\alpha$) from using a single gas measure. This loss is found by solving the zero-sum game based on the system's operations and resource costs.

How this is Useful

If we replace the moves of the row player in the zero-sum game mentioned above with the actual historical distribution of operations, we can quantify the actual loss from replacing the multi-dimensional resource utilization with the respective gas measures. 

This will allow us to quantify just how important multidimensional fee markets are while setting the stage for dimensionality reduction.

Dimensionality Reduction

With the gas measure analysis performed in the previous step, Lavee et al. explore how the number of dimensions can be reduced with minimal loss.

Definition 3: k-Dimensional Gas Measure

Mathematical Statement: A k-dimensional gas measure $A \in \mathbb{R}^{|I|\times k}$ $\textbf{represents}$ $W'$ if for any $x \ge 0$, we have: $\bigl(\forall\, r \leq k : \sum_i x_i A_{ir} \le 1\bigr) \implies \bigl(\forall\, j : \sum_i x_i w'_{ij} \le 1\bigr).$

Explanation: This generalizes the gas measure concept from one to $k$ dimensions. It defines a simplified set of $k$ fee dimensions ($A$) that approximates a much larger, more complex set of underlying resource constraints ($W'$). Basically, instead of using a single-dimensional measure for all resource dimensions, the dimensions are partitioned and collapsed into $k$ dimensions.

Finding the optimal k-dimensional partitioning is itself an NP-complete problem. However, Lavee et al. propose a Non-negative Matrix Factorization approach that is less complex but guaranteed to be as good as any solution computed by the partitioning approach.
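
As a rough illustration of the idea (this is not the construction from Lavee et al.; it only shows the shape of a non-negative factorization into k meta-resources, with scikit-learn's NMF as a stand-in and a random matrix in place of real data):

import numpy as np
from sklearn.decomposition import NMF

# Illustrative normalized operation-resource matrix W' (operations x resources),
# where each column has already been divided by its resource capacity B_j.
rng = np.random.default_rng(0)
W_prime = rng.random((50, 8))  # 50 operations, 8 underlying resources

k = 2  # number of meta-resources to keep
nmf = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
A = nmf.fit_transform(W_prime)  # per-operation costs in the k meta-dimensions
H = nmf.components_             # how each meta-resource maps back onto real resources

# A is only a candidate k-dimensional gas measure; it would still need to be
# scaled so the implied constraints remain safe in the sense of Definition 3.
print(A.shape, H.shape)  # (50, 2) (2, 8)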

Exploring this path will allow us to significantly reduce the complexity and increase the tractability of the problem, a suggestion we find in Kiayias et al. (2025).

Conclusion

While the theoretical benefits of multi-dimensional fee markets for optimizing blockchain throughput and resource allocation are significant, their practical implementation on high-throughput blockchains is difficult, primarily due to the computational intractability of the multi-dimensional knapsack problem. We believe the next phase of research should be on domain-specific heuristic solutions and methods for mempool and dimensionality reduction. This research is essential to unlock the potential of multi-dimensional resource pricing for high-throughput, low-fee blockchains.

References

  1. Eclipse Labs. Local Fee Markets are Necessary to Scale Ethereum (2024) https://www.eclipse.xyz/articles/local-fee-markets-are-necessary-to-scale-ethereum
  2. Robert Miller. MEV and the Limits of Scaling. (2025) https://writings.flashbots.net/mev-and-the-limits-of-scaling
  3. Theo Diamandis, Alex Evans, Tarun Chitra, and Guillermo Angeris. Dynamic Pricing for Non-Fungible Resources: Designing Multidimensional Blockchain Fee Markets. (2022) https://arxiv.org/pdf/2208.07919
  4. Aggelos Kiayias, Elias Koutsoupias, Giorgos Panagiotakos, Kyriaki Zioga. One-dimensional vs. Multi-dimensional Pricing in Blockchain Protocols. (2025) https://arxiv.org/abs/2506.13271
  5. Nir Lavee, Noam Nisan, Mallesh Pai, Max Resnick. Does Your Blockchain Need Multidimensional Transaction Fees? (2025) https://arxiv.org/pdf/2504.15438
  6. Guy Goren, Pranav Garimidi. Exploring Multidimensional Gas Markets on Aptos: A New Frontier for Resource Management. (2024) https://medium.com/aptoslabs/exploring-multidimensional-gas-markets-on-aptos-a-new-frontier-for-resource-management-95ca34278664
  7. Vitalik Buterin. Multidimensional gas pricing. (2024) https://vitalik.eth.limo/general/2024/05/09/multidim.html
  8. The Ethereum network is currently undergoing a DoS attack (2016)
  9. EIP-2028
  10. EIP-150
  11. EIP-1884
  12. EIP-2929
  13. How many x86 instructions are there? (2016)
  14. SIMD-253
  15. SIMD-197
  16. Google OR Knapsack
  17. Knapsack Problem
  18. NP-Hardness

Resources

  • MiniZinc ranking for Constraint Programming (CP) solvers.

Footnotes

  1. You could use non-linear functions to reduce the loss, but non-linear knapsack problems are NP-hard, and at least as hard as multi-dimensional knapsack problems, and when solved, can never provide more utility than the multidimensional problem. So you’d much rather use a multidimensional constraint than a non-linear single-dimensional one.
  2. Fungibility in this context refers to how the CPU executes these operations; ADDs and SUBs are largely fungible since they’re basically the same operation.
  3. Note that mempool in this case just refers to the transaction buffer, not necessarily an explicit/public mempool.
