
April 1, 2026
Running a Solana validator is not a small-footprint operation. After Firedancer's mainnet launch and the SIMD-0256 compute limit increase to 60M+ compute units per block, validator hardware requirements have reached new heights. This is not a workload suited to VPS or hyperscale cloud providers. Bare metal servers are now the practical standard for mainnet validators: the hardware demands are steep, the latency requirements are strict, and the per-slot economics are unforgiving.
Solana validators require enterprise-grade hardware that rivals high-frequency trading systems. The spec table below shows minimum and recommended configurations for stable mainnet operation.
A properly configured Solana validator needs at least 256GB of RAM (512GB is the production baseline since Firedancer), a high-core-count CPU, direct-attached enterprise NVMe storage, and dedicated network connectivity with low-latency peering. Most operators run significantly above minimum to handle ledger growth, state compression, and network variance.
| Component | Minimum | Recommended |
|---|---|---|
| CPU | AMD EPYC 9004/9005 (24 cores) | AMD EPYC 9004/9005 (32+ cores, single-socket) |
| RAM | 256GB DDR5 | 512GB DDR5 |
| Storage | 2x 2TB NVMe enterprise | 2x 4TB NVMe enterprise (RAID 1) |
| Network | 1 Gbps dedicated | 10 Gbps dedicated |
| OS | Ubuntu 22.04 LTS | Ubuntu 22.04 LTS |
The AMD EPYC single-socket recommendation is critical. Solana's validator software is NUMA-sensitive, especially with Firedancer's tile-based architecture. Dual-socket configurations introduce cross-socket latency that degrades slot performance.
NVMe storage must be enterprise-grade with low-latency direct access to the kernel. Virtualized or network-attached storage introduces jitter that validators cannot afford. The ledger grows roughly 150–200GB per day on mainnet. Most operators maintain 14–30 days of local ledger, requiring 2–6TB of fast storage.
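The storage sizing above is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, using the growth and retention ranges stated in the text (not measurements):

```python
# Estimate local ledger storage from daily growth and retention window.
# Ranges match the text above: 150-200 GB/day growth, 14-30 days retained.

def ledger_storage_tb(growth_gb_per_day: float, retention_days: int) -> float:
    """Required ledger storage in TB (1 TB = 1000 GB)."""
    return growth_gb_per_day * retention_days / 1000

low = ledger_storage_tb(150, 14)   # lower bound of the operating range
high = ledger_storage_tb(200, 30)  # upper bound
print(f"Ledger storage needed: {low:.1f}-{high:.1f} TB")
```

This is where the 2–6TB figure comes from: the low end of the range barely clears 2TB, which is why most operators provision well above it.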
Network connectivity below 10 Gbps becomes a bottleneck during high-block-volume periods. Ledger replication alone can exceed 1 Gbps sustained during snapshot transfers. Network redundancy and low-latency peering to major exchanges and RPC nodes are essential for vote propagation and catch-up.
Hyperscale cloud providers and virtualized hosting introduce overhead that Solana validators cannot tolerate. A hypervisor layer adds 3–8 milliseconds of latency to disk I/O and network operations. In an environment where slots are 400 milliseconds long and votes must land within 32 slots, that overhead is catastrophic.
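To see why a few milliseconds matter at 400ms slot times, consider a rough slot-budget sketch. The overhead range comes from the paragraph above; the I/O-operations-per-slot count is a hypothetical illustration, not a measured figure:

```python
# Fraction of a 400 ms slot consumed by per-operation hypervisor overhead.
# Overhead range (3-8 ms) is from the text; ops-per-slot is hypothetical.

SLOT_MS = 400

def slot_budget_consumed(overhead_ms: float, io_ops_per_slot: int) -> float:
    """Share of the slot budget lost to virtualization overhead."""
    return overhead_ms * io_ops_per_slot / SLOT_MS

for overhead_ms in (3, 8):
    share = slot_budget_consumed(overhead_ms, 10)
    print(f"{overhead_ms} ms overhead x 10 ops/slot: {share:.1%} of the slot")
```

Even a handful of I/O round-trips at the high end of the range eats a double-digit percentage of every slot before any useful work is done.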
Virtualization also creates the noisy neighbor problem. If a validator runs on shared hardware alongside other workloads, CPU contention will cause slot misses and vote timeouts. Enterprise virtualized hosts can partially mitigate this with dedicated vCPU pinning, but that still leaves the hypervisor tax on I/O and memory consistency.
NVMe direct attachment is non-negotiable. Cloud providers abstract storage behind network layers or storage appliances. This introduces both latency and unpredictability. Validators require consistent sub-millisecond disk access. Bare metal delivers that. Virtualized storage does not.
Bare metal servers also provide deterministic, non-shared network interfaces. Virtualized network stacks in cloud environments add queuing delay and can suffer from neighbor saturation. Solana validators need direct, unshared access to network hardware.
For large operations, the cost and performance calculus is clear. Bare metal dedicated servers cost 60–70 percent less than equivalent cloud capacity when run at scale, while delivering 2–3x better latency characteristics.
Firedancer, now live on mainnet, is a complete rewrite of Solana's validator client in C. It uses a tile-based, NUMA-optimized architecture that massively improves throughput and reduces latency variance.
SIMD-0256 raised the per-block compute limit from 48M to 60M+ compute units. This allows more transactions per slot and puts even more pressure on validator hardware. The old baseline specs are now insufficient for reliable mainnet operation.
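The headroom jump is straightforward to quantify from the per-block figures above:

```python
# Relative per-block capacity increase from SIMD-0256 (48M -> 60M+ CU).
OLD_LIMIT_CU = 48_000_000
NEW_LIMIT_CU = 60_000_000

increase = NEW_LIMIT_CU / OLD_LIMIT_CU - 1
print(f"Per-block compute capacity up at least {increase:.0%}")
```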
Firedancer's architecture demands single-socket CPUs with balanced core counts and high memory bandwidth. The tile architecture assigns specific functions to CPU cores and minimizes cross-socket communication. Dual-socket setups negate much of Firedancer's architectural advantage.
Firedancer also raised the baseline RAM recommendation from 256GB to 512GB for most production setups. The increased compute limit means more state held in memory and larger snapshots to load and serve.
Operators who were marginal on the old validator client are now forced to upgrade hardware. This is not a gradual transition. Firedancer's performance benefits only manifest at the higher spec tiers.
Total cost of ownership for a Solana validator breaks down across hardware, hosting, and operational overhead.
A bare metal validator server in Europe runs $800–1,200 per month depending on CPU tier, storage capacity, and network connectivity. A single-socket AMD EPYC with 512GB RAM, 4TB NVMe, and 10 Gbps connectivity costs roughly $1,000 monthly on the managed bare metal market.
Equivalent cloud capacity costs 2.5–4x higher. A comparable configuration on major cloud platforms runs $2,500–4,000 per month. The compute, storage, and network charges compound quickly, and the performance is inferior to bare metal for this specific workload.
Add operational costs: colocation fees (if running at a third-party data center), backup connectivity, infrastructure monitoring, and staffing. Most professional operations budget $1,500–2,500 monthly per validator when including overhead.
For a network of 3–5 validators spread across regions (mainnet best practice for redundancy), TCO lands at $5,000–12,000 monthly. Cloud would cost $8,000–20,000+ for the same deployment.
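The fleet-level figures can be reproduced from the per-validator budget above. A sketch assuming the stated $1,500–2,500 per validator per month, overhead included:

```python
# Monthly fleet TCO from the per-validator budget range stated above.

def fleet_cost_range(nodes: int, low_usd: int, high_usd: int) -> tuple[int, int]:
    """(low, high) monthly cost in USD for a fleet of validators."""
    return nodes * low_usd, nodes * high_usd

for nodes in (3, 5):  # mainnet best practice: 3-5 validators across regions
    low, high = fleet_cost_range(nodes, 1500, 2500)
    print(f"{nodes} validators: ${low:,}-${high:,}/month")
```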
ROI depends on validator rewards and MEV capture. Median Solana validator rewards run roughly 4–6 SOL per day. At current SOL prices, this covers infrastructure costs and generates modest profit for most validators. MEV capture can significantly improve returns but requires additional infrastructure and software.
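One way to reason about ROI without committing to a SOL price is to compute the break-even price. The reward and cost ranges below are the ones stated in the text; MEV capture is excluded:

```python
# Break-even SOL price: the price at which daily rewards cover monthly cost.
# Reward range (4-6 SOL/day) and cost range ($1,500-2,500/month including
# overhead) are from the text above.

def breakeven_sol_price(monthly_cost_usd: float, sol_per_day: float) -> float:
    """SOL price at which staking rewards exactly cover infrastructure."""
    return monthly_cost_usd / (sol_per_day * 30)

best = breakeven_sol_price(1500, 6.0)   # cheap node, strong rewards
worst = breakeven_sol_price(2500, 4.0)  # expensive node, weak rewards
print(f"Break-even SOL price: ${best:.2f}-${worst:.2f}")
```

Any sustained SOL price above the worst-case break-even keeps even the most expensive configuration profitable on staking rewards alone.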
Frankfurt is the dominant choice for European Solana validators. The DE-CIX Internet Exchange in Frankfurt is the world's largest Internet exchange point by throughput. Peering there directly cuts latency to major Solana infrastructure, Serum DEX nodes, and RPC providers.
Frankfurt also offers GDPR jurisdictional clarity. Data residency is unambiguous. German data protection law is stringent, but for infrastructure operations, that clarity is valuable.
Strasbourg is a secondary choice with lower costs and reasonable peering. The trade-off is 2–5ms additional latency to Frankfurt-based infrastructure.
velia.net operates nodes on the DoubleZero network as a public goods contribution to Solana. The DoubleZero nodes in Frankfurt provide low-latency snapshot distribution and gossip peering. Validators hosted at velia.net in Frankfurt can peer directly with DoubleZero infrastructure at sub-millisecond latency.
Validator operators running in Frankfurt should prioritize providers with direct peering into DE-CIX. This ensures optimal routing to other major nodes, exchanges, and MEV infrastructure without backhaul through commercial transit providers.
Running a Solana validator on a VPS is not viable. Virtualized hosting adds 3–8ms of latency to disk and network operations, which is unacceptable on Solana's 400ms slots. The hypervisor overhead, noisy-neighbor risk, and shared storage abstraction make VPS hosting unsuitable for mainnet validation.
Solana does not require a minimum stake to run a validator. However, validators with delegated stake earn proportionally higher rewards. Most validators operate with at least 100–500 SOL delegated to remain profitable.
Validators participate in consensus and earn rewards by voting on blocks. RPC nodes serve data queries and transaction submission but do not participate in consensus. Validators require much higher hardware specifications.
Ten Gbps is highly recommended for mainnet validators. One Gbps is technically possible but becomes a bottleneck during high block volume periods. Professional validators almost universally operate at 10 Gbps or higher.
Running a Solana validator is a specialized infrastructure operation. The hardware demands have only increased with Firedancer and SIMD-0256. Bare metal is not one option among many; it is the baseline.
velia.net has operated bare metal infrastructure for over 22 years. Frankfurt colocation with direct DE-CIX peering provides the latency characteristics Solana validators demand.
Enterprise bare metal built for demanding workloads. No noisy neighbors, no virtualization overhead.
See Server Plans