Quality Assurance

Ensuring network optimisation as DRAM prices surge

Towards the end of 2025, increased demand from artificial intelligence (AI) data centres, alongside tightening supply in the market, sent Dynamic Random-Access Memory (DRAM) prices skyrocketing to unprecedented levels.

AI clusters, high-density virtualisation and data-heavy applications are now consuming bandwidth faster than legacy fabrics can supply, while high-bandwidth memory (HBM) production continues to draw capacity away from standard dual in-line memory modules (DIMMs). As a result, DRAM prices soared by 172% across 2025, prompting manufacturers to halt orders and pushing retail prices up by as much as 40%. In the first quarter alone, prices rose 20% quarter on quarter.

Storage pricing is also increasing, with analysts predicting spending will rise to $22.2 billion in 2026 – a 5% increase on 2025. Solid-state drive (SSD) vendors are signalling further price pressure as production lines are retooled towards higher-value products, while hard disk drive (HDD) availability has shifted due to sector consolidation and growing demand for nearline capacity.

A changing landscape

This inflation has had a notable effect on the entire supply chain. With both DRAM and storage prices escalating, performance upgrades have quickly become a significant expense.

These changes have been driven by the way AI systems, especially large-scale training clusters, are reshaping the allocation of DRAM capacity within the semiconductor industry. AI accelerators are heavily reliant on HBM, which currently absorbs a significant share of the industry’s manufacturing capacity. This diverts wafers away from standard server DRAM, leaving operators competing for fewer modules at exceptionally high prices, at a time when budgets are increasingly stretched.

Bottlenecked manufacturing steps are also causing long lead times for packaging and substrate capacity. Leading AI accelerators rely on advanced packaging techniques that prioritise HBM, and even if a supplier increases wafer output, packaging throughput cannot keep up with demand. Compounding these issues is the disproportionate share of DRAM that AI server growth requires: hyperscalers are buying more DRAM per server than ever before, even as HBM consumes upstream capacity.

The challenges to overcome

As a result, semiconductor manufacturers are now prioritising their own customers, disrupting supply chains and leaving everyone from end consumers to big businesses paying premiums. Additionally, as HBM becomes a larger share of the memory market, standard DRAM becomes more expensive to produce relative to its return, tipping the balance of what vendors choose to manufacture.

Network performance and optimisation remain an ongoing challenge, with operators now reporting underutilised central processing units (CPUs) and graphics processing units (GPUs) because data cannot move through the network fast enough. Historically, businesses have defaulted to buying more memory to overcome these issues, but in today’s volatile market that is no longer feasible. Instead, the focus must pivot away from price-sensitive components and towards the layer of the stack that is actually restricting utilisation.

Why your network is the real bottleneck

Across real-world deployments, network capacity is restricting performance: ‘east-west’ traffic is growing, and this is causing workloads to hit network limits before compute or memory saturation. Only 7% of AI/ML teams report GPU utilisation above 85% during peak workloads, with the majority struggling with data loading, network, or I/O delays. The issue isn’t the hardware itself, but the movement of data between nodes, storage, and accelerators.

In any network, bandwidth and latency shape how efficiently workloads can run. Although AI training, virtualised databases, analytics clusters and multi-tenant environments behave differently, they all share a common dependency: timely data movement. When networks can’t keep up with demand, adding DRAM will have only a minimal impact. So why do organisations continue to buy memory to ‘fix’ this problem?
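
A back-of-envelope model makes the point. The sketch below is purely illustrative – the function and every figure in it are assumptions for the sake of argument, not measurements from any real deployment:

def utilisation_cap(fabric_gbps: float, demand_gbps: float) -> float:
    """Upper bound on compute utilisation when data must arrive over the network.

    If the fabric delivers less than the accelerators can consume, utilisation
    is capped at the supply/demand ratio, however much DRAM is fitted.
    """
    return min(1.0, fabric_gbps / demand_gbps)

# Hypothetical node: accelerators able to consume 400 Gbps of training data.
# On a 100 Gbps fabric they idle roughly 75% of the time; doubling the DRAM
# changes nothing, while upgrading the optics removes the cap entirely.
print(f"100G fabric: {utilisation_cap(100, 400):.0%}")   # 25%
print(f"400G fabric: {utilisation_cap(400, 400):.0%}")   # 100%

However much memory is added to such a node, its utilisation cannot exceed that cap; only faster data movement raises it.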

Embracing fibre expansion

Simply put, if operators aren’t fully utilising the network infrastructure they already have, they cannot justify paying premiums for additional memory. Without moving away from legacy infrastructure, memory expansion will only increase costs without genuinely improving throughput. A more effective path is to strengthen the network itself.

Fibre upgrades can deliver measurable performance gains at a fraction of the cost of DRAM expansion. In environments limited by ‘east-west’ congestion, high-bandwidth optics increase workload throughput across clusters, keeping latency predictable and reducing queueing delays within the network. Fibre optic prices have also remained stable while DRAM and NAND costs have not, giving operators a cost-effective way to drive up CPU and GPU utilisation and realise far larger gains than additional memory would provide.

For those seeking stronger network performance without the cost of OEM-branded optics, ‘OEM alternative’ vendors – such as ProLabs – offer compatible optics that meet the same performance requirements at a lower price. These solutions undergo rigorous testing to ensure they work with OEM-branded switches and routers across mixed environments, which is critical for operators requiring dependable behaviour across legacy and next-generation platforms. They also offer full-code verification, stress testing, extended temperature validation and ongoing interoperability testing to support long-term reliability.

The smarter path to enhanced performance

DRAM prices will continue rising as AI adoption and HBM demand constrain supply. By ensuring optimal optical network infrastructure is in place, operators can avoid throwing good money after bad, paying only for the memory they genuinely require and realising valuable cost savings in a challenging market.

The case for fibre optics is clear: increased throughput, strengthened utilisation, and headroom for future compute cycles without exposure to unpredictable memory pricing. With a fibre-focused approach, underpinned by solutions from OEM alternative vendors, operators can scale efficiently and extend the lifecycle of their current hardware.

About the author:

Sam Walker, ProLabs Vice President of Sales, EMEAI