Market Analysis

Changing fortunes in component market drivers?

By Adam Fletcher, Chairman of the Electronic Components Supply Network (ecsn)

For the past decade or so, the primary driver of the global electronic components market has been demand for the latest innovations in cellular mobile phone (CMP) handsets and the related infrastructure.

The Automotive sector occupied second place, followed by the high-performance computing (HPC) sector. But no longer! The HPC sector (now re-named Hyperscale Computing) is today’s number one driver, and according to Adam Fletcher, Chairman of the UK’s Electronic Components Supply Network (ecsn) and the International Distribution of Electronics Association (IDEA), we’ll have to wait until underlying demand returns to the wider market, including the industrial, medical, aerospace, and military sectors, before the electronic components market can claim to be in a stable state of recovery.

Overall, the global electronic components market is experiencing disappointing growth: CMP remains the largest market, followed by Automotive, but neither of these sectors can claim to be growing. The migration to 5G networks provided a temporary boost to the CMP market, but demand for new phones remains stubbornly below previous market highs, probably due to a combination of a lack of compelling new features, escalating prices, and market saturation. The Automotive sector is undergoing huge structural change as it attempts to migrate away from internal combustion engine (ICE) powered vehicles towards electric and hybrid alternatives in accordance with recent climate change legislation. Consumer demand for electric and hybrid powered vehicles has been lacklustre due to concerns over price, range and charging infrastructure, and confusing technology choices.

The rise of hyperscale computing

High levels of competition in the cloud, datacentre, and edge computing enterprise sector are currently forcing a rapid transition away from high-performance CPUs, predominantly supplied by IBM, Intel, AMD, and ARM, towards graphics processing units (GPUs). Developed to meet the ‘step-function’ change in processing performance demanded by artificial intelligence (AI) services, GPUs support the latest generation of large language models such as the ubiquitous ChatGPT. NVIDIA and AMD are currently leading the GPU charge, but in-house SoC designs from a host of rapidly rising semiconductor startups are hot on their heels.

A memory-led recovery

GPUs are expensive (rightly so, given the massive investment required to design, manufacture, and test them) and sales of these devices are growing exponentially, but in reality the current revenue growth in the semiconductor (and therefore electronic components) market is being driven by the huge increase in demand for the very high-performance memory required to support GPUs. Whilst commodity memory such as DRAM and Flash continues to hold a substantial market share, demand for the new generation of memories, such as double data rate 5 synchronous DRAM (DDR5) and 3D-stacked high bandwidth memory (HBM3E), is escalating due to their smaller form factor, larger capacity, and much faster read/write speeds. Sadly, demand is outstripping the ability of the three leading manufacturers to supply, and whilst they struggle to increase this ‘bleeding edge’ capacity in an overheating market they are not producing the other products that customers still need. The price of commodity devices will almost certainly crash, but no one is forecasting when this will happen.

Headwinds to HPC investment

A recent study published by Tangoe (a US-based technology expense and asset management organisation) confirms that spending on cloud computing will have increased by 30% this year, largely in support of generative AI and related technologies. Although this trend looks likely to continue, it presents these enterprises with a very significant challenge: how to recover these costs from their customers in an ultra-competitive market? In practice, these increased costs will quickly percolate down to end customers who, probably without really considering or realising the fact, are the ultimate users of this technology.

It should also be noted that the GPUs, memory, and associated architecture used to process large language models in datacentres will mean a huge increase in energy usage, potentially accounting for up to 10% of global electricity consumption, although alternative solutions to this problem are currently being developed.

Rapid technology migration in AI

The semiconductor industry has always been very dynamic, but the pace of change in the artificial intelligence field has been nothing short of phenomenal. Previous semiconductor technology advancements have typically taken four to five years from academic research to implementation in silicon, but because the main AI driver is pure mathematical functionality in software, the timescale from research to implementation is now under twelve months, most of which is spent modifying the architecture of the silicon on which the software will run to optimise performance.

Today, demand for GPU, DDR5, and HBM devices is primarily driven by the ‘training’ of large language models (AI algorithms have to be trained and refined to successfully recognise recurring patterns in large datasets) in AI applications ranging from image recognition to molecular biology. Once trained and embedded in a product, the actual usage phase of the AI is referred to as ‘inference’, which needs significantly less processing, memory, and power.
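For readers who prefer to see the distinction in concrete terms, the short sketch below (written in Python with the widely used PyTorch library; the model and data are entirely hypothetical and not tied to any product mentioned in this article) illustrates why training is so much hungrier than inference: a training step has to hold activations, gradients, and optimiser state in memory, whereas an inference call is a single forward pass with none of that overhead.

```python
# Minimal sketch (hypothetical model and data): training vs inference resource profile.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10))
optimiser = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# --- Training step: forward pass plus backward pass.
# Activations are cached for backpropagation and the optimiser keeps extra state,
# so memory and compute requirements are several times those of the model alone.
x = torch.randn(32, 512)              # a batch of dummy inputs
y = torch.randint(0, 10, (32,))       # dummy labels
loss = loss_fn(model(x), y)
loss.backward()                       # gradients allocated for every parameter
optimiser.step()
optimiser.zero_grad()

# --- Inference: forward pass only.
# No gradients and no optimiser state, so far less memory and compute are needed.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 512)).argmax(dim=-1)
```

It is precisely this gap between the two phases that makes it attractive to push ‘inference’ out of the datacentre and onto far more modest hardware.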

Potential ‘game changer’

In April 2024, researchers at the Massachusetts Institute of Technology, the California Institute of Technology, and Northeastern University published a paper proposing the use of Kolmogorov-Arnold Networks (KANs) as an alternative to the multi-layer perceptrons (MLPs) used by LLMs. Their aim was to deliver similar accuracy for both ‘training’ and ‘inference’ functions while dramatically reducing model size, memory, and power consumption. The first of a flurry of resulting new product designs will be available next year and is expected to deliver equivalent AI functionality with much smaller memory requirements, in a footprint small enough to be embedded within the existing DDR memory of a CMP.

It is also likely to drive increased AI adoption in personal computers, tablets, and a multitude of other devices where ‘inference’ needs to be embedded locally within the device, rather than backhauling the data to a server farm for processing before the output is returned to the device, as is the current practice. In automotive applications it will enable the edge processing necessary for ADAS (advanced driver-assistance system) functionality to be managed within the vehicle, dramatically reducing system latency and thereby improving overall system performance and reliability while lowering power budgets and reducing costs.
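To give a flavour of the underlying idea, the sketch below is a deliberately simplified, purely illustrative KAN-style layer shown alongside a conventional MLP layer. It is not the researchers’ published implementation: it uses a simple sine basis in place of the B-splines described in the paper, and the class name KANStyleLayer is my own invention. The point it illustrates is the architectural difference: in an MLP each connection is a single learnable weight feeding a fixed activation function, whereas in a KAN each connection carries its own small learnable function.

```python
# Conceptual sketch of a Kolmogorov-Arnold (KAN-style) layer, for illustration only.
# Each edge (input i -> output j) has its own learnable univariate function, here a
# learnable mix of fixed sine basis functions (the published work uses B-splines).
import torch
import torch.nn as nn

class KANStyleLayer(nn.Module):
    def __init__(self, in_features: int, out_features: int, n_basis: int = 8):
        super().__init__()
        # One set of basis coefficients per (output, input) edge.
        self.coeffs = nn.Parameter(torch.randn(out_features, in_features, n_basis) * 0.1)
        # Fixed frequencies for the sine basis.
        self.register_buffer("freqs", torch.arange(1, n_basis + 1).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> basis expansion: (batch, in_features, n_basis)
        basis = torch.sin(x.unsqueeze(-1) * self.freqs)
        # phi_{j,i}(x_i) = sum_k coeffs[j, i, k] * basis_k(x_i); then sum over inputs i.
        return torch.einsum("bik,oik->bo", basis, self.coeffs)

# Equivalent MLP layer for comparison: a weight matrix followed by a fixed activation.
mlp_layer = nn.Sequential(nn.Linear(16, 4), nn.SiLU())
kan_layer = KANStyleLayer(16, 4)

x = torch.randn(32, 16)
print(mlp_layer(x).shape, kan_layer(x).shape)  # both: torch.Size([32, 4])
```

Whether this structure translates into the promised savings in model size, memory, and power at production scale remains to be demonstrated in silicon.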

Market instability

In previous cycles it was typical for the semiconductor market recovery to be led by commodity memory, closely followed by sales of logic, analog, and discrete products. Sadly, this trend has not been apparent in the current growth cycle and, as I write, demand for these products remains very disappointing.

The industry concern is that whilst the headline sales revenue number for semiconductors is growing, the corresponding volume of components shipped is static, suggesting that the current growth cycle is inherently unstable. I hope this concern proves to be misplaced, but much depends on how long the investment cycle in hyperscale computing continues (it looks to be on a multi-year timescale) and how the other major industry sectors fare. A relatively small uptick in demand for CMP or Automotive products could quickly result in significantly restricted component availability and extended manufacturer lead-times.
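To see why static volumes make the headline number look fragile, consider the toy decomposition below (all figures are hypothetical and purely illustrative): revenue is simply units shipped multiplied by average selling price (ASP), so if units are flat then every point of revenue growth has to come from price and mix, i.e. from a relatively small number of expensive AI-related parts.

```python
# Toy decomposition of semiconductor revenue growth (all figures hypothetical).
# revenue = units_shipped * average_selling_price (ASP)
last_year = {"units": 1.00, "asp": 1.00}   # normalised baseline
this_year = {"units": 1.00, "asp": 1.18}   # flat volume, higher price/mix

revenue_growth = (this_year["units"] * this_year["asp"]) / (
    last_year["units"] * last_year["asp"]) - 1.0
unit_growth = this_year["units"] / last_year["units"] - 1.0

print(f"revenue growth: {revenue_growth:+.0%}")  # +18%, entirely price/mix driven
print(f"unit growth:    {unit_growth:+.0%}")     # +0%, no broad-based recovery
```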

Concluding thoughts

Due to the high levels of geopolitical uncertainty, many customer organisations are struggling to forecast their customers’ demand, and therefore their own demand, to their supply network. Procurement professionals, and the entire electronic components supply network, need to be extremely cautious over the next twelve months because of the numerous variables that could impact overall customer demand and, by extension, the global electronic component supply and demand dynamic. Honest, open, and frank communication with partners up and down the electronic components supply network, together with the provision of accurate and timely requirements forecasts, costs little but delivers a significant competitive advantage to all parties in the supply network and throughout the wider economy.

Adam Fletcher, Chairman of the Electronic Components Supply Network (ecsn)

This article originally appeared in the November issue of Procurement Pro.