Who invented the semiconductor?

No one person invented the semiconductor. The device was developed over many decades, with researchers across Europe and America contributing to its realisation. Each discovery added to what would later become the foundation of semiconductor theory, though at the time these findings were often treated as isolated curiosities. Over the course of more than a century, this gradual accumulation revealed the unusual properties of materials that behaved like neither conductors nor insulators.

The first recorded use of the word “semiconducting” dates back to 1782, when Italian chemist and physicist Alessandro Volta – better known as the inventor of the electric battery – described the behaviour of certain materials in this way. The term did not take on its modern scientific meaning until much later, but Volta’s description marked the beginning of a long line of enquiry.

Early observations

In 1821, German physicist Thomas Seebeck found that a current flows in a closed circuit of two different conductors when their junctions are held at different temperatures. This became known as the Seebeck effect, and it highlighted the role of temperature in electrical behaviour. Twelve years later, Michael Faraday reported that silver sulphide’s resistance fell as it was heated – the opposite of how metals such as copper behave. This was one of the first clear signs that semiconductors followed their own rules.
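A modern, simplified way to read Faraday’s observation – one not available to him at the time – is that heating a semiconductor frees more charge carriers, so its conductivity rises and its resistance falls. In standard textbook notation (the symbols below are illustrative, not drawn from the article), the intrinsic carrier concentration grows roughly exponentially with temperature:

\[
n_i(T) \propto T^{3/2} \exp\!\left(-\frac{E_g}{2 k_B T}\right)
\]

where E_g is the material’s band gap, k_B is Boltzmann’s constant and T is the absolute temperature. Metals, by contrast, already have abundant free electrons, and heating mainly increases scattering, which is why their resistance rises instead.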

In 1839, French physicist Alexandre Edmond Becquerel observed a voltage at the junction of a solid and a liquid electrolyte when light struck it. This was the first description of the photovoltaic effect. The idea that light could trigger electrical activity would resurface repeatedly in later discoveries.

By 1873, English engineer Willoughby Smith had found that selenium became more conductive when exposed to light – a property that would make it useful in early light sensors. Just a year later, German physicist Karl Ferdinand Braun reported conduction and rectification in metallic sulphides. Similar effects had in fact been described as early as 1835 by Swedish scientist Peter Munck af Rosenschöld, though his work was largely overlooked. Around the same time as Braun, Arthur Schuster observed that copper oxide layers on wires acted as rectifiers, an effect that vanished when the oxide was cleaned away.

In 1876, William Grylls Adams and Richard Evans Day discovered the photovoltaic effect in selenium, providing further evidence that light could directly generate current. In 1879, American physicist Edwin Hall identified what became known as the Hall effect, which allowed researchers to measure the type and density of charge carriers inside a material. Together, these discoveries laid the foundation for a deeper understanding of semiconductor physics.
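As a rough modern illustration (the relation below is standard textbook shorthand, not part of Hall’s original paper), the voltage that appears across a current-carrying slab placed in a perpendicular magnetic field depends directly on how many charge carriers the material contains and on their sign:

\[
V_H = \frac{I B}{n q t}
\]

where I is the current, B the magnetic field, t the slab thickness, q the charge of each carrier and n the carrier density. Measuring V_H therefore yields n, and its sign reveals whether conduction is by electrons or by positive ‘holes’.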

Moving towards a theory

The discovery of the electron in 1897 gave scientists a new lens through which to explain electrical behaviour. By the early 20th century, the various isolated observations of rectification, photoconductivity, and thermoelectricity began to be connected within the emerging field of solid-state physics.

The term “semiconductor” itself gained currency in the early 1900s. Josef Weiss, in his doctoral thesis of 1910-11, used the German word Halbleiter, which translates as semiconductor. Earlier uses of similar terms had appeared, but Weiss’s was the first to capture the concept in a recognisably modern sense.

By 1931, British physicist Alan Herries Wilson had developed band theory, which explained how electrons in solids occupy bands of allowed energies separated by gaps, and why semiconductors behave differently from metals and insulators. Around the same time, Bernhard Gudden studied the effects of impurities in semiconductors, and theorists including Schottky, Mott, and Davydov advanced models of metal-semiconductor junctions and the p-n junction. These ideas laid the groundwork for the devices that would soon follow.

A gradual emergence

From Volta’s first mention of “semiconducting” materials in the 18th century through to the theoretical breakthroughs of the 1930s, the semiconductor was never a single invention but rather an accumulation of observations. Each discovery added a piece to the puzzle, gradually turning isolated curiosities into a coherent theory.

By the middle of the 20th century, this theory provided the basis for the first practical semiconductor devices – the point-contact rectifiers, transistors, and diodes that defined modern electronics.

Why is a semiconductor called a semiconductor?

The name ‘semiconductor’ reflects its in-between nature. A semiconductor is not a full conductor like copper, nor is it a complete insulator like glass. Instead, its ability to carry current depends on conditions such as temperature, light, or the presence of impurities. This ‘half-conducting’ character is what led Weiss and his contemporaries to adopt the word Halbleiter – literally “half conductor” – which, when translated, gave us the modern term semiconductor.