BLOG

Published 27 November 2024 by Benjamin Skuse

Nobel Prize in Physics 2024: Where the AI Revolution Began

Artificial neural networks mimic processes in the brain. Photo/Credit: alvarez/iStockphoto

When the first Nobel Prizes were awarded on 10 December 1901, science as an endeavour was a very different beast from the discipline we participate in and witness today. Although it remains at its core the pursuit of knowledge and understanding of reality and the world around us, science today is more often than not fundamentally inter- and multidisciplinary. Such a fundamental shift in how science is conducted raises the question: can the fields in which Laureates are awarded – chemistry, physics, and physiology or medicine (plus economic sciences since 1969) – be defined precisely nowadays, and do they even exist as separate disciplines any longer?

More recently, the rise of machine learning and large language models (often categorised under the blanket term AI) in the past few years has provided science with a shiny new tool to pursue scientific knowledge and understanding. With an ever more critical and growing role in science, should significant developments in AI be recognised by the Nobel Committee as scientific advances or be ignored, categorised as solely technological, computational, or mathematical progress?

AI’s inexorable diffusion into all aspects of science and society also raises much deeper questions. Given the technology’s power and ability to evolve on dramatically faster timescales than our own intelligence, will researchers continue to steer AI towards scientific discoveries, or will AI take the lead role and guide humans? And if the latter comes to pass, could this changing of the guard be mirrored more widely across society, even perhaps representing an existential threat to humanity?

These questions and more were brought to the fore this year when the Nobel Committee awarded the 2024 Nobel Prize in Physics to John Hopfield, US professor emeritus at Princeton University, and Geoffrey Hinton, British-Canadian professor emeritus at the University of Toronto, “for foundational discoveries and inventions that enable machine learning with artificial neural networks”, and the 2024 Nobel Prize in Chemistry to three other AI trailblazers: John Jumper and Demis Hassabis, who developed AlphaFold2 at Google DeepMind to predict protein structures, and David Baker, who used AI to design completely new proteins.

With the latter trio delivering computational tools for predicting and designing the structure of proteins, the link to the traditional discipline of chemistry was quite clear – proteins control and drive all the chemical reactions that form the basis for life. But to get to the heart of why Hopfield and Hinton’s seminal work in machine learning was classed as physics requires a little more thought.

The Birth of Artificial Neural Networks

After a PhD in theoretical condensed matter physics, Hopfield initially followed in his physicist parents’ footsteps, carving out a niche in the interaction of light with solids. Yet at the end of the 1960s, the US theoretical physicist changed tack, moving to the intersection of physics and biology, and by 1982 he was specialising in neurobiology, at the point where theoretical physics, biology, and computer science meet.

Concept of a neural network. Photo/Credit: peepo/iStockphoto

His first contribution to neurobiology – a paper entitled “Neural networks and physical systems with emergent collective computational abilities” – was seminal. It introduced his eponymous network, one of the first artificial neural networks, and a first demonstration of how computers could use a layer of interconnected nodes to store and recall information.

The Hopfield network was directly inspired by the concept of a spin glass – a type of disordered material studied in physics, characterised by magnetic moments or ‘spins’ that interact with one another in a random and often conflicting (frustrated) manner – and indeed shares the same dynamical description as a simplified spin glass. The network stores images and other information as patterns, mimicking the way memories are stored in the brain. Moreover, it can recall an image when prompted with a similar image, an ability named associative memory.
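The idea is compact enough to sketch in a few lines of code. The minimal Python example below – an illustration, not Hopfield’s original implementation; the network size, seed, and `recall` helper are all choices made here – stores two random patterns using the classic Hebbian outer-product rule, corrupts one of them, and lets the network settle back onto the stored memory, demonstrating associative recall.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store two random +/-1 patterns with the Hebbian outer-product rule
n = 64
patterns = rng.choice([-1, 1], size=(2, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)  # Hopfield networks have no self-connections

def recall(state, sweeps=5):
    """Asynchronously update neurons until the state settles on a stored pattern."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Flip 10 of the 64 bits of the first pattern, then let the network clean it up
noisy = patterns[0].copy()
flipped = rng.choice(n, size=10, replace=False)
noisy[flipped] *= -1

restored = recall(noisy)
print((restored == patterns[0]).all())
```

Each asynchronous update can only lower the network’s energy, which is why the state slides downhill into the nearest stored pattern – the same behaviour a spin glass exhibits as it relaxes towards a local energy minimum.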

Learning to Crawl, Sit, Stand, Walk, and Run

Meanwhile, Hinton was forging an unusual (at least, at the time) specialism at the boundary between computation and how the brain works, building knowledge in experimental psychology and AI. In a 1985 paper – “A learning algorithm for Boltzmann machines” – Hinton (alongside co-authors David Ackley and Terrence Sejnowski) built on Hopfield’s research by leaning on ideas from statistical physics to incorporate probabilities into a layered version of the Hopfield network.

Their more advanced network often consists of two layers: a visible layer, where information is fed in and read out, connected to a hidden layer that affects how the network functions as a whole. This ‘Boltzmann machine’ – so named because the amount of energy available to the system is described by the Boltzmann equation, developed by the 19th-century Austrian physicist and father of statistical mechanics Ludwig Boltzmann to describe the dynamics of an ideal gas – was a conceptual breakthrough. It revealed for the first time that an artificial neural network could learn from data, categorise images, and even generate new and original images if trained on a set of similar images.
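The two-layer structure can be sketched as follows. This Python snippet is illustrative only – the layer sizes, weights, and helper names (`energy`, `sample_hidden`) are invented here, and the weights are random rather than trained – but it shows the core machinery: a joint energy over visible and hidden units, and hidden units sampled probabilistically from the visible layer.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer (restricted) Boltzmann machine: 6 visible, 3 hidden units.
# Weights and biases are illustrative random values, not a trained model.
n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_visible = np.zeros(n_visible)
b_hidden = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):
    """Energy of a joint (visible, hidden) configuration; low energy = likely state."""
    return -(v @ W @ h + b_visible @ v + b_hidden @ h)

def sample_hidden(v):
    """Sample the hidden layer given the visible layer (one Gibbs sampling step)."""
    p = sigmoid(v @ W + b_hidden)           # P(h_j = 1 | v)
    h = (rng.random(n_hidden) < p).astype(float)
    return h, p

v = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])  # data clamped to the visible layer
h, p = sample_hidden(v)
print("P(h=1 | v):", p)
print("energy:", energy(v, h))
```

The probability of any joint configuration is proportional to exp(−energy), which is exactly the Boltzmann distribution from statistical physics; training nudges the weights so that configurations resembling the data end up with low energy.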

The artificial neural networks Hopfield and Hinton developed in the 1980s would be described as rudimentary by AI researchers today. Yet they paved the way for technologies including computer vision, recommendation engines, and generative AI that continue to revolutionise modern business and everyday life.

Big data visualization. Photo/Credit: natrot/iStockphoto


Perhaps most importantly from the perspective of the Nobel Prize in Physics, the machine learning and AI technologies that grew from Hopfield and Hinton’s early work have become indispensable to physics research. Whether it be sifting through vast swathes of data from the Large Hadron Collider to recognise the signatures of different particles in the debris of high-energy collisions, or developing and optimising models that predict climate patterns or reduce energy consumption, many modern research tasks simply could not be done without artificial assistance.

Calling Out Dangers

However, though both are rightfully proud of their work and happy to accept the awards, neither Laureate is deaf to the concerns people have regarding the threats posed by the AI technologies that they played an influential part in developing.

Hinton, in particular, has been highly vocal about these dangers, even resigning from his role at Google in 2023 to ensure he could speak freely on such issues. “With respect to the existential threat of these things getting out of control and taking over, I think we’re [at] a kind of bifurcation point in history where in the next few years we need to figure out if there’s a way to deal with that threat,” he warned during his customary telephone interview soon after receiving the Nobel award. “I think it’s very important right now for people to be working on the issue of how we will keep control.”

Hopfield was equally outspoken in his Nobel interview: “I share his worries. You always worry when things look very, very powerful and you don’t understand why they are, which is to say you don’t understand how to control them, or if control is an issue, or what their potential is.”

Benjamin Skuse

Benjamin Skuse is a professional freelance writer of all things science. In a previous life, he was an academic, earning a PhD in Applied Mathematics from the University of Edinburgh and an MSc in Science Communication. Now based in the West Country, UK, he aims to craft understandable, absorbing and persuasive narratives for all audiences – no matter how complex the subject matter. His work has appeared in New Scientist, Sky & Telescope, BBC Sky at Night Magazine, Physics World and many more.