Nobel prize for physics goes to pair who invented key AI techniques
The 2024 Nobel prize in physics has been awarded to John Hopfield and Geoffrey Hinton for their work on artificial neural networks and the fundamental algorithms that let machines learn, which are key to today’s large language models like ChatGPT.
“I’m flabbergasted, I had no idea this would happen,” Hinton told the Nobel committee upon hearing the prize announcement. “I’m very surprised.” Hinton, who has been vocal about his fears around the development of artificial intelligence, also reiterated that he has some regrets about the work he had done. “In the same circumstances, I would do the same again, but I am worried that the overall consequences of this might be systems more intelligent than us that eventually take control,” he said.
While AI might not seem like an obvious contender for the physics Nobel, both the discovery of neural networks that can learn and their applications are intimately connected to physics, said Ellen Moons, chair of the Nobel Committee for Physics, during the announcement. “These artificial neural networks have been used to advance research across physics topics as diverse as particle physics, materials science and astrophysics.”
Many early approaches to artificial intelligence involved giving computer programs logical rules to follow to help solve problems, but this made it difficult for them to learn from new information or cope with situations they hadn’t seen before. In 1982, Hopfield, at Princeton University, created a network architecture, now known as a Hopfield network, consisting of a collection of nodes, or artificial neurons, that can change the strength of their connections via a learning algorithm that Hopfield invented.
That algorithm was inspired by physics, in particular descriptions of magnetic materials as collections of tiny atomic magnets whose interactions determine the energy of the system. In the network, the connection strengths are set so that stored patterns correspond to low-energy states; when given a distorted input, the network iteratively updates its nodes to push the energy towards a minimum, settling on the closest stored pattern.
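To get a feel for what such a network does, here is a minimal sketch in Python, assuming NumPy; the toy eight-node pattern and the function names are our own illustration, not Hopfield’s original code. It stores a binary pattern with a simple learning rule, then recovers it from a corrupted copy by repeatedly lowering the network’s “energy”.

```python
# A minimal sketch of a Hopfield network, for illustration only.
import numpy as np

def train(patterns):
    """Set connection strengths so stored patterns are low-energy states."""
    n = patterns.shape[1]
    weights = np.zeros((n, n))
    for p in patterns:               # each p is a vector of +1/-1 values
        weights += np.outer(p, p)
    np.fill_diagonal(weights, 0)     # no self-connections
    return weights / n

def energy(weights, state):
    """Energy function borrowed from models of interacting spins."""
    return -0.5 * state @ weights @ state

def recall(weights, state, steps=10):
    """Update nodes one at a time; each update can only lower the energy."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if weights[i] @ state >= 0 else -1
    return state

# Store one 8-node pattern, corrupt two of its values, then recover it.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = train(pattern[None, :])
noisy = pattern.copy()
noisy[[0, 3]] *= -1
recovered = recall(w, noisy)
print("energy before:", energy(w, noisy), "after:", energy(w, recovered))
print("matches stored pattern:", np.array_equal(recovered, pattern))
```

Running the sketch shows the corrupted input settling back into the stored pattern as the energy falls, which is the behaviour the spin-system analogy describes.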
In the same year, Hinton, at the University of Toronto, began developing Hopfield’s idea to help create a closely related machine learning structure called a Boltzmann machine. “I remember going to a meeting in Rochester where John Hopfield talked and I first learned about neural networks. After that, Terry [Sejnowski] and I worked feverishly to work out how to generalise neural networks,” he said.
Hinton and his colleagues showed that, unlike previous machine learning architectures, Boltzmann machines could learn and extract patterns from large data sets. This ability, combined with large amounts of data and computing power, underpins many of today’s artificial intelligence systems, such as image recognition and language translation tools.
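The sketch below illustrates that kind of pattern extraction. For brevity it uses the restricted Boltzmann machine, a later simplified variant, trained with a single step of contrastive divergence rather than the original, slower algorithm; the toy data and all variable names are our own assumptions for the example.

```python
# A minimal sketch of pattern learning in a Boltzmann-machine-style model.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    """Stochastic binary units: fire with probability p."""
    return (rng.random(p.shape) < p).astype(float)

# Training data: noisy copies of two binary "prototype" patterns.
prototypes = np.array([[1, 1, 1, 0, 0, 0],
                       [0, 0, 0, 1, 1, 1]], dtype=float)
data = np.repeat(prototypes, 50, axis=0)
data = np.abs(data - (rng.random(data.shape) < 0.1))  # flip 10% of bits

n_visible, n_hidden = data.shape[1], 2
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

for epoch in range(200):
    v0 = data
    h0 = sample(sigmoid(v0 @ W + b_h))        # hidden units given the data
    v1 = sample(sigmoid(h0 @ W.T + b_v))      # the model's "reconstruction"
    h1 = sigmoid(v1 @ W + b_h)
    # Contrastive-divergence update: nudge the model towards the data.
    lr = 0.1 / len(data)
    W += lr * (v0.T @ h0 - v1.T @ h1)
    b_v += lr * (v0 - v1).sum(axis=0)
    b_h += lr * (h0 - h1).sum(axis=0)

# After training, the hidden units tend to specialise, one per prototype.
for p in prototypes:
    print(p, "->", sigmoid(p @ W + b_h).round(2))
```

The point of the toy example is only that the machine discovers the two underlying patterns from noisy examples without being told they exist, which is the capability Hinton and his colleagues demonstrated.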
However, while the Boltzmann machine proved capable, it was inefficient and slow, and it is rarely used today. It has largely been superseded by faster machine learning architectures such as transformer models, which power large language models like ChatGPT.
At the press conference announcing the prize, Hinton was bullish on the impact that his and Hopfield’s discoveries would have. “It will be comparable with the industrial revolution, but instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability,” he said. “We have no experience of what it’s like to have things smarter than us. It’s going to be wonderful in many respects… but we also have to worry about a number of bad consequences, particularly the threat of these things getting out of control.”