The announcement of artificial intelligence researchers John Hopfield and Geoffrey Hinton as this year’s Nobel Prize winners in physics sparked both celebration and consternation over the status of AI in science and society. In Japan, however, another feeling dominates: frustration.
“Japanese researchers should have won too,” an editorial in the Asahi Shimbun newspaper proclaimed. Congratulating Hopfield and Hinton, the Japanese Neural Network Society emphatically added: “We should not forget the role played by pioneering Japanese researchers in building the foundation of neural network research.”
Neural networks are at the center of contemporary AI. These are models that allow machines to learn on their own, through structures that are inspired, often only loosely, by the human brain.
So who are these pioneering Japanese AI researchers?
In 1967, Shun’ichi Amari proposed a method of adaptive pattern classification, which allows neural networks to self-adjust how they categorize patterns through exposure to repeated training examples. Amari’s research anticipated a similar method known as “backpropagation”, one of Hinton’s major contributions to the field.
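The core idea of adaptive classification can be illustrated with a toy error-correction rule: a classifier nudges its weights whenever a training example is misclassified, gradually self-adjusting its categories. This is a generic perceptron-style sketch for illustration only, not Amari’s actual 1967 formulation:

```python
# Toy adaptive classifier: weights self-adjust through repeated
# exposure to labeled examples (error-correction learning).
def train(examples, labels, lr=0.1, epochs=20):
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):  # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # misclassified: nudge weights toward the correct label
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Two categories of 2D points (an AND-like labeling).
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
y = [-1, -1, -1, 1]
w, b = train(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X]
```

After training, the classifier reproduces the labels it was shown, having adjusted its own decision boundary rather than being given one.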
In 1972, Amari sketched a learning algorithm (a set of rules for performing a particular task) that was mathematically equivalent to the method in Hopfield’s 1982 paper on associative memory cited by the Nobel committee, which allowed neural networks to recognize patterns despite partial or corrupted inputs.
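Associative memory of this kind can be sketched in a few lines: a pattern is stored in a weight matrix, and a corrupted copy of the pattern then settles back to the stored version under repeated updates. This is a standard textbook toy, not the 1972 or 1982 formulations themselves:

```python
import numpy as np

# Store one +/-1 pattern in a Hopfield-style weight matrix (Hebbian rule).
pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

# Corrupt two entries of the pattern.
probe = pattern.copy()
probe[0] *= -1
probe[3] *= -1

# Each neuron repeatedly takes the sign of its weighted input,
# pulling the corrupted state back toward the stored pattern.
state = probe.astype(float)
for _ in range(5):
    state = np.sign(W @ state)

recovered = state.astype(int)
```

Despite two flipped entries, the network recovers the original pattern: recognition from a partial or corrupted input.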
The North American researchers worked separately from the groups in Japan and reached their conclusions independently.
Later, in 1979, Kunihiko Fukushima created the first multi-layer convolutional neural network. This technology has been the backbone of the recent boom in deep learning, an approach to AI built on neural networks that learn with more complex architectures and, often, without supervision. If this year’s Nobel Prize rewarded “fundamental discoveries and inventions enabling machine learning with artificial neural networks”, why not reward Amari and Fukushima?
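The key operation in a convolutional network is simple to state: the same small filter is slid across an image, so one feature detector is reused at every location. A minimal sketch, with a hypothetical two-element edge filter:

```python
# Minimal 2D "valid" convolution: slide a filter over an image,
# reusing the same weights at every position, as convolutional layers do.
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            s = sum(image[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out

# An image with a vertical edge, and a filter that responds
# wherever intensity jumps from left to right.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
k = [[-1, 1]]
edges = conv2d(img, k)
```

The output lights up only along the edge, in every row, because the same filter is applied everywhere; Fukushima’s Neocognitron stacked layers of such shared detectors.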
One-sided perspectives
The AI community itself is debating this question. There are compelling arguments for why Hopfield and Hinton fit better into the Nobel category of “physics” and why national balance was important, given that the Peace Prize was awarded to Japan’s Nihon Hidankyō.
So why should we still worry?
The answer lies in the risks of historical bias. Our standard view of artificial neural networks is a story rooted in the North Atlantic – and, overwhelmingly, in North America. AI experienced a period of rapid development in the 1950s and 1960s.
In the 1970s, it entered an “AI winter”, during which research stagnated. Winter finally turned to spring in the 1980s, thanks to figures like Hopfield and Hinton. The latter’s links with Google and OpenAI are said to have fueled the current boom in AI based on neural networks.
And yet, it was precisely during this so-called “winter” that Finnish, Japanese and Ukrainian researchers – among others – laid the foundations for deep learning. Integrating these developments into our history of AI is essential as society confronts this transformative technology. We must broaden what we mean when we talk about AI, beyond the current vision proposed by Silicon Valley.
Over the past year, Yasuhiro Okazawa of Kyoto University, Masahiro Maejima of the National Museum of Nature and Science in Tokyo, and I have led an oral history project focused on Kunihiko Fukushima and the laboratory at NHK where he developed the Neocognitron, a visual pattern recognition system that became the basis of convolutional neural networks.
NHK is the Japanese public broadcaster, equivalent to the BBC. To our surprise, we discovered that the context in which Fukushima’s research emerged had its roots in psychological and physiological studies of television audiences. This led NHK to create, in 1965, a laboratory for “vision bionics”. Here, television engineers could help advance knowledge of human psychology and physiology (how living organisms work).
Indeed, Fukushima considered his own work to be dedicated to understanding biological organisms rather than AI in the strict sense. Neural networks were designed as “simulations” of how visual information processing might work in the brain, and were meant to contribute to progress in physiological research. The Neocognitron was specifically intended to help settle debates over whether complex sensory stimuli corresponded to the activation of a particular neuron (nerve cell) in the brain or to a pattern of activation distributed across a population of neurons.
Human approaches
Engineer Takayuki Itō, who worked under Fukushima, called his mentor’s approach “human science.” But in the 1960s, American researchers abandoned artificial neural networks based on human models. They were more concerned with applying statistical methods to large data sets than with the patient study of the complexities of the brain. In this way, imitating human cognition became merely an incidental metaphor.
When Fukushima visited the United States in 1968, he found few researchers sympathetic to his human-brain-centered approach to AI; many confused his work with “medical engineering.” His lack of interest in upgrading the Neocognitron with larger data sets ultimately put him at odds with NHK’s growing demand for applied AI-based technologies, leading to his resignation in 1988.
For Fukushima, the development of neural networks was never about their practical use in society, for example to replace human labor or to make decisions. Rather, they represented an attempt to understand what made advanced vertebrates like humans unique, and in this way to make engineering more humane.
Indeed, as Takayuki Itō noted in one of our interviews, this “human science” approach can lend itself to a greater embrace of diversity. Although Fukushima himself did not follow this path, Itō’s work since the late 1990s has focused on “accessibility” in relation to the cognitive traits of older adults and disabled people. This work also recognizes different types of intelligence than traditional AI research does.
Fukushima today maintains a measured distance from machine learning. “My position,” he said, “has always been to learn from the brain.” Compared to Fukushima, AI researchers outside Japan have taken shortcuts. The more traditional AI research leaves the human brain aside, the more it gives rise to technologies that are difficult to understand and control. Stripped of its roots in biological processes, AI resists explanation: we can no longer say why it works or how it makes decisions. This is called the “black box” problem.
Would a return to a “human science” approach resolve some of these problems? Probably not on its own, because the genie is out of the bottle. But amid global concerns about superintelligent AI bringing about the end of humanity, we should consider a global story filled with alternative understandings of AI. That story is unfortunately left out of this year’s Nobel Prize in Physics.