A Deep Learning Revolution for Science
Francis Crick Professor, Salk Institute for Biological Studies
Distinguished Professor, University of California, San Diego
The neural network pioneers in the 1980s were a highly interdisciplinary group of researchers, including many physicists such as John Hopfield, who brought with him intuitions from spin glasses and condensed matter physics. Hopfield nets inspired Geoffrey Hinton and me to invent the Boltzmann machine, a neural network that could be trained to solve complex problems with many layers of hidden units between the input and the output.

Thirty years later, with a million times more computing power and data, learning algorithms for deep neural networks have delivered solutions to many problems in AI that had eluded previous methods based on symbols and rules. Another learning algorithm from my lab, called Independent Component Analysis, solved the "cocktail party problem" of blind source separation of mixed signals, which also has many practical applications.

These and other machine learning algorithms are being applied to many scientific problems with remarkable results: astronomers use neural networks to find Einstein rings; accelerator physicists use neural networks to select events and reconstruct tracks; and the resolution of optical microscopes at the diffraction limit has been enhanced to nanometer-scale super-resolution by neural networks. These and many other applications of neural networks are having a far-reaching influence on science in the 21st century.
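To make the cocktail-party idea concrete, here is a minimal sketch of blind source separation in the spirit of FastICA, using only NumPy. Two independent signals (a sine and a square wave) stand in for two voices, a hypothetical mixing matrix `A` stands in for the room, and an ICA-style fixed-point iteration with a tanh nonlinearity unmixes them. This is an illustrative toy, not the original Infomax ICA algorithm from Sejnowski's lab.

```python
# Toy blind source separation: two independent sources, linearly mixed,
# then recovered (up to sign and order) by a FastICA-style iteration.
# All names (A, S, X, W) are illustrative assumptions, not from the essay.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = np.linspace(0, 8, n)

# Two independent "voices": a sine wave and a square wave.
S = np.stack([np.sin(2 * np.pi * t),
              np.sign(np.sin(3 * np.pi * t))])   # sources, shape (2, n)

A = np.array([[1.0, 0.5],
              [0.5, 1.0]])                       # unknown mixing matrix
X = A @ S                                        # observed mixtures

# Whiten: zero mean, identity covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA fixed-point updates with symmetric decorrelation.
W = rng.standard_normal((2, 2))
for _ in range(100):
    WX = W @ Xw
    g, g_prime = np.tanh(WX), 1.0 - np.tanh(WX) ** 2
    W_new = g @ Xw.T / n - np.diag(g_prime.mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)              # (W W^T)^(-1/2) W via SVD
    W = u @ vt

recovered = W @ Xw                               # unmixed signal estimates
```

Each row of `recovered` should correlate almost perfectly with one of the original sources; the permutation and sign ambiguity is inherent to ICA, since swapping or flipping sources and compensating in the mixing matrix yields the same observations.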