Newswise — This is the second in a two-part series exploring a sampling of the ways artificial intelligence is helping researchers around the world perform cutting-edge science with the Lab’s world-class facilities and instruments. Read the first part here.
Every day, researchers at the Department of Energy’s (DOE) SLAC National Accelerator Laboratory and around the world are collaborating to answer fundamental questions about how the universe works – and inventing powerful tools to aid this quest. Many of these new tools are based on machine learning, a type of artificial intelligence.
“AI has the potential to not only accelerate science, but also change the way we do science in the laboratory,” said Daniel Ratner, a leading scientist in AI and machine learning at SLAC.
Researchers are using AI to conduct more efficient experiments at the lab’s X-ray facilities, the Stanford Synchrotron Radiation Lightsource (SSRL) and the Linac Coherent Light Source (LCLS), and to manage massive amounts of complex data from projects such as the National Science Foundation (NSF)-DOE Vera C. Rubin Observatory, which SLAC operates jointly with NSF NOIRLab. Machine learning is also becoming essential to tackling the big questions and pressing problems facing our world – discovering drugs to improve human health, finding better battery materials for sustainable energy, understanding our origins and the universe, and much more. Nationally, SLAC is part of the DOE laboratory complex that has come together around initiatives such as Frontiers in Artificial Intelligence for Science, Security and Technology (FASST), which aims to address national and global challenges.
Unraveling the mysteries of the universe
Among SLAC’s largest projects is the NSF-DOE Vera C. Rubin Observatory, for which SLAC built the Legacy Survey of Space and Time (LSST) Camera, the world’s largest digital camera for astrophysics and cosmology. SLAC will help operate the observatory and its LSST, a ten-year survey aimed at understanding dark matter, dark energy and more. “The Rubin Observatory will offer us a whole new window on the evolution of the universe. To make sense of this new data and use it to understand the nature of dark matter and dark energy, we will need new tools,” said Risa Wechsler, the Humanities and Sciences Professor of Physics and of Particle Physics and Astrophysics at Stanford University and director of the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at SLAC and Stanford.
Next year, Rubin will begin nightly surveys of the Southern Hemisphere sky, sending about 20 terabytes of images each night to several data centers, including the US Data Facility at SLAC. These images are so large that displaying one at full resolution would take 375 4K high-definition TVs – and manually sifting through them to track billions of celestial objects and identify anything new or unusual would be impossible.
AI, on the other hand, excels at detecting anomalies. For example, machine learning can help determine more quickly whether a change in brightness, one type of anomaly, is due to an irrelevant artifact or to something genuinely new, such as a supernova in a distant galaxy or a theorized but previously undetected astronomical object. “You’re trying to extract a subtle signal from a massive, complex data set that is only tractable using AI/machine learning tools and techniques,” said Adam Bolton, SLAC scientist and manager of the Rubin US Data Facility.
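Rubin’s actual pipelines rely on trained machine learning classifiers far more sophisticated than anything shown here, but the basic idea of flagging a suspicious brightness change can be sketched with a toy detector. Everything below (the function name, the threshold, the example light curve) is illustrative, not part of the observatory’s software:

```python
import numpy as np

def flag_brightness_anomalies(fluxes, threshold=5.0):
    """Flag epochs whose brightness deviates strongly from an object's baseline.

    A toy robust z-score detector: the median gives the baseline brightness,
    the median absolute deviation (MAD) gives a robust scatter estimate, and
    epochs many deviations away are flagged for follow-up.
    """
    fluxes = np.asarray(fluxes, dtype=float)
    baseline = np.median(fluxes)
    mad = np.median(np.abs(fluxes - baseline)) or 1e-9  # avoid divide-by-zero
    scores = 0.6745 * (fluxes - baseline) / mad         # approximate z-scores
    return np.where(np.abs(scores) > threshold)[0]

# A flat light curve with a sudden, sustained brightening at the end
lc = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 18.0, 19.5, 21.0]
print(flag_brightness_anomalies(lc))  # flags the last three epochs
```

A real classifier would go further, using image cutouts and contextual features to tell a genuine transient from a detector artifact, which a single-series threshold cannot do.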
Machine learning can also help study the smallest objects in the universe, such as neutrinos, thought to be the most abundant matter particles in the universe. The Deep Underground Neutrino Experiment (DUNE) will help scientists study the properties of neutrinos to answer fundamental questions about the origins and evolution of the universe, such as how and why the universe came to be dominated by matter. The DUNE near detector, currently under construction at Fermi National Accelerator Laboratory, will capture images of thousands of neutrino interactions every day. Manually analyzing the millions of images expected from the near detector would be a daunting task, so work is underway to develop and train a neural network – a machine learning approach inspired by the way neurons work in the human brain – to analyze the images automatically.
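The networks used for this kind of image analysis are typically convolutional: small filters slide over the event image and respond strongly where a learned feature, such as a particle track, is present. As a minimal sketch (not DUNE’s actual software), a single hand-fixed filter can illustrate the building block that such networks stack and learn by the thousands:

```python
import numpy as np

# Toy "event image": an 8x8 pixel grid with a diagonal particle track
image = np.zeros((8, 8))
np.fill_diagonal(image, 1.0)

# A single 3x3 filter tuned to diagonal edges; real networks learn many such
# filters from data instead of fixing them by hand
kernel = np.eye(3) / 3.0

def conv2d_valid(img, k):
    """Slide the filter over the image ('valid' convolution, no padding)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

response = conv2d_valid(image, kernel)
print(response.max())  # strongest response lies along the track
```

Stacking many learned filters, nonlinearities and pooling layers turns this primitive into a classifier that can label an interaction image in milliseconds.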
“Current manual and software methods will take months or even years to process and analyze millions of images,” said SLAC scientist Kazuhiro Terao. “With AI, we are very excited to achieve high-quality results in just a week or two. This will speed up the process of discovering physics.”
Transforming drug design and materials discovery
AI-assisted research could also have more direct and concrete benefits. For example, researchers working to discover or design new materials and drugs face the daunting task of choosing from a pool of hundreds or even millions of candidates. Machine learning tools, such as Bayesian optimization algorithms, can accelerate the process by suggesting what step to take next or which protein or material to try, instead of researchers painstakingly testing one candidate after another.
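The idea behind Bayesian optimization is to fit a statistical surrogate model to the experiments done so far and use it to pick the next candidate that best balances promise against uncertainty. The sketch below is a generic toy version (a small Gaussian-process surrogate with an upper-confidence-bound rule over a synthetic candidate pool), not any group’s actual pipeline; the quadratic `objective` stands in for an expensive lab measurement:

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel: nearby candidates behave similarly."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_tried, y_tried, x_pool, noise=1e-6):
    """Gaussian-process posterior mean and std at every pool candidate."""
    K = rbf(x_tried, x_tried) + noise * np.eye(len(x_tried))
    Ks = rbf(x_tried, x_pool)
    Kinv = np.linalg.inv(K)
    mean = Ks.T @ Kinv @ y_tried
    var = 1.0 - np.einsum('ij,ik,kj->j', Ks, Kinv, Ks)
    return mean, np.sqrt(np.clip(var, 0.0, None))

def objective(x):
    # Stand-in for an expensive measurement (e.g., a binding affinity);
    # the unknown optimum sits at x = 0.7
    return -(x - 0.7) ** 2

pool = np.linspace(0, 1, 101)   # discrete pool of candidate "materials"
tried_idx = [0, 50]             # start from two initial measurements
for _ in range(8):
    x_t = pool[tried_idx]
    y_t = objective(x_t)
    mean, std = gp_posterior(x_t, y_t, pool)
    ucb = mean + 2.0 * std      # upper confidence bound: promise + uncertainty
    ucb[tried_idx] = -np.inf    # never repeat an experiment
    tried_idx.append(int(np.argmax(ucb)))

best = pool[tried_idx[np.argmax(objective(pool[tried_idx]))]]
print(f"best candidate found: x = {best:.2f}")
```

With only ten measurements out of 101 candidates, the loop homes in near the optimum, which is exactly the appeal when each "measurement" is a day of beam time.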
This is one of the goals of a DOE-funded project by the BRaVE consortium, led by SSRL scientists Derek Mendez and Aina Cohen, who co-directs the SSRL Structural Molecular Biology Resource. The team is advancing biopharmaceutical development in the United States, in part by building new AI tools that simplify the time-consuming and complex steps of structure-based drug design. For example, AI tools at SSRL’s structural molecular biology beamlines analyze diffraction images, helping researchers understand the structure of biological molecules, how they work, and how they interact with new drug-like compounds. These tools provide real-time feedback on data quality, such as the integrity of the protein crystals being studied. Ideally, single crystals are used for these experiments, but crystals are not always well ordered. Sometimes they break or stick together, potentially compromising data quality, and these problems often become apparent only after researchers have analyzed the collected data.
These experiments produce hundreds or even thousands of diffraction patterns at high rates. It would be nearly impossible to manually inspect all of this data to weed out patterns from faulty crystals, so researchers are turning to AI tools to automate the process. “We developed an AI model to evaluate the quality of diffraction pattern images 100 times faster than the process we used before,” Mendez said. “I like using AI to simplify time-consuming tasks. This can really help researchers free up time to explore other, more interesting aspects of their research. Overall, I’m excited to find ways that artificial intelligence and natural intelligence can work together to improve the quality of research.”
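The article does not describe the team’s model, but the triage task it performs is a standard supervised classification problem: score each pattern as usable or not, fast enough to keep up with collection. A minimal stand-in, assuming each pattern has been reduced to a couple of hand-crafted summary features (here synthetic numbers, e.g. spot sharpness and ice-ring strength), is a logistic-regression classifier trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: each diffraction pattern is summarized by two
# features; label 1 = usable single crystal, 0 = split/degraded crystal.
good = rng.normal([0.8, 0.2], 0.1, size=(200, 2))
bad = rng.normal([0.3, 0.7], 0.1, size=(200, 2))
X = np.vstack([good, bad])
y = np.array([1] * 200 + [0] * 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression fit with plain gradient descent on the log-loss
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

Once trained, scoring a new pattern is a single dot product, which is why a learned model can beat a slower rule-based inspection pipeline by orders of magnitude in throughput; production models would work from the images themselves rather than two toy features.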
LCLS researchers are now using these tools, Cohen said – and, according to Mendez, there is growing interplay between the two facilities. “LCLS is working on diffraction analysis tools that we want to apply at SSRL,” Mendez added. “We’re building this synergy.”
Frédéric Poitevin, a researcher at LCLS, agrees. “Working together is essential to address the unique challenges of both facilities.”
Among the many AI tools developed by Poitevin’s team is one that accelerates the analysis of complex diffraction images, helping researchers visualize the structure and behavior of biological molecules in action. Extracting this information means accounting for subtle variations in the intensity of millions of pixels across several hundred thousand images. Kevin Dalton, a scientist on Poitevin’s team, trained an AI model that can analyze that volume of data and pick out important weak signals much more quickly and accurately than traditional methods.
“With the AI models that Kevin is developing, we have the potential at LCLS and SSRL to open a whole new window into the molecular structure and behavior that we have been looking for but have never been able to see with traditional approaches,” Poitevin said.
Opening the doors to discovery and innovation
Beyond specific AI projects, Wechsler said, the growing importance and utility of AI across many scientific fields is opening more doors for collaboration among SLAC scientists, engineers and students. “I’m excited about growing the SLAC AI community,” she said. “We still have a lot to do and to learn from each other across disciplines. We have a lot in common in what we want to accomplish in astronomy, particle physics and other areas of laboratory science, so there is a lot of potential.”
Growing opportunities for collaboration are pushing SLAC teams to identify areas where AI tools are needed and develop workflows that can be applied across the lab. This effort is an important part of the lab’s overall strategy to harness AI and computing power to advance science today and tomorrow.
Ratner added, “Also leveraging our partnership with AI experts at Stanford University, we are working together to deepen our knowledge of AI and create AI tools that enable discovery and innovative technology to explore science at the largest, smallest and fastest scales.”
The research was supported by the DOE Office of Science. LCLS and SSRL are DOE Office of Science user facilities.