AI built for speech now decodes the language of earthquakes.
A team of researchers from the Division of Earth and Environmental Sciences at Los Alamos National Laboratory repurposed Wav2Vec 2.0, Meta's AI model designed for speech recognition, to analyze seismic signals from the 2018 collapse of Hawaii's Kīlauea volcano.
Their results, published in Nature Communications, suggest that faults emit distinct signals as they shift – patterns that AI can now track in real time. While this doesn't mean AI can predict earthquakes, the study marks an important step toward understanding how faults behave before a slip event.
“Seismic records are acoustic measurements of waves passing through the solid Earth,” said Christopher Johnson, one of the study's lead researchers. “From a signal processing perspective, many similar techniques are applied to both audio and seismic waveform analysis.”
Large earthquakes don't just shake the ground – they upend economies. Over the past five years, earthquakes in Japan, Turkey and California have caused tens of billions of dollars in damage and displaced millions of people.
That's where AI comes in. Led by Johnson, together with Kun Wang and Paul Johnson, the Los Alamos team tested whether speech recognition technology could make sense of fault movements – deciphering tremors like words in a sentence.
To test their approach, the team used data from the dramatic 2018 collapse of Hawaii's Kīlauea caldera, which triggered a series of earthquakes over three months.
The AI analyzed seismic waveforms and mapped them to ground movement in real time, revealing that faults may “speak” in patterns resembling human speech.
Speech recognition models like Wav2Vec 2.0 are well suited to this task because they excel at identifying complex patterns in time-series data – whether the signals come from human speech or from earthquakes.
The AI model outperformed traditional methods such as gradient-boosted trees, which struggle with the unpredictable nature of seismic signals. Gradient-boosted trees build a sequence of decision trees, refining predictions by correcting the previous trees' errors at each step.
However, such models struggle with highly variable continuous signals like seismic waveforms. Deep learning models such as Wav2Vec 2.0, by contrast, excel at identifying the underlying patterns.
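For readers who want a concrete picture of what the traditional baseline looks like, here is a rough sketch – not the study's actual code – of a gradient-boosted regressor trained on hand-engineered statistics of fixed-length waveform windows. The feature choices and data shapes are illustrative assumptions.

```python
# Illustrative sketch of a gradient-boosted baseline (hypothetical features,
# not the authors' pipeline): regress fault displacement from summary
# statistics of raw seismic waveform windows.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def window_features(window: np.ndarray) -> np.ndarray:
    """Summarize one waveform window with a few scalar statistics."""
    return np.array([
        window.mean(),
        window.std(),
        np.abs(window).max(),   # peak amplitude
        np.mean(window ** 2),   # signal energy
    ])

def fit_baseline(windows: np.ndarray, displacement: np.ndarray):
    # windows: (n_windows, n_samples) waveform segments
    # displacement: (n_windows,) contemporaneous displacement targets
    X = np.vstack([window_features(w) for w in windows])
    model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
    return model.fit(X, displacement)
```

Because each window is first collapsed into a handful of summary numbers, a model like this can miss the fine temporal structure that a waveform-level deep network retains – one plausible reason it lags behind on highly variable continuous signals.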
How the AI was trained to listen to the Earth
Unlike earlier machine learning models that required manually labeled training data, the researchers used a self-supervised learning approach to train Wav2Vec 2.0. The model was pretrained on continuous seismic waveforms, then fine-tuned using real data from the Kīlauea collapse sequence.
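A minimal sketch of what the fine-tuning stage could look like, assuming a Hugging Face Wav2Vec 2.0 backbone: the checkpoint name, mean pooling and linear head below are illustrative choices, not the authors' exact architecture.

```python
# Sketch of fine-tuning a Wav2Vec 2.0 encoder for displacement regression.
# The public speech checkpoint stands in for a seismically pretrained model.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class DisplacementRegressor(nn.Module):
    def __init__(self, checkpoint: str = "facebook/wav2vec2-base"):
        super().__init__()
        # Encoder pretrained with self-supervision on raw waveforms.
        self.encoder = Wav2Vec2Model.from_pretrained(checkpoint)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) normalized seismic windows
        hidden = self.encoder(waveform).last_hidden_state  # (B, T, H)
        pooled = hidden.mean(dim=1)                        # average over time
        return self.head(pooled).squeeze(-1)               # (B,) displacement
```

Fine-tuning would then proceed with a standard regression loss, such as mean squared error against the observed ground displacement for each window.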
NVIDIA accelerated computing played a crucial role in processing large volumes of seismic waveform data in parallel. High-performance NVIDIA GPUs sped up training, allowing the AI to efficiently extract meaningful patterns from continuous seismic signals.
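Continuing the sketch above, a GPU-backed training loop might look roughly like this; the tensors, batch size and optimizer settings are placeholders, not the study's configuration.

```python
# Rough GPU training loop for the DisplacementRegressor sketched above
# (placeholder data and hyperparameters).
import torch
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"
model = DisplacementRegressor().to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = torch.nn.MSELoss()

windows = torch.randn(64, 16000)   # placeholder waveform windows
targets = torch.randn(64)          # placeholder displacement targets
loader = DataLoader(TensorDataset(windows, targets), batch_size=16, shuffle=True)

for batch_waveforms, batch_targets in loader:
    # Batches are processed in parallel on the GPU.
    batch_waveforms = batch_waveforms.to(device)
    batch_targets = batch_targets.to(device)
    loss = loss_fn(model(batch_waveforms), batch_targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```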
What's still missing: can AI predict earthquakes?
While the AI showed promise in tracking fault displacement in real time, it was less effective at forecasting future movement. Attempts to train the model for near-failure predictions – essentially, asking it to anticipate a slip event before it occurs – yielded inconclusive results.
“We need to expand the training data to include continuous data from other seismic networks that contain more variations in natural and anthropogenic signals,” Johnson said.
A step toward smarter seismic monitoring
Despite the forecasting challenges, the findings mark an intriguing advance in earthquake research. The study suggests that AI models designed for speech recognition may be particularly well suited to interpreting the complex, evolving signals that faults generate over time.
“This research, as applied to tectonic fault systems, is still in its infancy,” Johnson said. “The study is more analogous to data from laboratory experiments than to major earthquake fault zones, which have much longer recurrence intervals. Extending these efforts to real-world forecasting will require further model development with physics-based constraints.”
So no, speech-based AI models can't predict earthquakes yet. But this research suggests they might one day – if scientists can teach them to listen more carefully.
Read the full paper, “Automatic speech recognition predicts contemporaneous earthquake fault displacement,” to dive deeper into the science behind this groundbreaking research.