Improving avalanche forecasting with AI

As the climate continues to change, avalanche experts look to cutting-edge tech to help keep the public safe.

By Luc Alper-Leroux 

On December 18, 2020, 41-year-old Brandon Jones was snowmobiling in Wyoming’s Salt River Range when he inadvertently triggered the avalanche that took his life. That day, the avalanche danger rating for all elevations in the region was listed as Moderate (Level 2). Even though he was able to deploy an airbag, he became the first of 37 people who would suffer a similar fate during a record year for avalanche deaths in the United States.

Last year, the National Avalanche Center reported the highest number of recreational avalanche-related deaths since 1950. As the global climate becomes increasingly unstable, avalanche frequency and variability are also expected to increase.

Photo by Nicolas Cool on Unsplash

“The recent research into climate change suggests we may get more stuck weather patterns as the Arctic sea ice melts,” says Karl Birkeland, director of the National Avalanche Center. “So we might end up with more prolonged stormy periods followed by prolonged dry periods, followed by prolonged stormy periods. Those are the kinds of situations that definitely could lead to more dangerous avalanche conditions.”

Avalanche centers in places like Utah, Colorado and Switzerland have the difficult job of warning the public about avalanche threats. To address the urgent need for better forecasting and comprehensive avalanche data, avalanche specialists are exploring the use of artificial intelligence (AI) and machine learning (ML) in their work.

Predicting the unpredictable

Currently, experts monitor snowfall, wind and temperature data from local weather stations, and measure the snowpack themselves. Once they synthesize the data, they make an informed assessment about avalanche danger.

“We form an assessment on the potential for avalanches by going out into the field and making measurements, whether that be fixed study plots or targeted location analysis throughout the terrain,” says Ethan Greene, director of the Colorado Avalanche Information Center.

There are too many parameters for a human brain to process all the data needed to predict an avalanche. For these purposes, ML is perfectly suited.

—Dr. Alec van Herwijnen, leader, WSL Institute for Snow and Avalanche Research

Warning the public about potential avalanche danger requires coordination between avalanche monitoring centers, weather stations and the media. As climate instability increases, avalanche management specialists in the U.S. and abroad have turned to AI and ML to anticipate future avalanches and provide better real-time data for public alerts.

“There are too many parameters for a human brain to process all the data needed to predict an avalanche. For these purposes, ML is perfectly suited,” says Dr. Alec van Herwijnen, leader of the avalanche formation team at the WSL Institute for Snow and Avalanche Research in Davos, Switzerland.

AI unpacks the snowpacks

Most research into AI and ML applications for avalanche prediction is still in the early stages of development, but what is being developed shows great promise. “This technology could be a game-changer,” says van Herwijnen. “It would be very similar to what happened in meteorology with the invention of precipitation radars. You go from a crude prediction to a more advanced prediction.”

Photo by Nathan Anderson on Unsplash

With funding from the Swiss Data Science Center, van Herwijnen and his team are developing algorithms to handle the vast amounts of data needed to predict possible snow avalanches. They’re using a random forest model (an ensemble machine learning method that combines the predictions of many decision trees) as the basis for the WSL’s avalanche prediction model, which would allow them to input data from weather stations and avalanche detection systems, as well as snow stratigraphy data modeled by the WSL.
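To make the concept concrete, here is a minimal sketch of how such a random forest classifier might be trained on station data. It is a generic illustration, not the WSL’s actual model; the CSV file, feature names, and label column are hypothetical stand-ins for the weather-station and snow-stratigraphy inputs described above.

```python
# A minimal sketch of a random-forest avalanche-day classifier, not the WSL's
# actual model. The CSV file, feature names, and label column are hypothetical
# stand-ins for weather-station and snow-stratigraphy inputs.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical dataset: one row per station-day, labeled 1 if an avalanche
# was recorded nearby that day, 0 otherwise.
df = pd.read_csv("station_days.csv")
features = ["new_snow_24h_cm", "wind_speed_ms", "air_temp_c", "weak_layer_depth_cm"]
X, y = df[features], df["avalanche_observed"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# A random forest is an ensemble of decision trees: each tree votes, and the
# forest averages those votes into a probability.
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Predicted probability of an avalanche day for unseen station-days.
print(model.predict_proba(X_test)[:, 1])
```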

With its greater data processing power, the AI-driven prediction model could streamline current avalanche analysis and cut the time needed to interpret avalanche data. Unfortunately, this promising approach still has limitations and will require more testing.

Any avalanche requires three components: a weak layer of snow, a sufficiently steep slope and a trigger. When the first two conditions are ripe for a slide, avalanche forecasters issue a warning to outdoor enthusiasts: Do not be that trigger.

Van Herwijnen’s team is hoping to better understand the conditions that were present in previous avalanches to help predict future avalanche danger. What makes that challenging is the lack of data—the majority of avalanches are either unobserved or unreported.

When you’re training an avalanche forecasting model, there’s not really a ground truth. That’s at the heart of why it’s difficult to create.

—Scott Chamberlin, founder of the Open Avalanche Project

“The main problem is that there are not many avalanche events we can base our predictions off of. Usually, ML needs lots of data to work. If that data is very imbalanced, only a tiny fraction of the data is usable, and it’s not enough to build a good model,” says van Herwijnen.
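The imbalance van Herwijnen describes can be pictured with a short, self-contained sketch on synthetic data (not real avalanche observations): avalanche days form a rare positive class, so a model tuned for raw accuracy can look good while missing nearly every slide. One common mitigation is to re-weight the rare class during training.

```python
# A self-contained illustration of the imbalance problem, using synthetic data
# rather than real avalanche observations: positives (avalanche days) are rare.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Simulate roughly 2% avalanche days among many quiet days.
X, y = make_classification(
    n_samples=20_000, n_features=10, weights=[0.98, 0.02], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# class_weight="balanced" up-weights the rare avalanche class during training.
model = RandomForestClassifier(
    n_estimators=300, class_weight="balanced", random_state=0
)
model.fit(X_train, y_train)

# Precision and recall on the rare class matter far more than overall accuracy,
# which would already be ~98% for a model that never predicts an avalanche.
print(classification_report(y_test, model.predict(X_test)))
```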

“I think the larger problem is data quality,” says Scott Chamberlin, founder of the Open Avalanche Project, an organization that is working to save lives by developing more accurate avalanche forecasting using ML. Based near Seattle, Washington, Chamberlin is a software engineer and backcountry skier who uses forecasts to inform his own decisions on the mountain.

Given the same inputs, forecasters often come up with different outputs, he explains. One person looks at the data and sees a high probability of avalanches, while another sees moderate. Even on a single slope, the results of field tests that directly probe the snow can vary greatly.

Photo by Kira Laktionov on Unsplash

“When you’re training an avalanche forecasting model, there’s not really a ground truth,” Chamberlin says. “That’s at the heart of why it’s difficult to create.”

Another issue, he says, is interpretability. Knowing that a forecast predicts high or low avalanche probability has some value to users, especially in areas with no forecasting at all, but knowing why the algorithm reached that conclusion is more valuable still. “Interpretability in deep learning is an emerging field. It can be difficult to know why the algorithm is choosing the values it does,” he says. “Providing that to users can help them correlate the forecast information with what they’re seeing on the ground.”
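One simple way to approximate that “why,” at least for a classical model like a random forest, is permutation importance: shuffle each input and measure how much the model’s skill degrades. The sketch below continues the hypothetical random forest example from earlier and is not the Open Avalanche Project’s approach; as Chamberlin notes, interpretability for deep learning models is harder and still an emerging field.

```python
# Permutation importance on the hypothetical random forest from the earlier
# sketch: `model`, `features`, `X_test`, and `y_test` are assumed from there.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Larger scores mean the model relied more on that input (e.g. wind speed or
# weak-layer depth), which forecasters can check against field observations.
for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")
```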

Looking ahead

Centers like the Colorado Avalanche Information Center and the National Avalanche Center are testing how AI and ML can improve disaster prediction, though trading old methods for new ones continues to challenge both teams. “As you start looking at how AI and ML can change our approach, you can start to shake out patterns and ideas that you didn’t really know existed before,” says Greene.

Chamberlin’s team is aiming for a model that agrees with historic avalanche forecasts at least 75% of the time. Right now, they’re at 69-70%. “We’re still iterating,” he says, adding that the data and code are open source so that others can evaluate and contribute to the project.
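The comparison Chamberlin describes can be pictured as a simple agreement score between the model’s danger ratings and the historic human-issued ratings for the same region-days. The numbers below are hypothetical, not the Open Avalanche Project’s data.

```python
# Hypothetical comparison of model output against historic human-issued danger
# ratings (levels 1-5) for the same region-days; not the project's actual data.
import numpy as np

human_forecast = np.array([2, 3, 3, 4, 2, 1, 3, 2])
model_forecast = np.array([2, 3, 4, 4, 2, 2, 3, 2])

# Fraction of days where the model matches the historic rating.
agreement = np.mean(human_forecast == model_forecast)
print(f"Agreement with historic forecasts: {agreement:.0%}")  # 75% here
```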

As the climate changes ever more rapidly, experts will need all the tools they can get to gain an edge. If these new AI-driven data analysis methods are successful at predicting slides, they could prevent avalanche-related deaths like Brandon’s in the future.

Lead photo by Photoholgic on Unsplash