By Marty Graham, Contributor
Artificial intelligence evangelists working with Ugandan wildlife defenders are developing a sophisticated algorithm that not only predicts where poachers will be, but also helps rangers decide the most effective ways to thwart them—in places where poachers slaughter dozens of elephants each year.
Run by the Center for Artificial Intelligence for Social Good, the first phase of the predictive AI project known as PAWS (Protection Assistant for Wildlife Security) has helped park rangers find poachers’ snares and lairs in parts of the park where they had previously not looked.
Fresh from field tests (in which the AI model was tested for five months across about 175 square miles of patrol grounds within the 965-square-mile Queen Elizabeth Protected Area in Uganda), USC postgraduate student Shahrzad Gholami, who led the project, said the results are promising.
“Our field test shows that it’s a model that has accurate results and is efficient; it can do computations very quickly,” she explained. “It has the capability to make accurate predictions for the whole area.”
The Business of Poaching
Poaching is a profitable enterprise in national protected areas across Africa, where thousands of elephants and rhinos have been hunted and killed for their tusks and horns, which sell for up to $30,000 per pound—more than gold, which today sells for just under $20,000 per pound.
While there were 1.3 million elephants in the 1970s, today there are fewer than 500,000. According to the Convention on International Trade in Endangered Species, more than 20,000 African elephants were slaughtered in 2013 alone, mostly for their tusks. Although the numbers have declined slightly since, they remain higher than the elephant population's natural growth rate—a recipe for extinction, according to reports from the research arm of the international treaty.
In addition to the sheer number of killings, poaching methods are brutal. A common method is a snare made of wire twisted like a giant Slinky, which entangles an animal and leaves it helpless and easily slaughtered, if it hasn't already died of thirst.
With large land areas and few rangers, preventing such killings is difficult. Rangers travel in teams for safety, since poachers have shot and killed wildlife defenders, which limits how many swaths of land they can patrol at once. Because game preserves have limited workforces and resources, getting rangers to the illegal hunting grounds while poachers are in action is critically important.
The Center for Artificial Intelligence for Social Good has so far worked with a half dozen national protected areas in Uganda, South Africa, Malawi, and Zimbabwe on separate projects to develop AI tools for wildlife defenders to protect elephants and rhinos from such slaughter.
From Predictive to Prescriptive Analytics
The first project built to protect wildlife was a basic predictive model that successfully identified areas rangers didn’t often patrol. After using the tool, rangers returned from forays into these areas with trucks full of wire snares and huge traps.
Built on game theory and machine learning, the initial project proved effective enough to prompt even greater interest in using AI for quicker enforcement and in changing the defenders' game from defense to offense.
Gholami has worked extensively on adversarial strategies in game theory, and her team is the second to work in Queen Elizabeth Protected Area. Since the pilot project, they have developed a model that accounts for the limits of the initial data set, including changes in poacher behavior and locations over time.
“We developed a hybrid spatio-temporal algorithm and we were able to test it,” she said. The newer model makes predictions with an ensemble of decision trees, a statistical method in which each answer in a series of questions leads to a fork in the path and a fresh set of questions. Given enough underlying data, the tool can then make predictions even for areas with little historical data.
The spatio-temporal process looks something like this: the algorithm's massive data set includes detailed factors such as the known locations of animals, animal density, whether animals travel in herds or wander alone, the density of the forest, geospatial characteristics of the land, and distance to bodies of water. It also distinguishes watering holes from rivers and lakes, and accounts for distance from roads and towns.
The idea is that if historical data shows that certain things happen in areas where, say, grasses are plentiful, watering holes are nearby, roads are within three miles, and the forest is moderately dense, then the same things are likely occurring in other areas where conditions are similar or identical but there is no historical data.
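The similarity idea can be pictured with a toy nearest-neighbor sketch: score an unvisited grid cell by the historical labels of the most feature-similar patrolled cells. All feature names, values, and the nearest-neighbor shortcut below are illustrative; the actual PAWS model uses a far richer decision-tree ensemble over many more factors.

```python
import math

# Toy feature vectors per grid cell: (grass_density, dist_to_water_km, dist_to_road_km).
# Labels: 1 = snares found historically, 0 = none. All values are illustrative.
patrolled = [
    ((0.9, 0.5, 2.0), 1),
    ((0.8, 0.7, 2.5), 1),
    ((0.2, 4.0, 9.0), 0),
    ((0.3, 3.5, 8.0), 0),
]

def risk(cell, history, k=3):
    """Estimate snaring risk for an unpatrolled cell as the mean label of the
    k most feature-similar patrolled cells (Euclidean distance in feature space)."""
    dists = sorted((math.dist(cell, feats), label) for feats, label in history)
    nearest = dists[:k]
    return sum(label for _, label in nearest) / len(nearest)

# A new cell that resembles the snare-heavy cells: lush grass, water and a road nearby.
print(round(risk((0.85, 0.6, 2.2), patrolled), 2))  # → 0.67
```

A cell whose grass, water, and road features resemble historically snared cells scores high; a remote, sparse cell scores low, even though neither has ever been patrolled.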
“We know, for example, that poachers do not want to go deep into the area, they want to stay close to the roads,” Gholami said. “Then we introduce the historical data about what has happened in the park.”
Curing Data Bias
While the data set is comprehensive, it was refined through a series of trials and errors. The first iteration of the algorithm took the historical data at face value and suffered from data bias, Gholami explained.
“The main challenge in our domain was bias in the data and the bias was due to imperfect observations by the park rangers,” she explained. “Those data are only about the regions they have already been to and are heavily patrolling.”
In other words, the historical data lacked observations about areas of the park where rangers didn't go. (A small group patrolling a 965-square-mile preserve is going to miss a lot.) What's more, since rangers gathered data on handheld devices as they patrolled, the data disproportionately showed no poacher activity: poachers familiar with the patrol routes tended to stay away or flee as rangers approached.
Yet Gholami and her team devised a solution. “We trained the model to take into account the imperfect observations, in part to take what rangers had observed and apply it with the other factors to the areas,” Gholami recounted. “Our model successfully predicted where snaring activity would occur and where it would not, and rangers found more snares and snared animals than in areas of lower activity predictions.”
The most recent field tests in Queen Elizabeth National Park in Uganda reflect these efforts. The new algorithm now helps defenders, as Gholami calls the rangers, not just find hot spots but also patrol more effectively.
“We asked, how do we use these predictions to come up with the most effective route for the defenders?” she said. “We are trying to make the defenders less predictable and more effective.” For example, since the implementation of the updated algorithm, there have been no more routine patrols.
“If the patrols randomize, poachers are not able to learn the behavior of the defenders,” she says. “We are seeing that by this randomization we are causing some uncertainty for the poachers.”
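The randomization Gholami describes can be sketched as risk-weighted sampling: rather than always sending rangers down the single riskiest route, each route is chosen with probability proportional to its predicted risk, so high-risk areas are still visited most often but no fixed schedule emerges for poachers to learn. The route names and risk values below are hypothetical.

```python
import random

# Predicted snaring risk per candidate patrol route (illustrative values).
routes = {"river_loop": 0.7, "forest_edge": 0.2, "north_track": 0.1}

def pick_route(risk_by_route, rng):
    """Sample a route with probability proportional to its predicted risk,
    so patrols concentrate on risky areas without becoming predictable."""
    names = list(risk_by_route)
    weights = [risk_by_route[r] for r in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded here only to make the demo reproducible
week = [pick_route(routes, rng) for _ in range(7)]
print(week)
```

Over many days the riskiest route gets the most visits, but any given day's assignment is unpredictable from the outside.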
Sharing the AI Model
The success of the Queen Elizabeth trials has led to invitations to set up in other African preserves, as well as interest from China and Cambodia, where rhinos are also being hunted to extinction.
But the algorithm itself may be useful for situations other than thwarting poachers, and Gholami has begun to think about where else it could fit.
“The model can be used in domains that have similar challenges—wildlife protection, illegal oil refineries, illegal logging, fishing, dumping,” she said. “The factors are very similar.” And, it's clear, the opportunities to use AI to protect the planet are abundant.