Researchers in the US are using predictive artificial intelligence to help police officers classify crimes and determine whether they are gang-related.
Jeffrey Brantingham, a University of California at Los Angeles anthropology professor and pioneer in the field of predictive policing, presented research earlier this year that uses a neural network to predict if crimes are gang-related. The ultimate goal, Brantingham’s team writes in their paper, is “to automatically classify gang-related crimes where some crucial pieces of crime information are not currently available or are missing.”
Titled “Partially Generative Neural Networks for Gang Crime Classification,” the paper is the first from a research team Brantingham leads at the University of Southern California’s Center for Artificial Intelligence and Society (CAIS). Brantingham’s team is studying “Spatio-Temporal Game Theory & Real-Time Machine Learning for Adversarial Groups,” with a focus on countering extremism. The research is federally funded by the US Department of Defense via the Minerva grant, which awarded it $1.2 million (£881,000) over three years.
As the Verge notes, once police have classified someone as a gang member or charged them with a gang-related crime, that person faces longer prison sentences and additional charges, and can be barred from entering certain areas or associating with certain people.
Advanced technologies are being used around the US to address gang-related issues. Elementary schools in New Mexico are using face recognition to bar suspected gang members from entry, for example, while police in Chicago use AI to “predict” someone’s likelihood of either committing or being a victim of gang-related gun violence. AI is also used to predict crime more generally, led by PredPol, a predictive policing company Brantingham co-founded.
To conduct their research, Brantingham’s team fed crime data the Los Angeles Police Department collected between 2014 and 2016 into a neural network, a machine-learning model loosely modelled on the brain. The network is given police reports stripped of certain qualitative details (the part that is most time-intensive for officers to complete), generates a stand-in for that missing information itself, and then uses the algorithmically generated report as part of its overall prediction of whether the crime was gang-related.
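The two-stage idea can be sketched in miniature. This is a hypothetical illustration with synthetic data, not the paper’s model or the LAPD dataset: a first stage learns to generate the missing narrative representation from the structured fields of a report, and a second stage classifies using the structured fields plus the generated representation (least-squares models stand in for the neural networks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: each "report" has structured fields (X) and a
# narrative embedding (Z) that may be missing at prediction time.
n, dx, dz = 500, 4, 3
X = rng.normal(size=(n, dx))
true_W = rng.normal(size=(dx, dz))
Z = X @ true_W + 0.1 * rng.normal(size=(n, dz))   # narrative correlates with structure
y = (X[:, 0] + Z[:, 0] > 0).astype(int)           # synthetic "gang-related" label

# Stage 1: learn to generate the missing narrative embedding from the
# structured fields (least squares stands in for the generative network).
W_hat, *_ = np.linalg.lstsq(X, Z, rcond=None)

def features(X, Z):
    # Concatenate structured fields, narrative embedding, and a bias term.
    return np.hstack([X, Z, np.ones((len(X), 1))])

# Stage 2: fit a linear classifier on the full (structured + narrative) features.
w = np.linalg.lstsq(features(X, Z), y * 2.0 - 1.0, rcond=None)[0]

def predict(X_new):
    Z_gen = X_new @ W_hat                          # impute the missing narrative
    return (features(X_new, Z_gen) @ w > 0).astype(int)

preds = predict(X)
print("accuracy on synthetic data:", round((preds == y).mean(), 2))
```

The point of the sketch is only the shape of the pipeline: at prediction time the narrative is absent, so the classifier runs on a generated substitute rather than the real text.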
While more and more resources are being devoted to using AI to predict, prevent, or classify gang violence, some in the field are pushing back against the practice. Christo Wilson, assistant professor in computer and information science at Northeastern University, notes to the Verge that the AI’s predictions are only as good as the data used to train it. Activists have long claimed the LAPD is overzealous in applying the gang classification, and if so, the AI may simply reinforce those same biases.
“Now, maybe the LAPD is 100 per cent objective in their determinations of what is and is not gang-related,” Wilson told the Verge. “But if they are not, then the algorithm is going to reproduce their errors and biases.”
While this particular research is still nascent, policing work has become increasingly automated. “Hot spot” policing uses computational methods to direct officers to locations where crime is anticipated, rather than having them simply patrol. Number plate readers provide a host of information on drivers, and police have looked into autonomous vehicles that ticket speeding drivers. Axon, maker of the infamous Taser, uses AI to make its body cameras more useful, offering an automated redaction and archival service for body camera footage.
Like Brantingham’s research, Axon’s technology focuses on reducing work for police officers. In a statement announcing its newly formed Ethics Board, Axon said its “ultimate goal in developing AI technology is to remove the need for police officers to do manual paperwork entirely.” [The Verge]