Predictive Policing: Reinforcing Bias

[Illustration: a drawing of a robot pointing a gun over a map of Philadelphia]

By Kiera Patton

In the summertime, do you worry about police stopping you when you go out for the evening? Many people of color in Philadelphia have experienced racial profiling: police stops driven by racist assumptions that target Black and Brown people, especially the young. Mayor Kenney promised during his election campaign to end stop-and-frisk policing, but he changed his mind once in office.

“Predictive policing” is the use of computer software to predict who might break the law before they do. The software generates these predictions from data, such as statistics on how many arrests are made in a given neighborhood.

It is often assumed that data-driven methods will reduce bias, but in the case of predictive policing, the opposite may be true. A study by Cornell University found that predictive policing algorithms can generate feedback loops: the presence of police produces more arrests, the arrest data draws more police to the same area, and the added patrols produce still more recorded arrests. The results of predictive policing algorithms are only as good as the data fed into them, and policing is heavily biased in which geographic areas it targets.
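To see how small a push the loop needs, consider a toy model. (The district names, patrol split, and crime numbers below are invented for illustration; this is a sketch of the feedback mechanism, not any vendor's actual software.) Both districts have the same true crime rate; the only difference is a single extra arrest in one district's historical record:

```python
# Toy model of the feedback loop: two districts with the SAME true crime
# rate, but district A starts with one extra arrest in the historical data.
# All numbers here are hypothetical, chosen only to make the loop visible.

TRUE_CRIME_RATE = 0.05          # identical in both districts
CHANCES_PER_OFFICER = 20        # opportunities per officer to record an arrest

recorded_arrests = {"A": 11, "B": 10}   # the "data" the software is fed

for year in range(1, 6):
    # The predictive step: send the larger patrol share to whichever
    # district the arrest data flags as the hot spot.
    hot = max(recorded_arrests, key=recorded_arrests.get)
    patrols = {d: (70 if d == hot else 30) for d in recorded_arrests}

    # Police can only record crime where they patrol, so recorded arrests
    # track patrol strength rather than the (identical) true crime rate.
    for d in patrols:
        recorded_arrests[d] += round(
            patrols[d] * CHANCES_PER_OFFICER * TRUE_CRIME_RATE)

    print(f"year {year}: patrols={patrols}, recorded={recorded_arrests}")
```

After a single cycle, district A's one-arrest head start has grown to a gap of dozens; A keeps 70 percent of the patrols every year thereafter, and the data appears to confirm that the algorithm was right all along.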

A more mundane problem with predictive policing is that it makes inherently biased policing appear neutral, as if it were just another area of technical expertise. According to the Georgia Law Review, a system that computes recidivism risk from decades of biased sentencing data will likely produce biased results. Predictive algorithms and risk scores end up putting a sheen of neutrality on years of biased data. Low-income and minority neighborhoods are subjected to more police surveillance and harassment, and therefore more arrests, so the algorithms will inevitably flag them as areas to focus on.
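A small arithmetic sketch shows how this laundering works. (The groups, rates, and detection numbers are hypothetical.) Suppose two groups reoffend at exactly the same true rate, but one has been policed twice as heavily, so twice as many of its offenses ever made it into the records a risk score is computed from:

```python
# Hypothetical illustration: identical true reoffense rates, but group X
# was watched twice as closely, so more of its offenses were ever recorded.

TRUE_REOFFENSE_RATE = 0.20      # the same for both groups
PEOPLE_PER_GROUP = 1000

detection_rate = {"X": 0.8, "Y": 0.4}   # share of offenses police observed

for group, detected in detection_rate.items():
    recorded = PEOPLE_PER_GROUP * TRUE_REOFFENSE_RATE * detected
    risk_score = recorded / PEOPLE_PER_GROUP
    print(f"group {group}: computed 'risk' = {risk_score:.0%} "
          f"(true rate for both: {TRUE_REOFFENSE_RATE:.0%})")
```

The arithmetic itself is perfectly neutral, yet it scores group X at twice the “risk” of group Y (16 percent versus 8 percent). The score is simply echoing who was watched more closely.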

In Philadelphia, a 2002 program called Operation Safe Streets used statistical analysis to choose which geographic areas to target. The program was intended to reduce the drug trade in Kensington. Given that headlines about drugs in Kensington continue to this day, it obviously achieved very little. Even at the time, critics correctly pointed out that heavy-handed sweeps and patrols would accomplish nothing, while social programs like rehabilitation would do far more.

Predictive policing also dangerously blurs the line between innocence and guilt. In one Los Angeles program, for instance, people deemed statistically likely to commit crimes are placed under increased surveillance. That is not the same as arresting someone for a predicted future crime, but the extra surveillance can become a self-fulfilling prophecy of arrests and police harassment. And it is questionable whether statistics, which describe patterns across populations, can meaningfully predict a single person's behavior at all.

. . .

If you enjoyed this post, help us grow by contributing to the Philadelphia Partisan on Patreon.
