Increasing numbers of police departments today use computer algorithms that model human behaviour to predict crime, in the hope of preventing it. For some U.S. forces, it’s an alternative to controversial stop-and-frisk policies.
Advocates contend that this approach focuses on those most likely to commit crimes, allowing for better relationships between police and residents. But critics say the computer models perpetuate racial profiling and infringe on civil liberties with little accountability, especially when the forecasting models are built by companies that keep their methods secret.
Sociologist Andrew Papachristos does not see a big problem:
Data analytics have been used to combat cholera outbreaks and the transmission of HIV. Why shouldn’t we use similar systems of network science to address “outbreaks” of gun violence?
He envisions the police inviting high-risk offenders for pizza and soda in a church basement “instead of handcuffs.”
But that’s hardly all that’s involved. Among other things, predictive policing replaces real-time evidence-based suspicion of an individual with the statistical probability that a given person may commit a crime (pre-crime).
One model is Chicago’s “heat list” of potential criminals, based on criminal records, gang connections, social circles, and prior victimization in attacks. (Because media naturally focus on stories of innocent victims, we tend to overlook the fact that a criminal lifestyle greatly increases the likelihood of becoming a victim.)
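For illustration only: the actual Chicago model is proprietary, so every feature name and weight below is hypothetical. But a heat-list-style score presumably works something like a weighted sum over the kinds of inputs the article lists:

```python
# Toy sketch of a "heat list"-style risk score. The real model is
# proprietary; all feature names and weights here are invented.

def risk_score(person):
    # Hypothetical weights over the inputs the article mentions.
    weights = {
        "prior_arrests": 2.0,       # criminal record
        "gang_affiliated": 3.0,     # gang connections
        "high_risk_contacts": 1.5,  # social circle (co-arrest network)
        "times_victimized": 2.5,    # prior victimization in attacks
    }
    return sum(weights[k] * person.get(k, 0) for k in weights)

# Example: two prior arrests, a gang tie, three high-risk contacts,
# one prior victimization.
example = {"prior_arrests": 2, "gang_affiliated": 1,
           "high_risk_contacts": 3, "times_victimized": 1}
print(risk_score(example))  # 2*2.0 + 3.0 + 3*1.5 + 2.5 = 14.0
```

The point of the sketch is that such a score is just arithmetic over recorded features; everything contested about it lives in which features are chosen, how they are weighted, and who gets to see either.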
The risk-terrain model, said to be a more community-friendly approach, “focuses on the geographical characteristics that attract criminals to hotspots, rather than the people who happen to be inside a hotspot.” The problem with both models is that the police are treating citizens as if they were spies under surveillance.
Does predictive policing reduce crime? That’s hard to say.
Some cities have found that it does. Kansas City reported a 20% drop in one year, after a stable rate of about 114 murders a year for over four decades. But overall assaults increased, and the drop tapered off. As Canada’s National Post reports, “In other places, the success of the algorithms has been spotty or difficult to assess.”
In any event, law professor Andrew Guthrie Ferguson points out,
Predictive policing is based on algorithms only a few people understand. Further, most are proprietary, carefully controlled by the companies that developed them. And, of course, police do not want to reveal how or where they hope to catch the bad guys. This reality prevents outside observers from judging whether the analytics work, what is being inputted, and whether the data are clean, complete, accurate, and reliable. A few recent reports have begun to question how “success” is judged by those promoting the technology.
What few of us understand is easily manipulated without notice.
The Economist, citing promising crime reduction stats from Britain, nonetheless cautions,
Predicting and forestalling crime does not solve its root causes. Positioning police in hotspots discourages opportunistic wrongdoing, but may encourage other criminals to move to less likely areas. And while data-crunching may make it easier to identify high-risk offenders—about half of American states use some form of statistical analysis to decide when to parole prisoners—there is little that it can do to change their motivation.
Misuse and overuse of data can amplify biases. It matters, for example, whether software crunches reports of crimes or arrests; if the latter, police activity risks creating a vicious circle. And report-based systems may favour rich neighbourhoods which turn to the police more readily rather than poor ones where crime is rife. Crimes such as burglary and car theft are more consistently reported than drug dealing or gang-related violence.
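The “vicious circle” The Economist warns about, when software crunches arrests rather than crimes, can be sketched in a toy simulation. All numbers and the amplifying patrol-allocation rule are invented for illustration: two neighbourhoods have identical underlying crime, but patrols are sent disproportionately where arrests were recorded, and recorded arrests scale with patrol presence rather than with crime.

```python
# Toy simulation of an arrest-data feedback loop. Two neighbourhoods
# with IDENTICAL underlying crime; the allocation rule and numbers
# are invented for illustration, not drawn from any real system.

def simulate(rounds=10):
    true_crime = [100, 100]   # identical underlying crime in both areas
    arrests = [60, 40]        # a small initial imbalance in the records
    for _ in range(rounds):
        # Amplifying allocation: patrols concentrate disproportionately
        # (here, quadratically) in the higher-arrest neighbourhood.
        sq = [a ** 2 for a in arrests]
        patrol_share = [s / sum(sq) for s in sq]
        # Recorded arrests track patrol presence, not underlying crime.
        arrests = [round(true_crime[i] * patrol_share[i] * 2)
                   for i in range(2)]
    return arrests

print(simulate())  # → [200, 0]: all recorded crime ends up in one area
```

Despite equal underlying crime, the records diverge until one neighbourhood carries the entire arrest count, which is why the choice between report-based and arrest-based inputs matters so much.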
One might add that politically sensitive crimes (potential rape by migrants in Europe, for example) might not make the surveillance list even if there is ample evidence of risk.
Then there is the “Whoa!” factor: some want police to snoop on social media to identify pre-criminals and create files on them. As the Economist also reminds us, smart criminals use the internet to plan jobs:
Nearly 80% of previously arrested burglars surveyed in 2011 by Friedland, a security firm, said information drawn from social media helps thieves plan coups. Status updates and photographs generate handy lists of tempting properties with absent owners.
Very well, but what follows? Social media users will likely be subjected to more surveillance than physical “area residents” because snooping from a laptop is easier and cheaper than plodding around in the freezing rain. Meanwhile, the smart criminals will quickly find ways of leaving false or confusing digital trails.
In short, a human being with a strong motive can easily beat a human being with a job, software notwithstanding. That might be why the success rate tapers off after a while.
What remains, however, is the habitual acceptance of surveillance of people who have not done anything wrong. And a market for the software to do it.
See also: Apple vs. FBI: Free internet is at stake. Few analysts agree with the FBI that it would end with just this one case. It can’t.
The Economist explains the case for predictive policing. Chilling.
Denyse O’Leary is a Canadian journalist, author, and blogger.