More than 1,000 experts condemn “racially biased” AI that claims to predict crime based on your face


Researchers at MIT, Harvard, and Google say research claiming to predict criminality from human faces creates a “tech-to-prison pipeline” that reinforces racist policing. More than a thousand technologists and scholars are speaking out against algorithms that attempt to predict crime based solely on a person’s face, arguing that publishing such research reinforces pre-existing racial bias in the criminal justice system. In a press release that has since been removed from the web, researchers at Harrisburg University announced a forthcoming paper in a Springer Nature book series, Transactions on Computational Science and Computational Intelligence, describing a system they claimed could predict, with “80 percent accuracy” and “no racial bias,” whether someone is a criminal based solely on a facial image.

Experts in a variety of technical and scientific fields, including statistics, machine learning, artificial intelligence, law, history, and sociology, responded with a public letter warning that using criminal justice data to predict criminality rests on many deeply problematic assumptions, and that deploying such systems would cause serious harm.

The public letter was signed by academics and AI experts from Harvard, MIT, Google, and Microsoft, urging publisher Springer to halt publication of the upcoming paper. The paper describes a system whose authors claim “80 percent accuracy” and “no racial bias” in predicting whether someone is a criminal based solely on their facial image.

“There is no way to develop a system that can predict ‘criminality’ that is not racially biased, because criminal justice data is inherently racist,” wrote Audrey Beard, one of the letter’s organizers. The letter called on Springer to withdraw the paper from publication, issue a statement condemning the use of these methods, and commit to not publishing similar research in the future.

This is not the first time AI researchers have made such dubious claims. Machine learning researchers roundly condemned a similar paper published in 2017, whose authors claimed the ability to predict future criminal behavior by training algorithms on the faces of people convicted of crimes. As experts have noted, this only creates a feedback loop that justifies further targeting of marginalized groups who are already over-policed.

The letter comes as protests against systemic racism and police violence continue across the United States following the deaths of Breonna Taylor, George Floyd, Tony McDade, and other Black people killed by police. Technologists have described these biased algorithms as part of a “tech-to-prison pipeline” that lets law enforcement justify discrimination and violence against marginalized communities behind the veneer of “objective” algorithmic systems.

The uprisings have also pushed companies to reconsider algorithmic policing technologies such as facial recognition. Earlier this month, IBM announced it would no longer develop or sell facial recognition systems for law enforcement use. Amazon announced a one-year moratorium on police use of its own facial recognition system, Rekognition. Motherboard asked an additional 45 companies whether they would stop selling the technology to police.
