A few weeks ago, I saw a news article about the risks of unregulated AI. The article reported that in the USA, police came to the house of an African American woman who was eight months pregnant and arrested her because a facial recognition system had identified her as the suspect in a robbery. No amount of pleading that, given her advanced pregnancy at the time of the robbery, she simply could not have committed the crime was heard by the officers. The police officers had no discretion: the system was set up so that once the AI facial recognition identified a suspect, the police were required to arrest her, take her to the station, and book her.
In this case, she was taken to the police station, booked, and released on bail. A few days later the case against her was dismissed because the AI system had wrongly identified her. It also emerged that she was not the first: several other people, especially African American women, had been wrongly arrested and later released because of incorrect matches from the facial recognition model.
The slow pace at which governments are moving on regulation, set against the rapid proliferation of AI tech companies delivering business applications such as this facial recognition model, demands urgent action.
Maybe citizens themselves should organize and hold the people responsible for deploying these systems accountable. The chief of police, perhaps the mayor of the town, and the county officials who signed off on this AI facial recognition system should be held accountable. Maybe the county should pay hefty fines, not just offer a simple "oops, sorry."
A lot of attention needs to be placed on training data. Training data should represent all the diverse people of the country in sufficient numbers. Biases expected from a lack of diversity in the training data must be anticipated and the model adjusted accordingly. Most democratic countries run their criminal justice systems on an unwritten motto: "Let a thousand criminals go free, but not a single innocent person should go to jail." The burden of proof of guilt is always on the state. However, we seem to have forgotten this when deploying these law enforcement systems. Proof at very high confidence levels, backed by explainable AI with human-understandable reasoning, must be a basic approval criterion before these systems are deployed.
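To make the argument concrete, the deployment criteria above could be expressed as a pre-deployment audit: require a minimum match confidence before any action is taken, and reject the system outright if its false-positive rate is too high for any demographic group or differs too much between groups. The following is a minimal illustrative sketch; the thresholds, group names, and data format are all hypothetical assumptions, not taken from any real system.

```python
# Hypothetical pre-deployment audit for a face recognition system.
# All thresholds and data formats below are illustrative assumptions.

MIN_CONFIDENCE = 0.99   # matches below this are treated as "no match"
MAX_FPR = 0.001         # maximum tolerated false-positive rate per group
MAX_FPR_GAP = 0.0005    # maximum allowed FPR gap between groups

def is_actionable(match_confidence):
    """Only matches above the confidence floor may trigger any action."""
    return match_confidence >= MIN_CONFIDENCE

def false_positive_rate(records):
    """records: list of (predicted_match, actually_same_person) pairs."""
    false_pos = sum(1 for pred, truth in records if pred and not truth)
    negatives = sum(1 for _, truth in records if not truth)
    return false_pos / negatives if negatives else 0.0

def approve_for_deployment(eval_by_group):
    """eval_by_group: dict mapping group name -> evaluation records.
    Returns (approved, reasons) so the decision is human-readable."""
    reasons = []
    fprs = {g: false_positive_rate(r) for g, r in eval_by_group.items()}
    for group, fpr in fprs.items():
        if fpr > MAX_FPR:
            reasons.append(f"{group}: FPR {fpr:.4f} exceeds {MAX_FPR}")
    if max(fprs.values()) - min(fprs.values()) > MAX_FPR_GAP:
        reasons.append("false-positive rates differ too much across groups")
    return (not reasons), reasons
```

The point of returning a list of reasons rather than a bare yes/no is exactly the explainability demanded above: every rejection comes with grounds a non-specialist official could read before signing off.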
The proposed EU AI Act classifies these law enforcement systems as high risk and brings them under the act. Hopefully the EU act becomes law soon and helps prevent such unfortunate violations of civil liberties and human rights.