Police and security forces around the world are testing automated facial recognition systems as a way of identifying criminals and terrorists. But how accurate is the technology, and how easily could it, and the artificial intelligence (AI) behind it, be manipulated by criminals or authoritarian governments?
Computer scientists at the MIT Media Lab and Google's Ethical Artificial Intelligence Team have shown that facial recognition systems have greater difficulty distinguishing between men and women the darker the subject's skin tone. A woman with dark skin is far more likely to be mistaken for a man.
San Francisco has banned the use of facial recognition by its transport and law enforcement agencies, an acknowledgement of the technology's imperfections and the threat it poses to civil liberties. But other US cities, and other countries around the world, are still trialling it.
Until it can be shown to be free of bias, facial recognition technology will remain under suspicion and under scrutiny.