We are heading, with great speed, towards an epochal moment in the history of computing.
Until very recently, computers could do what a human might describe as “read” – they could take in information, as long as it was presented in a language they understood.
Now, thanks to artificial intelligence, computers are learning how to “see”.
They can ingest, filter and store the entire visual universe, whether or not it has been rendered in an appropriate format.
Before, computers could only see QR codes. Now, increasingly, life is a QR code.
The significance of this can hardly be overstated. Under the untiring, telescopic eyes of seeing computers, every fragment of experience can be digested and regurgitated for human use – or, to be more precise, for the use of the humans with access to the cameras.
That’s why a current legal challenge against police use of facial recognition is an immensely significant moment.
South Wales Police is being taken to court by a man whose image was captured by a facial recognition system as he was shopping in Cardiff.
As the barristers on both sides made clear in court, the case is explicitly intended to set a precedent for automated facial recognition – that is, mass or bulk facial recognition, which scans an entire crowd in order to pick out particular faces.
The issues are wide-ranging and complex, but two stand out as central.
First, does this technology breach the human right to privacy?
Faces are biometric identifiers, as unique in their own way as fingerprints or DNA. But unlike those attributes, faces can be swept up without consent, as they have been by South Wales Police in Cardiff and the Metropolitan Police at the Notting Hill Carnival in London.
The government argues that facial recognition is less intrusive than DNA or fingerprints because “many people’s faces are on public display all the time.”
But the same point cuts the other way: precisely because faces are “on display”, they need extra protection, if the ability to recognise them at scale is not to be abused.
The second issue concerns the detail of the systems. As they are currently deployed, facial recognition algorithms work by matching faces against a database of photos, known as a watchlist.
Watchlists have contained images of serious criminals, but they have also held images of people with mental health issues – with police then acting on the matches. How these lists are administered remains vague in the extreme.
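The matching step described above is, at its core, a nearest-neighbour search. A minimal sketch, assuming (as real systems do, in far more sophisticated form) that each face has been reduced to a numerical “embedding” vector, and that a match is declared when a probe face is sufficiently similar to a watchlist entry – the names, dimensions and threshold here are illustrative, not drawn from any deployed system:

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity of two embedding vectors: 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe, watchlist, threshold=0.9):
    """Return the IDs of watchlist entries close enough to the probe face."""
    return [
        person_id
        for person_id, embedding in watchlist.items()
        if cosine_similarity(probe, embedding) >= threshold
    ]

# Toy 4-dimensional "embeddings" standing in for real face templates.
watchlist = {
    "person_a": np.array([1.0, 0.0, 0.0, 0.0]),
    "person_b": np.array([0.0, 1.0, 0.0, 0.0]),
}
probe = np.array([0.99, 0.05, 0.0, 0.0])  # a face captured in the crowd
print(match_against_watchlist(probe, watchlist))  # → ['person_a']
```

Note that everything contentious lives outside the code: the choice of threshold (how many false matches are tolerable), and above all who is placed in `watchlist` in the first place.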
So, while this hearing is prompted by a brand-new technology, many of the most crucial questions have nothing to do with computers or algorithms.
When is each watchlist used? Who goes on it? Once you’re on it, how do you get off?
In other words: who watches the watchers? The oldest judicial question of all, reframed with sudden urgency for a world of computers that see.