The Government's Chief Scientist Extols the Virtues of AI - But Avoids the Scariest Application

By James O Malley

Last week I headed down to the British Library to attend The Turing Lecture - named for Bletchley Park computing pioneer Alan Turing, and held by the national centre for data science which bears his name. Sir Mark Walport, the government’s Chief Scientific Adviser, was the esteemed speaker, and he delivered an interesting overview of some of the big opportunities and challenges that new machine learning and artificial intelligence technologies pose for government and the public services.

For example, he reckons that AI could soon be doing a better job than human lawyers when it comes to assembling the relevant case law to put before a judge.

As UKA reported, this throws up an intriguing question. Walport asked: “Most of us would agree that having judges who are informed by the corpus of knowledge is important, but would we really think it is a good idea to have AI systems that would work out the sentencing to be applied?”

Could robots soon be deciding how long someone should remain locked up?

He also pointed out one of the potential problems with relying on AI: because the algorithms are written by humans, and the data fed into them contains human biases, AI can be every bit as susceptible to bias and prejudice as we are, while maintaining the illusion of objectivity. He pointed to a Chinese-built facial recognition system that its creators claimed could spot people who look like criminals - and likened it to the widely discredited “science” of skull-measuring phrenology of times gone by. (We’ve also got another real-world example of this in the accidentally racist Kinect videogame that our pals at Kotaku reported on recently.)
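To make that concrete, here’s a minimal sketch of the mechanism - scikit-learn on entirely made-up data, nothing from any real system - showing how a model trained on biased historical decisions simply learns to repeat them:

```python
# Toy illustration: a model trained on biased historical decisions
# learns the bias, then reproduces it with a veneer of objectivity.
# All data here is synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature (say, actual risk) and one protected attribute.
risk = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased historical labels: group 1 was flagged more often at the same risk.
flagged = (risk + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, flagged)

# Identical risk, different group -> a different "objective" prediction.
print(model.predict_proba([[0.0, 0]])[0, 1])  # roughly 0.2
print(model.predict_proba([[0.0, 1]])[0, 1])  # roughly 0.7
```

The model is never told anything labelled “prejudice” - it just faithfully compresses the biased decisions it was shown, and hands them back with a probability attached.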

Perhaps one of the biggest causes for optimism he identified was in healthcare. If new technologies could be set loose on the enormous datasets held by the NHS, they could spot new relationships between drugs and outcomes, or unexpected correlations - patterns that would be imperceptible to a human, because no person can take in that much data at once. Used well, data like this could unlock new cures and treatments.
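For a flavour of what that kind of analysis looks like in practice, here’s a hedged sketch - the table, column names and data are entirely invented, and a real NHS dataset would have millions of rows rather than six:

```python
# A sketch of the idea: cross-tabulate drugs against outcomes and look
# for cells that stand out. Everything here is invented for illustration.
import pandas as pd

# Each row: one (hypothetical) patient, a drug they took, and an outcome.
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "drug":       ["A", "A", "B", "B", "A", "B"],
    "outcome":    ["recovered", "recovered", "relapse",
                   "relapse", "recovered", "relapse"],
})

# Proportion of each outcome per drug. At scale, unexpectedly strong
# cells in a table like this are exactly the signals no human would spot.
table = pd.crosstab(records["drug"], records["outcome"], normalize="index")
print(table)
```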

Walport also pointed to some of the big ethical dilemmas of the future. If customer service departments start using AI chatbots to assist callers, is it important that the customer knows whether they’re talking to an AI or a human?

He also touched on the privacy implications of new technologies. The very thing that makes AI useful is that it can scour and analyse massive datasets. And this made me think: isn’t there an elephant in the room here? What about the biggest dataset of all - the one the government is currently assembling? What about the Investigatory Powers Act?


Sir Mark Walport lecturing last Thursday.

The law came into force at the end of last year, and essentially legitimised the bulk collection of everyone’s internet data by the government - surveillance of the sort that Edward Snowden exposed in the US. It’s something I’ve been banging on about for ages, and it’s a law with some hugely draconian implications.

So the government is currently amassing an enormous database of our data. And simultaneously we’re living through a period of huge advances in machine learning and artificial intelligence. What about the huge privacy implications of combining the two? What’s to stop all sorts of algorithms being applied to our data to mine our behaviour?

For example, the government isn’t just reading our text messages and our emails. Imagine if the sort of machine learning image recognition that powers Google Photos were applied to all of the images and videos the government intercepts - it could quickly build up a database of every face and numberplate in the country. And you’ll be on it, whether you’re committing crimes or not. Now imagine the sort of AI we’ve seen powering Amazon Alexa or Siri applied to every voice conversation you’ve ever had… and the government suddenly has searchable transcripts of practically every utterance by its citizenry.

Surely, in any future debate about the role of AI and machine learning, the fact that this hugely draconian dataset could be - or is being - created should be central to the discussion? Perhaps we can trust our current government not to do anything dodgy with our data - but is it really a good idea to leave these tools in place for the unknown governments of the future?
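None of this would require exotic technology, either. As a toy illustration - the folder name and everything else here is hypothetical - the face-indexing step could be bodged together from off-the-shelf, publicly available tools:

```python
# Toy sketch of the indexing step described above, using OpenCV's stock
# face detector. The folder is hypothetical; the tooling is commodity.
import glob
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

index = {}  # filename -> list of face bounding boxes (x, y, w, h)
for path in glob.glob("intercepted_images/*.jpg"):  # hypothetical folder
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        continue
    faces = detector.detectMultiScale(image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        index[path] = [tuple(face) for face in faces]

print(f"{len(index)} images containing faces indexed")
```

A free library, a for-loop and a big enough pile of intercepted images is most of the recipe.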

I put this to Walport during the Q&A, asking whether he thought collecting this data was justified given the scope of what these new technologies could enable. And he gave the most frustrating possible answer - the sort of garbled non-answer that a politician would give. Here’s what he said, verbatim:

“I think that’s when you have to look at the purpose, which is actually [for] keeping us safe. It’s actually one of the other challenges of the global internet and the increased democratisation, if I can put it that way, of encryption. It’s very different having a wild west - a wild west is a place where someone who breaks the law has an asymmetrical advantage over the fact that we want to maintain law and order.”

“At the end of the day this is one of those democratic decisions about the use of data and the use of the technology, and the intent is absolutely clear: it’s to protect us at a time when threats from people who mean harm not only to us as individuals but to our values, to our way of life, this is a very challenging time, and I think the judgement of most people is that we need to be able to defend ourselves. That is the purpose of it.”

“But at the end of the day we live in a plural society, people have different views and that’s why we have democracy and legislation, and it has to be said that the people who would like to destroy us do not have democratic values. And so it's one of the threats, it's one of the unintended consequences of the technology, it's scrutinised, and that’s the background to it.”

This is a profoundly disappointing answer - not least because the Chief Scientific Adviser, in an otherwise illuminating lecture, slipped into exactly the sort of evasive answer a politician would give.

It may well be that the intent of bulk collection is to protect us - but does Walport really lack the imagination to consider how it could be used in less well-intentioned or unintended ways? If we’re going to worry about the implications of people contacting the DVLA without knowing whether they’re speaking to a human or a chatbot, shouldn’t we also worry about the fact that the government has the ability to know everything about us?

Walport argues that those who threaten us do not have democratic values - I agree - but shouldn’t we also admit that massive draconian surveillance doesn’t sit comfortably under the “democratic values” heading either? And that’s before we even get to the fact that bulk surveillance doesn’t work when it comes to stopping terrorism.

Part of me wonders if Walport secretly knows this and agrees with me. Given the weird “government spokesman” mode he shifted into when he answered me, I wonder if he was simply keen to avoid a repeat of what happened to Professor David Nutt.

Under the previous Labour government, Professor Nutt was sacked by the then Home Secretary, Alan Johnson, for arguing (correctly) that taking ecstasy was no more dangerous than horse riding. Even though he was backed by a little thing called “scientific evidence”, the government couldn’t cope with dissent in its ranks, and he was given the chop for speaking against the officially decided position.

Because of this, we’ll never know Walport’s true opinion for sure. We’ll never know whether he really does think bulk data collection is fine, or whether he would truthfully agree with me that it is illiberal. Sadly, if he wants to keep his job - and to keep working on the other issues where he does appear to have genuine insight - he can’t say anything about the most important one.