AI at the BBC: Hi-tech hopes and sci-fi fears
If you hadn’t expected artificial intelligence to have much to offer the BBC, you might have been surprised to hear a Google staffer say that “Eric Schmidt [executive chairman of Google’s corporate owner, Alphabet] met Tony Hall three weeks ago, and out of that session, the number one topic was machine learning.”
Machine learning? It’s how a computer gets better at a task by testing what happens when it responds to different sets of data. And it’s one kind of AI. As Matt McNeill, another senior Googler, put it: “machine learning is a technique used to develop AI.”
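To make that definition concrete, here is a toy illustration (mine, not from the talks): a program that “learns” the slope of a line by gradient descent, improving its guess by testing its predictions against the data it is given.

```python
# Toy machine learning: estimate w in y = w * x from example data.
# The program starts with a guess for w and repeatedly nudges it
# in whichever direction reduces its prediction error.

def fit_slope(data, steps=200, lr=0.01):
    """Estimate w in y = w * x from (x, y) pairs by gradient descent."""
    w = 0.0  # initial guess
    for _ in range(steps):
        for x, y in data:
            error = w * x - y    # how wrong is the current guess?
            w -= lr * error * x  # nudge w to shrink that error
    return w

# Training data generated by the hidden rule y = 2x.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]
print(round(fit_slope(data), 3))  # converges to 2.0
```

The more data and iterations the program sees, the closer its guess gets to the underlying rule – which is the essence of the technique McNeill described.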
The subject of AI and the BBC was examined at a series of BBC Blue Room events at Broadcasting House – starting with practicalities in a couple of afternoon sessions.
Daniel Hulme, an academic and the founder of Satalia, gave an evangelical introduction to the wonders of AI. Satalia, he claims, is run by AI – meaning, among other things, no managers (cheers) and no holidays (boos). (He didn’t explain the latter, but it might be something like the Netflix practice of letting staff decide for themselves how much holiday to take.)
If you imagine a kind of hierarchy from lowly data, up through, in order, information, knowledge and understanding, to wisdom, then AI, as it becomes more sophisticated, can operate further up that hierarchy. No longer is it just a question of using raw computer power to crunch numbers faster than humans can, Hulme said. The aspiration for today’s most sophisticated programs is artificial general intelligence: systems that don’t just know about particular problems but are good at learning itself.
So where could this take us? “Organisations are broken,” said Hulme. AI can help with that. He also mentioned countries and even the planet as potentially benefitting from AI.
Stephen Reidy from Google Cloud explained that his company is already offering AI tools for everyone through its Cloud Platform. The fact that Google is expecting revenues from its cloud services to rival its advertising business in a few years suggests the motive is not purely altruistic.
Reidy explained that Google lets outside companies or individuals ‘plug in’ to its AI products, which are packaged in the form of specialised APIs (application programming interfaces). There’s one for translating, one for speech recognition and one for visual recognition (including the reading of text), among others.
So is there something in all that for the BBC? Well, yes: for instance, onscreen text in the BBC’s programme library could be read using these tools, creating a database of programme credits and other information. And the object recognition capability could further classify videos: to find all the shots of dogs or cats, for instance.
Reidy encouraged his audience to try this at home by taking a look at Google Cloud Platform. Why not build your own machine learning model?
In the evening, a discussion in the BBC Radio Theatre called Artificial Intelligence and Society was wide-ranging, even philosophical.
The BBC’s head of Design & Engineering, Matthew Postgate, presented the BBC’s prospects for AI in a historical context. The BBC’s transformation from a company to a corporation back in 1927 signalled the adoption of public service principles which it holds to this day. In the Corporation’s approach to AI, it should defend those same principles, promoting what Postgate called “responsible AI”.
That would mean that the output from AI systems should be trustworthy – because the BBC is independent of both government and commercial interests. Also that it would be impartial, “rooting out the bias in AI systems as we now do in journalism”. And the public accountability of the BBC should mean that its AI work would have transparency: “we can’t just operate as a black box.”
The evening session had its practical side too. If anyone still had doubts that AI was going to be a force in society, Ali Parsa’s entertaining demo of his Babylon Health service would have dispelled them. Babylon Health is a well-funded London startup that uses AI to offer chatbot diagnoses on smartphones – and then links the results to live GPs, who see the results, can speak online to the patients and prescribe drugs. Because the AI can take account of demographic information – the patient’s age, gender and so on – both in its chatbot interaction and in the information given to the doctor, the outcome is all the more likely to be successful.
Visions of an emerging AI utopia were laid to rest by two other speakers. Lilian Edwards, professor of e-governance at the University of Strathclyde, talked about the need to understand the ‘inputs’ to AI systems in order to know whether their outputs are fair and unbiased. What data has a machine learning system been ‘trained’ on? It matters because systems can have a “historic bias embedded in them”.
Edwards highlighted a 1995 EU law (the Data Protection Directive) that gives citizens the right to a review of any decision made by a purely automated process. But how would this work with AI systems, whose outputs are the result of such complex relationships with the original data fed into them, Edwards asked. There are already cases involving a range of AI systems where accusations of bias have been made.
Finally, Stephen Cave, a philosopher and AI specialist from the University of Cambridge, urged the BBC to show leadership on AI by helping society to make the most of its potential whilst at the same time taking seriously the fears that are already arising – fears of job losses (truck drivers displaced by self-driving vehicles), alienation (robots that behave with unnervingly human characteristics) and loss of humanity (brains being downloaded and living in computer form for eternity).
The clichés of science fiction became part of this BBC debate on public service strategy. What a shame that George Orwell, who worked for the Corporation as a producer in the 1940s, wasn’t there to raise his hand and ask a question from the floor.
For Matthew Postgate, the diverse subjects under discussion during the day – from how the BBC can use AI to enhance its content to its role in raising awareness about the ethics of AI – were all worthy goals. As he summed it up in his presentation: “the challenge is clear: we need to use AI to shape world-class public service, and we need to use world-class public service to shape AI.”
Videos of most of the talks mentioned in this blog are online for BBC staff.