Posted by Rosie Campbell
I recently attended the AI Business Summit at the Business Design Centre in London. The operative word here being ‘business’, which I had for some reason failed to heed, resulting in me feeling somewhat out of place in my rainbow jelly shoes amongst a sea of navy suits and patent court heels.
But this isn’t a story about fashion (or indeed, my lack of it). It’s about AI. Well, really, it’s about chatbots. And maybe a bit of video-analysis, because it turns out that that’s pretty much what AI means in business at the moment. I don’t mean that disparagingly; it’s all a necessary part of moving technology through the inevitable hype cycle. But AI is currently a loaded term, and I want to manage your expectations.
As one of the speakers pointed out, a great way to start a conversation at the event would be to ask people ‘what do you mean by AI?’ because there was by no means a straight answer. Machine Learning, Data Science, Deep Neural Networks, and plenty of other terms were used roughly interchangeably with AI, which is fine for most practical purposes, but it might also be useful to try and draw some distinctions.
Firstly, I consider ‘AI’ to be an outcome, not an approach. Early research in the field used symbolic, rule-based methods, and is sometimes known as ‘Good Old Fashioned AI’ (GOFAI) to distinguish it from modern Machine Learning approaches. At the business summit, there were some products that (as far as I could tell) weren’t implemented with the kind of Machine Learning algorithms that we currently tend to associate with AI, but this shouldn’t necessarily rule them out from hopping on the AI bandwagon. If a system passes the Turing test (that is, a human evaluator can’t tell whether they are interacting with a machine or a human) or whatever other measure of synthetic intelligence we decide to apply, the implementation is kind of irrelevant.
Machine Learning, as I’ve just given away, is one route to AI. Deep Learning is a Machine Learning technique which uses Neural Networks with multiple hidden layers, and is probably the most popular approach to AI at the moment. There are also other types of Machine Learning that are less commonly used for AI but are instead used to achieve other goals. Data Science is the discipline of extracting insights from data - this may or may not be used as part of a Machine Learning approach, and may or may not be used to achieve the goal of AI.
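To make ‘multiple hidden layers’ concrete, here is a deliberately tiny, purely illustrative sketch of a feed-forward network in Python. The layer sizes and random weights are invented for the example, and no actual learning happens; it just shows the layered structure that puts the ‘deep’ in Deep Learning:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common non-linearity applied between layers
    return np.maximum(0, x)

# Layer sizes (arbitrary for illustration): 4 inputs -> 8 -> 8 -> 1 output.
# Two hidden layers is what makes this network 'deep' in the minimal sense.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    h1 = relu(x @ W1 + b1)   # first hidden layer
    h2 = relu(h1 @ W2 + b2)  # second hidden layer
    return h2 @ W3 + b3      # output layer

x = rng.normal(size=(1, 4))  # one example with 4 input features
print(forward(x).shape)      # (1, 1): a single output per example
```

A real Deep Learning system would also train these weights from data (which is where the Machine Learning comes in), but the layered structure is the defining feature.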
Confused yet? This is why I try not to be too pedantic, except when it really matters.
I came to realise that ‘AI’ was often being used to mean ‘simulating an interaction with a human’. This was most evident in the case of chatbots, which were primarily being touted as an alternative to humans in customer service. Interestingly, one of the speakers pointed out that in their trials, customers were pretty satisfied with the experience as long as they knew they were interacting with a bot and not a human. Is this because users value the transparency? Or is it because they subconsciously moderate their language to make it easier for the bot? This might mean that they perceive the bot as performing more effectively than it really is.
Data-cleaning services seem to be a lucrative area. A number of companies were offering to take messy, unstructured data and turn it into something useful for a neural network. It seems this process is still done manually by thousands of distributed workers. Often, the data was limited to images or 30-second video clips, so sadly it doesn’t yet look like a feasible way to process the data goldmine that is the BBC’s vast archive of video.
One thing that is of interest to the BBC is the emerging set of cloud-based video-analysis platforms from the likes of Google and Microsoft. Convolutional Neural Networks have been making waves in the academic literature for a few years due to their better-than-human ability to process images, and it now seems that they are making their way into actual commercial products. BBC R&D is doing its own work on video-analysis for media and broadcasting applications, and is actively looking at the commercial and open-source offerings to see how well they match BBC use cases.
A common theme was emphasising the ‘human in the loop’; AI working in harmony with humans to augment rather than replace them. It was unclear whether this was to avoid the ‘robots are taking our jobs!’ narrative, or because the products themselves aren’t yet mature enough to work without any human intervention.
With conventional programming, instructions are given explicitly step-by-step, so you know exactly why things happen. Machine Learning, on the other hand, is less transparent - it’s not always obvious how and why the system comes to certain conclusions. Another common theme of the summit was the need to see inside this so-called ‘black box’. High-profile cases of algorithmic bias have clearly made businesses nervous about trusting the opaque decision-making of AI systems, and a number of companies showcased products and services that claim to offer some insight. This is interesting from an ethical perspective and I’m glad to see concerns about bias and discrimination being taken seriously. However, it was usually marketed as a way of helping companies justify controversial decisions and avoid legal complications. Perhaps this is where the BBC’s public-service remit could really make the difference - could we be a leader in the ethical application of AI?
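The contrast between explicit instructions and learned decisions can be made concrete with a deliberately tiny, entirely hypothetical example: a hand-written rule versus a fitted linear model. All the names and numbers here are invented for illustration:

```python
# Conventional programming: the decision logic is spelled out, so 'why was
# this application approved?' can always be answered by reading the code.
def approve_loan_rules(income, debt):
    return income > 30000 and debt / income < 0.4

# Machine Learning: the decision comes from learned parameters. Even in
# this toy linear model, 'why' means interpreting numbers that were fitted
# to data rather than reading explicit rules - and a real deep network has
# millions of such numbers, which is where the 'black box' problem comes from.
weights = [0.00003, -0.00008]  # learned from data, not hand-written
bias = -0.5

def approve_loan_learned(income, debt):
    score = weights[0] * income + weights[1] * debt + bias
    return score > 0

print(approve_loan_rules(50000, 10000))    # True - and you can see why
print(approve_loan_learned(50000, 10000))  # True - but the 'why' is just arithmetic on fitted weights
```

The interpretability products on show at the summit are essentially trying to recover rule-like explanations from the second kind of system.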