How easy is it, really, to achieve social good with AI? The decidedly thought-provoking discussion at the recent ‘BBC Machine Learning Fireside Chats presents: AI’s Hurdles to Doing Good’ was hosted by Tse Yin Lee, a senior journalist in News and Current Affairs.

The provocation...
We’ve seen AI’s use in the commercial sector explode, but what about the charity sector? There is huge room for AI to do social good, but given that charities are not traditionally at the cutting edge of technology, the path is not without challenges. Is the data available to support the effort? Are there enough experts who understand not just algorithms, but the environments they’re working in? How do you even begin to check your project for ethical risks? How do smaller charities cope? How do you compete with larger companies that can pay staff significantly more?

The panel...

Julie Dodd, Director of Digital Transformation and Communications @ Parkinson’s UK
Julie is Parkinson’s UK’s Director of Digital Transformation and Communications. She’s also written The New Reality, a study into how non-profit organisations can approach digital transformation and use digital technology for change. Among other things, Parkinson’s UK is currently working with Benevolent AI to try to identify at least three currently available medicines that could be re-purposed to address the condition, as well as two brand-new ways to treat it with new drugs.

Giselle Cory, Executive Director @ DataKind UK
DataKind UK is itself a charity, whose volunteers support other charities and social enterprises with data science expertise and services such as exploratory analysis and prototyping. Projects they’ve worked on include helping homelessness organisations identify the issues people sought advice on before becoming homeless, and analysing possible reasons why people progress differently in a shelter. DataKind has also helped Global Witness uncover potential corruption using open data, and the Citizens Advice Bureau better identify emerging social issues to support its staff training.

Michelle Eaton, Programme Manager in the Government Innovation Team - Data @ Nesta
Michelle worked with the Essex Police Force for 11 years - starting in community policing before moving on to performance analysis, and then to strategic change, where she led the Essex Centre for Data Analytics. Now she oversees Nesta’s Offices of Data Analytics programme, which helps governments reform their services, and recently wrote a blog defending the use of AI in policing.

The discourse…

Giving the panel a chance to show off, Tse posed her first question: “What’s the most exciting thing you do with AI?”

Giselle jumped in first and explained how DataKind had worked on a predictive model for a food bank, to help identify people in the food bank queue who might be in need of extra support. Exploring the topic further, Giselle was clear that this model, and others like it, should exist to help advisors rather than act as the primary decision maker; relying wholly on AI-generated decisions is a dangerous game to play, especially for decisions with serious ethical implications.
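To make the ‘decision support, not decision maker’ distinction concrete, here is a minimal sketch of how such a model might be wired up in Python. The feature names, toy data, and threshold below are purely illustrative assumptions, not a description of DataKind’s actual model; the point is simply that the output is a probability placed in front of a human advisor, who makes the final call.

```python
# Illustrative sketch only - hypothetical features and toy data, not DataKind's model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical records, labelled with past advisor judgements.
df = pd.DataFrame({
    "visits_last_3_months": [1, 6, 2, 9, 4, 7, 1, 8],
    "referred_by_agency":   [0, 1, 0, 1, 1, 1, 0, 1],
    "household_size":       [2, 4, 1, 5, 3, 4, 2, 6],
    "needed_extra_support": [0, 1, 0, 1, 0, 1, 0, 1],
})

X = df.drop(columns="needed_extra_support")
y = df["needed_extra_support"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# The model surfaces a score for an advisor to review - it does not decide.
for prob in model.predict_proba(X_test)[:, 1]:
    flag = "suggest extra support check" if prob > 0.5 else "no flag"
    print(f"estimated likelihood of needing support: {prob:.2f} -> {flag} (advisor decides)")
```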

Julie followed, explaining how Parkinson’s UK has partnered with Benevolent AI to aid the complex process of drug discovery. The platform impressively trawls through millions of Parkinson’s-related papers, looking for clues that humans may have missed, in the hope of shortening the search for new treatments for Parkinson’s.

Michelle explained how, at Nesta, they are not immediately looking to implement AI solutions; instead, they focus on the wider problem, from problem identification through to implementing a solution - and AI is not always the answer.

Tse then asked the panel how leadership within organisations perceives AI, from before its introduction through to implementing a fully AI-driven solution.

Continuing her earlier train of thought, Michelle highlighted that “an AI solution is not a silver bullet”. She went on to explain that a crucial part of the process is gathering data, understanding the problem space, and communicating those findings back to leadership. Only then, once you have your findings and have formed good partnerships, do you move forward. Luckily, the leadership at Nesta was very receptive to this process.

Giselle talked next, paraphrasing Matt Velloso’s quip: “If it’s written in Python, it’s probably machine learning; if it’s written in PowerPoint, it’s probably AI”. She went on to explain that “AI as a concept is hard to hold”, and that there are many steps involved in getting actual AI into non-profits. There is a spectrum of ‘data maturity’, and most small non-profits are clustered towards the bottom - which is where DataKind come in to help. A lot of what they do is completely unrelated to data science: it’s about holding hands and creating a safe space, in order to empower people to feel confident when using data.

Julie explained how, in a lot of cases, a charitable organisation’s culture is somewhat ready to embrace AI, but the data is not accessible. Where she has started to see success is when the focus is put on gathering the right data first, then using it to present the business case effectively to leadership, so that projects have full backing and understanding throughout the organisation. In addition, when taking on a new project, they set up a periodic review to assess ‘data maturity’ across the organisation. She added the amusing observation, drawn from the questionnaires, that as staff understand the problem space more, the scores for data confidence go up and the scores for data accessibility go down!

“How easy is it to recruit the data scientists you need?” Tse looked to the panel for answers.

Charities often can’t afford to pay market rates. Julie explained how at Parkinson’s UK, even though they have over 500 people, that’s not enough - they need more to scale. The job market for data roles is currently squeezed, pushing market rates beyond what charities can afford to pay; they find that candidates either join for the love of the cause or for the flexibility that comes with working in a charity. Drawing on her experience as a police community support officer, Michelle explained how she moved through various roles within the police and eventually came out in a rather different one, overseeing Nesta’s Data Analytics programme - and it’s thanks to this journey that she greatly values aptitude and attitude over skills.

Tse then talked through a list of somewhat provocative headlines negatively covering AI in healthcare and policing and asked the panel what they thought of them.

Acknowledging her bias, Michelle began by stating that she’s emotionally invested in the police, so it’s difficult for her to criticise them. With the police stretched, demand rising, and the front line shrinking, forces have to look for new, complementary ways of doing things. The negative stories often miss the outcomes of projects - the actual good they do. The positive aspects, such as using these capabilities to better understand threats to the public and to protect people more effectively, are rarely talked about. She went on to describe how the police are making a real effort to develop these capabilities transparently, considering the ethical implications upfront while staying in constant communication with experts. Julie made an interesting point that the negative headlines in healthcare have acted as the seed for more ‘grown up’ conversations about AI in healthcare - she says they ground these concepts in reality and move us away from the ‘AI will save us all’ and ‘AI sounds great, let’s do a thing’ mindsets.

“Do you think the sense of mission in a charity can blind it to the possible ethical problems of the way it is using data?” Tse asked.

Any organisation dealing with vulnerable people tends to approach its cohort with a duty of care, and is therefore generally cautious about these kinds of things, Giselle explained. That doesn’t mean there won’t be mistakes, she continued; there are examples of partnerships between charities and commercial-sector companies which may not be entirely ethically sound. Expanding on partnerships, Michelle talked about Parkinson’s UK’s partnership with Benevolent AI and the challenges they have faced around data ownership, intellectual property, and the immaturity of the legal frameworks surrounding such partnerships. She also expanded on the difficulty of reward - should the person whose data it originally was be rewarded? Julie added that with charities, strong ethical frameworks are usually fundamental to the organisation, so there’s less concern about misuse of data than in the commercial sector.

The floor was then opened for questions from the crowd. On geographical differences in data capability, Giselle explained that some areas have very strong government specialists, some have lots of highly skilled people volunteering, and in some areas non-profits really struggle. A question about treating charities’ data as a revenue-generating asset was also posed to the panel; Julie told us that charities in the UK often work together and share a lot, and that there are rich ongoing conversations about building open data platforms; Michelle added that Nesta are looking into data trusts and data collaboratives as ways to incentivise sharing.

After a few more rounds of questions, some lovely nibbles, and a beer or two, the event drew to a close!

For more details about upcoming events, visit BBC Machine Learning Fireside Chat.
