In this short highlights video you will hear from our contributors about how artificial intelligence and data are central themes for public service media today, especially when it comes to strengthening and personalising relationships with citizens. This is why the European Broadcasting Union (EBU) has launched a strategic AI and Data Initiative.
Here are ten key takeaways:
- BUILD IN BBC VALUES: Public service broadcasting guidelines need to be intrinsically built into how machines learn. Audiences must be put first, as they create the data. Engineers must ensure fairness, accuracy and diversity of outcomes within the parameters of informing, educating and entertaining.
- IMPLEMENTATION IS KEY: Putting editorial guidelines into practice isn’t simple. Bias must be kept out of content recommendations. It is too early to talk of a rigid set of rules, as these will keep evolving.
- PROCESSING THE PUBLIC’S DATA: The media landscape is changing fast. There are more personalised experiences, with products tailored to individual consumers. Television may also start watching the viewers. All this is fraught with ethical questions over how the public’s data is processed and interpreted, and whether people have a right to opt out.
- SHARE AND COLLABORATE: It’s vital for broadcasters to share and collaborate. The technology is advancing so fast that no single organisation can stay across all developments. To compete with the US and China, European broadcasters need to combine forces.
- WHAT’S LEGAL?: Some of the new AI technology such as facial recognition raises questions of legality. What happens when it is used at scale as opposed to amongst small groups? And can it be used ethically in archive searches? There are big questions of human rights and privacy.
- CHILDREN: Even more care needs to be taken with children’s programming. The BBC is working on a Data Ethics Framework, looking at data privacy, localised data storage and other ways to protect young people.
- NEWS: AI can be used in news, both in language translation and in devising new archiving techniques for pictures and video. The metrics needed are accuracy, recency and diversity of content. AI can also help with processing large amounts of data and spotting patterns for major journalistic investigations.
- DISINFORMATION: Disinformation has always been hard to tackle, and it’s a challenge for AI too. The biggest problem for AI is the lack of data transparency. Much online content is stripped of metadata, so it’s hard to tell its provenance. There is also a lot of ‘data decay’ – once accounts are taken down, their content disappears entirely.
- REGULATION: Government is starting to look at regulation of AI. A House of Lords report in 2017 said there was sufficient regulation under GDPR, so blanket AI-specific regulation would be inappropriate at this stage. A House of Commons report on ‘Disinformation and fake news’ highlighted platform responsibility, and the ‘Online harms’ White Paper recommended codes of practice. The Centre for Data Ethics and Innovation (part of DCMS) has reviewed online targeting and personalisation, and the Information Commissioner’s Office is considering an auditing framework for AI.
- DANGER OF NOT GETTING IT RIGHT: AI sometimes produces bizarre or wrong results, so human checking of the output needs to be ever-present. Personalising output for consumers risks broadcasters losing public trust – trust that is vital for their long-term survival in a highly competitive media environment.
Watch the full conference sessions and demos.