<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  
    <title>Research and Development</title>
    <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog" />
    <link rel="self" type="application/atom+xml" href="https://www.bbc.co.uk/rd/blog/atom" />
    <id>https://www.bbc.co.uk/rd/blog/atom.xml</id>
    <updated>2021-11-23T00:43:02Z</updated>
    <subtitle>This is the Research &amp; Development blog, where researchers, scientists and engineers from BBC R&amp;D share their work in developing the media technologies of the future. </subtitle>

    
      <entry>
        <title>Releasing the largest British Sign Language dataset for research</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-11-british-sign-language-dataset-bobsl" />
        <id>https://www.bbc.co.uk/rd/blog/2021-11-british-sign-language-dataset-bobsl</id>

        <published>2021-11-23T00:43:02Z</published>
        <updated>2021-11-23T00:43:02Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            A new dataset of British Sign Language footage to help academic researchers interested in automated sign language recognition has just been released.
          ]]>
        </summary>

        
          
            <author><name>Rob Cooper</name></author>
          
            <author><name>Andy Secker</name></author>
          
        

        
          <category term="Accessibility" />
        
          <category term="Data" />
        
          <category term="Artificial Intelligence &amp; Machine Learning" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
<p>For the last three years, the BBC has worked with <a href="https://www.ox.ac.uk/">University of Oxford</a> and our Data Science Research Partners <a href="https://www.surrey.ac.uk/">Surrey University</a> and <a href="http://www.ucl.ac.uk/big-data/bdi">University College London</a> on an important research project into automated sign language recognition. The first result of this work, <a href="https://www.bbc.co.uk/rd/projects/extol-dataset">the BBC-Oxford British Sign Language Dataset (BOBSL), has just been released</a> and is available for download by the academic community under a non-commercial licence agreement. BOBSL is one of the largest and most comprehensive <a href="https://en.wikipedia.org/wiki/British_Sign_Language">British Sign Language</a> datasets ever produced.</p>

<p>In 2017 we were approached by two of our <a href="https://www.bbc.co.uk/rd/projects/data-science-research-partnership">Data Science Research Partnership (DSRP) partners</a>, Surrey University and UCL, along with <a href="https://www.robots.ox.ac.uk/~vgg/">the Visual Geometry Group at the University of Oxford</a>, to support a bid and sit on the Advisory Board for <a href="https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef=EP/R03298X/1">an academic research project called ExTOL</a>. As part of this support, we discussed creating a dataset from the BBC&rsquo;s large archive of British Sign Language (BSL) recordings. <a href="https://www.bbc.co.uk/iplayer/categories/signed/a-z">The BBC produces signed versions of our most popular programmes</a> for the deaf and hard of hearing community, covering a wide range of genres and programme types.</p>
<p>They planned to analyse BSL footage using a range of linguistic and technical approaches, with the ultimate goal of building a system to interpret human signing automatically. Their proposed system would do this by watching footage of someone using BSL and analysing the pose, hand and head movements, facial expressions and mouthings. By comparing these complex movements with the labelled data we helped supply them with, the system could make an informed guess at what was being signed.</p>
<p>The University of Oxford will be presenting the results of their new system at an upcoming conference, but the dataset they used to train it is being released to the wider academic community now. This will allow other researchers to train their own systems and to compare approaches. The dataset is made up of 1,962 BSL signed programmes from the BBC archive, comprising around 1,400 hours of footage in total. Each programme has an associated transcript, which has been time-aligned with the BSL signer. The signer has been carefully cropped from each programme, and background faces blurred, to aid the training of machine learning systems. The data covers a diverse range of programmes, including drama, documentaries and comedies, and features 39 separate signers.</p>
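<p>To make the shape of the data concrete, here is a purely illustrative Python sketch of how a researcher might iterate over programmes and their aligned transcripts. The field names and record layout below are assumptions for illustration, not the actual BOBSL release format.</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Programme:
    video_path: str    # cropped footage of the BSL signer
    transcript: list   # [(start_s, end_s, text), ...] time-aligned subtitles
    signer_id: int     # one of the 39 signers
    genre: str         # e.g. "drama", "documentary", "comedy"

def training_pairs(programmes):
    """Yield (video segment, text) pairs, e.g. for weakly supervised training."""
    for prog in programmes:
        for start, end, text in prog.transcript:
            yield (prog.video_path, start, end), text
</code></pre>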
<blockquote>
<p><em>High quality large scale datasets are critical for modern research in deep learning. The lack of such datasets for sign language recognition has held back research in this area. BOBSL will enable new approaches to be trained and evaluated at a scale that was previously impossible, and will lead to breakthroughs in automated recognition and linguistic understanding of this important means of communicating.</em></p>
<p style="text-align: right;"><em><a href="https://en.wikipedia.org/wiki/Andrew_Zisserman">Andrew Zisserman</a>, Professor of Computer Vision Engineering, <em>University of&nbsp;</em>Oxford</em></p>
</blockquote>
<p>We&rsquo;re extremely proud to have worked with the University of Oxford and our DSRP Partners in this work. One of the most important things any research and development department should do is to foster innovation. An increasingly important way for an organisation like the BBC to do this is by releasing data for <a href="https://www.bbc.co.uk/rd/topics/artificial-intelligence-machine-learning">machine learning research</a>, which can lead to all sorts of innovation.&nbsp;</p>
<p>For example, automated sign language translation could allow virtual assistants like Siri or Alexa to be adapted for the deaf and hard of hearing community, enable educational firms to create interactive tutors that give learners of BSL instant feedback, or provide written transcripts for BSL videos, making them fully searchable at scale for the first time. Although our data is currently limited to non-commercial use, we hope that academic researchers will prove the viability of such systems and thereby inspire commercial companies to invest in this field.</p>
<p>The Data Science Research Partnership is committed to releasing further datasets in this way. We are working hard with our legal and technical teams to make more data available to our partners and the wider academic community. We&rsquo;re hoping 2022 will be a bumper year for further data releases.</p>
<p>If you&rsquo;re an academic researcher, you can <a href="https://www.bbc.co.uk/rd/projects/extol-dataset">apply to get access to the data</a>.</p>
<p><em>We&rsquo;d like to thank Red Bee Media, who provide BSL translation services for the BBC, for supporting this work.</em></p>
<blockquote>
<p><strong><a href="https://twitter.com/intent/tweet?url=&amp;text=Details%20of%20a%20@BBCRD%27s%20five%20year%20collaboration%20to%20unlock%20the%20potential%20of%20data%20in%20the%20media:%20http://bbc.in/2ugUpsj">Tweet This</a>&nbsp;-&nbsp;<a href="http://www.bbc.co.uk/rd/projects/data-science-research-partnership">Share on Facebook</a></strong></p>
<p><strong><a href="http://www.bbc.co.uk/rd/about/partnerships">BBC R&amp;D - Partnerships</a></strong></p>
<p><strong><strong><a href="https://www.surrey.ac.uk/">University of Surrey</a></strong></strong></p>
<p><strong><strong><strong><a href="http://www.ucl.ac.uk/big-data/bdi">University College London - Big Data Institute</a></strong></strong></strong></p>
<p><strong><a href="https://www.robots.ox.ac.uk/~vgg/">University of Oxford -&nbsp;Visual Geometry Group</a></strong></p>
<p><strong><a href="http://www.bristol.ac.uk/">University of Bristol</a></strong></p>
<p><strong><a href="http://www.datascience.manchester.ac.uk/">University of Manchester - Data Science Institute</a></strong></p>
<p><strong><a href="https://www.ed.ac.uk/data-science">University of Edinburgh - Data Science initiative</a></strong></p>
<p><strong><a href="https://www.imperial.ac.uk/data-science/">Imperial College London - Data Science Institute</a></strong></p>
<p><strong><a href="http://www.qmul.ac.uk/">Queen Mary University of London</a></strong></p>
<p><strong><a href="https://www.ulster.ac.uk/">Ulster University - Data Science</a></strong></p>
<p><strong><a href="https://www.engadget.com/2017/10/19/bbc-machine-learning-research-partnership/">Engadget - The BBC is turning to AI to improve its programming</a></strong></p>
<p><strong><a href="http://www.itpro.co.uk/strategy/29757/bbc-to-develop-new-services-based-on-machine-learning">IT Pro - BBC to develop new services based on machine learning</a></strong></p>
<p><strong><a href="https://www.theverge.com/2017/10/19/16503658/bbc-data-analytics-machine-learning-curation-tv">The Verge - BBC will use machine learning to cater to what audiences want to watch</a></strong></p>
<p><strong><a href="http://advanced-television.com/2017/10/19/bbc-seeks-to-unlock-data-potential/">Advanced Television - BBC seeks to unlock data potential</a></strong></p>
<p><strong><a href="http://www.tvbeurope.com/bbc-plans-five-year-project-unlock-potential-data/">TVB Europe - BBC plans five-year project to 'unlock potential of data'</a></strong></p>
</blockquote>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>DVC-P: Deep video compression with perceptual optimizations</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-11-dvc-p-video-compression-perceptual-optimization" />
        <id>https://www.bbc.co.uk/rd/blog/2021-11-dvc-p-video-compression-perceptual-optimization</id>

        <published>2021-11-12T16:19:40Z</published>
        <updated>2021-11-12T16:19:40Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            Continual developments in compression are needed to deliver bandwidth-intensive video formats such as ultra-high definition or panoramic video - we are exploring new approaches which use artificial intelligence techniques.
          ]]>
        </summary>

        
          
            <author><name>Marta Mrak</name></author>
          
        

        
          <category term="Video" />
        
          <category term="Artificial Intelligence &amp; Machine Learning" />
        
          <category term="Quality" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
            <p>Modern streaming services require a great deal of compression to deliver bandwidth-intensive video formats such as ultra-high definition or panoramic video. These developments demand continual improvement, and we are exploring new approaches which use artificial intelligence techniques in video compression. Processes that use deep neural networks could perform better and bring new functionalities.</p>

<p>Existing compression methods commonly focus on optimising the video's objective quality (measured by Peak Signal to Noise Ratio, or PSNR) and its bit rate. However, since the quality audiences perceive can differ from these objective measures, it is arguably more important to optimise the perceptual quality of decoded videos. This can be achieved by considering the Quality of Experience (QoE), especially when videos are compressed at low bit rates, as that is when quality typically suffers. To tackle this problem we are researching how to optimise perceptual quality using improved networks for compressing video at low bit rates.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6181d2b906d63ec8810000ae/chart1.png" alt="An illustration of the proposed DVC-P network." /></p>
<p style="text-align: center;">The proposed DVC-P network</p>
<p>We have partnered with other academic institutions to study deep video compression with perceptual optimisations (DVC-P). Our research uses a popular Deep Video Compression (DVC) network, improving it with perceptual optimisations. Our three proposed improvements are shown in green in the network structure image above. The '-P(1/3)' and '-P(2/3)' modules can enhance the synthesis of pixels, and the '-P(3/3)' module can help optimise network parameters during its training.</p>
<p>We use two strategies to achieve perceptual improvements. The first, implemented in two generator nets (the green-shaded nets in the image), improves commonly used mechanisms that restore the original resolution of videos. The second introduces a discriminator network to enable the video compression network training in a perceptually driven way.</p>
<p>Residuals of predicted pixels and motion vectors are reconstructed in the DVC-P encoder using generator networks to produce reference frames for inter coding. Instead of using the common strided deconvolution in these generator networks, we use nearest-neighbour interpolation to upsample and restore the video's original resolution, eliminating the checkerboard artefacts that can appear in sequences encoded with the original DVC framework. Moreover, a discriminator network and a mixed loss are used during training to help our network trade off among distortion, perception and rate.</p>
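<p>As a rough sketch of these ideas (this is illustrative PyTorch, not the released DVC-P code, and the layer sizes and loss weights are placeholder assumptions): a strided deconvolution is swapped for nearest-neighbour upsampling followed by an ordinary convolution, and the training objective mixes distortion, adversarial and rate terms.</p>
<pre><code>import torch.nn as nn

# Nearest-neighbour upsampling + convolution in place of a strided
# deconvolution - a standard way to avoid checkerboard artefacts.
class UpsampleBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

def mixed_loss(distortion, adversarial, rate, w_d=1.0, w_p=0.1, w_r=0.01):
    # Trade off distortion, perception (discriminator) and rate;
    # the weights here are placeholders, not values from the paper.
    return w_d * distortion + w_p * adversarial + w_r * rate
</code></pre>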
<p>Compared with the baseline DVC, our proposed method can generate videos with higher perceptual quality because of these two improvements.</p>
<blockquote>
<p>Read more about this approach in our conference paper with our partners from the <a href="http://www.cvc.uab.es/">Computer Vision Center in Barcelona</a>&nbsp;and China&rsquo;s <a href="https://www.xidian.edu.cn/">Xidian University</a>&nbsp;and <a href="https://www.nwpu.edu.cn/">Northwestern Polytechnical University</a>: Saiping Zhang,&nbsp;Marta&nbsp;Mrak, Luis Herranz, Marc G&oacute;rriz, Shuai Wan, Fuzheng Yang, &ldquo;DVC-P: Deep Video Compression with Perceptual Optimizations&rdquo;, In Proc. IEEE VCIP 2021 [<a href="https://arxiv.org/abs/2109.10849">preprint</a>]</p>
</blockquote>
<h2>What next</h2>
<p>Our current work is based on the DVC network consisting of multiple modules referring to those in the traditional block-based hybrid coding framework. The next step will be to expand this work and combine it with other video compression networks to benefit from further improvements in the field.</p>
<blockquote>
<p><strong><a href="https://twitter.com/intent/tweet?url=&amp;text=Continual developments in compression are needed to deliver bandwidth-intensive video formats such as ultra-high definition or panoramic video - @BBCRD is exploring new approaches which use artificial intelligence techniques: ">Tweet This</a>&nbsp;-&nbsp;<a href="https://www.facebook.com/sharer/sharer.php?u=https://www.bbc.co.uk/rd/blog/2021-11-dvc-p-video-compression-perceptual-optimization">Share on Facebook</a></strong></p>
<p><strong><a href="https://github.com/SaipingZhang/DVC_P">Github -&nbsp;DVC-P: Deep Video Compression with Perceptual Optimizations</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/projects/video-coding">BBC R&amp;D - Video Coding</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-10-video-compression-machine-learning">BBC R&amp;D - Faster Video Compression Using Machine Learning</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-09-artificial-intelligence-colourisation-video">BBC R&amp;D - AI &amp; Auto Colourisation - Black &amp; White to Colour with Machine Learning</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2018-11-bbc-music-day-cognitus">BBC R&amp;D - Capturing User Generated Content on BBC Music Day with COGNITUS</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-05-av1-codec-streaming-processing-hevc-vvc">BBC R&amp;D - Testing AV1 and VVC</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2016-09-turing-codec">BBC R&amp;D - Turing codec: open-source HEVC video compression</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2018-06-comparison-of-recent-video-coding-technologies-in-mpeg-and-aomedia">BBC R&amp;D - Comparing MPEG and AOMedia</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2016-10-alliance-open-media-video-compression">BBC R&amp;D - Joining the Alliance for Open Media</a></strong></p>
</blockquote>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>Sounds Amazing 2021</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-11-sounds-amazing-2021" />
        <id>https://www.bbc.co.uk/rd/blog/2021-11-sounds-amazing-2021</id>

        <published>2021-11-29T11:05:58Z</published>
        <updated>2021-11-29T11:05:58Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            Announcing this year's edition of our free online conference celebrating innovation in audio.
          ]]>
        </summary>

        
          
            <author><name>Chris Baume</name></author>
          
        

        
          <category term="Audio" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
            <p>I’m thrilled to announce that Sounds Amazing is back! On Thursday 2nd December 2021, we will again be running a free online conference celebrating innovation in audio. LJ Rich will return to host an extraordinary line-up of standout technology and creativity. Head of BBC Radio 1, Aled Haydn Jones, will introduce this year’s event, which features Nicolas Becker on his Oscar-winning work on “Sound of Metal”, multi-platinum selling songwriter and producer Benbrick, and BAFTA winner Oliver Tarney on editing the sound for the latest Bond movie “No Time to Die”. Over a dozen more speakers will present and discuss topics ranging from accessibility and sound branding, to in-car audio and personalisation. Make sure you <a href="https://www.bbc.co.uk/send/u95094482">book your place for this unmissable free online event</a>.</p>

            <blockquote>
<p><strong><a href="https://www.bbc.co.uk/send/u95094482">Register for this free event using this form</a>.</strong></p>
<p><strong><a href="https://www.bbc.co.uk/academy/events/sounds-amazing-2021/">See the Sounds Amazing website for an up-to-date schedule with timings.</a></strong></p>
</blockquote>
<p>Sounds Amazing is a biannual event organised jointly by the BBC Academy and the BBC Research &amp; Development Audio Team. Following the remarkable success of last year&rsquo;s online event, which attracted over 5,000 live views, we couldn&rsquo;t wait another year until the next one. We have organised this jam-packed one-day event to keep you up-to-date on all the latest developments and innovation in the audio world. Below is an overview of our exciting line-up.</p>
<h2>Morning: Accessibility</h2>
<p>The theme of our morning session is accessibility and aural diversity. We will learn about how hearing loss can be portrayed and modelled, and show how technology can be used to improve listening experiences.</p>
<p>Nicolas Becker will open the session with a keynote talk and Q&amp;A on his work that won this year&rsquo;s Oscar for Best Sound. 'Sound of Metal' follows the journey of a metal drummer as he loses his hearing, and the audio is cleverly used to help the audience hear what the drummer is experiencing.</p>
<p>Our panel features three leading researchers on accessibility technology. Angeliki Mourgela, PhD candidate at Queen Mary University of London, will demonstrate her work on modelling hearing loss. Alinka Greasley, Associate Professor in Music Psychology at the University of Leeds, will discuss how hearing aid technology affects people&rsquo;s music experiences. Andrew Hugill, Professor of Creative Computing at the University of Leicester, will talk about his experimental compositions for improving the experience of aurally diverse audiences.</p>
<h2>Midday: Sound in short</h2>
<p>These sessions feature several short talks on new audio technology and top creativity to inspire and inform your productions.</p>
<p>Jon Manel, Podcast Commissioner at BBC World Service, will talk about the ideas put forward for the BBC International Podcast Competition - including an exclusive preview of winning entry 'Dear Daughter', which airs on 6th December. We&rsquo;ll then get some top tips from Kahl Henderson, dubbing mixer on the BBC hit 'Vigil', and multi-award-winning audio artist and poet Axel Kacoutie from the Guardian Podcast.</p>
<p>Deta Ward and Simon Smith from the BBC Blue Room will introduce us to the brave new world of in-car audio, where Spotify is pitching against Amazon Echo and Apple CarPlay. Finally, Laura Harrison from BBC R&amp;D will present <a href="https://www.bbc.co.uk/makerbox">BBC MakerBox</a>, which makes innovative production tools available to producers for creating novel media experiences.</p>
<h2>Afternoon: Audio creativity</h2>
<p>Our final session covers a range of exceptional examples of creativity in sound. It includes panels on sound branding and personalised audio experiences, plus interviews with leading industry talent.</p>
<p>The first panel will discuss how to creatively brand your sound, to help it stand out in a competitive landscape. BBC Head of Brand, Jane Lingham, will talk about how she worked with top advertising agency Wolff Olins to create the new BBC sting. Alexander Parsons - composer of lockdown standouts 'Grayson&rsquo;s Art Club' and 'Surviving Covid' - will share how he created these memorable and characterful soundtracks.</p>
<p>Our Audio Masterclass will be delivered by BAFTA winner and four-time Oscar nominee Oliver Tarney. He will talk about editing the sound for the latest James Bond movie 'No Time to Die', and explain how gaming sound influenced his process.</p>
<p>Personalisation allows experiences to be adapted and tailored to individuals, opening up extraordinary creative possibilities. Ian Forrester, Mike Armstrong and Dave Bevan from BBC R&amp;D will talk about the tools and experiences they have developed for personalisation. Professor Adrian Hilton from the University of Surrey will introduce the 'AI for Personalised Media Experiences' collaborative project which launched in October.</p>
<p>Paul Carter, also known as Benbrick, is the Peabody Award-winning producer and composer of 'Have You Heard George's Podcast?'. The podcast won 5 Gold British Podcast Awards in 2019, including the coveted Podcast of the Year award. He will join us for an in-depth interview and Q&amp;A about his wide-ranging work.</p>
<blockquote>
<p><strong><a href="https://twitter.com/intent/tweet?url=&amp;text=Sign%20up%20to%20watch%20@BBCRD%27s%20free%20online%20conference%20showcasing%20the%20most%20exciting%20and%20innovative%20developments%20in%20audio%20production%20and%20technology%20from%20film,%20TV,%20gaming%20and%20more:%20https://bbc.in/34ttYWb">Tweet This</a>&nbsp;-&nbsp;<a href="https://www.facebook.com/sharer/sharer.php?u=https://www.bbc.co.uk/rd/blog/2020-08-audio-sounds-amazing-event">Share on Facebook</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-08-casualty-tv-drama-audio-mix-speech-hearing">BBC R&amp;D - Casualty, Loud and Clear - Our Accessible and Enhanced Audio Trial</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/taster/pilots/casualty-ae-audio">BBC Taster - Casualty: A&amp;E Audio</a></strong></p>
<p><strong><strong><strong><a href="https://www.bbc.co.uk/rd/blog/2020-07-synchronised-audio-devices-orchestrator">BBC R&amp;D - How we made the Audio Orchestrator - and how you can use it too</a></strong></strong></strong></p>
<p><strong><strong><strong><strong><strong><a href="https://www.bbc.co.uk/makerbox/tools/audio-orchestrator">BBC MakerBox - Audio Orchestrator</a></strong></strong></strong></strong></strong></p>
<p><strong><strong><a href="http://www.bbc.co.uk/rd/work_streams/audio-research">BBC R&amp;D - Audio Research</a></strong></strong></p>
</blockquote>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>AudioWatch - Live audio monitoring for Autumnwatch 2021</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-11-live-audio-monitoring-autumnwatch-ai" />
        <id>https://www.bbc.co.uk/rd/blog/2021-11-live-audio-monitoring-autumnwatch-ai</id>

        <published>2021-11-10T15:28:49Z</published>
        <updated>2021-11-10T15:28:49Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            Continuing our use of AI in the production of the BBC's nature programmes, we've investigated how artificial intelligence techniques can be applied to audio.
          ]]>
        </summary>

        
          
            <author><name>Sebastian Ward</name></author>
          
            <author><name>Robert Dawes</name></author>
          
        

        
          <category term="Audio" />
        
          <category term="Artificial Intelligence &amp; Machine Learning" />
        
          <category term="Production" />
        
          <category term="Automation" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
            <p>Over the last few editions of the Watches (<a href="https://www.bbc.co.uk/rd/blog/2020-06-springwatch-artificial-intelligence-remote-camera">Springwatch</a>, Autumnwatch and <a href="https://www.bbc.co.uk/rd/blog/2021-04-winterwatch-artificial-intelligence-automated-monitoring">Winterwatch</a>), BBC Research &amp; Development has been collaborating with <a href="https://productions.bbcstudios.com/our-production-brands/the-natural-history-unit">the BBC’s Natural History Unit</a> (NHU) to investigate the application of artificial intelligence technologies such as machine learning and computer vision to live natural history production. For this year’s Autumnwatch, we’ve expanded the work to investigate how we can apply similar techniques to the audio from the show’s wildlife cameras.</p>

<p>Throughout the week of Autumnwatch the audience could watch live streams of a selection of the wildlife cameras for 12 hours each day. The Live Stream team manage this and always have an operator watching and listening to ensure that the video and audio are of acceptable quality and comply with the BBC&rsquo;s editorial guidelines. They try to ensure that the audio remains in keeping with the natural setting of the production and that man-made noises such as vehicle noise or speech are avoided.</p>
<p>One particular challenge for an operator is detecting the presence of unsafe audio and then working out which stream it is appearing on.&nbsp;Our system sets out to assist the team with this task. A single member of the production team will often have to monitor the audio from up to eight feeds at once. They are normally listening to a mix of several of the audio sources, so after hearing some unsafe audio on the mix, they may have to then go through all the sources one by one to try and locate the problem sound. This can potentially take several minutes if the problem sound is intermittent and so is difficult to track down. Additionally, if the operator is listening to a single source, they can miss problem sounds on all the other sources. They may become reliant on other members of the team discovering the problem and passing that information on.</p>
<p>Our tools can detect unsafe audio and alert the production team to its presence on a particular stream in a few seconds. Fundamentally it is hard for a person to listen to eight different audio streams at once. However, it is relatively easy to watch eight different videos at once. So our system translates the problem audio into a visual warning on the operator&rsquo;s screen. This warning also remains on the screen for several seconds, so it&rsquo;s easy to spot even if the problem sounds might have only been brief.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/618328b706d63ee2cb000004/image1.png" alt="Screenshot of the multi-camera view with speech icons overlaid on two of the feeds." /></p>
<p style="text-align: center;">Our monitoring system warns of speech detected on two of the cameras</p>
<h2>Audio tagging</h2>
<p>Before we can warn the production team about the audio, we need to determine what we&rsquo;re hearing. For this, we use <a href="https://arxiv.org/abs/1912.10211">a machine learning-based audio classifier</a>. We chose the classifier because of its high accuracy in detecting a wide range of sounds, hierarchically described by the <a href="https://research.google.com/audioset/ontology/index.html">Google AudioSet ontology</a>. The classifier achieves state-of-the-art performance in AudioSet tagging, with a mean average precision (mAP) of 0.439.</p>
<p>The AudioSet ontology contains a large selection of different sounds with a great deal of variety. This allows our audio monitoring system to be used for any sort of audio content, opening up the possibility for a range of applications in all sorts of productions and programmes.</p>
<p>Our system takes in streams from the cameras and puts them into <a href="https://www.bbc.co.uk/rd/blog/2021-03-springwatch-automatic-analysis-cloud-computing">our cloud-based media management system</a>. We then take a live stream of the audio from this recording into our processing tool. This tool chunks the audio up into short clips of around a second before passing them on to the tagging system. When we receive the results from the tagger, we examine the scores for different sounds and see if any of the problematic audio types have scored highly. If they have, we warn the monitoring team.</p>
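<p>In outline, the processing loop looks something like the following Python sketch. This is a simplification rather than our production code: <code>tag_clip</code> stands in for the AudioSet classifier, <code>warn</code> for the interface update, and the threshold values are illustrative.</p>
<pre><code>import numpy as np

SAMPLE_RATE = 48_000
CLIP_LEN = SAMPLE_RATE  # roughly one second of audio per clip

# Per-class thresholds: speech should trip at much lower scores than vehicles.
THRESHOLDS = {"Speech": 0.2, "Vehicle": 0.6}

def monitor(stream, tag_clip, warn):
    buffer = np.empty(0, dtype=np.float32)
    for samples in stream:                      # live audio arrives in blocks
        buffer = np.concatenate([buffer, samples])
        while len(buffer) >= CLIP_LEN:
            clip, buffer = buffer[:CLIP_LEN], buffer[CLIP_LEN:]
            scores = tag_clip(clip)             # {AudioSet class: score}
            for label, threshold in THRESHOLDS.items():
                if scores.get(label, 0.0) >= threshold:
                    warn(label, scores[label])  # surface on the operator's screen
</code></pre>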
<p>Before using our system live at Autumnwatch, we wanted to put it through its paces. A notable issue that we encountered here was a lack of suitable audio to test it with. To address this, we ran a set of experiments in conjunction with R&amp;D&rsquo;s audio team, in which we generated 4,400 hours of unsafe audio by mixing clean Springwatch audio with a selection of relevant sound effects from <a href="https://freesound.org/">Freesound</a>. We used these to test the tagging system, and to determine the sensitivities at which the various unsafe sounds would be picked up. Understanding these sensitivities and so being able to set detection thresholds is important because speech, for example, is far more likely to cause compliance issues and so should be picked up with far greater sensitivity than, say, vehicles, which are not.</p>
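<p>The test material was synthesised along these lines (again a hedged sketch; the gain level and looping strategy are illustrative assumptions, not our exact procedure): overlay a sound effect onto clean wildlife audio and normalise to avoid clipping.</p>
<pre><code>import numpy as np

def mix_unsafe(clean: np.ndarray, effect: np.ndarray,
               effect_gain_db: float = -6.0) -> np.ndarray:
    """Overlay an 'unsafe' sound effect onto clean audio at a chosen level."""
    gain = 10 ** (effect_gain_db / 20)
    effect = np.resize(effect, clean.shape)       # loop/trim effect to length
    mixed = clean + gain * effect
    return mixed / max(1.0, float(np.abs(mixed).max()))  # avoid clipping
</code></pre>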
<h2>Remote Production Cluster integration</h2>
<p>A key task for this tool was to have it integrate as naturally as possible into the existing monitoring workflow. Much of the remote production work makes use of the Remote Production Cluster, a collection of cloud-based tools put together by colleagues in BBC News. This includes facilities to ingest and route video - and then generate a multiviewer displaying those camera feeds.</p>
<p>We initially created a web interface that closely mirrors the layout of camera sources that the monitoring team see in their multiviewer; a box for each camera feed with either a green speaker symbol for clean audio or a red speaker when there is problem audio. When there is a problem, we also display a larger red icon denoting what type of audio has been detected. These pictograms should be simple to understand at a glance so the operator can quickly tell what kind of audio is being detected. Examples include a speech bubble for speech and a car for vehicle noise.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6183292306d63e6653000043/image2.png" alt="A screenshot of the web interface showing a grid of eight spaces with speech icons over two of the cells on the grid." /></p>
<p style="text-align: center;">Web interface displaying audio warnings</p>
<p>The web interface would add an additional window that the monitoring team would need to keep track of. So to better integrate the monitoring with the existing workflow, colleagues in BBC News were able to take our web interface, turn it into a video source and overlay it on the existing multiviewer. This makes for a far easier-to-use monitoring tool; only one screen needs to be visually monitored to see the full set of available data.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/61851d8c06d63eed780000b8/image3-2.png" alt="Screenshot of multi-camera view with icons overlaid on the video feeds showing what sounds are present." /></p>
<h2>Results</h2>
<p>We tested the system live during Autumnwatch 2021. The latency between detecting a sound and displaying a warning was about 2 seconds: well below the previous average monitoring delay. The operators found it extremely useful for gauging the health of audio from a particular stream, which meant they could ensure unhealthy streams were kept at low volume in the outgoing audio mix. A notable example came during several hours of streaming when children playing could be heard in the distance on several streams. The audio monitoring software reliably picked this up and highlighted which of the eight cameras had this problematic audio.</p>
<p>One problem we encountered was wind. During the first day of streaming, the wind was so strong that several camera feeds were determined to contain loud vehicle noise. The classifier construed the wind noise as that of a nearby lorry or car due to its volume. Having an additional metric to distinguish wind from vehicles - a task that is sometimes difficult even for a human listener when the sound is picked up through a microphone, compressed and sent over the internet - would prove useful in future Watches.</p>
<p>Our tests at Autumnwatch showed how potentially useful the tool could be and proved a great opportunity to get feedback from the streaming team on how we can make it more helpful for them. The tests were also a catalyst to develop ways to integrate our tools into the existing infrastructure and workflow at Autumnwatch. We intend to use what we have learnt to develop our tools further and then apply improved versions at Winterwatch in the new year.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/61851d6f06d63eed780000b6/image4-2.png" alt="Screenshot of the multi-camera view with a footstep icon overlaid on one of the feeds." /></p>
<p style="text-align: center;"><span style="font-size: small;">Warnings of footsteps on the nighttime cameras</span></p>
<blockquote>
<p><strong><a href="https://twitter.com/intent/tweet?url=&amp;text=Continuing their use of AI in the production of the BBC's nature programmes, @BBCRD investigated how artificial intelligence techniques can be applied to audio: https://bbc.in/3qa3CDa">Tweet This</a>&nbsp;-&nbsp;<a href="https://www.facebook.com/sharer/sharer.php?u=https://www.bbc.co.uk/rd/blog/2021-11-live-audio-monitoring-autumnwatch-ai">Share on Facebook</a></strong></p>
<p><strong><strong><a href="https://www.bbc.co.uk/rd/projects/intelligent-video-production-tools">BBC R&amp;D - Intelligent Video Production Tools</a></strong></strong></p>
<p><strong><strong><strong><a href="https://www.bbc.co.uk/blogs/natureuk/entries/da7d59d4-8a77-4d49-b211-8c28d6b47a2a">BBC Winterwatch - Where birdwatching and artificial intelligence collide</a></strong></strong></strong></p>
<p><strong><a href="https://www.bbc.co.uk/programmes/w3csz985">BBC World Service - Digital Planet: Springwatch machine learning systems</a></strong></p>
<p><strong><a href="https://www.tvbeurope.com/production-post/how-bbc-rd-artificial-intelligence-to-watch-springwatchs-cameras">TVB Europe - Artificial intelligence and Springwatch</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/programmes/b007qgm3">BBC Springwatch</a>&nbsp;|&nbsp;<a href="https://www.bbc.co.uk/programmes/b0079t1p">Autumnwatch</a>&nbsp;|&nbsp;<a href="https://www.bbc.co.uk/programmes/p012msk2">Winterwatch</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2018-11-cloud-production-ingesting-video-as-a-service">BBC R&amp;D -&nbsp;Cloud-Fit Production Update: Ingesting Video 'as a Service'</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2018-07-ip-production-centre-continuous-delivery-software-media-broadcast">BBC R&amp;D -&nbsp;Tooling Up: How to Build a Software-Defined Production Centre</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2017-09-ip-studio-nmos-video-cloud-production">BBC R&amp;D - IP Studio Update: Partners and Video Production in the Cloud</a></strong></p>
<p><strong><a href="https://www.ibc.org/content-management/moving-production-to-the-cloud/1069.article">IBC 365 - Production and post prepare for next phase of cloud-fit technology</a></strong></p>
</blockquote>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>Developing automated user generated content filtering tools for news events</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-11-automating-video-editing-news-story-machine-learning" />
        <id>https://www.bbc.co.uk/rd/blog/2021-11-automating-video-editing-news-story-machine-learning</id>

        <published>2021-11-03T15:26:55Z</published>
        <updated>2021-11-03T15:26:55Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            User generated content can be a great addition to a news story, but it can be hard to curate an overwhelming amount of material. We are developing tools to help editors automate the process.
          ]]>
        </summary>

        
          
            <author><name>Marta Mrak</name></author>
          
        

        
          <category term="Journalism" />
        
          <category term="Artificial Intelligence &amp; Machine Learning" />
        
          <category term="Automation" />
        
          <category term="Video" />
        
          <category term="Editing" />
        
          <category term="Production" />
        
          <category term="Live Events" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
            <p>The ability to capture videos on mobile phones has introduced a potentially unlimited number of &#39;on the scene&#39; views to news editors covering events. These types of video often capture the most unique, relevant and authentic moments, which are essential for a good story. </p>

<p>Content generated by those attending an event brings immediacy and enriched coverage, but it can also flood editors with a constant stream of information that needs to be curated. Editors need to judge the newsworthiness of the material to choose which video clips should be selected for use alongside other footage to create an engaging and coherent story.</p>

<p>This skilful task requires manual searches of massive amounts of data, which takes a great deal of time and resources. BBC Research &amp; Development is collaborating with the <a href="https://www.unl.pt/en">NOVA University of Lisbon</a> to <a href="https://www.bbc.co.uk/rd/blog/2020-05-automated-news-stories-user-generated-journalism">automate this process</a>. Below, NOVA&#39;s <strong>Joao Magalhaes</strong> explains how we are developing content-filtering tools to make searching quick and effective while editors maintain complete control of the final result.</p>

            <p>When illustrating a news story, the goal is to create an informative, cohesive and appealing visual sequence that best conveys the timeline of events. The package is structured into story segments that describe parts of the event. Many things influence why producers and editors use different pieces of content to represent specific segments. Our research aims to derive a computational framework for visual storyline illustration that closely follows the principles and criteria employed by editors in the newsroom.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/61816cd206d63ea877000001/image-1.png" alt="An illustration of various segments within a story, the variety of clips and footage that could illustrate these segments, and the possible transitions between each of these clips and segments." /></p>
<p style="text-align: center;">Visual Storyline illustration framework.</p>
<p>This work continues our previous collaboration with NOVA which introduced a benchmark for&nbsp;<a href="https://www.bbc.co.uk/rd/blog/2020-05-automated-news-stories-user-generated-journalism">automated news storytelling</a>. Our challenge now is to&nbsp;<a href="https://www.bbc.co.uk/rd/topics/automation">automate</a>&nbsp;the editorial process to assist news editors.</p>
<p>To do this, we need, firstly, to find the right video footage from all the available sources and, secondly, to understand the timeline of events that feature in the news story.</p>
<h2>Our Approach</h2>
<p>Our system to automate the selection of images or video clips for a story or news report consists of three steps:</p>
<ul>
<li>The editor enters a story topic and its story segments as a text query.</li>
<li>The framework selects images or videos that illustrate the story segments.</li>
<li>This compilation is then presented to the editor for selection and post-editing.</li>
</ul>
<p>Instead of struggling to decide between the many different ways to illustrate the story, the editor can iteratively browse different sequences created from the footage and work from there.</p>
<p>We designed a graph-based framework to semi-automate a selection of footage so that the visual storyline is succinct, cohesive and the sequence has an interesting and relevant narrative.</p>
<p>The first part of this process focuses on story transitions from one story segment to another. Deciding how the transitions happen is a challenge because it will ultimately affect the visual quality of the story, as judged by a human. Therefore, to organise the story segments and define the transitions, our approach predicts the perceived transition quality of neighbouring segments by analysing their visual and semantic characteristics.</p>
<p>We do this by formally defining the transition between two images based on a distance ensemble of multimodal visual features (e.g. colour, luminance, objects, etc.). We then use this definition to automatically assess the transition between many sets of images, evaluating the quality by analysing their visual and semantic characteristics.</p>
<p><a href="https://www.bbc.co.uk/rd/topics/artificial-intelligence-machine-learning">Machine learning</a> tools create the transition between two images; in particular, transition points are decided using an ensemble of regression trees over a set of distances between multimodal features in the footage we provided. We trained a gradient boosted regressor to learn which elements maximise the quality of transitions, with ground truth obtained by human annotators.</p>
<p>The proposed framework tackles the second important task: moving beyond the transitions between separate segments in one packaged report to selecting specific footage to represent each segment. To do this, it uses another machine learning technique, multimodal graph-based methods, to look at the storyline as a whole and produce a visual storyline, as shown in the image above, by finding a set of candidate multimodal sources to illustrate each segment.</p>
<p>Two fundamentally different structures are considered: a bipartite graph and a multipartite graph. We optimise the bipartite graph to create visual storylines with the best possible sequential transitions while considering the relevance of the content. The visual storyline is obtained by finding the shortest path in the graph, where the distance between elements of neighbouring segments is defined by the predicted transition quality and relevance of the content. On the other hand, the multipartite graph assesses transitions between all elements (the footage) available for the story, optimising for maximal cohesion. This process leverages the possibility that individual transition quality is affected by the remaining elements of the storyline.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/61816d8606d63e30150000ff/image-2.png" alt="An illustration of the differences bipartite and multipartite graphs as described above." /></p>
<p style="text-align: center;">The contrast between local transition (left) and global transition modelling (right).</p>
<p>This graph-based formulation is capable of representing visual storylines taking into account numerous images and videos. It provides the necessary flexibility to adapt to different settings (e.g. arbitrary number of story segments). Most importantly, it results in a computationally efficient solution that can quickly generate alternative visual storylines so that it can be used interactively.</p>
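<p>To illustrate the bipartite formulation, here is a toy Python sketch using networkx (an implementation choice of ours for illustration; the paper does not prescribe one). Each story segment contributes a layer of candidate clips, edges carry a non-negative cost derived from predicted transition quality and relevance, and the storyline is the shortest start-to-end path.</p>
<pre><code>import networkx as nx

def best_storyline(segments, cost):
    """segments: list of lists of candidate clips; cost(a, b): edge weight
    (lower = better transition), assumed non-negative."""
    g = nx.DiGraph()
    for clip in segments[0]:
        g.add_edge("start", (0, clip), weight=0.0)
    for i in range(len(segments) - 1):
        for a in segments[i]:
            for b in segments[i + 1]:
                g.add_edge((i, a), (i + 1, b), weight=cost(a, b))
    for clip in segments[-1]:
        g.add_edge((len(segments) - 1, clip), "end", weight=0.0)
    path = nx.shortest_path(g, "start", "end", weight="weight")
    return [clip for _, clip in path[1:-1]]  # drop the start/end sentinels
</code></pre>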
<h2>What were the key takeaways?</h2>
<p>We carried out user testing, gaining new insights into what impacts the perception of news storyline illustrations. We conclude that a storyline is perceived best when targeting maximum cohesion among all story illustrations (using our multipartite graph). Additionally, our study shows that maximum cohesion also leads to a better sense of the relevance of the news story illustration.</p>
<p>When analysing the performance of our framework at the point of transition quality evaluation, we observed that the most determinant visual characteristic responsible for achieving higher transition quality is semantic similarity. Our experiments show that storylines with the best quality avoid abrupt semantic shifts, which can be done by placing images and footage that show similar things close together.</p>
<p>The colour difference between sequential images in a storyline should be minimised to make the news package appealing; this supports our assumption that using content with similar colour palettes is a solid way to optimise the quality of visual storylines.</p>
<p>The result of our work is a framework that can highlight some of the methodological patterns editors use when illustrating storylines, despite the subjective nature of the task. It is possible to systematically and accurately predict the quality of transitions in an automated way, leaving the editor to make any final adjustments.</p>
<blockquote>
<p>You can read more about this work in our paper: Gon&ccedil;alo Marcelino, David Semedo, Andre Mourao, Saverio Blasi, Joao Magalhaes and Marta Mrak, "Assisting News Media Editors with Cohesive Visual Storylines", in Proc. ACM Multimedia 2021. [<a href="https://doi.org/10.1145/3474085.3475476">doi</a>] [<a href="https://arxiv.org/abs/2110.06805">preprint</a>].</p>
</blockquote>
<blockquote>
<p>This work was carried out as part of the <a href="https://www.bbc.co.uk/rd/projects/cognitus">H2020 COGNITUS project</a>, in collaboration with the <a href="https://www.fct.unl.pt/en">School of Science and Technology</a> from <a href="https://www.unl.pt/en">Universidade NOVA de Lisboa</a>.</p>
</blockquote>
<blockquote>
<p><strong><a href="https://twitter.com/intent/tweet?url=&amp;text=User generated content can be a great addition to a news story, but it can be hard to curate an overwhelming amount of material. @BBCRD are developing tools to help editors automate the process: https://bbc.in/3EFf39X">Tweet This</a>&nbsp;-&nbsp;<a href="https://www.facebook.com/sharer/sharer.php?u=https://www.bbc.co.uk/rd/blog/2021-11-automating-video-editing-news-story-machine-learning">Share on Facebook</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/projects/cognitus">BBC R&amp;D - Helping to automate storytelling for news events</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/projects/cognitus">BBC R&amp;D - COGNITUS</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2018-11-bbc-music-day-cognitus">BBC R&amp;D - Capturing User Generated Content on BBC Music Day with COGNITUS</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2017-08-cognitus-edinburgh-festival">BBC R&amp;D - COGNITUS at the Edinburgh Festival 2017</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2018-08-artificial-intelligence-production">BBC R&amp;D - AI Opportunities: Transforming Coverage of Live Events</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2021-05-video-quality-artificial-intelligence-colourisation">BBC R&amp;D - AI &amp; Auto colourisation: Chasing rainbows with transformers</a></strong></p>
</blockquote>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>Sustainable futures: Designing a secure carbon calculator</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-10-sustainability-carbon-calculator-privacy-data" />
        <id>https://www.bbc.co.uk/rd/blog/2021-10-sustainability-carbon-calculator-privacy-data</id>

        <published>2021-10-26T12:17:40Z</published>
        <updated>2021-10-26T12:17:40Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            Investigating the potential of a personal data-driven climate impact calculator that puts users' privacy first.
          ]]>
        </summary>

        
          
            <author><name>Galen Reich</name></author>
          
        

        
          <category term="Data" />
        
          <category term="Sustainability" />
        
          <category term="Content Formats" />
        
          <category term="Participation" />
        
          <category term="Personalisation" />
        
          <category term="Audience Research" />
        
          <category term="User Interfaces" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
<p>Data. Sustainability. Privacy. Critical issues that affect us all. BBC Research &amp; Development is exploring these three areas, and my fellow graduates from the 2019 <a href="https://www.bbc.co.uk/careers/trainee-schemes-and-apprenticeships/trainee-schemes/research-technologists">intake</a> - Duncan Walker, Mhairi Finlayson, Sebastian Ward, Fearn Bishop, and Callum Lunn - have all been investigating the potential of a personal data-driven climate impact calculator that puts users&#39; privacy front and centre.</p>

            <h2>Climate change</h2>
<p>Mention climate change and many people think of documentaries like <a href="https://www.bbc.co.uk/programmes/p04tjbtx">Blue Planet</a> and <a href="https://www.bbc.co.uk/programmes/b006mywy">Planet Earth</a> with David Attenborough's familiar and thought-provoking narration. The BBC has an important role in <a href="https://www.bbc.co.uk/news/science-environment-56837908">communicating the threats of climate change</a>, helping everyone become more aware of their impact on the planet that we all call home.</p>
<p>But leading a sustainable lifestyle <a href="https://www.bbc.co.uk/bbcthree/article/b2e7ee32-ad28-4ec4-89aa-a8b8c98f95a5">can be daunting</a>, and it's often unclear what we can do to have a significant positive impact on the environment. That's what we want to change. We want an easy way for anyone to see and understand their impact on the Earth's climate.</p>
<p>Many existing carbon calculators are designed to capture a user&rsquo;s climate impact at a given instant and often focus on a specific type of impact. Many calculators encourage carbon offsetting (paying a company to remove the equivalent quantity of CO2 from the atmosphere). Offsetting is great, but it&rsquo;s <a href="https://www.unep.org/news-and-stories/story/carbon-offsets-are-not-our-get-out-jail-free-card">just one part</a> of the journey towards <a href="https://www.bbc.co.uk/news/uk-scotland-52939325">net-zero carbon emissions by 2050</a>. We want to build a calculator that uses data to help people find other steps they can take to have a positive climate impact.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6172cd0206d63e8a3a0000a2/slide.png" alt="An overview of the app's benefits from the introductory screen - View your personal carbon footprint, make eco-friendly choices." /></p>
<h2>Data, privacy, and control</h2>
<p>Before we could think about increasing carbon literacy or giving suggestions for positive climate impact, we had to answer some critical questions about handling users&rsquo; data. What data do we need to calculate climate impact? How do we ensure privacy? Who should have control of personal data?</p>
<p>This is where a Personal Data Store really shines. A Personal Data Store is a secure digital space that gives users complete control over their data, allowing them to choose which data to share and which to keep private.</p>
<p><a href="https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research">BBC R&amp;D is exploring the potential of Personal Data Stores</a>, and using one would avoid transferring any sensitive data to BBC servers; all the necessary processing can be carried out on our users&rsquo; devices, keeping their data private and secure.</p>
<h2>User testing and ideation</h2>
<p>To help us have a unified view of the carbon calculator app, we worked together to establish a project statement that captured the important elements of the project:</p>
<blockquote>
<p><em>&ldquo;This product should use a trusted Personal Data Store to allow users to make meaningful, personalised allegories between activity and carbon footprint to improve carbon literacy and encourage specific personalised actions that users could take as part of a sustainable routine.&rdquo;</em></p>
</blockquote>
<p>Working with this project statement, we iterated through several rounds of design ideation, with three of our initial designs selected for getting user feedback. Each design explored a different theme:</p>
<ul>
<li>A collaborative or challenge app, similar to <a href="https://www.nhs.uk/live-well/exercise/couch-to-5k-week-by-week/">Couch to 5k</a>, to take part in with a community or group of friends.</li>
<li>A &lsquo;Tiny World&rsquo; to look after, similar to a <a href="https://en.wikipedia.org/wiki/Tamagotchi">Tamagotchi</a>, where the user&rsquo;s carbon footprint would affect how their world was presented.</li>
<li>A &lsquo;data dashboard&rsquo; of visualisations to help users dig into their carbon footprint data.</li>
</ul>
<p>We created three &lsquo;app store&rsquo; pages to present these ideas, allowing us to communicate them in broad strokes while leaving room for individual interpretation and removing the influence of user interface design decisions that might affect people&rsquo;s opinions of the ideas.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6177e96306d63e841a00002d/MyTinyWorld2.png" alt="A screenshot of an example listing for one of our apps in the Google Play store." /></p>
<p>We tested these ideas with members of our target audience: young adults who primarily engage with media on their mobile devices and are aware of climate change as an issue but passive in their actions to combat it. People strongly disliked the idea of an app that could produce social pressure, shaming, or guilt as a result of their carbon footprint. They wanted an app that made them feel positive and encouraged change rather than reflecting negatively on past behaviours. Ideas involving gamification had a mixed response: some people found them engaging and wanted to see the impact of their carbon footprint visually, but others thought gamification might be gimmicky or only hold their attention for a short time. Generally, people felt positive about the data dashboard with its variety of visualisations, and most could imagine taking a closer look at their data at some point.</p>
<p>From this testing, we developed a prototype that incorporated the ideas people found most interesting. The prototype included the visual and gamified elements of the tiny world inside an overall data-focused application.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6172ca8306d63e8a3a000096/globe-stats.png" alt="A screenshot from the app - a breakdown via stats and charts of your carbon dioxide use and suggestions for improvements." /></p>
<h2>The calculator and carbon factors</h2>
<p>The job of the carbon calculator was to take a securely held set of data and turn it into a set of &lsquo;carbon events&rsquo; that could be analysed. Google Maps journey data, for example, includes a journey distance and an inferred mode of transport. To interpret this travel data, the carbon calculator needs to make some assumptions, and this is where carbon factors come in.</p>
<p>A carbon factor is a single number that approximates the climate impact of a particular activity, often measured in <a href="https://www.bbc.co.uk/news/science-environment-11833685">carbon dioxide equivalent (CO2e)</a>. For example, the carbon factor for an average car in the UK is about <a href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/891105/Conversion_Factors_2020_-_Condensed_set__for_most_users_.xlsx">276g CO2e per mile</a>, so if you take a one-mile trip in an average car in the UK, you produce around 276g of carbon dioxide equivalent. When the carbon calculator is given data containing a car journey, it uses this carbon factor to convert it to carbon dioxide equivalent. Using carbon factors for other modes of transport allows us to build a fully-fledged carbon calculator for Google Maps journey data, which turns each journey into a carbon event with an associated quantity of emissions.</p>
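<p>To make that concrete, here&rsquo;s a minimal sketch of the conversion in Python. The factor is the published UK average quoted above, but the function and the shape of the &lsquo;carbon event&rsquo; are our own illustration rather than the app&rsquo;s actual code.</p>
<pre><code># Minimal sketch: convert a car journey into a 'carbon event'.
# The factor is the published UK average quoted above; the event
# shape is illustrative, not the app's actual data model.
CAR_G_CO2E_PER_MILE = 276

def car_journey_to_event(miles):
    """Turn a car journey distance (in miles) into a carbon event."""
    return {
        "category": "travel",
        "sub_category": "car",
        "grams_co2e": miles * CAR_G_CO2E_PER_MILE,
    }

print(car_journey_to_event(1)["grams_co2e"])  # 276
</code></pre>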
<p>For each type of data, a different set of carbon factors is required - knowing the car&rsquo;s emissions isn&rsquo;t much help when calculating the impact of buying a loaf of bread! To include all the carbon factors the calculator needs, we developed individual parsers (short bits of code for translating a particular type of data into carbon events) to process different data sources. We ended up writing four parsers for different types of data: <a href="https://maps.google.com/">Google Maps</a> for travel, <a href="https://www.acorn-energy.com/">Acorn Energy</a> for home, <a href="https://www.tesco.com/clubcard">Tesco Clubcard</a> for food, and <a href="https://monzo.com/">Monzo</a> for shopping.</p>
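<p>As a rough illustration of what one of these parsers does, the sketch below turns hypothetical journey records into carbon events using a small per-mode factor table. The record fields are invented, and the bus and train factors are made-up placeholder values; only the car figure is the published average quoted above.</p>
<pre><code># Hypothetical parser sketch: translate journey records into carbon events.
# Record fields are invented; bus and train factors are placeholders.
FACTORS_G_PER_MILE = {"car": 276, "bus": 130, "train": 60}

def parse_journey(record):
    """record is a dict like {'mode': 'car', 'miles': 3.2} (hypothetical shape)."""
    return {
        "category": "travel",
        "sub_category": record["mode"],
        "grams_co2e": record["miles"] * FACTORS_G_PER_MILE[record["mode"]],
    }

journeys = [{"mode": "car", "miles": 3.2}, {"mode": "bus", "miles": 5.0}]
events = [parse_journey(r) for r in journeys]
</code></pre>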
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6172cbf806d63e9c92000087/bbc-app-globe.png" alt="Another screenshot from the app showing stats and charts relating to personal CO2 emissions and suggestions." /></p>
<h2>Suggestions</h2>
<p>Generating suggestions from user data is a significant part of this app. We wanted to engage users with their data, provide ways they could change their behaviour, and help them understand what impact that has on their carbon footprint. The suggester takes carbon events and creates a weekly summary of a user&rsquo;s carbon footprint, broken down by our four main categories and by sub-category (e.g. &lsquo;travel&rsquo; splits into car, bus, walking and so on).</p>
<p>When the app is opened, the suggester uses this data to pick from some pre-written suggestions relevant to a user&rsquo;s activities in the last week. These suggestions are populated with the user&rsquo;s data (e.g. &ldquo;last week you drove the car X times generating Y CO2e, try taking the bus next week to save Z CO2e&rdquo;). Suggestions are typically restricted to a week of action. The suggestions aim to encourage small, achievable goals which help users understand how different parts of their lifestyle affect their carbon footprint and help them make sustainable changes that work for them.</p>
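<p>A hedged sketch of that template-filling step might look like the code below. The template text, field names and bus factor are illustrative; they are not the app&rsquo;s actual copy or data model.</p>
<pre><code># Sketch of the suggester: pick a pre-written template relevant to last
# week's activity and fill it with the user's numbers. All names, copy
# and the bus factor are illustrative placeholders.
BUS_G_PER_MILE = 130  # placeholder value

TEMPLATE = ("Last week you drove the car {trips} times, generating "
            "{grams}g CO2e. Try taking the bus next week to save {saving}g CO2e.")

def suggest(weekly_summary):
    """weekly_summary like {'car': {'trips': 4, 'miles': 12, 'grams': 3312}}."""
    car = weekly_summary.get("car")
    if not car:
        return None
    saving = round(car["grams"] - car["miles"] * BUS_G_PER_MILE)
    return TEMPLATE.format(trips=car["trips"], grams=car["grams"], saving=saving)

print(suggest({"car": {"trips": 4, "miles": 12, "grams": 3312}}))
</code></pre>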
<h2>Visualisation</h2>
<p>With the calculator and the suggestions engine up and running, we could plug these into our front-end visualisation. We would like to incorporate some of the <a href="https://bbcnewslabs.co.uk/projects/graphical-storytelling/">graphical storytelling ideas from our BBC News Labs colleagues</a> in future, but for the six-week sprint we focused on data visualisation.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6172cc6b06d63ef16200042a/travel-screen-3.png" alt="A visualisation of personal data - showing a world map overlaid with a flight path, and a chart breaking down personal carbon emissions by mode of transport." /></p>
<p>Because of the tight timescale, we focused on building a web app but were conscious that our users primarily engage with media on mobile devices. To get a sense of how our app might look on mobile, we worked with Alex Ballantyne, a UX designer from <a href="https://www.bbc.co.uk/gel/">the BBC&rsquo;s User Experience and Design team</a>, to mock up a few mobile design fictions - we think they look great!</p>
<h2>Final thoughts</h2>
<p>We&rsquo;ve covered a lot of ground to get here: all six of us have learnt a lot about sustainability and personal data stores, and we&rsquo;re pretty chuffed with our rapidly assembled carbon calculator.</p>
<p>This short sprint worked well for familiarising ourselves with personal data stores and opened our eyes to this exciting space. With <a href="https://www.bbc.co.uk/rd/projects/new-forms-value-bbc-data-economy">R&amp;D&rsquo;s New Forms of Value project</a>, there&rsquo;s a lot of innovation around data stewardship and building a data ecosystem as a public service, and we can&rsquo;t wait to see what the future holds.</p>
<p><span style="font-size: small;"><em>Icons and images in this post used under licence:&nbsp;<a href="https://www.flaticon.com/free-icon/tram_561689">tram</a>, <a href="https://svgsilh.com/image/575514.html">bush</a>, <a href="https://svgsilh.com/image/310072.html">trees</a>, <a href="https://www.flaticon.com/free-icon/cow_2395796">cow</a>, <a href="https://www.svgrepo.com/svg/2853/satellite">satellite</a>.</em></span></p>
<blockquote>
<p><strong><a href="https://twitter.com/intent/tweet?url=&amp;text=Creator%20of%20the%20World%20Wide%20Web,%20Sir%20Tim%20Berners-Lee,%20wants%20to%20reconfigure%20it%20so%20users%20own%20and%20control%20their%20data.%20@BBCRD%20are%20investigating%20how%20to%20give%20people%20the%20chance%20to%20do%20just%20that:%20https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research">Tweet This</a>&nbsp;-&nbsp;<a href="https://www.facebook.com/sharer/sharer.php?u=https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research">Share on Facebook</a>&nbsp;-&nbsp;<a href="https://www.linkedin.com/shareArticle?url=https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research">Share on LinkedIn</a></strong></p>
<p><strong><a href="https://reddit.com/submit/?url=https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research">Share on Reddit</a>&nbsp;-&nbsp;<a href="https://news.ycombinator.com/submitlink?u=https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research">Share on Hacker News</a></strong></p>
<hr />
<h2 class="b-font-family-serif"><a href="https://storyplayer.pilots.bbcconnectedstudio.co.uk/experience/bbc_rd_2020_showcase">Interactive Presentation - BBC R&amp;D Showcase</a></h2>
<p>In this&nbsp;<a href="https://storyplayer.pilots.bbcconnectedstudio.co.uk/experience/bbc_rd_2020_showcase">interactive experience discussing the use of personal data</a>&nbsp;you can choose what you would like to know more about from our engineers.</p>
<hr />
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-06-bbc-box-personal-data-privacy">BBC R&amp;D - Introducing the BBC Box</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2020-09-personal-data-store-privacy-services">BBC R&amp;D -&nbsp;Putting audience data at the heart of the BBC</a></strong></p>
<p><strong><a href="https://www.wired.co.uk/article/bbc-data-personalisation">Wired -&nbsp;The BBC&rsquo;s radical new data plan</a></strong></p>
<p><strong><a href="https://www.slashgear.com/bbc-box-personal-cloud-aims-to-rewrite-the-rules-on-internet-privacy-19581160/">SlashGear - BBC Box personal cloud aims to rewrite the rules on internet privacy</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/news/technology-48825413">BBC News - Personal online data to inform BBC audience choice</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/projects/bbc-box-personal-data-management-privacy">BBC R&amp;D - Personal Data Stewardship</a></strong></p>
<p><strong><a href="http://www.databoxproject.uk/">Databox</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2015-01-the-ethics-of-data-videos">BBC R&amp;D - The Ethics of Personal Data</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/projects/data-science-research-partnership">BBC R&amp;D - Data Science Research Partnership</a></strong></p>
<p><strong><a href="https://www.broadcastnow.co.uk/bbc/bbc-tackles-data-safety-with-disruptive-device/5140546.article">Broadcast - BBC tackles data safety with &lsquo;disruptive&rsquo; device</a></strong></p>
<p><strong><a href="https://www.gizmodo.co.uk/2019/06/the-bbc-is-doing-cloud-storage-and-wants-you-to-have-full-control-over-your-data/">Gizmodo - BBC Box personal cloud aims to rewrite the rules on internet privacy</a></strong></p>
</blockquote>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>Streaming player shakedowns: Our setup to test internet streaming</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-10-internet-streaming-test-network-conditions-dash" />
        <id>https://www.bbc.co.uk/rd/blog/2021-10-internet-streaming-test-network-conditions-dash</id>

        <published>2021-10-19T14:19:27Z</published>
        <updated>2021-10-19T14:19:27Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            A chassis dynamic rig allows a racing team to evaluate new components and setups - saving time. Our DASH Player testbed is a similar idea but for internet streaming on BBC iPlayer.
          ]]>
        </summary>

        
          
            <author><name>Chris Bass</name></author>
          
        

        
          <category term="Distribution" />
        
          <category term="Performance" />
        
          <category term="Networks" />
        
          <category term="Internet" />
        
          <category term="Audio" />
        
          <category term="Video" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
<p>One of the standard pieces of equipment used by motor racing teams to develop their cars nowadays is a chassis dynamic rig (also known as a shaker or seven-post rig). It is a mechanical marvel on which a racing car sits, and which emulates the experience of driving on a specific racetrack by applying forces to the wheels and chassis of the vehicle via a number of hydraulic actuators, using data previously captured from a car driving around that track. This allows teams to test how well the vehicle and its components handle those forces and enables them to tune the car for a particular circuit before they even get there.</p>

<p>Having a chassis dynamic rig is of great benefit to a racing team. It means that they can evaluate new components and setups without going to the expense of transporting a vehicle and team of engineers to an actual circuit, cutting costs, environmental impact and development time. It means that they can make the best use of time, running tests on cars 24/7 and without any risk to a driver.</p>

            <p>In the world of internet streaming players, having the equivalent of a chassis dynamic rig would be similarly useful: a rig that allows us to evaluate how players will react to the kind of network conditions faced by our audiences; a tool that enables us to quickly <a href="https://www.bbc.co.uk/rd/blog/2021-06-streaming-adaptive-bitrate-quality-resolution-bandwidth">measure the likely impact of changes to our player software or streams</a>, without the risks associated with testing in iPlayer. That, in essence, is what our DASH Player Testbed provides. Later in this article, I&rsquo;m going to describe the architecture of our Testbed. But first, what are some of the particular challenges of implementing a test rig for internet streaming?</p>
<h2>Recreating network conditions</h2>
<p>The physical conditions that racing teams need to recreate in their chassis rigs are pretty well defined: they race on a small number of known circuits and have detailed data for each one. In the world of internet streaming, however, the range of network conditions faced by users is much broader and the specific circumstances under which a programme will be streamed are not known in advance. The complex interactions of all the devices and links that make up the internet mean it&rsquo;s unlikely that any two streaming sessions will be exactly the same. So how can we decide what network conditions to apply within our Testbed to simulate those faced by our audiences?</p>
<p>To determine this, we captured detailed metrics about how the download rate of audio and video data varied over time for around 380,000 BBC iPlayer sessions. The download rates for each streaming session were captured at the so-called application level, i.e., they represent the data throughput available to the player software that was decoding and presenting the stream.&nbsp;From this information we derived a set of around ten network profiles representing a range of conditions faced by our audiences. Each of these network profiles is encapsulated in a file describing how available bitrate varies over time. Here is the throughput graph of one of these profiles.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/615c5ee006d63eab33000035/fig1.png" alt="Network throughput against time for a Testbed network profile." /></p>
<p style="text-align: center;"><span style="font-size: small;">Network throughput against time for a Testbed network profile</span></p>
<p>Having network profiles is one thing. How, though, do we recreate the network conditions described in a profile and apply them to a player streaming some content? What is the equivalent in the Testbed of the actuators that apply physical forces to a car on a chassis dynamic rig?</p>
<p>We use a technique called network emulation, in which the characteristics of a real network &ndash; its data rate, latency and/or packet loss &ndash; are altered to mimic those of a different network. In the Testbed we use the traffic control facilities built into the Linux operating system, which allow the behaviour of its network interfaces to be controlled. We use these to modulate the throughput of the network interface used by the player software according to the particular network profile being used for a test. Note that this method doesn&rsquo;t precisely recreate the conditions existing when the network profile was captured but it is sufficiently representative to allow meaningful comparisons to be made across multiple tests.</p>
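<p>As an illustration of the general technique (not the Testbed&rsquo;s actual code), the sketch below replays a simple profile of rate steps on one network interface by driving the Linux <code>tc</code> command from Python. The interface name and profile values are placeholders.</p>
<pre><code># Network emulation sketch: replay a throughput profile with Linux
# traffic control (tc). Interface name and profile values are placeholders;
# the real Testbed's implementation may differ.
import subprocess
import time

IFACE = "eth0"  # placeholder interface name
PROFILE = [(30, "5mbit"), (10, "800kbit"), (30, "3mbit")]  # (seconds, rate)

def set_rate(rate):
    # 'replace' installs the token bucket filter on first use, then updates it.
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root",
         "tbf", "rate", rate, "burst", "32kbit", "latency", "400ms"],
        check=True)

for seconds, rate in PROFILE:
    set_rate(rate)
    time.sleep(seconds)

# Remove the shaping when the session ends.
subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)
</code></pre>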
<h2>Handling non-determinism</h2>
<p>One of the challenges of drawing valid conclusions about player performance is that the sheer complexity of networks, protocols, connected devices and the media player software that runs on them, together with all of the interactions between these components, means that the system as a whole is unlikely to behave in exactly the same way twice. Though general patterns of behaviour are evident, internet streaming is fundamentally non-deterministic.</p>
<p>Given that, how can we confidently conclude that, for instance, one version of a player is better than another? The answer is that we need to run many sessions of each test, i.e., of each specific combination of player, stream and network profile. Then we can calculate the variance of the results and determine the significance of performance differences between player versions. To make this manageable, the Testbed automates the running of many sets of tests and parallelises the execution of test sessions across multiple machines to reduce the overall time taken to run them.</p>
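<p>For illustration, a comparison across many sessions might look like the sketch below, with invented per-session numbers and Welch&rsquo;s t-test as one reasonable choice of significance test (not necessarily the one the Testbed uses).</p>
<pre><code># Sketch: compare two player versions over repeated sessions of the same
# test. The stall-time numbers are invented for illustration.
from statistics import mean, stdev
from scipy.stats import ttest_ind

player_a = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7]  # seconds stalled per session
player_b = [1.6, 1.4, 1.9, 1.5, 1.8, 1.3, 1.7, 1.6]

print(f"A: mean={mean(player_a):.2f}s, sd={stdev(player_a):.2f}")
print(f"B: mean={mean(player_b):.2f}s, sd={stdev(player_b):.2f}")

t, p = ttest_ind(player_a, player_b, equal_var=False)  # Welch's t-test
print(f"t={t:.2f}, p={p:.3f}")  # a small p-value suggests a real difference
</code></pre>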
<p>Let&rsquo;s take a look next at the overall architecture of the Testbed.</p>
<h2>Testbed architecture</h2>
<p><a href="https://www.bbc.co.uk/rd/sites/50335ff370b5c262af000004/assets/615c5f2706d63e418b000038/fig2.png"><img src="/rd/sites/50335ff370b5c262af000004/assets/615c5f2706d63e418b000038/fig2.png" alt="Simplified architecture of the Testbed" /></a></p>
<p style="text-align: center;"><span style="font-size: small;">Simplified architecture of the Testbed (<a href="https://www.bbc.co.uk/rd/sites/50335ff370b5c262af000004/assets/615c5f2706d63e418b000038/fig2.png">Click for larger version</a>)</span></p>
<p>The TestController, which sits at the heart of the Testbed, is responsible for queueing and running sets of tests. The UI submits a test set to the TestController in a JSON file, which describes each test to be run in terms of the following:</p>
<ul>
<li>The stream to use in the test: whether it&rsquo;s a live or on-demand stream, what combination of audio/video/subtitle components it has and its duration.</li>
<li>Which DASH player software to use in the test and how that player should be configured. The Testbed currently supports players based around the <a href="https://github.com/Dash-Industry-Forum/dash.js/">dash.js</a> JavaScript DASH implementation and the <a href="https://gstreamer.freedesktop.org/">GStreamer</a> media toolkit.</li>
<li>The number of sessions of that test to run (to give statistically useful results &ndash; see above).</li>
<li>The network profile that should be applied during the running of each test session.</li>
</ul>
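<p>To make this concrete, here is a hedged sketch of what one test in such a file might look like; the field names and values are our illustration, not the Testbed&rsquo;s actual schema.</p>
<pre><code>{
  "tests": [
    {
      "stream": {
        "type": "on-demand",
        "components": ["video", "audio", "subtitles"],
        "duration_seconds": 600
      },
      "player": {"name": "dash.js", "config": {"abr": "default"}},
      "sessions": 50,
      "network_profile": "profile-07"
    }
  ]
}
</code></pre>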
<p>When the TestController runs a test from within a test set, it farms out the individual sessions of that test to a pool of Executors, which are each responsible for running a single session at a time. Each of the Executors is a Linux machine. As described above, we use the traffic control facilities built into the Linux kernel to throttle the data rate of its network interface according to the particular network profile selected for that test. The Executor configures and launches the player identified in the test description, which will play the test stream in real time and report metrics about its performance back to the MetricStore.</p>
<p>The MetricStore, which captures player metrics, implements a <a href="https://graphql.org/">GraphQL</a> interface that allows those stored metrics to be accessed. Graphing clients can use this to visualise the captured data: comparing the performance of different player software under the same network conditions, for example, or seeing the effect of adding a new video component under different network conditions. The GraphQL interface also allows non-graphical clients to extract useful information from the captured data: it could be used, for example, to access session data on which new player software can be trained using machine learning techniques.</p>
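<p>As a flavour of how a non-graphical client might pull metrics out, the sketch below POSTs a GraphQL query from Python. The endpoint URL, query fields and metric names are invented for illustration; only the pattern of posting a query document is standard GraphQL.</p>
<pre><code># Sketch of querying the MetricStore's GraphQL interface. The endpoint,
# fields and schema are invented placeholders.
import requests

QUERY = """
query {
  sessions(player: "dash.js", networkProfile: "profile-07") {
    sessionId
    stallCount
    averageBitrateKbps
  }
}
"""

resp = requests.post("http://metricstore.example/graphql", json={"query": QUERY})
resp.raise_for_status()
for s in resp.json()["data"]["sessions"]:
    print(s["sessionId"], s["stallCount"], s["averageBitrateKbps"])
</code></pre>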
<p>As well as helping us optimise the reliability of our current streams, the Testbed is helping us develop future improvements in online streaming, such as <a href="https://www.bbc.co.uk/rd/projects/low-latency-live-streaming-mpeg-dash">low latency live streaming</a>. Low latency streaming, in which the time lag when streaming live events is brought down to a level similar to that of traditional broadcast delivery, presents more challenges to players than regular streaming. It&rsquo;s more difficult to measure how much bandwidth is available to a player during low latency streaming, and the trade-off of playing with a low latency is that a player can buffer only a small amount of media ahead, leaving it much less safety margin to react to downturns in network bandwidth and avoid stalling. This remains an area of active research that we will cover in upcoming posts in this series.</p>
<p>Just as automated testing tools are of great value in the world of motor racing, so they are in the field of internet streaming. The DASH Player Testbed is just one tool that is helping us reach the goal of giving audiences of our internet-delivered content the same or better experience than they get from our broadcast content.</p>
<blockquote>
<p><strong><a href="https://twitter.com/intent/tweet?url=&amp;text=Read%20about%20distributing%20live%20television%20at%20scale%20over%20the%20Internet%20-%20trials%20@BBCRD%20have%20been%20conducting:%20https://bbc.in/2PMznhj">Tweet This</a>&nbsp;-&nbsp;<a href="https://www.facebook.com/sharer/sharer.php?u=http://www.bbc.co.uk/rd/projects/dynamic-adaptive-streaming-ip-multicast-dasm">Share on Facebook</a></strong></p>
<p><strong><a href="https://www.t3.com/news/the-bbc-has-found-a-clever-way-to-make-iplayer-streaming-even-better-quality">T3 - The BBC has found a clever way to make iPlayer streaming even better quality</a></strong></p>
<p><strong><a href="http://www.bbc.co.uk/rd/projects/adaptive-bitrate-technology">BBC R&amp;D - Adaptive Bitrate Technology</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/projects/low-latency-live-streaming-mpeg-dash">BBC R&amp;D - Low Latency Live Streaming with MPEG DASH</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2018-07-ultra-high-definition-uhd-viewing">BBC R&amp;D - A Summer of Football and Tennis in Ultra HD</a></strong></p>
<p><strong><a href="https://en.wikipedia.org/wiki/Over-the-top_media_services">Wikipedia - Over The Top Media Services</a></strong></p>
<p><strong><a href="https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP">Wikipedia - Dynamic Adaptive Streaming over HTTP</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-09-forecaster-5g-mobile-interactive-content-experience">BBC R&amp;D - A New View of the Weather: Forecaster5G, our Object-Based Weather Report</a></strong></p>
<p><strong><a href="http://www.bbc.co.uk/blogs/internet/entries/5c5b8f80-891d-4b51-babd-8814c1511b4e">BBC Technology + Creativity - Broadcasting the World Cup and Wimbledon in UHD</a></strong></p>
<p><strong><a href="http://www.bbc.co.uk/blogs/internet/entries/b56edd34-a1d8-4ef9-9d2d-1c93eda18f73">BBC Technology + Creativity - Inside our UHD workflow</a></strong></p>
</blockquote>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>Two years of technology: MakerBox - our toolkit for creators</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-10-makerbox-creator-tools-interactive-experiences" />
        <id>https://www.bbc.co.uk/rd/blog/2021-10-makerbox-creator-tools-interactive-experiences</id>

        <published>2021-10-14T13:19:18Z</published>
        <updated>2021-10-14T13:19:18Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            As our toolkit for creators turns two, an opportunity to reflect - but also to look ahead at what’s coming in 2022.
          ]]>
        </summary>

        
          
            <author><name>Laura Harrison</name></author>
          
        

        
          <category term="Production" />
        
          <category term="Content Formats" />
        
          <category term="Participation" />
        
          <category term="Personalisation" />
        
          <category term="Interactivity" />
        
          <category term="Audio" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
            <p>This week <a href="https://www.bbc.co.uk/makerbox">MakerBox</a> will have been out in the ether for two years. I’m taking this opportunity to reflect but also to look ahead at what’s coming in 2022.</p>

<p>MakerBox is strategically important for BBC Research &amp; Development as a way of driving the adoption of our technology by the next generation of makers. It gives us a route to test R&amp;D tools and technology with production teams, artists and creative technologists, and to learn both from users’ experience of making things and from our audiences trying the end result. The BBC benefits in several ways, including a better understanding of what engages audiences in storytelling, uniting linear and interactive workflows to make them easier to use, and up-skilling a community in preparation for the future. If we believe that someday all content will be made this way, we certainly need to bring the makers with us on this journey.</p>

<p>We measure our success by our impact on the community and on those who choose to use the tools to experiment. Embedding these tools and this expertise into a variety of programmes and initiatives across the sector is also vitally important to us.</p>
<p>MakerBox has been a fundamental part of several initiatives and commissions, including <a href="https://www.bbc.co.uk/connectedstudio/projects/dancepassion">Dance Passion interactive</a>, bringing together new and experienced talent from the dance industry to utilise new technology to tell stories about <a href="https://www.onedanceuk.org/dancepassion2022/">dance through a new lens</a>. We also partnered with Coventry City of Culture as part of their <a href="https://coventry2021.co.uk/explore/youthful-cities-programme/">Youthful Cities Digital Storytelling Lab</a> working with new artists collaborating in Coventry and Nairobi, using MakerBox tools to tell their stories. You&rsquo;ll be able to try these experiences when they&rsquo;re released to <a href="https://www.bbc.co.uk/taster/">BBC Taster</a> in spring 2022.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6168277306d63e909200002a/homepage.png" alt="Examples of new items from the MakerBox homepage inviting users to submit a challenge or connect with an expert." /></p>
<p>From the product side, we&rsquo;ve made some changes to the <a href="https://www.bbc.co.uk/makerbox">homepage</a> in the last six months. As our communities of practice continue to grow, <a href="https://makerbox-discourse.tools.bbc.co.uk">the MakerBox forum</a> is one way we&rsquo;re trying to connect people who want to use technology to make new media experiences, or to learn how. Although the number of people who want to join the community is high, there&rsquo;s a reluctance to openly share views, expertise and knowledge. To help tackle this, the homepage changes highlight several routes into conversations in the forum. Community members can post challenges they face with their pilots, ask for help or support, or highlight challenges they feel passionate about and request some friendly collaborators to experiment with.</p>
<p>We&rsquo;ve made it easier to access <a href="https://www.bbc.co.uk/makerbox/case-studies">case studies</a> from the homepage too. Previously, the case studies rotated in the top three spots on the homepage; now, you can access all case studies through the navigation bar. This may evolve in future with a more intricate tagging approach, making it easier for the user to find them. For now, simply having all of them available in one place is a significant step forward in sharing our knowledge.</p>
<p><a href="https://www.bbc.co.uk/connectedstudio/">Connected Studio</a>, <a href="https://www.bbc.co.uk/makerbox">MakerBox</a> and <a href="https://www.bbc.co.uk/taster/">BBC Taster</a> are all part of the same journey. Bringing together these products is essential if we want to help our audiences become makers and boost the next generation of new digital talent. Connected Studio builds on this network and community, running initiatives that encourage new users to utilise MakerBox tools. MakerBox empowers these communities with technology, insights and expertise to drive standards and nurture partnerships. And of course, Taster offers a platform to test these experiences with audiences, directly requesting feedback that can influence the production and development of technology. Earlier this year, we began visibly connecting these products - showing which Taster pilots have been made using MakerBox tools, introducing users to the technology and allowing them to discover what tools they could use to make something themselves. The more we encourage the next generation of makers to experiment with technology, the greater the reward for the whole creative economy, helping to democratise skills within the industry. This feature is due to be realised next month.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6168256906d63e4ab800001b/p09pvgrn.jpeg" alt="Promotional artwork for Philip 21, showing Philip standing at a bar waiting for a date." /></p>
<p>Over the last year, the MakerBox gang have worked with several teams to turn their ideas into BBC Taster pilots using our <a href="https://www.bbc.co.uk/makerbox/tools/audio-orchestrator">Audio Orchestrator</a> &amp; <a href="https://www.bbc.co.uk/makerbox/tools/storyformer">StoryFormer</a> tools. My biggest highlight was <a href="https://www.bbc.co.uk/taster/pilots/philip-21">Philip, 21</a>, which uses StoryFormer to take audiences on a date with Philip, where they decide how the evening goes. The piece explores interracial dynamics and modern love by <a href="https://www.bbc.co.uk/blogs/internet/entries/7007a13e-2c58-4011-9d38-bfeef2c56cbc">highlighting the user&rsquo;s choice and agency</a>. The idea was born out of the New Creatives initiative for new artists, was delivered via StoryFormer, and was toured around various UK festivals by the team at <a href="https://www.halucid.co.uk/">_Hallucid</a>.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/60a51b5006d63e89bc000141/16-9_Post_Card_Product_Shot_Gruff.jpg" alt="Artwork from the cover of Gruff Rhys' new album, Seeking New Gods - an illustration of a mountain scene." /></p>
<p>Another highlight was <a href="https://www.bbc.co.uk/taster/pilots/seeking-new-gods">Gruff Rhys&rsquo; launch of an interactive version of his latest album Seeking New Gods</a>. We urged the <a href="https://www.bbc.co.uk/rd/projects/audio-device-orchestration">R&amp;D Audio Orchestrator team</a> to think creatively about making a fan listening party with the tool. The result was a countdown to <a href="https://www.bbc.co.uk/rd/blog/2021-05-gruff-rhys-seeking-new-sounds-surround">a live, simultaneously linked immersive audio experience</a> using the Orchestrator, which utilises an element of another <a href="https://www.bbc.co.uk/rd/blog/2020-05-iplayer-watch-party-group-watching-viewing">R&amp;D technology built for BBC Together</a>. Promotion on <a href="https://www.bbc.co.uk/6music">BBC Radio 6 Music</a> allowed us to bring together different areas of the BBC for a more significant impact. After working through some pretty sticky music rights issues (mainly because R&amp;D isn&rsquo;t quite like other parts of the BBC in this sense), the Seeking New Gods experience was available on BBC Taster for three months. This process is now in place for the BBC to build on our work, so that other potential music use cases can happen more easily in future.</p>
<p>There&rsquo;s a lot to look forward to in 2022. We will be publishing a bumper spring season of interactive pilots on Taster from the initiatives using MakerBox tools at their core. We also hope to offer the maker community two more object-based media tools that build upon those already available. We&rsquo;ll have more ways to help you begin your ideas when using the technology, and we&rsquo;ll be thinking about how we might incorporate R&amp;D&rsquo;s work on projects such as the <a href="https://www.bbc.co.uk/rd/blog/2019-11-human-values-digital-wellbeing-audience-behaviour">Human Values Framework</a> more openly into our products. A busy year awaits.</p>
<blockquote>
<p><strong><a href="https://twitter.com/intent/tweet?url=&amp;text=An%20online%20community%20full%20of%20discussion,%20learning%20and%20inspiration%20for%20digital%20storytellers.%20%20Find%20out%20how%20@BBC_Connected%20and%20@BBCRD%20made%20it:%20https://bbc.in/3jhjNYx">Tweet This</a>&nbsp;-&nbsp;<a href="https://www.facebook.com/sharer/sharer.php?u=https://www.bbc.co.uk/rd/blog/2019-10-makerbox-digital-storytelling-tools-creators">Share on Facebook</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/makerbox/">BBC Connected Studio - MakerBox</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/connectedstudio/">BBC Connected Studio</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-01-interactive-drama-stories-branching-narrative">BBC R&amp;D - Where Next For Interactive Stories?</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-02-storytelling-interactive-digital-drama">BBC R&amp;D - Storytelling of the Future</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2018-10-object-based-media-tools-storyformer">BBC R&amp;D - StoryFormer: Building the Next Generation of Storytelling</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/projects/object-based-media-toolkit">BBC R&amp;D - StoryKit: An Object-Based Media Toolkit</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/click1000">BBC Taster - Try Click's 1000th Interactive Episode</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/news/technology-48867302">BBC News - Click 1,000: How the pick-your-own-path episode was made</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/taster/pilots/instagramification">BBC Taster - Watch Instagramification</a></strong></p>
<p><strong><a href="http://www.bbc.co.uk/rd/object-based-media">BBC R&amp;D - Object-Based Media</a></strong></p>
</blockquote>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>IRFS weeknotes #328</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-10-irfs-weeknotes-number-328" />
        <id>https://www.bbc.co.uk/rd/blog/2021-10-irfs-weeknotes-number-328</id>

        <published>2021-10-04T13:26:15Z</published>
        <updated>2021-10-04T13:26:15Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            Weeknotes from the IRFS team, featuring timelines of news and sketches of robots.
          ]]>
        </summary>

        
          
            <author><name>Tristan Ferne</name></author>
          
        

        
          <category term="Artificial Intelligence &amp; Machine Learning" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
            <p>Last week the Data Team teamed up with Mathieu and Andrew to explore ideas around timelines for news. This was for a hackweek that also involved folks from News Labs and BBC News Editorial and Product, and the work explored different ways to segment and present news stories over time. </p>

<p>We developed five ideas in the week. Chris Newell worked with Ben on connecting pairs of automatically extracted entities (such as notable politicians) and how best to visualise their co-occurrence across time. Tamsin looked at ways to present what politicians have said on a particular topic over time, using our automated quote extraction software. Ben looked at a similar area with his research into what&rsquo;s going on in the world of automated claim extraction and fact checking. Matt dived into the world of similarity search to see if we could help journalists curate more relevant follow-on stories for their readers, and Mathieu and Andrew prototyped some presentations of complex stories.</p>
<p>Chris has also deployed a Farsi version of our Starfruit tagging system for evaluation purposes, and Andrew McParland attended the first Advisory Board meeting of the EPSRC-funded Visual AI project looking at next-generation computer vision methods.</p>
<p>Mathieu and Andrew spent some time working on new sets of wireframes using the Social Interaction Toolkit, trying to see if it could scale to concepts for larger groups of people.</p>
<p>Libby and Henry have been running "Futures" workshops with the Human Values and Audio teams, with several of us helping out. Libby's also been doing a landscape and competitor review for <a href="https://www.bbc.co.uk/rd/blog/2020-05-iplayer-watch-party-group-watching-viewing">BBC Together</a>.</p>
<p>We've been finishing up the launch website for <a href="https://www.bbc.co.uk/rd/blog/2021-08-explaining-artificial-intelligence-part-3-what-does-ai-look-like">Better Images of AI</a>, which included David hand-drawing some "bad" AI images (purely for illustrative purposes) and Alicia implementing some new designs.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/615ae10306d63e4e2a000150/weeknotesImage.jpg" alt="Sketches of a robot thinking and brains" /></p>
<p>Finally, Chris Needham is organising media-related meetings at the upcoming <a href="https://www.w3.org/2021/10/TPAC/Overview.html">W3C TPAC annual conference</a>. This includes web technologies on TV devices and real-time media processing in the browser.</p>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>Personal data stores: building and trialling trusted data services</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research" />
        <id>https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research</id>

        <published>2021-10-14T16:27:00Z</published>
        <updated>2021-10-14T16:27:00Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            Sir Tim Berners-Lee, wants to reconfigure the web so users own and control their data. We are investigating how to give young people the chance to do just that.
          ]]>
        </summary>

        
          
            <author><name>Eleni Sharp</name></author>
          
        

        
          <category term="Data" />
        
          <category term="Personalisation" />
        
          <category term="Recommendations" />
        
          <category term="Content Formats" />
        
          <category term="Content Discovery" />
        
          <category term="Audience Research" />
        
          <category term="User Interfaces" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
            <p>BBC Research &amp; Development has been exploring new ways of storing and using data, especially personal data, for several years. In 2017, we announced a project with <a href="https://www.nottingham.ac.uk/research/groups/mixedrealitylab/projects/databox.aspx">Nottingham University</a> called <a href="https://www.bbc.co.uk/rd/projects/databox">Databox</a>, followed by an experiment with a prototype device we named <a href="https://www.bbc.co.uk/rd/blog/2019-06-bbc-box-personal-data-privacy">BBC Box</a> in 2019. Built on Databox and powered by a Raspberry Pi, BBC Box pulled your data together in one place. Most recently, my colleagues Bill Thompson and Max Leonard published a blog post about <a href="https://www.bbc.co.uk/rd/blog/2020-09-personal-data-store-privacy-services">putting data at the heart of the BBC</a>. Since then, we&#39;ve been exploring personal data and the latest personal data store technology.</p>

            <p>According to the <a href="https://www.w3.org/People/Berners-Lee/">World Wide Web creator, Sir Tim Berners-Lee</a>, reconfiguring the web so that its users own and manage their data can <a href="https://www.theguardian.com/lifeandstyle/2021/mar/15/tim-berners-lee-we-need-social-networks-where-bad-things-happen-less">turn the world 'the right way round'</a>. We think this is right and believe that giving people the chance to control their personal data could completely transform the current system.</p>
<p>With this in mind, our latest work focuses on reinventing how <a href="https://www.bbc.co.uk/rd/topics/data">data</a> is stored, processed and controlled online &ndash; changing how people think about and use personal data.</p>
<p>Our research team focused on the needs of young audiences (under 35s) who aren't regular users of the BBC's apps and online services, aiming to build trust and relevance and to find new ways to support the BBC's reputation and position for future generations.</p>
<p>We recently tested our first personal data store prototypes &ndash; and can share what we've been working on so far and our findings.</p>
<h2>Why is the BBC researching personal data stores?</h2>
<p>We believe a change in the way people can manage their personal data has a broader public value. Recent research found:</p>
<ul>
<li><a href="https://www.infosecurity-magazine.com/news/organizations-personal-data-brit/">At least 39 different organisations hold personal data on the average UK citizen</a>.</li>
<li><a href="https://www.infosecurity-magazine.com/news/organizations-personal-data-brit/">82% of people are unsure of what personal information companies hold about them</a>.</li>
<li><a href="http://www.digitaljournal.com/business/report-finds-only-1-percent-reads-terms-conditions/article/566127">Only 1% of people read the terms &amp; conditions</a>.</li>
</ul>
<p>It's the BBC's responsibility to look at how we can help make a positive impact for our audiences. We believe there's enormous room for improvement in how data is managed online &ndash; and <a href="https://www.bbc.co.uk/rd/about/standards">our long history of researching and using established and emerging technologies</a> can help move the industry forward to create public value and make a <a href="https://www.bbc.co.uk/rd/projects/public-service-internet">better internet</a>.</p>
<p>Rather than big organisations holding personal data under their control, there are possibilities for people to have greater agency over their data - who can identify them, where and how their personal data is stored and what data services they can access. Eventually, this will also include transparency and choice over how services use personal data.</p>
<p>Alongside this, we also believe audiences will benefit from BBC services that are more relevant and targeted to them. Relevance is vital for the BBC, and data has a part to play in this. <a href="https://www.bbc.co.uk/blogs/internet/entries/e77985a0-acc2-3494-9627-60b868cc8ba5">The introduction of sign-in</a>, for example, now means audiences can get <a href="https://www.bbc.co.uk/blogs/internet/entries/660189f0-9bb3-48d9-a2f7-e97e2606e7c4">personalised recommendations</a> across BBC iPlayer, BBC Sounds and the BBC homepage. Technology like personal data stores could be transformative and support <a href="https://www.bbc.co.uk/rd/topics/personalisation">our ambition to create tailored and personalised content</a>. We expect there will be technology from our findings which could be transferred and integrated into existing BBC products and services &ndash; and in the future, we may also create new products and services.</p>
<p>The user research and prototypes we're currently working on are an extension of our previous projects and a natural next step. Our ambition is to move the industry in a positive direction by showing that personal data can be done differently. We want to be transparent with our findings so that as many people as possible benefit from them; that's why we're doing this research in the open and sharing our results publicly.</p>
<blockquote>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2021-09-data-public-service-media">BBC R&amp;D -&nbsp;The potential of data in modern public service media</a></strong></p>
<p><strong><a href="https://www.wired.co.uk/article/bbc-data-personalisation">Wired -&nbsp;The BBC&rsquo;s radical new data plan</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/news/technology-48825413">BBC News - Personal online data to inform BBC audience choice</a></strong>&nbsp;</p>
</blockquote>
<h2>What have we produced?</h2>
<p>We've been building personal data store prototypes to test the technology and its potential benefits. So far, we've produced:</p>
<p><strong>Technical prototypes</strong></p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6153992906d63e22f10001a2/pds11.png" alt="A screenshot of the prototype app showing a media profile which lists programmes that the user has watched." /></p>
<p>We developed a web app that uses live personal data from the BBC, Spotify and Netflix to create a media profile for a user. The media profile allows the user to view and edit their entire media viewing history in one place. We can then send a profile derived from this data to our research version of BBC Sounds to provide enriched <a href="https://www.bbc.co.uk/rd/topics/recommendations">recommendations</a> and suggestions of relevant local events.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/61539b3706d63ecdfb00016d/pds9.png" alt="Screenshot of the prototype app listing the user's favourite music artists, also with a tab for speech content." /></p>
<p><strong>UX prototypes</strong></p>
<p>These more speculative prototypes looked a bit further into the future to outline a more comprehensive product we call 'My PDS'. My PDS has multiple profiles, including media, health, finance and social. These profiles visualise and bring to life personal data which has been imported into My PDS.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/61539a2706d63e4b2b00014f/pds5.png" alt="A screenshot from a social profile in the prototype app, showing amount of time spent in various different social media apps, and a total amount of time for the day." /></p>
<p>We also created a central dashboard that lets users view, edit and manage their data, along with a selection of services that sit alongside My PDS. Our vision is that services can be produced by organisations other than the BBC to create a data ecosystem and that a range of providers can offer the underlying PDS.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/61539bdb06d63e4b2b00015b/pds8.png" alt="A screenshot of apps from other organisations, offering services which list local events and personal carbon footprint tracking." /></p>
<p><strong>Educational materials</strong></p>
<p>We created video explainers on what personal data is and what personal data stores are. We also produced an introduction to My PDS to bring a complex subject to life for audiences.</p>
<h2>What technology did we use?</h2>
<p>Our system is built on <a href="https://solidproject.org/">Solid</a>, an open-source project initiated by Web inventor Sir Tim Berners-Lee. Solid stores user data in 'pods', and we have used these to build a user-centred data storage system and a prototype media discovery tool to test our ideas in practice. We have been working with a version of Solid implemented by <a href="https://inrupt.com/solid">Inrupt</a>, the company founded by Sir Tim.</p>
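<p>Solid resources are ordinary web resources fetched over HTTP with authenticated requests. As a hedged sketch of what reading from a pod involves (the pod URL is a placeholder, and a real client would obtain credentials via Solid-OIDC rather than hard-coding a token):</p>
<pre><code># Hedged sketch: read one resource from a Solid pod over HTTP.
# The URL and token are placeholders; real clients authenticate via Solid-OIDC.
import requests

POD_RESOURCE = "https://alice.example.org/media/history.ttl"  # placeholder
ACCESS_TOKEN = "..."  # obtained via Solid-OIDC in a real client

resp = requests.get(
    POD_RESOURCE,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Accept": "text/turtle"})
resp.raise_for_status()
print(resp.text)  # RDF (Turtle) describing the stored data
</code></pre>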
<p>We chose to work with Solid for the following reasons:</p>
<ul>
<li>It is essentially an open set of standards, so we can build our own implementation and, crucially for this initial trial, dig deep into any aspect thanks to its open-source nature.</li>
<li>It is web-native and embodies the principles of the web, especially <a href="https://www.bbc.co.uk/mediacentre/articles/2021/making-the-case-for-a-universal-bbc#thecaseforauniversalbbc">universal access, which is one of the essential principles in the way we deliver our services</a>.</li>
<li>There is <a href="https://forum.solidproject.org/">a large and active developer community</a>.</li>
<li>There is commercial support on offer.</li>
</ul>
<h2>How are we evaluating what we created with audiences?</h2>
<p>While building the prototypes, we worked on a research programme alongside research agency <a href="https://www.youthsight.com/">YouthSight</a> to design and test our ideas and thinking. This included a national survey of over 1,500 people and a two-week qualitative research study with 25 participants under the age of 35.</p>
<p>Throughout the two weeks, participants were introduced to our prototypes and educational materials, taking part in focus groups, online discussions, design reflections, one-to-one interviews and a live trial. The research aimed to get views on the potential for personal data stores as an alternative way to manage data, test the appeal for additional services and get feedback on our prototypes and potential designs.</p>
<p>Our four main takeaways after analysing the findings are:</p>
<p><strong>Lack of understanding and confidence with personal data</strong></p>
<p>We found that most of those involved in our research want to take steps to manage their personal data but don't know how.</p>
<p><strong>Openness to using a personal data store</strong></p>
<p>People of all ages said they'd be happy to use a personal data store and think it is better than what we have now. The key benefits for them were being able to control their data and to see it brought to life visually.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/61539d1906d63e4b2b000168/pds10.png" alt="Screenshot from the prototype showing buttons from a range of web services which allow you to connect your account from each service to the app." /></p>
<p><strong>The need for oversight and control</strong></p>
<p>Seeing who collects, shares and uses data and controlling permissions is important to young people.</p>
<p><strong>The profiles are valued</strong></p>
<p>We had positive feedback on the profile prototypes, with people interested in ones that help manage and inform lifestyle choices and behaviours.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/61539cbd06d63ecdfb000186/pds7.png" alt="Screenshot showing a promo with more details on how to sign up for the carbon tracking app." /></p>
<p>The findings inform our next steps and will shape the vision and roadmap of the project in the future. Following the positive feedback for the profiles, one of our priorities will be to continue work on the media profile to make it as compelling as possible.</p>
<h2>What's next?</h2>
<p>We've been researching <a href="https://www.bbc.co.uk/rd/topics/internet">the future development of the internet</a> and ways it can be enhanced to deliver public value for many years. This work on creating a public service data ecosystem covers just one part of this &ndash; but we hope our findings will contribute to improvements in the wider industry. With that in mind, we are collaborating and demoing with other organisations working with Solid, including <a href="https://innovatie.vrt.be/index.php/en">Belgian broadcaster VRT</a>, <a href="https://www.bankofapis.com/">NatWest</a> and the digital team at the <a href="https://mojdigital.blog.gov.uk/">Ministry of Justice</a>. We will also be speaking at events and conferences as we share our findings with the world.</p>
<p>Next up, we'll be using the research findings to help build on our work so far. We'll also be looking at how we can scale up for larger trials in the future and having discussions with potential internal and external partners on possible services and features. Finally, there will be more studies and evaluations into the longer-term impact of work in this area. Creating and testing these prototypes is just the beginning &ndash; we look forward to sharing our next update.</p>
<blockquote>
<p><strong><a href="https://twitter.com/intent/tweet?url=&amp;text=Creator of the World Wide Web, Sir Tim Berners-Lee, wants to reconfigure it so users own and control their data. @BBCRD are investigating how to give people the chance to do just that: https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research">Tweet This</a> - <a href="https://www.facebook.com/sharer/sharer.php?u=https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research">Share on Facebook</a> - <a href="https://www.linkedin.com/shareArticle?url=https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research">Share on LinkedIn</a></strong></p>
<p><strong><a href="https://reddit.com/submit/?url=https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research">Share on Reddit</a> - <a href="https://news.ycombinator.com/submitlink?u=https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research">Share on Hacker News</a></strong></p>
<hr />
<h2 class="b-font-family-serif"><a href="https://storyplayer.pilots.bbcconnectedstudio.co.uk/experience/bbc_rd_2020_showcase">Interactive Presentation - BBC R&amp;D Showcase</a></h2>
<p>In this&nbsp;<a href="https://storyplayer.pilots.bbcconnectedstudio.co.uk/experience/bbc_rd_2020_showcase">interactive experience discussing the use of personal data</a>&nbsp;you can choose what you would like to know more about from our engineers.</p>
<hr />
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-06-bbc-box-personal-data-privacy">BBC R&amp;D - Introducing the BBC Box</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2020-09-personal-data-store-privacy-services">BBC R&amp;D -&nbsp;Putting audience data at the heart of the BBC</a></strong></p>
<p><strong><a href="https://www.wired.co.uk/article/bbc-data-personalisation">Wired -&nbsp;The BBC&rsquo;s radical new data plan</a></strong></p>
<p><strong><a href="https://www.slashgear.com/bbc-box-personal-cloud-aims-to-rewrite-the-rules-on-internet-privacy-19581160/">SlashGear - BBC Box personal cloud aims to rewrite the rules on internet privacy</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/news/technology-48825413">BBC News - Personal online data to inform BBC audience choice</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/projects/bbc-box-personal-data-management-privacy">BBC R&amp;D - Personal Data Stewardship</a></strong></p>
<p><strong><a href="http://www.databoxproject.uk/">Databox</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2015-01-the-ethics-of-data-videos">BBC R&amp;D - The Ethics of Personal Data</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/projects/data-science-research-partnership">BBC R&amp;D - Data Science Research Partnership</a></strong></p>
<p><strong><a href="https://www.broadcastnow.co.uk/bbc/bbc-tackles-data-safety-with-disruptive-device/5140546.article">Broadcast - BBC tackles data safety with &lsquo;disruptive&rsquo; device</a></strong></p>
<p><strong><a href="https://www.gizmodo.co.uk/2019/06/the-bbc-is-doing-cloud-storage-and-wants-you-to-have-full-control-over-your-data/">Gizmodo - BBC Box personal cloud aims to rewrite the rules on internet privacy</a></strong></p>
</blockquote>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>Explaining Artificial Intelligence Part 4: A Practical Toolkit</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-09-explaining-artificial-intelligence-part-4-a-practical-toolkit" />
        <id>https://www.bbc.co.uk/rd/blog/2021-09-explaining-artificial-intelligence-part-4-a-practical-toolkit</id>

        <published>2021-09-22T14:02:32Z</published>
        <updated>2021-09-22T14:02:32Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            As Artificial Intelligence (AI) is used in more BBC products and everything else online, we think it’s important to deliver AI-powered systems that are responsibly and ethically designed.
          ]]>
        </summary>

        
          <author><name>BBC Research and Development</name></author>
        

        
          <category term="Artificial Intelligence &amp; Machine Learning" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
            <p>As Artificial Intelligence (AI) is used in more BBC products and everything else online, we think it’s important to deliver AI-powered systems that are responsibly and ethically designed. We also want to ensure that everyone has the opportunity to understand more about how this influential technology works in the world. This post by <strong>Karen Pudner</strong> is part of a series of posts on this topic.</p>

<p>Tristan&rsquo;s previous blog posts set out <a href="https://www.bbc.co.uk/rd/blog/2021-07-explaining-artificial-intelligence-part-1-why-is-this-important">why explaining AI is important</a>, explored <a href="https://www.bbc.co.uk/rd/blog/2021-07-explaining-artificial-intelligence-part-2-what-where-and-how">different methods for explaining AI</a> and <a href="https://www.bbc.co.uk/rd/blog/2021-08-explaining-artificial-intelligence-part-3-what-does-ai-look-like">exposed the difficulties in visualising AI</a>. In parallel, the responsible Machine Learning (ML) team at the BBC has been thinking about how best to explain AI/ML projects to our colleagues and collaborators.</p>
<p>When developing BBC products and tools which use AI/ML (I will mostly refer to this technology as ML from here on), it is important to involve not just ML specialists, but also staff with domain expertise e.g. in programme making, editorial or journalism. Clear explanations of how ML works can facilitate multi-disciplinary input into the development of ML systems; ensuring that BBC values and editorial guidelines are integrated, helping to improve our models and ultimately ensuring ML work is responsible, successful, accepted and trustworthy. This is not just as add-on or afterthought; we want a fully collaborative approach with a range of colleagues at all stages of the process, to ensure that BBC values such as diversity of content, balance and fairness are embedded in our ML systems.</p>
<p>The challenges of explaining ML to BBC colleagues are similar to the challenges of explaining ML to the general public. While they&rsquo;re often curious and keen to contribute, colleagues have busy jobs and a limited amount of time to spend on learning the technical complexities of different ML models. So we felt it was important to pitch explanations at the right level, and to provide different types of explanations at key stages of ML project development.</p>
<p>For example, when building an ML tool which uses image recognition to scan a TV programme for potentially suitable stills for thumbnails, we needed to involve editorial staff to help identify what makes a &lsquo;good&rsquo; image and to get feedback on unsuitable images. Necessary explanations would need to cover: a broad technical overview of how image recognition works, the potential for bias, and how/when colleague feedback could help improve the tool.</p>
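<p>To make that concrete, here is a minimal sketch of the kind of pre-filtering such a tool might do before editorial review. It is illustrative only: the actual BBC tool is not public, so the heuristics and thresholds below (Laplacian-variance sharpness and mean brightness) are simple stand-ins for a learned model, and the function names are our own.</p>
<pre><code># A rough, illustrative sketch: pre-filter video frames with two simple
# heuristics before a human (or a learned model) picks thumbnails.
# Thresholds and function names here are invented for illustration.
import cv2  # pip install opencv-python

def frame_quality(frame):
    """Score one frame: Laplacian variance (sharpness) and mean brightness."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low value = blurry
    brightness = gray.mean()  # used to reject near-black or blown-out frames
    return sharpness, brightness

def candidate_stills(video_path, every_n=25):
    """Yield (frame_index, frame) pairs that pass basic quality checks."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            sharpness, brightness = frame_quality(frame)
            if sharpness > 100 and 40 < brightness < 220:
                yield index, frame
        index += 1
    capture.release()
</code></pre>
<p>Even a toy like this helps the conversation: editorial colleagues can see exactly which checks are mechanical, and where their judgement (and feedback on bias) is still needed.</p>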
<h2>An Explainability Toolkit</h2>
<p>We already have a comprehensive checklist for developing responsible ML: <a href="https://www.bbc.co.uk/rd/publications/responsible-ai-at-the-bbc-our-machine-learning-engine-principles">the BBC&rsquo;s ML Engine Principles (MLEP)</a>. To support this, we identified five different stages of an ML project which require different types of engagement around explanations:</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6149e09306d63ed7d1000011/stages.png" alt="Flowchart showing the 5 stages of an ML project - Introducing ML Capabilities, Project Kick off, Design, Validation and Production." /></p>
<p>To illustrate each stage of the process, we can look at another typical use case for ML at the BBC: recommendation engines. Our ML teams have experience in collaborating with colleagues in areas such as iPlayer, Sounds and News to develop recommendations for their content.</p>
<h2>Introducing ML capabilities</h2>
<p>It is important that colleagues have realistic expectations of what AI/ML can and cannot do, and understand that creating ML systems is a collaborative process. Simple visualisations of ML models are helpful, as are short introductory videos which avoid technical terminology.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6149e0f606d63e315d000003/quote.png" alt="undefined" /></p>
<h2>Project kick-off</h2>
<p>At the start of a project, an ML team will want to hold a series of meetings to discuss scope, objectives and how to collaborate. Good formats include a kick-off meeting, &lsquo;show and tell&rsquo; sessions to gather different perspectives, and ideation or storyboarding workshops. The ML team can share some more technical materials, as well as existing research and data insights. It is key to establish a common understanding of how an ML model will work and how/where non-technical colleagues will provide feedback and help improve the model.</p>
<p>More detailed explanations of algorithmic models are appropriate here, for example:</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6149e13206d63e64be000002/chart.png" alt="Flow chart showing how the recommender works." /></p>
<h2>Design</h2>
<p>Typically, the ML team will set up some user testing via a prototype. This helps with explainability of the model, as it allows staff to experiment and see the output (in this case, article recommendations) for different inputs (the seed article). This approach allows colleagues to give feedback on the quality of the ML model&rsquo;s outputs, and this can serve as a starting point for discussing necessary modifications, e.g. to align with the BBC&rsquo;s editorial values and rules, or to improve the quality of recommendations.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6149e16406d63ed7d1000014/form.png" alt="undefined" /></p>
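<p>As a sketch of what sits behind such a prototype, here is a minimal content-based recommender: given a seed article, it ranks the others by TF-IDF cosine similarity. This is a deliberately simple stand-in, not the BBC's production recommender, and the toy articles are invented.</p>
<pre><code># A minimal content-based recommender sketch (not the production system):
# rank articles by TF-IDF cosine similarity to a chosen seed article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = {  # toy corpus, invented for illustration
    "a1": "Budget announcement: chancellor outlines new spending plans",
    "a2": "Spending review: what the budget means for public services",
    "a3": "Wimbledon final: a classic five-set match on centre court",
}

ids = list(articles)
matrix = TfidfVectorizer().fit_transform(articles.values())

def recommend(seed_id, k=2):
    """Return the ids of the k articles most similar to the seed."""
    row = ids.index(seed_id)
    scores = cosine_similarity(matrix[row], matrix).ravel()
    ranked = sorted((i for i in range(len(ids)) if i != row),
                    key=lambda i: scores[i], reverse=True)
    return [ids[i] for i in ranked[:k]]

print(recommend("a1"))  # "a2" ranks first: it shares the budget vocabulary
</code></pre>
<p>Letting colleagues type in a seed article and immediately see what comes back, as in the feedback form above, is what turns the model from a black box into something they can probe.</p>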
<h2>Validation</h2>
<p>Throughout development, regular meetings will be held to discuss the results of user testing. For recommendations, these discussions can cover metrics such as novelty and diversity to ensure up-to-date and varied content is shown to audiences. This stage is also where &lsquo;business rules&rsquo; (i.e. instructions on what inputs colleagues feel the algorithm should prioritise, avoid or down-weight) can be agreed and factored into the model. Decision logs are a good way of recording these rules and any further changes to how the ML model will operate.</p>
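<p>For illustration, here is one hedged sketch of how agreed business rules might be layered on top of raw model scores. The specific rules and weights are invented; in a real project they would come out of the editorial discussions described above and be captured in the decision log.</p>
<pre><code># Illustrative only: re-rank model output according to agreed editorial
# 'business rules'. The specific rules and weights are invented examples.
from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    score: float       # raw model score
    age_days: int
    is_breaking: bool

def apply_business_rules(candidates):
    """Adjust scores per the agreed rules, then sort best-first."""
    def adjusted(c):
        score = c.score
        if c.age_days > 30:   # down-weight stale content
            score *= 0.5
        if c.is_breaking:     # prioritise breaking news
            score *= 1.5
        return score
    return sorted(candidates, key=adjusted, reverse=True)

ranked = apply_business_rules([
    Candidate("old-hit", 0.9, age_days=90, is_breaking=False),   # -> 0.45
    Candidate("fresh-news", 0.7, age_days=1, is_breaking=True),  # -> 1.05
])
print([c.item_id for c in ranked])  # ['fresh-news', 'old-hit']
</code></pre>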
<h2>Production</h2>
<p>Once an ML project has gone live, regular meetings between the ML team and colleagues/collaborators should continue, to enable ongoing feedback and improvement of the model and to address any issues.</p>
<p>For transparency and accountability purposes, it is also important to complete and update documentation explaining how the ML model works and who has signed it off. This could be helpful for audit purposes, whether internal or regulatory. BBC teams have recorded decisions made in ML development by completing the <a href="https://www.bbc.co.uk/rd/publications/responsible-ai-at-the-bbc-our-machine-learning-engine-principles">&lsquo;MLEP&rsquo; responsible ML checklist</a>. Other options include explainability statements or the <a href="https://modelcards.withgoogle.com/about">model cards</a> format developed by Google.</p>
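<p>As a sketch of what such documentation can look like in practice, here is a minimal machine-readable record loosely inspired by the model cards format linked above. The field names and values are illustrative, not a standard schema or a real BBC model.</p>
<pre><code># Illustrative sketch of machine-readable model documentation, loosely
# inspired by the model cards format. Fields and values are invented.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str
    known_limitations: str
    signed_off_by: str

card = ModelCard(
    model_name="article-recommender-v2",          # hypothetical model
    intended_use="Suggest related articles to signed-in users.",
    training_data="Anonymised interaction logs (example).",
    known_limitations="Cold-start items receive few impressions.",
    signed_off_by="Editorial lead and ML team (example).",
)

# Store alongside the deployed model so auditors can trace decisions.
print(json.dumps(asdict(card), indent=2))
</code></pre>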
<p>Finally, teams should also think about how ML systems are explained to end users. The BBC has committed to doing this in plain English, and one example is our explainer pages on <a href="https://www.bbc.co.uk/usingthebbc/account/how-is-the-bbc-personalised-to-me/">personalisation</a> and <a href="https://www.bbc.co.uk/usingthebbc/account/what-does-recommended-for-you-mean/">recommendations</a>.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6149defe06d63e99a3000148/INjFKI5f.png" alt="Cartoon illustration of a laptop and several personalised recommendations for content" /></p>
<h2>What&rsquo;s Next?</h2>
<p>Internally, we&rsquo;re working on gathering examples of how different teams are explaining ML in practice to include in an explainability &lsquo;toolkit&rsquo;. We already have contributions from teams working on recommendation engines, voice assistants, image recognition, automated moderation and automated content tagging.</p>
<p>We think this framework is a helpful starting point for thinking through AI/ML explainability needs in a practical context. We know it doesn&rsquo;t address the &lsquo;black box&rsquo; problem of explaining outputs/decisions of very complex ML models. But it does help identify when and how to bring non-technical colleagues into ML development so that their expertise and knowledge of audience needs and BBC values can have a real impact on the way in which ML is implemented.</p>
<p>We hope other organisations using ML can benefit from this way of working, bringing in colleagues who understand the organisational context and the specific ethical issues, industry needs and business values which need to be addressed. It is exciting to think about what explaining AI/ML can do to promote effective collaboration, help build more successful ML products or services, and increase trust, engagement and transparency in the use of AI/ML more widely.</p>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>IRFS weeknotes #327</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-09-irfs-weeknotes-number-327" />
        <id>https://www.bbc.co.uk/rd/blog/2021-09-irfs-weeknotes-number-327</id>

        <published>2021-09-16T12:25:56Z</published>
        <updated>2021-09-16T12:25:56Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            This week: our social interaction toolkit applied to existing apps, a productive reading week, and talking about understanding AI.
          ]]>
        </summary>

        
          
            <author><name>Libby Miller</name></author>
          
        

        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
            
<p>In the <strong>Interaction and Prototyping team</strong>, Mathieu and Andrew have compared and applied the principles of the <a href="https://paper.dropbox.com/doc/Social-Interaction-Toolkit--BRpARkeeYC0lpKwSSdlEo38tAg-nqZM7J2XugGU74TlOek2p">Social Interaction Toolkit</a> to existing collaborative apps such as Google Docs and Zoom. This led to wireframe versions that are compliant with the toolkit principles. In addition to production tools, they are also exploring collaborative creative ideation applications.</p>
<p>Ant has continued his investigation into prototyping for in-car experiences, focusing on Echo Auto and Android Auto.</p>
<p>Barbara was swimming in the Aegean Sea 🏊🏻&zwj;♀️</p>
<p>Last week the <strong>Data Team</strong> were on a Reading Week. This is where they take a dip into a new area of research, or dive deeper into an area they are already working on, to help expand their knowledge base.</p>
<p>Ben looked into TensorFlow recommender libraries, with a long-term view to improving the algorithms we&rsquo;ve supplied to the iPlayer and iRex teams. Polina hosted a RecSys (recommender systems) book club to look at what&rsquo;s new in this particular field, and also worked out a potential musical collaboration with Queen Mary University of London, a university we&rsquo;ve had a very productive research relationship with over the years. Chris was also working on some QMUL research, commenting on some papers we&rsquo;ve collaborated on as part of an existing partnership.</p>
<p>Tamsin is prepping for the Hackweek on News timelines that&rsquo;s coming up in a few weeks, and also trying to find a way to extract useful data from a large trawl of BBC Drama scripts we&rsquo;ve got. Andy is carrying on with his important Data Science Research Partnership work, and Ollie is taking a look at automated summarisation research, which should help inform his work for the Natural Language Processing team.</p>
<p><strong>Internet and Society</strong> have been working on the <a href="https://www.bbc.co.uk/rd/blog/2021-08-explaining-artificial-intelligence-part-3-what-does-ai-look-like">Images of AI</a> website: Alicia has been building the site, researching accessibility for it and deploying it; David has been designing the site and creating ideas for stock images.</p>
<p>Also on this theme, Galen's been improving and testing his multiplayer guessing game "Missing Link", where you play against others and a GPT-style "AI".</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/614333bb06d63eb81e000252/missing_link.png" alt="A screenshot of the game 'missing link'." /></p>
<p>Tristan and Libby gave <a href="https://www.cambridgewireless.co.uk/events/more-intelligent-or-less-artificial-ux-choice-ai-h/">talks on the theme of understanding machine learning</a> to the Cambridge Wireless UX special interest group.</p>
<p>Henry and Libby have been talking about the Futures project and running a number of workshops using the techniques with different teams.</p>
<p><strong>Links</strong></p>
<p><a href="https://www.wired.com/story/ai-phishing-emails">AI Wrote Better Phishing Emails Than Humans in a Recent Test</a></p>
<p><a href="https://noahveltman.com/aimovies/">AI movie posters</a></p>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>Spectrum Sounds: a lockdown audio composition in seven colours</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-09-hearing-synaesthesia-audio-composition-music" />
        <id>https://www.bbc.co.uk/rd/blog/2021-09-hearing-synaesthesia-audio-composition-music</id>

        <published>2021-09-06T12:18:40Z</published>
        <updated>2021-09-06T12:18:40Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            Seven short pieces of music in the colours of the spectrum arising out of lockdown, but also as a composition by an autistic man with synaesthesia and severe unbalanced hearing loss.
          ]]>
        </summary>

        
          <author><name>BBC Research and Development</name></author>
        

        
          <category term="Audio" />
        
          <category term="Immersion" />
        
          <category term="Accessibility" />
        
          <category term="Interactivity" />
        
          <category term="Personalisation" />
        
          <category term="Participation" />
        
          <category term="Devices" />
        
          <category term="Content Formats" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
            <p>Spectrum Sounds is a collection of seven short pieces of music in the colours of the spectrum. The BBC commissioned it as part of the <a href="https://www.bbc.co.uk/arts">Culture in Quarantine</a> programme.</p>

<p>In this blog post, composer <strong>Andrew Hugill</strong> explains how the piece arises out of lockdown but also from his hearing and listening, both as an autistic man and as someone with severe unbalanced hearing loss. Andrew worked on this composition remotely with several musicians, many of whom are also autistic or have hearing differences or both.</p>

            <blockquote>
<p><span style="font-size: medium;"><strong><a href="https://www.bbc.co.uk/taster/pilots/spectrum-sounds">BBC Taster - Try Spectrum Sounds</a></strong></span></p>
</blockquote>
<p>To create Spectrum Sounds, I used three approaches:</p>
<ol>
<li>A conventional linear audio composition that combined processed instrumental and digital sound, resulting in a stereo file that could be <a href="https://www.bbc.co.uk/programmes/m000zch2">broadcast on radio</a>;</li>
<li>An "evocative transcription" video of each piece, designed for those who do not listen primarily with their cochlea. Each piece of visual music can be experienced with or without audio;</li>
<li>A version for BBC Research &amp; Development's <a href="https://www.bbc.co.uk/rd/projects/audio-device-orchestration">Audio Orchestrator</a> environment that would allow listeners to set up and control their own listening situation by disseminating sounds (and images) across at least three linked devices.</li>
</ol>
<p>The Audio Orchestrator opened up new possibilities. I was attracted to it because it enabled the listener to control the listening environment. So, for example, since my right ear is so much worse than my left ear, I cannot detect spatial effects across a stereo field. However, if I position my auxiliary devices all on the left, I can get a sense of how the sound moves across audio space.</p>
<p>Also, I could re-conceptualise the pieces to encourage even stronger communication of synaesthesia. <a href="https://en.wikipedia.org/wiki/Synesthesia">Synaesthesia</a> is a phenomenon whereby the senses become confused or blended in the brain. In my case, this means that certain colours are associated with certain sounds (and taste gets in the mix too sometimes). For example, when I hear a trumpet, especially playing the note F sharp, I experience the colour blue.</p>
<p>Each of the very different pieces in the Spectrum Sounds set contains a collection of such colour/sound associations. The advantage of the Audio Orchestrator is that it enables me to bring out those relationships in ways that are not available via stereo alone.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/612f8ac206d63e2cf9000198/red2.png" alt="The colour red." /></p>
<h2>A Robin in the Snow</h2>
<p>The red piece tells a little story on solo violin. I was walking in the snow during the lockdown. All the sounds were even more muffled than usual. A melody began to form in my mind. Suddenly, a robin's singing cut through into my serviceable hearing range. This is conveyed by having the muted violin play through the central device at the start, then the robin calls (there are, in fact, two robins) answer one another via the left and right auxiliaries. My inner melody is then transformed as I continue my walk, reaching its reddest moment as it hits the open E string on the violin, becoming flowing and encompassing the full range of the instrument. This effect works wonderfully well on the Audio Orchestrator.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/612f6e2706d63efd7c000079/orange.png" alt="The colour orange." /></p>
<h2>Fluted Orange Turbulence</h2>
<p>The flute is coloured with orange sounds. I was able to deploy these sounds (e.g. wind chimes, drone harmonics, calm chords, synthetic waves, music boxes, etc.) in a dynamic way across the devices. I also used images to identify each sound so the listener can follow what is happening in this highly active piece.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/612f6e3b06d63e2cf90000d1/yellow.png" alt="The colour yellow." /></p>
<h2>A World in Yellow</h2>
<p>Metal and glass sounds are layered to create a unique sound world that is bound together by a drone and topped off with a spinning bicycle wheel. Here the Audio Orchestrator enabled me to bring out the textures of the layers in ways that are impossible in stereo alone.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/612f8af506d63e2cf9000199/blue.png" alt="The colour blue." /></p>
<h2>Ice Hole Trumpet</h2>
<p>The sound of wind blowing through ice holes in the Antarctic is combined with unpitched trumpet breath sounds, which then form into a melody that is full of shards of echo. Here the idea is to convey the spatial expanse of the ice, positioning the trumpet like a lone animal calling in a cold desert. The distribution of the three devices is very effective for this.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/612f6e5306d63e2cf90000d3/green.png" alt="The colour green." /></p>
<h2>Verdigrade</h2>
<p>The spatialisation is also effective in conveying the degradation of my hearing that is depicted in this piece. Verdigris is the green patina that appears on certain metals when they are exposed to the air. In the piece, a normal piano plays a rigorous sequence. Gradually a different piano takes over with a reciprocal sequence. This piano digitally recreates diplacusis, the phenomenon by which I hear two different pitches when one is played. This left-right degradation is powerfully conveyed by the distribution of linked devices.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/612f6e6506d63e8d5800005f/violet.png" alt="The colour violet." /></p>
<h2>What is violet?</h2>
<p>In the violet piece, a lutenist plays an unmeasured prelude at dusk, surrounded by the call of Noctule bats. I can create a good sense of 'surround' with the spatial distribution of the soundscape while the lute remains fixed in the centre. The result has the hazy violet vagueness evoked by clouds at dusk. The lute music begins lazily but follows some twists and turns, tightening up as if seeking temporary destinations, just like the flight of a bat across the space.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6136070306d63e58e60001ff/purple2.png" alt="The colour purple" /></p>
<h2>Rook+Cello</h2>
<p>The combination of cello (playing minor 6ths in the Phrygian mode) with rook sounds is irrepressibly purple to me. Once again, the Audio Orchestrator enables a spatial treatment in which the listener is situated in a rookery, then in a recital room with solo cello, then back to the rookery and ultimately to a single rook. There is something about the slow nobility of this piece that is conveyed especially well by this treatment.</p>
<hr />
<p>One thing that has struck me while working with Audio Orchestrator is that it is conceived as a dissemination platform. Since creativity often involves using technology in ways that the makers did not intend, my next plan is to use Audio Orchestrator to make some digital scores. These will enable creative collaborations with various artists to make work that is conceived from the outset in terms of this technology.</p>
<blockquote>
<p><strong><a href="https://twitter.com/intent/tweet?url=&amp;text=Seven short pieces of music in the colours of the spectrum arising out of lockdown, but also as a composition by an autistic man with synaesthesia and severe unbalanced hearing loss: https://bbc.in/3BBj4e8">Tweet This</a> - <a href="https://www.facebook.com/sharer/sharer.php?u=https://www.bbc.co.uk/rd/blog/2021-09-hearing-synaesthesia-audio-composition-music">Share on Facebook</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/taster/pilots/spectrum-sounds">BBC Taster - Try Spectrum Sounds</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/makerbox/tools/audio-orchestrator">BBC MakerBox - Audio Orchestrator</a></strong></p>
<p><strong><a href="https://www.telegraph.co.uk/news/2020/08/09/bbc-technology-turns-gadgets-speakers-could-used-proms-says/">The Telegraph - BBC technology that turns your gadgets into speakers could be used for the Proms, says Radio 3 controller</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-11-audio-experiences-device-orchestration">BBC R&amp;D - Vostok K Incident - Immersive Spatial Sound Using Personal Audio Devices</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2018-10-multi-speaker-immersive-audio-metadata">BBC R&amp;D - Vostok-K Incident: Immersive Audio Drama on Personal Devices</a></strong></p>
<p><strong><a href="https://www.ft.com/content/9eaa588a-b750-11e8-bbc3-ccd7de085ffe">Financial Times - BBC pioneers immersive sound using phone speakers</a></strong></p>
<p><strong><a href="https://www.telegraph.co.uk/news/2018/09/13/bbc-develops-system-turn-household-objects-surround-sound/">The Telegraph - BBC develops system to turn household objects into surround sound</a></strong></p>
<p><strong><a href="https://www.whathifi.com/news/bbc-3d-audio-technology-aims-to-revolutionise-home-entertainment">What Hi-Fi - BBC 3D audio technology aims to revolutionise home entertainment</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/publications/whitepaper352">BBC R&amp;D - Evaluation of an immersive audio experience</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2017-09-mermaids-tears-object-based-audio">BBC R&amp;D - The Mermaid's Tears</a></strong></p>
</blockquote>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>Using board games as interfaces for social interactions</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-08-board-games-digital-storytelling-participation-experiences" />
        <id>https://www.bbc.co.uk/rd/blog/2021-08-board-games-digital-storytelling-participation-experiences</id>

        <published>2021-08-19T14:22:26Z</published>
        <updated>2021-08-19T14:22:26Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            Could the framework of cooperative board games provide a more human and social dimension to our digital storytelling and participatory experiences work?
          ]]>
        </summary>

        
          
            <author><name>Mathieu Triay</name></author>
          
            <author><name>Andrew Wood</name></author>
          
        

        
          <category term="Content Formats" />
        
          <category term="Content Discovery" />
        
          <category term="Participation" />
        
          <category term="Interactivity" />
        
          <category term="User Interfaces" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
<p>The Covid-19 pandemic demonstrated the need to maintain a social connection when computers increasingly mediate our communication. Individuals experimented with different ways to stay in touch with their friends and family, including weekly video calls, which quickly became tiresome and less regular. It can be hard to chat when nothing much has happened lately, or when you don’t feel confident enough to speak up in a large group conversation. Additionally, for many people working from home, another video meeting is the last thing they want to do.</p>

<p>A popular option to avoid these issues was to organise quizzes or games. They create conversation; everyone has a turn to speak and do something. Simply put, there is a set of rules which mediates the social interactions between the participants. In real-life meetings we rely on unspoken cues such as body language and physical distance, but virtual meetings make perceiving these cues much harder. Using a clear set of rules makes the social flow explicit.</p>

<p>Also, board games, and more specifically cooperative ones, successfully bring groups together and create a sense of togetherness. They offer ways for players to immerse themselves in a theme while promoting collaborative problem solving, thus re-creating the feeling of a team working towards a common goal.</p>

            <p>Could the framework of cooperative board games provide a more human and social dimension to our digital storytelling and participatory experiences work? Would it also bring some solutions to the problems of remote working and socialising?</p>
<p>Our research started by analysing the design of popular collaborative board games (including titles such as Pandemic, Arkham Horror, The Crew and Mysterium) and various RPGs (such as Shadowrun and Call of Cthulhu) to extract re-usable patterns that could improve participatory experiences.</p>
<h2>The patterns</h2>
<p>Analysing the games, we looked for aspects that could enhance the social feeling and create an ongoing conversation. Below are a few of the high-level concepts we found.</p>
<h2>Episodic &amp; regular</h2>
<p>A lot of cooperative board games seem to offer a discoverable narrative. Usually, the players can change what will happen next, but sometimes the story is simply delivered to them in instalments. Moreover, the story is sometimes divided into episodes. Each episode can be experienced one after the other, but offers the possibility to take a break in between. This means you can play with your group of friends over a few evenings or a few months, at your own pace.</p>
<p>The story keeps the players engaged with suspense and anticipation while pushing them to meet again to discover what&rsquo;s next. A similar thing can be observed when multiple people tune in to a TV show broadcast each week (as opposed to the series being available all at once): it creates a regular rendezvous and an occasion to chat after the show.</p>
<h2>Asynchronous</h2>
<p>Most cooperative board games seem to encourage players to take turns. This allows people to discuss what they want to do without the pressure of time but also makes explicit who is currently in charge. Players make it clear when they&rsquo;re done and who is next.</p>
<p>The flow allows everyone to express themselves confidently by carving out a space for each individual while maintaining group conversation. However, it can create a quarterbacking problem where a single player might instruct everyone else what to do. Fortunately, there&rsquo;s a variety of ways to counteract it, from entirely silent games to passing around a leader token.</p>
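<p>As a small illustration of this pattern, here is a sketch of a turn manager that makes explicit whose turn it is and passes a leader token to the next player once per round. The player names are placeholders; this is one possible reading of the pattern, not a specification.</p>
<pre><code># A sketch of explicit turn-taking with a rotating leader token,
# one way to counteract quarterbacking. Names are placeholders.
class TurnManager:
    def __init__(self, players):
        self.players = players
        self.turn = 0    # index of the player whose turn it is
        self.round = 0   # completed rounds; also selects the leader

    @property
    def current(self):
        return self.players[self.turn]

    @property
    def leader(self):
        # the leader token passes to the next player after each full round
        return self.players[self.round % len(self.players)]

    def end_turn(self):
        """The current player explicitly ends their turn."""
        self.turn = (self.turn + 1) % len(self.players)
        if self.turn == 0:
            self.round += 1
        return self.current

turns = TurnManager(["Ana", "Ben", "Chloe"])
assert turns.current == "Ana" and turns.leader == "Ana"
for _ in range(3):
    turns.end_turn()  # a full round passes
assert turns.current == "Ana" and turns.leader == "Ben"
</code></pre>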
<h2>Objective</h2>
<p>In cooperative board games, the group is trying to achieve something together. Usually, there is an objective: collect a set of items or reach a certain number of points. It gives purpose to the group, painting the game as a common enemy to be defeated together.</p>
<p>This can be made more exciting by using a countdown. In the spirit of asynchronicity, this rarely has to do with a time limit in real life, but rather with a threshold within the game: for instance, having to do something before turn three, otherwise there will be bad consequences. The countdown creates a sense of urgency, reinforcing the social connection and the idea that the group is in it together.</p>
<h2>Roles &amp; responsibilities</h2>
<p>Cooperative tasks set by board games usually require creating a balanced team. Each player will have a character or role with different strengths and weaknesses. This has two advantages:</p>
<ul>
<li>It forces the group to organise and converse, making it feel like a team where everyone contributes.</li>
<li>It makes everyone feel useful and included. A well-designed game will require a contribution from the different aspects of each role.</li>
</ul>
<p>To succeed, the group has to be cohesive, and a single player cannot go ahead and do everything. This naturally creates a social feeling within the group as they undertake and complete the tasks together.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/611e3c3806d63ef6b5000026/roles-16x9.jpg" alt="Cards showing the different characters in Arkham Horror: The Living Card Game showing their different strengths and weaknesses." /></p>
<p style="text-align: center;">The different characters in Arkham Horror: The Living Card Game showing their different strengths and weaknesses.</p>
<h2>Communication</h2>
<p>To cooperate you need to communicate your intentions and your needs. In most board games, players are encouraged to narrate their actions so that everyone can follow what is going on. This can also include reading additional text provided by the game to give a narrative flavour.</p>
<p>Naturally, any limitation on communication increases the challenge but can also restrain the social aspect of the game by decreasing player interaction. Such a tool should be used carefully because it can dramatically increase the tension but also kill the social fun.</p>
<p>Though the players are usually encouraged to think for themselves, communication with the rest of the group to organise and plan is often necessary. For instance, the current player might submit their plan to the group to get their approval or ask them for additional resources to succeed.</p>
<h2>The board</h2>
<p>Board games have three main spaces where information is kept:</p>
<ul>
<li>The hand, which holds the player&rsquo;s private information;</li>
<li>The board, which represents all the information that isn&rsquo;t secret and that everyone in the group has access to;</li>
<li>The rulebook, which gives meaning to the information on the board and in the hand.</li>
</ul>
<p>In that dynamic, the board is the central shared space between the players that summarises their current and past actions. Everyone contributes to that space, changing it as they progress to construct a shared picture. It&rsquo;s the ultimate representation of the group&rsquo;s collaboration and what connects them.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/611e3c8206d63ebc3a000048/03_16_9.jpg" alt="Players sat around a board game, each holding a clutch of cards from the game." /></p>
<p style="text-align: center;">The relationship between the hand and the board.</p>
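<p>To make the split concrete, here is a small sketch of these spaces as data: a board everyone sees, private hands, and a per-player view that exposes only what the rules allow. The field names and card names are illustrative.</p>
<pre><code># A sketch of the hand/board split: the board is shared, hands are
# private, and each player's view is filtered accordingly.
from dataclasses import dataclass, field

@dataclass
class GameState:
    board: dict = field(default_factory=dict)   # visible to everyone
    hands: dict = field(default_factory=dict)   # player name -> private cards

    def view_for(self, player):
        """What one player may see: the board, their own hand,
        and only the sizes of the other players' hands."""
        return {
            "board": self.board,
            "hand": self.hands.get(player, []),
            "other_hand_sizes": {
                p: len(cards) for p, cards in self.hands.items() if p != player
            },
        }

state = GameState(
    board={"outbreaks": 2},   # shared state, given meaning by the rulebook
    hands={"Ana": ["airlift"], "Ben": ["forecast", "resilient population"]},
)
print(state.view_for("Ana"))  # Ana sees her cards but only that Ben holds 2
</code></pre>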
<h2>Uncertainty &amp; discoverability</h2>
<p>If you could do anything without risk you would lose the sense of accomplishment. In board games, that principle of chance and randomness is often applied to the player&rsquo;s actions. They rarely succeed automatically, which makes their achievements more valuable but also helps to keep the whole group interested in each other&rsquo;s actions, particularly when it&rsquo;s not their turn.</p>
<p>Everyone has a vested interest in your success when the task is cooperative. When the outcome of a test is revealed or a new element is discovered, it is shown to everyone at once, creating a shared moment of tension that can result in euphoria or disappointment.</p>
<h2>Tactility</h2>
<p>While digital applications now power some board games, by and large, the players have to move everything on the board physically. When a player takes an action, they might place a card down, then move a token, pick up some resources and then roll the dice. Because the players have to do all of this themselves, it contributes to their understanding of the game but also ensures that their intentions are broken down into smaller steps for the rest of the group.</p>
<p>As opposed to digital games, in board games, nothing happens without you purposefully doing it. Besides the fun aspect of moving pieces of cardboard around, it forces players to decompose their actions, making it clear to the group but also to themselves.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/611e3cd406d63ebc3a000049/02_16_9.jpg" alt="A player picks up a card from the game, while the board contains a selection of game pieces." /></p>
<p style="text-align: center;">Tactile elements such as tokens, cards and figurines are an important part of board games.</p>
<h2>Conclusion</h2>
<p>From the start of our investigation, the concept of the board seemed central. It&rsquo;s a shared space that we don&rsquo;t often see in digital experiences, where everything tends to be personalised. Through its very limitations, the board game makes the same information available to everyone in the same way. Could reverting to using a separate screen for a shared board bring back the positive side-effects it might have on togetherness?</p>
<p>We wanted to try this, but multi-screen applications come with a few challenges of their own. The main one is attention division &mdash; it&rsquo;s hard to know which screen to look at and when. This is due to a couple of things:</p>
<ul>
<li>The content is usually in real-time. You might miss something if you don&rsquo;t pause, creating FOMO (Fear Of Missing Out).</li>
<li>The multi-screen features are often added on top of an existing piece of content. That content is rarely designed for multiple devices and has to work on a single screen.</li>
</ul>
<p>However, by taking inspiration from board games, we can make this experience asynchronous. This gives players time to switch between screens at their leisure, without feeling something is happening without their knowledge. The tactile quality of board games can help us create experiences where each player understands what&rsquo;s happening every step of the way. And, by creating a new piece of content specifically designed for multiple screens, we can ensure the experience is optimal.</p>
<p>As a result, we decided to build a prototype game using the patterns we&rsquo;ve outlined in this article to help us assess if they&rsquo;re applicable in other digital experiences relevant to the BBC. It would take advantage of multiple screens to recreate the board and hand dynamic found in board games but would also work on a single screen by combining both.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/611e3d4306d63ebc3a00004e/04_16_9.jpg" alt="Desktop and mobile screenshots of our prototype - showing elements of gameplay." /></p>
<p style="text-align: center;">A preview of the prototype.</p>
<blockquote>
<p><strong>This is the first in a series of articles</strong>, and in a coming instalment we will describe how we approached building the prototype, as well as how we implemented each of these patterns.</p>
</blockquote>
<blockquote>
<p><strong><a href="https://twitter.com/intent/tweet?url=&amp;text=Could the framework of cooperative board games provide a more human and social dimension to @BBCRD's digital storytelling and participatory experiences work? Would it also bring some solutions to the problems of remote working and socialising? https://bit.ly/3AVkPT7">Tweet This</a>&nbsp;-&nbsp;<a href="https://www.facebook.com/sharer/sharer.php?u=http://www.bbc.co.uk/rd/blog/2021-03-archiving-storykits-interactive-web-based-experiences">Share on Facebook</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-01-interactive-drama-stories-branching-narrative">BBC R&amp;D - Where Next For Interactive Stories?</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-02-storytelling-interactive-digital-drama">BBC R&amp;D - Storytelling of the Future</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2018-10-object-based-media-tools-storyformer">BBC R&amp;D - StoryFormer: Building the Next Generation of Storytelling</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/blog/2019-07-personalised-documentary-data-instagramifaction">BBC R&amp;D - Making a Personalised, Data-Driven Documentary</a></strong></p>
<p><strong><a href="https://www.broadcastnow.co.uk/tech/bbc-unveils-personalised-instagram-documentary/5141731.article">Broadcast - BBC unveils personalised Instagram documentary<br /></a></strong></p>
<p><strong><a href="https://www.tvbeurope.com/technology/bbc-announces-first-full-length-interactive-programme">TVB Europe - BBC announces first full-length interactive programme</a></strong></p>
<p><strong><a href="http://www.bbc.co.uk/rd/blog/2017-07-creating-adaptive-stories-with-paper-tools-workshop-series">BBC R&amp;D - Creating Adaptive Stories with Paper Tools</a></strong></p>
<p><strong><a href="http://www.bbc.co.uk/rd/projects/cake">BBC R&amp;D - Cook Along Kitchen Experience</a></strong></p>
<p><strong><a href="https://www.bbc.co.uk/rd/projects/object-based-media-toolkit">BBC R&amp;D - StoryKit: An Object-Based Media Toolkit</a></strong></p>
</blockquote>
          ]]>
        </content>
      </entry>
    
      <entry>
        <title>Explaining Artificial Intelligence. Part 3 - what does AI look like?</title>
        <link rel="alternate" type="text/html" href="https://www.bbc.co.uk/rd/blog/2021-08-explaining-artificial-intelligence-part-3-what-does-ai-look-like" />
        <id>https://www.bbc.co.uk/rd/blog/2021-08-explaining-artificial-intelligence-part-3-what-does-ai-look-like</id>

        <published>2021-08-18T07:46:43Z</published>
        <updated>2021-08-18T07:46:43Z</updated>

        <summary type="html" xml:lang="en">
          <![CDATA[
            We are working towards better images and media to represent AI
          ]]>
        </summary>

        
          
            <author><name>Tristan Ferne</name></author>
          
            <author><name>Henry Cooke</name></author>
          
            <author><name>David Man</name></author>
          
        

        
          <category term="Artificial Intelligence &amp; Machine Learning" />
        

        <content type="html" xml:lang="en" xml:base="http://www.bbc.co.uk/rd/blog">
          <![CDATA[
<p><em>As Artificial Intelligence (AI) is used in more BBC products, and in so much else online, we think it’s important to deliver AI-powered systems that are responsibly and ethically designed. We also want to ensure that everyone has the opportunity to understand more about how this influential technology works in the world. This is part of a series of posts on this topic.</em></p>

<p>We have noticed that news stories or press releases about AI are often illustrated with stock photos of shiny gendered robots, glowing blue brains or the Terminator. We don&#39;t think that these images actually represent the technologies of AI and ML that are in use and being developed. Indeed, we think these are unhelpful stereotypes; they set unrealistic expectations, hinder wider understanding of the technology and potentially sow fear. Ultimately this affects public understanding and critical discourse around this <a href="https://www.bbc.co.uk/rd/blog/2021-07-explaining-artificial-intelligence-part-1-why-is-this-important">increasingly influential technology</a>. We are working towards better, less clichéd, more accurate and more representative images and media for AI.</p>

<p>Try going to your search engine of choice and searching for images of AI. What do you get?</p>

            <h2>What are the issues?</h2>
<p>The problems with stock images of AI have been discussed and analysed a number of times already, and there are some great articles and papers that describe the issues better than we can. The <a href="https://www.sciartmagazine.com/perception-is-seeing-believing.html">Is Seeing Believing?</a> project asks how we can evolve the visual language of AI. <a href="https://medium.com/@ageitgey/the-real-scandal-of-ai-awful-stock-photos-456633b9b0fc">The Real Scandal of AI</a> also identifies issues with stock photos. The AI Myths project, amongst other topics, includes <a href="https://www.aimyths.org/ai-equals-shiny-humanoid-robots">a feature on how shiny robots are often used to represent AI</a>.</p>
<p>Going a bit deeper, <a href="https://www.noemamag.com/blueprints-of-intelligence/">this article explores how researchers have illustrated AI over the decades</a>, <a href="https://link.springer.com/article/10.1007/s13347-020-00415-6">this paper discusses how AI is often portrayed as white &ldquo;in colour, ethnicity, or both&rdquo;</a> and <a href="https://www.mdpi.com/2077-1444/11/5/253">this paper investigates the &ldquo;AI Creation&rdquo; meme that features a human hand and a machine hand nearly touching</a>. Wider issues with the portrayal and perception of AI have also been frequently studied, for example <a href="https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf">by the Royal Society here</a>.</p>
<p>To help us think through the problem, we developed a workshop format in which we discuss existing imagery and then think about, sketch and create some better alternatives. The workshop isn&rsquo;t just about pictures though; it&rsquo;s about thinking through what we talk about when we talk about AI. We have run the workshop with BBC teams several times, and earlier in the year we took it to the <a href="https://schedule.mozillafestival.org/plaza">2021 Mozilla Festival</a>. We start our workshops by examining and discussing existing images that represent AI and ML.</p>
<p><br /><img src="/rd/sites/50335ff370b5c262af000004/assets/6116862506d63e1002000475/Untitled.001.jpeg" alt="Analysing helpful and unhelpful images" /></p>
<p>In our sessions there is often disagreement on which images are helpful and unhelpful; it's not clear cut. Some of the diagram-style images might be helpful, but only if you know a bit about the subject, and they're not visually striking or immediately recognisable. Similarly, funny images don't work unless you know enough to get the joke.</p>
<p>The style of the existing images is often influenced by science fiction, and there are many visual clichés of technology, such as 0s and 1s or circuit boards. The colour blue is predominant; although here it seems to be representing technology, blue can also be read as representing maleness.</p>
<p>The frequent representation of brains associates these images with human intelligence, although much AI and ML in use today is far removed from human intelligence. Robots occur frequently, but AI applications very often have nothing to do with robots or embodied systems. The robots are often white, or they&rsquo;re sexualised female representations. We also often see &ldquo;evil&rdquo; robots from popular culture, like the Terminator.</p>
<p>The people we&rsquo;ve workshopped with liked images that illustrate AI finding patterns, order or connections, or images that try to embed AI &amp; ML into the reality and context in which they are used &mdash; like surveillance cameras or voice assistants.</p>
<h2>What do we think about AI?</h2>
<p>From reviewing the research literature and interviewing AI engineers and developers in the BBC, we have identified some common themes which we think are important in describing AI and ML, and that could help when thinking about imagery.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6116864206d63e219a000138/Untitled.002.jpeg" alt="10 characteristics of AI and ML" /></p>
<ul>
<li>AI is all based on <strong>maths</strong>, statistics and probabilities</li>
<li>AI is about finding <strong>patterns</strong> and connections in data</li>
<li>AI works at a very <strong>large scale</strong>, manipulating almost unimaginable amounts of data</li>
<li>AI is often very <strong>complex and opaque</strong> and it&rsquo;s hard to explain how it works. It&rsquo;s even hard for the experts and practitioners to understand exactly what&rsquo;s going on inside these systems</li>
<li>Most AI systems in use today only really know about one thing; it is &ldquo;<strong>narrow</strong>&rdquo; intelligence</li>
<li>AI works quite differently to the human brain, in some ways it is an <strong>alien</strong> non-human intelligence</li>
<li>AI systems are <strong>artificial</strong> and constructed and coded by humans</li>
<li>AI is a <strong>sociotechnical</strong> system; it is combinations of computers and humans, creating, selecting and processing the data</li>
<li>AI is quite <strong>invisible</strong> and often hidden</li>
<li>AI is increasingly common, becoming <strong>pervasive</strong>, and affects almost all of us in so many areas. It can be <strong>powerful</strong> when connected to systems of power and affects individuals, society and the world</li>
</ul>
<p>We would like to see more images that realistically portray the technology and point towards its strengths, weaknesses, context and applications. Maybe they could...</p>
<ul>
<li>Represent a wider range of humans and human cultures than &lsquo;caucasian businessperson&rsquo; or &lsquo;humanoid robot&rsquo;</li>
<li>Represent the human, social and environmental impacts of AI systems</li>
<li>Reflect the realistically messy, complex, repetitive and statistical nature of AI systems</li>
<li>Accurately reflect the capabilities of the technology: generally applied to specific tasks and are not of human-level intelligence</li>
<li>Show realistic applications of AI</li>
<li>Avoid monolithic or unknowable representations of AI systems</li>
<li>Avoid using electronic representations of human brains, or robots</li>
</ul>
<h2>Towards better images</h2>
<p>In creating new stock photos and imagery we need to consider what makes a good stock photo. Why do people use them and how? Is the image representing a particular part of the technology or is it trying to tell a wider story? What emotional response should the viewers have when looking at it? Does it help them understand the technology and is it an accurate representation?</p>
<p>Consider the visual style; a diagram, a cartoon or a photo each brings different attributes and will communicate ideas in different ways. Imagery is often used to draw attention, so it may be important to create something that has impact and is recognisable. A lot of existing stock photos of AI may be misrepresentative and unhelpful, but they are distinctive and impactful, and you know them when you see them.</p>
<p>To conclude our workshops we wanted people to start making their own images, or think about what those images might be. They didn't need to be "good" &mdash; we were more interested in seeing what people focused on and how they wanted to represent it.</p>
<p><img src="/rd/sites/50335ff370b5c262af000004/assets/6116866406d63e219a000139/Untitled.003.jpeg" alt="Workshop ideas for new representations of AI" /></p>
<p>Some of the themes we&rsquo;ve seen develop from these creations include:</p>
<ul>
<li>Putting humans front and centre, and showing AI as a helper, a tool or something to be harnessed.</li>
<li>Showing the human involvement in AI; in coding the systems or creating the training data.</li>
<li>Positively reinforcing what AI can do, rather than showing the negative and dangerous aspects.</li>
<li>Showing the input and outputs and how human knowledge is translated into data.</li>
<li>Making the invisible visible.</li>
<li>AI getting things wrong.</li>
</ul>
<p>Interesting metaphors used include sieves and filters (of data), friendly ghosts, training circus animals, social animals like bees or ants with emergent behaviours, child-like learning, and the past predicting the future.</p>
<p style="text-align: center;"><img src="/rd/sites/50335ff370b5c262af000004/assets/611686a006d63e219a00013a/UnderstandingMLProjectPage.jpg" alt="Representation of AI using digitised pictures of trees" /><br /><em>A new image representing datasets / creating order / digitisation</em></p>
<p>&nbsp;</p>
<hr />
<p>This is just a starting point and there is much more thinking to be done, sketches to be drawn, ideas to be harnessed, definitions agreed on and metaphors minted.</p>
<p>We have already kicked off a project with students at the <a href="https://www.arts.ac.uk/colleges/london-college-of-communication">London College of Communication</a>, giving them this brief, and we&rsquo;re excited about what they&rsquo;ve created and how they thought about the problem. We are now working with <a href="https://weandai.org">We and AI</a> to develop more ideas and spread the word. Ultimately we&rsquo;re hoping to help create a library of better stock photos for AI; we&rsquo;re starting to look for artists to commission and we&rsquo;re looking for collaborators to work with. Please get in touch if you&rsquo;re interested in working with us.</p>
<p><em>This work is part of a wider project that we're working on to try <a href="https://www.bbc.co.uk/rd/projects/making-ai-more-understandable">to make AI and ML more understandable to everyone</a>. See previous posts on <a href="https://www.bbc.co.uk/rd/blog/2021-07-explaining-artificial-intelligence-part-1-why-is-this-important">why we think this is important</a>, <a href="https://www.bbc.co.uk/rd/blog/2021-07-explaining-artificial-intelligence-part-2-what-where-and-how">different ways we can approach explaining</a>, <a href="https://www.bbc.co.uk/rd/blog/2021-05-a-machines-guide-to-birdwatching">prototypes that demonstrate understandable AI</a> and <a href="https://www.youtube.com/watch?v=Le0tV36Euo8">a video about AI aimed at young people</a>.</em></p>
<p><strong>Icon credits</strong><br /><em>Complexity by SBTS from the Noun Project</em><br /><em>Octopus by Atif Arshad from the Noun Project</em><br /><em>pattern by Eliricon from the Noun Project</em><br /><em>watch world by corpus delicti from the Noun Project</em><br /><em>sts by Nithinan Tatah from the Noun Project</em><br /><em>narrowing by andriwidodo from the Noun Project</em><br /><em>Error 404 by Aneeque Ahmed from the Noun Project</em><br /><em>box icon by Fithratul Hafizd from the Noun Project</em><br /><em>Ghost by Pelin Kahraman from the Noun Project</em><br /><em>stack by Alex Fuller from the Noun Project</em><br /><em>Math by Ralf Schmitzer from the Noun Project</em><br /><em>chip by Chintuza from the Noun Project</em></p>
          ]]>
        </content>
      </entry>
    
  
</feed>
