The first of two articles about our first steps in producing truly object-based visual media.
The UX team have been exploring object-based media through a number of research projects over the last few years - including the Derek Tangye responsive radio programme and, more recently, Visual Perceptive Media. These projects have given us insights into how linear media can be pulled apart and partially or completely reassembled in various forms to suit different scenarios and audience requirements. They have all focused on analysing how existing pieces of media, produced traditionally, can be turned into richer experiences for audiences. These were important first steps in understanding what makes an object-based media experience, and allowed us to investigate how data-driven approaches can be applied to a piece of film, radio or television.
We’re now entering a new phase of research: object-based production to experience. Our research to this point has had a hefty focus on the end result: the structure, implementation, and user experience of object-based media. We now need to really consider the practicalities of how content creators and broadcasters can commission, write, produce and distribute content in an efficient and cost-effective way - and, in our humble opinion, offer compelling new experiences along the way.
Our first practical exploration into the visual object-based production world is the Cook-Along Kitchen Experience. We set out to write, produce, shoot and ultimately distribute a completely object-based, interactive, real-time cookery programme. At first glance, a cooking show may seem like a strange choice compared to our previous investigations around more narrative-driven content. In pure data terms, however, something like a cooking show is a perfect format for exploring user-configurable non-linear content. Traditional recipes start as linear sequences, but can of course be changed in any number of ways: ingredients can be swapped, quantities increased or decreased, elements can be cooked in different ways and to different tastes. Recipes can be made of other recipes: a starter, a main, then a dessert. Each of these elements is made up of smaller recipes still: the cake is made of sponge, icing, decoration - any one of which can be swapped out, increased, used on its own, or removed entirely. Chefs and bakers (digital or otherwise) make these numerical/object changes all the time, without thinking about it… recipes are actually informally standardised data formats!
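To make the "recipes are data" idea concrete, here's a minimal sketch of a recipe as a composable, scalable data structure. This is purely illustrative - the class and field names are our own invention, not a format used by the production - but it shows how swaps, scaling and recipes-within-recipes fall out naturally once a recipe is treated as data:

```python
from dataclasses import dataclass, field

@dataclass
class Ingredient:
    name: str
    quantity: float
    unit: str
    swaps: list = field(default_factory=list)  # acceptable substitutions

@dataclass
class Recipe:
    name: str
    serves: int
    ingredients: list
    steps: list
    sub_recipes: list = field(default_factory=list)  # recipes compose

    def scale(self, serves: int) -> "Recipe":
        """Return a copy scaled to a different number of servings."""
        factor = serves / self.serves
        scaled = [Ingredient(i.name, i.quantity * factor, i.unit, i.swaps)
                  for i in self.ingredients]
        return Recipe(self.name, serves, scaled, self.steps,
                      [r.scale(serves) for r in self.sub_recipes])

pots = Recipe("Chocolate & Orange Pots", serves=4,
              ingredients=[Ingredient("dark chocolate", 200, "g"),
                           Ingredient("orange", 1, "whole",
                                      swaps=["orange extract"])],
              steps=["melt", "mix", "chill"])
for_two = pots.scale(2)  # halves every quantity, recursively
```

Scaling for two people halves every quantity, and because `scale` recurses into `sub_recipes`, a whole meal scales in one call - exactly the kind of "numerical/object change" a chef makes without thinking.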
Another consideration is that our object-based production should know about the user. We want it to adapt to the person cooking, and their requirements: the time they have, who they’re cooking for, their skills, equipment, preferences. If the recipe needs three hob rings and the user only has two, how is the content reassembled to reflect this? What if they’re vegetarian? Or in a rush? Or cooking for a dozen people?
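One way to picture this adaptation is as constraint filtering: each preparation variant declares what it needs, and the system keeps only the variants a given user can actually do. The variants, field names and numbers below are hypothetical, a sketch of the idea rather than the production's actual logic:

```python
# Hypothetical preparation variants, each declaring its requirements.
variants = [
    {"name": "pan-fried mullet", "hob_rings": 2, "minutes": 15, "vegetarian": False},
    {"name": "baked mullet",     "hob_rings": 0, "minutes": 25, "vegetarian": False},
    {"name": "grilled halloumi", "hob_rings": 0, "minutes": 12, "vegetarian": True},
]

def fits(variant, user):
    """Does this variant suit the user's equipment, time and diet?"""
    return (variant["hob_rings"] <= user["hob_rings"]
            and variant["minutes"] <= user["minutes_available"]
            and (not user["vegetarian"] or variant["vegetarian"]))

# A user with only one hob ring free: the pan-fried option drops out.
user = {"hob_rings": 1, "minutes_available": 30, "vegetarian": False}
suitable = [v["name"] for v in variants if fits(v, user)]
```

With only one hob ring available, the pan-fried variant is filtered out and the content is reassembled around the remaining options.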
Finally, this experience happens in real time. It’s not like normal telly, and it isn’t a passive watch: it’s an experience that opens up an ongoing collaborative dialogue between the programme and the audience. This is a cook-along show: our system instructs and works with people, at their pace.
Bella's Chocolate & Orange Pots - Original recipe
The first step was of course to create our data starting point - the recipe structure itself. We worked closely with our chef and presenter, Bella Wright, to create a series of delicious complementary recipes: a red mullet main course, a cannellini bean mash, a kale side dish, and a chocolate pot dessert. These recipes were chosen because they allowed us a high degree of flexibility in terms of preparation types and ingredient swaps, and of course they work together as a whole meal or as standalone dishes.
The next phase was to take Bella’s linear suggested recipes and really pull them apart into a data representation. This was surprisingly challenging, and required a deep analysis of exactly what the makeup of a cooking process looks like. The recipe was broken up into logical sub-sections, any one of which could function on its own or as part of a wider recipe - for example, the fish de-scaling and preparation was planned and scripted so it could be used as a guide on its own, or within a larger recipe flow. Timings, and the ways the separate elements in the production depended on, linked to and referred to each other, became extremely important too: our fish dish was prepared in three different ways, which took three different lengths of time, meaning the rest of the recipe objects had to be reconfigured depending on which one was being used.
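The timing reconfiguration described above can be sketched as a small dependency graph: each step declares what it follows, and start times are recomputed whenever one object's duration changes. The step names and durations here are illustrative, not the production's actual schedule:

```python
# Hypothetical recipe steps as a dependency graph: each step lists the
# steps it must wait for, plus its own duration in minutes.
steps = {
    "prep fish": {"minutes": 10, "after": []},
    "cook fish": {"minutes": 12, "after": ["prep fish"]},
    "bean mash": {"minutes": 15, "after": []},
    "plate up":  {"minutes": 3,  "after": ["cook fish", "bean mash"]},
}

def schedule(steps):
    """Earliest start time (in minutes) for each step, from its dependencies."""
    start = {}
    def earliest(name):
        if name not in start:
            start[name] = max((earliest(d) + steps[d]["minutes"]
                               for d in steps[name]["after"]), default=0)
        return start[name]
    for name in steps:
        earliest(name)
    return start

original = schedule(steps)           # plating waits on fish and mash
steps["prep fish"]["minutes"] = 20   # a slower preparation method...
rescheduled = schedule(steps)        # ...pushes everything downstream back
```

Swapping in a slower fish preparation automatically pushes every downstream step back - the same reconfiguration the production had to plan for across its three preparation methods.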
CAKE whiteboard map with possible combinations of recipes
The first pass at this representation took the form of a logical map. This visual representation was then adapted into a more formalised script model - retaining the categorisations, links and organisational conventions, but presented as a traditional script. This was the first real data-based representation of the audio and video we needed to shoot, and was then used directly to plan, organise and carry out the shoot on the day of production. This was laid out in a spreadsheet, not code - it’s better to think of the model as a representation of all of the media elements in the production, and a way of conceptualising them, not as a computer program or similar. The decision to work like this meant we were able to annotate and enhance the script itself as the shoot was carried out: adding notes, changes, points of interest and various other things to the data model as it was being used in the real world. Following the shoot, the exact same data model was used while editing the footage - allowing us to map footage precisely onto the model, establishing a link between footage and data.
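Although the production's model lived in a spreadsheet rather than code, the footage-to-data link it enabled can be sketched as a simple join: each clip records which model entry it realises, giving a two-way mapping between edit and script. All IDs, clip names and fields here are invented for illustration:

```python
# Hypothetical spreadsheet-like rows from the script model.
script_rows = [
    {"id": "fish-descale", "section": "fish prep", "notes": ""},
    {"id": "fish-fillet",  "section": "fish prep", "notes": ""},
    {"id": "pots-melt",    "section": "dessert",   "notes": ""},
]

# Hypothetical footage clips, each tagged with the model entry it covers.
clips = [
    {"clip": "A001_C012", "model_id": "fish-descale"},
    {"clip": "A001_C013", "model_id": "pots-melt"},
]

# Join footage back onto the model: each row gains the clips that realise it.
by_id = {row["id"]: row for row in script_rows}
for clip in clips:
    by_id[clip["model_id"]].setdefault("clips", []).append(clip["clip"])
```

After the join, any model entry can be asked which footage covers it, and any untagged entry (here, the filleting step) is immediately visible as a gap in coverage.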
In the next post, we’ll take a closer look at how the object-based shoot worked in our production studio, and explain where we’re heading next.