BBC R&D

Posted by James Weaver

Digital Video is large. Really large. A professional quality HD studio video camera produces just over a gigabit every second (1 Gb/s) when making the kind of video we currently show on BBC HD (which is called 1080i50). For the higher quality "progressive" type of video this rises to just over 2 Gb/s, and for the new ultra-high-definition, high dynamic range, or high frame rate video which we hope to deliver in the future the amount of data grows larger still. The sheer size of this sort of video data has meant that historically it's been sent around studios using specialised cabling designed for the purpose of carrying such high data rates.
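As a rough sketch of where those figures come from, the sums below assume 10-bit 4:2:2 sampling (typical for studio HD video) and count only the active picture, ignoring blanking and ancillary data, so the results are approximations rather than exact interface rates:

```python
# Rough active-picture data rates, assuming 10-bit 4:2:2 sampling
# (typical for studio HD video; excludes blanking and ancillary data).
def video_rate_gbps(width, height, frames_per_sec, bits=10):
    # 4:2:2 carries one Cb and one Cr sample for every two luma samples,
    # so on average there are 2 samples per pixel.
    samples_per_frame = width * height * 2
    return samples_per_frame * bits * frames_per_sec / 1e9

print(video_rate_gbps(1920, 1080, 25))  # 1080i50 (25 full frames/s): ~1.04 Gb/s
print(video_rate_gbps(1920, 1080, 50))  # 1080p50: ~2.07 Gb/s
print(video_rate_gbps(3840, 2160, 50))  # ultra-HD at 50p: ~8.29 Gb/s
```

The 1080i50 result lands "just over a gigabit", and doubling the frame rate for progressive video gives the "just over 2 Gb/s" figure above.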

However, moving large amounts of data quickly over long distances is technically difficult and expensive. For this reason, pretty much every time we send video to audiences we have to use compression techniques to reduce the amount of data being transmitted.

There is a plethora of video compression techniques out there at the moment: the MPEG-2 standard used for digital television channels on Freeview and for all DVDs; the H.264 format used by Freeview HD, most Blu-ray discs and many online video stores; and other formats like JPEG2000, which is used by digital cinemas. But up until now we've mostly not used compression when sending video around the studio itself. In recent work we've started making use of another compression standard, called VC-2.

The Challenge of Uncompressed Video

In our recent work we're increasingly sending studio video around not on dedicated video tie-lines, but on a general purpose computer network, and this is posing some interesting new challenges.

As networks get faster we can fit more data over the links, but at the same time our video requirements are also always increasing:

  • With old-style 1Gb/s computer networks uncompressed HD video won't fit at all.
  • With reasonably widespread 10Gb/s networks we can send around 8 HD videos, but only 1 Ultra-HD video at a time. And to cut between videos we need more than that.
  • Affordable 40Gb/s networks are now available, and with those we can send around 4 Ultra-HD videos at once over each link, but a High Frame Rate or High Dynamic Range video can still be a problem.
  • Even faster 100Gb/s networks are now just on the horizon (and will likely become widespread fairly soon), but even then it seems likely that the demands of the industry for higher quality pictures will continue to expand.
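The capacity sums in the list above can be sketched as simple division. The stream rates here are the active-picture approximations from earlier (an assumption, and real links also lose some capacity to packet overhead, so in practice slightly fewer streams fit):

```python
# How many uncompressed streams fit on a given network link?
# Stream rates assume 10-bit 4:2:2 active picture only; real-world
# packet and interface overheads reduce these counts somewhat.
RATES_GBPS = {
    "HD 1080i50": 1.04,
    "UHD 50p": 8.29,
    "UHD 300fps HFR": 48.0,
}

for link_gbps in (1, 10, 40, 100):
    fits = {name: int(link_gbps // rate) for name, rate in RATES_GBPS.items()}
    print(f"{link_gbps:>3} Gb/s link:", fits)
```

This reproduces the shape of the list above: nothing fits on 1 Gb/s, only one ultra-HD stream fits on 10 Gb/s, around four fit on 40 Gb/s, and a 48 Gb/s high frame rate stream won't fit on anything below 100 Gb/s.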

Video compression can help us bridge the gap between the ever expanding size of new video formats and the increasing speed of networks.

The Challenge of Compressed Video

In the past our experiments have made use of a video compression standard called AVCI-100, which is a variant of the same H.264 standard that's used for HD television, internet downloads, and most Blu-ray discs. It's a very good standard in general, but it isn't necessarily well suited to the purpose we were putting it to here. In particular, H.264 requires some very complex mathematical analysis of the pictures in order to compress them effectively. Doing that analysis takes time and computing power: a computer encoding an ultra-HD video with an H.264 encoder has to be very powerful, and the video will have picked up a noticeable delay by the time it reaches the other end (having been through both an encoder and a decoder).

This delay is called the "latency" of the system, and for a lot of video compression formats it can be much higher than we're used to in video studios. This can make things difficult: it's hard to cut video accurately if your feeds are all delayed compared to what's actually going on in front of the camera.

The VC-2 Codec

The codec (short for "coder and decoder", the term we use for a video compression format) we're currently using for this purpose is called VC-2, which stands for "Video Codec 2", the name given to it by the Society of Motion Picture & Television Engineers (SMPTE), who publish the document that defines how it works. It's actually a variant of an older codec designed by the BBC some years ago called Dirac (named after the physicist Paul Dirac), but where Dirac was a relatively complex codec similar in scope to H.264, VC-2 is *much* simpler and has a much lower latency.

Now, in engineering there's always a trade-off, and in this case getting a simpler codec (which uses less computing power) with a lower latency also means we don't get as much reduction in size. We call the ratio between the original data rate of the uncompressed video and that of the compressed video the "compression ratio" of the system; where AVCI-100 gives a ratio of around 10:1 with relatively good visual quality, VC-2 needs a ratio of around 4:1 to achieve the same level of quality.

Still, for the purposes we need here that trade-off is very much worth it. A compression ratio of 4:1 can turn a relatively unmanageable 8Gb/s ultra-HD video into a much more manageable 2Gb/s stream allowing us to potentially send up to four of them down the same 10Gb/s network link with minimal problems. Looking to the future it could turn a colossal 48Gb/s ultra-HD 300fps High Frame-Rate video into a 12Gb/s stream, making its use on future 100Gb/s networks far more reasonable.
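The arithmetic in the last two paragraphs is just division by the compression ratio; a quick sketch, using the approximate uncompressed rates quoted above:

```python
def compressed_rate(uncompressed_gbps, ratio):
    """Data rate after compressing at the given ratio (e.g. 4 for 4:1)."""
    return uncompressed_gbps / ratio

print(compressed_rate(8.0, 4))   # ultra-HD: 8 Gb/s -> 2.0 Gb/s, so four fit on 10 Gb/s
print(compressed_rate(48.0, 4))  # 300fps HFR ultra-HD: 48 Gb/s -> 12.0 Gb/s
```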

In fact the advantages in simplicity have meant that the same computers which ran at full capacity to compress our ultra-HD video for our Commonwealth Games demos last year using H.264 are now capable of compressing the same type of video using VC-2 running at around half their full capacity! And the decode side of our Ultra-HD workflow has been able to move to a much less powerful computer and run smoothly without frame drops.

Low Latency, Low Complexity, High Quality, and an Open Standard

There's a lot of talk in the industry at the moment about which standard will be best for what we call "Mezzanine Compression": the sort of compression we're discussing here, used within the studio and production environment with low compression ratios and low latencies. We think VC-2 is a good choice for this because of its extreme simplicity, very low latency, and the fact that it is an openly published standard with no patents covering its core technologies.

Earlier this month we published a new internet draft intended to become a standard way of packing VC-2 video into packets so that it can be sent across a network (this is the approach we've been using internally for our tests), and we've released reference encode and decode software which anyone may use to test out VC-2 implementations. We expect this will help with a wider adoption of this standard in the industry, and anticipate the announcement and demonstration of several VC-2 related products by equipment manufacturers in the next year.

We also have our own optimised software encoder/decoder pair which is available for licensing through our tech transfer team. For further information about this or anything else in this blog post please contact the ipstudio team.