Monday, 21 November 2016

Televising the IP revolution

Broadcast
I produced the editorial and chaired this conference. http://www.broadcastnow.co.uk/features/televising-the-ip-revolution/5111381.article
A panel of experts at the Broadcast TECH IP Summit discussed the technology’s potential as a creative tool, through advancements such as object-based broadcasting and remote production.
Object-based broadcasting, where elements like music, commentary and camera feeds can be separated and chosen by galleries and consumers alike, is revolutionising how people can access content. Beyond the technical challenges, it has the potential to create new genres and formats, but poses regulatory and storage issues. A panel at the IP Summit organised by Broadcast’s sister title Broadcast TECH grappled with how IP is evolving as a creative tool.
JON PAGE, Head of operations, BBC R&D
Like any good 19th century scientist, we started off in IP by experimenting on ourselves.
We’ve been running our all-staff meetings across multiple sites as a live event, using our IP infrastructure. In 2014, we ran parallel coverage in UHD over IP, camera to screen.
At the moment, we’re focusing on this challenge: if Glastonbury is a massive moment for the BBC, what if we wanted to do 50 Glastonburys, without increasing the licence fee?
We’ve been evolving that with the Scottish music festival T in the Park, and with the Edinburgh Festival, to work with logging of single cameras, plugging those into the internet and then creating a very simple web frontend editor that enables us to produce content very simply. What we’re finding from the production communities and from different parts of the BBC is that people want access to this.
The ‘object’ bit, though, comes when you think about content not as strings of half-hour programmes, but as a series of objects where a half-hour programme might be one of many ways of arranging them. Without having to spend more money doing recuts or anything special, you can present it in many different ways.
We’re continuously asking more questions about what we might do with objects.
Global news has holes in its schedules every so often – 30 seconds, 45, maybe a minute and a half. By taking news objects and bracketing edit decisions, you can then put that through an algorithm that automatically generates a curated version.
This is about thinking about objects on a higher level – a scene, or series of scenes, that follow each other. It’s part of the way in which a story is told.
You can then extract that and tell it in a different way that allows people to catch up on content.
We used it on Peaky Blinders as a way to bring audiences up to speed with a condensed catch-up of the action before launching series three.
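To make the idea concrete, here is a toy sketch (in JavaScript) of the kind of selection an algorithm like the one Page describes might perform when filling a schedule hole. The object shapes, field names and greedy strategy are all invented for illustration, not the BBC's actual model:

```javascript
// Hypothetical news objects with bracketed edit decisions (durations,
// priorities). All field names here are illustrative.
const objects = [
  { id: "n1", story: "election", duration: 22, priority: 3 },
  { id: "n2", story: "election", duration: 18, priority: 1 },
  { id: "n3", story: "weather",  duration: 12, priority: 2 },
];

// Greedily fill a schedule hole (in seconds) with the highest-priority
// objects that still fit within the remaining time.
function fillGap(objects, gapSeconds) {
  const playlist = [];
  let remaining = gapSeconds;
  for (const obj of [...objects].sort((a, b) => a.priority - b.priority)) {
    if (obj.duration <= remaining) {
      playlist.push(obj);
      remaining -= obj.duration;
    }
  }
  return playlist;
}

console.log(fillGap(objects, 45)); // a 45-second hole -> [n2, n3], 30s filled
```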
In a cookery show, you could label what you are filming so you understand what the objects are. When placed in a database, this enables the show to be played out in such a way that it presents the content as it is being cooked – it tells the viewer what they are cooking, what ingredients they have chosen, how many people they have to feed, how long it takes to chop the onion.
It’s a teacher in the kitchen that comes out of having made the show. If you do your planning at the outset, you can get three or four products out of the same production.
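A rough illustration of what such labelling might look like as data, and how playout could re-time the same objects for the viewer's context. Every name here is invented for the example:

```javascript
// Hypothetical cookery-show objects, labelled at the point of filming.
const recipeObjects = [
  { id: "c1", step: "chop-onion", ingredient: "onion", baseSeconds: 90,  serves: 2 },
  { id: "c2", step: "fry-base",   ingredient: "onion", baseSeconds: 240, serves: 2 },
];

// At playout, scale rough prep times to the number of people being fed
// (a deliberately naive rule, purely to show the principle).
function planFor(objects, people) {
  return objects.map(o => ({
    ...o,
    estimatedSeconds: Math.round(o.baseSeconds * (people / o.serves)),
  }));
}

console.log(planFor(recipeObjects, 6)); // chop-onion: 270s, fry-base: 720s
```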
What’s interesting is to speak to the production community. When we put it into producers’ hands, they throw back more challenges to optimise that technology.
CASSIAN HARRISON, Controller, BBC4
One of the things I’ve noticed in my career is how at particular moments, there have been pivot points in technology that have radically changed the mode in which we can make content and renewed and refreshed TV itself.
Small DV cameras, the VX1000 in particular, transformed the documentary and we ended up with what became the docusoap. We could create really intense, long-running series that would have been unaffordable before.
The advent of hard-disk storage allowed us to store a huge amount of data and access it in a random access manner, which gave us the fixed rig, while the heli-gimbal transformed natural history filmmaking with Planet Earth.
I think we are reaching a tipping point around live TV, which has always been an incredibly premium product, and a central element of what TV is.
Linear TV, in particular, is having something of an existential crisis, but what keeps on coming through is the value of the special shared moment.
You think about sport and entertainment, but what’s interesting over the past 10 years is how the BBC has really looked at how live can be used in science (with Stargazing Live), in natural history (with Springwatch), and even in very straightforward documentary contexts like Airport Live.
Before, we needed to hire a very expensive studio, pre-wired with a gallery, and if we wanted to step outside, we would need to hire an incredibly large OB truck and take the tie line to whichever natural park Autumnwatch was filming in.
Constraints on technology meant that there were real limitations on what we could offer.
What really floats my boat is that now the entire nation is fixed with camera tie lines. You have them going into every single home and over the air. In the context of factual filmmaking, that becomes extraordinary, because you can suddenly go anywhere and receive a live image and begin to make TV out of it – BBC News can take mobile feeds into news production.
But we haven’t had the tools to begin to craft a switch-edited output, in the manner that you can in a studio gallery or an OB truck, that we can then put live to air.
That’s the final piece of technology that will enable us to produce creative output out of this network of camera tie lines. My mind begins to spin at the idea of a virtual gallery.
We can plug in cameras anywhere in the UK and they pop up on a network that can be seen at a central location in the same way that the switch room in the basement of New Broadcasting House works.
The guilty secret is that a lot of what TV is about, and the stories that people like to watch, doesn’t change that much. People return to the ob docs, glimpses into people’s lives, the stories of crime and justice. But we now have the ability to go to those precincts and tell those stories in a different way.
This is more of a BBC4 brief of arts, culture and creativity than for BBC2 – although I am thinking about how the heck I do multi-point Police Live.
Is there a way that we can co-opt Britain’s massive craft communities, making stuff every day all across the country, celebrate it and make it into a TV event? That would have been impossible three or four years ago, but it’s a first step into a completely new way of approaching ob docs.
MARTYN SUKER, Consultant; former head of production, ITV Studios
When you talk to creatives about IP, you get a pretty puzzled reaction.
To them, it’s all about rights – brand or format ownership. It raises the question: do directors and producers need to be aware of the plumbing, so long as it’s reliable and flexible?
They should be aware of its capabilities.
How do you write the stories if you don’t know what the opportunities might be?
Creatives are rightly nervous about anything to do with IT and we can’t introduce anything too early.
Any savings hardly ever find their way back to the production team battling with falling budgets – they go back to the broadcaster. Operating expenses? Personally, when I’m on location, I don’t want to share resources with another five clients; I want to know somebody is there supporting me, and only me – and that comes at a price.
Shifts in production methodology only work for certain formats – live, multi-camera sport, entertainment and reality events – and there will be a huge expansion there. Drama? Probably not so much.
It will take time for this to settle in and to get through to the creatives what it means for storytelling. Optional extras, like additional feeds and the information that comes with it, take time and effort to create somewhere within the programme-making process, and that costs money.
If our tariffs are falling, where is the money going to come from to give creatives the freedom to do this?
Remote and centralised production processes enable flexibility but will alter how we do things, not what we do. And if we can overcome this, is this the technology that might actually change our view of the linear scheduled broadcast?

ROB FRANCE, Senior product marketing manager, Dolby
The two words we hear most about object-based video, ‘efficiency’ and ‘flexibility’, are just as applicable in audio.
Object-based audio can deliver a more flexible experience, but it’s IP that ultimately needs to give us the backbone and flexibility on that level.
The first use of objects is to say ‘I have a sound, and I want to produce it differently depending on the replay environment’.
Our technology Dolby Atmos brings 3D sound, which we use in the cinema – and as more people watch content on tablets and mobiles with headphones, they expect better audio than just a mix of left and right channels.
People want more choices – if your team scores, you want to hear what a great goal it was; to the rival team’s fans, it can be the greatest goal in the world, but you don’t want to hear that.
That’s why every major Premier League team has its own commentary streamed over the web for fanclub members. We could distribute those over the IPTV channels without changing the ‘object’, or the content.
For sports that someone is unfamiliar with, it might be very useful to have commentary that describes and explains everything, but if you know more about the sport, you might not want the same level of detail.
And there are certain elements of a story you don’t always want to cover. If you’ve got 20 minutes to watch a match, you watch the highlights.
Or in scripted shows, you might want a version for adults and a version they can watch with their children that has content cut out or some words substituted for more appropriate ones.

ANDY BEALE, Chief engineer, BT Sport
Most sports feeds are heavily locked down because rights holders are paranoid about their content.
They want to make sure the feed we produce in our trucks looks the same – not just in the UK, but internationally.
Consistency is very important, and the consequence of that is that all sports feeds end up being boiled down to an extremely predictable and reliable – and, hopefully, consistent – experience.
Personalisation of audio is interesting – fans love their commentary. Fans will choose to listen to their team’s commentary while watching a feed from us or Sky. It’s a disruptive experience as there’s no way it will be in sync with the pictures. There’s a missed opportunity there.
It goes beyond the creative and technical challenges, many of which will be solved. Ofcom needs to do some real thinking too – how do I prove to them I’ve delivered a broadcast-compliant programme to every single viewer if everyone is watching slightly different content?
What about how this content is recorded and archived? Do I put in a video file with some data and graphics, and what does my media asset management system look like?
How can I access it quickly when someone asks for a replay? How does playout look when I have multiple assets all flying around in parallel?
Every single sport in the world ends up having English graphics, but the two biggest brands in motor racing, Honda and Suzuki, are Japanese. That market is watching with English-language graphics and listening to English-language commentary.
Object-based broadcasting definitely opens up an opportunity to deliver personalised experiences to those markets.
We need to get the frameworks in place so the rights holders can hand to us broadcasters elements that we can’t break too far out of, but that allow us to offer the right amount of personalisation, and we in turn hand down the combined framework to give a consumer enough freedom.
If we fix that problem, we really have got an exciting opportunity.

Metadata - Knowledge is power


InBroadcast

Metadata is what makes media an asset, and with software-defined workflows and exponential increases in content output, maximising it is critical.


The importance of metadata cannot be stressed too highly: in a software-oriented media world it is the absolute core of the technology, the glue that makes everything work. Software-defined workflows need to know all about the content in order to automate processes and make the content easier to find.

“Every single media enterprise will move to software-defined workflows so will come to depend upon metadata,” says Paul Wilkins, director - solutions & marketing, TMD. “What becomes critically important then, is that the metadata management layer – which is where the workflow orchestration resides in a well-designed system – is infinitely flexible to meet the unique needs of each media enterprise.”

Metadata is what makes media an asset and drives searching, organisation, and content identification. With more outputs and an increased need to streamline production, metadata is enabling more automated production within the broadcaster.

According to Ximena Araneda, evp media workflows and playout, Vizrt, it is the key that decides what an orchestration layer should do with an asset: “Without metadata, the whole process of automating production and ultimately collaborating falls apart.” 

Also important is metadata exchange. The metadata does not stop in a facility, but travels via initiatives like the Advanced Media Workflow Association’s AS-11 to other broadcasters. “The benefit compared to the cost of metadata will only continue to be in favour of more metadata, as it is shared more effectively, and the basis for more automated production,” says Araneda.

Search and media discovery is the bread and butter of many metadata workflows. Beyond this, the workflow becomes automated production, typically coupled with automatic or user-based task management.

A common scenario using Vizrt is for handling media ingest, quality control, multi-platform distribution and archiving of the media. As media enters the system, it will be matched against a pre-created asset placeholder, and that placeholder is connected to a user-operated ingest task. The operator can then verify that the media is associated with the correct asset and take action on that task. That will reflect a change in administrative metadata, triggering another task-based workflow, for example, quality control, where the media will automatically be sent off to an external party for verification. The result of the quality control will then update additional administrative metadata, where another operator can verify the results and take action on that task, driving the workflow further.

“This type of workflow brings out better control of the lifecycle of the media, allowing for catching mistakes and errors and then acting upon these in a more controlled environment,” explains Araneda.
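The pattern Araneda outlines can be sketched as a small state machine in which each change to administrative metadata triggers the next task. This is an illustrative sketch only; the states, field names and handlers are hypothetical, not Vizrt's actual API:

```javascript
// Hypothetical task handlers for each workflow stage.
function createTask(name, asset) { console.log(`task '${name}' queued for ${asset.id}`); }
function sendForQC(asset)        { console.log(`${asset.id} sent to external QC`); }
function distribute(asset)       { console.log(`${asset.id} distributed and archived`); }

// Each state of the administrative metadata maps to the next action.
const nextStep = {
  INGESTED:    asset => createTask("verify-ingest", asset),
  VERIFIED:    asset => sendForQC(asset),
  QC_COMPLETE: asset => createTask("review-qc-report", asset),
  APPROVED:    asset => distribute(asset),
};

// A change in administrative metadata drives the workflow onwards.
function onMetadataChange(asset) {
  const action = nextStep[asset.adminMetadata.state];
  if (action) action(asset);
}

onMetadataChange({ id: "A123", adminMetadata: { state: "INGESTED" } });
```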

MAM system Viz One uses descriptive, administrative and technical metadata. This type of metadata provides functionality such as resource discovery, interoperability and media lifecycle management. Metadata may also be present in the form of annotations enabling more powerful search, enhanced content description and automatic clip creation. The system also relies heavily on central metadata taxonomies such as dictionaries, thesauri, and directories. 

Whereas production departments typically focus only on the next playout date, archival description of content has to serve history, cultural identity, and cultural heritage.

That’s where NOA’s mediARC Workflows come into play. This product is designed to link production procedures to specific media or metadata results. Examples include delivering assets from an archive, validating metadata edits, or an extended QC process.

“Metadata is always changing because content is always changing,” states the Vienna-based firm. “Tomorrow’s structure of metadata description could differ significantly from today’s. Some say a non-structured, full-text database is the answer, but a strict hierarchical relationship between entities is not sufficient to describe the content of an archive. mediARC overcomes that problem by organising audio, video, and other content in media object containers. You can link media objects to multiple metadata entries regardless of whether they are complete or in segments. That means you can create logical cross-correlations even with different qualifiers describing the role of the connection of the media to its context.”
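The container idea can be illustrated with a toy model: one carrier linked to several metadata entries, each link qualified by the role it plays. The structure below is hypothetical, not mediARC's schema:

```javascript
// One media object (a carrier) described by several metadata entries.
const metadataEntries = [
  { id: "m1", title: "Interview with composer", complete: true },
  { id: "m2", title: "Symphony No. 2, mvt 1",   complete: false }, // segment only
];

// Each link carries a qualifier describing the role of the connection.
const links = [
  { media: "tape-0042-side-a", entry: "m1", role: "full-recording" },
  { media: "tape-0042-side-a", entry: "m2", role: "excerpt" },
];

// Cross-correlate: find every description attached to one carrier.
const forMedia = id =>
  links.filter(l => l.media === id)
       .map(l => metadataEntries.find(e => e.id === l.entry));

console.log(forMedia("tape-0042-side-a").map(e => e.title));
```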

Recently NOA introduced a VideoPlayer for jobDB, the company’s system that allows users to set up workflows for the ingest, reshaping, and analysis of media for archiving or re-transcoding. 

Metadata-driven workflows are core to Primestream’s Dynamic Media Management system. Some metadata is automatically created or assigned when a clip or file is originally created or ingested into its MAM system, while other workflows call for additional information to be added or edited later in the process. To streamline this, Primestream built specific metadata tools. For example, Primestream’s Xchange Suite integrates with the Associated Press’ metadata service to add accurate, comprehensive data to assets. For real-time logging situations like sports, Primestream’s FORK Logger turns on real-time logging along with integration with STATS for a live metadata stream into the FORK Logger module.

“If you want to find all of the goals or scores in a game for a highlight package then you are dependent on metadata to find that clip and put it on air,” explains COO David Schleifer. “If ‘goals’ are logged live, an editor can have a Smart Bin where all the goals show up automatically. Primestream integration with all the leading NLE systems lets metadata and markers move between them and FORK and Xchange. Primestream also enables commenting, approval workflows and much more.”
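A Smart Bin of this kind behaves like a saved query over the live log. A minimal sketch, with invented field names rather than Primestream's schema:

```javascript
// Hypothetical live-log entries created as a match is logged.
const liveLog = [
  { clipId: "cam1-0912", tag: "goal", team: "home", tc: "00:12:03:11" },
  { clipId: "cam1-1544", tag: "foul", team: "away", tc: "00:27:45:02" },
  { clipId: "cam2-1710", tag: "goal", team: "away", tc: "00:31:10:19" },
];

// A 'smart bin' is just a stored filter re-applied as the log grows.
const smartBin = tag => log => log.filter(entry => entry.tag === tag);

const goalsBin = smartBin("goal");
console.log(goalsBin(liveLog)); // both goals, ready for a highlights edit
```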

Marquis Broadcast’s focus is on integrating incompatible systems from a workflow metadata or codec standpoint. All its solutions depend on metadata. “We often have to re-map metadata, re-wrap files and transcode content ‘on the fly’ so it can move seamlessly through workflows,” says Paul Glasgow, sales & marketing director. “Our special ‘metadata sauce’ is being able to do this between fundamentally incompatible systems.”

The company maintains a large legacy and contemporary library of broadcast and media systems from which it extracts and returns the richest metadata sets possible. While rushes from cameras are very simple to manage, the heavy ‘metadata lifting’ comes from dealing with metadata associated with ‘in-production’ assets. As an example, Marquis manages metadata from complex timeline sequences and sequence translations, and also manages genealogy and tracking metadata.

“We may also extract metadata from media files and aggregate and re-wrap these with other metadata sources, such as from PAM, MAM and users, e.g. for AS-11 compliance,” says Glasgow.

“For example, an Avid Media Composer editor within an Interplay environment may wish to send an asset to another production system: a playout server, automation system, DAM, archive etc. From a user perspective it’s a simple ‘right click > send to > [an incompatible system]’. We deal with all the complexity. In a large enterprise we may be servicing hundreds of concurrent processes interoperating between many different systems. To do this we maintain the largest independent library of legacy and contemporary third-party integrations. This means customers can deploy the latest and best-of-breed systems, confident that they can also bi-directionally integrate with their legacy infrastructure and systems.”

Telestream’s Vantage products can process metadata in sidecar XML files, or embedded in MXF files. It can also process what it calls ‘work orders’ – metadata contained in CSV files. More than that, Vantage can create metadata by automatically analysing files, and it can convert metadata between different stylesheets so that the metadata can be used by different systems.

“In a very simple example, but one that is common in news ingest, Vantage can examine metadata on files that come into a news organisation, examine this to decide the resolution, aspect ratio and even orientation of the shot, and then apply the necessary processing rules to get this story into the news production system as quickly as possible,” explains Paul Turner, vp of enterprise product management. “This allows broadcast news companies to benefit from footage that comes from the massive number of video cameras in phones that are now first on the scene at almost every incident.”
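The kind of rule Turner describes can be sketched as a simple decision function over a file's technical metadata. The field names and thresholds below are illustrative assumptions, not Vantage's configuration:

```javascript
// Inspect an incoming file's technical metadata and choose processing
// rules before it enters the news production system (hypothetical fields).
function chooseProcessing(meta) {
  const portrait = meta.height > meta.width;
  return {
    rotate: meta.orientation === 90 || meta.orientation === 270,
    pillarbox: portrait,        // vertical phone footage needs side bars
    upscale: meta.height < 720, // below HD, needs upconversion
    target: "news-production",
  };
}

console.log(chooseProcessing({ width: 1080, height: 1920, orientation: 0 }));
// -> { rotate: false, pillarbox: true, upscale: false, target: "news-production" }
```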

The two main types of metadata in EditShare’s Flow production asset management system are asset-level metadata, which refers to an entire media file (e.g. master clip metadata), and log-level/subclip metadata, which refers to a particular section of a clip (i.e. a subclip).

Metadata fields can be entered by text, pick lists, timecodes, dates, booleans (true/false), as well as some special purpose log list metadata fields called groups and categories, which are used in the Flow Logger application to perform point-and-click logging in reality or sports productions of contestants, players or activities. Extensive metadata can also originate from tapeless ingest or other third party systems and is preserved throughout Flow workflows.
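The two levels might be expressed as schemas along these lines. This is an illustrative sketch, not EditShare's actual data model:

```javascript
// Asset-level fields describe the whole media file.
const assetLevel = {
  clipName:  { type: "text" },
  shootDate: { type: "date" },
  approved:  { type: "boolean" },
  camera:    { type: "picklist", options: ["A-cam", "B-cam", "drone"] },
};

// Log-level fields describe a section of the clip, with group/category
// fields supporting point-and-click logging of players or activities.
const logLevel = {
  inTimecode:  { type: "timecode" },
  outTimecode: { type: "timecode" },
  player:      { type: "group",    options: ["#7", "#9", "#10"] },
  activity:    { type: "category", options: ["goal", "save", "foul"] },
};

console.log(Object.keys(assetLevel), Object.keys(logLevel));
```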

“Increasingly, customers will see that good editorial metadata doesn’t just magically create itself – that the data you find is only as valuable as the time you put into entering it on the front end,” says Jeff Herzog, product manager for Flow asset management and video products. “An investment in metadata input pays dividends in the editorial process as well as to future monetisation possibilities for content.”

The ability to reuse and repurpose content has become critical in order to control the burgeoning costs of programme creation. Organisations need to understand what content they own or have the rights to use, and where the content is located.

“They also need to manage where the content is stored: across fast edit storage, slower commodity disk, tape, optical, or cloud services. Metadata lets teams take control of all of these issues and more,” says Dave Clack, CEO, Square Box Systems. “Given all of this, companies are finding that their old methods of managing content are failing.”

This includes inconsistencies in file and folder naming, a reliance on ‘hero’ individuals who are the go-tos for performing video search and retrieval, and manual methods such as spreadsheets and documents. These methods simply can’t keep pace with the exploding amount of content being created, in addition to the huge physical file sizes that result from emerging technologies such as 4K/8K, HDR, HFR, and VR.

Clack explains that CatDV is extremely effective for capturing and logging all types of content and resource metadata. This ranges from asset-based, technical metadata for items such as cameras, exposures, video and audio formats, and containers to custom metadata that reflects the needs of an organisation.

“CatDV captures a huge variety of data types including text, numbers, lists, multiple selections, and predictive text entry – anything to make logging simple, quick, and useful,” he says. “It also captures time-based and temporal metadata. Examples include markers for the best moments in the content, such as the goals in a soccer match, or markers for QC or bad language, as well as integration points for automating image detection and speech transcripts.”

TMD prefers to divide metadata into two broad categories: technical metadata, which aids automation; and descriptive, which aids discovery.

“One of the key issues we’ve found, particularly during data take-on from legacy systems, is that there is sometimes confusion between the two,” reports Wilkins. “To quote an obviously disastrous example, one major broadcaster had made the number of audio channels a descriptive text field not a prescribed technical one. That resulted in hundreds of different descriptions for a stereo soundtrack, a nightmare to translate to a modern, rigorous database structure.”

Content creation and preparation with multi-versioning and IMF packaging is among the most discussed workflows currently. For this type of workflow, the Dalet AmberFin media processing platform, together with the Dalet Workflow Engine, allows sophisticated technical decisions to be controlled by business rules that enter the system as simple XML files, watch folder triggers, or API calls.

“Multi-platform delivery has caused the industry to move away from the simplicity of ‘one profile per customer,’” explains Kevin Savina, director of product strategy at Dalet. “But with the help of a powerful workflow engine, we can create an environment where a single workflow can produce all the desired outputs just by changing the metadata that initiates a particular process.”

SMPTE’s standardised mastering format IMF (the Interoperable Master Format) has a wealth of facilities for identification, auditing and tracking of media, titles and metadata. Although based on human-readable XML, it is fundamentally optimised for machine processing.

“With the huge variation of starting points for IMF creation, it is imperative that a workflow engine is versatile and able to use the right tools at the right time to form a valid and verifiable IMF bundle at reasonable speed and for reasonable cost,” says Savina. “Dalet Workflow Engine has been optimized for these kinds of workflows where the number of input files is not known until the job starts, and the workflow proceeds without losing or changing vital information. The icing on the cake is the ability to see the performance of the jobs in a data analytics engine that is able to spot trends in operation so that continual optimization of the tools can be performed.”


Metadata evolution

There are already plenty of examples of technical metadata being used to seamlessly link content from creation to consumption, setting the optimum format at each stage of the process. The next stage will be an extension of descriptive metadata. More information will be gathered and created, some of it perhaps automatically.

“Intelligent systems might ‘listen’ to the script and ‘view’ the content to build a comprehensive description and audience rating of a programme,” suggests Wilkins. “That rich metadata could then be exposed to consumers to help them identify their sort of content. Ultimately this, too, could be automated. Intelligent set-top boxes and online clients could build a profile of the user, learning what sort of content they like.”

“Perhaps in the near future, Artificial Intelligence systems will take advantage of metadata, using it along with complex search algorithms to find new relationships between media and audience interest,” suggests Schleifer. “Technologies such as speech-to-text and voice-search systems like Siri, Cortana and Alexa are all maturing into powerful tools that will improve how we search and find what we are looking for. Properly catalogued media gives the broadcaster more opportunities to leverage their libraries to the public.”

Vizrt’s Araneda confirms that metadata will increasingly be populated with a higher degree of automation, enabled by new technologies such as image analysis for face recognition, speech-to-text, and object recognition.

“The end result will be a higher degree of metadata and more automated rules and tools that use that data,” she says. 

“Finding footage of a particular person, or past comments on a particular topic, will be just a few clicks away for the producer. We also believe that with the social media explosion, keeping track of use data will be more and more important in the planning stage of producing a story, in order to take decisions on how the story creation should be done, and on what platform to target similar content.”

In conclusion, the success of online video relies, in part, on metadata. Publishers need to manage the creation of metadata as part of the production of the video. Video is a complex medium that requires both automatically created metadata and human-authored metadata, which is more flexible and accurate, thus providing a fuller experience to the consumer.

“Higher quality metadata leads to a more engaged consumer, which means monetized video assets,” emphasises Dalet’s Savina. “Publishers need to incorporate media asset management systems that allow authoring and managing asset metadata in their production workflow in order to build engaged audiences and maximize the value of their internet advertising.”


Thursday, 17 November 2016

Sky Launches Into VR


Streaming Media 

Sky is committed to seeing VR succeed where 3D failed, and has established the 10-person Sky VR Studio to spearhead innovation and content creation.


Broadcaster Sky is powering ahead with VR content production in much the same way it spearheaded the ultimately failed drive to 3D TV.
Sky VR Studio is producing more than 20 individual films, across a range of Sky content—from major cultural events in news to sporting events—for distribution over a free-to-download Sky VR app.
The 10-person team is led by executive producer Neil Graham and creative director Richard Nockles.
Sky original VR content is being distributed in the app launched for Android, iOS, and Oculus Rift, alongside videos from Disney, Fox, and Warner Bros. The originals—which include Giselle, a doc with English National Ballet's principal Tamara Rojo—must be 3-4 minutes long and offer a unique perspective that would not be possible from a traditional linear broadcaster.
Nockles cites as an example working with Formula 1 commentator and pundit Ted Kravitz for a VR tour of the pit lane in Barcelona ahead of the 2016 Formula 1 season.
"We trained him in the morning and sent him out with a 360° cam on a gyro-stabilised carbon fibre pole. The nature of 360° means you lose the pole and you just see him walking which is an incredibly intimate experience of a sports venue especially one that gets into areas where visitors can't go like the mechanic's garages and pit lane."
The 3-minute documentary "Ted's Notebook" received 1.6 million hits in 24 hours on Facebook.
Nockles is also workshopping several different scenarios for scripted content, such as confrontations or emotional scenes, to set out the best production techniques to use with the format.
"We want to develop a language for VR to help producers raise their game and help the industry get to the stage we all want."
Nockles' company Surround Vision was one of the first to begin working with Sky and he has since been seconded to the broadcaster to shape its VR output. He is also on the BAFTA VR Associate Advisory Group.
"We've started experimenting with VR six years ago using Ladybug 360° cameras from Point Grey Research which were a real pain to use, unreliable and bulky and tethered to the recording device but also amazing given the grounding this gave us in working out some of the issues shooting VR at such an early stage," he explains.
He says Sky VR documentaries are storyboarded but also rely on good fortune—as you would for any documentary. One such piece of luck was having the single VR camera in the right corner for Anthony Joshua's ultimate punch in his title fight last April, which stopped opponent Charles Martin "a metre away from our camera."
The immersive allure of VR means directors are obsessed with the proximity of the cameras, and how close they can get to the action so they can create that feeling of intimacy.
"As content producers, we need to recalibrate our brains for VR, add more of a theatrical style of production," he says. "We're effectively becoming theatre producers."
When it comes to live, with which Sky is also experimenting, there are some key issues to address. One of them is avoiding motion sickness for viewers watching fast action, such as Formula 1 cars or cyclists.
While NBA Digital in the U.S. has begun streaming one live basketball game a week in VR via the NBA League Pass package (available on the NextVR app and viewed on Samsung Gear VR headsets), Sky and its pay-TV rivals have yet to follow suit.
This is partly because soccer, the most popular sports property they own rights to, doesn't translate well to VR. The distance of the rig from the action on the pitch is the main impediment.
"The intimacy is lost but your motivation for viewing is different," he says. "Most football fans go to watch a game of chess happening in front of them. When I go see a game I love watching the movement across the entire pitch on and off the ball. A lot of the time you can't see that the way soccer is conventionally covered but with VR you can."
To counter this, Sky is experimenting with a live solution where viewers are able to play with content by "pushing in" to a feed and switching from 360 to traditional feeds. "This becomes more of a playful experience," says Nockles. "It's all about testing to see how audiences like to engage."
Sky's strategy is about promoting and building awareness of the technology, and testing the water with different genres and styles.
"Sky has always been a leading player forefront of new tech and innovation – not all of which have been successes," says Michael Boreham, analyst, Futuresource. "VR is no different in that respect. Given the potential VR presents as an immersive medium for content and as viewers begin to test and possibly opt for different viewing screens and methods, Sky needs to ensure it's able to engage on them."
The next few years will see a lot of experimentation in production techniques and content creation by the majors and independent content companies alike in order to develop compelling content for VR. However, says Boreham, Hollywood studios are unlikely to release blockbuster titles in VR as this would not be suitable for the theatrical sector and they would not want to jeopardise their theatrical revenues. "A separate non-VR version would be required, effectively needing two productions to be shot, resulting in increased production costs. Short-term it is expected that VR activity by the Hollywood studios will be limited to companion pieces to the main feature, with content being made available via Electronic Sell Through or transactional VoD."

VR's Two-Tiered Takeoff

Streaming Media 

Despite a year of content and production experimentation by studios and broadcasters, poor quality experiences could yet impede VR take-off

http://www.streamingmediaglobal.com/Articles/Editorial/Featured-Articles/VRs-Two-Tiered-Takeoff-114846.aspx

From Hollywood to Isleworth, the entertainment industry is energised about the greenfield potency of virtual reality. There's been a vertiginous rush toward production of VR content and considerable investment in development across sectors as diverse as medicine, education, and entertainment. It's clear, though, that a two-tier model is developing in which VR, for the short term at least, will be dominated by low-cost headsets and arguably poorer-quality virtual experiences.
This is the difference between 360° video and what is being termed "full" or "true VR."
"True VR video is much more compelling once people see it, but many of the monoscopic 360° videos available are akin to a 2D video that has been pasted onto the inside of a fishbowl which you've put you head into," says Paul Jackson, principal analyst, Digital Media for Ovum. "It's great for landscapes and distance shots but quickly loses its magic as you get closer to objects or people. Also the resolution tends to be shocking—think a 1080p or 720p resolution, but stretched all around your head."
The installed base of VR headsets will climb to 81 million in 2020, according to IHS Markit, dominated by lower-cost models, which are basically "shells" into which people slide their smartphone—such as Google Cardboard and Samsung Gear. IHS suggests smartphone VR headsets' share of the total installed base will be 87% at the end of 2016 while Strategy Analytics puts the figure at 92% of units sold.
Strategy Analytics emphasises the big gulf between price and quality, stating that the experience of Cardboard versus HTC Vive "is as different as listening to a car stereo versus being in the front row of a concert."
Even though Google has now upgraded to Daydream View, a $79 holder for Android phones, this pales (if you believe the reviews) beside Sony PlayStation VR's $400 head-mounted display (HMD) – plus the cost of the PS4 console itself.
"360° video is the lowest possible entry and is a not a truly virtual experience," says Technicolor entertainment technologist Mark Turner. (pictured at right) "To move you into another reality and give you a sense of presence the experience needs to be 3D and it needs an immersive audio feed. Some 360 videos are of fairly low quality production, and if done badly can done cause nausea. What we don't want to do is put people off. We need them to keep coming back for experiences so we can build the virtuous circle and hopefully step them in to VR experiences."

VR Production Challenges

He identifies production challenges for content creators who want to address as wide a market as possible without having to create content more than once. "You could have something truly interactive at the high end while producing content from the same assets down to a 360° video run on Facebook," posits Turner. "While video can easily scale to different devices, with VR the challenge is to scale the experience. Part of the new workflow the industry needs to deliver for VR is how to do this efficiently and at scale."
Nokia's Guido Voltolina—who has the intriguing job title "Head of Presence Capture"—describes the current state of activity in the VR market as "kinetic, innovative and exploratory."
"We are seeing two main trends: one regarding 2D 360° video, which is largely about delivering free content in order to introduce people to virtual reality, and a second regarding real immersive experiences that is already testing monetisation, although still with a limited audience."
However, if VR is to become widely adopted for media service delivery, it needs to move away from big wired headsets to a more seamless mobile experience. Smartphones have the great advantage of being ubiquitous TV consumption devices, in any format, from 2D to 360° and full VR.
"A fan may opt to wear a headset for a few minutes to engage with pre-match activity ahead of a big sports game or to enjoy the atmosphere in 360° video before beaming the match in 4K to a large TV set," suggests Fabio Murra, SVP marketing at compression specialist V-Nova.
A higher resolution of display than current VR devices is also necessary to ensure a realistic and immersive experience is offered to consumers.
"360° video and VR services need to start at least at 4K resolutions," argues Murra (pictured at right). He says V-Nova is experimenting with 8K, 12K and even 16K resolutions. "This will enable truly realistic VR experiences as well as neat features like the ability to zoom in and out of VR and 360° content. Without high quality, VR lacks life-like details and loses the core of its market proposition – its capacity to engage deeply with the consumer."

Affordable VR Headgear

All of this needs to be provided without consumers having to invest in expensive playback devices, without impacting the data bundles limit of mobile and ISP contracts, and for a reasonable fee.
"Headsets are making their way into people's homes, but it's going to take a while before a HMD is as much a part of the furniture as a remote control," says Sol Rogers, CEO/founder of VR content producer Rewind. "Great content will create demand for HMDs, driving growth, but while distribution is limited, many brands are questioning reach and return on investment, which limits the amount of content made. Even when distribution is there, many of the brands we work with may still choose to give away the content as it falls under marketing activity, which doesn't often bear a cost to the consumer."
With the exception of gaming, the audience isn't yet big enough to monetize content. Devices need to be in millions of people's hands to make it a reasonable money-making opportunity. The dial on VR device sales is expected to move heading into the holiday season, driven by PS4 VR and Daydream.
While Samsung's Gear VR will have the largest installed base out of all the major branded headsets this year at 5.4 million, IHS predicts Daydream will become the most popular headset for VR by 2019 due to industry support and "a compelling $79 [€71] price point."
Consumer spending on VR entertainment is forecast to ramp from $310m this year to $3.3 billion in 2020 – however, this will still represent less than 1% of overall entertainment spend worldwide.

VR Monetisation Models

To date, most content providers are focused mainly on short-form VR pieces that complement their existing content, acting as bonus features. Syfy, for example, created a mini-series that acts as an extension to its traditional TV series Halcyon, with five VR videos that allow fans to be more interactive with the show.
Technicolor's Turner believes VR could be monetized along lines akin to gaming, where a viewer watches the first 5 minutes of a piece of content for free before a pay window pops up. "Or maybe you give consumers some value, such as a first episode for free, and then as they go into the content further the experience could be populated with micro transactions (similar to the in-game purchases of virtual skins, tools or weapons by which games publishers like Valve make revenue from e-sports).
"Since we can place people inside the content we do some much more contextual advertising models," continues Turner. "Billboards in the virtual environment could be linked to online media systems. It means different users will get a different experience. This can be done in much more natural, seamless way than a jarring 30-second ad break. So long as organisations are flexible enough to try new models of monetization and not just fall back into the old ones which may translate less well."
According to Futuresource Consulting analyst Michael Boreham, "News and documentaries may start as a freemium model but evolve to a subscription service. Similarly, drama is likely to evolve from free to either ad-funded or SVOD, while sports is likely to evolve to either pay-per-view, ad-funded or season-pass models."
Aside from specific pay-per-view transactions, live events are likely to be a "top-up" for pay-TV subscribers at probably $2-$4 (£1.50-£3) a month, he suggests.
These will likely be feature interviews, behind-the-scenes or locker-room recorded experiences until the technology and installed base make it worthwhile to live stream a full game or concert from multiple angles.
"If you can be sat on the front row inches away from Beyoncé or Bieber, why wouldn't fans pay for that, when the technology is readily available in their homes?" says Michael Ford, founder,  Infinite Wisdom Studios which is being funded by the BBC to create a VR interactive proposal around live entertainment.. "It's only a matter of time before hardware access proliferates and so too does content demand."
While gamers may spend a significant amount of money on HMDs to watch immersive video, the real proposition for the majority of media consumers may lie in short, high-value content that can enhance an existing programme for a small additional fee.
"For instance, sports fans could have unprecedented access to the team's dressing room, with the possibility of listening to the head coach's pre-match motivational team-talk," says Murra. "Music fans might have the opportunity to immerse themselves in backstage proceedings, such as their favourite band members fine tuning their instrument before a concert is broadcast on TV or accessed via an OTT service."

Interoperability

Interoperability is another issue holding back the market. Stitching software varies widely, and post producers are performing colour grades and editorial by looking at a flat-screen image. "You can't do edits or sound in VR environments because the tools don't exist," says Turner.
Publishing VR content is tricky because of the fragmented market with multiple different headsets offering very different capabilities.
"There is no platform that works across all hardware," says Sol Rogers. "This is a drawback to the development of the VR industry because any content created has a limited distribution, and it is the proliferation of quality content that is the key to the growth of the industry."
Social is considered another key aspect, but one that is likely to be solved. VR is currently an insular experience and anathema to how sports fans in particular like to share a game.
At the Oculus Connect conference in October, Facebook CEO Mark Zuckerberg took the wraps off software that allows people to share the same virtual space. It's a work in progress, but features Oculus Touch, the firm's soon-to-be-released 3D controller, to change an avatar's emotion. Future versions with facial tracking could do this automatically.
"Ultimately you will be able to create a live show with a person in the VR game having their view broadcast live onto green screen," says Colin Parnell, a producer at live event screen hire firm Fonix. "We're working on it. It is really hard to do but it will be cracked. I'm sure there are TV game show producers working on this right now."
Technicolor has built a multi-million dollar "experience centre" in Culver City, California bringing together kit—including cameras, renderers and headsets—with creatives and researchers to collaborate on a new form of storytelling which it admits may take many years to develop.
"It's relatively simple to move the gaming experience into VR because it is already a 360° experience, but how narrative moves into VR is a much more nuanced conversation and could last a long time," says Turner.
Futuresource expects 2017 to see further experimentation across multiple genres of content and also the introduction of monetisation. "We also expect the production of longer/full-length pieces of content (including full sports matches) more frequently," says analyst Amisha Chauhan.

Tuesday, 8 November 2016

An Object Lesson in Personalized Streaming Video Experiences

Streaming Media 

What custom content does each viewer want to see? As broadcast and broadband converge, object-based media is showing the way to the future, and the BBC is taking the lead.

One of the powerful arguments for delivering object-based, as opposed to linear, media is the potential to have content adapt to the environment in which it is being shown. This has been standard practice on the web for years, but it is now being cautiously applied by broadcasters and other video publishers using standard internet languages to create and deliver new forms of interactive and personalised experiences as broadcast and broadband converge.
“The internet works by chopping things up, sending them over a network, and reassembling them based on audience preference or device context,” explains Jon Page, R&D head of operations at the BBC. “Object-based broadcasting (OBB) is the idea of making media work like the internet.”
Live broadcast content already comprises separate clean feeds of video, audio, and graphics before they are “baked in” to the MPEG/H.264/H.265 signal on transmission. OBB simply extracts the raw elements and delivers all the relevant assets separately along with instructions about how to render/publish them in context of the viewer’s physical surroundings, device capability, and personal needs.
The nearest parallel to what an object-based approach might mean for broadcasting can be found in video games. “In a video game, all the assets are object-based, and the decision about which assets to render for the viewer’s action or device occurs some 16 milliseconds before it appears,” says BBC research engineer Matthew Shotton. “The real-time nature of gaming at the point of consumption expresses what we are trying to achieve with OBB.”
MIT devotes a study group to object-based media, and its head and principal research scientist, V. Michael Bove, agrees that video games are an inherently object-based representation. “Provided the rendering capacity of the receiving device is known, this is proof that object-based media can be transmitted,” he says. The catch is that this only works provided the video is originated as an object.
The BBC’s R&D division is the acknowledged leader in OBB. Rather than keep its research a secret in its lab, the company is keen for others to explore and expand on its research.
“We want to build a community of practice, and the more people who engage in the research, the faster we can get some interesting experiences to be delivered,” says BBC research scientist Phil Stenton. “We are now engaged with web standards bodies to deliver OBB at scale.”

Back to Basics: What Is an Object?


In the BBC’s schema, an object is “some kind of media bound with some kind of metadata.” Object-based media can include a frame of video, a line from a script, or spoken dialogue. It can also be an infographic, a sound, a camera angle, or a look-up table used in grading (and which can be changed to reflect the content or to aid visual impairment). When built around story arcs, a “theme” can be conceived of as an object. Each object is automatically assigned an identifier and a time stamp as soon as it is captured or created.
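That definition is easy to picture as data. A minimal sketch, with invented field names (the BBC's actual model is not published here); it assumes a modern runtime where crypto.randomUUID() is available:

```javascript
// "Some kind of media bound with some kind of metadata", given an
// identifier and time stamp automatically at the point of creation.
const makeObject = (media, metadata) => ({
  id: crypto.randomUUID(),            // assigned automatically on creation
  createdAt: new Date().toISOString(),
  media,                              // a frame, a sound, a script line...
  metadata,                           // ...bound to data describing it
});

const commentary = makeObject(
  { type: "audio", uri: "https://example.com/commentary-home.aac" },
  { language: "en", perspective: "home-team" }
);

console.log(commentary.id, commentary.metadata.perspective);
```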
Since making its first public demonstration of OBB during the 2014 Commonwealth Games, the BBC has conducted numerous spinoff experiments. These range from online video instructions for kids on how to create a 3D chicken out of cardboard to work with BBC News Labs to demonstrate how journalists can use "linked data" to build stories. It has created customised weather forecasts, a radio documentary constructed according to the listener's time requirements, and most recently a cooking programme, CAKE, which was the first project produced and delivered entirely using an object-based approach.
All these explorations are a means to an end. “They illustrate how we build an object-based experience and help us understand if it is technically feasible for distribution and delivery for ‘in the moment’ contextual rendering,” says Stenton. “The next step is to extract common tools and make them open for others to use.”
In particular, the BBC is wrestling with discerning which objects are domain-specific and which can be used across applications, how those common objects can be related to one another, and what standards are needed to make OBB scalable.
Most websites are able to accommodate and adapt to the wide variety of devices that may be used to view them with varying layouts, font sizes, and levels of UI complexity. The BBC also expects a sizeable portion of both craft and consumer applications of the future to be based on HTML, CSS, and JavaScript. However, the tremendous flexibility afforded by that web tech is also a disadvantage.
“Repeatability and consistency of approach among production teams is extremely difficult to maintain,” says BBC research engineer Max Leonard, “especially when combined with the sheer volume of possible avenues one can take when creating new object-based media compositions.”

Object-Based Compositions

The BBC’s OBB experiments have relied on HTML/CSS/JS but have taken different approaches to accessing, describing, and combining the media, making the content from one experience fundamentally incompatible with another.
“The only way we can practice an object-based approach to broadcasting in a sustainable and scalable way at the same level of quality expected of us in our linear programming is to create some sort of standard mechanism to describe these object-based compositions, including the sequences of media and the rendering pipelines that end up processing these sequences on the client devices,” says Leonard. “The crux of the problem, as with any standard, is finding the sweet spot between being well-defined enough to be useful, but free enough to allow for creative innovation.”
BBC R&D has a number of building blocks for this language. These include the Optic Framework (Object-based Production Tools In the Cloud) which, to the end user, will appear as web apps in a browser, but the video processing and data are kept server-side in the BBC’s Cosmos cloud.
The Optic Framework aims to deliver reusable data models to represent this metadata, so different production tools can use the same underlying data models but present differing views and interfaces on this based on the current needs of the end user.
Optic uses the JT-NM data model as its core, and each individual component within it uses NMOS standards to allow for the development of tools within an open and interoperable framework.
Inspired by the WebAudio API, the BBC has built an experimental HTML5/WebGL media processing and sequencing library for creating interactive and responsive videos on the web. VideoContext uses a graph-based rendering pipeline, with video sources, effects, and processing represented as software objects that can be connected, disconnected, created, and removed in real time during playback.
The core of the video processing in VideoContext is implemented as WebGL shaders written in GLSL. A range of common effects such as cross-fade, chroma keying, scale, flip, and crop is built in to the library. “There’s a straightforward JSON [JavaScript Object Notation] representation for effects that can be used to add your own custom ones,” explains Shotton. “It also provides a simple mechanism for mapping GLSL uniforms onto JavaScript object properties so they can be manipulated in real time in your JavaScript code.”
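A flavour of the API, based on the library's published examples; it assumes the VideoContext library (github.com/bbc/VideoContext) has been loaded on the page, and the clip URLs and timings are illustrative:

```javascript
// Two clips joined by a cross-fade, rendered to a canvas. Effects are
// nodes in the rendering graph, connected like WebAudio nodes.
const canvas = document.getElementById("canvas");
const ctx = new VideoContext(canvas);

const clipA = ctx.video("./a.mp4");
clipA.start(0);
clipA.stop(6);

const clipB = ctx.video("./b.mp4");
clipB.start(4);
clipB.stop(10);

const crossFade = ctx.transition(VideoContext.DEFINITIONS.CROSSFADE);
crossFade.transition(4, 6, 0.0, 1.0, "mix"); // ramp the mix over the overlap

clipA.connect(crossFade);
clipB.connect(crossFade);
crossFade.connect(ctx.destination);

ctx.play();
```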
The library—available as open source—works on newer builds of Chrome and Firefox on the desktop, and, with some issues, on Safari. “Due to several factors, the library isn’t fully functional on any mobile platform,” says Shotton. “This is in part due to the requirement for a human interaction to happen with a video element before it can be controlled programmatically.” The BBC is using the library internally to develop a streamable description for media composition with the working title of UMCP (Universal Media Composition Protocol).
It has taken a cue from Operational Transformation, a solution to support multiuser, single-task working, which powers Google Docs and Etherpad. “With a bit of domain-specific adaptation, this can be put to work in the arena of media production,” explains Leonard.
The kernel of the idea is that the exact same session description metadata is sent to every device, regardless of its capabilities, which can, in turn, render the experience in a way that suits it: either live, as the director makes the cuts, or at an arbitrary time later on.
“It is the NMOS content model which allows us to easily refer to media by a single identifier, irrespective of its actual resolution, bitrate, or encoding scheme,” explains Leonard.
“One of the substantial benefits of working this way would be to allow us to author experiences once, for all devices, and deliver the composition session data to all platforms, allowing the devices themselves to choose which raw assets they need to create the experience for themselves,” he says. Examples include a low bitrate version for mobile, a high-resolution version for desktop, and 360° for VR headsets.
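UMCP itself has no published specification, so the following sketch is purely illustrative, with every field name invented: a cut event in the session description carries a single NMOS-style source identifier, and each device resolves it to whichever rendition suits its capabilities.
```js
// Purely illustrative: UMCP has no published spec, so every field name
// here is invented. A cut event carries one NMOS-style source ID and no
// rendition details whatsoever.
const sessionEvent = {
  type: "cut",
  at: 431.2, // seconds into the programme
  source: "urn:x-nmos:source:0a1b2c3d-0000-4000-8000-000000000001"
};

// Device-side resolution: the same event yields a different raw asset
// depending on what the device can handle.
function resolveRendition(event, device) {
  if (device.kind === "vr-headset") return event.source + "/360";
  if (device.kind === "mobile")     return event.source + "/low-bitrate";
  return event.source + "/high-res";
}
```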
In theory, this would allow the production team to serve potentially hundreds of different types of devices, regardless of connection or hardware capability, without the laborious work of rendering a separate version for each.
Underpinning all of this is IP Studio, the BBC’s project to move production infrastructure to IP. From a production point of view, any piece of equipment, from a camera to a vision mixer or an archive, can be treated as an object. “IP Studio orchestrates the network so that real-time collections of objects work as a media production environment,” says Page. So, in the BBC’s schema, Optic will output UMCP, and that sits on top of IP Studio.

OBB Goes Commercial

As a publicly funded body, the BBC is driven to unearth new ways of making media accessible to its licence fee-paying viewers. Larger onscreen graphics, or signing presenters in place of regular presenters, are two examples of OBB intended to improve accessibility for viewers with sight or hearing impairments.
The BBC is also part of the European Commission-funded 2-Immerse project, alongside Cisco, BT, German broadcast research institute IRT, ChyronHego, and others. It is developing prototype multiscreen experiences that merge broadcast and broadband content with the benefits of social media. To deliver the prototypes, 2-Immerse is building a platform based on the European middleware standard HbbTV 2.0.
OBB is likely to be commercialised first, though, in second-screen experiences. “The process of streaming what’s on the living room TV is broken,” argues Daragh Ward, CTO of Axonista. “Audiences expect to interact with it.”
The Dublin-based developer offers a content management system and a series of software templates that it says make it easier for producers to deploy OBB workflows rather than building their own from scratch. Initially, this is based around extracting graphics from the live signal.
Axonista’s solution has been built into apps for the shopping channel QVC, where the “buy now” TV button becomes a touchscreen option on a smartphone, and for The QYOU, an online curator of video clips that uses the technology to add interactivity to data about the content it publishes.
The idea could attract producers of other genres. Producers of live music shows might want to overlay interactive information about performances onto the second screen. Sports fans might want to select different leaderboards or heat maps, or track positions over the live pictures. BT Sport has trialed this at MotoGP motorcycle races and plans further trials next year.
Another idea is to make the scrolling ticker of news or finance channels interactive. “Instead of waiting for a headline to scroll around and read it again, you can click and jump straight to it,” says Ward. Since news is essentially a playlist of items, video content could also be rendered on-demand by way of the news menu.
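Ward’s ticker idea is easy to picture in code. The sketch below is purely illustrative rather than Axonista’s implementation: the ticker becomes a playlist of story objects instead of a burnt-in graphic, and tapping a headline fetches just that item on demand (element IDs and story data are placeholders).
```js
// Purely illustrative sketch, not Axonista's implementation. A rundown
// of story objects replaces the burnt-in ticker graphic.
const rundown = [
  { id: "markets", headline: "FTSE closes up 1.2%", src: "markets.mp4" },
  { id: "weather", headline: "Storm warning issued", src: "weather.mp4" }
];

const player = document.querySelector("video");

// Tapping a headline fetches just that story's video object on demand.
function playItem(item) {
  player.src = item.src;
  player.play();
}

// Render each headline as a tappable element instead of waiting for it
// to scroll around.
const ticker = document.getElementById("ticker");
for (const item of rundown) {
  const el = document.createElement("button");
  el.textContent = item.headline;
  el.addEventListener("click", () => playItem(item));
  ticker.appendChild(el);
}
```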
This type of application still leaves the lion’s share of content “baked in,” but it’s a taste of OBB’s potential. “All TV will be like this in future,” says Ward. “As TV sets gain gesture capability and force feedback control, it allows new types of interactivity to be brought into the living room.”
The audio element of OBB is more advanced. Here, each sound is treated as an object that can be added, removed, or pushed to the fore or background, whether for interactivity, to manage bandwidth or processing capacity, or for playback on lower-fidelity devices.
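The principle is easy to demonstrate with the standard Web Audio API, the same API that inspired VideoContext. In this sketch, with placeholder element IDs, commentary and crowd atmosphere are separate objects behind their own gain nodes, so either can be pushed to the fore, pulled back, or dropped entirely.
```js
// Each sound is an "object": a media element behind its own gain node.
const audioCtx = new AudioContext();

function makeObject(mediaElement) {
  const source = audioCtx.createMediaElementSource(mediaElement);
  const gain = audioCtx.createGain();
  source.connect(gain);
  gain.connect(audioCtx.destination);
  return gain;
}

// Element IDs are placeholders for separately delivered audio objects.
const commentary = makeObject(document.getElementById("commentary"));
const crowd = makeObject(document.getElementById("crowd"));

commentary.gain.value = 1.0; // push the commentary to the fore...
crowd.gain.value = 0.3;      // ...and pull the stadium atmosphere back
```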
Dolby’s Atmos object-based audio (a version of its cinema system) is likely to be introduced to consumers as part of a pay TV operator’s 4K/UHD package. Both BT Sport and Sky, the broadcasters duking it out with 4K live services in the U.K., have commissioned their mobile facility providers to build in Atmos recording gear. Sources at these outside broadcast providers suggest that a switch-on could happen by this time next year.
Initially, a Dolby Atmos production would allow additional user-selectable commentary from a neutral or team/fan perspective, different languages, and a referee’s mic. It would also add a more “at the stadium” feel to live events with atmospheres from the PA system and crowd.
BT’s research teams are also exploring the notion of a responsive TV UI for red button interaction on the big screen, targeting 2020 for launch.
“Today we tend to send out something optimised for quite a small screen size, and if you have a larger screen it is then scaled up,” Brendan Hole, TV and content architect at BT, told the IBC conference.
“We are asking what happens if the broadcast stream is broken into objects so that the preferences of the user can be taken into account. You can add or remove stats in a sports broadcast, for example, or have viewer selection of specific feeds. It could automatically take account of the size and type of screen, or of the fact that I have a device in my hand, so elements like stats could be delivered to mobile instead of the main screen.”
Others investigating OBB include Eko Studio, formerly known as Interlude’s Treehouse. It offers an online editing suite that lets users transform linear videos into interactive ones, so the viewer can choose the direction the video takes.
New York-based creative developer Brian Chirls has developed Seriously.js, an open source JavaScript library for complex video effects and compositing in a web browser. Unlike traditional desktop tools, Seriously.js aims to render video in real time, combining the interactivity of the web with the aesthetic power of cinema. Though it currently requires authors to write code, it is aimed at artists with beginner-level JavaScript skills, so the main limitation is creative ability and knowledge of video rather than coding ability.
MIT laid the groundwork for object-based media a decade ago. It has since moved on to holographic video and display, although some of the same principles apply.
“We are exploring holographic video as a medium for interactive telepresence,” says Bove. “Holosuite is an object-based system in which we used a range-finding camera like Microsoft Kinect as a webcam to figure out which pixels represent a person and which the room, giving us the ability to live-stream people separately from the background, with full motion parallax and stereoscopic rendering.”
For content creators, object-based techniques offer new creative editorial opportunities. The advantage of shooting in an object-based way is that media becomes easily reusable: it can be remixed to tell new stories or to build future responsive experiences that don’t require any re-engineering.
“Either we need to produce multiple different versions of the same content, which is highly expensive, or we capture an object once and work out how to render it,” says Page. “Ultimately, we need to change the production methodology. OBB as an ecosystem has barely begun.”

Monday, 7 November 2016

In Five Years the Problem won’t be Bandwidth but Knowing what to do with it

IBC

While IP dominated discussion at IBC, an equally transformative technology is developing in parallel, with profound implications for media communication.

http://www.ibc.org/hot-news/in-five-years-the-problem-wont-be-bandwidth-but-knowing-what-to-do-with-it

“Mobile is the biggest revolution broadcasters have faced. It is happening now, and we still do not see a huge, pressing desire to take advantage of this revolution among the broadcast industry,” declared Ben Faes, Managing Director of Partner Business Solutions for Google, at IBC2016.

Video over mobile is forecast to multiply exponentially in the next five years, with video representing 80% of all internet traffic by 2021. Nor is it limited to TV and film content: video is transforming social media too. Some 300 hours of video are uploaded to YouTube every minute, half of which is viewed on mobile devices, while 75% of Facebook video viewing is performed on smartphones.

Facebook Live, Twitter, YouTube, Snapchat and gaming platforms such as Amazon-owned Twitch are all investing significant resources in live video delivery.

Such demand means that at a certain point, the existing 4G mobile network technology will be unsustainable. A new infrastructure is needed and it is being developed rapidly.

The broad outlines for 5G mobile communications have been agreed by organisations like the EU’s 5G Public Private Partnership, initiated by the European Commission with manufacturers, telcos, service providers and researchers. A standard is expected by 2018.

Base specifications include regular mobile data speeds of 1Gbps, peaks of 10Gbps, latency below 1 millisecond, and power consumption so low that devices could last a decade on a single battery, spurring internet growth in emerging markets.

It’s a combination that will make high resolution, live, personalised media a reality. Applications like Virtual Reality (VR), which rely on real-time data tracking and communication, will be opened up by 5G, as will real-time 4K broadcasting and instant VOD downloads.
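
A back-of-envelope calculation shows why those base specifications support the instant-VOD claim; the bitrate below is an assumed figure for a UHD stream, not one taken from the spec.

```js
// Back-of-envelope check (the 15 Mbps UHD bitrate is an assumed figure):
const bitrateMbps = 15;
const durationSec = 2 * 60 * 60;                         // a two-hour film
const filmSizeGbit = (bitrateMbps * durationSec) / 1000; // 108 Gbit, about 13.5 GB

// At the 1 Gbps baseline speed, the whole film arrives in under 2 minutes.
const downloadSec = filmSizeGbit / 1;                    // roughly 108 seconds
console.log(filmSizeGbit, downloadSec);
```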

Beyond this, real-time holographic video is anticipated. Several operators demonstrated holography at Mobile World Congress last February, including South Korea’s SK Telecom.

“5G will be transformational,” Ulf Ewaldsson, CTO of Ericsson, told the IBC Conference. “It means we can change the production of content, change the way we distribute things. We will be able to create new content such as combining 8K with AR. This is not so far away.”

Whether there’s any benefit to viewing 8K on a mobile device is moot. The point is that bandwidth speeds will increase so dramatically that an unprecedented wealth of data will be available to mix and match applications like AR, VR, 3D, 4K, 8K in real time.

At IBC, Discovery Communications CTO John Honeycutt said the broadcaster will be studying VR and AR as it heads towards Eurosport’s coverage of the 2018 Winter Olympics in South Korea.

“We had access to an early Microsoft HoloLens, and when you put it on you can start to imagine walking up the street with a map in front of you, with restaurant menus and personal reminders, all while you’re having a Facebook chat,” Honeycutt said. “From a content consumption and a utility perspective, 5G is a big deal.”

A clutch of European telcos and network vendors, including Deutsche Telekom, Nokia, Telefónica and Vodafone, say they will begin conducting large-scale tests by 2018, with a launch in at least one city in each EU country by 2020. Before then, 5G will be demonstrated to the public at the 2018 FIFA World Cup in Russia and the 2018 European Championships, co-hosted by Glasgow and Berlin.

“Bandwidth drives content and content drives bandwidth,” Spencer Stephens, CTO of Sony Pictures Entertainment, told IBC. “As we get more bandwidth, we can do more things with it, but if people want to do more things with it, there becomes a greater demand for bandwidth. For example, can you substitute point-to-point, the Netflix model of delivery to the consumer, with broadcast? Obviously point-to-point takes a lot of bandwidth, but if we can get enough bandwidth to the consumer, then we can change fundamentally how we deliver content.”

More profoundly, the technology is expected to allow the growing number of sensors to communicate with one another. Intel expects this number to hit 50 billion by 2020. This will drive the digitisation of every industry, from healthcare to manufacturing. Applied to the automotive sector, for example, 5G opens up the car as a mobile venue for content consumption.

“The promise of 5G is fantastic – huge capacity, available everywhere, at low cost,” David Wood, Deputy Director of EBU Technology and Development and Chair of the World Broadcasting Unions’ Technical Committee told Broadcast magazine. “It could precipitate social change on the scale of the web itself.”

The underlying technologies of faster bandwidth, higher-resolution sensors, greater storage capacities and incoming internet protocols will present a whole range of opportunities for media organisations. The question is how to take advantage of them and blend them into the media creation process.

"The industry is going through a period of experimentation," summed up John Ive, Consultant & Chief Technologist for the IABM. "Nobody has all the answers but we need to scale up what works well and turn off what doesn't. Having IT, TV, and telco people in one place at IBC is creating a very important dialogue about the sorts of creative applications we need to go to market." - See more at: http://www.ibc.org/hot-news/in-five-years-the-problem-wont-be-bandwidth-but-knowing-what-to-do-with-it#sthash.UbWUhsyN.dpuf