Tuesday, 15 May 2018

Compression enters the high-performance era


IBC
Delivering new methods and codecs for working with large-scale data, particularly VR and HDR, is a high priority for the media and entertainment industry. As MPEG begins work on a successor to HEVC, we take a look at the hyper-efficient compression technologies being developed for streaming immersive media.
The sheer volume and complexity of video coming down the track, not least with the imminent opportunity of super-speed 5G mobile networks, makes efficient data processing essential if live-streamed VR and other ultra-resolution, low-latency media applications are to fly.
Arguably, standards bodies like MPEG have never been busier. The 30-year-old institution has drafted and released an average of six standards a year since its launch, and it only serves the industry well by staying ahead of the game.
Ericsson Media Solutions’ Principal Technologist Tony Jones says: “Compression efficiency is one of the primary tools for providing new or better services, minimising the distribution costs, or a combination of the two.”
That’s why work on developing a means of handling large-scale data is so urgent. Chief among these efforts is a successor to the current video streaming standard, HEVC. The Joint Video Experts Team (JVET), a collaborative team formed by MPEG and ITU-T Study Group 16’s VCEG, has started work on Versatile Video Coding (VVC), which is promised, like MPEG-2, MPEG-4 and HEVC before it, to be 50% more efficient than its predecessor.
Spokesperson for MPEG Christian Timmerer says: “The goal of VVC is to provide significant improvements in compression performance over the existing HEVC standard and to be completed in 2020.”
Timmerer, who is Associate Professor at Austria’s Klagenfurt University and Head of Research at codec vendor Bitmovin, adds: “The main target applications and services include — but are not limited to — 360-degree and high-dynamic-range (HDR) videos.”
According to MPEG, initial proposals for VVC have demonstrated “particular effectiveness” on ultra-high definition (UHD) video test material. It predicts compression efficiency gains “well beyond the targeted 50% for the final standard”.
VVC would therefore join an increasingly crowded market for OTT streaming, which includes the current most frequently used codecs AVC, VP9 and HEVC, and the newcomer AV1.
Bitmovin has just published comparison tests of these codecs which suggest that AV1 (which, like VP9 but unlike AVC and HEVC, is royalty free) can outperform HEVC by up to 40%.
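As a rough, hedged illustration of what such an efficiency gain means in practice (the 5 Mbit/s HEVC reference figure below is an assumption chosen for illustration, not a number from Bitmovin’s tests): a codec that is 40% more efficient delivers comparable quality at 60% of the reference bitrate.

```python
# Back-of-envelope bitrate comparison. The 5 Mbit/s HEVC reference
# figure is an assumed ladder rung for illustration, not measured data.
def equivalent_bitrate(reference_mbps: float, efficiency_gain: float) -> float:
    """Bitrate needed for comparable quality, given a fractional
    efficiency gain over the reference codec (e.g. 0.40 for 40%)."""
    return reference_mbps * (1.0 - efficiency_gain)

hevc_1080p = 5.0  # Mbit/s, assumed HEVC rate for a 1080p stream
av1_1080p = equivalent_bitrate(hevc_1080p, 0.40)   # ~3.0 Mbit/s
vvc_target = equivalent_bitrate(hevc_1080p, 0.50)  # 50% target -> ~2.5 Mbit/s
print(f"AV1 ~{av1_1080p:.1f} Mbit/s, VVC target ~{vvc_target:.1f} Mbit/s")
```

At scale, that difference compounds: shaving 40% off every stream an OTT service delivers translates directly into CDN cost savings, which is why Jones calls efficiency “one of the primary tools”.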
However, the company is of the opinion that multiple codec standards can exist side by side. Indeed, the company has stated this is “mostly necessary”, in order to stream to a wide range of devices and platforms, adding that “the support of multiple video codecs is confirmed with the appearance of VVC.”
An important aspect of VVC is encoding that focuses on the specific regions of a 360-degree frame where most of the relevant image activity is happening and which the majority of users will watch.
Timmerer says: “VVC is still in its infancy but we might see companies making announcements in this direction at IBC.”
Enter JPEG XS
Whereas MPEG standards typically target storage, delivery and consumption by end users, JPEG’s work has historically centred on still images – but it has just delivered a new codec for video production and streaming.
JPEG XS is open-source and goes against the grain of historic codec development with a compression ratio of 6:1, which is actually lower than standard JPEG’s 10:1.
École Polytechnique Fédérale De Lausanne (EPFL) Professor Touradj Ebrahimi says: “For the first time in the history of image coding, we are compressing less in order to better preserve quality, and we are making the process faster while using less energy.”
Ebrahimi, who led JPEG XS development at EPFL, adds: “We want to be smarter in how we do things. The idea is to use less resources and use them more wisely. This is a real paradigm shift.”
JPEG XS is an evolution of the TICO codec (SMPTE RDD 35), itself based on JPEG2000 and now widely accepted for transporting video over IP workflows using SMPTE 2110.
IntoPix, the Belgian firm behind TICO, also helped design JPEG XS.
IntoPix Director of Marketing & Sales Jean-Baptiste Lorent feels it will be most useful for workflows “wherever uncompressed video is currently used”.
“A new codec is necessary to handle ever increasing data volumes due to increasing resolutions, higher frame rates, 360-degree capture and higher quality pixels,” adds Lorent.
JPEG XS is intended to address uses where low complexity and low latency are necessary, but reasonably high bandwidths can be used, for example, UHD at around 2 Gbit/s vs uncompressed at 12 Gbit/s.
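The article’s figures line up with simple arithmetic. Assuming a 2160p50 signal with 10-bit 4:4:4 sampling (the exact format is not specified, so this is an illustrative assumption), the uncompressed rate works out at roughly 12 Gbit/s, and 6:1 compression brings it to around 2 Gbit/s:

```python
# Rough arithmetic behind the quoted bandwidths. Assumed format:
# 3840x2160 at 50 fps, 10 bits per sample, three components (4:4:4).
width, height = 3840, 2160
bits_per_pixel = 10 * 3          # 10-bit samples, three components
fps = 50

uncompressed_gbps = width * height * bits_per_pixel * fps / 1e9
jpeg_xs_gbps = uncompressed_gbps / 6   # JPEG XS ~6:1 compression ratio

print(f"uncompressed: {uncompressed_gbps:.1f} Gbit/s")  # ~12.4
print(f"JPEG XS 6:1:  {jpeg_xs_gbps:.1f} Gbit/s")       # ~2.1
```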
Tony Jones says: “JPEG XS is an intra-coding technique. That is, no temporal prediction is performed. This results in much lower bit rate efficiency than compression standards such as AVC and HEVC, but in turn offers extremely low latency.
“There are a wide range of potential professional applications, including studio use, remote production and other instances where latency is critical, but where high bandwidth connections are still available,” adds Jones.
It is likely to be suited to 4K and 8K, in particular for production and editing (both live and file based), though its profile includes handling 10K.
“Light compression, such as JPEG XS, is a realistic technique to keep bandwidths, file sizes and file transfer times under control for high-quality assets, where the quality needs to be virtually indistinguishable from the uncompressed quality,” says Jones. “JPEG XS is also useful for keeping the latency well below one video frame.”
Jean-Baptiste Lorent is of the opinion that such a low latency, low compression and high efficiency codec is ideal for streaming video via Wi-Fi and 5G and will later assist the operation of drones and self-driving cars – technologies where long latency represents a danger for humans.
According to Fraunhofer IIS – developer of a JPEG XS software plugin for Adobe Premiere Pro CC – the codec is optimised for use as a mezzanine (very light) compression when high-image-quality data has to be transferred via limited bandwidth or processed with limited computing resources.
Under standardisation by ISO, JPEG XS will likely be ratified by the end of 2018 with the first products, including cameras, due shortly after.
Omnidirectional VR to the home
MPEG is also addressing delivery into the home of immersive media, for example 360 video and VR.
In both cases, according to Ericsson’s Tony Jones, there is an extremely stringent motion-to-photon requirement: the display must respond to any change in head position with extremely low latency.
Jones says: “For 360 video, the rendering is performed locally from either the entire 360 image or a suitably sized portion of it, whereas for true VR, the scene itself must be created based on those head movements. If the scene creation can be performed locally, such as in a games console, then the requirements are not too challenging. If, on the other hand, the rendering is performed remotely and needs to be delivered without an excessive bit rate demand, then there are significant challenges to achieve that at the same time as meeting the motion-to-photon requirements.”
A broad initiative that may help is MPEG-I. It’s at various stages of development; while the first part of the scheme, which defines systems, audio and video parameters, is due for publication soon, other parts are largely at outline stage.
VVC is part of MPEG-I, as is a related Immersive Audio Coding scheme, though this is still at the architecture level. However, the most intriguing phase of MPEG-I is Omnidirectional Media Format (OMAF). The first version targets 360-degree video compression in HEVC and is complete.
Timmerer says: “OMAF enables many optimisations but it may take some time until widely adopted, if at all, as it basically has a major impact on encoding, streaming, decoding, and rendering.”
A second version (OMAFv2), to be drafted by October, will target 3DoF+, an advance which includes ‘motion parallax’ to allow a viewer to ‘look behind’ objects. To put it another way, OMAF is addressing potential holographic displays.
Later versions of OMAF will also address ‘omnidirectional 6 Degrees of Freedom (6DoF) for social VR’ and even the ‘dense representation of light fields’. Timmerer describes social VR as cases which “enable VR content to be consumed in a social environment, either within the same geographic context”, for example in the same room, or “with different geographic context” – different rooms and countries.
Other aspects of MPEG-I examine point cloud compression. This form of depth information can be used to produce three dimensional or holographic scenes.
“This is in its hot phase of core experiments for various coding tools,” says Timmerer. The results are set for incorporation into a working draft.
According to Timmerer, there is no relation between VVC and OMAF although that might change in the future (perhaps 2020).
“I expect OMAFv2 will be completed earlier than VVC and therefore OMAFv2 will still rely on HEVC,” he says. “This is my current estimation.”
Publication of OMAF version 1 is in the hands of ISO, but the final draft international standard can be used now. “Basic use cases could be deployed already,” says Timmerer. “I’m pretty sure there will be some demos at IBC. It’s a bit tricky though. Devices are not yet [aware of] OMAF.”
Compression for holograms
There’s yet another layer: a scheme that specifically addresses compression of the massive data recorded as a light field. While part of MPEG-I, there also seems to be some divergence on the approach.
Streaming a ‘true native’ light field would require broadband speeds of 500 Gbit/s up to 1 Tbit/s. That’s according to estimates by Jon Karafin, CEO at holographic display developer Light Field Lab.
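To put Karafin’s estimate in perspective, a quick hedged calculation (the 100 Mbit/s home broadband figure is an assumption for illustration) shows the compression ratios a consumer light-field service would demand:

```python
# Hypothetical figures: what compression ratio would squeeze a native
# light field into a typical home connection? The 100 Mbit/s broadband
# figure is an assumption; the 500-1000 Gbit/s range is Karafin's estimate.
native_gbps_low, native_gbps_high = 500, 1000
home_broadband_mbps = 100  # assumed home line

ratio_low = native_gbps_low * 1000 / home_broadband_mbps    # 5000:1
ratio_high = native_gbps_high * 1000 / home_broadband_mbps  # 10000:1
print(f"needs roughly {ratio_low:.0f}:1 to {ratio_high:.0f}:1 compression")
```

Ratios in the thousands-to-one are far beyond what pixel-based codecs deliver at acceptable quality, which is why the MPEG work described below turns to scene models rather than compressed pixels.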
However, Karafin adds: “That’s never going to get into homes in our lifetime.”
Being able to work with so much data, let alone transmit it, requires serious compression. A group at MPEG is drafting a means of enabling the “interchange of content for authoring and rendering rich immersive experiences”.
It goes under the snappy title of Hybrid Natural/Synthetic Scene data container (HNSS).
According to MPEG, HNSS should provide a means to support “scenes that obey the natural flows of light, energy propagation and physical kinematic operations”.
Timmerer says the group is working on scene descriptions in MPEG-I, “which will study existing formats and tools and whether they can be used within MPEG-I.”
In fact, the activity is being led under the MPEG banner by CableLabs – a think tank funded by the cable industry – with input from OTOY and Light Field Lab among others.
The approach differs from conventional video compression techniques by looking to create 3D models of a scene, capturing texture, geometry and other volumetric data and then wrapping it in a ‘media container’.
Not everyone is convinced that a media container is the right approach.
MIT holographic expert V. Michael Bove says: “There isn’t a universally agreed on best practice yet. I expect that will be taken care of. It’s not an insoluble problem.”
Karafin points out that the concept is already familiar to the entertainment industry. The DCP (Digital Cinema Package) is commonly used to store and convey digital files for cinema audio, image, and data streams.


Formula One to launch OTT service


IBC

The launch of F1 TV this weekend marks the motorsport’s biggest investment in its digital transformation to date. Few see it as a risk.

When Liberty Media took over the FIA Formula 1 World Championship for $8 billion from Delta Topco in early 2017 the new executives in charge were scathing about its former ownership.
“Frankly F1 is almost a start-up, because there had been so little done on the digital and social media front, and the way the content of F1 was presented,” said Ross Brawn, the renowned former F1 team technical director recruited to lead the sport into the digital century.
“Formula One is a global premium sports brand that’s been around for 60 years but has not invested in digital or social or OTT,” explains David Bailey, F1’s Head of Research. “To hammer that point home, the F1 Facebook page was only created in 2016.”
Likewise, at the time of Liberty’s takeover, the official Twitter account of two-wheeled sport MotoGP had almost the same number of followers as that of Formula One.
“On the commercial side it does feel like a start-up because of the way the energy is focussed and that the longest people have been here is six months,” adds Bailey.
Taking their cue from sports like Major League Baseball, PGA Tour and Nascar, which have a raft of digital products curated by editorial teams, Liberty’s task, in the words of managing director of commercial operations Sean Bratches, was “to pivot this from purely a motorsport company to a marketing and entertainment company.”
The flagship is new live Grand Prix subscription service, F1 TV.
Getting to this point meant first setting up a digital and research department. “There wasn’t much on file in terms of research,” says Bailey.
Along with Brawn, Bratches, a 27-year veteran at ESPN who had charted its growth from single channel to global juggernaut, and F1 Group CEO Chase Carey directed the project. Bratches quickly hired Frank Arthofer from ESPN to lead the digital initiative as director of digital and new business.
“Their overriding philosophy was to put fans first,” explains Bailey. “In order to develop the right product the first task was to segment the audience: what do different fans of Formula One look like?”
At the lower end, he says, his team identified ‘peripherals’ and ‘incidentals’ who liked the sport but didn’t engage, watching only occasionally and passively.
‘Sociable’ and ‘habitual’ groups were much as the names suggest: “not necessarily loyal to F1 but watching out of habit and more likely to do so on a TV.”
At the top end were the ‘excitables’ – passionate petrolheads of a mostly younger demographic who already consumed F1 digitally to some extent.
Of the sport’s 500 million fans worldwide, the ‘excitables’ constituted a sizeable 120 million. Focus-grouping them further, F1 learned that such fans had formed their own independent online communities.
“We used Reddit to basically reach these fans and to act as a forum. Essentially we asked them what they wanted from a digital service.”
The results were stark. Overwhelmingly, OTT streaming came out on top. Some fans even had the nous to shout for an OTT stream at 60p. Most wanted a reliable and responsive service.
“These fans will react on Twitter if they are angry about something. No-one wants a one-star rating, especially when you first launch. It would kill us stone dead.”
Packaging was another key consideration. F1 TV costs from US$70-US$150 (€57-€122) annually, with monthly rates varying between US$8-US$12, but research told F1 that it could price the service differently in different markets.
User testing told them that fans didn’t want clutter encroaching on the real estate of the main screen, and that they wanted onboard cameras presented on top of each other “Mario Kart” style, not side by side.
“One problem: we wanted to do it quickly,” says Bailey. “To make sure we launched early in the 2018 season we had six months to do something that would normally take 18 months to complete. Even locking in prices had to be done months in advance of launch.”
The premium option, F1 TV Pro, features live races, all 20 driver cameras and additional exclusive feeds.
Subscribers are able to personalise the way they watch a Grand Prix, the content they view and how and when they access it. All of practice, qualifying and races is offered live, along with press conferences and pre- and post-race interviews.
Later in the season, the main support series FIA Formula 2 Championship, GP3 Series and Porsche Supercup will be added.
A less expensive, non-live subscription tier, F1 TV Access, provides live race timing data and radio commentary, as well as extended highlights of each session from the race weekend. It will also provide archive footage – which is extensive, with F1’s library going back decades.
NBC Sports’ Playmaker Media and iStreamPlanet are the systems integration and video streaming partners respectively.
CSG delivered identity management, payment processing and business analysis. Ostmodern designed and delivered the web and app product experience. CDN and connectivity services to distribute F1 TV content globally are provided by Tata Communications.
“It’s a fantastically high spec,” comments Ovum senior analyst Ed Barton. “You can watch cameras from any car and this will build to a huge archive of content.”
It is yet to be seen quite how payment direct to F1 from consumers will impact the existing carriage deals it has struck with pay TV services – or indeed whether a sport built on the shoulders of multimillion-dollar broadcast contracts can survive the risk of cannibalising its cash cow.
In 2017, US broadcaster NBC ended its relationship with the sport – and was replaced by ESPN – as plans for the OTT network developed.
“The impact on the value of TV rights will be negligible,” believes Bailey. “We work with ESPN in North America and they work with F1 TV side by side. The same in the UK with Sky (which is locked into a broadcast deal until 2024). We have a very distinct target audience.”
Displacing linear services with a direct to consumer offer is not necessarily an either/or proposition. A live OTT super fan experience can work in the same market as the existing commercial broadcast.
“F1 would argue that this is a complementary property in markets where it has pay deals,” says Barton. “We would apply the same concept to SVODs like Netflix with which there is a huge payTV overlap in most markets.”
Additionally, a live post-race show is available exclusively on Twitter. The F1 Live Show includes interviews with drivers, engineers and team principals following each race. Post-race highlights are also distributed via Twitter video clips.
“Twitter came to us early on with an emphasis on co-producing original live content to extend the race weekend dialogue,” explains Arthofer. “Given how well their platform caters to driving conversation around global live events, the strategic fit was perfect.”
An Esport competition, launched last year with Codemasters and Gfinity, is further intended to build a greater connection with younger fans.
This weekend’s launch of F1 TV is just the start.
“There are so many data points that come from an F1 car,” says Bailey. “We can pick up about 2 terabytes of data but at the moment we’re using only 3GB. You can add aerodynamics and tyre temperature or even the heart rate of drivers. Putting that in the hands of fans alongside live footage is incredibly powerful.”

ShiftCam 2.0 promises professional grade lenses for your iPhone

RedShark News
Adding lenses to your iPhone is not a new idea, but the convenience has always been dogged by having to attach and detach a lens each time you want a new look – by which time you may as well be using a DSLR. Until now.
Last year, start-up ShiftCam came out with a handy way round this: a custom iPhone case incorporating three pairs of dual lenses that slide, or shift, down the case and over the rear camera lens via a magnetic lens cap.
Now the company has upped the ante with a redesigned case that offers the same 6-in-1 travel set plus five ‘pro’ lenses.
You won’t need to take the case off except, in the case of dual-camera iPhones, to align the other row of lenses with the camera you want. However, unlike the original 6-in-1 lens set, the pro lenses come separately, although they work with the same slide-rail system so attachment should be easy.
The new line up of interchangeable lenses includes a 2x telephoto, 120-degree wide-angle, 238-degree fisheye, ‘traditional’ 10x macro, and ‘long-range’ 20x macro options. There is also a 120-degree wide-angle adapter for your front camera - for taking group selfies.
The system is available for the iPhones 7/Plus and 8/Plus, as well as the iPhone X.
By all accounts the pro lenses are of solid build and deliver on the promised ‘professional’ quality.
ShiftCam 2.0 is a Kickstarter-funded project which smashed through its target, so we expect it to ship from this month (May).
The entire 12-lens kit will currently cost you around £250 and comprises:
  • ShiftCam 2.0 Case
  • Front Facing Wide Angle Lens Set
  • 6-in-1 Travel Lens Set
  • Macro ProLens
  • Wide-Angle ProLens
  • Telephoto ProLens
  • 238° “Full Frame” Fisheye Advanced ProLens
  • Long Range Macro Advanced ProLens

Friday, 11 May 2018

Craft Leaders: Julian Slater, Sound Designer


IBC
Oscar-nominated sound designer Julian Slater speaks about new ways of editing sound and why the term ‘immersive’ can be a red herring. 

Julian Slater’s star was already rising before he set to work on Edgar Wright’s Baby Driver, but the high-octane soundtrack of the musical heist film shot his profile about as high as it gets for the ‘below-the-line’ craft of sound editing.

 “For lots of people to pick up on the sound of a movie in this way is just amazing,” says Slater.

“It’s a career highlight for me. But I’m only as good as the director. They have to be open to new directions as much as I want to explore new directions.”

Slater’s already impressive career has seen him work on BBC TV drama Life on Mars, blockbusters such as Mad Max: Fury Road, and with A-list filmmakers Tom Hooper (The Danish Girl) and Martin McDonagh (In Bruges), most famously collaborating on all of Edgar Wright’s features.

Unusually, Slater also takes on multiple production roles. On Baby Driver he was supervising sound editor, re-recording mixer and sound designer, for which he was Oscar nominated in Sound Editing and Sound Mixing categories (shared with re-recording mixer Tim Cavagin and production mixer Mary H. Ellis).

“A supervising sound editor has two roles,” Slater explains to IBC365.

“To run a team of other editors in order to give the director the sonic soundscape they envision and also the responsibility for controlling the budget.

“Sound designers on the other hand are generally tasked with creating weird and wonderful bespoke noises perhaps by manipulating completely unrelated sources.”

You might not think the Doppler effect of a racing engine and a lion’s roar would go together, but Slater combined both to form the unique sound of Baby’s car in Baby Driver.

“I discovered on Mad Max that just because you have a recording of the cars you actually see, they may not sound particularly great. In Baby Driver we used six different cars to compose one car’s sound.”

He explains that a re-recording mixer would traditionally prepare all the sounds required for a project some months before blending them in a final mix. “These days it’s a bit of a rolling process,” he says. “For Jumanji: Welcome to the Jungle I was mixing the sound FX while sound editing. A fair amount of this made it into the final picture.”

For some projects Slater is the common thread throughout the process, asked for his input into selecting a production mixer on the shoot, ensuring everything required from principal photography is recorded, then coming in for the director’s cut and seeing it through to the finish.

He is just as happy performing the sole function of re-recording mixer, which he has just done for the Emilio Estevez-directed feature The Public: “At least as soon as you pull your fingers away from the faders your job is done for the day,” he says. “If you’re supervising then there is always more prep to do.”

Just as important is the ability to dial into the vision of the show’s main creatives, the director and their editor. “You have to tap into that as soon as possible and do your best to service their concept. If you don’t, it’s not going to end well.”

Slater always knew he wanted to do something with sound. “Aged seven you couldn’t detach me from my Sony Walkman,” Slater recalls. “I’d tape [DJ] Tommy Vance doing the top 40 and then tape to tape record it again editing out his voice.”

It was the sight of a mixing console in the video for The Police single Every Little Thing She Does Is Magic that set him thinking about audio production. A course at the School of Audio Engineering (now SAE Institute) in London included a two-week work experience placement at music library DeWolfe. They were so impressed they offered him a position once he’d finished studying. He moved up from the transfer bay (“transferring a CD to 35mm or quarter-inch reel to DAT”) to the sound FX department, in the course of which he met Nigel Heath, co-founder of Hackenbacker Audio Post.

A year after joining the team in Soho, the company was dissolved but he and Heath became business partners and reformed the company trading as Hackenbacker Ltd. Slater was just 21.

The sixteen years he spent there saw him handle TV projects like Animal Country (with zoologist Desmond Morris), a string of commercials and animated idents, through to feature films including two for avant-garde director Peter Greenaway (The Baby of Macon, The Pillow Book), then wider exposure for Mike Figgis’ Oscar-nominated Leaving Las Vegas.

“The variety of things coming through gave me a really full education on different styles and workflows,” he says. “But it was really luck that I fell into TV and film rather than music. I’m glad because the budgets kind of fell out of big music studio recordings and picked up in movies.”

Working for Chris Morris on The Day Today and Brass Eye, Slater gained a reputation for comedy which piqued Wright’s interest ahead of his debut feature Shaun of the Dead in 2004.

“Chris and Edgar are very similar in that they are very into sound and if it’s not right they will push you to get it right and then they go even further in another direction,” he says.

“Chris wrote a sketch about kids in the 1960s being sent to school to take as many drugs as possible in order to build their resistance up. He shot on 16mm film from which we took the audio file and spun it out to 16mm mag, we then pulled the mag off the machine and literally stomped around on it to put audio ‘drop outs’ in it to age it before turning it back into digital media. It was this cool, aged sound - just for one sketch.”

Similar attention to detail is evident in Baby Driver in which sound design, mix and music not only drive the score but also the characters and plot. Slater describes the effect as a “symphonic cacophony”.

He says: “We had to come up with new ways to edit the sound that I don’t think have been done to this degree before. Instead of timecode we were working in musical notation, in bars and beats.”

The opening sequence, for example, was syncopated to Bellbottoms, a track by The Jon Spencer Blues Explosion. Working in Avid Pro Tools, Slater took the sound of the police sirens and tempo-mapped each one to the music.

“Then we’d pitch-correct them. What that means is that in some places they sound musical and completely great but in other places, they sound totally unrealistic as a police siren. So, the sirens and every single sound in the movie is pitched up or down, to work with the piece of music it was up against. What you end up with are sounds that work both musically and cinematically. For me, that’s the crux of the sound challenge: if they didn’t work both ways, we don’t have them in the mix.”
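The bars-and-beats workflow Slater describes boils down to two pieces of arithmetic: mapping timecode to beat positions at a given tempo, and computing the playback-rate ratio for a pitch shift. A minimal sketch of that maths follows; the 125 BPM tempo is an assumed figure (not the actual tempo of Bellbottoms), and this is the underlying arithmetic rather than Slater’s actual Pro Tools workflow:

```python
# Sketch of the arithmetic behind tempo-mapping and pitch-correction.
# The 125 BPM tempo is an assumption for illustration only.
def seconds_to_beats(t: float, bpm: float) -> float:
    """Convert a timecode offset in seconds to a beat position."""
    return t * bpm / 60.0

def semitone_ratio(semitones: float) -> float:
    """Playback-rate ratio that shifts pitch by N semitones
    (equal temperament: one octave = 12 semitones = 2x rate)."""
    return 2.0 ** (semitones / 12.0)

bpm = 125.0                          # assumed tempo
beat = seconds_to_beats(9.6, bpm)    # 9.6 s lands on beat 20
ratio = semitone_ratio(2)            # +2 semitones -> ~1.1225x rate
print(f"beat {beat:.1f}, rate ratio {ratio:.4f}")
```

Snapping a siren’s onset to the nearest whole beat and nudging its pitch toward the key of the cue is what makes a sound, in Slater’s phrase, work “both musically and cinematically”.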

Slater has yet to produce for the medium of virtual reality but thinks this could be the next cross-roads of storytelling and cinematic experience.

“VR will be about how certain perspectives are played out, and each will be unique – from the person sitting in row K seat 17 to the person in row K seat 18, so to speak. But we’re years from this happening.

“I feel like the term immersive audio is a bit of a red herring,” he says.

“I could put headphones on and watch a film on an iPad and still feel immersed. Any technology that helps push and pull an audience through a story is great, just as long as it doesn’t add a huge amount to your workflow. Good technology should tick both of those boxes – to enable the creative process not bog it down.”

Around 12 years ago he visited Los Angeles at the invitation of editor Carol Littleton (E.T. the Extra-Terrestrial). “She showed me round the studios and I realised that the opportunities there were rather better than in London.”

So as soon as Marvel’s Ant-Man was greenlit for Wright to direct, Slater jumped at the chance to relocate to LA. Although Wright’s take on the tiny superhero ultimately failed to make it to the screen, it served its purpose for Slater, who subsequently worked at Formosa Group and later at Technicolor on the Paramount lot in Hollywood before going freelance.

“If I can do it, anyone can do it,” he says. “I was sitting in the Cary Grant Theater [Sony Pictures recording stage] with Kevin O’Connell (veteran sound artist with credits stretching from The Empire Strikes Back to Top Gun) and pinching myself that a chap from a small town in Suffolk got here. I’d say to anyone who wants to work in sound, work hard and treat everyone around you with respect. I’m a believer in karma like that.”


Broadcasters look to programmatic programming

TV Connect / Knect365
The logical outcome of the industry’s drive to OTT will be television tailored to individual consumers. While this eventuality is still some years away, broadcasters have begun making moves in this direction.
“There will be personalised TV channels,” said Steve Reynolds, President, Imagine Communications in the opening keynote to TV Connect. “You will see a channel personalised specifically for you.”
Allan McLennan, CEO and founder of PADEM Group and chairman of the TV Connect Leaders in Global Video group, dubbed the development ‘programmatic programming’.
“What is not going to change is the reason consumers love TV and what pulls them to platforms, which is the quality of content,” he said. “What will change is how we programme the content to get it to the consumer.
“Intelligent systems will learn what consumers want, when they want it and be able to serve that content up with relevant associated ads.”
Imagine Communications can give a helicopter view of such trends since its technology plays a role in about 80 percent of broadcaster playout operations worldwide.
“The pace of acceptance of IP over the last year has picked up,” said Reynolds. “A lot of traditional broadcasters are really embracing the dynamic happening on the customer side.”
He cited Disney, an Imagine client, which is launching a pair of SVOD services, one around sports, the other (due 2019) around entertainment.
“Its strategy is that instead of building walls around its business it will build a platform that gets them to the audience.”
The end game for Disney and other broadcasters transitioning to IP is to put playout – and programming – into the cloud.
“Why? Because their customers are now connected to the cloud,” said Reynolds. “The idea of a bricks and mortar broadcast centre probably goes away at some point. With all content in the cloud and the ability to deliver that over IP to consumers, this is where fundamental change begins.”
New over-the-air delivery standards like ATSC 3.0 and the rollout of 5G cellular networks are important parts of the infrastructure assisting broadcasters’ moves to live and on-demand personalised streaming while keeping one foot in the delivery of linear programming.
Advances in Machine Learning and AI will “eventually help us to better understand content and targeting advertising,” said Reynolds. “This is still a few years away but it will make some fundamental changes to the media and entertainment experience.”
Conviva CMO Ed Haslam agreed that individualised channels – one per person – is the direction things are headed.
“What we see across our providers is the personalisation of monetisation,” he said. “Every consumer has a different sense of what they are prepared to pay or how many ads they will watch for live sport and also for entertainment.
“We will see increasing use of data to understand individual monetisation preferences so the provider can deliver personalised skinny bundles or convert AVOD-only customers into VOD customers or a hybrid of the two.”
The underlying assumption is that consumers want highly personalised content and bespoke forms of payment.
“This will increase engagement which is necessary in order to grow the business,” added McLennan.

Monetisation and ecosystem complexity: wrinkles, not roadblocks, to all-IP media

TV Connect / Knect365

After years of talk, the industry has finally reached the point of TV Anywhere, where efficiencies in IP technology are enabling broadcasters to put the content consumers want into their hands.


Delegates at day one of TV Connect made it clear, though, that wrinkles in technology and monetization strategies still needed careful thought.


“OTT is not the future. It is the present,” declared Nicolas d’Hueppe, CEO and founder of SVOD service Alchimie. “The game has started.”


The broadcast industry globally is reckoned to be worth $300 billion, while OTT is pegged at $20-25bn, yet the shift to IP is well underway.


“Arguably we are already there with addressable TV,” said Ed Haslam, CMO, Conviva. “Certainly an ‘all IP’ media world will happen at some point.”


Bill Martens, MD at streaming services provider BamTech, agreed, but warned that brakes will be applied while internet capacity builds to support scale.


“If you want to support multiple languages, this affects audio and video consumption,” he said. “If you want to support text languages, this impacts UI design. How much granularity do you want in payment options? Do you want to enable local marketing campaigns? For sports you have to cater for every league having different distribution partners, and each has its obligation to protect that content. Such product complexity is a huge issue.”


“In the short term the industry will remain a hybrid OTT and linear delivery model, but in 10-15 years IP will definitely take over,” predicted Viacom senior digital director Namrata Sarmah.


From a consumer’s point of view the lines between linear and OTT have already blurred, concluded analyst Colin Dixon, Founder, nScreenMedia.


Part of this is attributable to the rise in viewing on connected TVs via Roku, Chromecast, Amazon Fire and other streaming devices, and a correlated decline in viewing on the PC. “Video consumption in the living room has switched to using the TV as a streaming experience,” noted Haslam.


The facilitation of streaming to the living room led Allan McLennan, CEO and founder of PADEM Group, to declare that “consumer take-up and demand for OTT services is only just starting in Europe.”


Whereas initial IP services, like Hulu, were on-demand, OTT players are increasingly addressing live content. The challenge is that delivering live content over a packet-based network remains difficult, particularly when it comes to reducing latency and building scale.


“We are seeing half of all video viewing of OTT on mobile, which puts huge constraints on bandwidth,” said Guido Meardi, CEO at compression specialist V-Nova. Some 3 billion viewing hours are lost to rebuffering a year, according to nScreenMedia figures revealed at TV Connect. “On the other hand, people are buying 55-inch and 80-inch UHD TVs seeking immersive experiences at very high resolution. These are polar opposites, but it’s not an either/or for broadcasters. It’s important to service both.”


Delegates also spoke of challenges en route to growth.


“There is no question that audience fragmentation is the number one thing the industry is dealing with,” said Steve Reynolds, President, Imagine Communications. “We must cater for different connections to reach the audience and even different sources of advertiser content. It is a hugely complex area but if we can come up with solutions for that then there is a lot of opportunity.”


Reynolds was also concerned by the expectation among millennial audiences to receive content for free.


“There are two ways to make money in TV: you can sell content or sell the audience,” he said. “In 75 years, that’s all we have come up with as an industry. The pendulum swings back and forth, and right now it’s back toward ad-supported video [AVOD]. The issue is that [younger generations] don’t generally pay for content.”


The problem is particularly acute at Viacom, which is a largely ad-driven business. “No one wants to pay for content or to look at ads. That is the biggest challenge we face as an industry,” said Viacom’s Sarmah. “No one wants to see ads longer than 10 seconds. That’s a huge bottleneck for us. We are trying to reduce the ad time, but that’s not something advertisers want.”


Despite its audience comprising kids and millennials, she said the majority of Viacom’s business is still linear. “We can’t call ourselves leaders in digital yet.”


For a pure-play streamer, though, like content aggregation service Pluto TV, the legacy model presents nothing but opportunity to exploit.


“Ad-supported TV needs to have huge ratings and huge audiences to monetise the channels, and consequently this means a lot of niche content disappears or is programmed late at night,” said Olivier Jollet, MD. “This creates new business opportunities. For example, you can aggregate massive amounts of thematic content and deliver a curated selection of that to just 5,000 users.”

Broadcasters urged to Spotify the user experience

TV Connect / Knect365

Personalisation means putting the consumer, not the advertiser or content provider, first

In the drive to personalise video it seems that you, the consumer, may be missing out. Lessons should be taken from the music industry, urged experts at TV Connect.
“Personalisation is an end user benefit but that’s not where we are at today,” said Claire McHugh, CEO at interactive video format creator Axionista.
“Tens of millions of dollars are being spent to try and crack personalisation but the end user doesn’t gain the significant benefit they should,” said Aneesh Rajaram, CEO, Vewd (formerly Opera TV and billed as the world’s largest streaming TV platform).
“This model needs flipping on its head so that users gain from opting in.”
Consumers are willing to trade data for something that is useful to them. “They are not willing to do it if it’s an annoying or insulting experience,” said McHugh.
Matt Stagg, Director of Mobile Strategy, BT Sport agreed: “The one key in any personalisation is relevance. There is a fine line between personalisation and imposition. Go the wrong side of that and you lose brand loyalty and gain disengagement.”
He added, “You can have personalisation based on demographics, on football teams, on property as well as recommendation engines or tailored content but for a content provider it’s trying to get the sweet spot between all of that and maintain relevance.”
It was widely agreed that the technology is good to go as far as richer personalised user experiences are concerned. But the TV industry is not making as good use of it as it might.
“You can hyper-customise TV experiences on certain smart TV platforms today,” said Rajaram. “Android TV has opened up personalisation options as has Vewd OS. There is a potential conflict when you start to see some OTT services on the platform take home all the bacon. They understand users better while other (more linear, less connected) services can’t compete because the platform doesn’t share content with them.”
McHugh said no one has cracked the best UI yet. “Even Netflix doesn’t have a ‘don’t recommend this to me again’ button.”
Spotify is held up as the poster-child of what a video service could become. “It allows the user to feel like they own it rather than pushing music at them,” she observed.
“Why is it that music services get it right?” posed Rajaram. “We have a much higher tolerance threshold for music curation. We don’t question it so much but in the video world we are very judgemental. We often feel a recommendation is wrong. We can learn a lot from Spotify but I think platforms have a big role in guiding users around the benefits of consuming a piece of content.
“For example, nobody is telling me, as an end user, that if I watch 10 ad slots in one hour of TV I will be rewarded with a free month of Netflix. The move to hybrid IP and linear delivery to connected TVs will open up more analytics to create more of such engaging experiences for everyone.”
In some respects of course, music is a much simpler equation than video. It’s ultra-shortform and generally two clicks away from being listened to or discarded. Video, on the other hand, comes in all sorts of formats, with a mix of rights, pay mechanisms, platforms and devices to consider.
“I think the TV industry should look to the music industry with regards to the sharing of metadata over different platforms,” suggested Stagg. “I’d like to see sharing of metadata across platforms because I believe that is key to personalisation on the big screen [connected TV].”
Arguably the mobile phone is tailor-made as the platform for personalised experiences. Stagg said the future of sport was increased fan engagement, augmenting the experience with selectable live game video angles. “Machine Learning can help us a lot by taking the hard work out of the process and tailoring content – such as highlights of my favourite three soccer teams from last Saturday. There are so many things we’ve not yet done.”
It turns out that personalisation means different things depending on where you stand. It might imply a one-to-one relationship, but Sky AdSmart, Sky’s addressable TV platform, has a higher ceiling.
“We personalise down to 5,000 households,” explained Dan Stephenson, Head of Sky AdSmart. “Advertisers and marketers have been quick to embrace our use of first- and third-party data, combined with their own data, to drive national-to-local campaigns.”
He said Sky AdSmart had conducted over 200 studies since launch five years ago, “and we can clearly see a positive impact of tailored communications for certain brand performance metrics.”
Since ad breaks containing addressable ads “significantly reduce tune-away”, he concluded that “addressability creates a more engaged audience”, but he called on brands to be braver in personalising the creative experience rather than issuing a standard 30-second spot.
“You have to think about who the audience is – is it more than one person?” outlined McHugh. “What types of things might they be doing, and on what device? Then you tie it all together into a fun, engaging user experience.”
The incoming GDPR privacy rules may actually benefit TV rather than stifle innovation. “We will see the highest levels of engagement on connected TV because people trust TV,” said McHugh.
“The PSBs are seen as part of the establishment since people know you are giving them your viewing habits in return for something of value to them,” added Rajaram. “Your viewing data is being used to inform your viewing experience.”