Monday, 5 September 2016

VR comes to market


Digital Production Middle East


While the terms virtual reality and 360-video tend to be used interchangeably, there is an important distinction. VR is the more immersive format, associated with head-mounted displays such as the HTC Vive, Oculus Rift and Sony PlayStation VR. 360-degree video, on the other hand, is the more mainstream version: panoramic video viewed on smartphones with or without head-gear. Many of the technologies and techniques are the same, but it is 360-video which, through sheer accessibility, is already going mainstream.
Hollywood studios have lavished attention on VR, but it is 360-video as a news format that is on the way. This summer, pan-European news channel Euronews launched a virtual reality news reporting project with funding from Google. Euronews said it will produce regular multilingual 360-degree interactive news videos with finance from the Google fund, making it the first international news provider to “fully incorporate” 360-degree interactive video news into its regular workflow. YouTube will host VR coverage of the US Republican and Democratic National Conventions.
Meanwhile, internet service provider and media group AOL is making 360-video with AOL-owned properties like The Huffington Post and Engadget, after acquiring VR producer Ryot. AOL is also opening a live street facing VR studio on Broadway, New York in October.
It’s still very early days and the breathless predictions for its disruptive impact on everything from filmed entertainment to journalism need reining in. Nonetheless, it is reasonable to expect that VR will not repeat the failure of stereo 3D.
JP Morgan Securities forecasts VR to be a US$13.5 billion industry by 2020, mainly comprising hardware sales topping 89.3 million units. Sales of consumer gear will only rocket if there’s content to watch, but here too production seems to be growing at an astonishing pace.
HTC has a $10 billion venture capital fund with 26 partners intent on accelerating the VR ecosystem and Disney led a $65 million investment round in 360-video camera business Jaunt VR, as just two examples.
“Quality rather than quantity must be a critical consideration for the successful future of VR,” warns Carl Hibbert, associate director – entertainment content & delivery, Futuresource Consulting. “Done badly, it runs the risk of putting consumers off the technology.”
It’s a statement that was repeatedly applied to stereo 3D. The signs are that VR is different. “3D is a bolt-on to standard video, and doesn’t add much more perspective than your brain is capable of inferring,” says Ampere analyst Andrew White.
“This, in combination with its high cost and shortcomings, doomed it to failure. VR doesn’t compete with standard video in the same way, since there’s no way to convert 360-degree video to 2D while retaining the original context. VR should be seen as an entirely new medium, running in parallel or as a companion to TV and movies, rather than as an evolution of them.”
By 2020 in Western Europe, Futuresource expects an installed base of close to 30m headsets (including both smartphone-based versions and dedicated premium units like the Oculus Rift and HTC Vive). What’s remarkable is just how accessible VR is becoming. Smartphones can act as a viewing platform for an ever-growing number of headsets which, with content to fuel attraction, should see adoption grow rapidly.
“Samsung’s Gear VR should represent a more mainstream product with a broader base of customers than more premium hardware like the Vive and Rift but it too requires a flagship Samsung smartphone,” says White.
“This means that VR is not yet a good way to reach more than a niche group of millennials. That said, this is a fast moving space, and the release of Google Daydream may change this.”
Google Daydream, coming in Q4, is a VR platform built into Android intended to boost 360-video content creation and upload to YouTube’s 360 platform. Google says that Daydream-ready phones, as well as a VR viewer and motion controller, will be available this autumn.
Facebook has also launched a 360-video platform and is working on its own production system. Google is launching a cine-style VR camera to fuel content for IMAX-branded experiences at six venues launching this year.
Barely a day passes without announcement of a VR initiative. LittlStar, which has the backing of Disney, wants to be the Netflix of VR and has accumulated a library of professional content from Discovery, Showtime, online gaming giant Wargaming.net and fashion brand DKNY.
“Currently, there is a joint industry initiative to make the technology work and drive uptake by enticing customers to the platform with free content,” says Hibbert. “As soon as consumer payment becomes a core component, rights will become a major issue – whether that’s sports, concerts or other types of event.”
The OTT market is moving in the direction of VR with 67 percent of OTT companies believing VR is “here to stay”, according to research by Level 3, Streaming Media and Unisphere.
Viaccess-Orca has packaged an end-to-end live VR solution and tested it with TF1 and Sky Italia. Components include Harmonic ProMedia Xpress transcoder and ProMedia X Origin media server integrated with VideoStitch’s Vahana 360-video stitching software. Viaccess-Orca provided its Connected Sentinel Player for DRM, playback and services for interactivity and analytics.
“Operators have a vested interest in terms of optimising the image quality at the same time as reducing bandwidth lens to lens,” says Alain Nochimowski, EVP of Innovation. “For organisations which own rights there is clear interest in creating new monetisable experiences.”
Mark Blair, VP EMEA for Brightcove, thinks there may be interesting product placement opportunities, taking the smartphone gaming phenomenon Pokémon Go as a cue.
“If you overlay graphics, animation, text or information on top of a 360-video or VR broadcast and make it interactive you’ve got a monetisation model with click throughs to more information or click to buy,” he says.
“The opportunities are endless. The question is how you can make it efficient enough to become more mainstream. That’s where some of the challenges are.”
Where a Pokémon Go game app is played using video content created on the device, trying a similar approach over VR goggles is more problematic.
“That’s where I feel 360-video as opposed to VR has a smoother path to being mainstream from a monetisation perspective. The use of goggles seems to be quite a key differentiator between 360-video and a true VR experience. You have to get goggles into the hands of a large number of end users for commercial models to really take off.”
Brightcove is supporting 360-video in its player technology. “Companies wanting to use a professional enterprise grade video solution can use our player technology (its PLAY online video player) to build an integrated experience,” he says. This means that, for example, brands with a VR digital marketing campaign can distribute it via Brightcove’s OVP and hook the campaign into marketing automation tools like Oracle Eloqua to track engagement or conversion to purchase, “something free platforms like Facebook or YouTube do not concentrate on,” says Blair.
While video games remain the big initial content draw for consumer VR – likely given a boost when Sony debuts PlayStation VR in October – sponsors are also paying to brand live sports experiences. Automotive brand Lexus, for example, sponsored the VR user experience of The Open Golf which NextVR produced for Fox Sports. “We will test both subscription and single view pay per view models this year, mostly for Live Nation properties,” says Dave Cole, co-founder of NextVR. “2016 is a year of audience building. We are not going to put a paywall in the way of audience aggregation.”
Arguably the biggest production challenge is stitching the camera views together. When stitching is performed manually on recorded content, it usually leads to several hours if not days of laborious processing. Cinema style VR creation reportedly costs $10,000 per finished minute including compositing and rotoscoping.
That’s not feasible for broadcast which is why VR producers have tended to devise their own technique. “We take advantage of the fact that the cameras we use are not off the shelf but fully calibrated industrial units,” says Anthony Karydis, CEO, Mativision, a London-based company behind the VR live stream of the MTV Europe Awards, Muse and Sigur Ros concerts. “Developing our own players ensures total control right to the delivery stage. Our players are very mature and rich in features; there is simply no comparison with anything else in the market today.”

VR trending at IBC

IBC has made VR a conference theme reflecting its rapidly emerging impact on many points of media and entertainment.
Alexandre Jenny, GoPro’s senior director of immersive media solutions, is an IBC speaker. He suggests that 360-video developers need to overcome three main challenges.
“The first is parallax, the disparity caused when marrying different views from the different lenses of any multi-camera 360-rig,” he says.
A second challenge is editing the footage. This, says Jenny, is less of a technical hurdle than an editorial one. “The grammar of storytelling in 360-degrees, including when and where to cut, is still being worked through.”
Then there’s the live streaming and social sharing of content. In particular, Jenny feels that developments need to focus on making the VR environment more interactive. “Pokemon Go is a great example of gamification overlaid on live video as augmented reality. The question is how we can bring an interactive layer to full VR.”
“VR is enabling all of us to discover and experience so many fabulous new locations,” he enthuses. “This is the last step before teleportation – I can be present without having to travel there. We are that close.”

Also at IBC, there’s a VR primer from Solomon Rogers, founder & CEO of Rewind. The idea of a new visual and audio grammar is explored in depth by Simon Gauntlett, CTO of the Digital TV Group, who tackles issues like how one might create standards for live 360-video.

DDVTech Updates MistServer, Claims Load Balancing Lead


Streaming Media Global

The Dutch developer releases research claiming industry-leading load balancing and latency improvements for MistServer.


DDVTech arrives at IBC with a raft of updates to the open source MistServer including Prometheus instrumentation, triggers, and a "meta-player."
The Netherlands-based company, which also markets itself as MistServer, comes armed with fresh research analyzing its load balancing technique and live streaming latency.
The company has published a paper (also linked from http://mistserver.org/documentation#Research), produced in cooperation with the University of Leiden, comparing its algorithm with competing load balancing techniques.
To paraphrase the results, DDVTech's algorithm is "significantly better" in all test load simulations, "providing a less crash-prone method to balance overload and generally resulting in smoother and more predictable distribution of load over servers."
It has also run comparative tests of the latency different protocols produce when optimized for speed. DDVTech finds a considerable difference between the original streaming protocols and the "newer" segmented ones: the originals tend to have latencies of around 3-10 seconds, while segmented protocols like HLS and DASH can lag up to 40 seconds behind. The tests measured the time from live ingest point to various player devices.
It is also in the process of running performance tests comparing MistServer with other media servers on resources used and number of streams served.
Jaron Vietor, CTO and co-founder, discussed the enhanced version of MistServer, beginning with triggers, described as a way to alter the behaviour of the system.
"It basically allows you to call up an executable or URL, feed it information from the server, and it then responds with the action to be taken," he says. "This allows users to build authentication systems and more detailed monitoring, but also on-the-fly creation of both VOD and live streams plus the recording or pushing out of streams when wanted."
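As a rough sketch of how such a trigger executable might look: the server invokes it, writes event details to its stdin, and reads the action back from stdout. The payload layout and response strings below are illustrative assumptions, not MistServer's documented interface; a real trigger should follow the format specified for the particular trigger type.

```python
def handle_trigger(payload: str) -> str:
    """Return the action for an event the server reports.

    Assumed payload format (for illustration only): one value per line,
    stream name first.
    """
    lines = payload.strip().splitlines()
    stream = lines[0] if lines else ""
    # Illustrative policy: only streams prefixed "public_" may be viewed.
    return "true" if stream.startswith("public_") else "false"

# Installed as a trigger executable, the server's payload would arrive on
# stdin and the response goes to stdout:
#   import sys
#   print(handle_trigger(sys.stdin.read()))
```

The same shape works for the other uses Vietor mentions, such as on-the-fly stream creation or recording, with the response telling the server what to do next.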
Another new feature is support for RTMP push to other targets from all streams, both VOD and live. DDVTech has combined this with DTSC pull input. DTSC is the firm's internal format that can hold any codec and allows efficient translation to various output protocols.
"DTSC pull input is the most latency-free method to sync live streams over multiple server instances on-demand," explains Vietor. "In other words, no traffic is used if a server doesn't need a specific stream, as opposed to push, where a central server always sends a stream to another instance, regardless of viewer count."
This feature further combines with DDVTech's new load balancing solution, which tries to cluster viewers together on servers in such a way that the amount of different streams per server is minimised.
"This can provide great savings on inter-server bandwidth as well as CPU and RAM usage on the instances themselves," says Vietor.
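DDVTech's actual algorithm is the subject of its Leiden paper; purely to illustrate the clustering goal (pack viewers of the same stream onto the same server so each server carries as few distinct streams as possible), one plausible greedy assignment might look like this:

```python
def assign_viewer(stream, servers, capacity):
    """Pick a server for a new viewer of `stream`.

    servers: dict mapping server name -> dict of {stream: viewer count}.
    Prefer a server already carrying the stream (no extra inter-server
    pull needed); otherwise choose the server with the fewest distinct
    streams, keeping total viewers per server under `capacity`.
    This is an illustrative sketch, not DDVTech's published algorithm.
    """
    def load(name):
        return sum(servers[name].values())

    # First choice: a server that already has this stream and spare capacity.
    carrying = [s for s in servers if stream in servers[s] and load(s) < capacity]
    if carrying:
        best = min(carrying, key=load)
    else:
        # Otherwise minimise the number of distinct streams per server,
        # breaking ties by current viewer load.
        candidates = [s for s in servers if load(s) < capacity]
        if not candidates:
            return None  # every server is full
        best = min(candidates, key=lambda s: (len(servers[s]), load(s)))
    servers[best][stream] = servers[best].get(stream, 0) + 1
    return best
```

The trade-off such a scheme makes is exactly the one described above: concentrating a stream's audience on fewer servers saves the bandwidth, CPU and RAM of pulling that stream to additional instances.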
The Prometheus instrumentation ties into the open source monitoring package of that name, allowing users to receive a live view of all kinds of server health metrics at a global level as well as per stream. For anyone preferring a different monitoring tool, DDVTech also offers a JSON output of the same data that can be generically parsed by anything.
The developer says it has improved its handling of various audio languages and subtitles, added RTSP input support, and stabilised TS inputs and outputs to be more widely compatible.
"Additionally, we've created a 'meta-player' which isn't really a player itself, but more of a wrapper around various players," says Vietor. "It will auto-detect which players will work on an end-consumer device, and decide which will give the best results for a given stream+device combination. This includes players such as basic flash, HTML5, but also JWPlayer, Theoplayer, and the DASH-IF reference player. All of them share a single unified look and a single code will do everything automatically.
"The look can be customised with normal CSS, allowing anyone to override the looks to match their website and making integration with any web platform very easy. It will even provide a fallback if scripting is disabled in the device's browser and still function using the device's native players. As far as we know, this is the only player in the market with this wide range of compatibility. Basically, as long as a device is capable of playing video, our player will make it do so."
The base version of Mist is open source, with roughly two thousand users, among them hobbyists, universities and houses of worship.
A pro version is in use at several large customers that run their entire web streaming platforms on it, DDVTech says. It also has several partners that sell devices, usually white-labelled, with Mist pre-installed.
"The ideal application is an integration into a larger system, where it provides all the transmuxing and protocol handling, while other systems do the control through our API," says Vietor. "We've seen successful use of Mist inside encoder appliances, generic media server appliances, as part of streaming web platforms, and in media storage and playback systems, and in broadcast to OTT conversion systems."
Vietor would like to see Mist installed on camera firmware and inside CDNs. "Mist is the only technology small enough to fit into cameras like GoPros," he says. "While GoPro already offers the ability to view video over Wi-Fi, you cannot stream out of the camera to a streaming server, but you can with Mist."
Attempts to get camera developers and CDNs to consider Mist have been frustrated. "CDNs have to cache files separately for different delivery protocols like HLS, HSS or DASH. We can create a single cache for all protocols (with WebRTC coming in future), which saves CDNs from having to duplicate the load. CDNs are among the most reluctant to change their existing set-ups, though."

DDVTech's Background

DDVTech is an offshoot of an old project from 2009 in which Vietor and colleagues attempted to live stream gamers.
"We wanted to make a gaming-focused competitor to Justin.tv (which later became Twitch) but the project failed for several reasons, some of them social/community reasons, but also because we had trouble with the streaming technologies we were using at the time," explains Vietor.
"While I was complaining about the streaming tech, one of the project leads asked me 'Well, could you do any better yourself?' A few months later, the first version of MistServer was born."
Over time, MistServer became the main business and Vietor set up DDVTech around it. "Since then, we've kept a serious focus on R&D and we only do minimal marketing, allowing us to keep up a rapid development pace," he adds.
Vietor says the main thing that makes DDVTech different is that it is developer-friendly. "We don't try to produce a full solution that takes care of everything. We know there is no practical use for something like that. You want to integrate with other systems, have more control, stuff like that. So, Mist is specifically a toolkit. It is not a single application, but a set of applications that each perform a set task. Then there's the controller which ties it all together, and provides a single point for monitoring and control.
"Everything else has a use separately, without needing the rest of the server to be running, however. We even include a set of tools that have nothing to do with the media server itself, but allow you to debug or parse media streams from any server, including competitors. They have been invaluable tools during our own development and testing."
He continues: "What also helps is that we don't distinguish between live and VOD or types of content and things like that. Streams are all handled the same internally, allowing us to generically use all features and methods on all types of stream."
MistServer open source has no options for DRM. MistServer Pro can support DRM but ships without it unless requested. DDVTech found it more efficient to custom-build a customer’s preferred flavour of DRM into MistServer, contending that "standard DRM usually just doesn't quite cut it and we like to offer you the DRM you want instead of forcing you into one." It has multiple DRM templates ready to be adapted and implemented into MistServer.

Sunday, 4 September 2016

Why Blending Frame Rates is a New Artistic Tool

IBC

Ang Lee's latest feature Billy Lynn's Long Halftime Walk is garnering Oscar buzz even ahead of its November release because of its unprecedented blend of visual formats.
The film is shot and produced in 3D 4K at 120 frames per second (for both of the 3D views), and it is the frame rate which is most remarkable.
While visual effects pioneer and director Douglas Trumbull wowed IBC in 2014 with a screening of his short film UFOTOG, produced in 3D 4K at 60fps (both eye views), no mainstream feature has ever been released in the format.
Peter Jackson’s The Hobbit: An Unexpected Journey in 2012 was shot at 48 fps, and James Cameron has also publicly stated he is considering higher-speed cinematography for the Avatar sequels.
There are many technically challenging aspects to Lee's production, such as managing 40 times more data than a conventional movie and outfitting exhibitors with projection equipment capable of playing it back in 3D 4K at 120fps. The most intriguing artistic aspect, though, is the experiment with selecting different frame rates for scenes within his final master.
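The 40-times figure checks out arithmetically, assuming the conventional baseline is a 2D 2K movie at 24fps: two eye views instead of one, 4K frames carrying four times the pixels of a 2K frame, and 120fps running five times faster than 24fps.

```python
eyes = 2                                        # stereo 3D: left and right views
pixel_ratio = (4096 * 2160) / (2048 * 1080)     # DCI 4K vs DCI 2K frame: 4x
rate_ratio = 120 / 24                           # 120fps vs conventional 24fps: 5x

data_multiple = eyes * pixel_ratio * rate_ratio
print(data_multiple)  # 40.0
```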
Shooting at 120 fps unlocks the ability for fine control in post over ‘the look’ of the material because the camera’s shutter angle is no longer 'baked in' to the rushes. This same synthetic shuttering technique can tailor the look for all deliverables up to and including the maximum of 120 fps.
“Creatively this means you have the ability to make some sections seem more normal and others more heightened, so scenes in Iraq might be more heightened than scenes when Billy is with his family,” explains the film's editor, Tim Squyres ACE.
He describes how frames can be blended together to provide different looks for how we perceive movement and motion blur.
“When you move between frame rates you are trading off strobing with motion blur,” he says. “By using a 360 shutter you can pretty much eliminate strobing but if you move to 60 there will be some strobing and going down to 48 there is even more. The straightforward means to move down frame rates is to throw frames away – so from 120 to 60 you could just throw away every other frame, but by using software we could blend the before and after frames, or just a portion of them, or combine two frames together, all designed to smooth the strobing and hard-edged look you get in 3D.”
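The two routes down from 120fps that Squyres describes, throwing frames away versus blending neighbours, can be sketched on a sequence of frames. Here a "frame" is just a single brightness value; a real conform operates per pixel, and the 50/50 blend weight is an illustrative choice rather than anything used on the film:

```python
def decimate(frames):
    """Drop every other frame: 120fps -> 60fps, keeping a short, hard-edged
    effective shutter (more strobing)."""
    return frames[::2]

def blend(frames, weight=0.5):
    """Average each kept frame with its following neighbour: 120fps -> 60fps
    with a longer synthetic shutter, trading strobing for motion blur."""
    return [weight * frames[i] + (1 - weight) * frames[i + 1]
            for i in range(0, len(frames) - 1, 2)]

motion = [0, 10, 20, 30, 40, 50, 60, 70]   # a brightness ramp at 120fps
print(decimate(motion))   # [0, 20, 40, 60]
print(blend(motion))      # [5.0, 25.0, 45.0, 65.0]
```

Varying `weight`, or blending only a portion of the neighbouring frame, is what gives the fine control over "the look" that shooting at 120fps unlocks.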
While such extreme rates are entirely new, it's worth noting that variable frame rates for different scenes (usually between 18 and 23 fps) were the norm in the silent film era with film reels often delivered with instructions as to how fast or slow each scene should be shown.
One of the founding fathers of cinema, Thomas Edison, preached that 46fps should be the optimum speed at which film is shown, though in practice speeds were limited by the need to manually crank film through a projector. When the talkies arrived in 1926, 24fps was chosen by the studios because it was the slowest frame rate possible for producing intelligible sound.
High time then, nearly a century on, to bring back frame rates as an aesthetic rather than a technical or economic decision.
“120 is a different way of seeing things entirely,” enthuses Squyres. “It has an immediacy and an intimacy that is not what we are used to seeing. 24fps is arbitrary, and it comes with limitations, but talented filmmakers have been doing beautiful work in 24fps for a long time, and that’s the look that audiences have grown to love. This film may begin to change their perception.”
IBC delegates can experience this too, courtesy of a unique installation of two Christie Mirage projectors and a special preview of clips from the film presented by Ang Lee. Together, these projectors can achieve 120fps 3D 4K, employing 6P laser illumination and spectral filtering 3D by Dolby. In addition, the system can play back content in high dynamic range with Dolby Atmos sound.
IBC will be holding a deep-dive session with Tim Squyres and the film's digital production supervisor Ben Gervais following Ang Lee's keynote on Monday to examine these new concepts and what set of delivery formats Sony will be opting for.
In addition, there's a session on Saturday examining the synthetic shuttering technique with an explanation of the Tessive software used on Billy Lynn's to blend frames together and how this helps optimise the creative vision across all formats.
Earlier in the day there is a startling demonstration of light field technology featuring speakers from Lytro and Fraunhofer IIS. This technology, still in an experimental phase, is a new paradigm in optical capture, promising that not just shutter speed but every other cinematographic choice, such as lens selection, can be made after the event.


Friday, 2 September 2016

Electrifying OTT Services With the Cloud: Going Beyond the Box

Streaming Media Global

Though fraught with challenges, the move to the cloud for DVR, UI-UX, and ad insertion is helping service providers and operators maximize revenue and deliver a better experience to their subscribers.


In search of increased revenue, cost savings, improved service, and customer retention, service providers are migrating key services to the cloud. The number of TV services that are accessed over the network via a service platform or back end for video on demand (VOD), subscription video on demand (SVOD), and other TV Everywhere services such as start-over or catch-up is ramping up fast. The arguments in favor of the cloud are compelling and the momentum unstoppable, although the shift is not always straightforward and should give pause for thought.
Before we take a deeper dive into cloud DVR (cDVR), ad insertion, and the user experience (UX), it may be helpful to define what we mean by “cloud.” As ever, different people mean different things by it.
For example, “cloud” could mean the use of public virtualized IT infrastructure, such as Amazon Web Services (AWS), on which privately licensed software is operated. Or it could mean that the cloud supplies the actual function being offered. In that case, the underlying IT infrastructure may still be cloud-based or privately owned to create a “private cloud.”
Ian Munford, Akamai's director of product marketing and media solutions, points out that “Cable and IPTV operator deployments are typically happening in private clouds, which limits the value to viewers. OTT providers such as broadcasters use cloud partners like Akamai to deliver DVR-like services and have done so for many years.”
Another important subtlety is the actual location of the equipment inside the cloud, which becomes very important for certain aspects of TV delivery. There’s a general assumption that because something is “in the cloud,” it doesn’t matter where it is actually located, and that it’s probably centralized somewhere to benefit from economies of scale. That’s not true, and without proper consideration, this could be a very expensive mistake.
“From a technical point of view, cloud can be about moving functions out of client devices and into servers,” says Andy Hooper, VP of cloud, solutions and services EMEA at ARRIS. “In that sense you could argue that a telco service provider has been running a telephone network as a ‘cloud’ service for 50 years. In a business context, cloud also implies outsourcing of operations.”
Essentially, we are talking about moving what used to be done in a box in the living room to the network. The most obvious benefit of this is the ability to tap the cost savings of virtualization and to respond rapidly to changing demand.

Cloud DVR

Parks Associates' research suggests worldwide cDVR subscriptions will rise to 24 million by 2018. As Ericsson’s Sarah Paris-Mascicki wrote in TV Technology, this reflects a shift in consumer viewing habits and creates “an imperative for providers to build more flexibility into their services.” A recent study released by Technavio expects the global cDVR market to grow at a compound annual growth rate of more than 30 percent over the next 3 years.
By 2020, Ericsson predicts that more than half of viewing will be time-shifted, with providers able to reap double to triple the current benefits of their DVR services simply by upgrading to the cloud. Other vendors see similar growth. TV applications solutions provider Accedo, for example, expects cDVR penetration to pass 50 percent by 2022.
“When it comes to processing key functions of the client box, such as the DVR experience, in a more centralized way and reducing the requirements of the STB [set-top box] there is a clear trend,” says ARRIS' Hooper. “What’s more fragmented is whether this is outsourced to third parties. Some Tier 2 and Tier 3 operators don’t regard TV as core to their business and are more ready to outsource cDVR functions to someone else’s data center. It’s a commercial decision: build versus buy.”
ARRIS says it has a project with a Tier 1 European operator that is building its own cDVR infrastructure, and the company is working with other clients to build the same capabilities in order to sell a white-label service to other operators.
Edgeware customers, including Belgacom in Belgium and the Netherlands’ KPN, are using DVR in the cloud quite extensively, according to CMO Richard Brandon. “cDVR is usually one of the top three services their viewers use when consuming TV delivered over an IP network—along with Startover TV and VOD,” he says.

The Benefits of cDVR

The value proposition for operators moving to a cloud architecture is well-documented. It includes financial efficiencies such as lower operational expenses in maintaining STBs and reduced overall capital expenses; increased revenue potential by being able to upsell storage services, multiscreen experiences, etc.; and the ability to reduce customer churn by offering advanced services such as parallel recordings, unlimited storage, and improved reliability.
“The cost of supplying and maintaining home DVRs is becoming prohibitive as users demand more and more storage,” says Brandon. “Many of today’s connected devices don’t have sufficient storage for local recording.”
Operators using Edgeware's TV CDN have achieved increased subscriber satisfaction and loyalty from adding catch-up and cDVR. KPN, for example, tripled its TV subscriber base to more than 1.6 million households since launching cDVR in 2011.
“Additional benefits to the viewer include the ability to access stored programs from multiple devices or locations, escaping from the cycle of upgrading out-of-date equipment and simplifying the buying and configuration experience,” says Brandon.
Ericsson reinforces the capital savings a service provider can generate from theoretically fewer service callouts. It calculates that a typical truck roll costs $75 per subscriber and says cost-efficiencies have been the main driver for the deployment of cDVRs by U.S. operator Cablevision.
However, costs don’t simply vanish with a move to the cloud. Service providers face a huge challenge storing unique copies. “Cloud DVR may bring significant potential for video providers to accelerate monetization opportunities, but that potential quickly loses its luster if storage costs for a large-scale service reach $1,215 per subscriber,” says Yuval Fisher, Imagine Communications’ CTO. That alarming figure is tied up with the thorny legal uncertainties surrounding private/public copies, the single issue thwarting rapid rollout.

Regulatory Challenges

In most markets, deploying cDVR services is completely practical. However, the licensing arrangement for private copies in some countries, notably in North America, is a sticking point. The pivotal issue is whether a unique copy is required for every subscriber, which in the U.S. is deemed the standard.
“A private copy system requires a unique copy of a program to be saved for every subscriber that requests it, meaning recordings cannot be shared,” says Itai Tomer, head of the cDVR business line at Ericsson. “Each single, unique copy of the program has to be saved for each user, which requires a huge, growing volume of storage and very high recording and playout concurrency, and that can be problematic to sustain.”
“Technically, it is nonsensical to have to create private copies, but it’s a license requirement from some content owners,” says Brandon. “Where that exists, it makes cDVR more expensive and cumbersome to implement.”
In Europe, copyright laws vary according to region. “In Switzerland, for example, the regulatory framework is very well-defined and in accord with single copies, so you find all the major players like Swisscom and Sunrise offering cloud recording,” says Hooper. “In other places, it’s very much a traditional content rights negotiated issue and therefore a commercial decision as to whether the numbers make business sense for an operator to move to the cloud.”
Imagine Communications has done some calculating. “For content owners that want to stick to the exact letter of the ruling, it means, for example, an operator with a million users and 300 hours of recorded content per user will require 120 petabytes (PB) of storage,” says Fisher. “If you just store one physical copy of a year’s worth of content for 500 channels—which is a use case from one of our customers—then it means 20PB of storage. So whether in private or shared copy mode, there are huge storage costs.”
To make matters worse, he says, delivery to multiple devices in multiple ABR formats ups storage requirements fourfold to six-fold, rapidly becoming an even more expensive proposition.
“For a use case where the aggregate bitrate is 15Mbps with three ABR formats and 200 hours of recorded content for every subscriber, and we assume the cost per TB is $300, then you get a cost per sub of $1,215,” Fisher says.
In another example, taking the aggregate bitrate at 10Mbps with two ABR formats and 75 hours storage per user and a TB at $200, the cost is $135 per user. “Even with a very extreme scenario of very low bitrates, you still end up with a cost of $22 per user,” Fisher says. “Multiply any of those scenarios over millions, in the case of any large service provider, and you can see how the costs quickly stack up.”
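Fisher’s per-subscriber figures can be reproduced with back-of-the-envelope arithmetic, assuming the quoted bitrate applies to each stored ABR format. This is a minimal sketch; the function name and breakdown are illustrative, not drawn from Imagine Communications’ actual model:

```python
def storage_cost_per_sub(bitrate_mbps, num_formats, hours, cost_per_tb):
    """Per-subscriber cDVR storage cost under a private-copy regime,
    assuming the quoted bitrate applies to each ABR format stored."""
    # Mbps -> TB per hour: Mbit/s * 3600 s / 8 bits-per-byte / 1e6 MB-per-TB
    tb_per_hour = bitrate_mbps * 3600 / 8 / 1e6
    return tb_per_hour * num_formats * hours * cost_per_tb

# First scenario: 15 Mbps, 3 ABR formats, 200 hours, $300 per TB
print(storage_cost_per_sub(15, 3, 200, 300))   # 1215.0
# Second scenario: 10 Mbps, 2 formats, 75 hours, $200 per TB
print(storage_cost_per_sub(10, 2, 75, 200))    # 135.0
```

Both quoted figures fall out directly, which also shows why JIT transcoding helps: storing one format instead of three cuts the first scenario to a third of the cost.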
Imagine Communications advocates just-in-time (JIT) packaging for encryption and JIT transcoding to cut storage costs by 50 percent. “Store assets in a single format but deliver in all adaptive bitrate formats for multiple devices,” advises Fisher. “Operators want the ability to shift between interpretations of the private copy ruling, giving them flexibility at the program level.”
Akamai’s Munford reports that many operators still haven’t been able to secure rights for their cDVRs in a way that allows the benefits of the technology to be fully realized. “Until this problem is fixed, the value proposition for viewers will remain confusing and the economic benefits to operators limited,” he says. “OTT services do not have this problem as rights are readily available or, as is often now the case, owned by the OTT provider.”
The legal framework is evolving to open up the market. For instance, the French Digital Bill, approved in July by the French Senate, encourages the deployment of cDVR in that country.
“New and renewed contracts often add the provision for shared copy storage. Over the next few years, it is expected that most, if not all, non-adversely negotiated contracts will allow shared copy,” says Simon Trudelle, senior product marketing manager at Nagra.

User Interface and Experience

Transplanting user experiences to the cloud offers operators many of the same advantages, notably the ability to change the UX rapidly and at scale rather than rewriting it for every make and model of CPE, and even to offer a discrete user interface (UI) for every subscriber.
User experiences delivered as MPEG or H.264 streams to every STB with full interactivity enable operators to efficiently deploy services “that are equal to—or better than—experiences that run on the box itself,” according to Murali Nemani, CMO at ActiveVideo.
Let’s take a step back and look at where TV came from before we look at where it’s headed. Consumers are used to systems such as Apple TV, where the UX is underpinned by the computing power of multicore CPU and GPU chipsets. By contrast, the TV world is in some cases still constrained by embedded systems that squeezed the most out of low-horsepower chipsets and minimal memory. CE vendors used embedded software developers to build UIs, and the results often were not pretty.
The arrival of multicore chipsets for customer premises equipment (CPE) allowed operators and TV set makers to use designers instead of developers for the UI-UX. As a result, the TV UX has evolved.
“Most TV UIs are now very heavily graphics-focused and what we would call a televisual experience using ‘posters’ or ‘jackets’ and a lot of picture-in-picture,” says Anthony Smith-Chaigneau, senior director of product marketing at Nagra. “A stunning UI-UX that delivers responsive experiences is now expected. The cloud UI is driving us to a position of compromise because the functionality of a native embedded UI-UX cannot be replicated with today’s cloud UI offering.”

Why Cloud for TV UX?

The arguments for a cloud TV UI-UX are similar to those for cDVR and ad insertion; they can be summarized as follows:
  • Virtually unlimited back-office CPU power to implement the UI-UX
  • Takes advantage of legacy-deployed customer-premises equipment that is technically less capable than modern devices
  • Potential for new, less-capable CPE, as all the heavy computing is done in the cloud
  • Since applications are run in the cloud, upgrades are avoided in client devices
  • Reduced complexity of managing the different models of CPE deployed on the network
  • Application download capability such as download of new STB software to any connected device
Nagra’s Smith-Chaigneau points out that many legacy devices simply do not have the physical capacity to offer a slick UI and great UX, so the industry is looking for ways to fix that problem.
“Ironically, TV Everywhere is addressing laptops, smartphones, and tablets that have enormous computing power,” he says. “So with a cloud UI-UX, are we just talking about the issue of ‘incapable’ STB/CPEs in the field?”
Smith-Chaigneau points out that some operators are large enough to support the cost of deploying advanced services and advanced UI-UX by implementing a middleware in the client STB and by supporting all STB hardware models. “They might also look at using cloud services to reduce their total cost of ownership, but it becomes difficult to weigh the real cost of these services, as they have to support millions of consumers,” he says. “Also, they will have to look at the usage of network bandwidth, balancing between unicast and multicast services.
“It may well be that cloud UX is the solution for small and medium operators who want to deploy similar advanced services without having to bear the cost of implementing a middleware in the client STB—or at least be able to support a middleware that provides mainly the video and audio rendering means: no PVR [personal video recorder], no video gateway to home network,” he continues. “Network bandwidth still remains a challenge, but there might be fewer problems as these operators have to serve a smaller number of clients.”
Cloud UX deployment has its share of technology challenges. Nagra summarises these as follows:
  • Latency of the remote control: each action of the remote has to be transmitted to the cloud for processing.
  • Limited network resources: it is difficult to anticipate the network’s actual load, particularly for live/linear services where each video stream is unicast. Some cloud UX technologies propose unicast for the UI and multicast for the content, but merging the two streams in the client requires relatively powerful devices.
  • Concurrency of consumer activities: the industry is still learning about the scalability of cloud infrastructure and its availability to support the peaks created by live events.
In addition, Smith-Chaigneau suggests there is a real question about the simplicity of the STB in that both video and audio still need to be decoded “taking into account the numerous compression and transport formats (HD, UHD, Dolby Atmos, etc.) which requires a variety of computing power requirements.”
Vendors such as ActiveVideo espouse the innovation aspect of cloud-based UI-UX. ActiveVideo points to Ziggo's VOD and catch-up services in the Netherlands; trend-driven UIs with multiple tiles of live video on single-tuner STBs with Liberty in Puerto Rico; and the complete YouTube experience on upward of 500,000 existing STBs at UPC Hungary.
Nagra questions how open providers of video services will be to being “proxied” by a cloud infrastructure. “For example, video services like Netflix and YouTube have their UI implemented in the client device,” says Smith-Chaigneau. “Will they accept that the UI is implemented in the cloud?”

Ad Insertion

Ad insertion is less mature than cloud DVR, although early adopters are beginning to implement the technologies, and many multichannel video programming distributors (MVPDs) are exploring it.
“It’s early days,” says Ericsson’s Tomer. “One thing is clear, though. Operators agree that changing viewing habits combined with OTT video and innovation in cDVR technology have changed the game for advertising.”
The advent of programmatic advertising has begun a transition for sellers and buyers of ad inventory from an offline transaction world to an instant online experience with claimed benefits of better control, choice, and transparency.
“While traditional ad inventory selling and buying mechanisms exist, we anticipate operators will rapidly adopt the cloud as their default ad insertion infrastructure,” says Sanjay Kirimanjeshwar, head of global marketing at Amagi. “This is not restricted to buying ad inventory alone. The entire workflow of selling inventory, buying ad spots, payments, uploading video assets, managing insertions, reporting, and measurement is in the process of becoming integrated. Numerous third-party services and technology providers are plugging in their products and offerings to make this cloud workflow robust.”
He explains that operators are widening access to their ad inventory by partnering with multiple ad exchange platforms. “Since the systems are cloud-based, it eliminates geographical limitations related to sourcing and delivery,” he says. “For example, media planners based in the U.S. can create, manage, and monitor tailored advertising campaigns for audiences in Canada and Central and Latin America. As the broadcast feeds permeate geographical boundaries, subject to necessary regulatory clearances, operators are rapidly expanding their audience base and reach.”
Operators in the OTT space quite naturally see the cloud as the technology choice for ad insertion, whether for live or VOD services. It’s worth noting, though, that with the exception of the U.S. market—where MVPDs actively manage some of the advertising space on behalf of broadcast networks—“demand for addressable ad insertion remains low as the ad space is managed by broadcasters,” according to Nagra’s Trudelle.
Cloud deployment (meaning ad insertion software deployed on virtual machines) can be used for both broadcast and multiscreen ad insertion. This is happening in three different ways:
  • Server-side ad insertion for live streaming, cloud DVR, time-shifted, and VOD services
  • Client-side ad insertion, with ads pre-ingested onto the origin server or into the cloud
  • Instances of integrations with digital video ad networks (such as Google or SpotX that traditionally serve video to the web) with video service provider networks
“Client-side” ad insertion relies on the client requesting an ad to be streamed to the device at each ad break. “Server-side” ad insertion stitches the ad into the viewing stream as part of the actual program. One advantage of server-side ad insertion is that viewers are far less likely to be able to override it with client-based ad-blocking software, and the advertiser can be sure its ad was actually delivered.
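To make the distinction concrete, the server-side approach is often implemented as manifest manipulation: ad segments are spliced into an HLS media playlist at a cue point, so the client fetches one continuous stream with no separate ad request to block. The following toy sketch illustrates the idea; the tag choice, filenames, and `splice_ads` helper are illustrative assumptions, not any vendor’s actual implementation:

```python
def splice_ads(playlist_lines, ad_lines, cue_marker="#EXT-X-CUE-OUT"):
    """Toy server-side ad splice: insert ad segments into an HLS media
    playlist wherever a cue marker appears. A real SSAI system must also
    match the ad's codec, bitrate ladder, and segment durations to the
    surrounding program content."""
    out = []
    for line in playlist_lines:
        out.append(line)
        if line.startswith(cue_marker):
            out.extend(ad_lines)  # ad assumed pre-transcoded to match
    return out

content = [
    "#EXTM3U",
    "#EXTINF:6.0,", "prog_001.ts",
    "#EXT-X-CUE-OUT:30",            # upstream ad-break cue (e.g. from SCTE-35)
    "#EXTINF:6.0,", "prog_002.ts",
]
ad = ["#EXTINF:6.0,", "ad_001.ts"]
stitched = splice_ads(content, ad)
print(stitched.index("ad_001.ts"))  # the ad segment lands right after the cue
```

Because the ad travels inside the program manifest, each viewer can receive a personalized playlist, which is exactly what creates the unicast delivery load discussed below.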
The key advantage overall is the ability for operators to create more value for advertisers by enabling delivery of targeted and personalized ads.
“Advancements in server-side ad insertion, especially for live sports and news content with abrupt ad breaks, are catching operator attention,” says Kirimanjeshwar. “Likewise, operators are beginning to serve personalized ads on VOD platforms where subscriber profiles are predetermined. Either way, operators can offer an enhanced experience to both advertisers and viewers.”
Scalability is another key advantage. Cloud simplifies the addition of new ad exchanges and integration with demand-side platforms, and it supports ad insertions for a growing audience base compared with traditional and offline models. For some, the major benefit is wresting full control and visibility over ad insertions.
“The other development among operators is the adoption of programmatic spot ads,” says Kirimanjeshwar. “Compared to the earlier 1-minute spot ad inventory model where ad sourcing was largely localized, the introduction of cloud technologies has allowed aggregation of spot ads. Now, operators can sell spot ads programmatically.”
Since cloud storage is cheaper and compute is faster, cloud-based systems should be able to process audience information more quickly and deliver targeted ads accordingly, in a more cost-effective fashion for operators.
There are challenges, though. Like cDVR and timeshift services, inserting different ads for different viewers creates personalized and unicast streams, potentially unique to each viewer. Streaming ads from a centralized point can put strain on the delivery network because viewers generate their own streams of traffic right across the network. Additionally, it’s important that the ad is sent in the same format and bitrate as the program it is being inserted into.
For cDVR, Brandon explains that Edgeware’s customers are looking to solve this problem by distributing ad-insertion functions closer to the edge of their TV service infrastructure. “The ad decision-making is hosted in the cloud, but the ads themselves are stored locally and inserted at the edge of the network in real time,” he says. “Where, of course, they are not vulnerable to client-based ad-blocking software.”
Perhaps the greater impediment—a speed bump rather than a roadblock—is on the business side and the attempt to align various stakeholders (broadcasters, brand advertisers, and measurement firms). The feeling is that the value of TV Everywhere is directly aligned to the ability to manage addressable ads and that as this becomes more transparent, the whole industry will shift.
This article originally ran in the Autumn 2016 issue of Streaming Media European Edition as “Electrifying OTT Services With the Cloud.”

Thursday, 1 September 2016

Get your Game on: Esports is here

TV Technology Europe

Pro video gaming is more watched than played among millennials, making it TV gold.


The emergence of pro video gaming has been likened to that of action sports like kiteboarding and trial riding – activities that went mainstream with the oxygen of video streaming. The difference is that while not everyone has access to mountains or a surfboard, pretty much anyone can play a video game. What has made broadcasters sit up, though, is that e-sports is more watched than it is played.

The phenomenon had a slow burn before the meteoric rise of the last few years. Online game-play ignited twenty years ago when avid gamers began showcasing their skills to gain street cred online. Amateur competitions attracted games publishers to formalise play into leagues and promote their titles. At the same time, individual gamers began posting videos of gaming with commentary on YouTube, exemplified by Swedish star PewDiePie. Player and team profiles rose on a wave of internet streaming, sponsors helped legitimize the activity, Amazon took game-casting mainstream by scooping up Twitch (ahead of Google), prize money rocketed accordingly, and gaming tournaments are now streamed live from packed stadiums.

“So far esports has not needed traditional media to grow,” confirms Amisha Chauhan, research analyst at Futuresource Consulting, which puts a $500m current value on esports worldwide. “From its online base it has grown immensely due to fans that are highly tech savvy and internet fanatics.”

Esports draws comparison with top-tier global sports like Champions League football in terms of the number of viewers it can attract. The 2015 UEFA Champions League final drew a TV audience of around 180 million in 200 territories (and a total estimated reach of 400 million viewers). By comparison, last December's League of Legends world championships boasted a cumulative online and TV audience of 334 million over the four-week tournament.

“Esports is fast becoming one of the most watched and passionately followed global sports categories among younger audiences,” said Jørgen Madsen Lindemann, CEO of Swedish digital media powerhouse Modern Times Group. “There are now almost as many gamers in the world as traditional sports fans.”

The fanbase is overwhelmingly the demographic which has deserted TV for online entertainment. “Gamers are ultra-consumers: early adopters of new technology, heavy users of broadband, more interested in HD and natural-born multi-screeners,” says Michiel Bakker, CEO of Ginx.

“With the rise of YouTube and Twitch, games have become media themselves,” says Todd Hooper, CEO of virtual reality gaming platform VREAL. “More people are watching games than playing games. If you are a publisher or studio building a game, you are also thinking about how it will be viewed as entertainment.”

That's why broadcasters are eager to bring esports onboard. MTG, which bought a controlling share of Cologne-based Electronic Sports League (ESL) a year ago for €78m, launched the world's first 24/7 esports TV channel in April. It calculated that the average revenue per esports enthusiast in 2014 was over $2, compared to $56 for traditional sports fans: “This global phenomenon has tremendous potential,” declared Lindemann.

In May, Turner Broadcasting's Eleague – a joint venture with talent agency WME IMG – went live on the TBS TV channel and on Twitch scoring more than 150 million minutes of video consumption in its first week and 92,000 concurrent streams on Twitch.

In June, Sky and ITV took minority stakes in London-based Ginx eSports with the aim of launching a 24-hour TV channel. Ginx TV will air competitions such as Counter Strike: Global Offensive live from its studio in King’s Cross. It says the deal will enable it to reach 37 million households worldwide making it the world's biggest esports TV channel.

Investment is piling in from elsewhere too. Multichannel network Machinima, in which Warner Bros and Google have stakes, is launching magazine show Inside eSports on Go90, the mobile video service of U.S. telco Verizon. Call of Duty maker Activision Blizzard paid $46m (£30m) for esports network Major League Gaming and plans to launch its own esports cable channel.

Production 
Live events such as Dota 2 and League of Legends tournaments, held at major stadium venues in front of capacity crowds, are treated in much the same way as any OB.

When BBC iPlayer and BBC Three showcased the quarter finals of the League of Legends world championships last October from Wembley Arena, Trickbox TV built a temporary flyaway control room.

“It was of the same standard and using the same kit which we would deliver to any live broadcast,” explains Trickbox MD Liam Laminman. “With three incoming host feeds, our location facilities augmented coverage with Sony HDC-1500 cameras, an EVS and other equipment including a Trilogy Messenger talkback system.”

For production of Eleague, Turner built a 10,000 sq ft arena including 25,000 sq ft of LED lighting in Atlanta. The facility is fitted with 26 cameras including 12 devoted to capturing POVs for each player and one trained on the collective team. A camera suspended from the ceiling offers 360-degree angles of the event floor. In addition, Turner Studios has built custom e-sports training facilities and 75 post production suites.

Red Bull runs its own e-sports studio in Santa Monica. One format produced from there pits two video gaming teams of five players against one another. As they play, Riedel MediorNet Modular frames ingest twenty HDMI POV video signals from the gaming consoles, convert them into HD-SDI, and carry them to the control room. The POV cameras focus on the faces and hands of all ten players, with additional HD-SDI cameras positioned on the game commentators. These inputs are combined with the primary gameplay feeds to produce the e-sports broadcast.

Many esports companies run production on Blackmagic Design and/or Ross Video hardware. Romanian-based sports producer PGL has several ATEM 2 M/E 4K mixers, Teranex converters, and DeckLink capture cards. Rival streamers ESL and Hitbox deploy Ross Video Carbonite switchers and XPression or CasparCG graphics gear.

“We surpass [TV] in some aspects,” claims Vlad Petrescu, head of broadcast, PGL. “While TV has the edge in overall professionalism and broadcast consistency... an esports production looks and feels more complex.”

For example, esports tends to be highly connected to viewers via social media. During PGL's production of The Manila Major it showed a custom Battleview for Dota 2 and received “tonnes of valuable feedback from people that know how they want to watch an esports match,” reveals Petrescu. “The next day we coded these features and presented a new version, which was way better received, both by viewers who saw it for the first time and by those that didn't like it very much the first time around.”

During an event, PGL will scan social media for questions or remarks from viewers. “If we find something interesting, we have ways of showing it during different segments of the show. For example, we'll have a Q&A segment at the end of a match where our analysts answer questions from various social media channels.”

According to Adam Simmons, director of content for game streaming platform Dingit.TV, latency is crucial for streaming where audience interaction is vital to platform success.

“Using social with the live stream is vital,” he says. “Players can type in a chat room to respond to fans or to explain a move. If that delay is more than a few seconds, the game will have moved on and you will have lost your audience.”

Both Dingit.tv and Hitbox claim their latency is the net's best. “We can deliver in milliseconds which is no different to Skype,” says Jason Atkins, e-sports player turned Hitbox events manager.

Hitbox also claims to be first to market with 4K video by trialling game Heroes of the Storm in 4K in February. “4K will be standard in a couple of years,” says Atkins. “People are beginning to get kit which won't break the budget, like Nvidia GTX 1070 graphics cards, to power 4K.” 

E-sports Olympics
The marriage with TV should help legitimize e-sports in the public consciousness. “This level of recognition will help propel e-sports towards mainstream audiences rather than mainly millennials,” says Futuresource's Chauhan. “The integration of e-sport in the 2020 Olympics would potentially help overcome the stigma against it.”

Chauhan also points to cheating and drug abuse as issues impacting player performance. “The other road bump would be the lifecycle of the actual games and how long they (e-sports producers) can sustain viewership for.”

There are obvious differences between an e-sports athlete and an action sports athlete in that they aren’t propelling their bodies during their sport. However, argues Kimberly Popp, e-sports performance manager at Red Bull, e-sports players are using skills and mechanics such as hand-eye coordination. “Their physiology impacts performance,” she says. “Players train for hours to perfect their craft. Just playing the game is no longer enough to remain competitive.”

Esports could be given another boost by the sale of virtual reality (VR) displays. Esports players use a mouse and keyboard to play, making it hard for the general public to see them as athletes or accept esports as a real sport, observes Petrescu. “When VR arrives this prejudice will disappear for good. One will need to be a true athlete to be successful in VR esports.”

sidebar: VR meets Esports
The key here, as with action sports, could be enabling viewers to share in the experience with friends. “Anyone who has tried VR knows this is the future of gaming,” says VREAL's Hooper. “Traditionally, games have been treated like video content, as a 2D service. VR breaks that paradigm. VR gamers want to move around and experience their own POV. Video doesn't really enable us to do that, but game engines do.”

Seattle-based VREAL (Virtual Reality Entertainment and Livestreaming) has a platform in beta which, when integrated via an SDK into any Unreal Engine or Unity-powered experience, re-renders the game for the viewer in realtime.

“This allows viewers to have their own independent camera, and freely move about in the world instead of being locked into the player’s view,” says Hooper. “It enables a watcher to feel inside the game.”

VREAL's beta includes a rendering of avatars representing viewers in the game. For those without a VR headset, the platform will stream a 2D version and a 2D 360-degree version of the game.

sidebar: The Manila Major 
Sixteen teams competed for a $3 million prize pool on June 7 - 12, 2016 in the Mall of Asia Arena in Manila, Philippines. PGL produced the show and its video production.

“When I started to design the production for this event, it became apparent that it was too big to run on a single video mixer or even with a single director,” explains Petrescu. “As a result, the production was split into what we call 'cores'.”

One such core was the in-game production. This refers to everything that happens on screen during an actual match. The equivalent for football would be what viewers see on TV after the referee starts the match.

“Our in-game director had full control over what he showed on screen during the match,” says Petrescu. “We had several never-seen-before features like split screens with three observer points of view at the same time and even an insane composition where we put ten player cameras and modified the game interface to show a full five-versus-five battle in a more comprehensive manner.”

PGL produced three more cores, all with their own mixers, CGI systems, routing and multiviews. According to Petrescu, the challenge was to get the cores to link and communicate with each other. “Which is why, for instance, our CGI operator can activate transitions, videos, lights, sounds, LED animations and request stats from a server, all at the same time. A lot of things need to be perfect for this to work.”

A new development for PGL is arena effects. Game 'events' stored on the server are used to activate lights and pyro effects in the venue. “Let's say a bomb has been planted and it explodes... at the exact moment the bomb explodes on the server, a pre-programmed light and pyro show lights the arena,” explains Petrescu. “This is a concept we are constantly developing and improving, and it's different for each game we run events for.”