Monday, 31 December 2018

Rendering the impossible

FEED

When games engines meet live broadcast, the real and the photoreal are interchangeable


Design and production tools that enable broadcasters to create virtual objects that appear as if they’re actually in the studio have been available for a few years, but improvements in fidelity, camera tracking and, notably, the fusion of photoreal games engine renders with live footage have seen Augmented Reality go mainstream.
Miguel Churruca, the marketing and communications director at 3D graphics systems developer Brainstorm, explains, “AR is a very useful way of providing in-context information and enhancing the live images while improving and simplifying the storytelling. Examples of this can be found in election nights, plus entertainment and sports events, where a huge amount of data must be shown in-context and in a format that is understandable and appealing to the audience.”
Virtual studios typically broadcast from a green screen set. AR comes into play where there is a physically-built set in the foreground and augmented graphics and props are placed behind the presenter. Some scenarios could have no physical props, with everything behind and in front of the presenter being graphics.
“Apart from the quality of the graphics and backgrounds, the most important challenge is the integration and continuity of the whole scene,” says Churruca. “Having tracked cameras, remote locations and graphics moving accordingly, perfect integration, perspective matching and full broadcast continuity are paramount to provide the audience with a perfect viewing experience of AR graphics.”
The introduction of games engines, such as Epic’s Unreal Engine or Unity, has brought photorealism into the mix. Originally designed to quickly render polygons, textures and lighting in video games, these engines can seriously improve the graphics, animation and physics of conventional broadcast character generators and graphics packages, but it’s complicated because of the constraints of real-time rendering and operation.
That, though, has been cracked.

Virtual/real live music show
Last year a dragon made a virtual appearance as singer Jay Chou performed at the opening ceremony for the League of Legends final at Beijing’s famous Bird’s Nest Stadium. This year, esports developer Riot Games wanted to go one better and unveil a virtual pop group singing live with their real-world counterparts.
It’s a bit like what Gorillaz and Jamie Hewlett have been up to for years, only this isn’t as tongue in cheek.
K/DA is a virtual girl group consisting of four of the most popular characters in League of Legends. In reality, their vocals are provided by a cross-continental line-up of accomplished music stars: US-based Madison Beer and Jaira Burns, and Miyeon and Soyeon from actual K-pop girl group (G)I-DLE.
Riot tapped Oslo-based The Future Group (TFG) to bring them to life at November’s World Championship Finals opening ceremony from South Korea’s Munhak stadium.
Riot Games provided art direction and a base CG model for K/DA’s lead member Ahri, and TFG transformed Ahri into her popstar counterpart and originated models for her three groupmates, based on concept art designs from Riot.
LA postproduction house Digital Domain supplied the motion capture data for the characters, while TFG completed their facial expressions, hair, clothing, and realistic texturing and lighting.
“We didn’t want to make the characters too photorealistic,” says Lawrence Jones, executive creative director at TFG. “They needed to be stylised yet believable. That means getting them to track to camera and having the reflections and shadows change realistically with the environment. It also meant their interaction with the real pop stars had to look convincing.”
All the animation and the directing cuts were pre-planned, pre-visualised and entirely driven by timecode to sync with the music.
“Frontier is our version of Unreal which we have made for broadcast and real-time compositing,” says Jones. “It enables us to synchronise the graphics with the live signal frame accurately. It drove monitors in the stadium (for fans to view the virtual event live) and it drove real world lighting and pyrotechnics.”
Three cameras were used, all with tracking data supplied by Stype: a Steadicam, a PTZ camera and a camera on a 40ft jib.
“This methodology is fantastic for narrative driven AR experiences and especially for elevating live music events,” he says. “The most challenging aspect of AR is executing it for broadcast. Broadcast has such a high-quality visual threshold that the technology has to be perfect. Any glitch in the video not correlating to the CG may be fine for Pokemon on a phone but will be a showstopper for broadcast.”
Over 200 million viewers watched the event on Twitch and YouTube.
“The energy that these visuals created among the crowd live in the stadium was amazing,” he adds. “Being able to see these characters in the real world is awesome.”

WWE WrestleMania
The World Wrestling Entertainment (WWE) enhanced the live stream production of its annual WrestleMania pro wrestling event last April with Augmented Reality content produced by WWE using Brainstorm’s InfinitySet technology.
The overall graphic design was intended to be indistinguishable from the live event staging at the Mercedes-Benz Superdome in New Orleans.
The graphics package included player avatars, logos, refractions and virtual lighting, plus substantial amounts of glass and other semi-transparent and reflective materials.
Using InfinitySet 3, WWE created a wide range of different content, from on-camera wrap-arounds to be inserted into long-format shows, to short self-contained pieces. Especially useful was a depth of field/focus feature, and the ability to adjust the virtual contact shadows and reflections to achieve realistic results.
Crucial to the Madrid-based firm’s technology is the integration of Unreal Engine with the Brainstorm eStudio render engine. This allows InfinitySet 3 (the brand name for Brainstorm’s top-end AR package) to combine the high-quality scene rendering of Unreal with the graphics, typography and external data management of eStudio, while giving full control of parameters such as 3D motion graphics, lower-thirds, tickers, and CG.
The Virtual Studio in use by the WWE includes three cameras with an InfinitySet Player renderer per camera with Unreal Engine plugins, all controlled via a touchscreen. Chroma keying is by Blackmagic Ultimatte 12.
For receiving the live video signal, InfinitySet is integrated with three Ross Furio robotics on curved rails, two of them on the same track with collision detection.
WWE also uses Brainstorm’s AR Studio, a compact version which relies on a single camera on a jib with Mo-Sys StarTracker. There’s a portable AR system too, designed to be a plug-and-play option for on-the-road events.
Brainstorm’s technology also played a role in creating the “hyper-realistic” 4K AR elements that were broadcast as part of the opening ceremony of the 2018 Winter Olympic Games in PyeongChang.
The AR components included a dome made of stars and virtual fireworks that were synchronised and matched with the real event footage and inserted into the live signal for broadcast.
As with the WWE, Brainstorm combined the render engine graphics of its eStudio virtual studio product with content from Unreal Engine within InfinitySet. The setup also included two Ncam-tracked cameras and a SpyderCam for tracked shots around and above the stadium.
InfinitySet 3 also comes with a VirtualGate feature which allows for the integration of the presenter not only in the virtual set but also inside additional content within it, so the talent in the virtual world can be ‘teletransported’ to any video with full broadcast continuity.

ESPN
Last month, ESPN introduced AR to refresh the presentation of its long-running sports discussion show, Around the Horn (ATH).
The format is in the style of a panel game and involves sports pundits located all over the U.S. talking with show host Tony Reali via video conference link.
The new virtual studio environment, created by the DCTI Technology Group using Vizrt graphics and Mo-Sys camera tracking, gives the illusion that the panellists are in the studio with Reali. Viz Virtual Studio software can manage the tracking data coming in from any tracking system and works in tandem with Viz Engine for rendering.
“Augmented reality is something we’ve wanted to try for years,” Reali told Forbes. “The technology of this studio will take the video-game element of Around the Horn to the next level while also enhancing the debate and interplay of our panel.”
Sky Sports
Since the beginning of this season’s EPL, Sky Sports has been using a mobile AR studio for match presentation on its Super Sunday live double-header and Saturday lunchtime live matches.
Sky Sports has worked with AR at its studio base in Osterley for some time, but the move into grounds is intended to improve the output aesthetically, editorially and analytically. A green screen is rigged and de-rigged at each ground inside a standard matchday 5m x 5m presentation box with a real window open to the pitch. Camera tracking for the AR studio is done using Stype’s RedSpy with keying on Blackmagic Design Ultimatte 12. Environment rendering is in Unreal 4 while editorial graphics are produced using Vizrt and an Ncam plugin.
Sky is exploring displaying AR team formations, using player avatars on the floor of the studio and having them appear in front of the pundits.
Sky Sports head of football Gary Hughes says the set initially looked “very CGI” and “not very real” but it’s improved a lot.
“With the amount of CGI and video games out there, people can easily tell what is real and what is not,” he says. “If there is any mystique to it, and people are asking if it is real or not, then I think you’ve done the right thing with AR.”

Spanish sports
Spanish sports shows have taken to AR like a duck to water. Specifically, multiple shows have been using systems and designs from Lisbon’s wTVision, which is part of the Spanish media group Mediapro.
In a collaboration with València Imagina Televisió and the TV channel À Punt, wTVision manages all virtual graphics for the live shows Tot Futbol and Tot Esport.
The project combines wTVision’s Studio CG and R³ Space Engine (real-time 3D graphics engine). Augmented Reality graphics are generated with camera tracking via Stype.
For Movistar+ shows like Noche de Champions, wTVision has created an AR ceiling with virtual video walls. Its Studio CG product controls all the graphics. For this project, wTVision uses three cameras tracked by RedSpy with Viz Studio Manager and three Vizrt engines, with the AR output covering the ceiling of the real set and the virtual fourth wall.
The same solution is being used for the show Viva La Liga, in a collaboration with La Liga TV International. 
AR is also being used for analytical overlays during live soccer matches. Launched in August, wTVision’s AR³ Football can generate AR graphics for analysis of offside lines and free-kick distances from multiple camera angles. The technology allows a director to switch cameras; the system auto-recalibrates the AR and has it on air within a couple of seconds.


Friday, 21 December 2018

Cloud microservices - What are they and what’s the best approach to using them?


Feed


Microservices are the building blocks on which all modern and successful digital businesses are constructed. 


It doesn’t sound like much. A microservice is the minimum embodiment of technology which provides a performant and valuable service. Yet this software design architecture is instrumental to the spectacular success of Spotify, Netflix, Expedia, Uber, Airbnb and essentially every other digital business that gained commercial prominence in the past decade.
It’s an approach to application development and management that is also seen as crucial to the transition of old school media – pay TV operators and commercial broadcasters – to the brave new world of the Cloud.
The key word is ‘approach’ and not all broadcast or vendor engineering teams are getting it right.
“Microservices serve a fundamental role in most aspects of the software landscape for media and entertainment,” says Brick Eksten, CTO, Playout & Networking Solutions at Imagine Communications. “Whether it is a core network function, a piece of a website, or a utility that runs in the background, there is an opportunity to improve that role or function through the use of microservices.”
So, what is a microservice?
The term describes the practice of breaking up an application into a series of smaller, more specialised parts, each of which communicates with the others across common interfaces such as REST APIs over HTTP.
“These smaller components can scale independently of each other and the wider stack,” says Kristan Bullett, co-MD at cloud product and services provider Piksel. “They are modular so can be tested, replaced, upgraded and swapped out easily. It is also much easier to break down workloads using microservices and spread them across the cloud, efficiently matching resources more closely to business needs.”
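To make the idea concrete, here is a minimal sketch of the kind of single-purpose service Bullett describes, exposing one REST resource over HTTP. The service name, routes and data are hypothetical, and a production deployment would add authentication, service discovery and orchestration on top.

```python
# Minimal sketch of a single-purpose microservice exposing a REST API over HTTP.
# Hypothetical example: an asset-metadata service that other components call.
# Requires Flask (pip install flask); names and routes are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory store stands in for whatever backing data this component owns.
ASSETS = {"abc123": {"title": "Match highlights", "duration_s": 95}}

@app.route("/health")
def health():
    # Orchestrators poll this to decide whether to restart or replace this
    # instance -- each service scales and recovers independently of the stack.
    return jsonify(status="ok")

@app.route("/assets/<asset_id>", methods=["GET"])
def get_asset(asset_id):
    asset = ASSETS.get(asset_id)
    if asset is None:
        return jsonify(error="not found"), 404
    return jsonify(asset)

if __name__ == "__main__":
    app.run(port=8080)
```

Because a component like this owns its own interface and data, it can be tested, replaced or scaled without touching anything else in the stack.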
A familiar example is the landing screen you face when accessing a new service on a website, which asks you to sign in with Facebook, Google or email. Netflix builds much of its operation from microservices, dozens of which interoperate to provide the slick experience users of its platform receive.
“The reason microservices are so great is because typical software approaches rely on bulk installations, which must be upgraded all at once (much like OS updates to phones or laptops),” says Alex Snell, associate solution architect at system designers and consultants BCi Digital. “Microservices can be changed as the provider sees fit, so a user sees different things between visits to a platform, or even during a single session.”
In the broadcast facility, the same approach of bulk software implementations exists. To add new functionality to a system, a months-long provider acquisition and consultation period is often followed by further months of installation, testing and, finally, launch.
“As speed to deployment increases, broadcasters want upgrades to be faster,” says Snell. “If a system were built from microservices, implementation of upgrades and changes can be realised in days, or even hours.”
Microservices contrast with the older broadcast model, which is typically characterised as monolithic. Essentially inflexible, cost-inefficient and no longer fit to compete with digital-first rivals, any organisation stuck with this model won’t travel far. A monolithic architecture is where the functions needed to run operations are tightly interwoven, so that a change to one part of the software will have immediate consequences for the rest.
“Microservices are really about making small changes in a controlled and non-invasive fashion,” explains Bullett. “By taking a microservices approach you have a much smaller slice of functionality which you can test and introduce with great confidence into the wider service. Where cloud utilises compute, storage and networking resources more efficiently, microservices-in-the-cloud makes even better use of them, pulling down operating costs.”
While a number of workflow functions including transcoding, graphics insertion and scheduling are harnessing microservices, it is the macro benefits of the approach that are more important.
These include the ability to scale cloud resources, avoiding the need to scale an entire platform that the applications are part of; the ability to pick best of breed applications and scale easily with them; the isolation of software development so developers can work on part of a service without interfering with the rest of the stack; and the agility to update, release and – if necessary – pull back a software release without impacting the applications around it.
“Broadly, it’s about lowering the cost and the risk of change while increasing flexibility,” says Bullett.
Cloud-native
Microservices could just as easily be called ‘cloud-native software’ since they are specifically re-architected for life in the cloud. However, much of current cloud usage in the TV industry is what is dubbed ‘lift-and-shift’. This is when developers that previously married software with dedicated hardware simply port their existing software into a datacentre, without any software redesign.
“There are crucial differences between how physical and virtual hardware systems operate,” says Bullett. “Software designed to sit on dedicated hardware systems will be constrained in what it can achieve in the cloud. This ‘lifted and shifted’ software simply can’t scale as efficiently as a ‘cloud-native’ solution. It lacks the ability to tap into traditional cloud characteristics such as elastic scaling, geo-dispersion and advanced process automation.”
“The best way to write software for the cloud doesn’t necessarily change what the software is doing,” says Shawn Carnahan, CTO, Telestream.
His company has spent the last 18 months taking the software it ships today and migrating it to a microservice, often using the exact same code.
Imagine’s Zenium platform, according to Eksten, allows its customers and partners to create microservices on demand. “Our investment has been to develop a platform that allows us to develop at the nano-services scale, one that maximises the philosophy of microservices, while allowing us to maximise our return on R&D investment,” he says. “The next step, where the community will become more involved, is to establish a set of standards around how microservices interoperate at the network level. It’s something we need as a community to move the collective success of the industry forward.”
The EBU, a coalition of Europe’s broadcasters, is working on this. The Media Cloud and Microservice Architecture (MCMA) builds on previous work in the FIMS (Framework for Interoperable Media Services) project and aims to develop a set of APIs to combine microservices in the cloud with other in-house services and processes. MCMA will also share libraries containing “glue code” between these high-level APIs and low-level cloud platforms.

Supply chain methodology
According to Simon Eldridge, chief product officer at SDVI Corporation, a provider of software-as-a-service solutions, the real challenge is less about the technology per se than about the approach to business.
“Traditionally, vendors have sold product with licences or in boxes and broadcasters are used to buying boxes and licensing software then amortising the expense over time,” he says. “When you move to an operations model where you only pay for what you use, it seems difficult for some organisations to get their heads around. Once they do, they immediately see the benefit in only paying for the services they require.”
He adds, “Essentially, in a real microservice environment the end user gets to pick and choose the providers of each service from the best of breed vendors, and plug them together in a way that lets them construct their own solution, not simply turn things on and off from a single vendor - that’s just software options on a monolithic application.”
Carnahan agrees, suggesting broadcasters need to shift mindsets further toward trusting in software as a service. “The client doesn’t know what machine the software is actually running on. There’s no sense of permanence. The software only lives long enough to do the workload that it is tasked to do and then goes away. That is a very different concept from shipping software that runs 24/7 but is chewing up power all day long while the customer is paying off the lease it used to buy the gear in the first place.”
It’s becoming more and more common for media companies to consider what they do in the context of a supply chain – essentially receiving raw material (the content), then processing, assembling and packaging it for distribution to consumers, much like any manufacturing facility. But customers need to get their head around the loss of control that comes with moving to software as a service.
“They need to adapt to a supply chain methodology which asks how much does it cost to produce and deliver a show, rather than how much will it cost me to buy gear,” suggests Carnahan. “And instead of dealing in thousands of pounds of capital investment the answer will be in fractions of a penny per minute.”
Vendors, he says, can take care of solving that equation by providing it as a service which delivers on the desired outcome - whether quality, turnaround, reliability.
“Vendors are providing the infrastructure that sits indirectly on top of a public cloud, but their margins have to be sufficient to sustain the business. In the end, the customer is probably paying more for software as a service than if they had capitalised the hardware and software themselves, but the trade-off is that they don’t carry that capex and they have far more options to change their business.”
The classic example is being able to spin up a channel in days if not hours and if it doesn’t work, simply turn it off without ongoing costs.
Don’t write off hardware
Can microservices solve all problems? Well, no, as microservices are fundamentally a software solution to problems that may be better solved in hardware.  For example, Imagine’s Selenio Network Processor is an FPGA-based, low-latency 1RU solution that can handle simultaneous processing of up to 32 HD streams or 8 UHD streams.  Can you perform the same functionality in microservices?
“Absolutely, but the cost would be prohibitive,” says Eksten. “The benefit of moving to a COTS architecture is based partially on Moore’s Law: as compute power goes up, costs come down. We are at the point where it is now financially and physically efficient to perform many, if not most, broadcast functions in software as microservices — but not every function.”
The move to microservices doesn’t have to happen overnight, but it does have a “best-by” date associated with when giant players like Amazon and Google will overwhelm any payTV or broadcast competition. You are already seeing the effects of this chess match being played out by the M&E giants and the IT market leaders. At some point it will become a relevant discussion for the rest of the market. It’s not a question of “if,” but a question of “when.”

How to switch to microservices in a few easy moves…

The easiest way to get into microservices, at least according to Imagine, is to purchase a microservices-based solution. (Imagine has the major technologies covered, offering microservices-based playout, multiviewer, encoding, transcoding and ingest solutions.) 
Alternatively, you can explore microservices as provided by a cloud vendor, since all major cloud providers layer their service offerings on top of individual microservices. For instance, if you want to try the captioning services from IBM or Microsoft, you can start by sending files into the services and, in turn, those cloud providers return captions or captioned content.
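The interaction pattern tends to be the same whichever provider you try: submit the media, poll a job resource, then pull the result back. The sketch below illustrates only that generic pattern; the endpoints and field names are hypothetical placeholders, not IBM’s or Microsoft’s actual APIs.

```python
# Illustrative submit/poll/fetch pattern for a cloud captioning microservice.
# The base URL, routes and JSON fields are hypothetical; real providers have
# their own APIs and authentication. Requires the 'requests' package.
import time
import requests

BASE = "https://captioning.example.com/v1"   # hypothetical service endpoint
API_KEY = "YOUR-API-KEY"                     # supplied by the provider
AUTH = {"Authorization": f"Bearer {API_KEY}"}

def caption_file(path: str) -> str:
    # 1. Submit the media file and receive a job ID.
    with open(path, "rb") as media:
        job = requests.post(f"{BASE}/jobs", files={"media": media}, headers=AUTH).json()

    # 2. Poll until the service reports the job as finished.
    while True:
        status = requests.get(f"{BASE}/jobs/{job['id']}", headers=AUTH).json()
        if status["state"] in ("complete", "failed"):
            break
        time.sleep(5)

    # 3. Fetch the resulting caption document (e.g. WebVTT).
    if status["state"] != "complete":
        raise RuntimeError("captioning job failed")
    return requests.get(status["captions_url"], headers=AUTH).text
```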
The most common way to start building a microservices-based infrastructure would be to select existing service/solution chains and start swapping out individual components for ones based on microservices. 
“That will provide you with a piece-wise method for stepping into microservices,” says Eksten.  “If, however, the customer wants to move to a more pure microservices architecture, there are many considerations to get from traditional rack/stack thinking to a process and plan based on microservices.”
The first step is organisational.  Eksten explains: A pure-play microservices solution requires a team to consider each service element (each microservice) as a logical unit of functionality — what it is, what its requirements are, how it works, and ultimately how it will be deployed and managed. This doesn’t necessarily mean DevOps, but it does mean bringing together the right people.  It takes a diverse team with a broad skillset all leaning in to the same problem to bring the right perspective and to ultimately be successful in moving to microservices. Build a team that couples forward-thinking broadcast engineers with microservices-savvy personnel, or find a technology partner like Imagine who can provide the experience, expertise and engineering talent to achieve a successful deployment. 
The second step is to set goals, starting with small pieces of the solution set. The first steps should be isolated, easy to measure, and easy to replicate. Eksten suggests starting with compressed workflows, which allow everyone to get comfortable with failure/recovery modes and lower the overall infrastructure performance bar. One way to accomplish this is to start with a transcoder or encoder, both of which are flow-through solutions that do not require external timing and automation.
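As a minimal sketch of that kind of flow-through first step (assuming ffmpeg is installed on the host; the encode settings are purely illustrative, not a recommendation):

```python
# Minimal flow-through transcode worker: take an input file, produce an output,
# report success or failure. No external timing or automation is required,
# which is what makes it a low-risk first microservice to carve out.
# Assumes ffmpeg is available on the PATH; all settings are illustrative.
import subprocess
from pathlib import Path

def transcode(src: str, dst_dir: str, video_bitrate: str = "5M") -> Path:
    dst = Path(dst_dir) / (Path(src).stem + "_h264.mp4")
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-b:v", video_bitrate,
        "-c:a", "aac", "-b:a", "128k",
        str(dst),
    ]
    # A non-zero exit code raises CalledProcessError -- the failure/recovery
    # behaviour worth getting comfortable with early on.
    subprocess.run(cmd, check=True)
    return dst
```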
The third step is to begin building up the scope and complexity of the deployment. Start looking at bringing in the third-party services — like automation (if you are in playout) or monitoring solutions — that you will use to manage the overall service. Having a plan for orchestration, provisioning, monitoring, recovery and scale will allow you to consider more complex solutions.
The last step is to tackle the more complex solution-sets like playout.  Moving from the simple to the complex is achievable once you get into the habit of defining, designing, deploying, and measuring each individual functional aspect of the overall solution so that you understand where the dependencies are and what the recovery modes will be.




MPEG heads to the holograph

IBC


MPEG is promoting a video-based point cloud compression technology – and Apple is driving it.
https://www.ibc.org/tech-advances/mpeg-heads-to-the-holograph/3507.article
At its most recent meeting, at the beginning of October in Macau, standards body MPEG upgraded its Video-based Point Cloud Compression (V-PCC) standard to Committee Draft stage.
V-PCC addresses the coding of 3D point clouds (sets of data points in space with associated attributes such as colour), with the goal of enabling new applications including the representation of human characters.
In other words, avatars or holographs existing as part of an immersive extended reality in the not too distant future.
“One application of point clouds is to use them for representing humans, animals or other real-world objects or even complete scenes,” explains Ralf Schaefer, director of standards at Technicolor Corporate Research.
In order to achieve decent visual quality, a sufficient density of the point cloud is needed, which can lead to extremely large amounts of data. Understandably that’s a significant barrier for mass market applications – hence the demand for a workable lossy or lossless means of compressing the information.
Xtended Reality
V-PCC is all about six degrees of freedom (6DoF), or fully immersive movement in three-dimensional space, the capability which Hollywood studios believe will finally make virtual and blended reality take off.
Limitations in current technology mean Virtual Reality is restricted to three degrees of freedom (3DoF).
Companies are already switching their attention from VR to focus on augmented reality, mixed reality or in the new jargon, eXtended reality (XR).
For example, VR pioneer Jaunt, in which Sky and Google are investors, is jettisoning VR camera development to focus on its XR mixed reality computing platform. Jaunt recently acquired Chicago-based Personify, maker of a volumetric point cloud solution called Teleporter.
Apple has the most extensive AR ecosystem with which it is leading this field. Its ARKit framework targets developers wanting to create AR experiences viewable on iOS devices.
It is positioning itself as the destination for AR and blended reality experiences for the time when the iPhone, and smartphones in general, are superseded by a form of wearable goggles as the consumer interface for communication, information and entertainment. Microsoft (Hololens), Google (Glass using AR toolset Project Tango), Facebook (redirecting its Oculus VR headgear team toward development of AR glasses) and Magic Leap are among competitors for this next stage of internet computing.
It should come as no surprise then that Apple’s technology is reportedly the chief driver behind MPEG’s V-PCC standard.
“The point cloud solution that MPEG has selected is the one proposed by Apple,” confirms Thierry Fautier, president of the Ultra HD Forum and video compression expert.
Using existing codecs
MPEG is actually investigating two approaches to compressing point clouds. The other is based on geometry (G-PCC), which uses 3D geometry-oriented coding methods for use in vehicular LiDAR, 3D mapping, cultural heritage, and industrial applications.
What is important about the V-PCC initiative is that it attempts to leverage existing video codecs (HEVC is the base codec, although this could be swapped out for others if they prove more efficient), thus significantly shortening time to market.
“As the V-PCC specification leverages existing [commodity 2D] video codecs, the implementation of V-PCC encoders will largely profit from existing knowledge and implementation (hardware and software) of video encoders,” explains Schaefer.
The V-PCC specification is planned to be published by ISO around IBC2019 (Q3 2019) so the first products could be in the market by 2020.
“The latest generation of mobile phones already include video encoders/decoders that can run as multiple instances and also powerful multicore CPUs, allowing the first V-PCC implementations on available devices,” says Schaefer.
Already, the current V-PCC test model encoder provides compression of around 125:1, meaning that a dynamic point cloud of 1 million points can be encoded at 8 Mbit/s “with good perceptual quality”, according to MPEG.
Says Schaefer: “This is essentially achieved by converting such information into 2D projected frames and then compressing them as a set of different video sequences by leveraging conventional video codecs.”
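Working backwards from MPEG’s quoted figures gives a sense of the scale involved (the 30 fps refresh rate below is an assumption, used only for illustration):

```python
# Back-of-the-envelope check of the quoted V-PCC figures.
# Assumption: the dynamic point cloud is refreshed at 30 frames per second.
compressed_bps = 8e6          # 8 Mbit/s, as quoted by MPEG
ratio = 125                   # 125:1 compression
points_per_frame = 1e6        # 1 million points
fps = 30                      # assumed frame rate

raw_bps = compressed_bps * ratio                              # ~1 Gbit/s uncompressed equivalent
raw_bits_per_frame = raw_bps / fps                            # ~33 Mbit per frame
raw_bits_per_point = raw_bits_per_frame / points_per_frame    # ~33 bits per point

print(f"{raw_bps / 1e9:.1f} Gbit/s raw, {raw_bits_per_point:.0f} bits per point")
# -> roughly 1.0 Gbit/s of raw data, around 33 bits per point to cover geometry
#    plus colour attributes, which is why uncompressed delivery is impractical.
```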
It is the relative ease of capturing and rendering spatial information, compared to other volumetric video representations, which makes point clouds an increasingly popular way to present immersive volumetric data.
“A point cloud is a collection of points that are not related to each other, that have no order and no local topology,” explains Schaefer. “Mathematically, it can be represented as a set of (x, y, z) coordinates, where x, y, z have finite precision and dynamic range. Each (x, y, z) can have multiple attributes associated to it (a1, a2, a3, …), where the attributes may correspond to colour, reflectance or other properties of the object or scene that would be associated with a point.”
He continues, “Typically, each point in a cloud has the same number of attributes attached to it. Point clouds can be static or dynamic, where the latter changes over time. Dynamic objects are represented by dynamic point clouds and V-PCC is being defined for compressing dynamic point clouds.”
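Schaefer’s definition maps onto a very simple in-memory layout. The sketch below, using NumPy with colour as the only attribute, is purely illustrative:

```python
# A point cloud as Schaefer describes it: an unordered set of (x, y, z)
# coordinates, each carrying attributes such as colour. NumPy arrays are one
# obvious in-memory representation; a dynamic cloud is simply one such frame
# per time step. Values here are random, for illustration only.
import numpy as np

n_points = 1_000_000
positions = np.random.rand(n_points, 3).astype(np.float32)          # x, y, z
colours = np.random.randint(0, 256, (n_points, 3), dtype=np.uint8)  # R, G, B

# No ordering and no local topology: shuffling the rows changes nothing about
# the object the cloud represents.
perm = np.random.permutation(n_points)
positions, colours = positions[perm], colours[perm]
```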
Capturing point clouds
Point clouds are generally well suited for 6DoF immersive media applications as free viewpoint is natively supported and occlusions are avoided. On the capture side, point clouds are usually deduced from depth and/or disparity information from multi-view capture.
This includes lightfield systems such as the rigs of multiple GoPro cameras being tested in Google’s research labs.
“In current lightfield camera systems there is always a limitation in the number of cameras or in the number of micro-lenses when considering plenoptic cameras [such as the one developed at Lytro],” says Schaefer. “Whether it is calculated or measured, depth information can be associated to the texture acquired by the camera. As soon as there is texture and depth, the subsampled lightfield can be represented by a point cloud.”
V-PCC is part of MPEG-I, a broader suite of standards under development all targeting immersive media. Compression will reduce data for efficient storage and delivery which is essential in future applications. MPEG has also started work on the storage of V-PCC in ISOBMFF files which is the next step towards interoperability of such immersive media applications.
“In my opinion 6DoF will not be a consumer application [initially] but more a business enterprise application,” says Fautier. “Development will take time. There will also probably be need for cloud computing to assist the heavy computations. So, for me, 6DoF is an application we’d expect to see over fibre and 5G beyond 2020. After which, the sky is the limit.”


New forces enter the codec arena

IBC



Standards body MPEG is lining up two new codecs as stop gaps before its next-generation heavyweight codec VVC comes online, in a move which could both stimulate and muddy the market for live and on demand streaming.
As if the codec space wasn’t crowded enough, MPEG is entertaining the prospect of not one but two more standards for video delivery within the next couple of years.
Its recent call for proposals for two new codec technologies can be seen as an attempt by the standards body to force HEVC license holders, including the patent pool administrator MPEG LA, to reduce the terms of their license demands.
“Earlier this year the industry realised that new codecs were coming and started to panic but the result is that the market is getting even more fragmented,” says Thierry Fautier, chair of the Ultra HD Forum and VP video strategy at Harmonic.
“HEVC works and has 2 billion installed devices but the problem is the licensing. MPEG may feel that the more pressure that can be put on HEVC, the more it will give in.”
Samsung’s codec
The first proposal is for a new video coding standard “to address combinations of technical and application requirements that may not be adequately met by existing standards”.
As MPEG points out, coding efficiency is not the only factor that determines industry choice of video coding technology for products and services. There’s the cost implication too. To which one might add, politics.
The focus of this new video coding standard is on use cases such as offline encoding for streaming video on demand and live over-the-top streaming. The aim is to provide a standardised solution which combines efficiency similar to that of HEVC “with a level of complexity suitable for real-time encoding and decoding and the timely availability of licensing terms.”
Essentially this is what the Alliance for Open Media (AOM) is already doing with its AV1 codec.
If you are a consumer electronics manufacturer like Apple being charged even 1 dollar per device you sell to use the HEVC codec then that’s at least $200 million per year off the bottom line. In reality the cost could be more than this, added to which is the uncertainty around the actual licensing costs which certain other HEVC license holders will apply.
But that’s not all. Apple is a member of AOM but noticeably absent is its big rival Samsung. No surprise then that Samsung is the chief and, to our knowledge, only backer of this new video coding standard under proposal at MPEG.
V-Nova’s codec
The second codec being considered by MPEG is actually an existing technology which acts to enhance the deployment of hardware codecs processing MPEG-4 AVC/H.264 and HEVC.
MPEG is calling on the industry to propose solutions for ‘low complexity video coding enhancements’, explaining that the objective is to develop a codec with a base stream decodable by a hardware decoder, overlaid by a software layer which improves the hardware codec’s compression efficiency.
The target once again is live and on demand video streaming.
The argument here is sound: instead of waiting for a new codec to come along, what if it were possible today to add software which improves compression efficiency and saves broadcasters from having to rip and replace their existing (AVC, HEVC) codec investments?
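Conceptually, the dual-layer idea looks something like the toy sketch below: a base picture from the existing hardware decoder is upsampled and refined by a software-decoded residual. This is only an illustration of the principle, not a description of V-Nova’s actual Perseus algorithm.

```python
# Toy illustration of a base-plus-enhancement decode path. The base layer is
# whatever the hardware decoder (AVC/HEVC) produces; the enhancement layer is
# a small residual applied in software to restore detail. Purely conceptual,
# with stubbed-in data rather than real bitstream decoding.
import numpy as np

def decode_base_layer(bitstream: bytes) -> np.ndarray:
    # Stand-in for the hardware decoder output, e.g. a half-resolution frame.
    return np.zeros((540, 960, 3), dtype=np.uint8)

def apply_enhancement(base: np.ndarray, residual: np.ndarray) -> np.ndarray:
    # Upsample the base picture to full resolution (nearest-neighbour here,
    # just for simplicity) and add the software-decoded residual detail.
    upscaled = base.repeat(2, axis=0).repeat(2, axis=1).astype(np.int16)
    enhanced = np.clip(upscaled + residual.astype(np.int16), 0, 255)
    return enhanced.astype(np.uint8)

base = decode_base_layer(b"base-layer-bitstream")       # hardware path
residual = np.zeros((1080, 1920, 3), dtype=np.int16)    # software path
full_frame = apply_enhancement(base, residual)
```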
It turns out this is entirely possible, since the initiative is designed around the Perseus Plus technology devised by V-Nova.
“MPEG wants to standardise what V-Nova does,” says Fautier. “So far their success has been limited, one reason [being] that this is not [yet] a standards-based solution.”
Any other vendor can submit a proposal for MPEG evaluation but it’s likely that V-Nova’s will win the day since it’s already in the market and MPEG’s specs fit the Perseus design.
“AVC and HEVC are massively supported by various platforms and hardware solutions/assistance is provided on many devices,” says Christian Timmerer, who speaks for MPEG.
“However, it has been shown that software solutions can enhance quality significantly which can provide a better quality of experience for the end user at lower (or equal) bitrate but with some additional processing efforts to be done on the terminal side.”
V-Nova itself claims that at or below HD resolutions, Perseus improves H.264 performance by more than 40%, and at UHD resolutions improves HEVC performance by 70%.
Although its solution would not be royalty free, it will argue that its approach addresses the large installed base of AVC today such that broadcasters don’t have to change their decoder to HEVC or expose themselves to HEVC license terms.
“We welcome the efforts made by both MPEG and AOM to develop codecs, however, we are also conscious of the fact that lead times for new codecs are long and replacement cycles of devices in the field often even longer,” says Fabio Murra, SVP product & marketing, V-Nova.
“As such, this approach represents a game-changing opportunity, now and in the future, to accelerate the deployment of more advanced compression between the longer cycles of codec evolution.”
MPEG’s call for low complexity video coding enhancements is not only intended to enhance any existing video codec but could be used to provide an upgrade to any future codec.
“There is rapidly growing interest in enhancing compression efficiency, and doing so without necessarily requiring new hardware is something the industry and our customers find compelling,” states Murra.
Content Aware Encoding
It is an approach, though, that is competitive with Content Aware Encoding (CAE) for OTT, a technique that also does not need to change the decoder and is supported by iOS 11, Apple’s mobile operating system.
“We believe the most reliable approach is to focus on improving the codec implementation” - Mark Donnigan, Beamr
In contrast to encoding using ABR (Adaptive Bit Rate) where each resolution such as HD or SD is matched to a given bit rate, CAE can make more efficient use of bandwidth, particularly in live streaming scenarios.
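One crude way to picture the difference (the numbers and the complexity factor below are invented for illustration and do not represent any vendor’s algorithm): a fixed ladder gives every title the same bitrates, while a content-aware encoder scales those targets to the measured complexity of the content.

```python
# Illustrative contrast between a fixed ABR ladder and a content-aware one.
# The complexity factor would in practice come from analysing the content
# (motion, grain, detail); the figures here are made up for illustration.
FIXED_LADDER = {"1080p": 6000, "720p": 3500, "480p": 1800}   # kbit/s

def content_aware_ladder(complexity: float) -> dict:
    # complexity ~0.5 for easy content (talking heads), ~1.0 for hard content
    # (sport, confetti). Easier titles get lower bitrates at the same quality.
    return {rung: round(kbps * complexity) for rung, kbps in FIXED_LADDER.items()}

print(content_aware_ladder(0.6))   # e.g. a studio drama: {'1080p': 3600, ...}
print(content_aware_ladder(1.0))   # e.g. live football: ladder unchanged
```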
The leaders in this field are Harmonic (which markets EyeQ content-aware encoding software), Brightcove (which includes CAE in its online video platform) and Beamr (which deploys a patented content-adaptive bitrate solution, CABR, over HEVC).
“Beamr can reach high quality video at additional savings of as much as 40% over a comparable HEVC encode using our CABR mode, which in my mind obfuscates the need for an ‘enhancement’ layer,” says Mark Donnigan, vice president, marketing, Beamr.
The Israeli and Palo Alto-based outfit has achieved greater than a 50% improvement in the bitrate efficiency of its HEVC codec from first launch five years ago. Since its tech is integrated with the rate-control of the encoder, in theory CABR could be used with a dual-layer approach.
However, Donnigan maintains that since CABR often leads to bitrate reductions of an additional 30-40%, “a user adopting CABR won’t likely need the added implementation complexity of a dual-layer solution to meet their objective for high quality at lower bitrate.”
He says, “The question that must be asked is whether the benefits of the additional complexity such as the MPEG committee is calling for, are worth it? We believe the most reliable approach is to focus on improving the codec implementation, which when combined with content-adaptive technology such as CABR, will yield a benefit for all devices with HEVC decoders today - no updates required.”
He argues that the importance of playback compatibility is often overlooked by every new codec or technology enhancement.
“As a striking example, this is the situation right now with AV1, where there are encoding vendors offering solutions for creating AV1 compatible files, but - outside of a very narrow combination of a desktop computer and a beta browser - the files cannot be played back where consumers want to consume the content such as on a mobile device, TV, game console or media player.
“The fact is, with a well-designed implementation of an existing codec such as HEVC, [new codecs] are not required,” he says.
Donnigan believes that MPEG’s proposal for low complexity video coding enhancements is “an academic initiative” that will not have commercial appeal because the complexities of developing and implementing it outweigh the codec efficiency that he says Beamr delivers today.
“And we deliver efficiency completely inside the standard of HEVC with no changes required to the decoder or player application.”
Whatever happens with the Samsung or the V-Nova promoted proposals, it will be many months before either gains standard status as the proposals move within MPEG. It’s not a sluggish organisation – far from it. It’s got multiple standards efforts on the go all the time for which the process includes working drafts in multiple iterations until a final draft gets the vote of national bodies. According to Timmerer, that could take two years (or more) depending on the number of contributions.
By then, in 2020, MPEG’s main next-gen codec initiative will be about due. This is VVC (Versatile Video Coding), which could well trump every other codec out there.
What’s more, there’s now an independent body to police the whole situation and prevent a repeat of the mess the industry got itself into with the lack of clarity around HEVC licensing.
The Media Coding Industry Forum (MC-IF), launched at IBC2018, has companies like Canon, MediaKind, Sony and Nokia as well as Apple on board. HEVC Advance, one of the HEVC patent holding pools, is also a member.
Speaking without his MPEG hat, Timmerer admits, “We are entering a situation with multiple codecs and this might be a problem in terms of market fragmentation but usually the market will make its decision one way or another.”
Referring to AV1 and other codec upstarts, Fautier says “VVC will wipe out all these monkey codecs, assuming it can be licensed to everyone’s satisfaction. Development has to go hand in hand with common sense.”

Ang Lee’s high frame rate Gemini Man “the future of cinema”


IBC
Oscar-winning cinematographer Dion Beebe has described the ultra-high frame rates used to film Ang Lee’s sci-fi actioner Gemini Man as part of the future of cinema.
Using speeds in excess of the standard 24 frames a second has polarised opinion, but Dion Beebe ACS ASC believes a new generation will accept rates as high as 120fps as the norm.
Gemini Man, which stars Will Smith and Clive Owen, was shot at 4K resolution in stereo 3D and 120 frames a second.
“When Ang asked me to join him on Gemini Man I was curious about what had drawn him to this format,” he says. “In the process of making the film I’ve realised that HFR is without question part of the future language of cinema.”
“For older generations of audience and filmmaker 24 frames is the world of cinema but there is a new wave coming out of video gaming who are far more familiar with viewing content at 60, 120 and even 240 frames.
“For this audience, [HFR] is not a new experience and the more likely it will be for high frame rates to become a theatrical standard.”
Beebe has embraced new techniques before, notably using digital HD to conjure an unsettling LA landscape for Michael Mann’s neo-noir Collateral in 2004.
He describes the extreme clarity of Gemini Man’s visuals as “incredibly vivid and confronting”.
“When you watch it for the first time it is both alien and captivating,” he adds. “Ang is not pursuing high frame rates for 2D. It’s about the 3D experience and eliminating motion blur through extreme frame rates.
“When you achieve this, it is truly like looking through a massive window, one in which you can look left and right and into the distance in such a way that the depth and detail in the picture becomes a whole new element in your storytelling.”
The artistic merits of high frame rates have divided audiences and critics. Previous films include The Hobbit: An Unexpected Journey, shot in 3D at 48fps, and Lee’s Billy Lynn’s Long Halftime Walk. The latter was filmed in 4K 3D at 120fps but could only be shown in five theatres (two in the US, two in China, one in Taiwan) with specially customised projection equipment.
While Billy Lynn’s had a wide release in other formats (and was the first 4K 60fps Blu-ray release), theatrical screenings of Gemini Man at the full-fat 4K 3D 120fps are likely to be similarly restricted.
“Ang told us before starting the movie that ‘We are not good enough for this format because we don’t know enough about it.’ He is right. We are trying to find and understand the language that high frame rates bring.”
Gemini Man, about an assassin who gets stalked by his own clone, is a Skydance Media and Jerry Bruckheimer production for Paramount and due for release October 2019.
Beebe’s credits also include Chicago, Miami Vice, Edge of Tomorrow, Green Lantern and Mary Poppins Returns. He won the Academy Award in 2005 for Rob Marshall’s Memoirs of a Geisha and has just started work on a CGI and live action remake of animated favourite The Little Mermaid for the same director.


Monday, 17 December 2018

What will happen to the media cloud services market in 2019?

Content marketing Rohde & Schwarz 

There’s a perfect storm brewing. Numerous forecasts point toward the rapid growth of video, driven by demand for video over mobile, often in tandem with breathless predictions for the rollout of 5G, the next generation of wireless broadband. The choice for media companies is to build the physical infrastructure to cope, or go to the cloud.
Cisco’s latest Visual Networking Index, for example, suggests video will make up 82% of all IP traffic in just three years’ time. Ericsson’s recent Mobility Report predicts mobile video will grow 35% annually through 2024 to comprise 60% of all mobile data.
Equally important is that viewers no longer distinguish between content delivered by TV or OTT, regardless of screen. With more video traversing online networks than ever before, viewer expectations of quality are set to become as much a major headache for service providers and content providers as the relevance of personalized content and individual user experiences.
There’s money to be made if media organisations and service providers can capitalise on this clear trend, but to do so they need to rip and replace their approach to production and delivery infrastructure (not to mention the hardware).
Put simply, traditional approaches to video processing and delivery cannot keep pace with the changing nature of video consumption and the rising expectations that flow from it.
The new digital methodology requires retooling infrastructure, business models and, importantly, internal engineering culture for the more operationally agile software-first environment of the cloud.
Playout may have evolved less quickly to the cloud than other areas of the business, since channel requirements are more predictable over longer periods than bursty processes like postproduction.
What broadcasters want is the ability to launch new channels quickly – such as a temporary channel around a quadrennial sports event - and to adapt services for existing ones to target different audiences or regions for a short period of time.
That isn’t possible with a traditional playout model reliant on single-purpose machines which need purchasing and installing many months in advance.
The functions for channel deployment, from video servers to compliance recorders, have been discrete appliances taking up rack space (and demanding heating, cooling and real estate). These tools are being re-tuned by vendors across the piece to work on commodity storage and compute systems using standard IP/IT protocols to interoperate with each other at a microservices level.
As more and more elements of the chain, from automation to graphics, logo insertion and transcoding, become virtualized, the potential for pop-up, localized or trial channels and even just-in-time broadcasting comes closer to being realised.
The trick – and it’s by no means an easy feat – is to bring broadcast-grade capabilities into the cloud at a rental or pay-as-you-go price with no upfront expenditure. A successful transition will enable video providers to tap into a number of opportunities to improve the performance, efficiency, and costs of their video workflows.
In this context we need to see the Rohde & Schwarz acquisition of Pixel Power. Its software-based IP solutions enable dynamic content to be delivered more efficiently for linear TV, mobile, online and OTT/VOD. Its solutions are virtualizable for either private or public cloud and deliver on the new OPEX business models which are core to the broadcast technology transformation.
A perfect solution for a perfect storm? The market will decide.

AI's ability to fake video comes on in leaps and bounds

RedShark
First crudely, and then in more seamless fashion, videos have appeared in which a person’s face has been substituted for another. Such face-off counterfeiting techniques surfaced prominently a couple of years back when celebrity faces were substituted onto those of porn actors. The words and facial gestures of politicians have also been manipulated in this way.
Inevitably, the technology has advanced such that it will soon be possible to map not just faces but whole bodies onto the videoed movements of others.
Describing the advance in a two-minute YouTube video is Károly Zsolnai-Fehér, a research scientist at the Institute of Computer Graphics and Algorithms at the Vienna University of Technology.
Target videos of people playing tennis or performing chin-ups have been overlaid and effectively replaced by the body shapes and faces of others, using a machine learning technique known as a generative adversarial network (GAN) to create the fake videos.
Remarkably, the algorithm is able to synthesise angles of target subjects that it hasn’t seen before; for instance, it correctly guessed and added in details like a belt around the waist despite not being shown a directly corresponding image.
The resulting full body characters can also be put in a virtual environment and animated.
Zsolnai-Fehér admits that the work is raw and experimental with issues with movement, flickering and occlusion giving the trick away.
Anyone looking at this footage can tell in a second that it’s not real, but the reason he is so excited is that it’s a big leap towards making deepfakes a viable concept.
“It will provide fertile ground for new research to build on,” he says. 
A bit down the line it will work in HD and look significantly better, he predicts, particularly when CGI lighting and illumination techniques are applied.
Target applications are movies, computer games and telepresence, but there will be others warning of the dangers of hoax videos. Reddit, Twitter and Pornhub have banned deepfakes, but will their AIs be sophisticated enough to keep pace with research in this area?

Friday, 14 December 2018

Effective File-based QC and Monitoring

InBroadcast
Expanding viewing platforms and evolving regulations require better content quality control and monitoring of all parts of the delivery chain.
That broadcasting has become more complex with the advent of OTT services is an understatement. Playout is no longer the final point of quality control. Going further down the content delivery chain, CDN edge points, targeted ad-insertion, multi-language support, and event-based channels require the expert scrutiny of broadcast engineers. The need to manage a more complex ecosystem with an ever-growing list of logging and compliance requirements has become a priority for content owners and regulators alike.
The sheer scale of the problem is compounded by the fact that there is almost no point in trying to monitor those streams back in the facility.
“When it comes to monitoring live channels over multiple OTT streams and ABR profiles, it is no longer practical for display panels to mirror all the possible video outlets,” argues Erik Otto, CEO, MediaProxy. “There is a wide choice of devices and delivery outlets that need to be supported and viewers expect the same level of service, no matter how they are accessing content. The days of tracking one channel in one format and resolution are long gone.”
Though content may be correctly streaming from the playout encoder, an edge location may experience its own issues, which could be local or originating from within the CDN. Issues such as local blackouts, bandwidth discrepancies and re-buffering may not be immediately apparent to OTT streams downstream of the CDN.
“Also, what happens if national and regional feeds deviate?” poses Otto. “Or when the wrong program, or an incorrect version, is played out? What happens if the wrong graphics or tickers are mistakenly overlaid on a live broadcast? These issues are becoming increasingly critical for comparing traditional linear services with OTT representations and are very difficult to pinpoint by only looking at OTT playout.”
Having a large volume of content increases the chance for errors. Some of the common issues that can impact streaming quality of experience include poor video quality caused by over-compression during content preparation, profile alignment issues causing glitches in playback when the player switches bitrates, encryption-related issues, and server-related problems such as HTTP failures caused by client or server errors. Issues can also occur during the delivery phase, causing long join times or frequent stalling and switching during playback.
Since OTT content is delivered over the internet, which is unmanaged, the quality and bandwidth of the delivery change based on network congestion. Broadcasters have minimal or no control during the last mile delivery for OTT content, so quality is never guaranteed. In addition, content protection becomes more important since data flows on a public network. And with multiple stakeholders involved in the delivery chain (CDNs, ISPs etc.) as well as an evolving technology, it can be hard to identify and resolve QoS issues.
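A very small availability probe along these lines might fetch an HLS media playlist from a CDN edge and check that each segment returns promptly with an HTTP 200. The sketch below (placeholder URL; real monitoring products go much further, into full QoE measurement) shows the idea:

```python
# Minimal sketch of an edge-side availability probe for an HLS stream: fetch
# the media playlist, then request each segment and flag HTTP errors or slow
# responses. The URL is a placeholder. Requires the 'requests' package.
from urllib.parse import urljoin
import requests

PLAYLIST_URL = "https://edge.example-cdn.com/live/channel1/720p.m3u8"

def probe(playlist_url: str, max_seconds: float = 2.0) -> None:
    playlist = requests.get(playlist_url, timeout=5).text
    # Lines that are not tags are segment URIs (relative or absolute).
    segments = [line.strip() for line in playlist.splitlines()
                if line.strip() and not line.startswith("#")]
    for seg in segments:
        url = urljoin(playlist_url, seg)
        resp = requests.head(url, timeout=5)  # some edges require GET instead
        if resp.status_code != 200:
            print(f"HTTP {resp.status_code} for {url}")
        elif resp.elapsed.total_seconds() > max_seconds:
            print(f"slow segment ({resp.elapsed.total_seconds():.1f}s): {url}")

probe(PLAYLIST_URL)
```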
“Deploying an OTT monitoring solution that works in tandem with a file-based QC tool will allow broadcasters to quickly and correctly address any issues, all the way from ingest to delivery,” says Anupama Anantharaman, vp product marketing at Interra Systems.
When it comes to compliance, automated quality control (AQC) has become critical for just about any media facility doing business outside their immediate territory.
Explains Howard Twine, director of software strategy, EditShare, “If you are delivering to Japan or the UK, you need to check against PSE levels. With the UK, it’s DPP AS-11 for broadcast material, and for many OTT platforms you need to comply with IMF standards. And of course, there is verifying files on the way into the facility to ensure that valuable post time is not wasted on non-compliant files. AQC is a time and money saver all around and simply a must for delivery workflows.”
OTT delivery follows on from the file-based content preparation workflows and the good news is that the latest OTT QC and monitoring solutions are well-suited to address file-based content preparation and distribution workflows.
Interra’s Baton, available both on-premises and in the cloud (hybrid QC), supports a combination of automated and manual QC checks. It integrates with cloud computing infrastructure like Amazon S3, IBM Cloud Object Storage, Avid Interplay and Google Cloud, and with linear as well as streaming media workflows.
Baton+ is an add-on tool that offers QC trend analysis across multiple Baton systems to improve workflow efficiency. Among its benefits are time-based reports to analyse daily, weekly and monthly QC data. Baton Winnow is AI/ML-based software for classifying content based on specified criteria, such as explicit scenes, violence and profane language.
To address the complexity, Mediaproxy LogServer enables operators to log and monitor outgoing ABR streams as well as Transport Stream and OTT stream metadata including event triggers, closed captions, and audio information, all from one place.
Red Bee Media recently selected LogServer for compliance monitoring at the world’s first software-only uncompressed playout deployment. LogServer MoIP (Media over IP) software interfaces with Red Bee’s playout system and will also provide caption, loudness and SCTE trigger detection and monitoring.

EditShare’s AQC solution QScan Pro is for mid-sized post facilities and allows each department, such as audio, grading, VFX and editing, to set up parameters to test its files, with up to four files being tested simultaneously. QScan Max is the enterprise version allowing a large operator to test hundreds of files concurrently.
This solution now supports high-speed transfers with Aspera Orchestrator via a plug-in. Facilities can set up AQC workflows that ensure files meet compliance coming in and leaving the facility, and apply a patent-pending QScan Single-Pass Analysis process at any point during the workflow.
“This gives businesses confidence that what they are handing off is spot on - whether it’s to the post department for colour grading or to an OTT provider,” says John Wastcoat, vp, business development, Aspera.  “In an industry where deadlines are tightening, there is no room for file errors, so this integrated solution can be a fundamental part of our customers’ workflows.”
The test requirements for SDI systems are relatively well known and documented. However, as the industry transitions to IP-based technologies a whole series of new measurements is required to understand what is happening on the network. Timing is particularly important when dealing with the ST2110 standard where the video, audio and data are sent in separate streams. Being able to test the integrity of the separate data paths is essential to ensuring that the transmission reliability is maintained.
The need to help customers deal with this complexity in part drove the development of Tektronix hybrid IP/SDI monitoring line Prism.
“As the industry matures and we see more widescale IP deployments we’re going to add all the things to the platform that I think people expect Tektronix to provide,” says Charlie Dunn, general manager for Tek’s Video Product Line. “So, all the operational concerns, all the QC concerns, all of the compliance concerns are all going to be part of what we call the PRISM platform.”
To support this transition, Tektronix will now, as standard, include all the necessary connectivity needed for SMPTE ST 2022-6/7, ST 2110, and PTP (ST 2059) on the latest additions to the Tektronix Prism line. The media analysis instrument is packed with familiar editing and live production features like Waveform, Vector, and Diamond and has a 25G upgrade path.
Sentry, the firm’s QoE and QoS video network monitoring solution, now has a way of providing picture quality ratings that closely correlate to the viewer’s actual experience. TekMOS uses machine learning techniques to generate a Mean Opinion Score (MOS) for the content along with reasons for not achieving a perfect score. This, says Tektronix, reduces the guess work involved in diagnosis and enables quick and effective corrective action.
The company’s new pricing options include subscription-based pricing per stream and on-demand options for live and VOD quality assurance.
Vidchecker is Telestream’s dedicated QC system that functions almost entirely without human interaction (outside of reviewing the final QC reports). Simply select the file(s) you’d like to QC test, choose the template to test against, and continue working while Vidchecker does all the heavy lifting in the background. If you choose, Vidchecker can fix many common errors, exporting a new file (which passes QC) that can then be delivered to the client, distributor, or broadcaster.
Telestream offers two different versions: Vidchecker and Vidchecker-post. The latter is identical in functionality to the higher-end Vidchecker but is limited to processing a single file at a time (as compared to four simultaneous files with the full version) and utilizing only 8 logical CPU cores. Vidchecker-post is also restricted from using Vidchecker Grid, which enables additional processing nodes from your network to increase processing speed.
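Stripped to its bones, the pattern behind any such tool is: probe each file, compare the result against a delivery template, and report. The sketch below uses ffprobe as a stand-in analysis engine; it is not Telestream’s API, and the template values are examples only.

```python
# Generic file-based QC sketch: probe a media file and compare a few basic
# parameters against a delivery template. Uses ffprobe (part of FFmpeg);
# this is an illustrative stand-in, not any vendor's product API.
import json
import subprocess

TEMPLATE = {"codec_name": "h264", "width": 1920, "height": 1080}  # example spec

def probe_video(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,width,height",
         "-of", "json", path],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)["streams"][0]

def qc_report(path: str) -> list:
    stream = probe_video(path)
    # One line per mismatch between the template and the probed file.
    return [f"{key}: expected {want}, found {stream.get(key)}"
            for key, want in TEMPLATE.items() if stream.get(key) != want]

failures = qc_report("master_delivery.mov")
print("PASS" if not failures else "\n".join(failures))
```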
At IBC this year, Qligent introduced Vision-VOD, a new automated, file-based solution for front-end QC and back-end QoE verification of VOD content.
“Few companies offer both upfront file-based QC and back-end/last-mile VOD content verification, and only Qligent offers a service capable of supporting large enterprise-level deployments in a SaaS-based model,” said Ted Korte, COO, Qligent, at launch. “Vision-VOD is virtualized for deployment in large-scale networks to quickly ramp up end-to-end verification of VoD-based networks, without the need to install, train and staff the operation. Its most important value is removing dependency on human resources, while verifying the content quality and automating business-critical procedures.”
Venera Technologies’ Pulsar is a file-based AQC system which, combined with the firm’s Rapid add-on module, can be used to perform quick scanning, QC, auto sorting and in-depth verification at any stage of the workflow. Among its attributes are a claimed 6x faster-than-real-time processing of HD files and the performance of technically complicated checks such as Field Dominance, Cadence, Digital Hits or Active Aspect Ratio.
EditShare has implemented IMF (Interoperable Master Format) package testing into QScan, its AQC range based on the acquisition of Quales technology. In addition, all QScan models carry compliance certification from the DPP and AMWA, which includes support for PSE testing. QScan aids in the IMF QC workflow by detecting the existence of the corresponding XML files (CPL, PKL, OPL, AssetMap, Volume Index), reading the contents and providing information about the structure of the entire IMF package (IMP).
Twine elaborates, “It is particularly interesting the way QScan shows the information of these XML files. QScan provides a timeline view of the CPL where all essence files are described, thus helping the user better understand the structure of the whole IMP. This approach educates while simplifying the entire process.”
QC Checks for UHD HDR Content
With HDR TV shipments expected (by ABI Research) to surpass 4K TV shipments by 2020, attention is turning to the potential for the wider colour gamut to cause more QC challenges.
Comprehensive HDR checks include frame-maximum light level, frame-average light level, light level over the entire content, local and global contouring, contrast loss due to panel mismatch, and customised checks for compression artefacts. These checks should incorporate all the existing HDR standards like Dolby PQ, HDR10, HDR10+, HLG and so on.
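The first two checks on that list, frame-maximum and frame-average light level (commonly reported as MaxCLL and MaxFALL), reduce to simple per-frame statistics. A hedged sketch, assuming frames have already been decoded to per-pixel luminance in nits:

```python
# Sketch of the frame-maximum / frame-average light level checks mentioned
# above (MaxCLL / MaxFALL style statistics). Assumes frames have already been
# decoded and converted to per-pixel luminance in nits; real QC tools also
# handle colour volume, contouring and metadata validation.
import numpy as np

def light_level_stats(frames):
    """frames: iterable of 2D NumPy arrays of per-pixel luminance in nits."""
    max_cll = 0.0    # maximum content light level across all frames
    max_fall = 0.0   # maximum frame-average light level
    for frame in frames:
        max_cll = max(max_cll, float(frame.max()))
        max_fall = max(max_fall, float(frame.mean()))
    return max_cll, max_fall

# Example with synthetic frames peaking around 1000 nits:
frames = (np.random.rand(1080, 1920) * 1000 for _ in range(5))
print(light_level_stats(frames))
```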
Interra addresses this with Baton. Tektronix’s Aurora file-based QC system can be used to validate 4K HDR content, and Venera’s support for HDR includes reporting and analysis of HDR metadata, allowing users to check its correctness.