Wednesday 29 April 2020

Powering on to an 8K future

IBC
2020 was supposed to be the year when 8K seeped into consumer consciousness as the new premium broadcast experience. But the postponement of the Tokyo Olympics has derailed the plans of TV display makers like Sharp, who were looking to capitalise on the interest, although it has not stopped the launch of UHD-2 equipment.
In fact, 8K is gaining ground as an acquisition format for high-end TV series and films. Season 2 of Amazon Prime’s psychological drama Homecoming and season 4 of Netflix’s runaway hit Money Heist were shot at 8K and 7K respectively, for a variety of creative reasons. Mank, David Fincher’s feature biopic of the co-writer of the iconic Citizen Kane, is also shot in 8K, and in black and white. The Eddy, a Netflix musical drama from La La Land director Damien Chazelle, shot most of its episodes with the Red Helium 8K sensor, cropped to mimic capture on S16mm.
Producers can’t sell productions delivered in less than 4K to Netflix and it stands to reason that Netflix will be among the first streamers to offer an 8K option. When it does, it will have a catalogue of 8K material in the bank.
At this moment, though, if you want to record 8K in cine quality your main options are Red cameras or the Panavised Millennium DXL. As good as they are, the filmmaking community is excited by the prospect of Canon’s EOS R5. Reckoned in some quarters to be as revolutionary a Canon product as the 5D Mark II, which debuted over a decade ago, the EOS R5 promises to put 8K into the hands of the prosumer.
The R5 is a mirrorless full-frame sensor camera able to record 8K Raw at up to 30fps (in 4:2:2 10-bit C-Log and HDR PQ) and 4K at 120fps (ditto) to onboard CFexpress and SD cards. It can also output 4K at up to 60fps via HDMI.
As others have pointed out, on raw specs alone the camera seems to outperform even high-end cinema gear like the €35,500 full-frame Sony Venice, which maxes out at 6K/30p and 4K/60p, both externally recorded.
What’s missing are details of price and release date, but the camera will be welcomed by filmmakers whose only current option for cine-quality 8K recording is Red.
6K is the new 4K
While 8K is still exotic, there is a trend toward shooting at resolutions beyond 4K for high-end documentary and cine work. The additional data again provides scope to post-manipulate the image while still delivering in UHD.
Among cameras offering this are the EOS C500 Mark II (5.9K); Sony’s PXW-FX9 XDCAM which boasts a 6K full-frame sensor plus colour science inherited from the Venice camera; Blackmagic Design’s 6K, S35-sized sensor version of its Pocket Cinema Camera which allows users to fit EF lenses without the need for a converter; and Panasonic’s Lumix S1H full-frame mirrorless camera.
Blackmagic recently made the PCC more attractive by reducing its price, and by making the 6K and 4K versions compatible with BMD’s latest ATEM Mini switchers for live production.
Switcher control of up to four cameras and their parameters, lenses and tally lights via an HDMI connection enables a professional live studio workflow. It’s a package Blackmagic is angling at the current demand for multi-camera live streaming from home broadcast studios.
Panasonic Lumix S1 users will soon be able to output 12-bit Raw video over HDMI via the Atomos Ninja V HDR monitor-recorder. The result is claimed to be the highest quality video images ever seen from a mirrorless camera.
The new firmware enables the output of full-frame 5.9K at rates up to 30fps, plus Super35 C4K (4128x2176 with a 17:9 aspect ratio) at rates up to 60fps. It can also record 3.5K anamorphic 4:3 at up to 50fps. Both updates will be provided free.
Last year Atomos announced its first 8K product, an update to its Neon monitor-recorder enabling recording and playback of 8K ProRes Raw. Jeromy Young, Atomos CEO, hailed the development, saying: “The detail is incredible, with enough resolution to reframe for 4K and 6K and even enable virtual multi-camera angles from one shot.” Still in development, the product has yet to make its debut.
The Sony Venice is finding its feet as a serious cine-quality production tool, with TNT series Snowpiercer and new Netflix drama Hollywood both shot using the camera. A recent firmware upgrade means it can now capture HFR at 6K 90fps and output HD ProRes 444 up to 30fps to SxS cards.
8K post production
Productions need the flexibility to work with and combine content spanning a range of resolutions and dynamic ranges, from HD SDR to 8K HDR and everything in between.
“There’s a misconception that 8K is vastly more expensive than it actually is,” says AJA director of product marketing, Bryce Button. “Generally, moving to 8K is an incremental cost, especially if you’re already working in 4K or have worked in stereo 3D. The biggest expense often comes with storage and moving large volumes of data, but the strides made by the industry to support 4K and S3D have provided a strong foundation to support the data needs that 8K workflows require.”
AJA offers I/O solutions like the KONA 5 to facilitate downconversion and monitoring of 8K content on 4K displays, whether for editing or other tasks.
Cinegy’s embrace of 8K goes back to IBC 2015 when it introduced the Daniel2 codec, capable of decoding 16K video at 280fps using a now obsolete Nvidia Quadro M6000.
Since then, in the words of Jan Weigner, co-founder and CTO, “we’ve been optimizing the hell out of it” and integrating the codec across its software products. As a logical follow-up to other recent releases, Cinegy Multiviewer 15 joins Capture PRO and Air PRO in a collection of 8K-supporting solutions.
‘SDI must die!’ has long been Cinegy’s motto, and the company continues to believe in it. The new Multiviewer will support SDI, but Cinegy prefers to promote IP options including NDI, as well as delivery over the public internet using the SRT protocol.
Live 4K / 8K
With NHK’s showcase for 8K TV on hold until summer 2021, BT Sport may find itself with the world’s first regular live broadcast of 8K UHD. It has announced it will launch an 8K service starting with the 2020/21 European football season. The date for that is still to be determined, of course, but the English Premier League remains on track for its scheduled mid-July kick-off.
It’s probable that its 8K HDR coverage will use a handful of native 8K cameras while maintaining a regular multi-camera 4K HDR plan, perhaps upscaling some 4K positions. In live demos, the broadcaster has used Ikegami’s SHK-810 8K cams. These employ a 33-million-pixel Super 35 CMOS sensor with a PL lens mount and have the same operational routines as existing Ikegami HD and 4K broadcast camera systems. Production-related functions include lens aberration correction and 8K focus-assist.
For 4K, Ikegami has just launched the UHL-4 camera, which carries the same 4K-native sensors as its UHK-430 and UHK-435 Unicam XE series. 4K signal outputs are via 12G-SDI with an optional 3G-SDI quad link. The camera incorporates three 2/3-inch 8-megapixel CMOS sensors, each capturing 4K-native 3840x2160 images with high sensitivity and a high signal-to-noise ratio.
HDR is now ubiquitous in the broadcast studio and shoulder camcorder line-ups of the major broadcast camera brands. The aim now is to simplify workflows to ensure operators are as comfortable using HDR as they are with standard DR production.
Recently unshackled from Belden and with venture capital backing, Grass Valley seems to have a new lease of life. Its ‘virtual’ NAB line-up concentrates on upgrading its live production technology to full 4K IP workflows.
Marco Lopez, Grass Valley’s svp of live production, says “The demand for increased production bandwidth has created workarounds that in the past have ultimately slowed down and put limits on the workflow.”
The GV K-Frame XP “eliminates the need for compromise”: in 4K UHD there is no reduction in I/O count, no reduction in M/Es, keyers or DPMs, and no change in operator workflow.
LiveTouch 1300 is also said to simplify 4K UHD requirements with “unique” dual redundant 50G SMPTE ST 2110 IP interfaces (NMOS IS-04 and IS-05) “that allow operators to use standard IT network connections to quickly identify and connect the system.” It is designed to facilitate 4K UHD over IP for remotely produced replays.
The company also launched the LDX 100, a high speed, native UHD camera built specifically for IP workflows. It connects into the contribution network, eliminating the need for an external control unit.
Recording 8K video requires lenses with exceptionally advanced optical performance. Canon unveiled a pair of 8K zooms last November, both designed for cameras with 1.25-inch sensors: the UHD-DIGISUPER 51 box field zoom lens and the 7x10.7 KAS S portable version. Not to be outdone, Fujinon has announced a new box lens and a portable lens of its own.
The 66x zoom of Fujinon’s HP66x15.2-ESM box lens is claimed to be the highest zoom magnification on any 8K lens to date, with a focal length reaching 1000mm. Its release remains timed for this Olympic summer. The smaller Fujinon HP12x7.6ERD-S9 claims the world’s widest focal length for 8K at 7.6mm, with deliveries due around September.
Both lenses can be used with the focus position demand unit (EPD-51A-D02/F03), released by Fujinon in March, to achieve precise focusing. The combination should simplify shooting 8K video, which demands an advanced level of focusing precision.
Sony has had its 8K systems camera, the UHC-8300, on the market for over two years but would appear to have had few orders for it outside Japan. It is, however, enhancing its range of cameras for live remote production recording 4K 60p. Due in August, the SRG-XP1 (robotic point of view) and SRG-XB25 (box cam) offer a wide-angle lens and optical zoom respectively, with remote control and signal distribution over IP using NDI-compatible hardware or software. The SRG-XP1 POV camera can capture a 100-degree horizontal viewing angle, making it particularly useful for reality shows and esports.

Tuesday 28 April 2020

The winners will be those using the full potential of cloud

Blackbird
The clock cannot be wound back. As governments around the world begin to relax some of their lockdown restrictions it is clear that the crisis has catalysed some fundamental work-life changes.
Even when an effective treatment for COVID-19 becomes widely available, business and society will not simply return to normal.
Beyond an acceleration of existing trends – such as the super-charged use of bring your own device (BYOD) business technology and remote production workflows – there are deep-seated paradigm shifts.
In the short term, broadcast and media organisations will be forced to take a more wholesale digital approach to cope with the increased risk of physical contact.
But in the post-pandemic economy it will be companies that have moved to the cloud which will survive and thrive.
Resilience will be at the forefront of every company strategy going forward, yet it is greater agility enabled by cloud that will ensure competitiveness as well as an ability to respond to the unexpected. To achieve this, businesses will have to re-evaluate where they must be strong and where they must be flexible.
Outdated practices will be culled. Technologies that allow true remote over the internet production, management and distribution of content will be prioritised.
Technology suppliers are urged to transition to as-a-service offerings to meet the needs of revenue-starved clients.
In this new era of business the watchwords will be agility, scalability and automation. According to the World Economic Forum, those businesses that have these capabilities now will be the winners.
Those that have designed their solutions to use the full potential of cloud computing will not buckle under the pressure, it declares.
Only the cloud provides the financial and operational flexibility allowing production to scale and shrink in response to sudden demand. Only tools built to work natively in the cloud can enable users to scale effortlessly, ensure content quality and drive massive efficiencies.
Needing only limited bandwidth to use, and ready to deploy in Amazon Web Services (AWS), Microsoft Azure or Google Cloud, Blackbird is being used by blue chip customers such as IMG, A+E Networks, Deltatre, Peloton, US Department of State and Eleven Sports to transform their future today.
The platform’s powerful professional capabilities enable the ingest, edit, publishing and monetizing of video through any browser from small to medium to enterprise level. Blackbird technology dramatically reduces the cost and time of content creation and distribution enabling media organizations to rapidly go to market and reach viewers voraciously consuming video online.
We will emerge from this period wiser and more connected as a global society but only those engaging the full potential of cloud will emerge stronger.

Monday 27 April 2020

How Did Damien Chazelle Convince Netflix to Let Him Shoot 16mm?

No Film School

https://nofilmschool.com/damien-chazelle-convince-netflix-16mm

In 2014, Netflix began shooting and delivering all of its Originals in 4K, primarily, it explained, to future-proof content as more and more of us replaced HD with UHD capable TVs. Not more than 10% of a show’s runtime would be allowed in anything less than a 4K UHD capture, a demand from which it has rarely if ever deviated – until now.
The Eddy, a new 8-episode musical drama set in Paris landing on 8 May, has had a full quarter of its production filmed on 16mm. It’s testament to both the creative clout of executive producer and director Damien Chazelle as well as the streamer’s willingness to work with creative intent that this was greenlit.
Conceived by Grammy-winning songwriter Glen Ballard, scripted by BAFTA-winner Jack Thorne (This Is England) and chosen by Chazelle as his first TV project following the Oscar success of La La Land and First Man, The Eddy dramatizes racial tensions and family relations centered in and around a present-day Parisian jazz club.
Chazelle, who lived in Paris and is fluent in French, directed the first two episodes, which were lensed by Éric Gautier AFC, a seasoned DP whose CV spans directors including Alain Resnais (Wild Grass), Olivier Assayas (Irma Vep), Walter Salles (The Motorcycle Diaries) and Sean Penn (Into the Wild).
“Damien invited me to photograph La La Land but I was already committed to another project,” Gautier reveals. “The next time he came to Paris he asked me. We got along so well, there was no need to talk really, it was a great collaboration.”
Like Chazelle, who famously segued his experience as a jazz drummer into films including Whiplash, Gautier was also a passionate jazz musician before he discovered filmmaking in the 1980s.
“The idea to shoot 16mm was Damien’s but I wanted this too. [The format] is intimately connected to jazz as well as the setting for our story,” explains Gautier. “We are taking cues from the Nouvelle Vague in the 1960s and the impact it had on American cinema in the 1970s, which is to be free with the camera in low light conditions.”
He reveals that a key text for Chazelle was The Killing of a Chinese Bookie, the 1976 movie starring Ben Gazzara as the owner of a Sunset Strip nightclub, filmed by director John Cassavetes in a semi-documentary style heavily influenced by the French New Wave.
“We want to be free like [Jean-Luc] Godard but we are not depicting the Left Bank [artistic area] of Paris in the sixties but what is happening in the north of Paris today where the jazz scene has been reborn in an area of ethnic diversity and which attracts a youth crowd,” Gautier says.
Jazz is an ever-changing music that keeps reinventing itself, and it is this raw energy and fluidity that Gautier’s handheld 16mm camera captures.
“At the very outset of production, Éric had the general idea that each episode of the series should evolve slightly and gradually,” explains Julien Poupard (The Price of Success), who took over from Gautier to film episodes 3, 4, 7 and 8. “He wanted each director and each DoP not to exactly copy the previous episode but to invent something.”
Aside from the interior of the jazz club, which was built as a set, the series is shot on location in and around the 20th arrondissement.
“We’re shooting outside without any additional light, and with Zeiss Classic T2.1 lenses that are sensitive to flaring, without any matte box, in order to provoke some accidents of light,” explains Gautier. “We want nothing to do with perfection. Digital is so perfect and you can control it so perfectly, but that was not what we wanted to achieve.”
Jazz is born of such experimentation, but Netflix’s tolerance for this aesthetic would only go so far. “Netflix insisted on doing the rest of the series digital and asked me to do it,” Gautier says. “But it’s not my world.”
“There was a big discussion,” confirms Marie Spencer, DP on episodes 5 and 6. “Julien and I wished to shoot S16 because the footage was so beautiful, but Netflix were adamant that only Damien could [shoot the format] and no other director.”
Gautier did, however, work closely with Poupard and Spencer as well as directors Alan Poul, Houda Benyamina and Laïla Marrakchi, to find a close match in look and style to follow the first two episodes.
“I insisted that my gaffer [Éric Baraillon] worked on all episodes so that he could help them and both Marie and Julien worked as camera operators on the first two episodes so they could see the way we worked,” Gautier says.
Spencer says, “I realised that Damien and Éric were searching a lot, and that they were feeling where the camera was going, using the emotion of the performances to direct the camera in the moment.”
She had operated camera on gritty modern French classics La Haine and A Prophet before soloing shows like Paradise Beach.
“Damien urged us to keep the same mindset… to be free with our camera. He asked us to look at documentaries of jazz from the 1950s and to note how the footage was framed with parts of bodies of musicians, parts of people dancing, headshots of artists. He wanted us to try to keep that energy with the music.”
She adds, “The most important message from Éric and Damien was for us to feel free, to be sincere and not to try to match exactly what they had shot.”
After tests comparing various 4K cameras with 16mm the team decided to maintain the same relationship between sensor size and lens and to use the same lenses as on eps 1 and 2.
This led them to the RED Helium and to crop its Super 35mm sensor to match S16, while providing the same angle of view and depth of field as the Zeiss lenses on Gautier’s camera.
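The arithmetic behind that choice can be sketched briefly. Matching the Super 16 capture width with a sensor crop preserves each Zeiss focal length's angle of view (and depth of field), because horizontal field of view depends only on capture width and focal length. The sensor and frame widths below are approximate published figures, used purely for illustration:

```python
import math

S35_WIDTH_MM = 29.90      # approx. active width of the Red Helium 8K sensor
S16_WIDTH_MM = 12.52      # approx. Super 16mm film frame width
HELIUM_H_PIXELS = 8192    # Helium's horizontal photosite count

def horizontal_fov_deg(width_mm, focal_mm):
    """Horizontal angle of view for a given capture width and focal length."""
    return math.degrees(2 * math.atan(width_mm / (2 * focal_mm)))

# Same capture width + same lens = same framing, whether the 12.52mm
# window is film or a cropped region of a larger digital sensor.
crop_factor = S35_WIDTH_MM / S16_WIDTH_MM                        # ~2.39x
cropped_pixels = HELIUM_H_PIXELS * S16_WIDTH_MM / S35_WIDTH_MM   # ~3430 px
```

On these figures a 12mm Zeiss sees roughly a 55-degree horizontal angle on true S16 and on the cropped Helium alike, and the crop still leaves about 3.4K of horizontal photosites from the 8K sensor.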
Colorist Isabelle Julien, with whom Gautier has worked for decades, ensured colour consistency across the show’s ACES pipeline by grading all eight episodes for Netflix’s standard 4K deliverable. The 16mm episodes were given a 4K scan ahead of the DI.
All the show’s musical performances were captured live with a single handheld camera, plus an occasional B-camera on a tripod with a longer lens for coverage, sometimes running for 10 minutes without a cut.
“We didn’t want to be afraid of making mistakes even if this meant that some shots were over or under exposed,” says Poupard. “Even scenes with no music were directed in a way that was inspired by the music.”
The nightclub set was lit with minimal fixtures, all of which (on ceiling or on stage) are in view. “Damien wanted the camera to be free in 360-degrees so there was nothing that could hamper our movement on set,” Spencer says. “The actors were free to move and we were free to go with them. That was a tremendous experience, I wish all shows were like this.”
Gautier says, “We laid down a baseline look for the series but at the same time Julien and then Marie were free to explore and improvise. I hope that same spirit continues to evolve if there is another season. After all, everything is boring if everything is the same.”

Cracking the code - AI enters the frame

IBC
The traditional means of optimising video streaming workflows have run their course. Future advances will be made in software automated by AI.
Online video providers have never been under so much pressure. Excess demand has caused Netflix, YouTube and Disney+ to tune down their bitrates and ease bandwidth consumption for everyone, in the process deliberately compromising the ultimate quality of their service.
Even once the crisis has subsided, operators will have to balance scaling growth against the cost of technical investment and bandwidth efficiency. Even in a world with universal 5G, bandwidth is not an infinite resource.
For example, how can an esports streaming operator grow from 100,000 to a million simultaneous live channels and simultaneously transition to UHD?
“Companies with planet-scale streaming services like YouTube and Netflix have started to talk about hitting the tech walls,” says Sergio Grce, CEO at codec developer iSize Technologies. “Their content is generating millions and millions of views but they cannot adopt a new codec or build new data centres fast enough to cope with such an increase in streaming demand.”
New codecs are meant to answer the need for better quality and greater efficiency, but the industry is coming to realise that traditional methods of development have reached the end of the line.
“Many of the basic concepts of video coding, such as transform coding and motion estimation and compensation, were developed in the 1970s and 1980s,” says Christian Timmerer, head of research and standardisation at Bitmovin. “MPEG-1, the base coding standard, was developed and implemented industry-wide in the early 1990s. Since then there have been incremental developments, optimising for computing resources, power and memory, but the basic principles are still the same.
“We need something else that is able to optimise the current infrastructure to get better performance.”
Even Versatile Video Coding (VVC) which MPEG is targeting at ‘next-gen’ immersive applications like 8K virtual reality is only an evolutionary step forward from HEVC.
“It still uses the block-based hybrid video coding approach, an underlying concept of all major video coding standards since H.261 (from 1988),” explains Christian Feldmann, video coding engineer at Bitmovin. “In this concept, each frame of a video is split into blocks and all blocks are then processed in sequence.”
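The block-based concept Feldmann describes is easy to picture in code. A minimal sketch (fixed 16x16 blocks for simplicity; real codecs use variable block sizes, with HEVC and VVC partitioning frames into coding tree units of up to 64x64 or 128x128 pixels):

```python
def partition_frame(width, height, block=16):
    """Split a frame into block-sized tiles, scanned left-to-right,
    top-to-bottom, as block-based codecs do. Returns (x, y, w, h) tuples.
    Edge tiles are clipped here; real codecs instead pad the frame out
    to a whole number of blocks."""
    tiles = []
    for y in range(0, height, block):
        for x in range(0, width, block):
            tiles.append((x, y, min(block, width - x), min(block, height - y)))
    return tiles

# A 1080p frame yields 120 x 68 = 8160 blocks, each of which would be
# run through prediction, transform and entropy coding in sequence.
tiles = partition_frame(1920, 1080)
```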
AI enters the frame
It’s not only the concept that has reached its limit; so too has physical capacity on a silicon chip. Applications increasingly demand whatever general-purpose silicon is available, such as CPU and GPU cores, DSPs and FPGAs. At the same time, new types of data are rapidly emerging, such as volumetric video for 6-degree-of-freedom experiences.
“From a broadcaster and operator perspective the use of dedicated hardware for encoding streams to distribute to end users is rapidly disappearing as the benefits of pure software implementations that can be rapidly updated and deployed to lower-cost generic servers (or virtualised in cloud environments) have become increasingly apparent,” says Guido Meardi, CEO and co-founder, V-Nova. “However, there remains a huge number of professional and consumer devices from cameras to phones where dedicated hardware video encoding provides the small form factor and low battery power consumption that is critical for them.”
The R&D labs at the organisations whose patented technologies created MPEG standards are looking to machine learning and AI to crack the code.
According to Meardi: “AI/ML techniques differ fundamentally from traditional methods because they can solve multi-dimensional issues that are difficult to model mathematically.”
InterDigital, which helped develop telecoms standards like 4G, owns patents in both HEVC and VVC.
“We think that you could use AI to retain essentially the same schema as currently but using some AI modules,” says Lionel Oisel, director, Imaging Science Lab, InterDigital. “This would be quite conservative and be pushed by the more cost-conscious manufacturers. We also think that we could throw the existing schema away and start again with a complete end-to-end chain for AI, a neural network design.”
InterDigital is working on both but it is not alone. There are a range of different ways that AI / ML techniques can be used within video codecs. Some vendors have used machine-learning to optimise the selection of encoding parameters, whereas others have incorporated techniques at a much deeper level, for example, to assist with the prediction of elements of output frames.
First AI-driven solutions
V-Nova claims to be the first company to have standardised an AI-based codec. It teamed with Metaliquid, a video analysis provider, to build V-Nova’s Perseus Pro codec into an AI solution for contribution workflows, now enshrined as VC-6 (SMPTE standard ST 2117).
In addition, at IBC2019 it demonstrated how VC-6 can speed up AI-based metadata content indexing, championed by Al Jazeera, Associated Press and RTÉ, all organisations with huge archives.
V-Nova explains, “Currently, broadcasters can only afford to analyse a small portion of their media archive or a limited sample of frames. They are often forced to reduce the resolution at which the analysis is performed because it’s faster and cheaper to process. However, lower resolutions lose details, which reduce the accuracy when recognising key features like faces or the OCR of small text.”
AP’s director of software engineering, Alan Winthroub, called V-Nova and Metaliquid’s proof of concept a “step-change in performance”, adding: “This means we can process more content, more quickly, while generating richer data.”
Meardi says AI/ML will never be a complete replacement for the wealth of techniques and tools that make up existing video compression schemes.
“However, there are a large number of areas where AI/ML has the potential to add further optimisations to the existing tools and its use will only increase as the industry gathers greater knowledge and expertise.”
The video streaming world is also looking at content-aware encoding (CAE) in which an algorithm can understand what kind of content is being streamed, and optimise bitrate, latency, and protocols, accordingly.
Harmonic offers content-aware technology branded EyeQ, which aims to reduce OTT delivery costs and improve viewer experiences. It claims its CAE tests on 8K live streaming match the efficiency promised by VVC, “proving that we can use today’s technology to deliver tomorrow’s content, and without burning the budget,” says Thierry Fautier, vp of video strategy.
Also using AI-optimised CAE in its coding tools is US developer Haivision. Late last year it bought Lightflow Media Technologies from Epic Labs and subsequently launched Lightflow Encode which uses machine learning to analyse video content (per title or per scene), to determine the optimal bitrate ladder and encoding configuration for each video.
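The principle behind such per-title encoding can be sketched without any machine learning at all: encode (or predict) a set of candidate rungs, measure quality for each, and keep only those on the rate-quality frontier. The measurements below are invented and Lightflow's actual model is proprietary; this shows only the shape of the idea:

```python
def build_ladder(candidates):
    """candidates: (bitrate_kbps, resolution, quality_score) tuples.
    Sort by bitrate and drop any rung that fails to improve quality,
    leaving the rate-quality frontier as the per-title ladder."""
    ladder, best = [], float("-inf")
    for rate, res, quality in sorted(candidates):
        if quality > best:
            ladder.append((rate, res, quality))
            best = quality
    return ladder

# Invented measurements for one easy-to-encode title: quality plateaus
# early, so the expensive top rung adds nothing and is dropped.
measured = [
    (1000, "720p",  88.0),
    (2000, "1080p", 93.0),
    (3500, "1080p", 93.5),
    (6000, "2160p", 93.2),   # scores below the 3500 kbps rung
]
ladder = build_ladder(measured)
```

A harder-to-encode title would produce different measurements, and hence a different ladder, which is the whole point of per-title (or per-scene) optimisation.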
Perceptual optimisation
Lightflow Encode uses a video quality metric called LQI, which represents how well the human visual system perceives video content at different bitrates and resolutions. Haivision claims this results in “significant” bitrate reductions and “perceptual quality improvements, ensuring that an optimised cost-quality value is realised.”
Perceptual quality, rather than ‘broadcast quality’, is increasingly being used to rate video codecs and automate bitrate tuning. Metrics like VMAF (Video Multi-method Assessment Fusion) combine human vision modelling with machine learning and seek to understand how viewers perceive content when streamed on a laptop, connected TV or smartphone.
VMAF was originated by Netflix and is now open source.
“VMAF can capture larger differences between codecs, as well as scaling artifacts, in a way that’s better correlated with perceptual quality,” Netflix explains on its blog. “It enables us to compare codecs in the regions which are truly relevant.”
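The ‘fusion’ in VMAF’s name refers to its final step: several elementary per-frame quality features (visual information fidelity, detail loss, a motion feature) are combined by a trained regression model into a single 0-100 score. A toy stand-in for that step, with invented feature names, values and weights (the real open-source libvmaf ships a trained SVM regressor, not a hand-weighted sum):

```python
def fuse_score(features, weights, bias=0.0):
    """Combine per-frame elementary quality features into one score
    with a linear model, a toy stand-in for VMAF's trained regressor."""
    score = bias + sum(weights[name] * value for name, value in features.items())
    return max(0.0, min(100.0, score))  # clamp to VMAF's 0-100 range

# Invented feature values and weights for a single frame:
frame_features = {"vif": 0.92, "detail_loss": 0.88, "motion": 4.1}
toy_weights = {"vif": 55.0, "detail_loss": 45.0, "motion": 0.5}
frame_score = fuse_score(frame_features, toy_weights)  # 92.25 here
```

Because the fusion model is trained against human opinion scores, the combined metric tracks perceived quality better than any one of its input features alone.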
London-based startup iSize Technologies is working on a novel approach to the compression bottleneck, using deep learning as a precursor to the current encoding process. It has been expressly designed to capitalise on the growing trend toward perceptual quality metrics such as VMAF.
iSize’s solution is to pass the original (mezzanine) file through a layer of perceptual optimisation prior to being encoded as normal using existing encoding platforms.
This ‘precoder’ stage enhances detail in the areas of each frame that affect the perceptual quality score after encoding, and dials down details that matter less.
“Our perceptual optimisation algorithm seeks to understand what part of the picture triggers our eyes and what we don’t notice at all,” explains Grce.
This not only leaves an organisation’s existing codec infrastructure and workflow unchanged but is claimed to save 30 to 50 per cent on bitrate at a latency cost of just one frame, making it suitable for live as well as VOD.
The company has tested its technology against AVC, HEVC and VVC with “substantial savings” in each case.
The system can be dialled to suit different use cases. Explains Grce: “Some directors of studio distributed VOD content will want to keep some grain in the picture for creative reasons and would not be happy to save 30% bitrate if all that noise was deleted. Gaming content on the other hand might opt for 40-50% savings because that type of content looks more pleasing to our eyes without ‘noise’. Live streaming is somewhere in between [those two applications].”
Grce says the tech is in the late stages of testing with a “global scale VOD platform”, with a “large UK live sport streaming platform” and beginning last stage evaluation with a “global social media platform.” A “large gaming hardware manufacturer” has also tested it and it has been demoed in use with AV1 at the invitation of Facebook and Google.
Compression and decompression mechanisms are the drivers behind the delivery of all VOD services, from Amazon to Quibi. Adoption of new codecs is essential, and it is likely to be quicker than the standard five-year norm because, with less requirement for hardware encoding, more of the processing will run in the cloud.

Friday 24 April 2020

TV Studios

Televisual 

When COVID-19 subsides, TV studios will need to deal with the problem of finding enough key staff

As the UK continues to entice record investment for international film and TV productions, it is less the capacity to house it all than a shortage of talent to service it which is exercising the minds of some studio chiefs.
“There’s definitely capacity in the next 24 months provided the studios and production teams can be flexible with their schedules but there’s potentially a talent shortage behind the camera to deliver against the big expectations in output,” says Andrew Moultrie, CEO, BBC Studioworks.
Netflix is expanding its physical presence in the UK by creating a ‘dedicated production hub’ comprising a mix of 14 sound stages, workshops and office space it will lease at Pinewood-owned Shepperton. While mostly catering for original scripted production, it will also be capable of facilitating large shiny floor shows.
Comcast-owned Sky is progressing a similar £3bn build near Elstree Studios. The new Sky Studios Elstree will feature 14 stages of over 20,000 sq ft each, plus post-production and digital facilities, and is expected to open in 2022.
Meanwhile Pinewood is spending £1bn to nearly double its sound stages, backlots and other production accommodation after striking a long-term deal with Disney. The arrangement is expected to begin this year.
“Sky Studios Elstree will impact the supply chain irrespective of whether it produces any shiny floor productions,” argues Moultrie. “The point is that the talent pool around London, while first class, has not been replenished for the last decade. In fact, talent has been a dwindling resource and not invested into.”
The NFTS is looking to increase the number of apprenticeships in support skills beyond senior head of department roles. “The industry needs to address ways of finding the next wave of gaffers, grips, lighting technicians, set build and turnaround specialists,” says Moultrie. “Shiny floor productions require quite particular skills. When you are doing six shows a week versus a standing set for 12 months, the show has a different rhythm and energy. You have to understand the way it operates. With audience management you have to ensure 24/7 that you are not impacting on the community itself.”
Roger Morris, Elstree Studios’ managing director, agrees – to an extent. “The skill set of those behind the camera is vital to the success of our creative industry,” he says.
Morris is the chair and founder of the Elstree Studios University Technical College with 450 students, and he is also involved in an initiative with Oaklands College, St Albans to run a film and TV course for basic industry skills such as carpentry, painting and electrics.
The BBC is more exercised about the problem because of its Ofcom mandate to relocate operations outside the M25. Moultrie is heading a strategy review to realign Studioworks’ London-centricity with BBC quotas which require two thirds of production to be done in the nations and regions by 2027.
“We are looking to focus on existing centres of excellence and also looking at supply ecosystems, the workforce and the impact on the local community,” Moultrie says.
The BBC already has a strong studio presence in Salford, Cardiff and Glasgow making these the most likely places for an expanded Studioworks presence.
“It’s not just cost, catering and car parking; it’s about how to transition out of London based on nations and regions targets to make sure [existing staff] are comfortable,” he says.
Shiny floor demand tends to be seasonal, peaking in the September to Christmas period, but Dock10 predicts unusually busy activity this summer.
“We are seeing interest from various indies who work for SVODs,” says Andy Waters, Head of Studios, alluding to a big live event landing at the studio in August. “Perhaps these streamers recognise the market saturation of drama and are looking for alternative content.”
He says the skills base in the north is strong as a result of decisions taken a decade ago to shift channels and commissions to MediaCity.
“Our skillbase has grown to accommodate the available work. We have local talent in all areas from electrical to post to scenic staff.”
One success story (though a surprising one) is that local freelancer Julie Metzinger is among the first (perhaps only) female sound supervisors to have mixed a sitcom (Citizen Khan, Porridge).
Elstree’s regular LE shows range from Pointless and The Chase to The Voice and Strictly Come Dancing, leading Morris to claim it as the number one studio for the top LE shows in the UK.
In the meantime, BBC Studioworks is continuing its partnership with Elstree Studios until at least March 2024.
The collaboration sees it continuing to hire ‘Elstree 8’, ‘Elstree 9’ and ‘George Lucas Stage 2’ – an arrangement initially formed in 2013 to temporarily house parts of BBC operations while TVC was being redeveloped.
Elstree is separately building two new stages (max capacity 36,000 sqft) capable of facilitating high end work including LE. “They will be our largest stages,” says Morris. “In addition, we have power backup and extensive broadcast line capability. The stages will be fully operational in 2021 or sooner I hope.”
Morris is vocal about the dangers of “artificially or politically trying to change the production landscape”. “Studios are expensive to build,” he says. “Some of this expense is funded by the taxpayer and it’s unfair on investors who have spent considerable sums on the big four [M25] studios (Elstree, Pinewood, Shepperton, Leavesden), where investment has been made to ensure they are considered the number one production hub in Europe.”
Morris continues, “For those producers who are made to work in the regions outside the M25, it also means that talent and crew are more expensive unless they live in that specific location. Productions created by inward investment are not subject to such pressure to locate to regional areas, but too much pressure in the future could stall investment and damage the industry. The market is what the market is; accept the success of the industry at this moment in time.”
Morris goes on to say that, outside the established London players, “there’s not enough good space that has the size, the facilities and the location.”
With only so many hours in the schedule to fill, demand for shiny floor entertainment is unlikely to soar in the stratospheric way that high-end drama has – unless VOD streamers like Netflix decide to expand their output into the area.
Capacity has in any case been eased with the reopening in Hammersmith of Riverside Studios following a five-year redevelopment. The former BBC TV studio has been rebuilt with three flexible studio spaces for TV, theatre, dance, opera, music and comedy, as well as a cinema, screening room, archive and rehearsal space.
A key element of the project is the interconnection of 42 positions at various locations, including triax and SMPTE camera cabling, video, audio and data cabling, fibre tie-lines and low voltage power cables for facility boxes.
Russell Peirson-Hagger, managing director of systems integrator ATG Danmon, which built the site, says: “Multi-function venues of this type are becoming increasingly popular and this looks set to be one of the most advanced of its kind in the UK.”

Riverside’s largest studio, Studio 1, has a floor space of over 600 square metres with a seated audience capacity of 400. The facility will be able to capture productions in UHD or transmit live to BT Tower.
The BBC’s Pacific Quay site has the only – and therefore largest – space equipped for shiny floor entertainment north of the border. Its nearest rival is in Salford.
Studio A (567sqm) hosts All Round to Mrs Brown’s and the Ready Steady Cook reboot with Rylan, as well as BBC children’s network shows Swashbuckle and The Dog Ate My Homework.
“The bigger shows require the support of multiple green rooms, contestant briefing areas, audience holding and warm-up areas, which presents some logistical and scheduling challenges,” relates Alex Gaffney, Commercial and Partnerships Manager. “We’ve repurposed some large meeting spaces into studio support space. We sometimes have 2-3 bookings on top of each other but we’re not having to turn business away and I don’t believe there’s a necessity for more TV studio space here.”
Dock10 invested in virtual set technology for the start of the 2019-20 football season, but the intention was always to broaden its use beyond Match of the Day. Blue Peter has used the virtual set during the week, and the technology is being used to host a first entertainment client in a pilot due at the end of March.
“We also do quite a bit of work in the esports market, in particular developing ways to integrate data feeds in realtime into the virtual set,” Waters says. “Producers can use that to animate AR objects in the studio to tell a story in a new way.”
Last summer, Dock10 hosted the semis and final of esports tournaments including Fortnite for sponsor JD Sports.

Tuesday 21 April 2020

5G: The transformation of video distribution is underway

Digital TV Europe
2019 was a pivotal year for 5G: operators began launching commercial services, enterprise use cases became clearer and it moved into consumer consciousness. 5G was on course to add nearly US$2.2 trillion to the global economy between 2024 and 2034 and to account for 20% of all global connections by 2025.
Albeit reported in February by the GSMA before the global shutdown, these figures suggest 5G has finally become a reality. So, after all the years of talk, speculation and hype, did mobile’s latest generation meet expectations? Even the GSMA is not sure.
“In some ways, yes. But, in others, the answer is a resounding, no – or, at least, not yet,” blogged Jan Stryjak at GSMA Intelligence.
4G technology is now the dominant mobile technology across the world. These connections will continue to grow, particularly across Sub-Saharan Africa, CIS and parts of MENA, reaching just under 60% of global connections by 2023.
“This underscores a key learning over the last year: it’s not all about 5G,” says Stryjak. “Yes, 5G is now live in 24 markets but that leaves the vast majority of connections still running on a 4G or slower network and more than half of the global population which won’t even be covered by 5G.”
Ground-breaking applications
A study from Ericsson found that 65% of early adopters – the so-called 5G Forerunners which comprise 14% of smartphone users – expect ground-breaking new applications.
“But, in reality, the biggest app we’ll see in 2020 isn’t really an app at all… it’s speed,” says Jim O’Neill, principal analyst, Brightcove. “The biggest impact 5G will have in its first couple of years in the market will simply be on the mobile broadband experience. Users are looking forward to the increased bandwidth, reduced latency, and flat out speed that 5G will be able to deliver.”
However, as Thierry Fautier, VP video strategy at video technology outfit Harmonic points out, in terms of video, most smartphones are currently only capable of supporting 1080p, so HD will initially be delivered over 5G.
“Even when 1080p is zero rated (viewing unlimited content from a service provider without impacting the user data allowance) it is already a big leap forward from the experience today on 4G,” he says. “Delivering 100Mbps simultaneously to millions of subscribers, combined with the low latency possible with 5G, could have a transformative impact on video distribution, especially for live streaming.”
For reference, 4K sports content today is encoded with HEVC at about 25Mbps. “For in-home delivery, I expect we will start with 4K HDR everywhere, then 8K,” says Damien Lucas, founder and CTO at technology company Anevia. “But even on handheld devices, my guess is that HD will become the new minimum quality.”
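To put Fautier’s 100Mbps figure and Lucas’s quality tiers side by side, the sketch below is simple back-of-envelope arithmetic; the 25Mbps 4K figure comes from the article, while the HD and 8K bitrates are rough assumptions for illustration only:

```python
# How many concurrent streams fit in the 100Mbps per-subscriber
# 5G throughput cited above? The 25Mbps 4K HEVC figure is from the
# article; the HD and 8K bitrates are illustrative assumptions.
MBPS_PER_STREAM = {
    "HD 1080p (AVC)": 8,    # assumption: typical OTT top rung
    "4K HDR (HEVC)": 25,    # figure quoted for 4K sports above
    "8K (HEVC)": 85,        # assumption: rough trial-grade bitrate
}
LINK_MBPS = 100  # per-subscriber 5G throughput cited by Fautier

for label, rate in MBPS_PER_STREAM.items():
    streams = LINK_MBPS // rate
    print(f"{label}: {streams} concurrent stream(s) at {rate} Mbps")
```

Even on these crude numbers, a single 5G link comfortably carries one 4K stream with headroom to spare, which is why HD-to-4K, rather than 8K, is the realistic first step on handsets.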
Other applications that could drive interest in 5G video in the nearer term include what Fautier describes as immersive personalised 8K streaming services.
“8K is the only technology that can capture the entire field for sports like tennis, soccer and baseball. We think there could be interest in this because it is already being done in the broadcast production domain.”
The consensus is that within the M&E vertical, live sports, both as is and when enriched with AR, and videogaming will be early adopters.
“Applications that can uniquely leverage its low latency will be first movers to require 5G technology,” says Robert Koch, VP technology solutions at EPAM, citing real-time betting either remote or in-stadium as one of them.
In the mid-term there could be a fusion of 5G with more traditional broadcast, according to Juliet Walker, chief marketing officer at media services provider Globecast, “to bring an augmented reality approach for premium content and more interactivity between consumers and content creators.”
Other applications may come to life including full-screen, high-quality social network video or co-production of user-generated content. In other verticals, Baruch Altman, AVP R&D at LiveU, flags telemedicine and technician assistance because of the requirements for consistent quality and low latency in a proven market.
He says: “Coronavirus has shown the benefits of remote consultation from anywhere and this will more and more include high-quality video, home imaging and other sensory information.”
Lucas also takes lessons from current social distancing and home working. “We are seeing some of our colleagues connecting mainly through 4G instead of ADSL,” he says. “This goes to show that, once 5G is deployed, it could become the preferred means of communication everywhere where fibre does not arrive.”
A prime early use case in video is ‘first mile’ contribution, particularly for live sports production. “For instance, with 5G it will be possible to broadcast a live sports event directly from cameras on the field or, more likely, to stream directly to a broadcast facility,” says O’Neill.
Sports production companies are conducting video tests for contribution and some remote production. Cellular bonding backpacks are used by Globecast as part of its content capture infrastructure and associated redundancy links.
Verizon Media’s demo in the Hard Rock Stadium during Super Bowl LIV meanwhile featured real-time streaming and multiple camera views.
“5G is still very conceptual in many people’s minds, but the technology is already having a transformative impact on video distribution,” says Ariff Sidi, general manager and chief product officer, media platform at Verizon Media. “The deployment of next-generation networks is going to transform the live arena with the way that we experience sports games, concerts, and other major events. Greater bandwidth, connection density and lower last-mile latency connectivity will lead to new user experiences, like in-event AR, greater reliability and higher bitrates for improved streaming quality.”
Testing and development of 5G video applications
Testing for B2B contribution has been underway for more than a year and is well advanced, according to Globecast. B2C distribution will follow 5G rollout and will, as always, depend on the penetration of suitable devices. Apple’s first 5G iPhone is planned for end of the year but supply chain issues may derail this.
“US, South Korea and Japan will lead but it will be linked to subscriber management costs of post-pay and pre-pay,” Walker notes.
Harmonic has insight on South Korea where VR applications are already deployed in 5G networks by operators SKT, KT and LGU+. The main applications are VR video for sports and music clips on HMD and phones; multiview, including multigame and multicam (mostly on phones) and 8K streaming with phone pan and scan, including zooming in on the entire content.
In Europe, some testing is being done via various operator collaboration programmes and the EU Horizon 2020 and 5GPP test platforms. As Altman reports, almost all the 5G commercial networks are ‘non-stand-alone’, reliant on the operator’s existing 4G core, and not yet supporting the full span of 5G performance capabilities.
“To commercialise these capabilities, a full ecosystem should be implemented, from actual 5G network deployments to 5G smartphones and modems,” he says. “We are in the early stages of all that, but companies like AT&T with Ericsson, Samsung and LiveU have partnered with content owners to commercially produce live sports using 5G non-public networks in venues [during the NBA summer league 2019].”
Huawei and Sharp among others have demonstrated the feasibility of delivering 8K video over 5G. “But in order for this to go commercial, it would require a broad adoption of 5G to the home (not 5G to the device), since handheld devices are definitely not about to support 8K, due to their limited screen sizes,” says Lucas.
Yet, as Koch points out, pixel densities on mobile devices are reaching the point where further gains in video quality are nearly indistinguishable to the human eye, meaning that 4K-8K mobile video will likely not be the killer app of 5G.
“One of the most significant changes that should be enabled via a mature 5G network is Fixed Wireless Access-based video delivery,” he says. “This will bring high quality video to the biggest screen in your home and enable carriers with 5G networks to compete directly with traditional wireline cable providers.”
His EPAM colleague, Bhuban Hota – senior manager at technology consulting – agrees. “The key is to showcase the value proposition in user experience so that consumers are willing to pay for the enhanced experience offered by 5G,” he says. “Standard video applications can’t provide that differentiating experience even though QoS will be improved. Video providers are still searching for the killer app, which might go in the direction of even more immersive video experience.”
Verizon Media’s RYOT Studio is pioneering new production technologies and media formats, enabled by 5G connectivity.
“We’re only beginning to scratch the surface of what 5G can do, but the potential for true, live, highly personalised and interactive experiences is more of a reality as we move into a 5G-enabled era,” says Sidi.
The take-off of VR for video applications has so far been disappointing, admits Harmonic’s Fautier, although initial VR deployments were in 4K, with limited capability on the device side.
“We expect the combination of 5G with 8K production delivered to next-gen devices like the Qualcomm XR2, featuring a newly announced VR reference design, will bring new life to VR video.”
Edge computing, broader spectrum, massive MIMO and multicast will support a higher number of simultaneous viewers under various conditions.
“AI will be used to improve the resource allocation by the network, to predict consumption by the content owners or device owners, and if coupled together, better experiences at lower infrastructure cost can be achieved,” says Altman. “I expect creative minds to come up with new ideas, if the price is right. Perhaps immersive personalised and dynamic advertisements or ad-supported content.”
Further along the adoption curve, 5G offers a Mobile Edge Computing (MEC) platform that can be used for several purposes. “An area where we see 5G MEC shining is offering deep caching in the mobile network and on-demand CDN capacity,” Fautier says. “The second relevant application of MEC is for an in-stadium experience, which we believe will include local processing and streaming from MEC to significantly reduce the delay vs a centralised cloud solution.”
As importantly, the rise of 5G will help the video-delivery infrastructure – especially the CDN – to be integrated tightly with the network.
“Since 5G networks rely heavily on Software-Defined Networking (SDN), this is a great opportunity to enable the video CDN to be reconfigured dynamically, at the same time as the network is being reconfigured through SDN,” argues Lucas.
“This is important because the video industry sees huge fluctuations in terms of usage, with occasional huge peaks that put a strain to the system. And secondly, because thanks to 5G, we will see the adoption of multi-access edge computing. Combined with SDN, it enables different applications to share the use of those edge resources. Being able to dynamically reconfigure a CDN to deploy cache servers on those edge resources as needed will significantly help the delivery of high-quality, low-latency video during peak traffic.”
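The kind of demand-driven reconfiguration Lucas describes can be sketched as a toy model that scales edge cache instances with concurrent viewers. The per-cache capacity and the function below are hypothetical illustrations, not any vendor’s API:

```python
# Toy model of demand-driven edge cache scaling, as described above.
# The per-cache capacity figure is a hypothetical assumption.
import math

CACHE_CAPACITY = 5_000  # assumption: viewers one edge cache can serve


def edge_caches_needed(concurrent_viewers: int) -> int:
    """Return how many edge cache instances to deploy for current demand."""
    if concurrent_viewers <= 0:
        return 0
    return math.ceil(concurrent_viewers / CACHE_CAPACITY)


# A traffic spike during a live event, then the drop-off afterwards:
for viewers in (1_000, 60_000, 250_000, 8_000):
    print(f"{viewers:>7} viewers -> {edge_caches_needed(viewers)} edge cache(s)")
```

The point of combining SDN with edge computing is that a decision like this can be acted on automatically: cache servers are deployed onto shared edge resources for the duration of the peak and released afterwards.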
The importance of multicast
Although the first two releases of 5G (3GPP Rel-15 and 16) only supported point-to-point (unicast) transmissions, multicast is now considered an essential feature, not least for scaling live events.
“Delivering video to thousands of spectators gathered in a single space has proven to be challenging until now,” says Fautier. “The 4G standard includes eMBMS [evolved Multimedia Broadcast Multicast Services] and has been relatively disappointing, as it requires an expensive base station upgrade as well as specific mobile device functionality.”
Multicast, however, allows operators to reduce their distribution costs especially for live events, says Altman. “The bigger the event, the larger the savings in parallel to increased quality and hence experience.”
Broadcasting information once via an overlay network is much more efficient than sending it hundreds of thousands of times to mobile cells.
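The efficiency claim is easy to quantify with hypothetical numbers; the viewer count, bitrate and cell count below are illustrative assumptions, not figures from any operator:

```python
# Aggregate network load for one live event: unicast vs multicast.
# All three inputs are hypothetical assumptions for illustration.
viewers = 100_000   # spectators streaming the same event
bitrate_mbps = 8    # one HD stream (assumption)
cells = 200         # mobile cells covering the audience (assumption)

unicast_load = viewers * bitrate_mbps   # one copy per viewer
multicast_load = cells * bitrate_mbps   # one copy per cell

print(f"Unicast:   {unicast_load / 1000:.0f} Gbps aggregate")
print(f"Multicast: {multicast_load / 1000:.1f} Gbps aggregate")
print(f"Saving:    {100 * (1 - multicast_load / unicast_load):.1f}%")
```

On these assumptions the unicast load is 800Gbps against 1.6Gbps for multicast – a saving of over 99% – which is the scaling argument behind Altman’s “the bigger the event, the larger the savings”.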
“With bigger cell coverage, this improved flexibility will substantially reduce the cost of deployment and operation,” says Thomas Janner, director of R&D, signal processing at Rohde & Schwarz. “It helps MNOs offload their heavy streaming and data loads so that they avoid infrastructure over-provisioning and therefore serve their consumers with higher QoS while reducing capex and opex.”
The benefits of 5G multicast are not confined to live events and mobile TV. As Janner points out, it reaches smart vehicles with in-car media and entertainment and map updates and can transmit public safety messages, such as urgent weather and community information. “Several other services could be optimised while using multicast over 5G, including OTA multicast for centralised configuration and control, live commerce and rural e-learning where no internet connection is available.”
Consequently, 5G broadcast and multicast field and lab trials are becoming increasingly important. R&S is testing this technology with China Broadcasting Networks in Beijing and with Brazil’s TV Globo. There are tests in France, Austria, Finland, Spain, and the Philippines. Janner reports high interest in South Africa, Mexico, Malaysia, Australia, UAE, Russia, Hong Kong, Korea and the UK. In the US he predicts a major expansion of 5G multicast commercial trials.
Globecast agrees that “IP multicast ABR by satellite (to mobile towers) has potential to reduce CDN bandwidth consumption and help with network resource optimisation.” Walker says, “If the new 5G frequencies aren’t enough to manage all the expected mobile video traffic increase in the next five years, this may drive mobile operators to look at such technology.”
“The key challenge will be whether there is sufficient cooperation to have the Multicast feature standardised with all the stakeholders and implemented by the chipset manufacturers,” she warns. “Agreeing standards across the telco industry can be a slow process.  This may also be regional. We have already seen very different decisions around spectrum reallocation between C-band and mobile in different countries.”
Broadcast enhancements for the new radio and core will be addressed by 3GPP in Rel-17 due 2021. Harmonic expects this will lead to service deployments in 2022.
“The key challenge is to bring point-to-multipoint (PTM) delivery in 5G in a transparent manner to both users and content providers, such that it becomes an internal optimisation tool of the network,” Altman explains. This was the main design principle of the EU H2020 research project 5G-Xcast: “making it a service transparent to both content owners and viewers in all aspects, including moving from unicast to multicast and back, to support mobility, transparent digital rights and billing management.”
Not everyone is convinced. “Multicast sounds like a great solution to reduce bandwidth but in fact it does away with many of the exciting possibilities of OTT,” Lucas says. “Personally, I don’t see this as a priority. And the massive failure of 4G multicast technology [LTE-broadcast] seems to prove this.”
5G as complement or replacement to DTT
Both 5G broadcast/multicast and digital-terrestrial TV are considered to be complementary rather than in direct competition. In the US, broadcasters are betting heavily on ATSC 3.0, but they’re also looking to use 5G.
“It would not surprise me to see a product like [a 5G receiver] that also incorporates an ATSC 3.0 receiver as a gateway device, making available content from both wireless connections for distribution through the home,” says Mark Aitken, VP of advanced technology for US broadcast group Sinclair Broadcast Group.
In Europe, broadcasters submitted their requirements to the 3GPP, including the obligation of public service broadcasters to offer linear TV and radio programs free-to-air. This has finally led to FeMBMS (or LTE-based 5G Terrestrial Broadcast), but if and to what extent broadcast content and services will be distributed over FeMBMS networks in future is up for debate.
“Although FeMBMS is sometimes addressed as a potential replacement for DTT, it has been developed for a different purpose than digital terrestrial television standards,” says David Gomez-Barquero, coordinator of the 5G-Xcast project. “For example, whereas DVB-T2 is highly optimised for the distribution of linear TV services for fixed rooftop reception, FeMBMS is meant to reach portable and mobile devices and carries the LTE legacy.”
With FeMBMS planned (though not agreed) for 3GPP release in 2022, “telcos will need to make the trade-off between reserving some of the capacity for broadcast versus the cost of carrying multiple copies of the same content,” Walker advises. “Multicasting will help, 5G Broadcast is likely to be a later step.  Will it make sense for a telco to do this for big events – a presidential address or a World Cup final, for example?”
Regulators will play a role. “Will they insist on a slice of 5G for B2B or for broadcast?” Walker asks.  “Again, this remains to be seen in many markets.”
In some cases, such as fixed wireless, 5G will be a possible complement to very high-speed broadband to the home. Harmonic also believes DTT can be replaced by very high-speed wired networks like DOCSIS 3.1.
“5G can be an option but the scale and cost of deployments might not be relevant in the coming years,” Fautier says.
What is more realistic, he thinks, is to deliver high-quality video during sports events to 5G devices. Fautier points to Harmonic’s involvement in France Télévisions’ pilot at the French Open tennis tournament last May. “Although we achieved our goal to transmit 8K over 5G, we clearly saw that in order to deliver this experience to thousands of people in a dense area a multicast solution is necessary.”
Anevia views wireless OTT TV infrastructure as a complementary technology to terrestrial and satellite networks – “until OTT can actually deliver higher-specification content than traditional broadcast,” Lucas says. “8K, for example, has the potential to become a primarily OTT-based standard.”
One example of OTT replacing DTT – but over a 4G network – is Telia in Lithuania. In September 2018, it phased out its terrestrial services, moving everything to OTT over outdoor LTE, enabling it to reach rural areas with next-gen TV services. This was possible in a country where subscribers were used to spending €7 per month for their TV service.
“Doing this at a mass scale while keeping prices low, offering high-quality video, and still staying in business is more problematic,” Lucas says.
Unless, that is, the carrier can make use of the specific advantages of 5G and deliver much more value to consumers with personalised and interactive content.
“As latencies come down any advantage that traditional broadcast has over OTT will evaporate,” says Sidi. “There will also be increased amounts of OTT-exclusive content and this, combined with the consumer flexibility, will continue to divert viewers to streaming, making it the primary method of consumption.”
Naturally the nuances of rollout and overlap will vary between and within markets.
“In cities, we will see 5G to handheld devices, delivering video streams to mobile phones,” says Lucas. “In rural areas, we will see 5G to the home, delivering first-screen video services.”
In countries without existing DTT, satellite will be part of 5G networks to feed base stations or phones directly. “Within DTT-enabled countries, 5G could replace DTT in the next 10 years but, as above, it will depend on the regulators and negotiations in different countries,” says Walker.
South Korea’s KT Corp is already offering a 5G IPTV service via mobile devices and set-top boxes. The Bavarian Broadcasting Corporation is working with IRT, Kathrein, Rohde & Schwarz and Telefónica Deutschland on field trials in expectation of transmitting to millions of 5G devices in the near future.
“Territories with wide DTT distribution have a great opportunity to free up the DTT frequencies and allocate them to 5G, to enable non-linear services for all users,” Lucas adds. “But all this will take time, since TVs will need to be connected to a 5G network.”