Thursday 9 April 2020

Bits per buck

InBroadcast
Video compression, live or file-based, is all about delivering the highest-quality final picture to increasingly discerning viewers as cost-efficiently as possible.
While H.264/AVC continues to be a pillar of online streaming, and was the most used video codec in 2019 according to survey data from Bitmovin, the market is shifting to its MPEG successor H.265/HEVC and planning to implement the similarly performing non-MPEG standard AV1, both of which roughly double compression efficiency, delivering comparable quality at around half the bit rate.
Matters are moving at pace in AV streaming standards, not least because of a huge acceleration in demand to deliver more volume, at greater quality and at lower cost.
Two new standards due to be ratified by MPEG later this year go some way to addressing this.
MPEG-5 Essential Video Coding (EVC) will have roughly the same performance as HEVC. It will be offered in a royalty-free baseline version, built from tools whose patents have expired, and in a version with more advanced features which users can choose to pay for or not.
EVC will require hardware implementation, and it is likely to take at least a couple of years for initial adoption to take hold in devices including TV sets and smartphones.
Low Complexity Enhancement Video Coding (LCEVC) is a software addition to any codec (AVC, HEVC, EVC or, in future, VVC) designed to improve the performance of the underlying hardware encoder. The solution is based on V-Nova’s Perseus Plus compression technology and makes it a cheaper alternative for companies than ripping out and replacing devices in the field.
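The underlying idea is straightforward: encode a lower-resolution base with whatever codec the device already accelerates, then carry the detail lost in that step as a lightweight enhancement layer decoded in software. A minimal Python sketch of that principle follows, using NumPy box filtering as a stand-in for the real resampling and entropy-coding tools and a pass-through function standing in for the hardware base codec:

```python
import numpy as np

def downscale(frame):
    # 2x2 box-filter downscale - a stand-in for the real resampling filters
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale(frame):
    # Nearest-neighbour upscale back to the original resolution
    return frame.repeat(2, axis=0).repeat(2, axis=1)

def encode_enhancement_style(frame, base_encode, base_decode):
    """Illustrative only: encode a low-resolution base with an existing
    codec, then carry the residual detail as an enhancement layer."""
    base = base_encode(downscale(frame))            # existing AVC/HEVC/EVC encoder
    reconstructed_base = upscale(base_decode(base))
    residual = frame - reconstructed_base           # detail the base layer lost
    return base, residual                           # residual would be quantised and entropy coded

def decode_enhancement_style(base, residual, base_decode):
    return upscale(base_decode(base)) + residual

# Toy usage with an identity "codec" standing in for the hardware encoder
frame = np.random.rand(8, 8).astype(np.float32)
base, enh = encode_enhancement_style(frame, base_encode=lambda x: x, base_decode=lambda x: x)
out = decode_enhancement_style(base, enh, base_decode=lambda x: x)
assert np.allclose(out, frame)
```

In the actual standard the residual layers are transformed, quantised and entropy coded, but the division of labour is the same: the heavy lifting stays in the existing hardware encoder.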
Versatile Video Coding (VVC) is the next-gen codec from MPEG, targeting immersive media applications such as VR and 8K. It will also require hardware implementations, and there is no word yet on cost, except that it will be licensed and that everyone is keen to avoid a repeat of the opaque patent pool administration of HEVC.
Sticking with standards and usage costs, the instigators of AV1, the Alliance for Open Media, which includes all the members of FAANG, had previously declared the codec to be licence-free. That is being challenged by companies including Philips, Ericsson, Dolby and Orange, which state that AV1 contains some of their patented developments.
A patent pool, along with proposed fees (e.g. €0.24 for display devices including smartphones – which is a lot when you tot up how many iPhones Apple sells), has been announced, but the Alliance has yet to respond.
Powered by AI
It’s likely that the traditional methodology for optimising video streaming workflows has run its course. Future advances will be made in software and automated by AI.
One of the important ways that AI can do this is by calculating bitrate to optimise bandwidth usage while maintaining an appropriate level of quality. This is something that simply cannot be done by hand; there is too much information to process in the time before network conditions change again.
The video streaming world is also looking at content-aware encoding (CAE), in which an algorithm understands what kind of content is being streamed and optimises bitrate, latency and protocols accordingly, for use in both live and VOD workflows.
Haivision Lightflow Encode, for example, uses proprietary machine learning to analyse video content (per title or per scene) and determine the optimal bitrate ladder and encoding configuration for each video. It uses a video quality metric called LQI, which represents how well the human visual system perceives video content at different bitrates and resolutions. Haivision claims this results in significant bitrate reductions and perceptual quality improvements, ensuring that an optimised cost-quality value is realised.
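Haivision has not published the internals of LQI, but the per-title idea itself can be sketched generically: run trial encodes of a title (or scene) at candidate resolution/bitrate points, score them with whatever perceptual metric is available, and keep only the rungs that add meaningful quality. The candidate points, thresholds and toy quality models below are illustrative assumptions, not Lightflow's algorithm:

```python
# A generic sketch of per-title ladder construction. `measure_quality` is a
# placeholder for any perceptual metric (VMAF, SSIM, a learned score) run on
# a test encode of this particular title or scene.

CANDIDATES = [  # (height, kbps) trial points for the ladder
    (2160, 16000), (2160, 12000), (1440, 8000), (1080, 6000),
    (1080, 4500), (720, 3000), (540, 1800), (360, 800),
]

def build_ladder(measure_quality, min_gain=2.0, target=95.0):
    """Keep each rung only if it adds at least `min_gain` quality points over
    the previous one, and stop spending bits once `target` quality is hit."""
    scored = [(h, kbps, measure_quality(h, kbps)) for h, kbps in CANDIDATES]
    scored.sort(key=lambda x: x[1])            # ascending bitrate
    ladder, last_q = [], None
    for height, kbps, q in scored:
        if last_q is None or q - last_q >= min_gain:
            ladder.append((height, kbps, round(q, 1)))
            last_q = q
        if q >= target:                        # no point going higher
            break
    return ladder

# Toy quality models: easy content saturates quickly, complex content needs more bits
easy = lambda h, kbps: min(100, 70 + 30 * (kbps / 3000) ** 0.5)
complex_ = lambda h, kbps: min(100, 40 + 35 * (kbps / 16000) ** 0.5)
print(build_ladder(easy))      # short ladder, tops out around 3 Mbps
print(build_ladder(complex_))  # keeps climbing toward the top rung
```

The point of doing this per title shows up in the two toy outputs: easy content gets a short, cheap ladder, while complex content justifies spending bits all the way up the rung list.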
Harmonic’s software-based Electra XOS live video processor is another example, and it is being used by Australian SVOD Foxtel to deliver UHD programming within existing bandwidth constraints. It features EyeQ content-aware technology (see what they did there?), which aims to reduce OTT delivery costs and improve viewer experiences.
Harmonic has also conducted CAE tests, including an 8K broadcast with BT Sport of a Rugby 7s match late last year, which validated that CAE efficiency depends on content complexity. Indeed, its CAE tests on 8K live streaming are claimed to match the efficiency promised by VVC in 2022, “proving that we can use today’s technology to deliver tomorrow’s content, and without burning the budget,” says Thierry Fautier, VP of Video Strategy at Harmonic.
“8K is now being delivered with technology that was developed almost three years ago. Operators want more affordable bit rates, with a goal to come close to what is currently used for 4K OTT streaming (a 25 Mbps connection is required for Netflix in HDR). We have demonstrated that it is now possible with a range of 14 Mbps to 39 Mbps, without any optimisation done for 8K, using cloud-powered encoding and CAE technology.”
iSize Technologies has developed BitSave v.2 for the perceptual optimisation of video prior to encoding. In other words, the AI optimises video frames before the video is actually compressed. This ‘precoder’ enhances details in the areas of each frame that affect the perceptual quality score of the content after encoding and dials down details that are less important. It can be used with third-party encoders conforming to AVC, HEVC and VP9, and in real time for up to 4K resolution (incurring a single frame of latency).
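iSize has not disclosed how BitSave's network is built, but the precoder concept can be sketched in a few lines: filter each frame before it reaches an unmodified third-party encoder, keeping detail where a perceptual or saliency model says it matters and smoothing regions where extra bits would go unnoticed. Everything below (the blur, the hand-made saliency map) is a deliberately crude stand-in for the learned model:

```python
import numpy as np

def box_blur(frame, k=3):
    # Simple box blur used to discard detail the viewer is unlikely to notice
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame)
    h, w = frame.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def precode(frame, saliency):
    """Illustrative 'precoder': blend the original and a blurred copy per pixel,
    keeping full detail where `saliency` (0..1, from any perceptual model) is
    high and smoothing low-importance regions so the downstream AVC/HEVC/VP9
    encoder spends fewer bits there."""
    smoothed = box_blur(frame)
    return saliency * frame + (1.0 - saliency) * smoothed

# Toy usage: the encoder itself is untouched - it simply receives `prefiltered`
frame = np.random.rand(64, 64).astype(np.float32)
saliency = np.zeros_like(frame)
saliency[16:48, 16:48] = 1.0      # pretend only the centre region matters
prefiltered = precode(frame, saliency)
# hand `prefiltered` to any standard encoder (x264/x265/libvpx) as usual
```

Because the work happens entirely upstream, this kind of preprocessing slots in front of any compliant encoder without touching it, which is what makes a precoder attractive compared with swapping the codec.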
iSize Technologies CEO Sergio Grce says that consumers are increasingly investing in and consuming higher quality video content. In response, iSize develops software “that allows data-heavy video content to be compressed to a fraction of its original size, meaning content can be streamed faster and at a better quality.”
Testing has shown that the iSize patent-pending AI features can make encoding up to 500% faster. 
V-Nova is also developing AI-powered codecs, this time for contribution and remote production workflows. It teamed with Metaliquid, a video analysis solutions provider, to build PPro (previously PERSEUS Pro) into an AI-powered software library for encoding and decoding SMPTE VC-6 (ST-2117).
“VC-6 is an intra-only picture compression scheme which uses deep learning to deliver bitrate savings at optimum image quality with particular advantages for broadcasters in live remote production,” V-Nova explains.
Low latency CMAF to the rescue
Live streaming is undergoing a dramatic shift as more and more consumers turn to mobile phones, smart TVs and other connected devices to view and interact with video content. In normal times much of this would be live content such as sports, esports, concerts and news.
To provide the best possible live viewing experience, content creators and video providers need to reduce latency – still the biggest problem for video developers in 2019, as identified by Bitmovin’s survey. Almost 50% of the survey participants said they were going to implement low latency in the next 1-2 years, the majority “with realistic and achievable latency expectations of less than 5 seconds,” according to the company.
Poor latency can even lead to some viewers switching off completely – the average number of times a viewer will let a video re-buffer before they stop watching is falling steadily.
The issue stems from the fact that streaming systems weren’t typically designed or implemented with low latency in mind. Traditionally, the multiple components within the broadcast chain – processing, packaging, streaming and decoding – have each added latency, and each must be upgraded in order to reduce it. As broadcasters try to achieve this, machine-based processing and the Common Media Application Format (CMAF) could be the answer.
“Even though the standard was launched almost two years ago, it is only now gaining wider recognition among OTT operators as live streaming becomes more commonplace and the need to reduce latency becomes more pressing,” says Remi Beaudouin, Chief Strategy Officer at ATEME. 
CMAF aims to simplify the delivery of HTTP-based streaming media as a one-design-fits-all format. Because it promotes the use of the same media segments in both HLS and DASH, it can ultimately cut costs. It also has a low-latency mode that allows the encoder/packager to push video chunks, rather than relying on request-based delivery of whole video segments, as a way to lower streaming latency.
“This means that near-real-time delivery can get underway while later chunks are still processing,” says Beaudouin. “While CMAF’s main benefit is considered to be its ability to reduce latency, it also streamlines server efficiency by keeping workflows fairly simple.”
A combination of machine-based encoding and CMAF could reduce latency to less than one second, on a par with terrestrial and cable deliveries.
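A back-of-envelope calculation shows why chunked CMAF closes most of that gap. With classic segmented HLS or DASH the player typically waits for a whole segment to be produced and then buffers several more before starting; with chunked CMAF, sub-second chunks are pushed as they are encoded and far less is buffered. The figures below are illustrative assumptions, not measurements:

```python
# Back-of-envelope comparison of segment-based vs chunked-CMAF delivery delay.
# All durations and overheads are illustrative assumptions.

def glass_to_glass(unit_duration_s, units_buffered, encode_s=0.5, network_s=0.3):
    """Latency ~= time to produce the first deliverable unit + player buffer
    + fixed encode/packaging and network overheads."""
    return unit_duration_s + units_buffered * unit_duration_s + encode_s + network_s

segmented = glass_to_glass(unit_duration_s=6.0, units_buffered=3)   # classic 6 s segments, 3-segment buffer
chunked   = glass_to_glass(unit_duration_s=0.5, units_buffered=2)   # 500 ms CMAF chunks pushed as produced

print(f"segment-based HLS/DASH: ~{segmented:.1f} s")   # ~24.8 s
print(f"low-latency CMAF:       ~{chunked:.1f} s")     # ~2.3 s
```

Getting from there to the sub-second figure mentioned above is then a matter of shrinking chunk duration, buffer depth and encode overhead still further.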
HEVC latest
While the industry waits for encoders based around standards such as EVC, HEVC encoding tools are already available.
Among the latest is Vitec’s MGW Diamond OG, which can deliver up to ten UHD channels or 40 HD channels in a 2RU openGear card format. The card supports 4:2:2 10-bit encoding and multichannel audio, as well as Zixi, Pro-MPEG and SRT transport protection technologies for reliable AV and metadata transmission over lossy networks.
Suitable for live contribution and distribution applications, ZyCast’s 4K encoder offers real-time encoding with HEVC. According to the company, it delivers lower bandwidth utilisation and cost-savings while providing superior 4K video quality and motion fluency. HDR is supported as is video distribution over IP, DVB-T2 and DVB-C.
Spain’s Spin Digital has live encoding tools for 12K 360-degree video using a software real-time HEVC encoder, and it is even looking ahead to 16K immersive media resolutions. Real-time 8Kp60 encoding at just 48 Mbit/s “with broadcast-grade quality” was demonstrated at InterBEE in late 2019, and at ISE 2020 it showcased real-time encoding of 12K x 6K 360° video at 30 fps.
Rather than relying on partial streaming techniques (favoured by companies such as MediaKind and Harmonic), Spin Digital’s approach streams the entire panoramic video in HEVC at 50 Mbps using a low-latency streaming solution based on the RTP protocol. The firm also has a media player capable of processing up to 16K panoramic video on a single PC.
The EdgeCaster 4K HEVC/H.264 encoder from Videon is the first third-party encoder that is compatible with AWS Elemental MediaStore. Videon customers can now use AWS Elemental MediaStore as a direct ingest from the EdgeCaster to support low latency workflows with outputs supporting both HLS and DASH using CMAF. The company said AWS Elemental MediaStore’s support of EdgeCaster’s ingest protocols enables less than three-second worldwide latency to be achieved with standards-based, scalable, and cost-effective workflows.
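As a rough illustration of the MediaStore side of such a workflow (not how the EdgeCaster itself is implemented; the encoder pushes directly over HTTP), a CMAF chunk can be written into a MediaStore container with the mediastore-data client in boto3, using streaming upload availability so players can begin fetching the object before the write completes. The endpoint, path and file name below are hypothetical:

```python
import boto3

# Illustrative only: writing a CMAF media object into an AWS Elemental
# MediaStore container. The container endpoint and object path are made up.
CONTAINER_ENDPOINT = "https://example-container.data.mediastore.us-east-1.amazonaws.com"

client = boto3.client("mediastore-data", endpoint_url=CONTAINER_ENDPOINT)

with open("segment_00042.m4s", "rb") as chunk:        # hypothetical CMAF chunk
    client.put_object(
        Path="/live/stream1/segment_00042.m4s",
        Body=chunk,
        ContentType="video/mp4",
        # STREAMING lets clients start reading the object before the upload
        # completes, which is what enables chunked low-latency CMAF delivery.
        UploadAvailability="STREAMING",
        CacheControl="max-age=2",
    )
```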
Videon demonstrated this low latency Videon/AWS workflow during online streaming coverage of the 2019 FIVB Volleyball Championship held in Japan in October, where they delivered live video to hundreds of thousands of people.
“We are at an inflection point with video as OTT and linear TV blend,” says Todd Erdley, founder and CEO.  “Looking forward, we foresee OTT being an augmentation of the linear experience. Making that happen requires a low latency approach and the capability to seamlessly add more features and functionality as time goes on.”
