Wednesday 1 November 2017

Overcoming Network Congestion in OTT Video Content Delivery

Knect365 - Media + Networks

Failing to deliver live-streamed premium events to large-scale audiences can be a knock-out blow for operators, writes Adrian Pennington. The job is inherently difficult and notoriously unpredictable.
That difficulty and unpredictability were made clear by the recent Mayweather-McGregor boxing match in Las Vegas, for which rights holder ShowTime is now the subject of a class action brought by consumers who claim they didn’t receive the streaming experience they expected for their US$99.99 PPV fee.
Part of the issue lies in the fragmentation of an end-to-end delivery chain that often involves many companies, ranging from those that produce and ‘shape’ the content (picture quality, resolution, audio formats, ad-insertion markers, subtitles) to those that deliver the network packets containing it. For live streaming, the latter are the national or international content delivery networks (CDNs) feeding into broadband access networks such as cable, DSL, fibre, Wi-Fi or 3G/4G mobile. Mostly (but not always) these are all different companies, and unpicking problems is extremely tricky.
“The best Quality of Service (QoS) will be delivered through MVPD networks,” says Thierry Fautier, VP of Video Strategy at Harmonic, referring to multichannel video programming distributors (traditional pay-TV providers), which control the full delivery mechanism. “This can be achieved today in ABR (adaptive bitrate); however, it requires a commercial agreement between the virtual MVPD (vMVPD) and the MVPD who owns the IP network. If vMVPDs want to deliver a high-quality experience, they will have to follow in the footsteps of Netflix and install their own caches in ISP points of presence (PoPs). This only works for top bandwidth-consuming vMVPDs.”
In recent research, CDN Limelight Networks found that video rebuffering (when the video pauses during playback so it can reload) is the most frustrating aspect of online video viewing for consumers globally, followed by poor-quality video, waiting for the video to start playing, and the video being unavailable on the user’s device. When a video stops and rebuffers during playback, 21.6 percent of people worldwide will stop watching. If a video rebuffers twice, more than 61 percent will stop watching, and after three rebuffers 84.7 percent of the audience is lost, leaving only 15.3 percent still viewing.
The class action against ShowTime, on the other hand, alleges not just buffering but a less-than-HD (sub-1080p) service, as blurry images apparently taken from the claimant’s screen appear to show.
“Buffering is always an issue for people watching live events. In fact, that’s why HTTP-based (HLS and MPEG-DASH) live streaming was created – to create a buffer-less experience that used server-based caching to ensure seamless playback of a live event,” says Chris Michaels, Director of Communications at Wowza Media Systems. “However, caching introduces latency for the delivery, meaning that streaming customers could watch something anywhere from 30 seconds to a full two minutes behind an over-the-air experience.”
He continues, “As some stream providers look to mitigate the latency, they’ve tuned down their HTTP delivery to have smaller chunks, which may cause CPU issues for origin servers, and can complicate things when there are service interruptions or network congestion. Essentially, in the event of an interruption, customers could experience players needing to rebuild the playlist to get the required number of chunks for playback.”
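As a rough illustration of the trade-off Michaels describes, the back-of-the-envelope sketch below estimates how far behind live a segmented HTTP stream sits, given the segment duration and the number of segments a player buffers before starting playback. It is an illustrative model only; the buffer depth and overhead values are assumptions, not figures from the article.

# Rough model of latency in segmented HTTP streaming (HLS/MPEG-DASH).
# Assumptions (not from the article): the player buffers `buffered_segments`
# full segments before starting, and encode/package/CDN overheads are fixed.

def estimated_live_latency(segment_duration_s: float,
                           buffered_segments: int = 3,
                           encode_package_s: float = 2.0,
                           cdn_propagation_s: float = 1.0) -> float:
    """Very rough end-to-end delay behind the live edge, in seconds."""
    # A segment cannot be published until it has been fully recorded,
    # so one full segment duration is always baked in.
    segmentation_delay = segment_duration_s
    # The player then waits for several segments before starting playback.
    player_buffer = buffered_segments * segment_duration_s
    return segmentation_delay + player_buffer + encode_package_s + cdn_propagation_s

if __name__ == "__main__":
    for seg in (10.0, 6.0, 2.0):  # classic HLS default vs. shorter, 'tuned down' chunks
        print(f"{seg:>4.0f}s segments -> roughly {estimated_live_latency(seg):.0f}s behind live")

Under these assumptions, ten-second segments with a three-segment start buffer land in the 40-second range Michaels alludes to, while two-second chunks cut that to around ten seconds, at the cost of far more requests and playlist updates for the origin and CDN to handle.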
Video issues can crop up at multiple locations in the end-to-end chain, and usually once one bottleneck has been isolated, the next soon springs up. Fluctuating and peak demand can cause unpredictable results, especially where third-party CDNs are delivering other voice, data and video traffic at the same time. Using multiple CDNs can help, but the bottleneck may simply move to the access networks, where it is not so easy to dynamically switch suppliers.

Demand vs. scale
“If it’s local, the question is do you have enough bandwidth and redundant servers to intelligently balance the load?” asks Michaels. “Likewise, for global delivery, are you prepared for regional spikes? Something like the Champions League Final (insanely popular globally) or a World Series game (more regionally popular) could vary in demand from fans not only in home markets but around the world.”
Bandwidth is critical. Congestion or unstable networks can cause problems at the origin (ingress), in the midgress and even in the last mile to the customer. Sending from a sub-optimal network could result in drops and stream failures; receiving on a sub-optimal (or highly congested) network could result in buffering, failed connections from the player, poor image quality and dropped packets (choppy video).
Then there is quality. Most workflows now use ABR streaming to deliver enough renditions for customers to watch on any kind of device. “However, some ABR delivery systems require you to send all renditions from origin into your CDN or distribution workflow,” explains Michaels. “This means more bandwidth is required at the point of origin unless you’re using a transcoding service to then point to your CDN. Even then, you still have to send at least a 1080p or 4K stream if you’re trying to reach large TVs.”
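To put illustrative numbers on that origin-bandwidth point, the sketch below compares the uplink needed to push a full ABR ladder from the venue or origin with sending only the top rendition and transcoding downstream. The ladder and bitrates are hypothetical assumptions, not figures from Wowza or the article.

# Hypothetical ABR ladder (rendition name -> video bitrate in Mbps).
# These numbers are illustrative assumptions, not figures from the article.
ABR_LADDER = {
    "1080p": 6.0,
    "720p": 3.0,
    "480p": 1.5,
    "360p": 0.8,
    "240p": 0.4,
}

def origin_uplink_full_ladder(ladder: dict[str, float]) -> float:
    """Uplink needed if every rendition is pushed from the origin or venue."""
    return sum(ladder.values())

def origin_uplink_single_rendition(ladder: dict[str, float]) -> float:
    """Uplink needed if only the top rendition is sent and a downstream
    transcoding service generates the rest before the CDN."""
    return max(ladder.values())

if __name__ == "__main__":
    print(f"Push all renditions from origin: ~{origin_uplink_full_ladder(ABR_LADDER):.1f} Mbps uplink")
    print(f"Single top rendition + downstream transcode: ~{origin_uplink_single_rendition(ABR_LADDER):.1f} Mbps uplink")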
Where does it go wrong?
The challenge of figuring out where the video feed is going wrong is highly complex, and takes a combination of passive monitoring (watching ‘on the wire’) and active testing to check the availability of the streams in different regions.
“Ideally, we want to provide an early warning system for video problems such as bad picture quality, accessibility errors, buffering and outages,” says Stuart Newton, VP Strategy & Business Development at Ineoquest, part of Telestream. “This includes testing immediately after the content is produced and packaged, and then periodically at multiple geographic locations after it leaves the content delivery networks (in data centres, on premise or in the cloud). Sampled coverage testing at the edge of access networks, whether broadband cable, Wi-Fi or cellular, must also be part of the matrix.”
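A minimal example of the kind of active testing Newton describes might look like the sketch below, which fetches an HLS playlist from a monitoring location and records availability and response time. The playlist URL, polling interval and pass/fail logic are all hypothetical assumptions for illustration.

# Minimal active-probe sketch: fetch a (hypothetical) HLS playlist from a
# monitoring point and report availability, response time and segment count.
import time
import urllib.request

PLAYLIST_URL = "https://cdn.example.com/live/event/index.m3u8"  # hypothetical endpoint

def probe_playlist(url: str, timeout_s: float = 5.0) -> dict:
    """Fetch the playlist once and summarise how healthy it looks."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            return {
                "ok": resp.status == 200 and "#EXTM3U" in body,
                "status": resp.status,
                "latency_s": round(time.monotonic() - start, 3),
                "segments_listed": body.count("#EXTINF"),  # media segments advertised
            }
    except OSError as exc:  # covers URLError, timeouts and socket errors
        return {"ok": False, "error": str(exc), "latency_s": round(time.monotonic() - start, 3)}

if __name__ == "__main__":
    for _ in range(3):      # in practice this would run continuously from many regions
        print(probe_playlist(PLAYLIST_URL))
        time.sleep(10)      # arbitrary sampling interval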
“Most solutions focus mostly on the last mile and not much on the first mile,” says Michaels. “You have to know the health of your encoder, transcoder, the entire performance through a network and finally playback. The only way to do this is to install tracking solutions throughout the workflow, and hope that they can talk to one another. Even then, most of the reporting makes for a great post-mortem to know where a failure occurred, not to prevent one.”
The need for standards
The market arguably needs to drive towards standards-based, broad-scale, low-latency delivery. Emerging standards such as SRT (Secure Reliable Transport) for the first mile and possibly the CDN, and Google’s QUIC for the last mile, are addressing these needs.
For CDN ingress and within the CDN, SRT, developed by Haivision and made open source in April 2017, has been endorsed by over 50 companies spanning internet streaming (Wowza, Limelight, etc.) and broadcast contribution (Harmonic, Eurovision, etc.) markets and applications.
SRT allows for a higher level of encryption (128- and 256-bit) around the stream and also alleviates much of the packet loss and many of the bandwidth constraints associated with traversing sub-optimal networks.
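The sketch below is a deliberately simplified model of the latency-bounded retransmission idea that protocols like SRT rely on; it is not the SRT implementation or wire format. A receiver re-requests missing packets only while a retransmission could still arrive inside a fixed latency window, otherwise it skips them so that delay stays constant. The window and round-trip values are arbitrary.

# Conceptual model (not the real SRT protocol): track missing sequence numbers
# and only re-request them while they can still arrive within the latency window.
import time

LATENCY_WINDOW_S = 0.120   # illustrative 120 ms receive buffer
RTT_ESTIMATE_S = 0.030     # illustrative round-trip time

class LossTracker:
    def __init__(self) -> None:
        self.expected_seq = 0
        self.missing: dict[int, float] = {}   # seq -> time the gap was first noticed

    def on_packet(self, seq: int) -> list[int]:
        """Record an arriving packet; return sequence numbers worth re-requesting."""
        now = time.monotonic()
        # Any gap between the expected and received sequence numbers is a loss.
        for lost in range(self.expected_seq, seq):
            self.missing.setdefault(lost, now)
        self.expected_seq = max(self.expected_seq, seq + 1)
        self.missing.pop(seq, None)           # a late retransmission filled a gap

        nak_list = []
        for lost, first_seen in list(self.missing.items()):
            if (now - first_seen) + RTT_ESTIMATE_S <= LATENCY_WINDOW_S:
                nak_list.append(lost)         # still time for a retransmission
            else:
                del self.missing[lost]        # too late: skip it to keep latency fixed
        return nak_list

if __name__ == "__main__":
    tracker = LossTracker()
    for seq in (0, 1, 2, 5, 6, 3, 7):         # packets 3 and 4 go missing; 3 arrives late
        print(seq, "-> re-request:", tracker.on_packet(seq))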
“SRT seems to be well on its way to replacing RTMP for ingest due to its low latency, bandwidth optimization, firewall traversal, and security,” says Peter Maag, CMO, Haivision. “The market really hasn't settled on the optimal framework for CDN egress to the player. A number of companies including Akamai seem to have a formula but broad standards are needed in this area.”
Michaels points out that, on the playback side, RTMP currently requires Flash for playback, which will cause issues since it’s no longer supported on many devices. Similarly, SRT isn’t yet supported by players natively, and will require some customization or transcoding. He points to his firm’s own protocol, WOWZ (supported by the Wowza Player), as a replacement for low-latency Flash workflows.
“If somewhat higher latency is acceptable for the end-user experience, use HTTP streaming (e.g. HLS or MPEG-DASH),” he suggests. “You may try a single-bitrate ingest to the cloud to alleviate a lot of bandwidth issues from the venue, and then transcode/transrate on the origin server, or at the edge closest to the customer. This might be beneficial for content that is regionally popular but might not have global appeal (such as an England vs. New Zealand rugby match that may have high demand in those specific countries).”
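One way to realise the single-bitrate-ingest pattern Michaels suggests is to run the rendition transcodes at the origin with a tool such as FFmpeg. The sketch below is only an assumption-laden outline: the ingest URL, output paths, ladder and segment length are hypothetical, and the exact options should be verified against your own FFmpeg build and workflow.

# Sketch of 'single ingest, transcode at the origin': one FFmpeg process per
# rendition, each producing an HLS variant playlist. Paths/URLs are hypothetical.
import subprocess

INGEST_URL = "rtmp://origin.example.com/live/event"   # hypothetical single-bitrate ingest
LADDER = [("720p", "1280x720", "3000k"),
          ("480p", "854x480", "1500k"),
          ("360p", "640x360", "800k")]

def build_ffmpeg_cmd(name: str, size: str, bitrate: str) -> list[str]:
    """Command line for one rendition of the ladder (assumes FFmpeg is installed)."""
    return [
        "ffmpeg", "-i", INGEST_URL,
        "-c:v", "libx264", "-b:v", bitrate, "-s", size,
        "-c:a", "aac", "-b:a", "128k",
        "-f", "hls", "-hls_time", "6",                # six-second segments (arbitrary)
        f"/var/www/live/{name}/index.m3u8",           # hypothetical output path
    ]

if __name__ == "__main__":
    procs = [subprocess.Popen(build_ffmpeg_cmd(*rung)) for rung in LADDER]
    for p in procs:
        p.wait()

A master playlist referencing the variant playlists would then be published alongside them so players can switch between renditions.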
“The industry needs new techniques for live use cases, based on standards,” demands Fautier. He points to the DVB’s work on an ABR multicast protocol that can be deployed not only on managed networks, but also on the public internet. A standard should be available by early 2018 to cover those two use cases. 
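As a purely conceptual aside on why multicast changes the economics, the sketch below uses plain UDP multicast in Python; it is not the DVB specification or an ABR implementation. The sender transmits each chunk once to a group address, and every receiver that has joined the group gets it without adding per-viewer load at the origin. The group address, port and payload are arbitrary test values.

# Conceptual UDP multicast sketch (not DVB's ABR multicast spec): one send
# reaches every receiver that joined the group, so origin load stays flat
# as the audience grows.
import socket
import struct
import sys

GROUP, PORT = "239.1.2.3", 5004   # arbitrary administratively-scoped group

def sender() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(b"segment-0001 chunk", (GROUP, PORT))   # one transmission, many receivers

def receiver() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group so the network delivers the group's traffic here.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = sock.recvfrom(2048)
    print("received", data, "from", addr)

if __name__ == "__main__":
    receiver() if "--recv" in sys.argv else sender()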
