Friday 13 October 2017

Can Better Monitoring Prevent a Replay of Class Action Against Showtime?

Streaming Media

Showtime faces a class action after problems with the Mayweather-McGregor fight stream. Continual, end-to-end testing might help mitigate future brand damage when the issue is not with the content provider.

The class action against Showtime for allegedly failing to deliver on its as-advertised quality of service for the Mayweather vs McGregor fight in August has alarm bells ringing across the industry. If the internet cannot handle large volumes of concurrent video streams, will rights holders think twice about the claims made for OTT sports in future? More pertinently, can anything be done about it?
http://www.streamingmediaglobal.com/Articles/ReadArticle.aspx?ArticleID=121082
"A global show on the scale of a Super Bowl cannot be live streamed today to everyone online," says Charles Kraus, senior product marketing manager at Limelight Networks. Recent Super Bowls have topped 110 million broadcast viewers. "The internet would have to grow by an order of magnitude in capacity to support streaming for everyone."
Kraus pinpoints the main issue as congestion in the last mile, "over which nobody has control." "The average bandwidth in the US is 10Mbps, [and in] the UK [it's] 20Mbps, but you need at least 30Mbps to deliver 4K. Even where 4K is advertised (by Netflix, Hulu) and people pay a premium for it, you never hear these providers state that the end-user's ability to receive this will depend on your ISP network."
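To put those numbers in context, a back-of-the-envelope check in Python, using only the figures Kraus quotes, shows how far an average connection falls short of the 4K threshold:

```python
# Back-of-the-envelope check using the figures Kraus quotes (values in Mbps, assumed from the quote above).
AVERAGE_BANDWIDTH_MBPS = {"US": 10, "UK": 20}
REQUIRED_4K_MBPS = 30  # the minimum Kraus cites for delivering 4K

for region, available in AVERAGE_BANDWIDTH_MBPS.items():
    shortfall = REQUIRED_4K_MBPS - available
    print(f"{region}: {available} Mbps available, {shortfall} Mbps short of the 4K minimum")
```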
While latency and buffering are the perennial headaches for live-stream OTT providers, particularly of premium sports, the main complaint of Zack Bartel of Oregon was that he'd paid Showtime $99.99 for an HD 60fps experience and wasn't getting it on his Apple TV. He had the foresight to capture screenshots of what appear to be blurry pictures of the fight and included these as evidence in the legal documents, apparently filed day and date with the fight on August 26. He ran a speed test to make sure the issues weren't being caused by a bad home internet connection (speed test results also included) and tested YouTube and Netflix at the same time, which streamed "in crystal clear HD, as usual."
The claim argues, "In hopes of maximizing profits, defendant [Showtime] rushed its pay-per-view streaming service to market, without securing enough networking bandwidth to support the number of subscribers who paid to watch the fight." It cites Showtime's use of HLS (HTTP Live Streaming) and VBR (variable bitrate) as not being equal to the task:
"Defendant knew and should have known its system wasn't able to conform to the qualify defendant promised its customers, based on defendant's available bandwidth and subscriber numbers. Instead of being upfront with consumers about its new, untested, underpowered service, defendant …. intentionally misrepresented the quality and grade of video consumers would see using its app…"
It's a class action that could see Showtime sued for millions of dollars in returned PPV fees. Neulion, the cable network's official live streaming platform for the fight, has kept its head down since the event, after promoting its participation extensively in the run-up.
"The big challenge is that if consumers in a particular region or on a particular access network are hit by poor service, but the content provider is providing perfect packaged content into the network, then who is to blame?" says Stuart Newton, VP strategy & business development at Ineoquest, which is owned by Telestream.  "Ultimately, the content provider brand is damaged, and, as they are the company charging the pay-per-view fee, they are the ones that receive the wrath of the customers. The next question is how much was the customer affected? Did they get a few 'annoying' glitches caused by a particular issue in the delivery chain, or was the content totally unwatchable due to a major failure? Compensation needs to be appropriate, and the only way to do that is by knowing who was affected and how badly."
Is it possible or desirable to pinpoint the exact point of failure during a particular live stream, and therefore for the rights holder to hold that vendor or service partner accountable?
According to Newton, this depends on how many points of monitoring are in place, and how well integrated the end-to-end systems are.  The more data silos there are, the longer it will take to pinpoint where the actual error is.
"This is why it's becoming critical to integrate operational management systems with client analytics solutions to provide near-real-time impact analysis, and initial deployments are starting to happen now," he says.
"It also makes sense to test after configuration changes (which are happening all the time whether you know it or not), and certainly testing when new protocols, resolutions, or network upgrades are instigated. It basically means you need to test continually—there is always something changing that you won't be aware of, even if you own the delivery networks. Test, and then keep testing," Newton says.
Checking video availability at many geographical locations is key to understanding where the issue originated (in the preparation stage, a third-party CDN, or the access network?), and it will help mitigate future brand damage if the issue was not with the content provider.
"Having the data allows for negotiation with the CDN and access network providers—the worst possible situation is not knowing what to fix for the next major event," says Newton.
As the impact and root-cause knowledge becomes greater and more real-time, the case for more automation and self-healing video networks also becomes stronger. The good news is that this is also happening due to advances in the use of dynamic orchestration for cloud and virtualization (especially network functions virtualization [NFV]). As functions in the delivery chain become virtualized, they are evolving to have advanced control and configuration capabilities, as well as new scaling capabilities leaning towards microservice-based architectures.
"Next-generation video services will be able to 'spin up' an encoder on demand for a new channel, or as a failover mechanism for an existing channel," says Newton. "It's also possible to dynamically deploy the necessary monitoring needed as a micro-ervice, so the orchestration system knows in real-time when something is wrong and can take the appropriate action. In reality, this means that real-time monitoring is becoming an essential part of the system – you can't take corrective action unless you know what's happening. Driving self-healing services from Twitter feedback is not really practical."
As Limelight's Kraus points out, CDN technology will improve and bandwidth capacity will increase. But traffic is likely to increase just as quickly as audiences grow.
"Advances in video compression can also help provide premium video quality at lower bitrates," says Thierry Fautier, VP, Video Strategy, Harmonic. He describes Harmonic's EyeQ, as a ‘live content-aware encoding technology' addressing this. The firm claims to have shown up to 50 percent bitrate reduction for OTT HD profiles while keeping 100 percent compliance with H.264.
However, Fautier says, "today we see the limitations of the internet when traffic is just a few percent of broadcast. Unicast does not scale." He contends that the industry needs new techniques for live use cases, based on standards.
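The arithmetic behind that claim is stark: unicast load grows linearly with the audience, while a broadcast or multicast stream is carried once per network branch regardless of audience size. A rough worked example with assumed figures:

```python
# Assumed figures for illustration only.
VIEWERS = 5_000_000     # hypothetical concurrent live audience
BITRATE_MBPS = 5        # a typical HD ABR rendition

unicast_load_tbps = VIEWERS * BITRATE_MBPS / 1_000_000
multicast_load_mbps = BITRATE_MBPS  # one copy per network branch, independent of audience size

print(f"unicast aggregate load: {unicast_load_tbps:.0f} Tbps")
print(f"multicast load per branch: {multicast_load_mbps} Mbps")
```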
Peter Maag, CMO at Haivision, agrees the market needs to drive towards standards-based, broad-scale, low-latency delivery. He cites emerging standards such as SRT (Secure Reliable Transport), developed by Haivision (for the first mile and possibly the CDN), and QUIC from Google (for the last mile).
"In my opinion, although still in development, today's internet is more capable to reach truly global audiences," Maag says. "With increases in quality, latency, and reliability throughout the workflow imminent, combined with the richness of content and the ubiquity, linear broadcast's days are numbered."
The DVB is also working on an ABR multicast protocol that can be deployed not only on managed networks, but also on the public internet. A standard should be available by early 2018 to cover those two use cases.
Fautier, who is part of the Ad Hoc Sub-Group of CM-AVC (which is tasked with defining the Commercial Requirements for ABR Multicast), says "ABR streaming delivered over the top on managed networks is based on unicast technology, making it difficult to scale for live applications. To resolve this issue, the DVB has decided to develop a specification that will enable ABR multicast distribution via any bidirectional IP network, including telco, cable, and mobile."
