Tuesday 8 November 2022

From Cloud to Edge and Back Again: What Comes Next in Live Streaming

NAB

With video accounting for over 80% of all internet traffic, challenges to bandwidth utilization and escalating bandwidth and energy costs must be resolved.


Live streaming already accounts for some 60% of downstream internet traffic, and for these applications in particular the industry is turning to edge computing to help reduce latency and bandwidth costs by bringing processing and storage closer to users.

If service providers want to meet increasing consumer expectations to interact with video and to capitalise on fabled next-gen applications like VR, massively multiplayer mobile gaming and multi-view video, then edge computing will give them a significant, well… edge.

“These [next-gen] applications are delay-intolerant and require real-time response to maintain users’ quality of experience,” says Naren Muthiah, who leads strategy and business design functions for Cox Edge, an edge cloud service from Cox Communications. “They are also bandwidth-hungry, resulting in escalating bandwidth costs and energy consumption.”

Having barely got used to transferring storage and compute processes out of their own facilities and into the cloud, the next step for service providers keen to catch revenue from gaming or ultra-low-latency metaversian activations is to work with communications service providers and CDNs at the network edge.

With edge computing, cloud providers configure edge servers in last-mile data centers as part of their CDN services. Content providers deliver streams to the edge servers closest to the user. 

“Since the edge network is generally fewer hops away from the user, requested views can be streamed with minimal delay from edge servers,” explains Muthiah. “This can reduce latency compared to streaming directly from the content provider.”
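
To make that routing idea concrete, here is a minimal Python sketch (purely illustrative, with made-up hostnames rather than any real CDN's API) of a client picking the edge server with the lowest round-trip time. In practice CDNs usually steer viewers with DNS or anycast rather than client-side probing, so treat this as a sketch of the principle, not a production design.

import socket
import time

# Hypothetical last-mile edge servers a CDN might expose (example names only).
EDGE_SERVERS = [
    "edge-nyc.example-cdn.net",
    "edge-chi.example-cdn.net",
    "edge-lax.example-cdn.net",
]

def measure_rtt(host: str, port: int = 443, timeout: float = 1.0) -> float:
    """Return the TCP connect time to a host in seconds (inf on failure)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def nearest_edge(servers: list[str]) -> str:
    """Pick the edge server with the lowest round-trip time from this client."""
    return min(servers, key=measure_rtt)

if __name__ == "__main__":
    best = nearest_edge(EDGE_SERVERS)
    print(f"Requesting the live stream from {best}")
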

One advantage of edge computing is that providers can offload the generation of different video streams to the edge servers. Such ‘virtual view’ generation can also be adapted to suit bandwidth conditions and resources at the edge or on the client’s device to optimise QoE.
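
A rough sketch of that adaptation logic might look like the following; the rendition ladder, names and headroom factor are illustrative assumptions rather than figures from Cox Edge or any particular provider. The edge simply serves the best view that fits within the viewer's measured throughput.

from dataclasses import dataclass

@dataclass
class Rendition:
    name: str
    width: int
    height: int
    bitrate_kbps: int  # bitrate the edge would generate or serve for this view

# Hypothetical ladder of 'virtual views' an edge server could produce,
# ordered from highest to lowest bitrate.
LADDER = [
    Rendition("1080p", 1920, 1080, 6000),
    Rendition("720p", 1280, 720, 3000),
    Rendition("480p", 854, 480, 1200),
    Rendition("240p", 426, 240, 400),
]

def pick_rendition(throughput_kbps: float, headroom: float = 0.8) -> Rendition:
    """Choose the highest-quality view that fits within measured bandwidth,
    leaving headroom so playback does not stall on small fluctuations."""
    budget = throughput_kbps * headroom
    for r in LADDER:
        if r.bitrate_kbps <= budget:
            return r
    return LADDER[-1]  # fall back to the lowest rung

print(pick_rendition(4500).name)  # prints "720p" with the default headroom
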

Edge compute vendor Videon is naturally bullish on the potential of the technology to supercharge interactive content.

“Edge computing for live video streaming enables a host of flexible and reliable cloud computing functions to be brought to the point where video is created,” says Todd Erdley, founder and president. “Placing this capability as close to the video source as possible simplifies live video workflows, reduces latency, enables faster deployment of standardized protocols across networks of devices, and eliminates unnecessary operational costs.”

He believes that edge computing empowers live video broadcasters and content creators to continue innovating and building the functions and capabilities they need to create customized live video workflows: “Rather than dictating how they should operate, edge computing is an innovation-enabler that helps media companies shape their present and future.”

For example, he says that sports leagues and broadcasters can use edge computing capabilities to get more feeds into their production workflows without having to spend huge amounts on additional encoding equipment while also decreasing operational costs. Using edge computing at the point of video origination in concert with cloud computing “improves quality resulting from needing to encode only once vs. twice with the traditional encoder or cloud workflow.”

By doing more at the edge rather than in the cloud, certain video use cases can halve OPEX costs, claims Videon, and reduce latency from tens of seconds to under 200ms.

In this scenario edge compute is complementary to workflows and processes – such as encoding – in the cloud. Taking a hybrid approach by using an edge computing platform to augment the cloud gives end-users, developers, and media companies the freedom and flexibility they require.
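
As an illustration of what “encode once at the edge” could look like, the sketch below wraps ffmpeg from Python to compress a camera feed a single time on the edge device and push the result to a cloud origin that only packages it. The RTMP endpoint, source URL and encoder settings are assumptions for the example, not Videon's actual workflow.

import subprocess

# Hypothetical ingest point for a cloud origin that only re-packages the stream.
CLOUD_ORIGIN = "rtmp://origin.example.com/live/stream-key"

def encode_once_at_edge(source: str, destination: str = CLOUD_ORIGIN) -> None:
    """Encode the live source a single time on the edge box and push the
    already-compressed stream upstream, so the cloud never re-encodes it."""
    cmd = [
        "ffmpeg",
        "-i", source,         # e.g. an RTSP/NDI/SDI camera feed
        "-c:v", "libx264",    # the one and only video encode, done at the edge
        "-preset", "veryfast",
        "-b:v", "5M",
        "-c:a", "aac",
        "-f", "flv",          # RTMP expects an FLV container
        destination,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    encode_once_at_edge("rtsp://camera.local/stream")  # hypothetical source
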

Telecom consultants STL Partners forecast that the edge market will be worth $180bn by 2025 across all industries. The firm also reckons there will be 1,500+ network edge data centres built by telecoms operators alone by 2025. Non-telcos, including CDNs like Akamai and Oxford, UK-based Proximity Data Centres, are also building out edge services. Demand is being driven by the main hyperscale cloud providers: Microsoft, Amazon, Google and Alibaba.

“Edge computing essentially allows companies to access the benefits of cloud closer to the end-user,” it states. “The possibility of various business models when monetising edge computing is attractive.”

Will the edge really get to the network edge?

Analyst firm IDC defines the edge as “the multiform space between physical endpoints … and the core” and defines the core as the “backend sitting in cloud locations or traditional datacenters.”

Hewlett Packard Enterprise says edge computing is “a distributed, open IT architecture” that decentralizes processing power. It says edge computing requires that “data is processed by the device itself or by a local computer or server, rather than being transmitted to a data centre.”

Both definitions suggest that edge computing isn’t meant to occur at the data center, prompting the question: will large data centers continue to be the go-to model for both storage and delivery?

Based on responses to a recent State of Streaming survey produced by Streaming Media, the answer will involve a mix of small regional data centers focused on smaller towns and cities alongside the behemoth data centers located near major metropolitan areas.

According to the survey, industry respondents overwhelmingly (65%) plan to adopt an approach based on smaller, regional data centers for their edge strategies.

Tim Siglin, founding director of the not-for-profit Help Me Stream Research Foundation, points out that the digital divide – the gap between those who can and those who cannot access fast broadband, often in rural or poorer communities – was thrown into stark focus during the pandemic.

“In rural areas local homes might have plenty of Wi-Fi and maybe even decent downlink connectivity to allow viewing of cached on-demand content, but almost no ability to backhaul (meaning limited ability to participate in Zoom classes, business meetings, or FaceTime with relatives),” he says.

He provides figures suggesting that this is a massively under-served market, so that even if there’s no political or moral motivation to bring the edge closer to home, there’s a compelling commercial argument.

“According to World Bank estimates, on a global scale, approximately 44% of people live in remote or rural locations. While each of the individual pockets of connectivity is small, and the backhaul for live-event streaming from these rural areas may be expensive, in aggregate, this is not a small population that can be ignored from either a societal or economic standpoint.”

He recruits Steve Miller-Jones, VP of product strategy at Netskrt, a company interested in getting content to the far reaches of the internet – “the hard-to-reach parts of the internet,” which include not just remote or rural locations but also moving targets such as airplanes, buses, and trains.

“If we come back to how we think about capacity planning for large events or capacity planning going out 5 or 10 years in the future,” says Miller-Jones, “the statistical relevance sphere is about large populations that are well-connected. But there’s 44% of the world that, from a single-event standpoint, may not be significant, but it is relevant to our content provider and their access to audience, their increase of subscribers, churn rate, and even revenue.”

Trains have sizable captive audiences to access. On UK domestic services alone, with 3,500 trains and an average journey lasting just under 2 hours, that’s significant access to nearly 2 billion passenger trips annually on domestic rail, says Siglin. [Well in theory, when those trains run and there aren’t strikes, Tim]

“So, it makes sense why solving the challenge of edge computing in this scenario means access to a significant and previously untapped audience that’s probably more inclined to consume content as they hurtle across the countryside.”

Data centers are shrinking, with smaller local data centers springing up in towns. The edge can also live in mini data centres in the mobile network – combining the benefits of edge with 5G.

 

Edge concept 101

To explain the concept, let’s start with a 10,000ft view using an analogy from Videon. Consider the first generation of mobile phones. The ability to make calls while moving or away from a fixed location was a significant improvement over the landline. Yet, the use case was centered around voice conversations — and later simple text messages. These phones were technologically more advanced devices than landlines — but all the functions were built into the hardware by the manufacturer. If you wanted more or different features, you had to buy another phone.

Now compare that to the first smartphones. Although the initial use case was very similar, the difference was the ability to use applications. It’s true the first applications that smartphones shipped with were voice and text. But with a simple download from an app store, the smartphone became a mapping tool, web browser, taxi booking service, baby monitor – and today, a plethora of different use cases. Voice calls and texting were just an application – rather than the sole purpose of the device.

This is the central tenet of edge computing for live video. Instead of having a fixed device that does just one video processing task, you can deploy a device that can do anything you program it to do based on its available processing power, storage, and software applications.

Place this capability as close to the video source as possible to reduce transit cost and latency. If a workflow needs to change, then adapt the software rather than replace the device. To help mass adoption, just like the smartphone, it needs to be a simple and reliable ‘appliance’ rather than a traditional PC to avoid managing the complex layers of disparate hardware, drivers, operating systems, and peripherals.
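
To make that “change the software, not the device” idea concrete, here is a minimal sketch – entirely hypothetical, not any vendor’s firmware – of a frame-processing pipeline whose stages are selected by configuration, so a new workflow is a config edit rather than a hardware swap.

from typing import Callable, Dict, List

Frame = bytes  # stand-in for a decoded video frame

# Registry of processing stages the edge appliance knows how to run
# (placeholder implementations for illustration only).
STAGES: Dict[str, Callable[[Frame], Frame]] = {
    "deinterlace": lambda f: f,
    "overlay_score": lambda f: f,
    "watermark": lambda f: f,
}

def build_pipeline(config: List[str]) -> Callable[[Frame], Frame]:
    """Compose the stages named in the config into one callable pipeline."""
    stages = [STAGES[name] for name in config]

    def run(frame: Frame) -> Frame:
        for stage in stages:
            frame = stage(frame)
        return frame

    return run

# Today's workflow adds a score bug; tomorrow's might swap in a watermark
# by editing this list instead of shipping new hardware.
pipeline = build_pipeline(["deinterlace", "overlay_score"])
processed = pipeline(b"\x00" * 1024)
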

 


