Monday 17 May 2021

Monitoring OTT: The new challenges monitoring ‘all IP’ brings

Copy written for TAG Video Systems

Performing automated analysis of video and data on thousands of signals while keeping costs down requires sophisticated Adaptive Monitoring, explains TAG Video Systems

p50   http://europe.nxtbook.com/nxteu/lesommet/inbroadcast_202105/index.php?startid=50#/p/50  

Satellite, cable, and telco operators are increasingly using OTT delivery to supplement and even replace traditional media delivery methods, but monitoring doesn’t get any easier. To maintain a high quality of experience for their customers, operators need a way to monitor hundreds, sometimes thousands, of channels without compromising real-time error detection. In most cases, the immense scale of their service offerings makes continual visual monitoring of all streams impossible.

In contrast, SMPTE ST 2110-based IP workflows are relatively straightforward. ST 2110 enables the transport of independent streams of video, audio, and metadata, with as many as you want associated with a program. Aside from monitoring the fundamentals of IP transport, such as jitter and packet loss, ST 2110 requires the raw essence of each component to be synchronized. Taking just one example, audio timing is vital. For lip sync, of course, but imagine doing a live mix from The Proms, when the audio from every mic in the orchestra pit has to be phased accurately.
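
To make that timing requirement concrete, here is a minimal sketch, assuming access to the RTP timestamps carried by the video (90 kHz clock) and audio (48 kHz clock) essence streams; the function names and the tolerance figure are illustrative, not part of any product.

```python
# Sketch: estimate audio/video offset from ST 2110 RTP timestamps.
# Both media clocks are assumed to be locked to the same PTP epoch,
# as ST 2110-10 requires; all names here are illustrative.

VIDEO_CLOCK_HZ = 90_000   # RTP clock rate for ST 2110-20 video
AUDIO_CLOCK_HZ = 48_000   # typical RTP clock rate for ST 2110-30 audio

def rtp_to_seconds(timestamp: int, clock_hz: int) -> float:
    """Convert an RTP timestamp to seconds on its media clock."""
    return timestamp / clock_hz

def av_offset_ms(video_ts: int, audio_ts: int) -> float:
    """Apparent audio-to-video offset in milliseconds.

    Positive means audio leads video. Real code must also handle
    32-bit RTP timestamp wraparound, ignored here for brevity.
    """
    return (rtp_to_seconds(audio_ts, AUDIO_CLOCK_HZ)
            - rtp_to_seconds(video_ts, VIDEO_CLOCK_HZ)) * 1000.0

# Alarm if lip sync drifts past a threshold (illustrative; the phase
# tolerance between mics in an orchestra mix is far tighter).
LIP_SYNC_LIMIT_MS = 15.0

if abs(av_offset_ms(2_700_000, 1_440_000)) > LIP_SYNC_LIMIT_MS:
    print("ALARM: audio/video timing outside tolerance")
```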

“More streams (essences) mean more monitoring points,” says Alain Beauvais, Director of Service and Support, TAG Video Systems. “Plus, we also have to keep an eye on the Precision Time Protocol to synchronize clocks in the network so everything stays in time and locked.”
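
A hedged sketch of the kind of PTP watchdog Beauvais describes: `read_ptp_offset_ns` is a stand-in for however a given deployment exposes the local clock’s offset from the grandmaster (for example, by querying the local PTP daemon), and the tolerance is illustrative.

```python
import time

# Sketch: alarm when the local clock's offset from the PTP grandmaster
# drifts past a tolerance. `read_ptp_offset_ns` is a placeholder.

OFFSET_LIMIT_NS = 1_000  # illustrative tolerance: 1 microsecond

def read_ptp_offset_ns() -> int:
    """Placeholder: return the current offset from master in nanoseconds."""
    return 250  # stubbed value so the sketch runs standalone

for _ in range(10):          # a real watchdog would loop forever
    offset = read_ptp_offset_ns()
    if abs(offset) > OFFSET_LIMIT_NS:
        print(f"ALARM: PTP offset {offset} ns exceeds {OFFSET_LIMIT_NS} ns")
    time.sleep(1.0)          # poll once per second
```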

Paul Briscoe, Chief Architect, TAG Video Systems, says, “It is more complex than an SDI workflow, but because ST 2110 mirrors legacy wiring the leap can be made. OTT, however, is an order of magnitude more complex and a far bigger departure from the monitoring broadcast engineers are used to.”

OTT complexity

OTT delivery is built on a complicated processing infrastructure with many moving parts. The distributed infrastructure supporting the end-to-end delivery chain often includes third-party systems and solutions. Scaling of resources for short-term events poses further challenges, including management of associated peaks in monitoring requirements.

The capability to encode with Adaptive Bit Rates (ABR) means the operator can send more than one variant of the stream to optimise the user experience. This means monitoring, detection, and alarms all along the OTT delivery chain, from the camera to the consumer’s TV, tablet, or smartphone.
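
In practice, a monitoring point is needed for every ABR rendition. The sketch below enumerates the variants advertised in an HLS master playlist; the playlist content is invented and the parsing is deliberately simplified, where a production probe would use a full HLS parser.

```python
import re

# Sketch: list the ABR renditions in an HLS master playlist so each
# one can be registered as a monitoring point.

MASTER_PLAYLIST = """\
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def list_renditions(playlist: str):
    """Yield (bandwidth, resolution, uri) for every variant stream."""
    lines = playlist.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF:"):
            bw = re.search(r"BANDWIDTH=(\d+)", line)
            res = re.search(r"RESOLUTION=(\d+x\d+)", line)
            uri = lines[i + 1].strip()  # URI follows the tag per the HLS spec
            yield int(bw.group(1)), res.group(1) if res else "?", uri

for bandwidth, resolution, uri in list_renditions(MASTER_PLAYLIST):
    print(f"monitor {uri}: {resolution} @ {bandwidth} bps")
```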

Because it often involves different formats and streams, various CDN and cloud vendors, and even multiple workflows within a single operation, each OTT implementation is unique, with unique monitoring requirements. No two deployments demand the same monitoring solution. Added to which, there is the perennial challenge of keeping costs low and service quality high.

“Then what happens when the industry advances to the point where it wants to enable a live-streamed sports event and give the viewer additional live cameras to select?” says Beauvais. “Each of those versions now requires monitoring, at different bandwidths and different bitrates.

“Your operator can receive an alarm and won’t know where it is, what it means, or how to solve it. Everything can get out of control. Whenever video or data is manipulated, whether it’s being moved between facilities or run through a process such as encoding or transcoding, error detection is essential.”

Bringing back control

One option for operators is to monitor the video signal across many different points, but this approach can rapidly soar in expense, particularly as the channel count grows. In conventional monitoring deployments, the cost of licenses and compute power for full-time monitoring would place a ceiling on the number of points that could be monitored.

Fortunately, help is at hand. As operational workflows across the media industry have evolved to resemble true IP or IT workflows, the presence of errors across the delivery chain can be determined with sophisticated software solutions that minimize the need for human intervention.

“If you know any operator who can stay awake constantly checking a thousand monitors on a screen, sign them up for life,” says Briscoe. “The fact is, operators don’t need to look at each and every stream all the time. They just need to make sure all streams are being probed and monitored and that problematic streams are automatically brought to their attention.”

Using thresholds set by the operator, or triggered by an API command from external devices monitoring the overall ecosystem, the monitoring software should automatically ensure optimal monitoring of all streams at all times, with full visualization of errors whenever there is an alarm.
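
As a sketch of such an external trigger, the toy HTTP hook below lets an outside system force a stream into full monitoring. The URL scheme and the `escalate` callback are illustrative assumptions, not any vendor’s actual API.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Sketch: a minimal endpoint an orchestration system could POST to,
# e.g.  curl -X POST http://127.0.0.1:8080/escalate/channel-042

ESCALATED = set()

def escalate(stream_id: str) -> None:
    """Switch the given stream to full monitoring (stubbed here)."""
    ESCALATED.add(stream_id)
    print(f"stream {stream_id} -> full monitoring")

class TriggerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        parts = self.path.strip("/").split("/")   # expect /escalate/<id>
        if len(parts) == 2 and parts[0] == "escalate":
            escalate(parts[1])
            self.send_response(204)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TriggerHandler).serve_forever()
```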

Adaptive Monitoring

Here’s how it works. In full monitoring mode, the input source is fully decoded and can be displayed in real time in a tile of the multiviewer’s mosaic output while being continually probed, analysed, and alarmed for errors. This is the mode typically used to keep an eye on premium or popular channels, as well as any problematic channels. Each and every frame of the video is decoded to create a smooth and continuous picture, and this requires a great deal of CPU resources. However, many aspects of monitoring simply don’t require real-time video, and full-time decoding isn’t necessary. If, for example, the picture goes to black, a delay of a second or a fraction of a second isn’t catastrophic.
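
To illustrate why a slightly delayed check is acceptable, here is a minimal black-picture test that could run on frames sampled once a second rather than on every decoded frame; the thresholds and the stand-in frame data are illustrative.

```python
# Sketch: black detection on periodically sampled frames. A condition
# like picture-black does not need every frame decoded, which is the
# intuition behind the lighter monitoring modes described below.

BLACK_LUMA_THRESHOLD = 16   # 8-bit luma at or below nominal video black
BLACK_PIXEL_RATIO = 0.98    # fraction of pixels that must be "black"

def frame_is_black(luma_plane: bytes) -> bool:
    """Return True if almost every pixel in the luma plane is near black."""
    dark = sum(1 for p in luma_plane if p <= BLACK_LUMA_THRESHOLD)
    return dark / len(luma_plane) >= BLACK_PIXEL_RATIO

# One sampled frame per second would be checked like this:
sample = bytes([5] * 1280 * 720)      # stand-in for a decoded luma plane
print("black frame!" if frame_is_black(sample) else "picture present")
```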

“When operators begin to use Adaptive Monitoring, they see an immediate impact on CPU usage,” says Briscoe. “Instead of dedicating 100% of CPU power for full monitoring at one point, operators can opt for ‘light’ or ‘extra-light’ monitoring and use a fraction of the resources. They have the agility to balance CPU resources against their need to monitor streams in real time.”
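
A back-of-envelope version of that balancing act; the per-mode CPU costs are invented for illustration, and a real deployment would substitute measured figures.

```python
# Worked example: how many streams fit on a server for a given mix of
# monitoring modes. All figures are invented for illustration.

CPU_COST = {"full": 1.00, "light": 0.25, "extra_light": 0.05}  # relative units

def report_load(stream_counts: dict, core_budget: float) -> None:
    """Print the estimated load for a mix of monitoring modes."""
    total = sum(CPU_COST[mode] * n for mode, n in stream_counts.items())
    print(f"estimated load: {total:.1f} of {core_budget:.1f} core-equivalents")
    print("fits" if total <= core_budget else "over budget")

# 20 premium channels fully decoded, the other 980 probed more lightly:
report_load({"full": 20, "light": 180, "extra_light": 800}, core_budget=120.0)
```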

Because 80% or 90% of the information about the nature of the stream can be gathered without full visual analysis, errors of this type don’t call for full decoding. That’s where light and extra-light monitoring modes offer a better, more efficient, and more economical option. In these modes, the input source is continually probed and analyzed for errors, but video is not fully decoded and cannot be displayed in real time in a tile of the mosaic output.
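
As an example of what a light mode can verify without decoding a single pixel, the sketch below scans raw MPEG-TS packets for continuity-counter jumps, a classic sign of packet loss. It is deliberately simplified: it ignores duplicate packets, adaptation-field-only packets, and the discontinuity indicator.

```python
# Sketch: transport-level error detection with no video decode.
# MPEG-TS packets are 188 bytes; a jump in the 4-bit continuity
# counter of a PID indicates lost packets.

TS_PACKET_SIZE = 188

def find_cc_errors(ts_bytes: bytes):
    """Yield (offset, pid) wherever a continuity-counter jump occurs."""
    last_cc = {}
    for off in range(0, len(ts_bytes) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = ts_bytes[off:off + TS_PACKET_SIZE]
        if pkt[0] != 0x47:            # sync byte missing: stream damaged
            yield off, None
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        cc = pkt[3] & 0x0F            # continuity counter
        has_payload = pkt[3] & 0x10   # CC only advances with a payload
        if pid in last_cc and has_payload:
            if cc != (last_cc[pid] + 1) % 16:
                yield off, pid
        last_cc[pid] = cc

# Example: two packets for PID 0x100 with a CC jump between them.
p1 = bytes([0x47, 0x01, 0x00, 0x10]) + bytes(184)   # CC = 0, payload
p2 = bytes([0x47, 0x01, 0x00, 0x12]) + bytes(184)   # CC = 2: one packet lost
print(list(find_cc_errors(p1 + p2)))                # -> [(188, 256)]
```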

Operators can keep their eyes on the most important live streams, knowing that other streams are being continuously monitored for any issues. The rule is ‘monitoring by exception’. If one of those video streams violates predetermined thresholds, it can be immediately decoded and routed, along with associated metadata, to the platform’s integrated multiviewer mosaic for closer inspection and troubleshooting.
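
A minimal sketch of that monitoring-by-exception logic, with invented thresholds, metrics, and stream names:

```python
from dataclasses import dataclass

# Sketch: a stream only claims a full decode (and a mosaic tile) once
# it violates its thresholds, and drops back when it runs clean.

@dataclass
class StreamState:
    stream_id: str
    mode: str = "extra_light"

def check_and_escalate(state: StreamState, cc_errors: int, black_seconds: float):
    violated = cc_errors > 0 or black_seconds > 2.0   # example thresholds
    if violated and state.mode != "full":
        state.mode = "full"
        print(f"{state.stream_id}: escalated to full decode, routed to mosaic")
    elif not violated and state.mode == "full":
        state.mode = "extra_light"                    # de-escalate when clean
        print(f"{state.stream_id}: back to background probing")

s = StreamState("channel-042")
check_and_escalate(s, cc_errors=3, black_seconds=0.0)   # -> escalates
check_and_escalate(s, cc_errors=0, black_seconds=0.0)   # -> de-escalates
```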

“With the freedom to implement different monitoring modes within a single deployment, operators can take advantage of automated and adaptive resource allocation to get the most out of their available server resources,” says Briscoe. “While Adaptive Monitoring is invaluable in optimizing monitoring using on-premises hardware, it yields even greater benefits for cloud-based operations.”

Moving away from physical hardware, operators no longer need to scale their equipment and infrastructure to support maximum channel capacity—or leave hardware unused during non-peak times. Whether processing takes place on-premises or in the cloud, Adaptive Monitoring ensures that if the system detects a problem on a channel, that channel is automatically switched to full monitoring mode.

The right probing, monitoring, and multiviewing software, combined with Adaptive Monitoring and cloud-based processing resources, allows operators to move toward a more economical pay-per-use model in which they can scale instances to match their needs.
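
A worked example of that pay-per-use arithmetic, sizing the fleet to the hour’s actual channel mix rather than to peak capacity; the per-instance capacities and hourly rate are invented figures.

```python
import math

# Worked example: instances needed for a given mix of monitoring modes.

STREAMS_PER_INSTANCE = {"full": 8, "light": 32, "extra_light": 160}
HOURLY_RATE = 0.40  # illustrative cloud price per instance-hour

def instances_for(mix: dict) -> int:
    """Round the fractional load up to whole cloud instances."""
    load = sum(n / STREAMS_PER_INSTANCE[mode] for mode, n in mix.items())
    return math.ceil(load)

quiet = instances_for({"full": 4, "light": 40, "extra_light": 400})
peak = instances_for({"full": 40, "light": 200, "extra_light": 800})
print(f"quiet hours: {quiet} instances (${quiet * HOURLY_RATE:.2f}/h)")
print(f"live event:  {peak} instances (${peak * HOURLY_RATE:.2f}/h)")
```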


 
