Thursday, 5 March 2015

Which Territories Are Following The UK DPP’s Lead?


The Broadcast Bridge


The UK’s DPP may have set the bar high for standardised file delivery and compliance, but other territories are working towards their own versions, which means vendors need to rework their tools to fit.

https://www.thebroadcastbridge.com/content/entry/2140/which-territories-are-following-the-uk-dpps-lead
In most territories, standardisation is moving towards AVC-Intra, which is inherent in the DPP spec. AMWA in the US, for example, has begun defining its own version of DPP. Not only do the tools have to be reworked, but so do the compliance procedures for each and every different standard.
The German-speaking (DACH) region is probably the closest in terms of specification and roll-out, although the programme there differs significantly for a number of important reasons.
1. The German file specifications, the ARD-ZDF MXF Profiles, were born out of an EBU QC initiative which organised all QC tests into one of five categories (see below).
2. Within the German-speaking broadcasters, the requirements for interoperability extend far beyond straightforward file delivery. Files can be exchanged between the public broadcasters at any point in the workflow, from acquisition to distribution.
3. ARD-ZDF specifies not only tightly constrained encoding profiles but also a set of decoder tolerances. In theory, a more tolerant decoder results in a far more stable and robust workflow; in practice, however, the wider variance of possible inputs to a decoder puts a greater strain on the testing of decoders to ensure compliance with the profiles (see the sketch after this list).
The ARD-ZDF MXF Profiles were published in 2014 and are now being specified as the requirements for almost all new projects and upgrades in the German-speaking market.
4. Coming up on the rails behind Germany is the US, or at least Hollywood. The IMF initiative has gained considerable momentum in the last couple of years, and with the arrival of UHD material and file deliveries, adoption of IMF has accelerated significantly.
5. IMF also differs from the DPP initiative in the sense that it is more the content creators and owners saying “this is how we want to deliver media” rather than the broadcasters saying “this is how we want to receive media”. This difference results in a very different approach to the file delivery standard and methods.
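To make the encoder/decoder distinction in point 3 concrete, the sketch below shows what a strict, encoder-side profile check amounts to: every constrained field must match the profile exactly, whereas a decoder built to a tolerance set would accept a wider range of inputs and is correspondingly harder to test. The field names and profile values here are illustrative assumptions, not the actual ARD-ZDF MXF Profile constraints.

```python
# A minimal sketch of an encoder-side profile-compliance check.
# Field names and values are illustrative, not the real ARD-ZDF constraints.

from dataclasses import dataclass

@dataclass(frozen=True)
class EncodingProfile:
    """A tightly constrained set of encoder-side requirements."""
    codec: str
    bit_depth: int
    width: int
    height: int
    frame_rate: str
    bitrate_mbps: int

# Hypothetical HD profile loosely modelled on AVC-Intra 100 parameters.
HD_PROFILE = EncodingProfile(
    codec="AVC-Intra", bit_depth=10, width=1920, height=1080,
    frame_rate="25/1", bitrate_mbps=100,
)

def check_compliance(metadata: dict, profile: EncodingProfile) -> list:
    """Return a list of human-readable violations (an empty list means compliant)."""
    violations = []
    for field in ("codec", "bit_depth", "width", "height",
                  "frame_rate", "bitrate_mbps"):
        expected = getattr(profile, field)
        actual = metadata.get(field)
        if actual != expected:
            violations.append(f"{field}: expected {expected}, got {actual}")
    return violations

# Example: a file whose essence was encoded at 8 bit fails the check.
print(check_compliance(
    {"codec": "AVC-Intra", "bit_depth": 8, "width": 1920, "height": 1080,
     "frame_rate": "25/1", "bitrate_mbps": 100},
    HD_PROFILE,
))
```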
In the US, QC of closed captions, video descriptions, and languages has largely remained a manual effort. The FCC and other standards bodies have tightened the quality standards for captions and increased the breadth of content that must comply. What is the cost of continuing to do this manually, and what is the cost of ignoring the problem?
“Content creators, distributors, and broadcasters will need to turn to automated approaches to verify closed captions, video descriptions, and languages,” says Colin Blake, Sr. Sales Engineer, Media & Entertainment Products at Nexidia. “Continuing to rely on manual review or spot checks leaves you exposed and is not scalable. The coverage obtained by spot checking is insufficient to find problems and these failures will mean loss of viewership, reduction in the perceived quality of the programming, and possible regulatory fines.”
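One way to picture the kind of full-coverage automated check Blake describes is sketched below: caption cue timings are compared against speech-activity segments so that any uncaptioned dialogue is flagged, something spot checks can easily miss. The speech segments are assumed to come from an unspecified upstream detector, and this is a generic illustration rather than a description of Nexidia's phonetic approach.

```python
# A hedged sketch of one automated caption check: flag speech segments
# that no caption cue covers. Speech segments are assumed to come from
# an upstream speech-activity detector (not shown).

def overlaps(a_start, a_end, b_start, b_end):
    """True if two time intervals (in seconds) overlap."""
    return a_start < b_end and b_start < a_end

def find_caption_gaps(speech_segments, caption_cues):
    """Return speech segments that are not covered by any caption cue."""
    uncaptioned = []
    for s_start, s_end in speech_segments:
        if not any(overlaps(s_start, s_end, c_start, c_end)
                   for c_start, c_end, _text in caption_cues):
            uncaptioned.append((s_start, s_end))
    return uncaptioned

# Example: dialogue at 12.0-15.5 s has no caption cue and is flagged.
speech = [(0.0, 4.2), (12.0, 15.5)]
cues = [(0.1, 4.0, "Good evening and welcome.")]
print(find_caption_gaps(speech, cues))  # -> [(12.0, 15.5)]
```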
Blake says the UK DPP has “certainly attracted attention” in the US, with several groups evaluating whether it can be of value in their area.
“We will continue working with those groups to meet any standards in place and do everything we can to support our customers. Technology vendors answer to their customers and must stay relevant; keeping pace with standards bodies is part of that. Nexidia will continue to develop unique products looking at the essence of the content and not just the metadata.”
The EBU QC 5 Test Categories

Regulatory: A test that must be performed due to requirements set by a regulator or government. Has a published reference document or standard.

Absolute: Defined in a standards document, including a standard scale. May have a defined pass/fail threshold. As a user, I ought to be able to compare the results of different QC devices and see the same result.

Objective: Measurable in a quantitative way against a defined scale, but no standard threshold is agreed. There may be no formal spec or standard, but an agreed method is implied by or exists in the test definition. As a user, I ought to be able to compare the results of different QC devices if I configure the same thresholds.

Subjective: May be measurable, but only with a medium degree of confidence. A failure threshold is not defined, or is very vague. May require human interpretation of results. As a user, I cannot expect different QC devices to report identical results.

Human-review only: Gold-coloured cards are used for tests that can only be carried out by human eyes or ears (“Golden Eyes” and “Golden Ears”), or where a human is required to fully interpret the results of an automated QC tool.
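As a rough illustration of how these categories play out in a QC pipeline, the sketch below routes each test result either to an automated pass/fail decision or to an operator queue. The names are assumptions, and routing Subjective tests to human review is one possible policy rather than an EBU rule.

```python
# A small sketch, under assumed names, of routing QC results by EBU category:
# automated categories yield pass/fail directly, while Subjective and
# Human-review-only results are queued for an operator.

from enum import Enum
from typing import Optional

class EbuQcCategory(Enum):
    REGULATORY = "Regulatory"
    ABSOLUTE = "Absolute"
    OBJECTIVE = "Objective"
    SUBJECTIVE = "Subjective"
    HUMAN_REVIEW_ONLY = "Human-review only"

def route_result(category: EbuQcCategory, measured_ok: Optional[bool]) -> str:
    """Decide what to do with one QC test result.

    measured_ok is None when the test has no automated measurement at all.
    """
    if category in (EbuQcCategory.SUBJECTIVE, EbuQcCategory.HUMAN_REVIEW_ONLY):
        return "queue for human review"  # golden eyes / golden ears
    if measured_ok is None:
        return "error: automated category with no measurement"
    return "pass" if measured_ok else "fail"

print(route_result(EbuQcCategory.ABSOLUTE, True))           # pass
print(route_result(EbuQcCategory.HUMAN_REVIEW_ONLY, None))  # queue for human review
```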
