Thursday 26 March 2015

Preserving your assets

Broadcast 

As the number of channels, devices, and markets for distribution has multiplied, the processes associated with repurposing assets and rights management have become far more complex. Where indies once kept physical assets (tape) secured and on-site, in a tapeless digital environment a different approach is needed to control and preserve content. At the same time, the cost of media asset management (MAM) and archive systems has fallen, prompting rights-owners to consider in-house investment. For those indies weighing up a purchase, systems vendors present the benefits and costs of ownership.


Round table contributors
CL: Chris Luther, director of professional services, SGL
DC: David Carter, vp marketing, ProMAX Systems
EM: Esther Mesas, chief sales & marketing officer, Tedial
CD: Craig Dwyer, senior director, global centre of excellence, Avid
LF: Laurent Fanichet, product marketing manager, EMEA, Quantum

Why should indies consider a MAM/archive solution?

LF: Data (video) is the most valuable asset, and the concept of data reuse (re-monetisation) is a primary revenue driver, so having a file-based strategy is crucial. The combination of a MAM application, to search and retrieve relevant assets quickly, with an archive platform allows indies to repurpose and monetise content when the need arises.


CL: It is essential to locate material quickly and easily. A MAM/archive allows indies to set up a system that enables them to be productive and cost-effective.


CD: Content creators need to get more value from each asset - by tagging it, protecting it, and distributing it to wider and wider audiences, but at lower cost. They have three choices: do nothing and risk falling behind their competitors; invest in MAM technology to drive efficiency and profitability; or buy asset management capabilities on a per-project basis from a services provider.
One big advantage is that indies can track the metadata created in production. With many productions using freelance staff, important data sometimes gets lost when a shoot is finished. By managing this information more efficiently in a central system, producers can re-purpose and re-version content for other platforms.

What criteria need considering ahead of investment?
CD: The first step is to decide whether you want to have the technical infrastructure on premise. The alternatives are to use service providers or post-production companies. Once a decision has been made to own a system, the next step is to build out the right technical and operations team and think about specific requirements. These include how many hours will be stored and what workflows and integrations are needed with post-production facilities and broadcast workflows.

CL: In the past five years the cost of MAM/archives has fallen considerably, which makes them a more attractive proposition for many indies. One of the important MAM considerations is: do you only need to find and manage the content, or do you need automated workflows around that content? Do you need to store it at a certain resolution? Does the MAM have all the features you need for your workflow? Are staff centrally located or do they work remotely?


What should indies budget for?
CD: A MAM system that focuses on the long-term storage and preservation of assets would start in the range of £100,000, but there are many variables involved: the number of assets, the quality at which they are stored, and the number of integrations and users required to access the system. Typically a client should also allocate approximately 15-20% for ongoing support and maintenance.


CL: Ongoing costs include software support for both the MAM and the archive. Then you have to factor in how the material is stored, whether that’s spinning disc or LTO. People often request disc because it’s so fast, but a recent US report showed that the cost of electricity, cooling and floor space for disc is around 50 times that of tape.


DC: If you choose a MAM with no per-seat license fees then you can scale the solution to expand the number of users without additional costs. For archive, the ongoing costs need to be calculated based on the amount of data you need to move to tape, and how often; the ongoing cost is adding new tapes and maintaining the solution. You can keep costs down by continuing to optimise how you use your storage: when you’ve finished with an online project, move it off the online storage. ProMAX system costs range from £4,000 to £168k.

EM: Solutions are designed for each unique customer. Some companies, like Tedial, offer vendor financing so everything can be an operating expense. Systems start at £100k.

How is return on investment achieved?
DC: Archive is both a way to protect your media assets and also a strategy to optimise the use of your storage space. In that sense, archive gets into aspects of ROI. It’s not cost effective to continue to expand your expensive online storage. It’s more cost effective to scale that out with either disc-based, Tier 2 storage or LTO archive.

CL: For a lot of facilities the decision is made based on the cost of long-term storage versus online storage. Long-term storage is about 1/25th of the cost. Having a MAM and archive also significantly reduces the labour costs of people searching for and digitising material. Disaster recovery is also important: there should be two copies of every piece of content, with one kept off-site, because if there’s a catastrophic event, how do you recover from that?

CD: This can be calculated based on the amount of media handling that’s happening in a manual process, and what can be automated going forwards. It’s becoming imperative that content is handled as automatically as possible and that tasks are not duplicated (e.g. re-keying meta-data). Another advantage comes from the ability to access and search the library, reuse and monetise footage easily and effectively.

CL: If you look at the cost of LTO6, priced at around £30 for 2.5TB of storage (with LTO7 just around the corner), it’s extremely cost-effective. There’s not always a full understanding of the benefits of a MAM/archive; there’s often the idea that more people will solve the problem. But people don’t scale very well.
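CL’s figures lend themselves to quick arithmetic. A minimal sketch of the media-level cost, using only the numbers quoted above (the 50:1 running-cost ratio for disc is the US report’s estimate; drives, libraries and duplicate safety copies are deliberately ignored):

```python
# Back-of-the-envelope media cost, using the figures quoted in this round table.
# Ignores LTO drives, library hardware and off-site duplicate copies.

LTO6_CARTRIDGE_PRICE_GBP = 30.0   # per cartridge, as quoted by CL
LTO6_CAPACITY_TB = 2.5            # native capacity per LTO6 cartridge

def lto_media_cost_per_tb(price=LTO6_CARTRIDGE_PRICE_GBP, capacity=LTO6_CAPACITY_TB):
    """Media-only cost of one terabyte on LTO6 tape."""
    return price / capacity

def archive_media_cost(total_tb, cost_per_tb=lto_media_cost_per_tb()):
    """Media-only cost of holding total_tb on LTO6."""
    return total_tb * cost_per_tb

print(lto_media_cost_per_tb())   # 12.0 (GBP per TB)
print(archive_media_cost(100))   # 1200.0 (GBP for a 100TB library)
```

At roughly £12 per TB of media, the tape side of an archive is a small fraction of the overall system cost, which is why the round table keeps pointing at software support and staffing as the real ongoing expense.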

LF: As digital archive libraries grow, indies need to assess the volume of assets they need to preserve for the long term, as well as the likely retrieval patterns. A tiered storage approach is all about aligning data value with storage costs to help them meet their business needs.


When would outsourcing be a better option?
CD: I would suggest looking at the scale of operation. For a very small archive operation it can be very difficult to justify the technical resources and specialist skills needed to manage the archive effectively. Where production companies own a large quantity of assets and are trying to monetise and repackage them, it may make more sense to bring the operation in-house and have greater control of the costs and the underlying assets.


EM: When a producer makes only a single weekly programme or a small set of media projects each month. In that instance the cost repayment doesn’t work, but the core benefits of the system are still valid. A post facility or archive house can aggregate small clients and build a solid business, and if the system they select supports true multi-tenant operations, they can continue to scale and add more business.


Monday 23 March 2015

LTE Broadcast "Ready for Prime Time"


Streaming Media Europe

Mobile solutions provider QuickPlay and video optimization specialist Roundbox—which QuickPlay has just acquired—say LTE Broadcast will replace cable TV in some areas

"LTE Broadcast is definitely ready for prime time," says Dennis Specht, CEO and co-founder of Roundbox, recently acquired by mobile solutions provider QuickPlay Media.
For QuickPlay and Roundbox, the technical part of the service has been solved. What is missing is the commercial model—but that's coming, they say.
"It changes the game for TV in some areas," said Specht. "In APAC, for example, we are seeing LTE Broadcast being leveraged as a cable replacement. You can offer 12 channels for $7 a month over mobile."
In other parts of the world, such as Europe, a strong use case is spectral efficiency, where the technology enables a far more efficient use of the spectrum that mobile operators own.
The main benefit to operators is that LTE Broadcast (also called evolved Multicast Broadcast Service or eMBMS) offers "dramatic" operational efficiency.
"As [operators] are getting pounded with video traffic, mobile data overload is a problem which LTE Broadcast will help them deal with," said Specht.
In the U.S. and Europe, multiple tests have been carried out around the delivery of live sports events, with mobile operators and pay-TV players likely to leverage their rights investments over LTE Broadcast. Verizon has a $1 billion deal to stream coverage of NFL games to mobile; Telecom Italia owns rights to mobile coverage of soccer league Serie A; and BT (which owns the UK's 4G mobile operator EE) and Sky in the UK share rights to English Premier League coverage.
"Where there are more than six people in a cell site accessing HD video it becomes a problem from a capacity perspective, so broadcast will offer a more efficient delivery," said Mark Hyland, SVP of global sales at QuickPlay. "We see this being monetised by large operators with content rights as pay-for-use or by advertising that drives a free application."
While the technology is solid, executives say, there are still differences in applying it. "What is needed is a series of interoperability tests with various vendors to deploy a solution," said Hyland.
M2M, digital signage, OTA software updates, and in-car TV are other potential applications for the technology.
"From a consumer perspective we are likely to see more push types of application where content and large data files are made readily available on devices," Hyland says.
On the acquisition of Roundbox, Hyland says the move would help carriers and content providers manage the end-to-end provision of LTE services from content ingestion to application delivery. The Roundbox client and server solution will be positioned as a managed service offering to customers.

Friday 20 March 2015

Auto-pilot drones take to the air… and sea

The Broadcast Bridge

https://www.thebroadcastbridge.com/content/entry/2251/auto-pilot-drones-take-to-the-air..-and-sea


Innovation in the UAV filming space now extends to automated tracking systems which operate without a camera operator or pilot. The Broadcast Bridge looks at three systems, all Kickstarter-funded and all in development, including one designed for aquatic photography.
Hexo+ is a Kickstarter-funded project that has already raised $1 million. Its software is capable of autonomously flying a drone, filming and following a subject, controlled via a 3D model of the camera’s point of view in a smartphone app.
In theory, users can set the framing on a smartphone and lock onto a filming target (a BMX rider, for example). The drone will automatically take off, fly to its specified position, and hover there until the subject starts moving. The drone is “attuned to the slightest of movements, continuously repositioning itself to match the filming parameters you set for it.”
The drone itself is a hexacopter (six-blade) design weighing 2.2lbs, with dimensions of 62 x 52 x 12cm (24 x 20 x 5in). Flight time is rated at 15 minutes with a 3S battery, gimbal and GoPro attached, with operation at distances of up to 2km.
“You might think this is borderline magic, but it’s actually all orchestrated by a smart and lean use of technology,” explain Hexo+ on its website. “On the smartphone side we have an intuitive interface for users to position the drone where you want in space and potentially Wi-Fi video live feed. This user interface is backed by trajectory anticipation algorithms that crunch data coming from the sensors on the drone and smartphone to predict the next, most likely position of the subject. This enables a quick, accurate tracking of the subject and is key in achieving great images. The smartphone and the drone communicate relative position to each other and data over the MAVLINK protocol, developed by the ETH in Zürich.”
The onboard software is based on the 3D Robotics open-source code, and Hexo+ optimised the MAVLINK implementation and the behaviour of the drone to improve response time to commands sent from the mobile app. It also integrated over the air gimbal control to obtain the best possible camera angle based on the relative positions of the drone and the subject.
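The trajectory anticipation described above can be illustrated with a deliberately simple sketch: extrapolate the subject’s next position from recent samples, then offset the drone to hold the user’s chosen framing. This is a constant-velocity stand-in for whatever Hexo+ actually runs; all names and values here are hypothetical:

```python
def predict_next_position(p_prev, p_curr, dt_prev, dt_next):
    """Constant-velocity extrapolation of the subject's next position.

    p_prev, p_curr: (x, y) positions from the two most recent samples
    dt_prev: seconds between those two samples
    dt_next: seconds ahead to predict
    """
    vx = (p_curr[0] - p_prev[0]) / dt_prev
    vy = (p_curr[1] - p_prev[1]) / dt_prev
    return (p_curr[0] + vx * dt_next, p_curr[1] + vy * dt_next)

def drone_target(subject_pos, framing_offset):
    """Hold the drone at a fixed offset from the predicted subject position,
    mimicking the framing the user locks in on the smartphone app."""
    return (subject_pos[0] + framing_offset[0], subject_pos[1] + framing_offset[1])

# A BMX rider moving 2m east per 0.5s sample; predict 0.5s ahead.
nxt = predict_next_position((0.0, 0.0), (2.0, 0.0), 0.5, 0.5)
print(nxt)                              # (4.0, 0.0)
print(drone_target(nxt, (-3.0, 5.0)))   # (1.0, 5.0)
```

The real system presumably fuses several sensors and anticipates curved trajectories, but even this naive predictor shows why anticipation matters: aiming at where the subject will be, not where it was, is what keeps the framing stable.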
Development has progressed to working prototype. “We’ve been ramming through hardware and software hurdles until last month, when we managed to consistently make our prototype work in field conditions,” the team states. “The drone is fast and agile, and the flight controller as responsive as we want it to be – from our action sport movie-making people perspective.”
Making the leap to industrial production (to make HEXO+ affordable at $499 each) requires further Kickstarter funds to pay for moulds, minimum orders on parts, and the build. It hopes to launch commercially in May.
The development team is led by CTO Christophe Baillon and William Thielicke, and the design team by CEO Antoine Level, telecom and electronic engineer Xavier De Le Rue, and professional snowboarder and action sports movie maker Matthieu Giraud.
There is a catch. There is no avoidance system included in the first version. Instead, users will have to film in open areas. “Picture yourself driving a car with a trailer - you have to anticipate the trajectory of the drone following you,” the developer advises. It is working on an avoidance system. “So far it looks promising but we can't yet commit to a release date.”
Helico Aerospace Industries, led by Latvian inventor Edgars Rozentals, is readying AirDog for release after securing $500,000 in Kickstarter investment. It originally set a date of November 2014 for the release of its $1,295 unit, which is controlled by a wrist-worn wireless leash at a range of up to 1,000 feet (300m).
Among the upgrades still being tested are heated units that keep critical sensors like gyroscopes and accelerometers at a constant temperature. This is to counter a problem caused by changing temperatures (such as when filming snowboarders in sub-zero cold), which make the sensors drift and the drone crash immediately after takeoff.
This is typically solved by manually recalibrating the sensors before each flight in different temperatures, or by mathematical correction if the characteristics of the drift at different temperatures are known. Helico says it has turned to the military for a solution: stabilizing the temperature by heating the sensors to a set point significantly higher than the maximum possible ambient temperature and keeping them there while the drone is turned on. This requires about 40 seconds of warm-up time, and tests are ongoing.
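In control terms, holding the sensors at a fixed temperature above ambient is a thermostat problem. A minimal hysteresis (bang-bang) sketch of the idea; the setpoint and band are illustrative values, not AirDog’s:

```python
def heater_on(temp_c, setpoint_c=60.0, band_c=1.0, currently_on=False):
    """Hysteresis (bang-bang) control for a sensor heater.

    Switch the heater on below the band, off above it, and hold the
    current state inside the band to avoid rapid toggling. The 60C
    setpoint and 1C band are illustrative, not AirDog's actual values.
    """
    if temp_c < setpoint_c - band_c:
        return True
    if temp_c > setpoint_c + band_c:
        return False
    return currently_on

print(heater_on(25.0))                     # True: cold start, heat up
print(heater_on(61.5))                     # False: above the band, switch off
print(heater_on(60.0, currently_on=True))  # True: inside the band, hold state
```

Because the setpoint sits above any ambient temperature the drone will meet, the sensors always operate at the same calibrated temperature, which is exactly what removes the per-flight recalibration step described above.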
The team is testing two types of sensor for a collision avoidance system: LIDAR units from the PulsedLight company, and sonars from MaxBotix in both a high-performance ultrasonic rangefinder and a waterproof version. The latter is apparently twice as expensive but useful in humid conditions. Helico seems to be erring on the side of LIDAR, since it recently switched to a more robust PWM interface (from an I2C interface) and claims this is the main reason it is still holding back beta test unit shipping.
However, it is also investigating microwave radar which could serve well for active obstacle avoidance. “We are still waiting for snow to test sensor capabilities to detect ground on reflective and sound dampening surfaces,” Helico states.
The drone itself has a top speed of 40mph and the battery is a 14.8V LiPo.
The third auto-pilot kickstarter project looks at first sight more of a gimmick than the others but could have pro potential for sports like America's Cup. Splash Drone is a quadcopter encased in a buoyant waterproof shell, so it can safely land and float on water without being damaged.
It has a waterproof-housed gimbal for fitting a GoPro and features auto-follow functionality, so it can shoot video autonomously while you surf, jet ski, wakeboard, canoe or white-water raft. The top speed is 25mph. It is currently on track to reach $100K in funds, way past its target of $17,500, and is expected to ship in July.
“We've tested several camera gimbal configurations and are working on testing the latest prototype in fresh and salt water,” explains project lead Alex Rodriguez on Kickstarter. “The major challenges were waterproofing the gyro stabilization board and the signal cables. We used the same concept as the GoPro dive box and are now testing it for endurance.”

Thursday 19 March 2015

The Cloud of Clouds Starts to Form


IBC
“If you look back at history we had three disruptive technologies: the steam engine, electricity and then computers,” declared Elie Abou Atme, Senior Account Manager, Equinix, at IBC Content Everywhere MENA in January. “The fourth disruptive power will be the Cloud. If you don't get into the Cloud or learn it you will not survive.”

Atme was speaking on a platform with Cisco Senior Manager of Product Marketing, JT Taylor who said that 2015 will be the year of take-off for service operator Cloud deployments.

“From a mindshare perspective and based on the number of announcements we see in the pipeline this year we believe more operators will move to the Cloud,” he said. 

Research firm MarketsandMarkets suggests that the global cloud market is already worth $121bn, while Market Research Media predicts it could reach $270bn by 2020.

Cloud is on a roll and its impact is being felt in all areas of the creation, management and delivery of content everywhere from acquisition to post and onward consumption by individuals.
While the technology isn’t new, only recently has it begun to play a critical role in the video workflow process as broadcasters seek new ways to reduce the costs of their video processing and quality control needs. The immediately available processing infrastructure of the Cloud is naturally appealing to broadcasters as it eliminates the need to actually purchase and deploy costly equipment, which reduces their capital costs.

Storage is the most common application.  Broadcasters can configure the same computing platforms, operating systems and amount of storage as they have used on their premises. They can send and retrieve files at the time of processing, or store the files in the Cloud and perform operations such as transcoding exclusively within the Cloud.

Use cases beyond storage are, however, increasingly common, and include transcoding, QC, streaming, editing, and content distribution.

We are also becoming familiar with the ways cloud providers bill for their service. These include Infrastructure as a Service (IaaS) which basically substitutes the cloud for certain on-premises hardware; Platform as a Service (PaaS) which includes a computing platform and a solution stack for developing web applications that includes OS, web server, database and programming language; and Software as a Service (SaaS), which brings virtually everything into the Cloud. 

There are also different types of Cloud: public cloud infrastructure, seen by businesses as a utility for buying computing, storage and bandwidth on-demand; private Clouds, for companies that want exclusive control over how their data is managed; and hybrid Clouds, which are a mix of the two.

However, rarely will one Cloud provider or one business model work for one media company. There is a need for orchestration between multiple Clouds. This is where IT giant Cisco steps in. 

Cisco has pumped billions of dollars into Intercloud – an interconnected global 'Cloud of Clouds' – since its launch eight months ago. Cisco has been gathering significant and rapid traction for its vision, for which it will publish the first reference architecture next month.

According to Cisco, Intercloud will provide the building blocks for Cloud providers to evolve beyond traditional IaaS offerings to create combined IaaS, PaaS and SaaS solutions that deliver more value to their customers.

“Ultimately this is where Cisco sees real value,” said Taylor. “Everything will be centralised on one Intercloud and that is where we will be able to deliver that differentiated service consumers are looking for.”

Cisco does not intend to compete head to head with existing and far larger Cloud service providers like Microsoft and Amazon. Indeed Microsoft and Amazon are partners in the project. Essentially Cisco is providing a means for securely moving workloads between Clouds. 

There are already 60 Intercloud partner Cloud providers bringing a combined footprint of 350 data centres spanning 50 countries. They include Amazon Web Services, Microsoft Azure, BT, Deutsche Telekom, Telecom Italia, Telefonica, ViaWest (US) and China's Yunan Nantian Electronic Information. Cisco predicts it will have 1,000 partners by 2018.

The concept is not proprietary to Cisco and was first used in the context of cloud computing in 2007. IBM has also invested over $1bn in its intercloud offer, and the IEEE is part way through a global testbed with 21 Cloud providers, network service providers and research institutions to produce an open-source Cloud operating system.

The Intercloud concept is interoperability on a giant scale, a communication protocol linking the current islands of Clouds, or as some would have it, the internet on steroids. It is also seen as a prerequisite for connecting machine to machine and machine to person at the level necessary for the Internet of Everything, which analyst IDC reckons will exceed $7tn in five years.

The Cloud offers food for thought and is a key topic for IBC2015.

Wednesday 18 March 2015

Stargate opens UK site at Ealing


Broadcast 
LA-based TV visual effects facility Stargate Studios has opened an outpost at Ealing Studios to tap into the UK tax relief for high-end drama and export its unique brand of virtual location production.
It has begun work on Apocalypse Slough, the Working Title comedy drama for Sky 1, at the request of the indie’s parent NBC Universal.
Next up is Damien for Fox TV and Luther for the BBC, while it has previously handled shots for Doctor Who’s 50th anniversary special.
The UK wing will be headed by Michelle Martin, former VFX producer at Double Negative and head of production at Lola, and VFX supervisor David Serge, who also leads Stargate Malta. Martin is recruiting up to 10 compositors.
“We are not looking to take projects away from [facilities in] London,” said chief executive and founder Sam Nicholson. “We are looking to bring new production techniques to the market.”
While Stargate London will initially offer compositing and CG, the group’s speciality is a ‘virtual backlot’, designed as a low-cost alternative to shooting on location.
It has been used on series such as The Walking Dead, Grey’s Anatomy and NCIS.
Stargate has 29 teams in locations such as Canada and Cairo, shooting backdrops using a spherical rig of up to 14 Sony F55 or Canon 1D-C cameras. The 4K footage from each camera combines to produce spherical images with horizontal resolutions of up to 50K.
This is stitched together and texture wrapped with additional CGI before being used as a backdrop to live action. The set-up was used in South Africa for backdrops for Apocalypse Slough.
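The 50K figure is consistent with simple arithmetic: 14 cameras at roughly 4K horizontal each, minus the overlap a stitcher needs between adjacent views. A sketch with an assumed overlap fraction (the rig’s actual overlap is not stated in the article):

```python
def stitched_horizontal_resolution(n_cameras, cam_width_px, overlap_frac):
    """Effective horizontal pixels of a closed 360-degree ring of cameras.

    In a closed ring there are as many seams as cameras, so each camera
    effectively loses overlap_frac of its width to its neighbour.
    """
    return int(n_cameras * cam_width_px * (1.0 - overlap_frac))

# 14 cameras, 4096px wide each, assuming 12.5% overlap per seam (hypothetical)
print(stitched_horizontal_resolution(14, 4096, 0.125))  # 50176 -- roughly "50K"
```

Raw, 14 x 4096px is about 57K of pixels; once the stitcher discards the overlapping regions, something in the region of 50K of unique horizontal resolution remains, matching the quoted figure.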
The facility is able to render the vast volume of data using a high-speed data-management system and a cloud network that links computing resources across its facilities in the US, Canada, Germany, Malta, the UAE and Mexico.
Stargate underwrites the cost in exchange for co-ownership rights to the production’s footage. Producers retain exclusivity to material “identifiable with their show”, while Stargate banks the remaining clips into its 6,000-hour library, which can then be used on other productions.
Nicholson said Stargate is looking to collaborate with indies to co-produce content using this technology.
“London is an expensive place to do business so we’ll be outsourcing work, such as rotoscoping, to our Maltese facility in combination with our Berlin and Cologne facilities,” said Nicholson.
“We want the highest level of art and design to come out of the UK, interfaced with the rest of the group.
“Everyone is looking for ways to increase the creative capacity to tell bigger stories, and for ways to control cost. There are very few facilities capable of completing a thousand shots in 10 days. I would consider us the fastest and highest-quality system out there.”

UEFA TV Production debuts next-generation services to enrich match-day content offering


Sports Video Group
Ahead of the 2015/2016 UEFA Champions League season, the first of a new three-year rights cycle, UEFA is set to broaden the range of broadcast and digital services offered to its rights holding broadcasters. These ‘next generation services’ will enable broadcast partners to access significantly more of the content produced on-venue by UEFA.
“Across a typical UEFA Champions League match night, upwards of 15 cameras are on hand at every venue to capture the match action,” explained UEFA’s Head of TV Production Bernard Ross at Sports Video Group Europe’s Football Production Summit 2015 in Barcelona. “Now, from the 2015/2016 season, broadcasters will have access to a wider selection of clips and content from a selected number of these feeds for exploitation across both ‘second screen’ digital platforms, such as web, mobile and tablet, and post-production broadcast.”
To supplement this new enriched content offering, UEFA will also provide broadcasters with an enhanced graphics, data and statistics service. UEFA will also be trialing a state-of-the-art ‘audio watermarking’ mechanism embedded into the multilateral ‘world feed’, enabling broadcasters to utilize further marketing strategies across second-screen devices.
These services will be delivered by UEFA in conjunction with deltatre and the EBU as service providers. Workflow testing has been ongoing since the start of the 2014/2015 UEFA Champions League season, and will be showcased at the 6 June UEFA Champions League Final in Berlin, ahead of launch at the UEFA Super Cup in Tbilisi, Georgia on 11 August.
All these solutions will be offered to broadcasters as individual bookable services, allowing broadcasters to construct their own unique UEFA Champions League experience for the fan.
The challenge
Coordinating, processing and distributing this additional content represents a new challenge for UEFA and its partners, with the new operation significantly more complex than the work that deltatre, HBS and Netco Sports provided for FIFA’s digital coverage at the 2014 Brazil World Cup.
“With a dedicated IBC, the World Cup operation is made simpler by having all material available through one central location,” Ross told SVG Europe. “UEFA Champions League venues stretch from Arsenal to Zenit St Petersburg, and not all stadia are equipped with dedicated fibre connectivity. With up to eight matches per night, this represents a far greater operational and technical challenge.”
Digital services
Deltatre has been appointed by UEFA to provide a number of services from each Champions League venue. These include on-air graphics generation, embedded directly into the multilateral feed, and data capture for UEFA’s official results system, produced using a combination of a player-tracking system and dedicated in-venue spotters.
Now UEFA, in collaboration with deltatre, will supply an enhanced production out of the venue with full digital capture. This will provide broadcasters with three additional ISO camera live streams, multi-angle clips, data feeds, on-air graphics and infographics for audience consumption in a component fashion.
“The origin of the project was to find a way to gather all that content from the venue and make it available to rights holders via a public cloud enabling broadcasters to offer an enhanced digital experience and to reach further audiences,” Gilles Mas, deltatre director, told SVG Europe.
This content can be wrapped as both a white-label and a comprehensive turnkey solution for web, tablet and mobile, with a self-contained video player (SDK) and accompanying data widgets, provided directly by deltatre in a flexible manner for incorporation into the UBP’s existing digital portfolio.
Broadcast services
Broadcasters will also have the option of receiving broadcast-quality, near-live additional UEFA Champions League footage direct to their location. Seven to 12 minutes per match of unseen-angle clips will be delivered during the live match, with an additional 50 minutes of content delivered after the match. Broadcasters will also have access to video content generated by UEFA or the host broadcaster, for example the News Exchange Feed produced the day before each match.
This broadcast quality content will be pushed direct to the broadcaster’s location via the innovative ‘UEFA Box’, capable of storing approximately two match days’ worth of aggregated content.
Audio watermarking
UEFA will also be trialing audio watermarking at the venue, a process that involves embedding audio stamps at source into one of the audio tracks of the multilateral feed. These stamps are attached to the appropriate match footage, and enable broadcasters to further market their second screen experience.
As Olivier Gaches, Digital Media Solutions Manager at UEFA explained, “A Lionel Messi goal would be instantly audio watermarked linking the match action to a series of relevant additional content available on the viewer’s second screen – for example, further information about the player, an opportunity to view a selection of his previous Champions League goals or an Adidas e-commerce promotion.”
The workflow
The dedicated on-site UEFA production unit, via its multi-services van, will take 12 camera feeds direct from the host broadcast OB truck at each venue. Up to three of these feeds will be live-encoded and pushed to the cloud as a live stream, made available in mezzanine and transcoded formats.
Simultaneously, all 12 feeds are ingested to the EVS C-Cast onsite, hosted in the multi-services van, for retrieval of unseen multi-angle footage. These feeds are then pushed to the cloud and to the UEFA Box.
The final next generation services workflow is the trial encoding of the multilateral feed audio channel with audio watermarks. These three new services will be completed in parallel with the established graphics workflow and data feeds, which will now be uploaded to the UEFA ‘central content production factory’.
Content delivery
UEFA has appointed the EBU to provide signal transport from the venue over the Eurovision fibre and satellite network. Content will then be packaged, transformed and offered to broadcasters via the Interoute cloud platform.
Broadcasters can retrieve the live feeds directly from the cloud origin points. The multi-angle feeds and clip reels enriched with metadata from C-Cast, as well as the graphics assets and data feeds, will be made available for download via UEFA’s deltatre-hosted cloud platform.
The EBU UEFA Box will also offer broadcasters another material transport option. Situated at the broadcaster’s premises, it will receive broadcast-quality clips produced at each venue, uplinked and pushed directly to the broadcaster’s location.
“These new next-generation services represent an exciting step forward for UEFA TV Production,” Ross told SVG Europe. “Building upon the foundation of many successful years of UEFA Champions League broadcasting, and the strong ties with our service providers, the viewer can now be provided with the opportunity to watch and catch-up with more UEFA Champions League action, across a variety of platforms, than ever.”

Producers back campaign to halt kids TV 'crisis'


Broadcast
Kids’ television producers have thrown their weight behind a campaign to halt the “crisis” that has developed in original children’s content over the past 12 years.
A number of indie bosses have voiced support for Pact and The Ragdoll Foundation’s recommendations to boost the sector after the organisations exposed dramatic declines in children’s output since the 2003 Communications Act.
In a joint report, submitted as part of Ofcom’s PSB Review, Pact and Ragdoll revealed that the volume of original UK kids’ content commissioned by public service broadcasters fell by 68% between 2003 and 2013.
It also highlighted that spending by commercial PSBs has fallen 95% to £3m.
The two organisations hope the PSB Review can “redress the balance” and have called for Ofcom to be handed powers to set quotas for kids’ programming, bringing it in line with news, current affairs, and originated productions. 
They also want to give PSB benefits, such as EPG prominence, to digital channels including CITV, in return for original production or scheduling commitments.

“The status quo is not far off market failure,” said Kate Little, joint managing director of Evermoor producer Lime Pictures. “If you don’t invest in children’s live action content and if kids are used to watching overseas imports or movies, then where is your drama audience of tomorrow?”

Billy Macqueen, co-founder of Topsy and Tim indie Darrall Macqueen, agreed with Pact’s suggestions, but feared they may be too late. He also raised concerns about the BBC and Channel 4’s attempts to change the terms of trade.
“Putting quotas through is an act of parliament, which will take a year to implement if not longer,” he said, adding: “If the terms of trade do change, then we can kiss goodbye to all kids indies doing live action and animation as well.”
Richard Bradley, the joint managing director of Horrible Histories producer Lion Television, said: “If we want to have a robust children’s industry then it needs something of a different order of magnitude.
“I am not sure if quotas are the answer, but a strong commitment is needed by broadcasters to end the industry’s perpetual state of struggle to fund live action programming.”
The tax break for live-action children’s content is due to be introduced next month, in line with the benefits offered to animation and high-end television. 
“Tax breaks were a recognition by the government that something needed to be done about funding children’s culture,” explained Greg Childs, editorial director of the Children’s Media Conference.

“The issue now is about fully grasping the implication that funding children’s content will cost a considerable amount of money but that it is vital for the country’s cultural welfare.”

Wednesday 11 March 2015

Why Brightness is Lighting up the UHD Agenda

IBC
While reports continue to surface about impending launches of UHD broadcast services, the issue of higher dynamic range (HDR) has risen to the top of the industry agenda.
Broadcasters like Sky Deutschland (now Sky) and the BBC have used IBC in recent years to argue that better pixels, not just more pixels, are required to kick-start UHD in the home. They needed to convince TV set makers not to go to market solely on resolution, and earlier this year it seemed they had finally done so.
Announced in January, the UHD Alliance brings together consumer electronics brands like Samsung with the studios Warner Bros. and Fox as well as Netflix to explore how HDR can be delivered. Colour depth, luminance and colour space are all within its remit. It will address fundamental issues like ‘how bright should bright be?’ and, for colour gamut, ‘how wide should the colour space be?’
“HDR allows us to both raise the ceiling and drop the floor, to the point where dark blacks and grey gradients reveal incredible detail that the consumer has never before been able to see,” says Technicolor, a UHD Alliance member. “Expanding the dynamic range has a side benefit of increasing the available saturation of any particular colour, so even without expanding the colour gamut, HDR can create richer colours. By establishing a minimum level of viable specifications for HDR, we can promise consumers a defined level of quality.”
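Technicolor’s “raise the ceiling and drop the floor” claim can be put in numbers: dynamic range is conventionally measured in photographic stops, where each stop doubles luminance. A minimal sketch, with illustrative display luminance values that are assumptions rather than figures from the article:

```python
import math

def dynamic_range_stops(peak_nits: float, black_nits: float) -> float:
    """Dynamic range in photographic stops (each stop doubles luminance)."""
    return math.log2(peak_nits / black_nits)

# Illustrative values only: a typical SDR display graded to 100 nits with a
# 0.1-nit black floor, versus a hypothetical HDR display at 1000 nits peak
# with a 0.005-nit black floor.
sdr = dynamic_range_stops(100, 0.1)     # ~10 stops
hdr = dynamic_range_stops(1000, 0.005)  # ~17.6 stops
print(f"SDR: {sdr:.1f} stops, HDR: {hdr:.1f} stops")
```

Raising the peak and lowering the black floor each add stops independently, which is why HDR can reveal shadow gradient detail without simply making the whole picture brighter.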
Collaboration across the ecosystem will ensure the industry can move forward together; however, the path is not clear. For a start, the UHD Alliance conflicts with the Ultra HD Forum, set up in 2014 by Harmonic to explore an end-to-end ecosystem for delivering UHD services. The publicly expressed aim of both groups is to agree a joint approach by NAB 2015, but their dual existence appears to expose different commercial goals.
While consumer electronics vendors would prefer to increment UHD technology each year in order to sell more products, broadcasters would prefer a big-bang introduction that justifies charging consumers a premium for a new service.
Futuresource Consulting has even observed that the UHD Alliance can be seen as an attempt to counter Chinese consumer electronics vendors by differentiating UHD Alliance members’ products from supposedly inferior quality but certainly cheaper competition.
Approaches to HDR are already being played out in standards bodies. A version before SMPTE seems primarily designed for theatrical display, while the BBC and NHK are among several parties tabling other proposals before the ITU.
Among expressed concerns: how will viewers’ eyes adapt to viewing HDR-augmented content in a living room (rather than a darkened auditorium) and juxtaposed with SDR (traditional HD) content?
A related discussion is how to make content that works for both HDR consumption and SDR consumption since there is an ongoing need to support current TV platforms.
Another issue surrounds HDR metadata which is used to describe the dynamic range alongside the picture asset. “In a production and delivery environment there are too many options for that to get out of sync and cause a bad end-user experience,” says Simon Gauntlett, CTO, Digital TV Group.
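Static HDR metadata of the kind Gauntlett is talking about includes SMPTE ST 2086 mastering-display values plus MaxCLL/MaxFALL content light levels. As a sketch of the sort of consistency check a delivery chain might run — the field set here is a simplified assumption, not the full standard:

```python
from dataclasses import dataclass

@dataclass
class HdrStaticMetadata:
    # Simplified subset of SMPTE ST 2086-style mastering metadata plus
    # content light levels; real carriage (e.g. in codec-level SEI
    # messages) carries more fields and different encodings.
    master_display_max_nits: float
    master_display_min_nits: float
    max_cll: float   # maximum content light level, nits
    max_fall: float  # maximum frame-average light level, nits

def validate(md: HdrStaticMetadata) -> list[str]:
    """Flag internally inconsistent metadata before it reaches a display."""
    problems = []
    if md.master_display_min_nits >= md.master_display_max_nits:
        problems.append("display min luminance >= max luminance")
    if md.max_fall > md.max_cll:
        problems.append("MaxFALL exceeds MaxCLL")
    if md.max_cll > md.master_display_max_nits:
        problems.append("content brighter than the mastering display")
    return problems

# Content claiming 4000-nit highlights mastered on a 1000-nit display
bad = HdrStaticMetadata(1000, 0.005, max_cll=4000, max_fall=400)
print(validate(bad))  # → ['content brighter than the mastering display']
```

If values like these drift out of sync with the picture essence anywhere between the grade and the set-top box, the display tone-maps against the wrong assumptions — exactly the “bad end-user experience” Gauntlett warns about.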
Then there is the whole marketing piece. Consumer electronics vendors rightly point out that the concept of 4K is more readily understood by consumers familiar with HD 1080p. But HDR?
“HDR is hard to communicate to consumers and it has nothing to do with the technology,” declared Netflix's Scott Mirer at CES. “[The industry] does not have experience with how to talk about HDR’s benefits to consumers and we don’t have convergence on how to implement it.”
According to Technicolor, the challenge for the industry will be to convey the HDR and colour space value at the retail level.
Viewers won't see any benefit in HDR while content is still produced to match the existing Rec. 709 standard. Upscaling technology in TV sets is a workaround, although the results can look unnatural and oversaturated. One of the goals of the UHD Alliance is to standardise around Rec. 2020, but until content is produced in that format, a TV's ability to display the extra colour gamut will be wasted.
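The gap between the two gamuts can be made concrete by comparing the triangles their primaries form on the CIE 1931 xy chromaticity diagram. The coordinates below are the published Rec. 709 and Rec. 2020 primaries; the area ratio is only a rough proxy for perceived gamut coverage:

```python
def triangle_area(pts):
    """Shoelace area of a triangle given three (x, y) chromaticity points."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# R, G, B primary chromaticities on the CIE 1931 xy diagram
rec709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
rec2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

ratio = triangle_area(rec2020) / triangle_area(rec709)
print(f"Rec. 2020 gamut is ~{ratio:.2f}x the area of Rec. 709 in xy")
```

A display fed Rec. 709 content can only ever reproduce the smaller triangle, which is why naive gamut expansion in the TV (stretching 709 values toward the 2020 primaries) tends to look oversaturated.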
On the production side, HDR impacts right through the chain. Many digital cameras can capture 14+ stops of dynamic range but this tends to get thrown away quite quickly in the capture process. A workflow needs to be found to store and retain the information into the pipeline.
As it stands, content may require a separate HDR grade, adding cost to the post process. Professional monitors capable of displaying HDR are few and far between. As with the move from SD to HD, there are implications for how a scene is lit, how special effects are composited and even how make-up is applied on set.
The concerns raised by the broadcaster lobby are being explored within the UHD 1 phase 2 specifications currently working their way through the DVB. Once standardised, new chipsets will be needed to accommodate the change which could be a couple of years away.
None of this is to suggest that broadcasters won't launch a UHD service before 2017. The odds are that they will bow to competitive pressure and launch live UHD sports services first, where higher frame rates are more of an issue than higher brightness.
According to Technicolor, a single, open specification accepted by both content creators and display developers will eliminate the 'chicken and egg' scenario of content and hardware availability, allowing consumers to experience the full benefit of these new technologies.
However, there are those who argue that the introduction of HDR should be detached from UHD so that it can be applied to HD as well. Those who have witnessed comparisons of HDR-augmented High Definition versus non-HDR UHD content at shows like IBC tend to leave convinced that HDR is the greater visual bonus.

Thursday 5 March 2015

Which Territories Are Following The UK DPP’s Lead?


TheBroadcastBridge


The UK’s DPP may have set the bar high in terms of standardised file delivery and compliance, but other territories are working toward their own versions, which means vendors need to rework their tools to fit.

https://www.thebroadcastbridge.com/content/entry/2140/which-territories-are-following-the-uk-dpps-lead
In most territories, standardisation is moving towards AVC-I, which is inherent in the DPP spec. AMWA in the US, for example, has begun defining its own version of DPP. Not only do the tools have to be reworked, but so do the compliance procedures for each and every different standard.
The German-speaking (DACH) region is probably the closest in terms of specification and roll-out, although the programme there differs significantly for a number of important reasons:
1. The German file specifications, the ARD-ZDF MXF Profiles, were born out of a QC initiative of the EBU which organised all QC tests into one of five categories (see below).
2. Within the German-speaking broadcasters, the requirements for interoperability extend far beyond straight-forward file-delivery. Files can be exchanged between the public broadcasters at any point in the workflow from acquisition to distribution.
3. Not only does ARD-ZDF specify tightly constrained encoding profiles, but also a set of decoder tolerances. Theoretically, a more tolerant decoder will result in a far more stable and robust workflow; in practice, however, a wider variance in possible inputs to a decoder puts a greater strain on the testing of decoders to ensure compliance with the profiles.
The ARD-ZDF MXF Profiles were published in 2014 and are now being specified as the requirements for almost all new projects and upgrades in the German-speaking market.
Coming up on the rails behind Germany is the US, or at least Hollywood. The IMF initiative has gained significant momentum in the last couple of years, and with the arrival of UHD material and file deliveries, adoption of IMF has accelerated significantly.
IMF also differs from the DPP initiative in the sense that it is more the content creators and owners saying “this is how we want to deliver media” rather than the broadcasters saying “this is how we want to receive media”. This difference results in a very different approach to the file delivery standard and methods.
In the US, QC of closed captions, video descriptions, and languages has largely remained a manual effort. The FCC and other standards bodies have tightened the quality standards for captions and increased the breadth of content that must comply. What is the cost of continuing to do so manually and what are the costs of ignoring the problem?
“Content creators, distributors, and broadcasters will need to turn to automated approaches to verify closed captions, video descriptions, and languages,” says Colin Blake, Sr. Sales Engineer, Media & Entertainment Products at Nexidia. “Continuing to rely on manual review or spot checks leaves you exposed and is not scalable. The coverage obtained by spot checking is insufficient to find problems and these failures will mean loss of viewership, reduction in the perceived quality of the programming, and possible regulatory fines.”
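Nexidia's products analyse the audio itself phonetically; as a much cruder illustration of why automated verification scales where spot checks do not, one could score every caption against a speech-to-text transcript of its time window. This is a naive textual stand-in for their approach, and the similarity threshold is an arbitrary assumption:

```python
from difflib import SequenceMatcher

def caption_issues(captions: list[str], transcript_windows: list[str],
                   threshold: float = 0.6) -> list[int]:
    """Return indices of captions that match their time-aligned transcript
    window poorly. Naive text similarity, not phonetic search."""
    issues = []
    for i, (cap, win) in enumerate(zip(captions, transcript_windows)):
        score = SequenceMatcher(None, cap.lower(), win.lower()).ratio()
        if score < threshold:
            issues.append(i)
    return issues

captions = ["welcome back to the studio", "tonight's top story"]
windows  = ["welcome back to the studio", "completely different words here"]
print(caption_issues(captions, windows))  # → [1]
```

The point of the sketch is coverage: a check like this runs over every caption of every programme, whereas a human spot check samples only a fraction and misses the rest.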
Blake says the UK DPP has “certainly attracted attention” in the US with several groups evaluating whether it can be of value in their area.
“We will continue working with those groups to meet any standards in place and do everything we can to support our customers. Technology vendors answer to their customers and must stay relevant; keeping pace with standards bodies is part of that. Nexidia will continue to develop unique products looking at the essence of the content and not just the metadata.”
The EBU QC 5 Test Categories
- Regulatory: a test that must be performed due to requirements set by a regulator or government. Has a published reference document or standard.
- Absolute: defined in a standards document, including a standard scale. May have a defined pass/fail threshold. As a user, I ought to be able to compare the results of different QC devices and see the same result.
- Objective: measurable in a quantitative way against a defined scale, but no standard threshold is agreed. There may be no formal spec or standard, but an agreed method is implied by or exists in the test definition. As a user, I ought to be able to compare the results of different QC devices if I configure the same thresholds.
- Subjective: may be measurable, but with only a medium degree of confidence. A failure threshold is not defined, or is very vague. May require human interpretation of results. As a user, I cannot expect different QC devices to report identical results.
- Human-review only

Gold-coloured cards are used for tests that can only be carried out by human eyes or ears (‘Golden Eyes’ and ‘Golden Ears’), or where a human is required to fully interpret the results of an automated QC tool.

Sony expands on IP live production plans ahead of NAB


Sports Video Group
A key theme at NAB will be the growing ecosystem for live production over IP. For Sony in particular it is a core development path, as it promotes its own version of IP networking which it hopes will be adopted as an industry standard, mirroring the hand it played in bringing the Serial Digital Interface to market in the late 1980s.
“Our vision for IP Live is to eventually replace conventional SDI routers,” explained Paul Cameron, senior trainer of Professional AV Media for Sony in a webinar. “IP Live offers all the benefits of IP and all the benefits of SDI not only for local connections, but for transmission over longer distances.”
Sony seems to have co-opted the term ‘IP Live’, although the company itself says the underlying technology is better known as the Networked Media Interface, which may in time become known simply as NMI. It is also the name adopted by a Sony-led group of broadcast tech manufacturers backing this particular AV-over-IP approach; Cisco Systems, Evertz, Imagine Communications, Matrox, Rohde & Schwarz DVS and Vizrt are among them.
IP Live showcase
At NAB, Sony is firming up some of the IP Live products it demonstrated in prototype at last year's IBC. These include “crushing” the functionality of the core Low Latency Video Codec (LLVC) chipset to fit a 3U rack-mounted system, with options to fit nine SDI-to-IP converter boards per rack.
According to Cameron: “This will allow broadcasters to build an IP Live island in their live production system, emulating what happened in normal production (where edit bays of file-based workflow were surrounded by coaxial camera links and SDI routing).”
The LLVC is based on the same codec used in Sony studio cameras and supports 4K 60p transmission over 10 Gbps Ethernet. The creation of an SMPTE Registered Disclosure Document for the codec is in progress. Further, Sony will show more of its products – and presumably those of third-party adherents to the Networked Media Interface – with an IP connection.
“Systems cameras, switchers, servers, monitors will be fitted with standard RJ45 connectors to allow direct connection into the IP Live system,” said Cameron.
Sony is also bringing out a System Manager which will allow network management and routing from one central point. It will feature topological views of the network, list views and cross-point views. Expect all of these to ship around IBC2015.
Cameron admitted the product details were “sketchy”, but proceeded to paint several use case scenarios.
In a traditional live studio environment there would be one router sitting between lines in and lines out for transmission. “It requires an awful lot of cabling, which is difficult to set up, difficult to take down and inflexible both in terms of technology and from the standpoint of the contracts typically used to lease it,” he said.
By contrast, with IP everything goes through a central IP switch. The amount of cabling drops significantly and changing links or contracts is much more straightforward.
In a typical current 4K OB situation, he continued, the truck would be based on an SDI router with four lines required to connect each 4K camera. The cabling is complex and adds to the weight of the vehicle. Adding another server is a complicated procedure, he said.
“With IP the control of the whole system is from a central IP switch. Each camera requires just one cable. The whole set-up is much simpler and arguably more cost-effective. For remote production of minor league sports events, live IP means the ability to control cameras from a central point reducing the number of staff and kit on-site.”
Some challenges remain, he suggested, notably the increase in bandwidth from HD to UHD “which we do need to watch”. A single 100 Gbps link would allow for two 4K streams, he said.
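That bandwidth concern is easy to sanity-check with back-of-envelope arithmetic. The sketch below computes raw payload rates for uncompressed 10-bit 4:2:2 video; it ignores blanking and IP encapsulation, so real carriage figures run somewhat higher:

```python
def uncompressed_gbps(width: int, height: int, fps: int,
                      bits_per_pixel: int) -> float:
    """Raw video payload rate in Gbit/s (no blanking, no IP overhead)."""
    return width * height * fps * bits_per_pixel / 1e9

# 10-bit 4:2:2 averages 20 bits per pixel (10 luma + 10 shared chroma)
hd  = uncompressed_gbps(1920, 1080, 60, 20)  # ~2.5 Gbps
uhd = uncompressed_gbps(3840, 2160, 60, 20)  # ~10 Gbps
print(f"HD 60p: {hd:.1f} Gbps, UHD 60p: {uhd:.1f} Gbps")
```

Including SDI blanking and transport overhead, a 4K 60p signal carried uncompressed occupies roughly 12 Gbps (the quad-link 3G-SDI figure), so the four-fold jump in pixel count from HD translates directly into the link capacity Cameron says "we do need to watch".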
The NXL-IP55, which was Sony’s first foray into live IP production, cannot be used for 4K. It has been popular with certain broadcasters, Sony said, not least because it transmits functions such as audio, intercom and tally, along with multiple HD video streams, over an IP local area network, but it is “not fully Networked Media Interface compliant,” said Cameron.
Latency is another big issue. “If you connect a camera to a monitor you expect to see the picture instantly,” he said.
Sony has adopted SMPTE ST 2022-6, which transports uncompressed HD-SDI over IP, and ST 2022-7 for seamless protection switching, although Cameron would only commit to saying the performance was fast and that “we are working on it”.
Networked Media Interface features
The chief benefits of the Networked Media Interface were summed up as:
- The ability for broadcasters to use COTS (commercial off-the-shelf) hardware such as standard IT servers, rather than bespoke and expensive broadcast-specific ones;
- Less hardware and cabling infrastructure;
- Perfect synchronization: Working with SMPTE 2059, a precision time protocol that permits switching of different devices over IP to achieve broadcast quality IP genlocking;
- Bi-directional cabling: One reason for fewer cables;
- High fault tolerance: Sony calls it Hitless Failover;
- Scalable and extensible: IP products and networks can be modified easily and the system reconfigured far quicker than using conventional AV gear;
- Realtime monitoring: Operators located centrally can monitor and tune the traffic across the network;
- Analysis: Can be done much more easily with IP tools as can the scheduling of tasks such as updating hardware.
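The “perfect synchronization” point rests on SMPTE ST 2059, which profiles the IEEE 1588 Precision Time Protocol: a follower clock estimates its offset from the grandmaster using four timestamps from a Sync/Delay_Req exchange. A sketch of that standard two-way calculation, with made-up timestamps rather than a protocol implementation:

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """IEEE 1588 two-way time transfer.
    t1: master sends Sync; t2: follower receives it;
    t3: follower sends Delay_Req; t4: master receives it.
    Assumes a symmetric network path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # follower clock minus master clock
    delay  = ((t2 - t1) + (t4 - t3)) / 2  # one-way path delay
    return offset, delay

# Illustrative nanosecond timestamps: follower clock runs 500 ns ahead,
# one-way path delay is 2000 ns.
offset, delay = ptp_offset_and_delay(t1=0, t2=2500, t3=10_000, t4=11_500)
print(offset, delay)  # → 500.0 2000.0
```

Once every device shares a common time base to this precision, frame-accurate switching between IP sources becomes possible without the physical genlock distribution that SDI plants rely on.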
At the end of the webinar Sony polled those in attendance. Over half, it seems, were still learning about IP technologies and how they impact their business; 28% said they were not ready for Networked Media Interface technology; while 14% said they were early adopters and intended to be at the forefront of IP Live production.
The next stop after NAB for Sony in this area is a follow-up webinar on May 9, then a programme of online education before face-to-face training on new products around IBC.