Wednesday, 18 October 2017

Graphics and editing round up

InBroadcast

Avid moves to the cloud, Adobe offers collaborative editing and Blackmagic’s focus is on realtime composites.


The product strategy of most production and post-production vendors continues to shift functionality and services into the cloud. Perhaps the biggest thing in Avid’s favour is that its brand offers a familiar and trusted on-ramp for content creators, facilities and broadcasters moving to the cloud.

At IBC, the company debuted MediaCentral, a suite of tools accessed by browser which promises to help create, distribute, and manage content using one common platform.

According to Chairman & CEO Louis Hernandez, Jr., Avid has spent half a billion dollars on re-engineering its product for the cloud.  

He said, “We’ve not just completed a full transformation; we have created a platform that is flexible and open and agile enough to allow us to move forward.”

The MediaCentral production suite is the firm’s centrepiece. It costs from $20,000, includes Media Composer and Nexis storage, and has modules and apps for editorial, news, graphics, and asset management as well as an array of media services and partner connectors.
Nor is it exclusively an Avid domain. One of those ‘partner connectors’ could be Grass Valley or Adobe, should Premiere be your craft tool of choice.

“MediaCentral delivers a unified view to all media – whether stored on Avid or another platform,” said Hernandez. “The user experience is common for everyone.” Avid is now rolling out web-based applications like search, edit, logging and publishing.

Media Composer VM debuts as a way for users to access a cloud-based version of Media Composer from a tablet or computer. Media Composer Cloud Remote is an application that runs on the local computer but accesses content located in the cloud.

Microsoft is Avid’s preferred cloud provider. Dana Ruzicka, VP and chief product officer, said “The move to cloud is imminent for almost all of our customers – but it’s not a lift and shift. It’s an evolution based on their business needs. This really is the beginning of the cloud era for our industry.”

Pro Tools, news workflows and archiving are already available via Azure. Avid is also allowing users to tap into Microsoft’s machine learning tools, for example, facial recognition and social media analytics.

Avid also unveiled Maestro Sports, described as an all-in-one broadcast graphics and video control system targeting live sports production. Its selling points include operation by a single person, so broadcasters can save on costs. An operator can integrate tracked-to-field graphics and augmented reality to highlight and analyze plays, as well as visually tie sponsors into the game. From the same UI, operators can create virtual formations and starting lineups, 9-metre circles, distance-to-goal indicators, and even virtual elements such as team logos, 3D objects, and advertising.
The system includes a video server for graphics and video rendering and playout, as well as connection to external storage such as Avid NEXIS in the studio.

Blackmagic Design’s main focus at IBC was on the Ultimatte 12, the latest version of the keyer which Blackmagic acquired this time last year.

Preferring to describe it as an advanced real-time compositing processor, Blackmagic positions Ultimatte 12 for creating composites with both fixed cameras and static backgrounds, or automated virtual set systems. It can also be used for on-set previsualisation, allowing actors and directors to see the virtual sets they’re interacting with while shooting against green screen.

Ultimatte 12 also features one-touch keying technology that analyzes a scene and automatically sets over a hundred parameters to make live compositing easier. Based on a new hardware processor, the machine costs under $10,000 and is available now. Ultimatte 12 is controlled via Smart Remote 4, a touch-screen remote that connects via Ethernet. Up to eight Ultimatte 12 units can be daisy-chained together and connected to the same remote, which costs an additional $3,855.

Announced at NAB and now released are DaVinci Resolve 14 and DaVinci Resolve Studio 14. Enhanced editing capabilities, multi-user abilities, increased native OFX plug-ins and, certainly not to be glossed over, the inclusion of Fairlight all combine to make Resolve and Resolve Studio 14 a major post production tool.  The price of Resolve Studio has been lowered from $995 to $299, making the jump from the still-free Resolve to the more feature-laden Studio easier.

New features coming to Adobe Premiere Pro via Creative Cloud include shared projects, which are like bins in Avid. These allow for different subprojects to all be stored inside of a master project. This means editors can work on different scenes at the same time while all having access to each other’s subprojects with different permissions set for who can read/write vs. who can just read and refresh changes. Other useful new features include more marker colours, close gaps and font previews. This could be a ‘game changer’ for Adobe’s place in the multi-editor workflow of large feature films and TV shows, which Avid has dominated.

VR integration between Premiere Pro and After Effects is made possible by Immersive, a toolset which includes Adobe’s new VR Comp Editor in AE with the ability to transform 360-degree footage into flat rectilinear shots. In other words, it allows the editor to review their timeline and use keyboard-driven editing for trimming and markers while wearing the same VR head-mounts as their audience. It will work either on screen or, more appropriately, with an Oculus Rift or HTC Vive headset. Immersive effects apply Blur, Glow, Sharpen, Denoise and Chromatic Aberration filters in a VR-aware version, producing proper results that a flat image filter could not.
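The core of turning 360-degree footage into a flat rectilinear view is a projection: each pixel in the output viewport is cast as a ray and mapped back to a longitude/latitude position in the equirectangular source. A minimal sketch of that mapping (illustrative geometry only, not Adobe's implementation; the function name and parameters are assumptions):

```python
import math

def rectilinear_to_equirect(x, y, width, height, fov_deg=90.0, yaw_deg=0.0):
    """Map a pixel (x, y) in a rectilinear viewport back to normalized
    (u, v) coordinates in an equirectangular source frame."""
    # Focal length of the virtual image plane, derived from the field of view.
    f = 0.5 * width / math.tan(math.radians(fov_deg) / 2)
    px = x - width / 2
    py = y - height / 2
    # Ray direction expressed as longitude/latitude (camera yaw applied).
    lon = math.atan2(px, f) + math.radians(yaw_deg)
    lat = math.atan2(-py, math.hypot(px, f))
    # Longitude/latitude -> normalized equirectangular texture coordinates.
    u = (lon / (2 * math.pi) + 0.5) % 1.0
    v = 0.5 - lat / math.pi
    return u, v
```

Sampling the source frame at (u, v) for every viewport pixel yields the flat shot; changing yaw (and a pitch term, omitted here) is what lets the editor "look around" the sphere.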

In addition, audio will be determined by orientation or position and exported as ambisonics audio for VR-enabled platforms such as YouTube and Facebook. VR effects and transitions are now native and accelerated via the Mercury playback engine.

“Video is the fastest growing medium in communications and entertainment,” said Bryan Lamkin, svp of Digital Media at Adobe. “Adobe is breaking new ground in collaboration, social sharing and analytics to accelerate workflows for all video creators.”

Brainstorm’s InfinitySet 3 virtual set toolset now includes the graphics editing and creation features of Aston, so it can edit, manage and create 2D/3D motion graphics when required.
The incorporation of Aston graphics allows for direct editing of not only Aston projects but the individual elements, animations, StormLogic, data feeds and so on as if they were in Aston. This means that in broadcast operation, or while on-air, InfinitySet 3 operators will be able to adjust any object in the scene, even those with added properties such as animations or external data feeds, without having to do so in a separate Aston application.

The package supports third-party render engines such as the Unreal Engine. Because it is integrated with the Brainstorm eStudio render engine, InfinitySet 3 can control the Unreal Engine’s parameters in real time, such as 3D motion graphics, lower-thirds, tickers, and CG.

The product also comes with technologies such as 3D Presenter, TeleTransporter, HandsTracking, and FreeWalking, all of them using Brainstorm’s unique TrackFree technology. Especially interesting is a VirtualGate feature, which allows for the integration of the presenter not only in the virtual set but also inside additional content within it, so the talent in the virtual world can be ‘teletransported’ to any video with full broadcast continuity.

Sixty has a slick Interactive TV graphics system that layers on top of existing high-end graphics displays (from companies such as ChyronHego, Vizrt, or Ross). This creates a simplified workflow for producers while offering a highly-interactive graphical display for users on a touch-screen tablet or phone.

Users can access loads of advanced analytics and Sixty fully integrates with data services from Opta, STATS, and more. The software also places monetisation at the forefront, allowing for producers to automate ads tied to events such as goals and even prompt sales of products such as jerseys or other items that are tied to real-time moments in the stream.

ChyronHego has a freshly minted integration of its Silver robotic camera head with the RoboRail straight camera rail system from Mo-Sys for a camera tracking and augmented reality graphics solution. Silver is part of ChyronHego's range of virtual studio/AR trackers that provide real-time, precise camera motion within 2D or 3D computer-generated backgrounds.

Installed on the RoboRail, mounted vertically or horizontally on a wall, ceiling, floor, or even the anchor's desk, the system makes it easy for news productions to enliven their newscasts with visually compelling AR graphics. MOS interfaces with newsroom computer systems enable ChyronHego's CAMIO graphic asset management server to deliver the AR and virtual set graphics to air, together with the right assets and camera motions.

Google has spent $30 billion on building out its cloud services and is now pitching it to media. “For us, cloud is about scale and there is no limit to what media can do,” explained Jeff Kember, Technical Director Media, Google Cloud, at IBC. “We have an end to end production workflow from camera raw and debayering through image recognition, editing and on to archive and disaster recovery.”

The editing he mentioned refers to Google’s ability to apply machine learning to data in the cloud. For example, it has a research application allowing production teams to automatically generate highlights reels from uploaded rushes – potentially shortcutting the bespoke systems of Avid or EVS.

“Our goal is not to replace human editors but to make the process more automated,” said Kember of what he called ‘intelligent clipping.’

Its cloud rendering was used by facility MPC in London and the shooting stage in LA for creation of The Jungle Book. “VFX companies are using Google for simulation and beginning to expand their entire production workflow into the cloud. Ultimately, digital intermediate colour correction will go this way too. If you have just released a 2K version of a show and need to do a 4K version you can pull this data from the cloud in minutes, perform all the DI, master the HDR and use Google to distribute.”

Friday, 13 October 2017

Why VR could be on shaky ground as Nokia abandons Ozo


RedShark News



First it was 3D, now it could be VR. The news that Nokia is halting development of its Ozo VR system is a further signal that wearing hefty head gear while viewing content is not a marriage made in heaven.


Nokia’s abandonment of its VR camera Ozo is a major reality check for proponents of the new format. Citing “slower-than-expected development of the VR market”, the Finnish company is resetting its sights on another digital vertical – that of e-health. In the process, it will lay off over 300 people working on VR and will no longer develop the camera.

It’s a shock, given what we can only assume is the millions of Euros in R&D that the company must have invested to build the Ozo. That Nokia is prepared to walk away from it suggests that it believes that either much more outlaying was needed to perfect the technology or — more likely — that there was no prospect of making its money back anytime soon.

The Ozo, launched in March 2016, was positioned at the very top of the nascent capture market. Priced at €55,000 (reduced to €34,780 earlier this year), it was seen as a professional turnkey solution to high-quality capture with eight lenses, each with a 2K x 2K sensor, eight mics for spatial audio, a global shutter and its own fine-tuned stitching software — versus the cheaper jury-rigged GoPros and off-the-shelf stitching tools.

In contrast to the similarly high-end Jaunt VR camera, the Ozo was also seen as the principal answer for live streaming of VR. It was adopted by UEFA’s streaming production partner Deltatre for a test capture of Euro 2016 soccer games. BT used the camera and its workflow along with UEFA to produce the much-heralded live VR coverage of the Champions League final from Cardiff last season. Other live stream producers which adopted the Ozo as official production partner include Streaming Tank. It is likely, though not confirmed, that Nokia underwrote much of the investment in these live sports productions.

While Nokia says it will continue to support existing customers, it’s hard to see how BT Sport and others can continue to place their faith in an obsolete technology. BT executives would have liked to launch a regular live VR stream around Premier League matches this season but have so far not done so. Sky — which is not wedded to a camera but has a stake in Jaunt — has restricted its VR trials to short-form content across genre with VR of sports like boxing being mostly recorded.
The market is simply not there yet. CCS Insight predicts 14 million smartphone VR headsets will be sold this year, rising to 25 million in 2018. That’s global. Sales of head-mounted displays from Oculus or Sony have also been disappointing.
While there are analyst figures which suggest greater sales and higher forecasts, these are still insufficient audience sizes for content owners to address commercially.

Live high-end VR (as opposed to live streaming 360/180 content to Facebook) suffers from the same syndrome as 3D. Fans in the same room want to be able to share the experience, not be isolated behind a mask. BT Sport has realised that a live soccer match is too long for a VR experience. In addition and perhaps more fundamentally, the hyped experience of ‘immersion in the best seat in the house’ doesn’t seem that compelling. We want to be told the story of a game, not have to look around and into the distance or navigate (direct) ourselves to enjoy the game.

There will be continued investment in live VR. Nokia is not the only game in town and we can expect more marquee events to be offered with a live 360 stream — the Discovery-produced 2020 Olympics, for one.

There are also more bets being laid on narrative and factual VR content. AMC Theatres, the largest cinema chain worldwide and largest in the US (and owner of Odeon in the UK), has invested in Dreamscape Immersive, a firm developing VR attractions inside and outside of AMC theatres. It is also committing $10 million to a fund to generate bespoke VR content for the facilities. Meanwhile, Discovery and Google are to make the 38-episode travel series Discovery TRVLR, to seed YouTube, the Discovery VR app and sales of the Google Daydream View headset.

One can’t help feeling, though, that the VR industry will remain a niche and slow burn until a killer piece of original content emerges. Perhaps someone should invite director Denis Villeneuve to produce something that can only be experienced this way.

Nokia has a history of innovation and a growing reputation for dropping technology too. It went from hero to zero in the global smartphone business when it sold its mobile devices division to Microsoft in 2013 after striking a deal to replace its Symbian operating system with Windows and seeing its market share slide dramatically.

Can Better Monitoring Prevent a Replay of Class Action Against Showtime?

Streaming Media

Showtime faces a class action after problems with the Mayweather-McGregor fight stream. Continual and complete testing might mitigate future brand damage if the issue was not with the content provider.

The class action against Showtime for allegedly failing to deliver on its as-advertised quality of service for the Mayweather vs McGregor fight in August has alarm bells ringing across the industry. If the internet cannot handle large volumes of concurrent video streams, will rights holders think twice about the claims made for OTT sports in future? More pertinently, can anything be done about it?
http://www.streamingmediaglobal.com/Articles/ReadArticle.aspx?ArticleID=121082
"A global show on the scale of a Super Bowl cannot be live streamed today to everyone online," says Charles Kraus, senior product marketing manager at Limelight Networks. Recent Super Bowls have topped 110 million broadcast viewers. "The internet would have to grow by an order of magnitude in capacity to support streaming for everyone."
Kraus pinpoints the main issues as congestion in the last mile "over which nobody has control." "The average bandwidth in the US is 10Mbps, [and in] the UK [it's] 20Mbps, but you need at least 30Mbps to deliver 4K. Even where 4K is advertised (by Netflix, Hulu) and people pay a premium for it, you never hear these providers state that the end-user's ability to receive this will depend on your ISP network."
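Kraus's point about last-mile bandwidth reduces to simple arithmetic: a stream is only viable if the link comfortably exceeds the stream's bitrate, with headroom for protocol overhead and throughput variance. A toy check using figures in the same range as those quoted above (the headroom factor is an assumption):

```python
def can_sustain(bitrate_mbps, link_mbps, headroom=1.25):
    """True if a link can sustain a stream bitrate with headroom.

    The headroom factor (assumed here) covers protocol overhead and
    throughput swings; players switch down well before saturation.
    """
    return link_mbps >= bitrate_mbps * headroom

# A 10 Mbps average US link cannot hold a 25 Mbps 4K stream,
# while a 20 Mbps UK link easily carries a 5 Mbps HD stream.
```

This is why "4K advertised" and "4K received" diverge: the provider controls the encode, but the subscriber's ISP link decides what actually arrives.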
While latency and buffering are the perennial headaches for live stream OTT providers, particularly of premium sports, the main complaint of Zack Bartel of Oregon was that he'd paid Showtime $99.99 for an HD 60fps experience and wasn't getting it on his Apple TV. He had the foresight to capture screenshots of what appear to be blurry pictures of the fight and includes these as evidence in the legal documents, apparently filed day and date with the fight on August 26. He took a speed test to make sure the issues weren't being caused by a bad home internet connection (speed test results also included) and tested YouTube and Netflix at the same time, which were 'in crystal clear HD, as usual'.
The claim argues, "In hopes of maximizing profits, defendant [Showtime] rushed its pay-per-view streaming service to market, without securing enough networking bandwidth to support the number of subscribers who paid to watch the fight." It cites Showtime's use of HLS (HTTP Live Streaming) and VBR as not being equal to the task:
"Defendant knew and should have known its system wasn't able to conform to the quality defendant promised its customers, based on defendant's available bandwidth and subscriber numbers. Instead of being upfront with consumers about its new, untested, underpowered service, defendant …. intentionally misrepresented the quality and grade of video consumers would see using its app…"
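The HLS-plus-VBR behaviour the claim complains about is by design: the player measures its throughput and selects the highest rendition from a published ladder that fits, so congestion produces soft, low-bitrate pictures rather than an outright failure. A toy version of that selection logic (the ladder values and safety fraction are illustrative assumptions, not Showtime's configuration):

```python
# Renditions as (height, bitrate_kbps), highest first -- illustrative ladder.
LADDER = [(1080, 6000), (720, 3500), (480, 1500), (360, 800)]

def pick_rendition(throughput_kbps, safety=0.8):
    """Choose the highest rendition whose bitrate fits within a safety
    fraction of measured throughput; fall back to the lowest variant."""
    budget = throughput_kbps * safety
    for height, bitrate in LADDER:
        if bitrate <= budget:
            return height
    return LADDER[-1][0]
```

So a subscriber whose path to the CDN was congested to 2 Mbps would be served roughly 480p, which is exactly the "blurry" experience described, even while their speed test to other services read fine.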
It's a class action which could see Showtime sued for millions of dollars in returned PPV fees. Neulion, the cable network's official live streaming platform for the fight, has kept its head down since the event, after promoting its participation extensively in the run-up.
"The big challenge is that if consumers in a particular region or on a particular access network are hit by poor service, but the content provider is providing perfect packaged content into the network, then who is to blame?" says Stuart Newton, VP strategy & business development at Ineoquest, which is owned by Telestream.  "Ultimately, the content provider brand is damaged, and, as they are the company charging the pay-per-view fee, they are the ones that receive the wrath of the customers. The next question is how much was the customer affected? Did they get a few 'annoying' glitches caused by a particular issue in the delivery chain, or was the content totally unwatchable due to a major failure? Compensation needs to be appropriate, and the only way to do that is by knowing who was affected and how badly."
Is it possible or desirable to pinpoint the exact point of failure during a particular live stream, and therefore for the rights holder to hold that vendor or service partner accountable?
According to Newton, this depends on how many points of monitoring are in place, and how well integrated the end-to-end systems are.  The more data silos there are, the longer it will take to pinpoint where the actual error is.
"This is why it's becoming critical to integrate operational management systems with client analytics solutions to provide near-real-time impact analysis, and initial deployments are starting to happen now," he says.
"It also makes sense to test after configuration changes (which are happening all the time whether you know it or not), and certainly testing when new protocols, resolutions, or network upgrades are instigated. It basically means you need to test continually—there is always something changing that you won't be aware of, even if you own the delivery networks. Test, and then keep testing," Newton says.
Checking video availability at many geographical locations is key for understanding where the issue originated (the preparation stage, a third-party CDN or access network?) and will help in being able to mitigate future brand damage if the issue was not with the content provider.
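The localisation logic Newton describes can be boiled down to comparing failure rates across monitoring vantage points: failures seen everywhere implicate the preparation/origin stage, while failures confined to one region point at that region's CDN edge or access network. A simplified classifier (the threshold and region names are assumptions for illustration):

```python
def locate_fault(error_rates, threshold=0.05):
    """Classify a live-stream fault from per-region error rates,
    given as fraction of failed segment requests per monitoring region."""
    bad = [region for region, rate in error_rates.items() if rate > threshold]
    if not bad:
        return "healthy"
    if len(bad) == len(error_rates):
        return "origin"  # every region affected: packaging or origin issue
    return "regional: " + ", ".join(sorted(bad))  # CDN edge or access network
```

With data like this in hand, the content provider can show whether the fault was theirs or a downstream partner's, which is precisely the negotiating position Newton describes next.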
"Having the data allows for negotiation with the CDN and access network providers—the worst possible situation is not knowing what to fix for the next major event," says Newton.
As the impact and root-cause knowledge becomes greater and more real-time, the case for more automation and self-healing video networks also becomes stronger. The good news is that this is also happening due to advances in the use of dynamic orchestration for cloud and virtualization (especially network functions virtualization [NFV]). As functions in the delivery chain become virtualized, they are evolving to have advanced control and configuration capabilities, as well as new scaling capabilities leaning towards micro-architecture-based services.
"Next-generation video services will be able to 'spin up' an encoder on demand for a new channel, or as a failover mechanism for an existing channel," says Newton. "It's also possible to dynamically deploy the necessary monitoring needed as a micro-service, so the orchestration system knows in real-time when something is wrong and can take the appropriate action. In reality, this means that real-time monitoring is becoming an essential part of the system – you can't take corrective action unless you know what's happening. Driving self-healing services from Twitter feedback is not really practical."
As Limelight's Kraus points out, CDN technology will improve and bandwidth capacity will increase. But the traffic is likely to increase fast as well, as audiences will grow.
"Advances in video compression can also help provide premium video quality at lower bitrates," says Thierry Fautier, VP, Video Strategy, Harmonic. He describes Harmonic's EyeQ as a 'live content-aware encoding technology' addressing this. The firm claims to have shown up to 50 percent bitrate reduction for OTT HD profiles while keeping 100 percent compliance with H.264.
However, Fautier says, "today we see the limitations of the internet when traffic is just a few percent of broadcast. Unicast does not scale." He contends that the industry needs new techniques for live use cases, based on standards.
Peter Maag, CMO at Haivision, agrees the market needs to drive towards standards-based broad-scale low latency delivery. He cites emerging standards such as SRT (Secure Reliable Transport) developed by Haivision (for first mile and possibly CDN) and QUIC from Google (last mile).
"In my opinion, although still in development, today's internet is more than capable of reaching truly global audiences," Maag says. "With increases in quality, latency, and reliability throughout the workflow imminent, combined with the richness of content and the ubiquity, linear broadcast's days are numbered."
The DVB is also working on an ABR multicast protocol that can be deployed not only on managed networks, but also on the public internet. A standard should be available by early 2018 to cover those two use cases.
Fautier, who is part of the Ad Hoc Sub-Group of CM-AVC (which is tasked with defining the Commercial Requirements for ABR Multicast), says "ABR streaming delivered over the top on managed networks is based on unicast technology, making it difficult to scale for live applications. To resolve this issue, the DVB has decided to develop a specification that will enable ABR multicast distribution via any bidirectional IP network, including telco, cable, and mobile."

Cinema projection has a limited lifetime

RedShark News

The writing is on the wall for cinema projection. New screen technology composed of LEDs is set to disrupt the century-old way of viewing movies on the big screen by the end of the next decade. The transition is likely to be driven as much by filmmakers, keen to present their work with a dynamic range far greater than any projector can achieve, as by manufacturers.

https://www.redsharknews.com/distribution/item/4929-cinema-projection-has-a-limited-lifetime


“LED displays are likely to obsolete projection by 2030 or sooner,” predicted Peter Lude, Technical Advisor, RealD.
LED displays — also called emissive or direct view displays — are composed, as you might expect, of tiny LEDs of 0.2–0.6mm. These are built into modules, the modules into panels, and the panels into whatever size you like. The base resolution is 4K (or over 8 million pixels per panel), but Sony says its Crystal LED technology, which emerged out of the firm’s visualisation division (for high-end advertising), can display resolutions of 16K.
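Taking the article's 4K-per-panel figure at face value, the arithmetic behind "over 8 million pixels" and a 16K wall checks out (panel geometry assumed for illustration):

```python
# One 4K panel at DCI-like geometry (assumed dimensions).
w, h = 4096, 2160
pixels_per_panel = w * h   # 8,847,360 pixels: "over 8 million"

# Tiling four panels across yields a 16K-wide canvas.
wall_width = 4 * w         # 16,384 pixels wide
```

The modularity is the point: resolution and screen size scale together simply by adding panels, with no single light source to spread thin.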
“Without being arrogant… our technology is too good for cinema,” said Oliver Pasch, Sales Director Digital Cinema Europe, Sony. “But that’s good since it’s easier to make something of lower spec than to make something of higher spec.”
What’s wrong with projectors anyway? Well, emissive displays can deliver dramatically improved contrast for a dynamic range that beats projection capabilities.
“With every projector light leaks from the light source, but with LED, when you turn it off, there is no light coming out,” said Lude, who light-heartedly suggested coining the term Super Extraordinary Amazing Dynamic Range (SEADR) for its superlative effect.
Sony and Samsung, the two manufacturers currently backing direct view cinema screens, claim contrast ratios of infinity to one.
There’s also the issue of ambient light in a cinema — the exit door signs, for example — where the light is reflected off the projection screen back out to the audience. Not so with LEDs, which suck in the light.
“LEDs can render blacks to be extremely dark, meaning that, for example, light reflections from water or headlights can be displayed at higher brightness than is possible in any projector,” said Lude.
Plus, you can have new screen configurations. A panoramic cinema experience is currently being marketed by Barco (called Barco Escape) using three projectors, but this suffers from light bouncing between the screens and degrading the grey scale.
“So, imagine an immersive screen wrapping around the audience using LED panels. The blacks are stunning, there are no straight edges, the curves are very tight and it could be supplied with content from the VR world taken from head-mounted displays and engineered for a movie narrative. None of that is possible with projection.”
There are economic benefits too for cinema owners. Since LEDs consume no power when switched off to ‘illuminate’ black, this saves on electricity versus the always-on energy of laser projection or xenon lamps. LED panels last up to 100,000 hours or 15 years, whereas projectors have a lifespan of barely half that. You can also get rid of projection booths and the speakers that currently sit behind the projection screen, freeing space for multiplex owners to fit in more screens.
Are there any downsides? The principal one is cost. Just now, to make an LED screen of 40-50ft — the size of the average main screen in your local picture house — would cost $700,000.
However, as with any new technology, this cost — principally the cost of manufacturing the LED boards — will come down.
“When the total cost of ownership is close to projectors, then change is imminent,” said Lude.
The introduction of MicroLED technology, in which brand giants like Sharp, Google and Apple have invested, could cause the price to drop dramatically, he predicts.
In July, Samsung introduced the world’s first commercial direct view screen into a theatre in Seoul. Sony says it is building a cinema product but has not given details of when it will be available.
Sony of course also makes projectors. “It is not the end of projectors. We will see another round of sales of projectors into cinema. But LED is probably the only true HDR option around.”                         

Tuesday, 3 October 2017

How to Enable Content Security & Beat Piracy


UBB2020
Pirates today are very sophisticated, not only in the technologies they use to capture and redistribute content but also in how they protect themselves from content owners and anti-piracy vendors.

This problem is amplified because content is no longer only available via smartcard; the prevalence of networks and devices makes content distributors vulnerable to myriad threats. Although protection continues to become more sophisticated, the sheer volume of streaming approaches makes it difficult to close all potential points of attack.

While some pirate organizations do everything -- from capturing content to redistributing it to the user -- many pirates specialize in one part of the chain, content protection specialist Nagra said. Some solely capture content and resell it, while others exclusively build a viewer base and embed streams.

There also is the alarming development of clandestine ad networks created by pirates for pirates, largely due to their rejection by legitimate ad networks (think Google AdWords and Facebook).

"Pirates have moved from focusing on distributing access rights to legitimate broadcasts ... to distributing the content itself and therefore benefitting from the technical advances in technology, such as Adaptive Bitrate (ABR) streaming and cloud hosting," said Fred Ellis, senior director at Arris Security Solutions, in an interview.

Anatomy Media's study "Millennials at the Gate" found that 69% of young millennials use at least one form of video piracy; of that group, 60% stream content without paying.

Even with a crackdown on the sale of pirate devices by large e-commerce sites, pirates adapt by using vague terms that merely hint at what a user could access on pirate IPTV services. Even some major retail chains sell these devices.

Pirates have become better at protecting against anti-piracy efforts.

"Some use session-based tokens to prevent ISPs from accessing those URLs to validate infringement notices, some generate dynamic URLs that change every few minutes so they can argue they are compliant and are taking down streams," Christopher Schouten, senior director of product marketing at Nagra, told UBB2020. "Some block access to the infringing streams to IP ranges from major cloud computing providers" as they know these are used by anti-piracy vendors.
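The session-token trick Schouten describes is the same mechanism legitimate CDNs use for access control: an expiry timestamp and an HMAC signature are appended to each stream URL, so a link captured by an ISP or anti-piracy crawler stops working minutes later. A minimal sketch of such token-gated URLs (the key, path and parameter names are illustrative assumptions):

```python
import hashlib
import hmac
import time

SECRET = b"example-key"  # illustrative only; real deployments rotate keys

def sign_url(path, ttl=300, now=None):
    """Append an expiry and HMAC token to a stream URL; requests after
    expiry, or with a tampered path, fail validation."""
    expires = int(now if now is not None else time.time()) + ttl
    msg = f"{path}:{expires}".encode()
    token = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]
    return f"{path}?e={expires}&t={token}"

def validate(url, now=None):
    """Check that the URL's token matches its path and has not expired."""
    path, query = url.split("?", 1)
    params = dict(p.split("=") for p in query.split("&"))
    expires = int(params["e"])
    if (now if now is not None else time.time()) > expires:
        return False
    msg = f"{path}:{expires}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(good, params["t"])
```

Without the signing key, a third party cannot mint fresh links, which is exactly why expired URLs frustrate infringement-notice validation.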

Ideally, content creators protect the entire platform with an end-to-end solution, from watermarking to fingerprinting, crawling, capturing and taking down the source of the infringement, said Schouten.

"Regardless of whether they work with Nagra for the entire solution or only parts of it, they still need to act. If piracy is left unattended it will grow at an exponential rate," he said.

Pirates are driven by opportunity: Deployments scale with larger audiences, especially as a result of OTT video delivery services.

"Given a large enough target population and motivated by limited availability of specific content, pirates will invest fairly heavily to find flaws in any deployed system," warned Arris's Ellis. "Content protection organizations must provide solutions that have been well designed from a protocol and cryptographic algorithm point of view [and] which have been analyzed carefully in regard to robustness."

It's critical to protect content across three key locations, added Petr Peterka, CTO at content security vendor Verimatrix, in an interview. The first is where the service is hosted. "They need to be sure it is protected while it is still in their hands," Peterka said. Hacks involving major distributors like HBO and Disney are examples of pirates going directly to the source.

The next step: Put proper security measures in place to prevent illicit capturing of content as it is delivered. Conditional access systems (CAS) and digital rights management (DRM) help here. Finally, service providers must ensure end-devices, where content is consumed, operate in a trusted, secure manner -- and that robust security measures are maintained over time.

"An important question to ask would be if they take advantage of security tools like trusted execution environments [TEEs] and downloadable DRM that are available to ensure that new security advancements can be applied later down the road," said Peterka.

During any of these stages, providers can use anti-piracy monitoring and detection services for traceability. They can leverage forensic watermarking to trace instances of illegal redistribution back to an individual subscriber or compromised device so service providers can shut it down and even take legal action.

The investment required of OTT providers, broadcasters and pay-TV operators differs very little, since their strategies are largely aligned.

"Both content owners as well as distributors need to protect their interests using content security technologies as well as forensic watermarking during every stage of the production and distribution process," said Schouten.

There are some distinctions. OTT providers must implement two aspects of protection. The first is obtaining content licenses from owners such as studios and networks, without which they cannot achieve the ROI they need for their product offering. The required level of protection depends on the content owner and the content's value, which increases with early release windows and higher video quality (UHD).

Service providers also must protect revenue by ensuring only authorized subscribers access their systems, he said. Any compromise of authentication or authorization processes undermines revenue generation.

Broadcasters and pay-TV operators generally deliver content in a more controlled environment -- over a managed network to fixed devices, such as operator-controlled set-top boxes. Better hardware and endpoint security simplifies content protection on these boxes, since the platform's cost structure is more flexible.

OTTs, which deliver content over the Internet, usually target a vast number of devices including open platforms. Neither the device nor the network is under the system operator's control. Operators must gain some control via other technology capabilities, such as white-box cryptography and tamper detection.

As subscribers adopted DVRs, 4K TVs, smartphones, gaming devices and tablets for their viewing pleasure, content pirates evolved from yesterday's clunky videotapes to more high-tech, IP-based solutions.

System operators need a proven, experienced and well-funded content protection partner that provides maximum security for their deployment and delivers updates and enhancements to stay ahead of pirates' new and emerging threats, Ellis said.

Operators can apply a common framework across DRMs to address the fragmentation that arises when devices ship with different pre-integrated security solutions, Verimatrix's Peterka said.

"Service providers can solve multi-DRM challenges by providing harmonized rights management across networks and devices for OTT video delivery," he said. "Selecting a framework that allows for the inclusion of any third-party DRM scheme for a harmonized rights platform can ultimately provide complete end-to-end management of revenue security."


Content protection must safeguard content from the point of ingestion to the point of playback on the user's device. Content must be encrypted at all times, and the associated decryption key must be securely delivered to the end device. The encryption must use an algorithm that content owners have approved. In addition, the solution must be capable of disabling output capabilities on the end device to prevent unauthorized redistribution via copy control.

"The overall content protection solution is a layered approach which spans many components within the system," said Ellis. "The content protection provider must align and partner with chip providers, device manufacturers and system operators to ensure the proper security capabilities are in place, where necessary."

Protection doesn't end there. Operators are strongly advised to close the loop with constant monitoring and enforcement: scan, capture, fingerprint, identify, extract the watermark, enforce, and repeat as necessary.


"If you don't know your video content is being stolen, how can you possibly stop it?" said Peterka. "Monitoring of real-time transactions can spot unusual patterns and anomalies that would otherwise take a team of experts crawling through the web to find. It is crucial to monitor deployments from the inside out, tracing down sources of illicit redistribution and addressing them in as close to real time as possible."
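The "inside out" monitoring Peterka describes often starts with rules as simple as flagging implausible session concurrency. A toy sketch; the threshold and data shapes are assumptions for illustration, not any vendor's API:

```python
from collections import defaultdict

# Hypothetical rule: one account should not hold more than
# MAX_CONCURRENT distinct streams at once; more suggests credential
# sharing or a pirate restreamer pulling many channels at once.
MAX_CONCURRENT = 4

def flag_anomalies(sessions):
    """sessions: iterable of (subscriber_id, channel) pairs for
    streams active right now. Returns the set of suspect IDs."""
    per_sub = defaultdict(set)
    for sub, channel in sessions:
        per_sub[sub].add(channel)
    return {sub for sub, chans in per_sub.items()
            if len(chans) > MAX_CONCURRENT}
```

Real deployments layer geography, device fingerprints and viewing-time statistics on top of counts like this, but the principle is the same: the operator's own transaction logs reveal redistribution before a web crawler ever could.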

Watermarking -- an adjunct to content protection -- can identify theft after the fact, acting more as a deterrent than a prevention technique.

"It's in everyone's interest that they each embed their own forensic watermarking or other ID in the content itself, so that when the parties are discussing who allowed which content to leak, there is incontrovertible evidence to show where the leak came from," said Schouten.

For traceability, Verimatrix said, mass delivery of uniquely marked content works well to combat revenue leakage in on-demand video service models, including OTT delivery to devices such as smartphones, games consoles and smart TVs.

"Server-side embedding processes can uniquely mark compressed -- and even encrypted -- content files during delivery, and it is an important alternative to client-side embedding since it does not require any integration with, or modification to, the client devices," said Peterka. "It is essential that the resulting stream can be decrypted, decompressed and rendered by regular client devices, either in hardware or software. That way, all downstream copies will contain the unique payload, which can be extracted by machine aided comparison with the original content, even after severe distortion or degradation."
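As an illustration only, the embed-then-compare workflow Peterka outlines can be sketched on raw bytes. Real forensic watermarking modulates perceptually invisible picture features and survives transcoding and degradation, which this toy deliberately does not attempt:

```python
def embed(frame: bytes, payload_bits: str) -> bytes:
    """Toy forensic mark: nudge one byte per payload bit (+1 for a
    '1', untouched for a '0') so each subscriber's copy is unique.
    Real systems alter robust perceptual features, not raw bytes."""
    out = bytearray(frame)
    for i, bit in enumerate(payload_bits):
        if bit == "1":
            out[i] = min(out[i] + 1, 255)
    return bytes(out)

def extract(original: bytes, leaked: bytes, n_bits: int) -> str:
    """Recover the subscriber payload by machine-aided comparison of
    a leaked copy against the unmarked original."""
    return "".join("1" if leaked[i] > original[i] else "0"
                   for i in range(n_bits))
```

The key property Peterka names is visible even in the toy: extraction needs the original for comparison, but playback does not, so every downstream copy of the leaked stream still carries the payload.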

No technical solutions are 100% watertight. Security is never perfect; it is always a process, said Schouten. It is imperative that service providers upgrade, monitor and track vulnerabilities in a timely manner, he said.

"It is unlikely that piracy will ever stop so it needs to be made harder to achieve. It is not easy or cheap to pirate video content, so one way to reduce the prevalence would be to make it cheaper for subscribers to access legally than for pirates to steal," Schouten said. "Making video content easy and affordable to obtain legally can certainly reduce the negative aspects associated with piracy."

Technical solutions are limited in their ability to control piracy, said Ellis.

"Content protection systems can make it very difficult to extract information from the system, but there is no ability to absolutely prevent it. Security is always rated by how long it would take to 'brute force' it with technology available today; content protection solution's job is to ensure the system is protected in such a way that the pirate is left with only that brute force approach open to them," he said.
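Ellis's brute-force yardstick is easy to quantify. For a 128-bit key and a generous, hypothetical rig testing 10^12 keys per second:

```python
# Expected brute-force time for a 128-bit key: on average an
# attacker searches half the keyspace before finding the key.
keyspace = 2 ** 128
guesses_per_second = 10 ** 12          # generous, hypothetical rig
seconds_per_year = 60 * 60 * 24 * 365

years = (keyspace / 2) / guesses_per_second / seconds_per_year
print(f"{years:.2e} years")
```

The answer is on the order of 10^18 years -- which is why practical attacks target key handling and device implementation flaws rather than the cipher itself, and why robustness analysis matters as much as algorithm choice.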

With today's available technologies, monitoring and frequent updates, the entire content chain can make it tougher for pirates to illicitly avail themselves of providers' revenue opportunity. This multi-disciplinary approach has as much to do with awareness and partnership as technology.


Monday, 2 October 2017

Wildseed busy playing pranks for digital thriller

VMI

How to create an authentic looking vlog (video blog) to hook the audience into an episodic thriller was the task facing Wildseed Studios in making PrankMe for digital subscription service Fullscreen.
http://vmi.tv/case-studies/article/131
Created by Jesse Cleverly and Paul Neafcy and produced by Bristol-based Wildseed Studios, the eight x 10-minute series follows rising social media star Jasper Perkins (Corey Fogelmanis, fresh from his turn as Farkle Minkus in Disney Channel’s Girl Meets World). Perkins is famed for the increasingly mean-spirited pranks on his channel, but when a high-profile stunt doesn’t go according to plan, and encouraged by his growing viewership and crowdsourced victims, he blurs the line between audience and accomplice. Hazel Hayes directs, Neafcy wrote the series and Cleverly produces.
“The essential brief was to tell the story as if it were a vlog made by Jasper,” explains Mark Stopher, Production Manager. “Director of Photography Edgar Dubrovskiy had to strike a tricky balance between making it look like an authentic vlog while maintaining the production value.”
Dubrovskiy tested a number of camera options and set ups at VMI before settling on the Canon 5D MkIV DSLR.
While Dubrovskiy planned each shot meticulously, it was Fogelmanis who handled much of the filming to retain the authentic feel. Feeds were relayed wirelessly to the DP on set. Dubrovskiy also used a second 5D to provide coverage in the edit.
Additionally, he selected a set of Canon L-USM primes and Canon L-USM zooms, including 8-15mm, 11-24mm, 16-35mm, 24-70mm and 70-200mm.
Following the format of hidden-camera prank shows, Wildseed also deployed three GoPro Hero Blacks around different locations including a shopping mall.
Rushes were captured on internal media cards supplied by VMEDIA, which a dedicated DIT managed from set to post. Acquisition was at 4K with final delivery output to HD, giving the team room to punch into the pictures as necessary.
All accessories were supplied too, including Anton Bauer Digital G90 battery kits; Genus Fader Triple Variable ND filters and circular pola filters; an HD 7-inch TV Logic monitor; data cables; a Sachtler tripod; a Redrock Eyespy shoulder kit for the DSLRs; and a variety of lighting kit such as Astra bi-colour LED light panels.
“VMI has helped us out with lots of different productions over the years,” says Stopher. “Delivery is very easy, nothing is too much trouble and they are always able to help with extra kit. We wanted to mimic the look of a news broadcast for part of the story so we hired a Sony PMW-EX3 and PMW-100 for a couple of days.
“We also shot our title sequence at VMI. We wanted some macro shots of pranking paraphernalia like a chainsaw, clown make-up, fake blood, a glitter explosion and so on in one of their kit prep rooms. It helps that VMI has an office in Bristol because it’s very handy for us to pop in and do tests or pick something up. It’s the little things like that that make production easier to achieve.”

Off The Fence use Canon ME20 for night filming of elephants in Botswana

VMI

The extreme low-light Canon ME20F-SH (sensitivity equivalent to over ISO 4 million) has been used by Off the Fence Productions on assignment for Vulcan Productions to support the US TV premiere of Naledi: One Little Elephant.
http://vmi.tv/case-studies/article/133
Off The Fence was required to film some follow-up update stories as well as social media segments leading up to the new season of Nature on PBS/WNET this fall. 
The brief was to film stories about the elephants and other wildlife at Abu Camp in Botswana, as well as capture the daily lives of the elephant handlers who work there.
“It's not every day a cameraman has up-close access to a family of elephants 24 hours a day,” explains director of photography Lee Jackson. “In our experience filming Naledi, some of the most dramatic scenes - including the live birth of a baby elephant - were filmed at night on infrared cameras. This time, we wanted something different, something original. We wanted to have our night footage in colour with more depth in our images. After doing some research we found that Canon had launched the ME20 camera and VMI had one of the first rentals available.”
The main camera used on location in the Okavango Delta was the Sony PXW-FS7, with the Canon ME20 deployed for low-light and night-time wildlife and a Sony A7S for gyro shots.
“Before heading out, VMI showed one of our directors, Geoff Luck, the camera when it was collected in Bristol,” explains Jackson. “They were extremely clued up and informative on the limits of the camera. When it was collected, VMI had it set up in one of their camera booths and quickly showed Geoff what db we should work with and which ones to avoid. We found that certain db worked better than others, while 52db was our limit.”
The crew filmed for 66 days and were with the elephants around the clock. “We had help from some of the elephant handlers when we slept or when I was alone, but mostly myself and my AC worked in shifts so that we never had a camera down in case of unexpected behaviour,” says Jackson.
He recorded HD at 25p to the Odyssey 7Q+ with most of the night filming performed on a tripod.
“We were pleasantly surprised at the low light capabilities and low noise at higher ISOs. Our max db was 48, though sometimes we did push to 52db,” he says. “Our post guys did some noise reduction on some of our early footage and reported back with promising results. This allowed us to push some of our shots, especially when there was no moonlight.”
With very little ambient light from the nearby elephant handlers’ accommodation, the crew also used two 1x1 LED panels at a distance, dimmed right down, when there was no moon.
Says Jackson, “One thing that stands out for me is getting to watch the elephants’ behaviour throughout the night. They don’t sleep that much even in captivity; they feed throughout the night and are generally restless. The ME20 allowed us to observe this almost nocturnal behaviour, which you wouldn’t normally see as our eyesight in these low-light conditions wouldn’t allow any vision.”