Friday 29 June 2018

As Brexit Looms, UK Creative and Media Tech Industries at Risk

Streaming Media
The UK's position as the leading international hub for global media groups is under threat as the prospect of a no-deal Brexit grows.

Brexit is already jeopardising the potential of broadcasting in the UK, and the heightened prospect of not securing a deal with the European Union paints an even bleaker future for the sector, according to speakers at a summit on Brexit strategies in London earlier this week.
Two years after the referendum in the United Kingdom on membership of the European Union, and just nine months until the deadline for leaving, the picture appears to be no clearer. Media and entertainment companies cannot afford to wait and see what the impact will be.
Vince Cable, leader of the Liberal Democrats and former secretary of state for Business, Innovation and Skills, rated the possibility of the UK crashing out without a deal at 20 per cent. He added that he thought there was a 20 per cent chance of Brexit being stopped altogether, and a 60 per cent chance of a deal.
"Creative industries have a huge impact on the economy, and there is a major impact on broadcasting," he said. "We need to ensure the creative industries are recognised [by the government's negotiating team]."
The UK government's most recent assessment of the creative industries' value is that they contributed £92 billion in 2016, representing more than 5% of the UK economy—bigger than the steel or aviation industries.
Yet the creative industries receive little coverage "partly because there are no hard hats or hi-vis jackets for politicians to wear, and partly because the media is poor at covering itself," Cable said.
The Commercial Broadcasters Association (COBA) warned that losing access to EU markets through Brexit could cost the UK's television market £1 billion ($1.4 billion) per year in investment from international broadcasters. COBA represents multichannel broadcasters, including A+E, Discovery, Fox, NBCU, QVC, Scripps, Sky, Sony Pictures TV, Turner, and Disney.
Adam Minns, COBA's executive director, claimed 650 channels are considering moving: "Companies will need six to nine months to restructure, move staff and relocate … there is a real clock ticking."
Leaving the EU without a trade deal would "jeopardise" the territory's position as "Europe's leading international broadcasting hub," said Minns. "International broadcasters based here would, reluctantly, be forced to restructure their European operations."
This is primarily due to question marks over the Country of Origin (COO) principle of the European Audiovisual Media Services Directive (AVMSD), which allows media channels across the EU to be regulated in just one member state. For example, a third of the 1,200 television services regulated by Ofcom are never seen by UK viewers but are broadcast across the EU.
"If a UK broadcasting licence is no longer recognised by the EU, international channels will have no choice," said Minns.
Discovery has already voted with its feet. Last month it announced it will shut its European broadcasting base in West London and shift playout to the continent. Options include Amsterdam, where businesses including Netflix have their European headquarters, Warsaw, and Paris where Discovery-owned Eurosport has its hub.  
Conversely, there are 35 channels, including Netflix, that transmit to UK viewers but are licensed in other EU countries. UK content currently counts as EU content, but would face sales limitations if classified outside EU content quotas. Agreement on the country of origin principle is a priority in the negotiations.
"The EU will not cut the UK a deal on the single market—including the COO principle," said Paul Hardy, the DLA Piper Brexit Director who has previously been the advisor on European law to the UK Parliament. "It is giving us too much of the cake when we have already decided to leave the party. Businesses are quite right to be relocating."
There is also a concern that post-Brexit, UK citizens could lose their right to the portability of online content when they travel abroad, a right currently protected under EU law. The Digital Single Market rules guarantee that consumers can enjoy paid subscription content everywhere in the European Union.
The creative industries grew by 45% between 2010 and 2016, faster than any other sector, according to the government.
The sector—which includes film, music, TV, fashion and architecture—relies on a highly mobile and international talent pool, often hired at short notice.
Analysis by the Creative Industries Federation (CIF)—an organisation for creative industries, cultural education and arts in the UK—suggests 75% of all the UK's creative companies employ people from across the EU.
According to Cable, this is one of the many reasons why an agreement must be made on the status of EU workers, and flexibility enshrined in the new system to allow broadcasters to continue to recruit EU citizens as freelancers when required.
CIF's chief executive John Kampfner wants the government to ensure reciprocal rights for UK workers abroad, to scrap non-EU minimum salary requirements, and to increase training in UK schools for creative skills, which it claims have been squeezed off the curriculum.
He suggested Brexit would stop dead the UK's leadership in creativity, innovation and tech if the movement of skilled workers is restricted.
Other industry groups including the UK Screen Alliance have called for flexible arrangements after Brexit that allow visual effects and animation companies (arguably the hardest hit, since they employ significant global talent) to continue to access EEA talent without punitive visa charges or restrictive quotas that could damage UK firms' competitiveness.
A survey of CIF members prior to the EU Referendum in June 2016 suggested that 96% of them intended to vote to remain. 
The focus has now changed to campaigning to prevent the creative industries from being left in a legislatively worse position than previously. It could be that the creative industries are just out of step with the popular vote (when 52% of voters opted to leave the EU), arguably showing a lack of diversity within a sector dominated by a cosmopolitan elite.
A more recent CIF survey suggested Brexit could cause "catastrophic" damage to the UK's booming culture industry.
The Confederation of British Industry (CBI) is also arguing the case for close alignment between the UK and the EU's Digital Single Market post-Brexit.
A report released by the CBI in April highlighted that the UK is number one in the world when it comes to e-commerce, and stated that four out of five of the largest global investments in artificial intelligence businesses were for UK firms. It reported that the tech economy is creating jobs twice as fast as the rest of the economy and spurring jobs and investment across the UK.
The CBI argues that in order to sustain frictionless data flows, access to content and support for the UK's digital economy, "it is highly likely that UK businesses will be required to adhere to new Digital Single Market regulations post-Brexit."
Without a deal, all of this will be thrown up in the air.
The Media Summits Brexit Briefing was produced by informitv and chaired by the DTG, the association for British digital television broadcasters.

Artificial Intelligence is a long way from primetime sports production

SVG Europe

Much is being made of the potential of artificial intelligence and machine learning to dramatically scale up live sports production. However, caution is being urged by the very vendors developing AI workflows.

“Sports rights holders and broadcasters hear about AI everywhere but they don’t have a clue how much it will cost them or how much they will use it or what benefit they will get from it,” says Jérôme Wauthoz, Tedial VP Products.
“AI can do a very good job of speech-to-text, making thousands of hours of footage searchable – something not possible a year ago. It’s not caption quality but it is low cost, about $1-2 per hour of footage,” says Sam Bogoch, CEO at video search specialist Axle ai.
“The question is really not what works but where it makes most sense for AI to be applied first and most cost effectively. Many media organisations are simply not geared up for it. They need to get their metadata in order first.”
“AI is a marketable buzzword,” cautions VSN product manager Toni Vilalta. “It’s critical with any new technology not to overhype it and to be brutally realistic about what we’re talking about.”
Tedial, which has married AI engines to its MAM software to augment and speed up live production, takes issue with the claim that AI is cheap.
“AI only makes sense when it costs less than a human – otherwise customers are better advised to hire a freelancer to do the job,” says Wauthoz.
While the All England Lawn Tennis Club and Fox Sports have used an AI system from IBM for production purposes, Wauthoz still feels that these and every other production AI remain proofs of concept.
“AI is still at the marketing stage, being used at major sports events to create a buzz. As it stands, AI is not a sustainable product.”
Digging into this further, Wauthoz suggests that a freelance logger might cost £125 as a daily rate.
“That means your AI system must cost less than that otherwise there’s simply no way it makes sense from a budget point of view,” he says.
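To make that break-even arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The £125 day rate comes from the article; the per-hour AI costs are purely illustrative placeholders, not vendor pricing.

    # Back-of-envelope comparison: AI logging vs a freelance logger.
    # The GBP 125 day rate is quoted above; the AI per-hour figures
    # (transcription, vision analysis, hosting) are hypothetical.
    FREELANCER_DAY_RATE = 125.0   # GBP, from Wauthoz's example
    HOURS_PER_DAY = 8             # assume a logger covers ~8 hours of footage

    def ai_logging_cost_per_day(transcribe_hr, vision_hr, hosting_hr,
                                hours=HOURS_PER_DAY):
        """Daily cost of an AI logging pipeline, excluding model training."""
        return (transcribe_hr + vision_hr + hosting_hr) * hours

    # Speech-to-text at ~GBP 1.50/hr (the $1-2 figure quoted earlier),
    # plus hypothetical vision and hosting costs.
    ai_cost = ai_logging_cost_per_day(transcribe_hr=1.5, vision_hr=8.0, hosting_hr=4.0)
    print(f"AI: GBP {ai_cost:.2f}/day vs freelancer: GBP {FREELANCER_DAY_RATE:.2f}/day")
    print("AI wins on cost" if ai_cost < FREELANCER_DAY_RATE else "Hire the freelancer")

On these made-up numbers the AI squeaks in at £108 a day against £125 – exactly the kind of marginal saving, before training costs, that Wauthoz warns about.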
While top-tier sports like the Premier League are awash with cash and may be prime candidates to pioneer AI adoption, he feels the technology is out of the budget range of less popular sports.
“Take volleyball for example. It will be challenging to monetise even AI logging since the margins on this will be low. You need to train the AI on sufficient and relevant data, which could take anywhere from three weeks to three months, and you would need to do it each time for each sport, after which it should learn [from the data] itself.
“Even then you have to budget in the processing and hosting costs which means, in the end, it can be expensive.”
This runs contrary to the prevailing view of AI’s benefit to sports. Automated production systems like Pixellot’s have found a niche in lower-tier sports and club training scenarios.
“At the moment, the technology is not ready for primetime sports programming,” says Bogoch. “But if you took a second-tier sports event and used AI to kick out highlights to the web, then there may be benefit on the basis that anything is better than nothing.”
AI is far from perfect and faces a number of general challenges. “The first of these is that organisations need huge amounts of data to train an AI,” outlines Kevin Savina, Director of Product Strategy at Dalet. “The data has to be well documented, which can be expensive and hard to do.
“Secondly, there are still holes in the technology. Sometimes we don’t know why it works or why it does not. Some models are biased, others are not robust. Organisations need to understand that sometimes you should not trust the math – it can be wrong.”
Tedial’s Smartlive uses speech-to-text to automatically transcribe the commentary on ingest. It also applies object and facial recognition to log players, jersey numbers, whether a shot is wide-angle or close-up, where slow motion is used, and key actions (penalty, foul and so on), then combines the results to assemble highlights packages of any desired length.
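Tedial has not published Smartlive’s internals, but the workflow as described – a speech transcript merged with visual-event logs and packed into a highlights edit of a chosen duration – can be sketched roughly as follows. All names here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Event:
        start: float   # seconds from programme start
        end: float
        label: str     # e.g. "penalty", "close-up", "slow-motion"
        score: float   # detector confidence / editorial weight

    def build_highlights(transcript_events, vision_events, target_duration=120.0):
        """Greedily pick the highest-scoring events until the target length
        is filled, then restore timeline order to form a draft EDL."""
        merged = sorted(transcript_events + vision_events,
                        key=lambda e: e.score, reverse=True)
        edl, total = [], 0.0
        for ev in merged:
            length = ev.end - ev.start
            if total + length <= target_duration:
                edl.append(ev)
                total += length
        return sorted(edl, key=lambda e: e.start)

    draft = build_highlights(
        transcript_events=[Event(310.0, 322.0, "commentary spike: penalty", 0.95)],
        vision_events=[Event(308.0, 318.0, "penalty action detected", 0.90),
                       Event(400.0, 405.0, "close-up", 0.40)],
    )
    for clip in draft:
        print(f"{clip.start:7.1f}-{clip.end:7.1f}  {clip.label}")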
It’s still a proof of concept, though Tedial says it has several sports rights holders and producers interested in its commercial launch in November.
“The big advantage we have is that the Tedial MAM and BPM engine mean users can create a lot of different workflows, including delivery to social and digital platforms, which is a main focus of sports rights holders,” says Wauthoz. “The automatic highlights engine can package a storyline in a few clicks, or a draft EDL can be handed over to the production team to add voiceover, graphics or any other beauty shots.
“Plus, we can operate this on premises, in the cloud or in hybrid fashion.”
He says Smartlive can be applied to non-sports live programming like The Voice. “You can definitely do it. All I need is the data feed on which to train the AI.”
Crucially, he says, the AI is an option: “You are not obliged to use it.”
“A machine will never be as creative as a user,” he says. “To really talk about highlights you need to tell a story. AI is a tool to help the production produce more content, faster, but the final touch – the creative and artistic part of the editorial – will always be done by a human.”
The price of hosting on AWS and Azure will inevitably fall. “When the cost of using AWS is really low, then the industry will really find a way to monetise AI. But so far as 2018 goes, I am not convinced there is a great benefit, simply because the cost of AI is too high.”
The industry is still at the beginning of its journey into AI and likely won’t see more complete glass-to-glass automated solutions in primetime for several years yet.

Tuesday 26 June 2018

Smell could be the next big thing in VR!


RedShark News
Ever wondered what virtual reality smells like? Officially it smells of fish. It also reeks of ramen noodles with overtones of gunpowder and bouquets of meadow.
Along with tactile force feedback (haptics) it’s the latest sensation to be married to the audio-visual immersion of VR.
The innovation comes courtesy of scent-making specialist Vaqso, a San Francisco-based start-up with Japanese heritage. It has announced a partnership with Chinese VR headset maker Pimax Technologies. The pair plan on launching the Pimax headset later this year armed with Vaqso’s scent-emitting cartridges.
Before you pooh-pooh the enterprise, the proof will be in the sniffing. Smell-O-Vision was introduced in the cinema for the 1960 film Scent of Mystery – albeit only for that movie – and reappeared as a staple of theme park rides, along with rocking chairs and atmospheric effects like wind and smoke timed to erupt with the content.
If the goal of VR is to truly immerse us virtually, then surely scent – usually a background sense but providing vital clues to the world around us – can help fool our brain into believing in the authenticity of the experience.
Vaqso’s device attaches with an adjustable velcro strap, making it compatible with any HMD, not just that of Pimax. It takes up to five different scented oil cartridges, which are easy to insert and remove.
Cartridges last around one month, assuming average sniffing of two hours a day (roughly 60 hours of use), with the fragrance apparently limited to an area the size of a tennis ball rather than filling the whole room.
On the software side, Pimax will integrate Vaqso VR into the Pimax software development kit, giving users control over a smell’s strength, or the ability to switch a scent off entirely.
The Pimax headset itself – which has raised over $4 million in crowdfunding – offers a 200-degree field of view, almost the full range of human peripheral vision. It is also targeting an 8K resolution although, on closer inspection, this turns out to be 4K (3840 x 2160) per eye with images rendered at 80Hz (the original aim was 120Hz), so the viewer perceives a higher resolution even though in effect one ‘eye’ image will be shuttered at all times.
The release version, planned for launch in 2018, will be able to emit five to ten popular scents like ‘ice cream’ and ‘ocean’. The current generation shows five modules including ‘fish’, and also the intriguing ‘lady’, which calls to mind Patrick Süskind’s bestselling novel Perfume, about a murderer who kills young women in order to possess their rare odour.
Pretty much any smell you can think of, though, could be emulated with custom requests on demand, Vaqso says.
Perhaps you could order more nebulous olfactory sensations. What would a VR experience of Apocalypse Now smell like, I wonder? Victory, of course. Just mix burning flesh with napalm.


Friday 22 June 2018

NVIDIA has just made super slow motion possible with any camera


RedShark News

A team of researchers claims to have cracked the secret of making videos shot at a lower frame rate look more fluid and less blurry when played back at a higher rate.
The difference between consecutive frames of video is usually small, the result of either the camera or an object in shot moving very slightly from frame to frame. Compression techniques exploit this fact by using mathematical algorithms to predict (interpolate) motion in a frame – given knowledge of previous or future frames – and so reduce the amount of data required.
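As a toy illustration of that prediction step (not any particular codec’s implementation), block-matching motion estimation finds, for each block of the current frame, the best-matching block in the previous frame, so that only a motion vector and a small residual need be stored:

    import numpy as np

    def motion_vector(prev, block, y, x, search=4):
        """Exhaustive block matching: search +/-`search` pixels around
        (y, x) in the previous frame for the block with the lowest sum
        of absolute differences (SAD). Returns (dy, dx) and the SAD."""
        bh, bw = block.shape
        best, best_sad = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy <= prev.shape[0] - bh and 0 <= xx <= prev.shape[1] - bw:
                    cand = prev[yy:yy + bh, xx:xx + bw].astype(int)
                    sad = np.abs(cand - block.astype(int)).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
        return best, best_sad

    # A bright square shifts two pixels right between frames; the search
    # recovers that motion exactly, so the residual (SAD) is zero.
    prev = np.zeros((16, 16), dtype=np.uint8); prev[4:8, 4:8] = 200
    curr = np.zeros((16, 16), dtype=np.uint8); curr[4:8, 6:10] = 200
    print(motion_vector(prev, curr[4:8, 6:10], 4, 6))   # ((0, -2), 0)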
However, unless you record a high enough number of frames, slowing down footage for a slo-mo replay can appear nigh on unwatchable. High-end pro cameras like the Phantom VEO 4K can capture full resolution 4K images at 1000 frames per second (fps) and do an unbelievably good job, but at $60,000 a pop, these are used only for top-end natural history, sports or corporate promo applications.
While it is possible to take 240-fps videos with a cellphone, many of the moments we would like to slow down are unpredictable – the first time a baby walks, a difficult skateboard trick, a dog catching a ball – and, as a result, are recorded at standard frame rates.
Likewise, recording everything at high frame rates is impractical for mobile devices – it requires a lot of memory and is power-intensive.
But could high-quality slow-motion video be generated from existing standard videos? A team of researchers claims to have cracked the code.
“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers announced. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”
For instance, generating 240-fps videos from standard sequences (30-fps) requires interpolating seven intermediate frames for every two consecutive frames. In order to generate high-quality results, the math not only has to correctly interpret the motion between two input images but also understand occlusions.
“Otherwise, it may result in severe artefacts, especially around motion boundaries, in the interpolated frames,” explain the researchers.
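The frame arithmetic itself is simple: 240/30 = 8, hence the seven new frames per input pair. What is hard is the motion. The naive baseline sketched below (an illustration, not the researchers’ method) blends frames linearly, which is precisely the approach that ghosts at motion boundaries and that a flow-aware network is designed to avoid:

    import numpy as np

    def linear_interpolate(frame_a, frame_b, n_intermediate=7):
        """Insert 7 blended frames between two 30fps frames to reach
        240fps. Blending averages pixels in place rather than moving
        them along motion paths, so moving edges ghost."""
        frames = []
        for i in range(1, n_intermediate + 1):
            t = i / (n_intermediate + 1)        # t = 1/8, 2/8, ... 7/8
            frames.append((1 - t) * frame_a + t * frame_b)
        return frames

    a = np.zeros((4, 4), dtype=np.float32)
    b = np.full((4, 4), 80.0, dtype=np.float32)
    mids = linear_interpolate(a, b)
    print(len(mids), "intermediate frames; middle frame pixel:", mids[3][0, 0])
    # -> 7 intermediate frames; middle frame pixel: 40.0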
They used over 11,000 240fps YouTube clips, containing 300,000 individual video frames, to train the artificially intelligent system, and their findings – published last November – were officially backed by GPU-maker Nvidia this month.
The system makes use of Nvidia Tesla V100 GPUs and its cuDNN-accelerated PyTorch deep-learning framework.
“Now you can take everyday videos of life’s most precious moments and slow them down to look like your favourite cinematic slow-motion scenes, adding suspense, emphasis, and anticipation,” suggests Nvidia.


Thursday 21 June 2018

Piracy didn’t fade, it just got cleverer


IBC 
Galvanised into action, the media industry can claim some success in reducing incidents of illegal streaming. But the threat remains high as pirates turn to more sophisticated methods of attack.
This time last year the industry was in a spin. In close succession, hackers had breached Netflix, Disney and HBO, threatening to release script details or entire shows to the web unless ransoms were paid. Even then, Game of Thrones season seven was pirated more than a billion times, according to one estimate.
Euphemistically known as content redistribution, piracy was rife in sports broadcasting too. The industry’s worst fears were confirmed shortly before IBC when ‘The Money Fight’ between boxers Floyd Mayweather and Conor McGregor haemorrhaged cash for operator Showtime as three million people watched illegally.
In recent months, though, no such high-profile incident has occurred – or at least been made public. The industry would appear to have stemmed the tide.
Massive investment pays dividends
This is at least in part due to the firepower being thrown at the problem. Ovum estimates that spend on TV and video anti-piracy services will touch US$1bn worldwide by the end of the year - a rise of 75% on 2017. Increasing adoption of these anti-piracy services, bundled with premium content protection technology stacks such as DRM, fingerprinting, watermarking, paywalls and tokenised authentication, will see losses fall from 16% of revenues in 2017 to 13% in 2018, the analyst predicts.
Last June, Netflix, HBO, Disney, Amazon and Sky were among more than 30 studios and international broadcasters banding together to form the anti-piracy Alliance for Creativity and Entertainment (ACE). It shut down Florida-based SET Broadcast pending a lawsuit alleging the streaming subscription service was pirating content. ACE has also initiated legal action against Kodi set-top box makers in Australia, the UK and the US (including TickBox TV and Dragon Box) for providing illicit access to copyrighted content.
In the UK, the Digital Production Partnership (DPP) unveiled its Committed to Security Programme at IBC2017 to help companies self-assess against key industry security criteria. It has since awarded the appropriate ‘committed to security’ mark to two dozen companies including Arqiva, Base Media Cloud, Dropbox, Imagen, Piksel and Signiant.
“We have seen the impact of new countermeasures and legal actions implemented in several advanced markets over the past 18 months,” reports Simon Trudelle, senior director of product marketing at content security experts, Nagra. “For instance, ISPs and cloud platform providers in Western Europe are now better informed and are more cooperative when notified of an official takedown notice.”
Trudelle says that, as a result, a large chunk of pirate infrastructure has moved to jurisdictions outside of Western Europe, where intellectual property rights are more challenging to enforce. Because this pirate infrastructure is further away from major cloud and CDN hubs in Western Europe, it reduces the quality of the pirate services.
Also, the EU’s data privacy regulation, GDPR, has raised awareness in the fight against illicit streaming services.
“Broad communication on data and privacy issues help consumers realise that their illegal actions could be traced, or that their personal data, including ID and payment information, could be stolen and misused by organised crime,” says Trudelle.
Previously, content theft was a crime that largely went unenforced - authorities wouldn’t know what to do or how to stop it. Now, according to content security vendor Verimatrix’s CTO Petr Peterka, authorities are better equipped to understand what piracy looks like, how to find it and how to stop it - all of which makes it more difficult for pirates to hide or be anonymous.
“The most effective approach to countering threats of piracy starts with education, then moves into rights expertise, with rights enforcement being the final step,” says Peterka.
Clear and present danger
But far from receding, the security threat remains as high as ever. Even at 13%, the revenue expected to be lost this year by global online TV and video services (excluding film entertainment) amounts to US$37.4bn.
A new major case of piracy has erupted during the FIFA World Cup, proving it’s still a major issue for the media industry. FIFA is taking action against Saudi TV channel BeoutQ for alleged illegal broadcasts of the opening games of the World Cup, infringing the exclusive regional rights to the competition held by Qatar’s beIN Media Group.
The most serious threat comes from the Asia-Pacific region, which will account for roughly 40% of all revenue leakage, according to Ovum.
“[The focus of] attacks have moved – slightly - from Tier-I premium content towards Tier-II and Tier-III formats (regional and local content),” says Ovum principal consultant for Media & Broadcast Technology, Kedar Mohite. “Attackers are specifically targeting local markets… focusing on Hollywood titles distributed through local touch points in Asia-Pacific.”
Furthermore, the fragmentation of access points to content from web, devices, platforms and workgroups (a pre-launch IP theft scenario) means premium content security has to continuously evolve.
“Cybercrime is now the main source of funding for organised criminal groups,” says Ovum Research Director Maxine Holt. “These groups are extremely well funded and therefore have the time and the inclination to launch extended attacks that can lie undetected for many, many months.”
Content protection agency MUSO charted over 300 billion visits to piracy websites across music, TV and film, publishing, and software in 2017, more than a third of which were to pirate sites hosting television content (106.9 billion). It records that the nation with the worst offenders is the US, where 27.9 billion visits were made to pirate sites in 2017 (followed by Russia with 20.6bn and India with 17bn).
“There is a belief that the rise in popularity of on-demand services – such as Netflix and Spotify – have solved piracy, but that theory simply doesn’t stack up. Our data suggests that piracy is more popular than ever,” says MUSO co-founder and CEO Andy Chatterley. “The data shows us that 53% of all piracy happens on unlicensed streaming platforms.”
More advanced content security measures may have made it more difficult to hack into the cryptographic components of the content protection system, with consequently fewer ‘traditional’ security breaches. However, even as protection mechanisms get more sophisticated, the number of vulnerabilities continues to increase.
Commercial piracy
“Content is available on many more networks, giving pirates more points of attack than just the smartcard,” says Peterka. “Pirates are now trying to go upstream all the way to content creation itself, because pirating that content before it enters the conditional access/DRM domain gives them the biggest benefit. This is why content owners are now employing watermarking before content even hits movie theatres; piracy has to be addressed all the way up to the original source.”
“In some respects, piracy is actually getting worse,” Twentieth Century Fox’s SVP for Content Protection and Technology Ron Wheeler told the Pay-TV Innovation Forum. “Illicit streaming devices and associated services cost users real money and therefore target the same paying customers that legitimate broadcast and OTT services do.”
Nagra says such “commercial piracy” is a more sophisticated form that involves advanced streaming platforms, front-end marketing sites and payment servers that aim to compete with legitimate services.
“These offerings are particularly damaging in emerging markets, where consumers can hardly tell them apart from legitimate services,” says Trudelle.
No threat goes away - it morphs over time. Attackers are combining different forms of attack and even sharing codebases to circumvent the defences the cybersecurity industry puts in place. At the same time, security experts have ramped up their solutions to disrupt these threats.
Irdeto is using artificial intelligence to detect illegal streams through semantic analysis of social media advertisements and web page indexes, to identify broadcaster logos, and even to spot athletes via facial recognition. Once a stream is flagged as illegal content, a takedown notice is issued.
“Once pirates realise the detection techniques that are being employed they start adjusting their methods – blanking or switching out logos for example,” says Irdeto VP of Cybersecurity Services Mark Mulready. “The more mischievous ones are actually putting on other logos of other broadcasters.”
That’s where the next phase of the machine learning project comes in. “We’re trying to teach the system to recognise things like football strips so it can actually determine which game is on from seeing, for example, Barcelona’s colours.”
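Irdeto’s detectors are proprietary, but the simplest version of the idea – scan frames of a suspect stream for a known broadcaster logo – can be illustrated with crude template matching. The file names below are placeholders.

    # Crude stand-in for the ML logo detectors described above: flag a
    # frame as a suspected rebroadcast if a known broadcaster logo is
    # found by normalised template matching (OpenCV).
    import cv2

    def frame_contains_logo(frame_path, logo_path, threshold=0.8):
        frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
        logo = cv2.imread(logo_path, cv2.IMREAD_GRAYSCALE)
        scores = cv2.matchTemplate(frame, logo, cv2.TM_CCOEFF_NORMED)
        _, max_score, _, _ = cv2.minMaxLoc(scores)
        return max_score >= threshold

    if frame_contains_logo("stream_frame.png", "broadcaster_logo.png"):
        print("Suspected pirate stream: queue a takedown notice")

Template matching breaks as soon as a pirate rescales, blanks or swaps the logo – exactly the cat-and-mouse Mulready describes, and why Irdeto is moving to learned features such as team strips.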
Nagra is introducing new watermarking solutions for OTT delivery apps at IBC2018. This will allow content and rights owners to trace leaks to their origins on a consumer streaming device, enabling operators to turn off a suspicious user and disrupt pirate services during live events. The company is also expanding its monitoring and takedown capabilities.
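Nagra’s scheme is not public, but session-based forensic watermarking in general embeds a per-subscriber identifier imperceptibly in the picture so that a leaked copy can be traced back to the account that streamed it. A deliberately simplistic least-significant-bit toy (nothing like a production, attack-resistant system) shows the trace-back idea:

    import numpy as np

    def embed_session_id(frame, session_id, n_bits=32):
        """Toy watermark: write a 32-bit subscriber ID into the least
        significant bits of the first n_bits pixels. Real systems spread
        the payload robustly across space and time to survive re-encoding."""
        flat = frame.flatten()                 # flatten() returns a copy
        for i in range(n_bits):
            flat[i] = (flat[i] & 0xFE) | ((session_id >> i) & 1)
        return flat.reshape(frame.shape)

    def extract_session_id(frame, n_bits=32):
        flat = frame.flatten()
        return sum((int(flat[i]) & 1) << i for i in range(n_bits))

    frame = (np.random.rand(720, 1280) * 255).astype(np.uint8)
    leaked = embed_session_id(frame, session_id=0xC0FFEE42)
    print(hex(extract_session_id(leaked)))    # -> 0xc0ffee42: traces the leaker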
Verimatrix’s Peterka says: “We may never stop piracy but making it more difficult and less economical for pirates to steal can help slow it down. To stay on top of content protection, it is essential that service providers keep investing in security to discover and patch any vulnerabilities in a timely manner.”
Meanwhile, cryptocurrencies like bitcoin have made it easier for attackers to ‘cash out’ undetected, while the emergent Internet of Things will only magnify the threat.
“We are no longer dealing with a handful of companies with closed ecosystems solely responsible for securing data on the device,” warned McAfee CEO Christopher Young recently. The cybersecurity firm tracks 600,000 unique threats a day on 300 million devices and says cybercrime drains US$600 billion from businesses a year.
“With open systems the network also connects to hundreds of billions of devices. How will we secure this large-scale connected device ecosystem without stifling growth and innovation? We stand on a precipice today.”

Monday 18 June 2018

We are a step closer to sci-fi-like floating video hologram displays


RedShark News
The cinema dream of the hologram promised in Star Wars, Iron Man, and Minority Report is being chased hard by numerous developers – among them Light Field Lab and Leia Inc.
There are two attributes that, for most people, are at the core of the holographic dream: floating 3D scenes that groups of people can interact with and not having to wear AR/VR headgear or 3D glasses.
These are the guiding principles behind developments at Looking Glass, which may just have the jump on them all.
Co-founder and CEO Shawn Frayne and his team have been working since 2013 on a technique that “blends the best of volumetric rendering and light field projection.”
It has released what it claims is the world’s first interactive light field development kit – HoloPlayer One.
The HoloPlayer One uses a two-stage optical system at the heart of which is an LCD screen that sends 32 views of a given scene towards their designated directions simultaneously. This creates a field of light, “which a scene that occupies the same physical volume would have given out”.
The field of light is then retro-reflected to form a real image outside of the HoloPlayer One, allowing the scene to literally exist in mid-air.
Looking Glass says that since the 32 views are “sitting there waiting to be viewed”, the latency issues commonly experienced in eye-tracking 3D display systems are eliminated. Being a light field display also makes the HoloPlayer One system suitable for multiple viewers.
Audiences within a 50-degree view cone will be able to see the same “aerialised” scene at the same time without the need for any head-mounted devices.
On the sensor side, the HoloPlayer One system is equipped with the Intel RealSense SR300 depth camera. This allows users to interact with the scene by grabbing, pinching, touching, swiping…  just as anyone could do with an actual object floating in midair.
Except there is no real object there, just photons.
It’s available now in a US$750 developer version which works on a PC or MacBook Pro, though you’ll get better results with a dedicated GPU. There’s also a US$3,000 premium edition which includes the Development Edition hardware with a built-in PC, so you can just plug it in and start interacting with holograms, no laptop needed.
Dozens of sample applications – ready to download to your HoloPlayer – have been built, ranging from HoloBrush (kind of like Tiltbrush, but without VR/AR headgear) to HoloSculpt (a way to sculpt and 3D print a digital piece of spinning clay) and games like 3D Asteroids. HoloDancer is a kind of impossible cabinet of curiosities. The system has also been used to rig a 3D digital character directly in real 3D space and to create experimental 3D ultrasound imagers and even more experimental versions of a ‘Holocommunicator’ video programme.
This could be just the tip of the iceberg of what’s possible with this fundamentally new interface. There are more experimental applications bubbling up on the site’s user Forum.
That said, the HoloPlayer One isn’t a plug-and-play product and is more like the first generation of personal computers or 3D printers, designed for very early adopters.
“When your finger touches, say, the tip of an X-Wing, your finger is actually coinciding with the digital 3D tip of that X-Wing in real space,” claims Frayne.
If you still feel the floating image in HoloPlayer One is a little blurry, the company is working on something a little crisper. This is an experimental system called Super Pepper, a close cousin of HoloPlayer One. Unlike HoloPlayer One’s, the image does not float above the glass but sits behind it, so that it can augment 3D physical objects. The Super Pepper uses a larger 4K super-stereoscopic screen (similar to what’s in the base of HoloPlayer One but roughly twice as large) to generate its moving 3D scenes. All software that runs on HoloPlayers can run on the Super Pepper, including the HoloPlay Unity SDK.
The company has also developed a series of displays. Debuted a couple of weeks ago was a 3-inch-thick clear Lucite (acrylic) block that appears to have a 3D interactive object floating inside it in front of a black rear panel. The firm is also preparing an 8.6-inch diagonal and a 15.9-inch diagonal screen for sale at around US$600 and US$3,000, respectively.
Having the image float above the glass means you can use your bare hands to interact directly with the 3D content, and it feels a step closer to the full dream of the holographic interface we’ve been promised by decades of sci-fi movies.


Friday 15 June 2018

Going Fast and Furious with Reds and Celeb 250s

Content marketing for VMI

Film producer Lars Sylvest has a successful track record producing features with stars such as Sandra Bullock, Kevin Bacon and Kurt Russell but he’s long harboured a passion project to share his hobby and interest in luxury sports cars.
Last year he set up Superfast Productions with Rob Young, one of the world’s leading automotive tuners, to develop the concept into a series of documentary features. They brought on board Wheeler Dealers creator Daniel Allum as creative director.
“We wanted to show the meticulous approach to automotive engineering and the skillsets of the UK automotive industry, which is unique worldwide,” explains Sylvest. “Eighty-five per cent of every component of an F1 car is designed and manufactured by UK firms in a small radius around Oxford. The level of ingenuity is astonishing.”
For the first Superfast feature, the production filmed Young in his workshop dismantling a Porsche GT2 – already an extremely highly engineered vehicle – and tuning it up to 900hp.
“We wanted our documentary to reflect the aspirational quality of the super fast sports car and the high-tech approach to modelling, building and re-engineering them which is why we wanted to film this cinematically,” says Sylvest.
The DP chose to shoot with his own camera, a Red Scarlet-W, and to match it VMI supplied a Red Dragon camera kit, filters and plenty of Anton Bauer batteries. VMI’s media rental arm, VMEDIA, supplied a Red Minimag SSD side module to fit the earlier DSMC1 cameras so they could use the latest Minimag storage, which was important for the production’s data storage demands.
The lenses selected were Leica Summicron-C T2 film primes. The principal configuration was a 25mm lens on a gimbal with the Red Scarlet-W’s Sidekick interface. Occasionally DP Ben Scott used a 35mm, with the second Scarlet fitted with a 75mm or 100mm. He tended to stay around T2.8 unless low light required opening up more.
Aware of the data demands of shooting at 6K, they elected to shoot at 5K for a 4K finish, with occasional 6K shots. For driving shots they used a Flowcine Serene arm on a hydraulic riser on a Ford pickup, plus two additional Red Weapons, a second-unit drone team under the command of Primary Drift and some Lumix GH4s for bolt-on shots on track days.
For lighting, VMI supplied four Kino Celeb 250s, which had recently arrived. Like the ARRI SkyPanels and Litepanels Geminis, these punchy 1x1 LED soft lights offer variable colour temperature and gel simulation, and they were remotely controlled from a DMX desk which VMI also supplied.
“The Celeb 250 LEDs allowed us to arrange permanent, controllable and consistently even lighting in the workshop/studio,” explains Sylvest. “The Celebs were mounted from the ceiling, which enabled us to move setups freely and, in turn, saved a lot of time.”
The greatest challenge lay in filming at high speeds with the cameras onboard the Porsche. To withstand the increased acceleration and deceleration forces acting on the much heavier cameras and lenses (which combined weighed over 10kg), custom-made rigs were supplied by Extreme Facilities.
“We were filming at 200mph which is unknown territory for most cameras,” says Sylvest. “The cameras needed to be stable at those speeds which is why we needed a bespoke rig.”
The production shot on roads in Anglesey and on a former airfield in Essex. “The result looks really cinematic, and the driving shots in particular have a look and style that wouldn’t be out of place in The Fast and the Furious.”
Superfast was post-produced and given an HDR grade at Silverglade in London. The director is Nick Wilkinson, and the producers are Sylvest and Young.
“Superfast is our template for what we aim to be a growing series of films about different sports car models and the skills of the teams who design and drive them,” says Sylvest. “Basically, we are documenting the build of one car at a time as a glossy hour-long mini-feature.”

Thursday 14 June 2018

Virtual Post opens Hove post-production facility

Broadcast 
Post-production and content servicing company Virtual Post has opened its third facility, in Brighton and Hove. The new office complements its existing facilities in Percy Street, central London, and in Orpington, south-east London.
The Hove office houses six offline Avid Media Composer and Adobe Premiere edit suites – two are ready now and four more will follow next month – plus a 5.1 Pro Tools audio suite, a Resolve grading and finishing room, and graphics capabilities through Adobe After Effects.
The facility has Avid Nexis shared storage, and it’s also linked up to Virtual Post’s London and Orpington offices via fibre.
Virtual Post launched in 2013 and is based around a cloud infrastructure, meaning that each of its three hubs can access resources such as storage from a central location.
As well as its in-house facilities, Virtual Post will use Bebop virtual suites to scale up when required. The company will also offer serviced office space for local indies and London-based producers looking to expand into the regions.
Brighton & Hove was recently shortlisted as a potential city for Channel 4’s regional hub when it relocates out of London in the next few years.
Virtual Post CEO Jon Lee said: “We have set up to satisfy the increasing post and content servicing demands of TV production growth in the region. With an ever-increasing community of talent in the south, we want to send a clear message to the commissioners that the region is wide open for business.”
“Post production isn’t in one place anymore,” he adds. “Content servicing teams can work from anywhere. What’s fantastic about this is how much easier it is for our clients and teams to work together with everyone benefiting from the significant cost and time savings that can result.”
He says the firm “has seen steady growth” with more and more indies “trusting to the cloud as part of their business operation.”

Craft Leaders: Andy Serkis, director and actor


IBC

Andy Serkis speaks about the “liberating tool” of performance capture and reveals all about his latest project Mowgli.
It’s extremely unusual, unprecedented even, for an actor - let alone an A-list star - to discuss the cutting edge of production techniques with so much passion and erudition. But Andy Serkis is not your usual filmmaker.
 “Using game engine technology to achieve high quality rendering in realtime means that, once something is shot on set, there will be no post production,” he says. “I believe that is where we are headed and with VR, AR and interactive gaming platforms emerging it’s the most exciting time to be a storyteller.”
Serkis is synonymous with performance capture, the art of playing digitally enhanced characters, which he has made his own over the last seventeen years.
Famous as Gollum in The Lord of the Rings and The Hobbit, Serkis has also inhabited King Kong, again for director Peter Jackson, Captain Haddock in Steven Spielberg’s The Adventures of Tintin, Caesar in the Planet of the Apes trilogy and Supreme Leader Snoke in Star Wars: The Force Awakens and The Last Jedi.
In doing so he has done more than anyone to advance the technique from a peripheral activity to something fundamental to the way a film is shot, pioneering the transition from purely capturing motion to truly recording emotion.
Recently he has moved into directing, showing his versatility on well-received biographical drama Breathe before turning his attention to Mowgli, a big budget refresh of The Jungle Book for Warner Bros. with performance capture to the fore.
“I love Gollum, because he started all of this,” says Serkis.
“Gollum is the most incredible 21st century role for an actor. I will never see performance capture as a straitjacket… quite the opposite. I see actors or dancers being able to use performance capture as the most liberating tool. You are not just tied to a humanoid form. Now you can play anything.”
Behind the mask
It wasn’t always like this. In its early days there was “gross misunderstanding” about the technique, Serkis admits.
“It’s like wearing digital make-up or a digital costume and in every other respect it’s the same preparation and process as any actor would go through in creating any role. You are directed by the director, you are on set from day one to the end of the shoot, you are the complete author of that part. A director can have the animators take the data and augment away from the performance – but that is not real performance capture.”
He recalls the early reviews of Lord of the Rings questioning who was behind the mask. “Are they a dancer? A mime? A contortionist? In every interview I found myself repeating that, no, I am playing a role and that is no different to any other actor.”
The only distinction, if there is one, is that the performance captured actor has significant interaction with visual effects artists. “But not more so than actors wearing prosthetic make-up might have with the make-up department,” he says. “There’s no difference between actors playing simians in the original Planet of the Apes [1968] and what [VFX facility] Weta did with the reboot [Rise of the Planet of the Apes, 2011].”
The classic analogy is that of John Hurt, unrecognisable behind prosthetic make-up in The Elephant Man (1980), yet whose performance as John Merrick was critically lauded, landing him a Bafta win and Golden Globe and Oscar nominations for Best Actor.
No such recognition yet by the Academy for Serkis – or indeed any actor in a performance capture role – although each of the recent trilogy of Apes movies was nominated for Best VFX.
This year Serkis’ acclaimed performance in War for the Planet of the Apes went unheralded during awards season, with Gary Oldman winning for Darkest Hour, his towering performance as Churchill smothered in visage-altering cosmetics.
One argument is that the Academy (and other awards bodies) find it problematic to distinguish between the actor’s performance and the digital artistry of VFX teams. Should the award go to Serkis alone when the work is augmented by animators?
“The fact of the matter is that visual effects are amply awarded,” says Serkis, who suggests a little discomfort with having the burden of an ambassadorial role for performance capture thrust upon him.
“It goes without saying that VFX artists are brilliant,” he says. “On every Marvel movie, for example, it’s clear that the work of the actors has been enhanced in almost every shot with a brilliant piece of CG. What I am talking about is the process of literally taking the performance of an actor and retargeting it onto a new physiognomy in a way which retains the fidelity of the performance, of the character. Of course, one could not exist without the other, but my point is that VFX artists have a VFX category.”
Pushing the boundaries
Which brings us to Mowgli, produced by Warner Bros and Serkis’ Imaginarium Productions and featuring Christian Bale, Benedict Cumberbatch, Naomie Harris, Cate Blanchett and Serkis himself, who plays Baloo.
It’s a production which promises to push the boundaries of performance capture possibilities further than ever before.
Serkis describes the advances made on the movie as a “true hybrid” in which different gradations of the performance capture process will inform the finished picture.
It’s live action, shot on jungle sets at Warner Bros’ Leavesden Studios and on location in South Africa. In that sense it’s closer in style to Planet of the Apes than to Disney’s 2016 The Jungle Book, in which 3D animated characters inhabited a CG world.
“I have a real problem with talking animals on film,” Serkis explains of his reason for using performance capture for this project. “If you took a photoreal tiger and put a voice on it, I would find it emotionally unbelievable.”
The production for Mowgli has gone to great lengths to create realistic animals that will also emotionally connect with audiences. This stems from concept art for the creature design, which is based on illustrations by Victorian naturalist Robert Sterndale and on lithographs made by Edward Detmold and his brother for a 1903 edition of The Jungle Book.
“If you imagine a timeline with the actor’s face on one side and a panther on the other, the two sides morph to a point where you see both the panther and the human actor’s face. That for me was the fundamental way into the story.”
The cast spent three weeks having their facial performances captured in a space set up by Imaginarium at a hotel near Leavesden.
“I was very clear [to the cast] in rehearsal that you are creating a character not a photoreal animation,” says Serkis. “Some actors, including Benedict (who plays Shere Khan) and Tom Hollander (a hyena called Tabaqui) chose to bring a lot of physicality to the role. Others, like Peter Mullan (lion chief Akela), found that stillness was more important for their character.”
The physical movements of the cast were recorded by witness cameras and combined with the facial data at London facility Framestore where editor Mark Sanger (who had previously worked with Framestore on Gravity) made a first cut of the movie.
“From that cut we had the timing for scenes,” explains Serkis. The next stage involved capturing the physical performance of a troupe of actors from Imaginarium’s repertory on set at Leavesden. They wore polystyrene heads constructed to scale for each of the animal characters and were recorded alongside the live action performance of Rohan Chand as Mowgli.
“The data we captured here was useful for geography and choreography to block the scenes,” explains Serkis.
All 1,164 VFX shots were supervised and completed by Framestore, which developed, with Imaginarium, a new muscle system to allow animators to more directly replicate facial shapes and expressions.
It is the fine balance between the essence of an actor’s facial performance and the animator’s translation of that performance to create the final character that is most innovative.
“The industry is now on the verge of realtime facial capture,” says Serkis. “That’s the grail. Once you can do that, untethered by wires and cables, and combine performance capture transparently with the rest of the physical performances in a virtual environment, then we’re talking about being able to complete a production straight out of the box.”
Imaginarium is not alone in innovating such virtual production techniques. Everyone from VFX tools developer Foundry to Paramount Studios, Intel and James Cameron is looking to evolve technology that permits a director to film CG assets with live action, live on set. What’s more, this streamlined method will in theory create one set of assets for repurposing in all sorts of ways – ultimately a revenue generator for the studios involved.
“We are then in a world where the same assets can be used across VR, AR or gaming,” says Serkis. “Assets can be used to create animated movies straight out of the box. Studios like Imaginarium will be best suited to take on the totality of virtual production from inception to final output.”
The art of performance
In his youth, Serkis showed talent as a painter, particularly in character drawing. He went to study visual art at Lancaster University “with no intention of becoming an actor” but gravitated toward theatre after designing posters for college drama shows, then making props and gradually becoming more involved behind the scenes. By the end of his first year he had started acting, something that led, he says, to an epiphany.
“One of the great things about Lancaster Uni at that time was the degree of independence they gave students. Basically, you could create your own degree. So, I swapped visual arts for my own independent studies in theatre, set design and movement.”
Fascinated by using art and physicality to tell stories, Serkis started to put on shows using puppetry, and in his final year directed and performed a one-man show based on Raymond Briggs’ Falklands War satire The Tin-Pot Foreign General and the Old Iron Woman.
“I then put that lot aside when I first started acting but after ten years I realised I had a huge desire not just to inhabit one character’s point of view but to tell a story from a visual and a storytelling and a directing perspective. I began to make short films and write scripts and then, by luck, I found myself in a world where suddenly all the skills I’d accumulated over the years fell into place.”
That world was Middle-earth, the fantasy setting for director Peter Jackson’s staggeringly ambitious adaptation of JRR Tolkien’s masterwork, which many thought was unfilmable.
Playing a creature like Gollum – which required him to watch his virtual performance in real time on a monitor – was “like being a puppeteer and a marionette at the same time.”
Art and painting have influenced all his creations, he says. The work of Francis Bacon, Egon Schiele and Leonardo da Vinci inspired his portrayal of Gollum.
“I found myself straddling acting, VFX and animation and I felt very comfortable in all those worlds and I felt comfortable connecting artistically with a very wide range of people.
“Throughout my acting career I’ve always connected to the physicality of a role and believed that to be just as important as a character’s emotional and psychological centres. So, when I started on Rings I found it relatively easy to launch wholeheartedly into a vocal and physical performance for Gollum.
“When I put on the motion capture suit and saw that character come to life it lit my fire in a big way. The first lifting of my arm in the suit and seeing my arm on screen as Gollum was extraordinary and exciting to me.
“Suddenly lots of things seemed to open up in terms of the potential of what an actor can do and of the stories that could be told from a filmmaking perspective.
“It can be used for digital resurrection (new performances from dead stars) and fake news too. All this technology can be used for good or bad.”
In tests for Mowgli, which went unused, Serkis recorded two people performing the front and back of an animal to create a quadruped. They retargeted the movement of a person’s hands to be the ears of a dog.
“There are so many ways of retargeting performance to drive character,” says Serkis. “The acting community is more and more excited about this and wanting to be part of it.”
Seeing the world in a different light
Serkis says he is motivated by projects that say something about the human condition “and help others see the world in a different light.”
“In pure acting terms, I am paid to research and respond to a role and show it to an audience as truthfully as possible, but I’ve always believed that social change is a motive for creating art.”
He has long wanted to make a version of George Orwell’s Animal Farm, beginning preparation for it well before Mowgli. “It’s one of the most lengthy development projects there’s ever been,” he admits. “But we want to get it right. It’s a hugely important book and it’s even more pertinent now than when we first started working on it.”
There are parallels with filming Lord of the Rings, which until Jackson came along had only been made once before, as a 2D cartoon (1978). Similarly, the only previous attempt to film Orwell’s political fable was a 1954 feature animation. Naturally, Serkis’ version will be performance captured.
“I struggled with the idea of doing a version of Jungle Book because of Orwell’s attitude towards Kipling. It opened up the whole debate about whether an artist can create an outstanding piece of literature that does connect with humanity - and yet have personal political views with which you might not necessarily agree. I had to wrangle with that before committing.”
In the event, Serkis’ PG-13-certificate Mowgli will have subtle political undertones.
“I don’t believe you can make Jungle Book today without being cognizant of modern politics and modern sensibilities,” he says. “Kipling was a very conflicted human – he was the most beloved author at the same time as being derided by Orwell as a jingoistic imperialist.
“We had to take that into account when making this film. Mowgli is an orphaned child who is stuck between two worlds. He can’t be accepted into the animal kingdom, but neither can he accept all human customs. It’s about how we treat the ‘otherness’ of fellow creatures and about man encroaching on the natural world and imposing his laws on the law of the jungle.”
Another of Serkis’ pet projects is a film about Eadweard Muybridge, technologist, self-confessed murderer and founding ‘Godfather’ of cinema.
“I have a massive obsession with him. He essentially established 24 frames a second as the speed at which still images could be played back to give the illusion of motion. His photography has become the lifeblood of an animator’s art, since anyone who wants to learn how an animal moves starts with Muybridge.”
His exploration of recording movement on film arguably makes him the godfather of motion capture too. Muybridge captured the movement of horses by setting up a dozen cameras in an array, taking sequential photos triggered by the movement of the animal’s feet. Sadly, this project may not now come to pass, at least in Serkis’ hands.
“He has a huge part in my heart. I respect his tenacity through personal adversity and his pioneering vision.”