Wednesday, 26 June 2019

Craft leaders: Paul Greengrass, Director


IBC
Paul Greengrass, the director of 22 July, Captain Phillips and The Bourne Supremacy, has damned the nepotism and public school cliques which he says continue to dominate the UK’s creative industries.
“The entrenched networks that prevail in modern Britain have a very deep presence in film and television and will always privilege those in the industry,” Greengrass told the Sheffield International Documentary Festival. “Recent social advances have made the industry much more open, with far more women than when I was starting out, but the distance you have to travel from a working class background is profound.”
He said an industry “that is awash with money” was not doing enough to encourage diversity. “School kids do not think of film and TV as somewhere where they can go and make their mark. It’s still far, far too reliant on who you know.”
Greengrass speaks as one who broke the glass ceiling, rising from a working-class background to A-list Hollywood director, but he attributes part of his career to luck and another part to his own rage and frustration at being unable to break into the club at the BBC in the nineties.
“I tried to enter the world of drama and found it a tremendous clique,” he says. “I submitted scripts to the BBC on several occasions only to have them passed on to established screenplay writers [Michael Frayn, Alan Bleasdale] and I struggled with that a lot. It was an institution I wasn’t going to be invited to and I didn’t belong in.”
By his own account, Greengrass has always struggled to fit in. Born in Cheam and raised in Gravesend on the Thames Estuary, “a tough place with a strong, rebellious anti-metropolitan identity”, he recalls “all the intense anxiety I felt as a young person – girls, socialising, insularity. I found institutions tremendously hard.”
Having been “quite a bolshie, arsey kid,” he was lucky, he says, to have found his métier at secondary school. “That’s where I learned that I had eyes that I could use. I’d found I could draw and paint and print. The art teacher encouraged me to use an old dusty Bolex camera. Suddenly I’d found what I was meant to do. The camera allowed me to speak at some level.”
There was nothing in his family background to suggest an affinity with moving images.
“I have a theory that my becoming a filmmaker is to do with being quite isolated as a child and my childhood experiences of cinema,” he says. “If you have these incredible powerful experiences in a dark space it creates a fugue state that mainlines to your cortex. I can recall those experiences more vividly than many others. Filmmaking is essentially a psychological effort to recover the intensity of childhood experiences - which is why all filmmaking is one of disillusionment and deep self-loathing.”
TV roots
The route into the industry for aspiring filmmakers like Greengrass in the 1980s was television and, with only three TV channels and no insiders to call on, that narrowed the choice down to ITV. He mailed his CV dozens of times requesting work experience, eventually landing a job as a researcher on the sports desk at Granada.
From there he gravitated to the hard-hitting current affairs series, World in Action. “You were taught that London was the enemy, that Manchester was where it was at,” Greengrass says. “Granada had that ‘we don’t care’ attitude which spoke to me. The attitude was ‘we make programmes that people want to watch. The BBC makes programmes they think you should watch.’”
World in Action was an eclectic mix of traditional documentary filmmaking (stemming from founding fathers John Grierson and Humphrey Jennings) and observational filmmaking “wedded to a strong journalistic ethos which gave it a political edge. Plus, it had this weird almost agitprop Private Eye quality to it in which they’d use gimmicks and devices to tell a story.”
Greengrass flourished, directing and producing stories on the Thatcher-era coal mining industry, on MI5 officer Peter Wright (with whom he co-authored the book Spycatcher, which the British government attempted to ban) and behind the scenes with Bob Geldof in the weeks leading up to the Live Aid concerts.
“[WIA] enabled me to shoot, cut, write, to tell a story, do it under pressure. There’s nothing so intensely terrifying as having to cut a WIA in two days.”
He believes the process of filmmaking will always tend toward a vacuum, and that the essence of the director’s job is not to let it form.
 “You cannot stop, you must move forward. Indecision creates a toxicity which destroys the dream which you set out to achieve.”
As important, his decade working for WIA taught him how to make films for an audience. “You are having a conversation with someone. You’ve got to be clear and direct about what it is you want to say.”
Although he never admitted it to his colleagues at the time, Greengrass had a hankering to make fiction, and he tried and failed to transition into the format to the point of considering giving up on his career altogether.
It started well enough. His first film, for the just-launched Film4, was Resurrected (1989), an anti-war tale based on real events about an MIA soldier in the Falklands (David Thewlis) who turns up alive weeks after everyone thinks he’s dead. It was nominated for the main award at the Berlin Film Festival.
His next steps weren’t so assured. “I did learn the language of TV drama, which is authorial storytelling and different to shooting docs – but I always felt that it wasn’t me,” he reveals. “I worked through my thirties but I never quite felt I was being true to myself.”
It became increasingly difficult as he struggled to translate the vision in his head to the screen.
“I’d see a film in my mind and shoot it using conventional film grammar. That means you are filming in the third person, but it hasn’t got the first-person urgency, and it didn’t connect me to where I’d come from in terms of documentary filmmaking, or connect with me emotionally in terms of the things inside me, which are attack and pace and drive.
“But I didn’t know how to bring those things together. I went through a couple of films feeling unbelievably frustrated.”
This included TV movie The One That Got Away, about an SAS raid in the first Gulf War based on Chris Ryan’s book. “I was on location banging my head against a Humvee thinking this is not me. Why am I seeing the film that I want to make in my script, while the filmmaking process just puts a wet blanket over it?
“It was a lack of courage, a lack of knowledge and a lack of breakthrough in finding a voice. I hope that’s the only truly disastrous film I’ve made. The sense of failure I felt about not imposing myself on that film gave me a rage.”
Take no prisoners
He took that anger into his next film, a docu-drama account of the racist murder of schoolboy Stephen Lawrence.
“I came to that film with some sense of crisis,” Greengrass says. “I’d reached a place where I thought I should give up. The rage and the frustration and fear that I felt at that point gave me a ‘take no prisoners’ attitude. On the first day on set I started to shoot in a way that I always do now, but then I backed off. I was terrified.”
This was the beginning of Greengrass’ now-signature handheld, kinetic, micro-cut style that lends his movies the feel of reportage.
“You are almost gambling everything on the one moment. It only lives in that moment, that way, that shot. It is unsettling because your rushes don’t look like ordinary rushes – there is no safety about it.”
Without the encouragement of producer Mark Redhead, Greengrass might have remained stuck. “He said just ‘go for it’. So, we did.”
What Greengrass says he realised is that the screenplay was never going to be the template for the film he wanted to make.
“The screenplay is fundamental, but the film exists beyond it and you have to get to that point and the only way of doing that is by speaking in your own voice. Finding your voice as a filmmaker is something that is hard won, and you can only win it by trying and failing.
“In the end, it’s about moving toward being true to yourself in your choice of subject, in the way you handle that subject and the aesthetic choices you make to render that subject. All of those have got to come together.”
Bloody Sunday, his meticulously researched and frenetically paced recreation of the 1972 Derry massacre, shared first prize at the Berlin Festival in 2002 and caught the attention of producer Frank Marshall.
“I’d never thought about making a commercial movie. I’m not an obvious candidate but when they asked me to do Bourne, I remember going to see Doug Liman’s [franchise starter] The Bourne Identity. I thought, I know what to do with that.”
You would imagine that Marshall knew what he was getting with Greengrass but apparently he was shocked when he saw rushes during the first week of shooting.
“I was sitting in the back of the theatre and I could see Frank jerking around – ‘why the fuck is he shooting the stuff like this – it’s horrendous.’”
Story through action
When it came to reshooting some scenes for The Bourne Supremacy, the studio ordered Greengrass to shoot them both the way he wanted and in a standard locked-off way.
He says though that his experience with Hollywood executives contrasted favourably with clashes with the BBC.
“Studio execs are not scary. They want filmmakers to tell them how to do things.”
Another key aspect of Greengrass’ style is the ability to tell a story through action. This is most evident in the action sequences of the Bourne franchise, including Supremacy, The Bourne Ultimatum (2007) and Jason Bourne (2016).
In a scene in Supremacy, Bourne is trapped in a hotel room and recalls a horrific murder he committed in his past as an assassin. The next sequences show him escaping from the room, running across Berlin to evade SWAT teams and CIA goons, and eventually getting away on a metro train with a look of resolution on Matt Damon’s face.
“That is quite an accurate psychological state of the character’s mind because when you remember something that is deeply shaming you run away from it,” Greengrass explains. “He is being chased by his own demons and the sequence ends with a realisation that Bourne must face up to them. He cannot run anymore. He has to atone for what he has done.”
He is however dismissive of using shaky-cam simply for effect.
“Action pieces like car chases should all have a character root that is truthful, otherwise it is just eye-candy. You want the images you’re capturing to arise authentically out of the environment you’re shooting in — so, if you’re running, it’s going to feel like what it feels to run. It doesn’t work when you’re not showing detail or developing the moment. It only works when you can get action to enact character.”


Monday, 24 June 2019

VVC and AV1 Show Efficiency Gains; MPEG Announces Third Next-Gen Codec

Streaming Media

BBC R&D finds that AV1 produces better low-bitrate quality than HEVC, but the codec picture will get even muddier in 2020 as MPEG fast tracks VVC, MPEG-5 EVC, and LCEVC.
http://www.streamingmediaglobal.com/Articles/Editorial/Featured-Articles/VVC-and-AV1-Show-Efficiency-Gains%3b-MPEG-Announces-Third-Next-Gen-Codec-132632.aspx

Streaming standards AV1 and Versatile Video Coding (VVC) are on track to outperform HEVC as the industry adapts to a multi-codec future.
The most recent tests by BBC R&D, comparing all three codecs side by side, verified claims by AV1 developer the Alliance for Open Media (AOM) that its codec has significantly reduced its computational complexity.
At the same time, VVC was shown to outperform both HEVC and AV1 in terms of video quality output by up to 35%, a significant improvement on previous tests.
In results published earlier this month, BBC R&D found that VVC (a development of the Joint Video Experts Team and MPEG) performed 27% better than HEVC for HD sequences and 35% better for UHD sequences.
AV1, on the other hand, performed very similarly to HEVC, with an average 2.5% loss over HD sequences and a 1.3% gain for UHD sequences.
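Percentage figures like these are typically reported as Bjøntegaard-delta (BD) rate savings: the average bitrate difference between two codecs' rate-distortion curves at equal quality. As a rough illustration of the arithmetic behind such numbers, here is a minimal Python sketch of the standard cubic-fit BD-rate method; the four-point rate/PSNR curves are invented for illustration and are not BBC R&D's data.

```python
import numpy as np

def bd_rate(rates_a, psnr_a, rates_t, psnr_t):
    """Bjøntegaard-delta rate: average % bitrate difference between two
    rate-distortion curves at equal quality. Negative means the test
    codec needs less bitrate than the anchor."""
    log_a, log_t = np.log(rates_a), np.log(rates_t)
    # Cubic fit of log-bitrate as a function of quality (PSNR)
    p_a = np.polyfit(psnr_a, log_a, 3)
    p_t = np.polyfit(psnr_t, log_t, 3)
    lo = max(min(psnr_a), min(psnr_t))   # overlapping quality range
    hi = min(max(psnr_a), max(psnr_t))
    int_a, int_t = np.polyint(p_a), np.polyint(p_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1) * 100

# Invented four-point RD curves (kbps, PSNR dB), purely for illustration
hevc = ([1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5])
vvc  = ([ 700, 1400, 2800, 5600], [34.1, 36.6, 39.1, 41.6])
print(f"BD-rate: {bd_rate(*hevc, *vvc):.1f}%")  # roughly -30%, i.e. ~30% bitrate saving
```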
Comparing AV1 to HEVC, BBC R&D found that AV1 could produce higher-quality decoded video than HEVC in low-bitrate scenarios, "which is highly desirable in the video coding field."
The time taken to process the videos through each codec was also tested. Increased processing time means increased complexity, and therefore more computational power is required.
BBC R&D found that the compression gains from VVC come at the cost of processing time. Encoding took around 6.5x longer than HEVC and decoding 1.5x longer. AV1, on the other hand, takes about 4x as long to encode as HEVC, but is 8% quicker to decode.
In comparison to the lab's previous test results last year, "AV1 has hugely improved these processing times. This again highlights the focus of [AOM] of producing a codec optimised for streaming."
It is worth mentioning that the test models used by BBC R&D are intended only to provide an insight into the possible quality a codec can achieve, and they are not optimised for speed. Encoding and decoding will typically happen with optimised software or on hardware, where processing times will be far quicker. BBC R&D acknowledges this but adds that its data still gives a general idea of the relative quality and complexity of these codecs.
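For readers who want to run this kind of rough speed comparison with production encoders rather than reference test models, a hedged sketch follows. It assumes an ffmpeg build with libx265 (HEVC) and libaom-av1 (AV1) enabled and a local input.y4m test clip; the flags shown are standard constant-quality settings, and absolute timings will vary enormously with presets and hardware.

```python
import subprocess, time

def encode_time(codec_args, src="input.y4m", out="out.mkv"):
    """Time one ffmpeg encode; a crude proxy for encoder complexity."""
    t0 = time.perf_counter()
    subprocess.run(["ffmpeg", "-y", "-i", src, *codec_args, out],
                   check=True, capture_output=True)
    return time.perf_counter() - t0

# Constant-quality settings; CRF scales differ between the two encoders,
# so this compares speed, not compression efficiency.
t_hevc = encode_time(["-c:v", "libx265", "-crf", "28"], out="hevc.mkv")
t_av1  = encode_time(["-c:v", "libaom-av1", "-crf", "30", "-b:v", "0"],
                     out="av1.mkv")
print(f"AV1 encode took {t_av1 / t_hevc:.1f}x the x265 time")
```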
Taking these results at face value, then, VVC development can be expected to deliver hefty compression gains over HEVC by the time it is finalised next year.
MPEG is targeting a 50% bandwidth improvement over HEVC with VVC by 2020. This would enable live UHD-2 (8K) encoded content to be delivered at less than 50Mbps by around 2020-2022.
Of course, these being compression schemes, life is not that simple.
Since VVC is an evolution of standards and technologies already used in HEVC and other codecs, it will not come for free. There are concerns that VVC will be as burdened by royalties and patent-pool opacity as HEVC.
AV1, meanwhile, is now being dogged by patent claims from companies including NTT, JVC, Orange, Toshiba and Philips. A licence covering AV1’s use in consumer displays (costing €0.32 per device) and in STBs and OTT devices (€0.11), administered by Sisvel International, was issued in March.
To counteract both of these, MPEG has fast-tracked development of two additional competing codecs.
MPEG-5 EVC is being designed with a royalty-free baseline profile, restricted to coding tools that are either more than 20 years old or otherwise available licence-free.
A second "main" profile to MPEG-5 EVC adds a number of additional tools, each of which is capable, on an individual basis, of being either cleanly switched off or else switched over to the corresponding baseline tool.
"The main benefit of EVC will be a royalty-free baseline profile but with AV1 there’s already such a codec available and it will be interesting to see how the royalty-free baseline profile of EVC compares to AV1,” comments Christian Timmerer co-founder and CIO of Bitmovin in a blog following the 125th MPEG meeting.
Then at the 126th MPEG meeting, LCEVC was announced. Low Complexity Enhancement Video Coding, which builds on V-Nova’s Perseus technology, is now the third video coding project within MPEG addressing requirements and needs going beyond HEVC. It will form part of the MPEG-5 suite.
Says Timmerer in another blog: “The new standard is aimed at bridging the gap between two successive generations of codecs by providing a codec-agnostic extension to existing video codecs that improves coding efficiency and can be readily deployed via software upgrade and with sustainable power consumption.”
The coding efficiency target for LCEVC—to be at least as good as HEVC—has apparently been met, and the goal now is to achieve an overall encoding and decoding complexity lower than that of the codecs it is built on (AVC and HEVC).
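The principle is easiest to see in miniature: carry most of the picture in a downscaled base layer encoded with an existing codec, and carry only the full-resolution residual in a lightweight enhancement layer. The toy Python sketch below illustrates that split; it uses OpenCV resizing as a stand-in for a real base encoder and is emphatically not the actual LCEVC toolchain.

```python
import numpy as np
import cv2  # OpenCV resizing stands in for a real base codec (AVC/HEVC)

def lcevc_style_split(frame, scale=2):
    """Toy LCEVC-style layering: a downscaled base picture plus a
    full-resolution residual carried in a lightweight enhancement layer."""
    h, w = frame.shape[:2]
    base = cv2.resize(frame, (w // scale, h // scale))  # goes to the base codec
    upscaled = cv2.resize(base, (w, h))
    residual = frame.astype(np.int16) - upscaled        # enhancement layer
    return base, residual

def lcevc_style_reconstruct(base, residual):
    """Invert the split: upscale the base and add back the residual."""
    h, w = residual.shape[:2]
    up = cv2.resize(base, (w, h)).astype(np.int16)
    return np.clip(up + residual, 0, 255).astype(np.uint8)
```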
With standardisation of LCEVC, MPEG-5 EVC, and VVC due in 2020—about the time AV1 is set to mature—the race is set to come to a head.
Competition has to be healthy even if three of the runners come from one stable, but weighing the merits of each might require as complex an algorithm as the technology itself.
It is not compression efficiency alone that will determine the winner, but also the processing complexity at both encoder and decoder, as well as power efficiency and the business and financial calculations around each codec.

Collaborate to keep up

AV Magazine
As the AV sector takes on the ‘as a service’ model, procurement is changing, and providers need to work in partnership with the client to benefit both parties.

The conventional vendor-specific ‘room in a box’ no longer cuts it when the modern enterprise is looking for unified communications and collaboration.
“There has been a shift in the enterprise space in the last few years regarding AV and communications, from tactical to strategic,” says Byron Tarry, executive director at the Global Presence Alliance (GPA). “The modern workplace is moving towards Microsoft or Cisco Teams, and AV in the conference space is moving from ‘nice to have’ to an integral part of collaborative workflow.”
Ultimately, the customer is looking for “improved collaborative outcomes”, Tarry argues. “From an industry standpoint, if we start to realise we are not technology providers but human collaborative outcomes providers, then it opens up a whole different world of what the opportunity is and the role we as AV suppliers, integrators, vendors and consultants can potentially play to support that goal.”
GPA will formalise this at InfoComm, where it will promote its new Velocity Ecosystem for global enterprises. This is described as an “integrated and standardised portfolio of collaboration solutions”, which includes program planning, hardware/software and deployment and support, along with a strategic management and analytics dashboard. Developed in partnership with Crestron, Cisco, Logitech, Legrand AV, Domotz and LG, the aim of Velocity Ecosystem is to deliver a quick path for standardisation, simplification and scale worldwide.
“The starting point for this was asking ourselves if we could deploy 1,000 rooms in 90 days and we decided we couldn’t do it in the way we’ve traditionally done,” Tarry says. “Even with all the advantages of global alignment, we still felt it would be a tall order.”
He explains: “The effort that goes into room remediation alone would mean it would take years to deliver. But when you shift the lens in terms of what you’re trying to provide to customers, which is essentially to help organisations move faster and which, in turn, brings them competitive advantage, then you think about mitigating the risk of each individual space by working with vendors to innovate and design your way to a fresh solution.”
However, this hugely magnifies the complexity of procurement. From simple models based on cost where, all things being equal, the vendor with the least expensive solution would win, the industry is slowly taking on the attributes of ‘as a service’.
Accountability shift
“In a world where ‘as a service’ is becoming prevalent, the challenge for procuring AV as a service is about a shift in accountability and risk from the consumer of the service to the provider,” he says. “Basically, it’s putting the onus on the provider to deliver results. That’s opposed to the prevailing capital expenditure model where (the industry) sells customers millions of dollars of kit with a service wrapped around it and if the customer doesn’t get what they want then the industry gets them to buy more kit and services.”
In the service model, the customer is willing to pay more if the provider can deliver better outcomes. However, it’s also extremely complex to measure, which is why part of the procurement package has to be about measurement and analytics for return on investment.
“New technologies now offer organisations insights into how and where they can drive efficiencies in existing AV setups, while advanced features such as energy-saving modes reduce the total cost of ownership in the long term,” says Carl Standertskjold, corporate segment marketing manager at Sony Professional Solutions Europe. “This is especially important at a time when budgets are limited and procurement teams need to have a firm understanding of a solution’s return on investment before authorising any purchases.”
He says the Internet of Things will also see AV solutions become increasingly connected, offering organisations two main benefits. “It enables them to collect valuable data on how and when these technologies are being used to spot patterns, understand user behaviour and challenges, and ultimately, help inform future procurement decisions,” says Standertskjold.
“On the other hand, the more connected an organisation’s AV solutions are, the easier it is to integrate new technologies into an existing setup without the need for a complete overhaul. This, again, helps streamline the procurement process.”
Tarry advises that partnership is needed for procurement on the industry and customer side to ensure innovation, optimisation and alignment of complex but highly strategic and business-critical services.
“Together, we must look for ways to minimise risk, create transparency and focus on common goals. It’s about putting a financial model in place that doesn’t create mistrust, yet has benefit for all parties,” he says.
One of the main bones of contention between parties is that AV is often engaged late on in a project, which results in tight timescales.
“The best and most effective processes are those with a defined forecast and defined roll-out, where accessibility to advance information supports a just-in-time operation,” affirms Guy Phelps, end-user account manager in the finance and legal team at NEC.
While he does not highlight any major issues with current policies, pressure points are building as more projects are created at the last minute, often as end users react to the previous quarter’s performance.
“The more advance notice gained from the end user with detailed specifications, the more slick and effective the process will become,” Phelps emphasises. “The AV industry is responding, but the end user needs to understand that as equipment and projects become ever more complex, and with roll-out processes based on ‘just-in-time’ in order to meet tight budgets, access to information to enable accurate forecasting is vital. It is essential that manufacturers, integrators and end users work very closely to ensure the best information is available to all parties in advance.”
However, Tarry contends that the AV sector needs to adapt further: “As a tech provider, we tended to say we were brought too late into the planning phase and that we were reliant on the space and construction parts aligning – but we can’t continually blame everyone else for that. We have to shift our perspective and change our pitch.”
With regard to the procurement of AV systems as part of a construction project, late AV systems involvement “forces projects down a two-stage tender route”, says Daniel Watson, senior consultant – AV and multimedia at PTS Consulting. “Communication between the main contractor/builder and the AV integrator is key, and as such this relationship often takes precedence over how suitable and/or capable the integrator is to deliver on the project.”
‘Procure AV earlier’
While single-stage tendering provides the project with greater cost certainty earlier on, the total cost of variations (such as changes to system designs) can be expensive.
“Given the speed of technology innovations and constant changes in user habits, this is a serious risk to a client,” Watson warns.
With the two-stage process, the client may enjoy greater flexibility but the AV systems’ cost is a moving target. What’s more, the full impact on other services (IT, building management systems, mechanical and electrical) is also unknown until the project end. According to Watson, this is often the root cause of the narrowing of commissioning windows on site as all trades are commissioning at the same time.
“If AV was taken into consideration earlier in the project lifecycle, and consultant practices were engaged earlier, much of the design development, tech trials and third-party integration requirements (IT and networks for example) would be completed upfront,” he insists.
Manufacturers are increasingly committing specialist engineering resources to work with integrators, especially at the commissioning stage. PTS Consulting reports that a number of manufacturers are providing commissioning services that can be specified by the consultant as part of the AV systems invitation to tender/specification package.
No ‘one-size-fits-all’
Ultimately, the key to successful investment into new technologies is to ensure high user adoption, so new solutions need to be rigorously tested and analysed to ensure they are intuitive to use, perform as intended and meet user expectations before being deployed.
Of course, decisions must also fit with the firm’s wider investment policy. “Once the decision has been made to invest in new technologies that meet the needs of users, it is essential that procurement, facilities, IT and AV managers work together to ensure new solutions they want to deploy are in line with the company’s wider AV investment strategy,” says Standertskjold.
That being said, there is no one-size-fits-all approach here. Every enterprise is unique, with specific needs, so suppliers need to collaborate closely with all AV integrators and managers in order to offer customised solutions that best achieve an organisation’s strategic aims.
By having these conversations between end users, integrators and manufacturers, AV suppliers can continue to have a finger on the pulse of the market and develop solutions in line with the requirements of modern enterprises.

Ready to play a role in what may be the Golden Age of Episodics?

copywriting for Sohonet
Everyone can see that the media and entertainment industry is undergoing a seismic transformation, but the huge bets being placed on its future in some quarters may not be universally shared. There are sizeable financial rewards at stake at the same time that there are snake pits to avoid. Preparing for the journey need not be done without a map.
Experienced facility chiefs and entrepreneurs alike are able to read the runes. It doesn’t take an analyst to divine where the trend toward exponentially rising content costs might lead. Netflix continues to lead the charge, ramping up its annual content spend above an incredible (and possibly unsustainable) $15 billion this year, in doing so racking up 150 million subscribers worldwide and pulling rival content owners, broadcasters and SVOD players in its wake.

The bulk of this unprecedented spend is going not on feature film but on episodic TV, where consumer expectations for quality and production value just get higher and higher. Virtually every part of the pipeline, from production to VFX, sound mixing and editorial, is impacted. More demand, more content, more need for services. That demand is not going to abate, at least for the foreseeable future.


While high-end VFX was once the sole preserve of theatrical spectaculars like Avengers: Endgame or major episodic investments like Game of Thrones, the tentpole stories now being commissioned for the small screen, like Amazon’s Lord of the Rings adaptation or Disney’s live-action Star Wars series (The Mandalorian, destined for Disney+), are likely to be VFX showstoppers on a par with anything we have seen to date.

Disney+ is just one of dozens of OTT services entering the fray, multiplying the number of outlets for high-grade digital storytelling. The ad landscape is splintered too as personalized and geo-specific ads follow content in targeting eyeballs across social channels from Facebook to Snapchat as well as continuing to cater for traditional broadcast and VOD offshoots.

Making, managing (and monetizing) all of this content at an affordable price, whether for features, episodics or commercials, is not possible without advances in technology and workflow. The primary tools at the disposal of facilities, and especially of VFX facilities charged with accommodating the surge in demand, are cloud compute and storage and the connectivity between them, on top of which artificial intelligence/machine learning can be deployed to deliver even greater time and cost savings.


We believe there will be a continued drive to public cloud resources for compute and storage and, increasingly, for the creative applications used by the artists themselves. Over the next few years, we can expect continued improvement in average connectivity speeds, combined with affordable software tools and the availability of a professional freelance workforce, to yield a revolution in post-production and VFX.

Gone will be the static business models based on fixed premises and large capital outlay, replaced by dynamic ‘VFX as a service’ facilities able to scale up production in the cloud within minutes and to site themselves anywhere to take advantage of VFX tax credits and talented freelance labor. The editing room will be increasingly connected, and increasingly mobile, keeping editors near production or near home.

Long heralded, this will be the era of the virtual workstation and a truly distributed workforce offering work-life advantages to freelance talent and studio heads alike while improving the speed and cost-effectiveness of the content creation itself.

Technology does need to continue to advance. For example, the management and collaboration tools in such a dispersed remote production environment need further refinement, but there’s no doubt this will happen.

And happen soon. The narrative arc we often hear is one that will take less than a decade. Indeed, we think that the continued explosion of file transfer at an individual contributor level will fuel a revolution in the distributed workforce in VFX and wider post-production by 2023.

No matter if you are a start-up digital boutique or a 700-seat international powerhouse, the importance of understanding these trends in order to capitalize on them shouldn’t be lost.

Thursday, 20 June 2019

5G Technology Meets the Achilles Heel of Smartphone Hardware

StreamingMedia
Today, video is the king of content demand—and it will remain so long into the future. NSR predicts that by 2022, 82% of all IP traffic will be video.
Video is also a prime mover for 5G, with upwardly revised predictions that 5G coverage will reach 45% of the world’s population by end of 2024.
When it comes to the consumer, the 5G emphasis has been on mobile. British operator EE, for example, has enhanced its multi-screen app BT Sport with 4K UHD and HDR timed to coincide with launch of its 5G network.
For other telcos, though, 5G means an opportunity to drive fixed line subscriptions to the home. Connect a 5G router to the set-top box or smart TV in the living room and deliver enhanced TV over the last mile.
Cable providers, too, can put 5G cells into street cabinets and cover the last 500 yards where replacing coax with fibre or enhancing it with DOCSIS 3 is a less viable option.
The other data-heavy application primed for 5G is gaming. It is arguably more of a game-changer than live video since real-time multi-player gaming isn’t possible, certainly over mobile, without it. It’s also nearly impossible to create a shared reality experience if the timing isn’t perfect—but 5G solves this.
Niantic, maker of Pokémon Go, is building a game that renders augmented reality with near-instantaneous latency of tens of milliseconds, meaning that in a peer-to-peer multiplayer AR game you can see where your friends actually are rather than where they were.
Synched with this is the potential of edge computing, in which logic is moved out of the device and into the cloud. After 20 years of CDNs, 5G can now put compute at the edge. If you can process more encodes and transcodes there, you can create thinner client apps. With extreme low latency you effectively stream from the edge, with less rendering on the device.
The concept of Niantic’s latest game, branded around Harry Potter, relies on edge compute to perform tasks such as arbitrating the real-time interactions of a thousand individuals playing in a tight geographic area.
But one thing is missing and it could be the Achilles heel of 5G in its early days.
Battery life sucks. Or rather, data intensive apps like video games suck battery life.
One review of Samsung’s 5G-ready Galaxy S10 reports an hour-long video draining power by 9%—in HD and at half screen brightness. Gaming saps energy further, with the S10 losing around 21% an hour.
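The arithmetic behind those figures is stark. A back-of-envelope sketch in Python, using only the drain rates quoted above:

```python
# Back-of-envelope battery life from the review's per-hour drain figures
video_drain = 9    # % per hour: HD video, half screen brightness
game_drain  = 21   # % per hour: gaming

print(f"Video playback: ~{100 / video_drain:.1f} hours on a full charge")
print(f"Gaming:         ~{100 / game_drain:.1f} hours on a full charge")
# Roughly 11 hours of video, but under 5 hours of gaming
```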
The lithium-ion batteries in current cellphones haven’t changed much in 30 years of consumer electronics and are nearing the end of their shelf-life.
There will likely be a pinch point between the development of more economical battery tech, possibly involving supercapacitors, and the migration of data to the cloud.
Without having to store and compute, the edge will turn your smartphone into a streamlined, slimline streaming device. But as it stands, 5G will strain the hardware in your pocket and the patience of newly converted subscribers.

Thursday, 13 June 2019

Catch-22: How it was shot

IBC
Joseph Heller’s 1961 novel Catch-22 is set in World War Two, but it’s clear that the makers of the first television adaptation - including series executive producer, director and star George Clooney - believe its satirical take on the insanity of war is just as relevant today.
It follows a US bombing squadron whose leaders continually raise the number of missions their men are required to fly before being sent home, resulting in no one being sent home.
The only way out is to claim insanity, but a request to be removed from duty is proof of sanity, hence the bureaucratic rule Catch-22.
“The very idea of war is absurd,” says cinematographer Martin Ruhe, ASC. “For anti-hero Yossarian this is simply about life and death. The stakes could not be higher. But for characters like Milo, war is a huge business opportunity. This is not just absurd; this is how war is.”
While Yossarian (Christopher Abbott) rages at the sheer insanity of it all, his problems are compounded by characters in his own army, including the profiteering Milo Minderbinder (Daniel David Stewart), mediocre commander Major de Coverley (Hugh Laurie) and parade-loving Lieutenant Scheisskopf (Clooney).
Ruhe had previously lensed The American, a taut thriller set in Italy, starring Clooney and produced by Grant Heslov. It was Heslov and Clooney who approached Ruhe to photograph Catch-22.
“They wanted it to look like something shot in World War Two, so I did some research mainly into period colour newsreel and high contrast footage,” Ruhe explains. “I shot some stills and played around with the look in Photoshop. The obvious decision would have been to shoot 16mm, but film cameras are not too practical, particularly for manoeuvring inside planes, so we had to find a look in digital that wasn’t too clean.”
The story is set on the tiny Italian island of Pianosa and was shot on location in Sardinia and areas around Rome, where the Mediterranean light helped Ruhe to find a look that exuded baking heat.
“We wanted this yellowish feel – to really feel the heat,” Ruhe explains. “It’s permanently hot, people are always sweating, it’s not a pleasant place. We added film grain for a richer texture that conveys the feeling of heat.”
The show’s producer, Hulu, also required a 4K finish, which led to Ruhe’s choice of ARRI Alexa Minis. “I wanted a small compact camera so we could shoot as much as possible in the planes. We had the fuselage of a real WW2 bomber (in a studio in Rome) to do all the flying shots with actors.”

Ruhe shot using Cooke S4 Prime lenses, which yielded aesthetic aberrations and flare, and used zooms in reference to films of the 1970s such as Robert Altman’s M*A*S*H. Even more compact 4K Flare camera heads (designed by IO Industries) were mounted on the planes used for aerial work.
“We’re using the zoom to draw attention to something, for example to pick someone out in a crowd and to follow them for a time. It’s not very subtle and I don’t usually do it, but it worked here.”
A major scene in the fifth episode involves an attack by German planes on the army base (arranged by Milo to boost the value of the planes remaining after the attack). Shot at night, Ruhe used HMI lights and gels to give a bluish-green hue for moonlight and then worked with illuminations from the explosions as planes are destroyed across the airfield.
“The shoot felt as big as a major feature and the way the story was treated felt like doing a film, but we were cross shooting several episodes at a time. On one day we’d be setting up multiple scenes in one location for different episodes with different directors, which is a big difference from a feature.”
Unlike the novel, the series unfolds chronologically from 1942 to roughly 1944, but it retains the book’s chaotic energy and sense of madness.
It’s rare for a national newspaper to praise the cinematography, but the UK’s The Guardian did in its review, calling the adaptation “immediately impressive – visually deserving of a bigger than a laptop screen – with a cohesive, arid palette and shots ranging wildly in scope from resonant closeup to sweeping landscape.”
“George and Grant were effectively working as showrunners plus directors. You must move fast with George. He is quick at making decisions and he’s also very visual, which surprised me. He knows how the camera moves and how to direct actors, and he’s very experienced, all of which makes him very easy to work with.”
Clooney directed two episodes, with Ellen Kuras (perhaps more familiar as a cinematographer on features like Eternal Sunshine of the Spotless Mind) and Heslov (producer of Argo and of the Clooney-directed Good Night, and Good Luck) each directing two.
Bird’s eye view
Ruhe was also reunited with the mainly Italian crew with whom he had shot The American.
“There’s something nice about going to places and working with the local crew – and these guys are fantastic,” Ruhe says. “You learn more because there are always so many ways to do things. You pick up things you didn’t think of.”
Also tricky was managing the considerable amount of airborne action; Ruhe tried to do as much in camera and in the air as possible. “There are only so many B-25 Mitchell bombers left in the world, but we had one and a Douglas DC-3 for a few days. We tried to get as much mileage out of them as we could, with camera mounts on the body and interior for aerial sequences. We also shot from a helicopter, but we had to turn over a lot of plates to VFX to enhance these scenes.”
DNEG was the sole VFX vendor, delivering 717 shots across 105 sequences under the supervision of Brian Connor out of Vancouver, with Dan Charbit supporting from DNEG’s Montreal office. Matt Kasmir was the on-set VFX supervisor for Hulu. Work included CG planes and military vehicles, water/ocean and beach extensions, sky replacements, CG flak, ground smoke, fire FX, CG clouds and destruction matte paintings derived from aerial photography.
Ruhe shot to prominence photographing Control, Anton Corbijn’s 2007 biopic of tragic Joy Division singer Ian Curtis. He also shot Michael Caine thriller Harry Brown (2009) and American Pastoral, the directorial debut of Ewan McGregor. Before Control he was a renowned pop promo director working with the likes of Depeche Mode, U2 and Coldplay and today juggles feature and TV work with commercials.
“For me, shooting commercials is useful because there are technical things you can learn and new gear to get to know, plus you meet new people,” he concludes. “But I love doing that with actors, which you can’t do in music videos. For me, the highest discipline and the best thing you can do is to tell a story.”

Wednesday, 12 June 2019

Planet-scale AR in Harry Potter: Wizards Unite


IBC
Niantic says Harry Potter: Wizards Unite will bring unprecedented scale to AR gaming. It could also provide a glimpse into the future of entertainment.
The company which jump-started consumer AR with the phenomenal hit Pokémon Go is back with a Harry Potter-themed game which promises to be the first real-time synchronised multi-player augmented reality experience.
It is primed for the introduction of 5G and could be the killer app which operators and handset makers need to get consumers to buy 5G smartphones and network subscriptions.
But the ambitions of its developer go far beyond simple gameplay.
The “planet-scale augmented reality platform” which underpins it is intended to function like a global operating system for applications that unite the digital world with the physical world – or, as Niantic’s John Hanke puts it, uniting holograms with atoms.
“We stand at the beginning of a whole new era of augmented reality experiences and a new digital interaction for information and entertainment,” the company’s founder and CEO said at Mobile World Congress in February.
“Yes, it is being hyped, but a paradigm change like this happens maybe once every couple decades.”
Pokémon Go has achieved over 2 billion downloads. The company’s vision and track record have valued Niantic at almost $4 billion, propelled by investors including Samsung Ventures and esports group aXiomatic Gaming.
It will be hoping for more of the same mass participation when it launches Harry Potter: Wizards Unite, made with the blessing of Warner Bros. and JK Rowling, later this year.
The title is built using an inhouse gaming engine “that allows hundreds of millions of players to play in a single global instance,” Hanke says.
Pokémon Go, which is built on this platform, has already demonstrated concurrent real-time usage of several million players in a single, consistent game environment, Niantic says, with demonstrated monthly usage in the hundreds of millions.
But the AR Platform, for which Harry Potter: Wizards Unite is the first application, is of another order entirely. With it, the San Francisco-based outfit aims to solve a number of the key limitations of current AR. Ideally, AR objects should be able to blend into our reality, seamlessly moving behind and around real-world objects in real time.
To tackle this, Niantic is using machine learning to determine the depth of every pixel in a video frame, then applying that to make virtual objects obey real-world physics.
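Per-pixel depth is what makes occlusion possible: a virtual object should disappear behind any real surface that is closer to the camera. The Python sketch below shows that compositing logic in its simplest form. It assumes a depth map has already been produced by some monocular-depth network (Niantic has not published its pipeline), and bounds checking is omitted for brevity.

```python
import numpy as np

def composite_with_occlusion(frame, depth_map, sprite, sprite_depth, x, y):
    """Overlay a virtual object only where it is nearer than the real scene.

    frame:        HxWx3 uint8 camera image
    depth_map:    HxW per-pixel depth in metres (e.g. from a depth network)
    sprite:       hxwx4 RGBA rendering of the virtual object
    sprite_depth: scalar distance of the object from the camera
    """
    h, w = sprite.shape[:2]
    region = frame[y:y + h, x:x + w]
    scene_depth = depth_map[y:y + h, x:x + w]
    alpha = sprite[..., 3:] / 255.0
    # Real-world pixels closer than the object occlude it entirely
    alpha = np.where(scene_depth[..., None] < sprite_depth, 0.0, alpha)
    region[:] = ((1 - alpha) * region + alpha * sprite[..., :3]).astype(np.uint8)
    return frame
```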
What’s more, it is using the same pixel depth data to map the physical location of every user on earth for AR experiences potentially involving billions of people.
If successful, it will challenge the efforts of both Apple and Google to establish a monopoly in the emerging AR field.
How? To begin with, it’s worth knowing that Niantic was a start-up within Google that helped build the apps that became Google Maps and Google Earth before being spun off in 2015 with Hanke at the helm.
He is taking a similar contextual mapping approach so that animated objects and characters (a Quidditch ball, a wand or a fantastic beast, say) are visible at the same time, in the same place and continuously in time to anyone with the app on their phone or with AR glasses.
“That means we have to photograph and analyse a user’s immediate environment and their positional data to create an AR map in the cloud and serve it back to share with other users.”
Understanding the AR world
Niantic’s AR is an attempt to move from computer models of the world centred on roads and cars, like Google Maps, to one centred on people.
To help with that, it is using a dataset submitted, curated and updated over the past six years by players of Pokémon Go, which it is combining with other datasets to build contextual computer vision.
According to Niantic, such advanced AR requires an understanding of not just how the world looks, but also what it means: what objects are present in a given space, what those objects are doing, and how they are related to each other, if at all.
“Once we understand the ‘meaning’ of the world around us, the possibilities of what we can layer on is limitless,” it explained in a blogpost. “We are in the very early days of exploring ideas, testing and creating demos. Imagine, for example, that if our platform can identify and contextualize the presence of flowers, then it will know to make a bumblebee appear. Or, if the AR can see and contextualize a lake, it will know to make a duck appear.”
Niantic has the financial resources to code and acquire the tech to do this. In November 2017, it bought Evertoon, a start-up exploring digital social mechanics. In February 2018, it acquired mapping and computer vision specialist Escher Reality, and followed that last June by adding London-based start-up Matrix Mill.
This is now Niantic’s London office, where Matrix Mill’s trio of neural scientists, all with a shared University College London background, are using computer vision and deep learning to develop techniques to understand and contextualise 3D space from information culled from the smartphone cameras of game players.
As Hanke puts it, “the larger the vocabulary, the more understanding we have, and the richer the AR on our platform can be.”
A prototype virtual dodgeball game, Codename: Neon, was developed last year to test the company’s contextual computer vision, in which AR objects understand and interact with real-world objects and people. For example, players in the game can harvest energy from white pellets on the ground, and those are a shared resource, so if one player gets them, the other players can’t.
“All the action, firing, dodging and absorbing of energy is shared with all other players at a very low level of latency,” says Hanke.
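Keeping such a shared resource consistent across players means a single authority has to arbitrate competing claims. A minimal server-side sketch in Python, with hypothetical pellet and player identifiers (Niantic has not published its game-server design):

```python
import threading

class SharedPellets:
    """Authoritative arbitration of a shared in-game resource:
    the first claim on a pellet wins; all later claims fail."""

    def __init__(self, pellet_ids):
        self._lock = threading.Lock()
        self._available = set(pellet_ids)

    def claim(self, pellet_id, player_id):
        """Return True if player_id harvested the pellet, False if
        another player already took it (or it never existed)."""
        with self._lock:
            if pellet_id in self._available:
                self._available.remove(pellet_id)
                return True
            return False

arena = SharedPellets(["p1", "p2"])
print(arena.claim("p1", "alice"))  # True  - alice gets the energy
print(arena.claim("p1", "bob"))    # False - p1 is gone for everyone else
```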
Lagging behind
Another internal experiment, Tonehenge, encourages people to work together to solve intricate Myst-like environment puzzles.
Some of the features of these games will reappear in Harry Potter: Wizards Unite.
The other Achilles heel of AR is the latency of data being sent over the network in response to user actions. It’s nearly impossible to create a shared reality experience if the timing isn’t perfect – but 5G solves this.
“Even good latency times today are 100 milliseconds. With 5G we can get that to a near instantaneous tens of milliseconds,” Hanke said.
To put this in perspective, at a rendering rate of 60fps a new image is displayed roughly every 16.7ms. According to the company, this means that in a peer-to-peer multiplayer AR game you can see where your friends actually are rather than where they were.
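A quick way to see why this matters is to express network round-trip time in units of that frame budget; anything much above one frame of lag breaks the illusion of a shared scene. A small illustrative calculation:

```python
# Express network round-trip time in units of the 60fps frame budget
fps = 60
frame_budget_ms = 1000 / fps              # ~16.7 ms per displayed frame

for rtt_ms in (100, 30, 10):              # 4G-era vs 5G-edge round trips
    lag_frames = rtt_ms / frame_budget_ms
    print(f"{rtt_ms:3d} ms RTT = {lag_frames:.1f} frames of lag")
# 100 ms is ~6 frames behind reality; 10 ms is under a single frame
```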
The company’s cloud-based platform is designed to make it easier for other developers to create AR apps which can run on any device, unlike Apple’s ARKit and Google’s ARCore, which are focused on iPhones and Android smartphones respectively.
Modelling a ‘people-focused’ world of parks, trails, sidewalks, and other publicly accessible spaces still requires significant computation. The technology must be able to resolve minute details, to specifically digitise these places, and to model them in an interactive 3D space that a computer can quickly and easily read.
This is enabled by mobile edge computing, in which processing power is moved closer to the user, at one of the millions of new 5G cell sites being installed, allowing Niantic to perform compute-intensive work such as arbitrating the real-time interactions of a thousand individuals playing in a small geographic area.
It has partnerships with Deutsche Telekom, Korea’s SK Telecom and Samsung.
“If you want to build compute intensive shared AR experiences, we need the next level of network,” Hanke says.
All of this presupposes a future of ubiquitous wearable computing, one in which the augmented reality experience is inherently shared and social.
If that’s to work, Niantic believes the AR interaction must feel natural to our senses. “The digital would obey similar rules to the physical in order to create the suspension of disbelief in our brains,” explains Diana Hu, formerly of Escher Reality and now Niantic’s head of AR Platform.
For example, in Pokémon Go when it rains in a player’s location in the real world, that is reflected in the game.
Last year Niantic launched a contest for developers to share ideas and build new experiences on Niantic’s platform. The winner stands to receive one million dollars and will be announced later this year.
“It’s all about unleashing the power of indie developers,” Hanke says.
In Niantic’s world, our everyday experiences are enhanced by hardware that is unobtrusive, can go anywhere, and is connected in real-time with low latency 5G connections.
A similar, even rival, concept for mixed reality spatial computing at scale is being imagined by Magic Leap.
It will be interesting to see if and when those worlds collide.