Monday, 30 May 2022

AI: The robots are coming

RedShark News

article here

The advance of AI, and its increasing capacity to perform work with a creativity indistinguishable from that of humans, is fuelling more discussion and some concern.

At the Venice Biennale, running now until July, visitors can see an “ultra-realistic humanoid robot artist” called Ai-Da. She’s been trotted out for a few years now and this time is showcasing paintings generated by her AI and executed by her robotic hand. 

Its British inventors have moved beyond asking whether robots can make art to exploring a new question: now that robots can make art, do we really want them to? 

Soon, AI algorithms “are going to know you better than you do,” warns Ai-Da’s co-inventor Aidan Meller in The Guardian. “We are entering a world not understanding which is human and which is machine.” 

Going further, he implies that society could be edging away from humanism, into an era where machines and algorithms influence our behaviour to a point where our “agency” isn’t just our own. 

“It is starting to get outsourced to the decisions and suggestions of algorithms, and complete human autonomy starts to look less robust. Ai-Da creates art, because art no longer has to be restrained by the requirement of human agency alone.” 

Art, or beauty, is famously in the eye of the beholder, so if a machine creates something and we accept it as art, then it is art. 

This is what researchers Leah Henrickson and Simone Natale have termed the “Lovelace Effect” (named after the 19th century mathematician who essentially programmed Charles Babbage’s Analytical Engine for him, and whose Christian name is, not coincidentally, Ada). 

The Lovelace effect shifts the focus from the technological capabilities of machines to the reactions and perceptions of those machines by humans. 

“How, where and why we interact with a technology; how we talk about that technology; and where we feel that technology fits in our personal and cultural contexts,” all have a bearing on whether what we see or hear is called art, Natale and Henrickson say. 

AI in the workplace

That all our jobs are in danger of being replaced by AI is presented as a certainty by AI expert Kai-Fu Lee. In his new book AI 2041 he predicts that blue-collar and white-collar jobs alike will be phased out of existence as AI proves it can do those jobs better – and cheaper.  

Moreover, any craft-related jobs that require dexterity and a high level of hand-eye coordination will also eventually be taken over by AI by 2041. That would include many areas of post production such as VFX, animation and assembly edits. Even programme direction of as-live sports matches could be done by a bot. 

“Engineering is largely cerebral and somewhat creative work that requires analytical skills and deep understanding of problems,” Lee told IEEE Spectrum. “And those are generally hard for AI. But if you’re a software engineer and most of your job is looking for pieces of code and copy-pasting them together—those jobs are in danger.”  

To adjust to the digital AI era, US academics Paul Leonardi and Tsedal Neeley urge us to understand the basic tenets of coding, programming languages, scripts, algorithms, compiling, and machine language. 

Developing a digital mind 

In their new book The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI, they argue that lacking this basic digital awareness would make it difficult to participate in the digital economy. 

Critically, this mindset requires a shift in how we think about our relationship to machines. We shouldn’t anthropomorphise AIs but treat them as exactly what they are – machines built with human input and embedded with human bias. 

“Even as they become more ‘humanish’, we need to think about them as machines,” they write. 

They point out that advances in AI are moving our interaction with digital tools towards more natural-feeling and human-like exchanges. For instance, conversational user interfaces like “Hello Alexa” or “OK Google” let us work with digital tools through writing or talking in much the same way we interact with other people.  

The problem is that these AIs aren’t quite up to human mental agility or mimicry yet.  

“We are still some ways away from effective human-like interaction with the technology,” say the professors. 

But it seems inevitable that AI will catch up, not least because the neural networks that power it are modelled on the way our own brains work. 

Does that mean AI ultimately attains consciousness? 

Graz University’s Wolfgang Maass has hinted as much, saying future neuromorphic setups may one day begin to explore how the multitude of neuronal firing patterns work together to produce consciousness. 

 


Friday, 27 May 2022

Sashi Kissoon / Death of England: Face to Face

British Cinematographer

article here

January 2021. Britain is locked down, divided, and facing some difficult truths. Old friends Michael (Neil Maskell, Peaky Blinders) and Delroy (Giles Terera, Hamilton) are dealing with issues much closer to home. 

Tensions are running high, and racism isn’t far below the surface in this funny, political, and explosive drama that merges theatre techniques with those of film.  

The project’s origin is a pair of critically acclaimed stage plays written by Clint Dyer and Roy Williams and performed at the National Theatre in 2020. ‘Death of England’ starring Rafe Spall and ‘Death of England: Delroy’ starring Michael Balogun (after Terera withdrew due to a last-minute health issue) were essentially monologues, staged in the round. 

The writers scripted a standalone piece for the screen as a two-hander featuring the main characters, with Dyer directing the NT production for Sky Arts. This followed on from the success of the NT’s first original film, Romeo & Juliet. Death of England: Face to Face was filmed on the NT’s Lyttelton stage over 15 days and combines filmmaking with theatre in its blocking, lighting, camera work and breaking of the fourth wall. To that end, it required the extensive pre-production participation of director of photography Sashi Kissoon (Venice at Dawn, Genius by Stephen Hawking).

“It’s the greatest creative experience of my career, not only in terms of how I shot it but being invited into script breakdowns to help shape the film and being in the two weeks of rehearsals,” says Kissoon, who had worked with Dyer on short film Swept Under Rug and commercials previously. “The collaboration was very open. It’s very rare that a DP is encouraged to talk in depth about story.” 

He explains, “I met with Clint and production designer Samantha Harley 2-3 times a week over a month and workshopped the script. One early idea was how to illustrate the internal memory of the characters. We called this the ‘brain space’ and it’s the key idea of setting these scenes in a black space.” 

For Michael’s visit to the corner shop in his brain space, Kissoon pitched the idea of using forced perspective, shrinking the set down to make the shopkeeper and shop feel small. 

Another idea was to use two different aspect ratios throughout. Although they ended up staying mostly in 2.39:1, when the film cuts to flashbacks – flickers of just a few seconds – these are shot in 16:9 to help differentiate them. 

Kissoon suggested using composite shots of actors within the same frame. The idea was to present characters looking back at themselves in self-reflection. He says, “These are moments when in hindsight we question why we do the things we do. I did some tests and showed how it could be done without any expensive VFX.” 

While that was something not possible in live theatre, the production also planned an elaborate shot (not used in the end) during the climactic fight scene in which, while the camera spun 360 degrees around the room, the damaged walls of the flat would be lifted clear by the stage crew to reveal as-good-as-new walls. 

“In the film world, that might have taken weeks to arrange but the theatre crew were so used to doing things like rapid major set changes, it proved no problem.” 

It also made sense, partly for budget, to use the theatre’s own lighting kit and therefore to employ the theatre lighting crew who knew exactly how to use it. 

“With my gaffer and best boy, we did a film school lighting class, and we had a crew from the NT who knew theatre lighting inside and out and helped build little rigs above the set to give us more control.” 

Kissoon shot on the Sony Venice principally because of its ISO 2500. “The big aesthetic for the brain space was that the shadows would fall off the blacks. I didn’t want to see any detail, so we wanted to raise our stop high enough to have everything else fall off to shadow. 

“The other benefit of the Venice is its versatility. So, when we’re shooting the present-day scenes, we use spherical Zeiss Supreme Radiances in full frame mode, and for flashbacks it’s Lomography’s specialist Petzval 58mm lens with the Venice switched to 4K anamorphic mode.” 

Kissoon liked the warping effects from the Petzval at the edge of the frame, while the centre of frame stays clean for portraiture. “I wanted the audience to know exactly what they were seeing straight away in those few seconds.” 

He also deployed the lightweight RED Komodo armed with a 20mm standard speed Zeiss for a few shots. These included one where Delroy gets pulled into a fridge as the actor talks to camera. 

“I built a rig that enabled Giles to grab the camera off me and take control of it. Another instance is a POV of a man being beaten up by Michael, and then a handheld shot of Delroy taken in a lift.” 

The look of the feature is crisp, modern, and warm-toned in places and conceived in conversation with Dyer and Harley to challenge the conventional look of ‘poverty porn’. 

“When a story is set in a council flat you tend to get the same handheld 16mm desaturated look which we felt does such a disservice to people who live there. Just because someone may not have as much money as anyone else, it doesn’t mean they are any less happy.  

“The idea for the design of the ex-council flat is that Delroy owns it and is doing it up. We learn he once had a decent job. The living room has a warmer feel, but the kitchen retains the old fluorescent tube because he hasn’t got around to redecorating that area yet. 

“Using Venice in 6K full frame mode you just get this extra richness to the image which is the inverse of the gritty kitchen sink style. That’s also why everything in the flat, apart from the fight, is on a dolly. We wanted this elegant motion to show a different side to life in these buildings.” 

The full frame enabled Kissoon to use a shallower depth of field to keep as much separation of the two characters from their background as possible.  

“Again, it’s the idea that where you live doesn’t define who you are. So many people judge others because they live in this neighbourhood or that type of house. Then when we flip into anamorphic, I want the audience to subconsciously feel they’re in a different world. The old anamorphic lenses warp the image to give it that extra feeling of being unnatural.” 

Aside from the NT team, Kissoon commends grip Tony Sankey, focus puller Chris Steel, and colourist Asa Shoul. “It was a team effort and a collaborative process that wouldn’t have been possible without any one of them.” 

Thursday, 26 May 2022

How the Churn Turns: Streaming Apps Get Repeat Viewings

NAB

article here

The ability of streamers to stem churn is the new frontier in the battle for consumer wallets, according to a new survey from Samsung.

Understanding and identifying the churn risk — and the retention opportunity — is likely to define the next wave of success for TV app marketers, the report finds.

“TV app marketers have to work a lot harder than traditional broadcast companies to achieve consumer loyalty, and build brand awareness,” says Justin Evans, Global Head of Analytics & Insights, Samsung Ads. “It is critical to focus equally on retention and new customer acquisition to succeed in this competitive streaming landscape.”

The report, available via TV Rev, combines behavioral insights drawn from Samsung Smart TVs with an attitudinal survey of 1,000 owners of Samsung Smart TVs in the US, to shed light on motivations.

While the future for TV apps looks robust — some 88% of respondents intend to use streaming apps more or at the same level, and only 12% intend to cut back — the real fight is to retain subscribers who habitually swap out services.

Churn remains high, at 50% on average, per the survey, and at any given moment, approximately one third (32%) of a streaming app’s audience is new. With so much content at consumers’ fingertips, if they can’t find it, and fast, they aren’t going to watch it.

“One thing is for certain,” advises TV Rev. “Don’t use the same marketing playbook you’re familiar with for linear TV. The goal posts are not the same.”

For example, while there are thousands of apps available on smart TVs in 2022, audiences use an average of just 3.8 apps per quarter for streaming. That’s a fraction of traditional channel surfing, where the average household watches between 10 and 30 networks per quarter, which means that brand awareness and content discovery have never been more important for app marketers.

When Samsung asked consumers their top motivations for trying new streaming services, cost was a significant factor. A third indicated that they chose to try a service because it was free or low-cost. Another third indicated that they’d try a streaming service packaged at no cost with another service purchased. A third valued the chance to watch exclusive, original content.

When asked what causes them to leave an app, it came down to content and money. Nearly 40% of consumers leave an app because there isn’t enough original and exclusive content, while 36% cite cost as a reason to leave.

So, what are the lessons here for marketers? Nearly a third of the average TV app’s audience is new to the app — a high rate of users discovering and sampling. Still, the momentum for discovery is slowing somewhat, indicating a need for data-driven strategies for audience acquisition.

“Understand who your churned users were before they lapsed,” advises the Samsung report. “For example, did they simply sample the app and never make it past authentication? Or did they lapse despite using your app multiple times?”

Additional analysis might examine the difference between “light” versus “heavy” app users: “Once you know what makes someone binge vs. ‘drop in,’ your programs will strengthen.”

It follows that marketing campaigns need to be tweaked for different sets of users. Retention, says Samsung, is just as important as acquisition and strategies should be tailored to each audience.

 


Will Danny Boyle’s Punk Manifesto “Pistol” Shock the Establishment?

NAB

article here

It’s no coincidence that Pistol arrives during the Queen’s Platinum Jubilee. That’s a celebration of the UK’s monarch being on the throne for 70 years, and if you like that sort of thing then good for you; if not, then in the UK at least we get a couple of days’ holiday.

In 1977, on the occasion of Elizabeth II’s Silver Jubilee, the Sex Pistols’ ‘God Save The Queen’ was released, singing of “the fascist regime” to shock the establishment.

The album from which it came, ‘Never Mind The Bollocks’, is a bona fide classic, number 80 on Rolling Stone’s all-time 500, even if that’s the last thing the band’s members would have wanted.

Now Danny Boyle has directed a mini-series about the band, whose story ended with notorious frontman Sid Vicious (John Ritchie) dying of an overdose after the possible murder of his girlfriend Nancy Spungen.

That the six-part series is made for Disney-owned FX may be one reason Johnny Rotten, one of the band’s original members, has refused to endorse the show; but that’s par for the course and Boyle says he wouldn’t have it any other way.

“I want Johnny Rotten to attack it!” Boyle told The Guardian. “It’s so not the story that everybody wants to be told, but it is the story that should be told.”

The Pistols’ story has already been made into the feature Sid and Nancy, directed by Alex Cox and starring Gary Oldman and Chloe Webb, and The Great Rock’n’Roll Swindle – orchestrated by the band’s manager Malcolm McLaren to claim the whole thing was a contrivance to make money. In 2000, the band members released their own movie, The Filth and the Fury, but IndieWire claims Boyle’s is by far the most ambitious.

It is based on Lonely Boy: Tales from a Sex Pistol, guitarist Steve Jones’s autobiography, and stars Toby Wallace as Jones, Anson Boon as John Lydon (Rotten), Louis Partridge as Sid Vicious, and Emma Appleton as Spungen.

The Sex Pistols were the “philosophers and the dress code” of the punk revolution, Boyle tells the New York Times. “I tried to make the series in a way that was chaotic and true to the Pistols’ manifesto.”

That meant taking an experimental approach to filming: “We would just run whole scenes, whole performances, without knowing if we had captured the ‘right’ shot or not. It’s everything you’ve been taught not to do.”

Before filming began, the actors playing the members of the Sex Pistols spent two months in “band camp,” with a daily routine of music lessons, vocal coaching and movement practice tutored by Karl Hyde and Rick Smith from the British electronic music group Underworld.

To keep some of that raw DIY edge Boyle also decided not to do any postproduction work on the music. 

This was apparently a passion project for the director of Yesterday, a Beatles-soundtracked romantic comedy.

“I am very music-driven, but I never imagined doing the Pistols,” he said. “I had followed John Lydon’s career closely, and the hostility he felt for the others wasn’t a secret.”

But after reading the script, Boyle immediately said yes.

“Which was ridiculous since I didn’t even know if we would have the music, the most important thing.”

Lydon opposed both the use of the Sex Pistols’ music and the series itself, but eventually lost his court case when a judge ruled that the terms of a band agreement gave drummer Paul Cook and Jones a majority vote. Boyle said he had attempted to contact Lydon during the dispute. He added that he hoped the series would “reveal the genius and the humility” in the frontman.

Flattery got him nowhere, with Lydon telling the Sunday Times that Pistol was “the most disrespectful shit I’ve ever had to endure.”

(Though Lydon arguably sold out years ago if you look at his appearance as a contestant on the reality show ‘I’m A Celebrity… Get Me Out Of Here!’ in 2004 and his subsequent promotion of a brand of British butter in commercials.)

Boyle believes that one of the advantages of streaming, as opposed to telling the story as a 90-minute feature, “is that it’s willing to take on board that kind of complexity – and look for the attachment of the audience not through quite such easy tropes: the lovable one, the hero moment where he’s not quite as bad as you thought he was.”

He tells Esquire (https://www.esquire.com/uk/culture/tv/a39839241/danny-boyle-pistol-interview/) in another interview why the show got made: “If I’m being brutally honest, I think it was more to do with my age and ability to get it made. I wanted to do punk because it was the big formative experience for me, and it’s overshadowed everything I’ve done.”

Arguably you could trace a lineage of punk through Boyle’s work, from the heroin addiction of Trainspotting, to which he brought an energy outside of mainstream filmmaking, through to Slumdog Millionaire, a rags-to-riches dream set in Mumbai, though it’s a stretch to call the Boyle of Steve Jobs, 28 Days Later and The Beach a punk filmmaker.

He also cemented his establishment credentials by directing the opening ceremony for the 2012 London Olympics, which featured Daniel Craig’s 007 on Her Majesty’s service to launch the Games.

Music aside (Boyle was 21 in 1977, so just the perfect age for punk rebellion), it is the director’s working-class, Northern England roots that are the strongest through line in his work from Shallow Grave to Pistol.

“[The Pistols] were a bunch of working class guys who broke the order of things, more than the Beatles,” he tells NYT. “It was especially resonant in the UK, where the way you were expected to behave was so entrenched.”

The lyric from ‘God Save the Queen’ – “There is no future in England’s dreaming” – is arguably more political today, post-Brexit, than it was then.

Perhaps Boyle’s most punk career moment was sticking to his guns and the creative vision of regular screenwriting collaborator John Hodge when disagreements arose during the making of No Time To Die. Boyle was fired, and hints to Esquire that the issue had to do with the way they used Bond’s child.

Perhaps getting caught up in the machine, as he did with Bond, is a mode of working to which Boyle is not fundamentally suited.

“I’m much better under the radar a bit,” he told The New York Times in 2007, “and actually figuring out how to make things work.”

 


Wednesday, 25 May 2022

Taking Those First Steps Into Web3

NAB

article here 

Marketers might be feeling the pressure to develop a presence in the metaverse, and that means getting to grips with Web3. In many ways the interconnected virtual worlds of the nascent metaverse are an evolution of existing experiences between brands, artists and audiences. But of course, it is also more complicated than that.

“Web3 is compelling enough to command attention but daunting enough to stall many efforts before they even start, especially for brands that are tasked with other strategies, executions, and KPIs,” says Cathy Hackl, a tech futurist writing at Forbes.

Although storytelling remains the primary link between a brand and its audience, Web3 does throw a few wrenches into the works. Alongside Jeremy Gilbertson, who describes himself as a “Metaverse Methodologist,” Hackl offers a framework to help brands understand Web3 and create an informed position to develop and deliver tangible ideas in the metaverse.

“The most difficult part for brands is trying to take a stance on something that is being built in real-time by the entire community,” says Gilbertson.

Here are the three major considerations Gilbertson and Hackl outline within their brand framework to help marketers claim a stake in the metaverse:

Define the Metaverse for Your Brand

Define what the metaverse actually means for your brand. Gilbertson explains, “while there are many interpretations of the metaverse, the most important definition is the one created by your team. Blanket definitions can serve as inspiration, but it’s not as simple as grabbing pieces of existing copy.”

To get to this point, it is advised that marketers build “a nimble, interdisciplinary, internal Web3 team.” Although that is surely easier said than done.

Buy or Build?

Decide whether to buy or build a metaverse platform. That’s because a core part of what Web3 means is the migration of ownership from a platform to the creator.

“Instead of populating platforms with your intellectual property, you now are the platform,” says Hackl. “You can control your instance of the metaverse by building your own experiences or buying that ability from instances that already exist.”

Gilbertson advocates a strategy that combines both buying and building. For example, brands can launch pop-up experiences on metaverse platforms like Decentraland or The Sandbox.

“Compelling experiences require a community of users interacting in interesting ways, and unless you have a deeply engaged community eagerly awaiting your next project, you should meet the users where they are,” he says. “I like a hybrid approach of creating pop-up installations in these worlds to experiment and build community before driving traffic to your own presence in the metaverse.”

One of the main challenges facing marketers in the metaverse is an audience not in the least savvy with Web3 protocols. For example, the majority of any non-Web3-native brand’s audience doesn’t have a crypto wallet, which is necessary for trading within a Web3 economy.

“Is your audience Web3 curious or are they Web3 averse?” poses Gilbertson. “By understanding this, brands can create strategies that align with their audience’s fluency in Web3, and they can be an onboarding ramp authentically through their activations.”

So instead of approaching Web3 as a money grab or to combat an instance of FOMO, brands could help educate people about Web3 within a branded project.

“Think about how you can engage them through traditional channels as a bridge while teaching them about security and how to avoid phishing scams,” advises Hackl.

“The first step in the journey could be to set up a wallet, and once completed, the brand unlocks gated experiences as a reward.”

Plotting Outcomes

Another plank of the brand framework for Web3 is to plot the outcome. Apparently, this takes quite a bit of thought to get it right.

“For some, it’s profit-driven. If so, is it an extension of an existing line of business or an entirely new product?” questions Hackl. “Others see an opportunity to expand their audience, deepen the connection with their existing audience, or activations, while authentically extending brand presence into this new realm.”

Then there’s the hardcore tech part — the bits that actually connect a brand to the technical infrastructure of Web3.

“Even if brands don’t want to get too far into the weeds, a high-level analysis for types of blockchains, levels of interoperability and decentralization, user security, and environmental impact is a great place to start,” said Gilbertson.

For example, if one group in the organization creates an idea that serves the goals of their department but requires a technology that is difficult to integrate into the company’s overall technology stack, it may never come to life.

Similar to a traditional real estate model, brands will need to evaluate ownership in or access to the metaverse.

“While some ideas will be compelling enough to dedicate resources to alleviate these complexities, other nascent ideas will be discarded before they become compelling because they are not aligned with the company’s current technical capabilities,” says Hackl.

In other words, it’s all an experiment. But getting into the sandpit is the first step to learning how to play well with others.

 


By 2031, the Metaverse is Projected to Add $3 Trillion to the Global GDP

NAB

article here

The metaverse could add $3 trillion to global GDP in a decade if it tracks in similar ways to growth in the mobile industry, according to experts at economics consulting firm Analysis Group.

Estimating the economic impact of the metaverse presents substantial challenges, including the fact that it doesn’t exist yet.

That hasn’t stopped the analysts from taking a stab at it by modelling their calculations on the mobile communications market.

In the Analysis Group’s report, “The Potential Global Economic Impact of the Metaverse,” the main finding is that if the metaverse were to be adopted and grow in a similar way as mobile technology, then we can expect it to be associated with a 2.8% ($3 trillion) contribution to global GDP after 10 years.
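A quick back-of-envelope check (an illustration of the arithmetic, not a calculation from the report itself) shows what those two headline figures imply about the size of the global economy the model assumes by 2031:

```python
# Back-of-envelope check (our own arithmetic, not a figure from the report):
# if a 2.8% share of global GDP equals roughly $3 trillion, the model is
# implicitly assuming global GDP of about $107 trillion by 2031 (in the
# report's 2015 US dollars).
metaverse_contribution_trillion = 3.0   # $ trillion, per Analysis Group
share_of_global_gdp = 0.028             # 2.8%

implied_global_gdp = metaverse_contribution_trillion / share_of_global_gdp
print(f"Implied 2031 global GDP: ${implied_global_gdp:.1f} trillion")
```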

 “Put simply, there is no metaverse to measure as of today,” the analysts caveat. “Yet, rather than wait for some point in the future, we can apply existing tools and data from related sectors, technologies, and consumer behavior.”

Analysis Group explained that they set out to learn from the adoption process and economic impact of an existing technology to draw inferences about the potential adoption process and economic impact of the metaverse.

Mobile tech is particularly well suited for a number of reasons, they outline.

“The way mobile technology combined existing technologies such as phones, the Internet, cameras, and mp3 players and evolved to change how we use the Internet is reminiscent of the path the metaverse appears poised to follow.”

Combining these existing technologies into a single mobile device fundamentally altered how we connect with the Internet by overcoming limitations of geography. Existing conceptions of the metaverse “have a similar flavor of combining existing technologies, such as AR/VR, videoconferencing, multi-player gaming, and digital currency, and turning them into something new.”

The analysts’ extrapolation doesn’t stop there.

They continue to draw on data about the estimated impact today of mobile technology on global GDP. The report notes that the mobile technology sector directly employed about 12 million people globally — and another 13 million people in adjacent industries in 2020.

“By increasing access to information, mobile technology has reduced price dispersion of agricultural goods and increased welfare in developing countries,” they state. “Mobile technology has also increased financial inclusion in certain African countries,” the analysts add.

The concluding figure of $3.01 trillion (measured in 2015 US dollars) by 2031 is at the conservative end of the range of existing industry projections about the metaverse’s economic effect. These range from $800 billion to $2 trillion over the next few years for near-term impacts on gaming, social media, ecommerce, and live entertainment — while longer-term estimates range from approximately $3 trillion to over $80 trillion.

 


The Next, New Model is Stackable Streaming

NAB

article here 

Netflix shocked Wall Street with its first subscriber loss since 2011 but this doesn’t signal a waning in streaming in general. On the contrary, it seems the streaming wars have reached a point where an abundance of options makes growth harder for everyone.

Subscribers are spreading their time and their wallets across more services than ever. According to a survey by Hub Entertainment Research assessing the type of streaming TV bundles that people are putting together, the average number of streaming services watched monthly has increased from 5.7 in 2021 to 7.4 this year.

The biggest factor: more streaming providers, generating a greater share of new content.

The survey, based on interviews with 1,600 US consumers aged 16-74, found that half of them use three or more of the five biggest SVODs.

“Each of these offers thousands of titles. Stacking them means far more content competing for the same disposable time,” Jon Giegengack, principal at Hub Entertainment Research, said.

As the total cost of paid subscriptions stacks up, more viewers are adopting free ad-supported TV (FAST) channels. The percentage of viewers doing this is up to 58%, from 48% in 2021.

What’s more, rather than downsizing the number of services, a third of consumers intend to sign up for a new subscription in the next six months (vs. 21% last year). Seventy-seven percent of those people say they’ll keep all the other subscriptions they have now (versus replace one of them).

“In fact, viewers who stack more providers are also the most satisfied with their TV experience,” said Giegengack.

Among those who use eight or more TV sources, two-thirds say their TV needs are “very well met.” Fewer than half of those using fewer than four sources say the same thing.

“The relative stability we’ve seen over the past twelve months is not going to last long, with both the Discovery/Warner and Amazon/MGM mergers looming on the horizon,” added Giegengack. “Both mergers will likely shake up the current status quo and cause viewers to reconsider their existing bundles.”

 


Why Communities Are the Key to a Commercial Metaverse

 NAB 

article here

The metaverse will take years, if not decades, to fulfil its vision of interconnected, three-dimensional, Web3-powered worlds, but brands are encouraged to make the leap now.

If they don’t, they’ll be missing the chance to connect with audiences already playing in metaverse-like worlds.

The catch is: the rules of the game governing brand-audience relationships are different.

M&C Saatchi London’s Niall Wilson, writing an op-ed for The Drum, points out that the most popular virtual world mass-participant experiences online today are games — Roblox, Call of Duty, and Among Us being some of the biggest. Yet game-playing communities aren’t drawn from familiar “advertising” demographics.

“Age, gender, location, ethnicity and affluence aren’t the foundations on which these communities are built. I’m just as likely to be killed by my daughter in Fortnite as I am to bump into my uncle in Minecraft. I’m pretty sure they’ve met each other in Animal Crossing,” Wilson says.

What online games do is bring together diverse communities that share a passion for very specific experiences. Wilson suggests that those who like to be rebellious play Doom. People who like to compete: FIFA. Thrill seekers play The Last of Us and adventurers play World of Warcraft.

“The people who make mass multiplayer games are better at tapping into human passions and emotions than any other creators on earth. And brands familiar with connecting people and their passions find the world of gaming easier to penetrate,” Wilson writes.

Sports brands might find it easier than others to activate in the gaming virtual world. For example, Nike’s Nikeland experience in Roblox has been visited by more than 7 million people since it launched last November.

Other brands on the books of M&C Saatchi London, like O2, McDonald’s, Heineken and Coca-Cola, are making the transition to Web3 by continuing to connect communities to their passions, in much the same way that they have done before. Just virtually.

“The brands that may struggle, however, are those that have grown quickly through the precision audience targeting of the social web,” says Wilson.

He advises these brands to find games with playing communities that share their values. Then they should start helping these communities to grow by enhancing (not interrupting) their gaming experience.

“Charities such as Calm and The Kiyan Prince Foundation have paved the way, showing us that gaming communities can be much more altruistic than the echo chamber of social media,” notes Wilson. “Any brand unable to clearly articulate its values will fall even further behind than they are now.”

Anyone looking for immediate success in the metaverse will however be disappointed. Especially if success is measured against conventional indicators like reach, awareness, and attention.

“[These] may simply not provide the ROI that many marketers crave,” says Wilson.

Instead, brands should experiment in the space and invest to learn what works and what does not.

“Those willing to invest now will have a better opportunity to build trust and advocacy over a longer period with some of the most passionate, engaged and diverse communities on earth,” Wilson urges. Who wouldn’t want to do that?

 


Tuesday, 24 May 2022

The Metaverse Economy is on Course to Hit $140 Billion by 2025

NAB

article here

The technology needed to build the metaverse is already creating a sizeable economy, with revenue mostly coming from VR/AR, specialized servers for data centers, and 3D design software over the next two years, according to a report from Bloomberg Intelligence senior analyst Mandeep Singh.

Metaverse sales could gain 72% a year as tokens outpace VR/AR, the report suggests.

“Initially, revenue from the sale of virtual reality and augmented reality devices will be the highest portion of the metaverse market, before an installed base of at least 15-20 million engaged users allows companies to drive monetization through transactions and ads,” says Singh.

Meta’s additional capital spend of $10 billion every year to build the data-center infrastructure for its metaverse is likely to support leading GPU makers, including NVIDIA.

Unity and Matterport, along with other design-software makers like Adobe and Autodesk, are also likely to benefit from demand for 3D software used to build the metaverse.

Design software makers like these “may be among the major beneficiaries” of the growing investment in 3D virtual worlds, where people can interact with other people’s avatars and transact with digital assets.

Bloomberg calculates that the metaverse design software segment could expand by about 40-45% a year, driven by license and subscription sales for software companies such as Unity, Autodesk, Adobe and Procore.

However, though metaverse-related hardware and software spend “may keep rapidly increasing”, Bloomberg believes that integrating 3D immersive effects into uses beyond gaming, such as entertainment and e-commerce, will be essential for mainstream adoption.

It highlights the role of tokens and NFTs in helping unlock new business models. It calculates that token-based transactions driven by NFTs and blockchain-based currencies can boost the metaverse market to $140 billion by 2025 as 3D virtual spaces expand into shopping, events, social media, video conferencing and other consumer apps.

“We expect metaverse offerings to expand beyond gaming into 3D virtual spaces for shopping, concerts and sporting events,” says Singh. “The monetization will likely be driven by token-based transactions, with ads a much smaller portion at first.”

Token-based revenue is today about $7 billion, Bloomberg says, fueled mainly by gaming companies like Roblox and Epic Games, and could grow by more than 60% a year through 2025 spurred by integration with cryptocurrency and digital wallets.

The analyst says that existing social media platforms (Twitter, Instagram, Facebook, etc.) would likely suffer as eyeballs and ads switch to metaverse apps.

The growing metaverse economy is also likely to boost the fortunes of those companies offering high-performance computing (HPC). That’s because, in Bloomberg’s view, cloud-based HPC will be needed to crunch the data necessary for real-time (AI-driven) metaversian experiences to work.

HPC could be among the fastest-growing segments of the metaverse market, expected to expand at an annual rate of over 200%, based on Bloomberg analysis.

“Though metaverse infrastructure as a service, about a $1 billion segment, will probably be offered by most hyperscale cloud providers, we expect there will be more companies offering multicloud support,” Singh said.

 


How modelling the human brain can help improve computers

RedShark News

article here

In the quest to make machines that can think like a human or solve problems that are superhuman, it seems our brain provides the best blueprint.

Scientists have long sought to mimic how the brain works using software programs known as neural networks and hardware such as neuromorphic chips.  

Last month we reported on attempts to make the first quantum neuromorphic computers using a component called a quantum memristor, which exhibits memory by simulating the firing of a brain’s neurons. 

Going a more Cronenbergian route, Elon Musk (and others) are experimenting with hard-wiring chips into a person’s neural network to remotely control technology via brainwaves.

Now, computer scientists at Graz University in Austria have demonstrated how neuromorphic chips can run AI algorithms using just a fraction of the energy consumed by ordinary chips. Again, it is the memory element of the chip that has been remodelled on the human brain and found to be up to 1,000 times more energy efficient than conventional approaches. 

As explained in the journal Science, current long short-term memory (LSTM) networks running on conventional computer chips are highly accurate. But the chips are power hungry. To process bits of information, they must first retrieve individual bits of stored data, manipulate them, and then send them back to storage. And then repeat that sequence over and over and over. 

At Graz University, they’ve sought to replicate a memory storage mechanism in our brains called after-hyperpolarizing (AHP) currents. By integrating an AHP neuron firing pattern into neuromorphic neural network software, the Graz team ran the network through two standard AI tests.  
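As a loose illustration of the principle (a minimal sketch of our own, not the Graz group’s code, with invented parameter values), an AHP-style current can be bolted onto a simple leaky integrate-and-fire neuron so that recent spiking suppresses the cell for a while, giving it a form of short-term memory:

```python
import numpy as np

# A minimal, illustrative spiking neuron with an after-hyperpolarizing (AHP)
# current (parameter values are invented for the illustration). After each
# spike the AHP term jumps up and then decays slowly, suppressing the neuron
# for a while, so recent activity is "remembered" in the neuron's own
# dynamics rather than in a separate store that must be read and rewritten.

def simulate_ahp_neuron(inputs, dt=1.0, tau_mem=20.0, tau_ahp=200.0,
                        threshold=1.0, ahp_jump=0.3):
    """Return a spike train (list of 0/1) for a sequence of input currents."""
    v, ahp = 0.0, 0.0
    spikes = []
    for i_in in inputs:
        v += (dt / tau_mem) * (-v + i_in - ahp)   # leaky membrane integration
        ahp += (dt / tau_ahp) * (-ahp)            # slow decay of the AHP current
        if v >= threshold:
            spikes.append(1)
            v = 0.0            # reset the membrane after a spike
            ahp += ahp_jump    # each spike strengthens the AHP current
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    drive = rng.uniform(0.0, 2.0, size=500)       # noisy input drive
    print("spikes fired:", sum(simulate_ahp_neuron(drive)))
```

The point is only that the “memory” lives in the neuron’s own slow dynamics, which is what makes the spiking approach so much cheaper than shuttling LSTM state in and out of storage.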

The first challenge was to recognise a handwritten ‘3’ in an image broken into hundreds of individual pixels. Here, they found that when run on one of Intel’s neuromorphic Loihi chips, their algorithm was up to 1000 times more energy efficient than LSTM-based image recognition algorithms run on conventional chips. 

In a second test, in which the computer needed to answer questions about the meaning of stories up to 20 sentences long, the neuromorphic setup was as much as 16 times as efficient as algorithms run on conventional computer processors, the authors report in Nature Machine Intelligence.

As always, we’re still some way from the breakthrough making a real-world impact. Neuromorphic chips won’t be commercially available for some time, but advanced AI algorithms could help these chips gain a commercial foothold. 

“At the very least, that would help speed up AI systems,” says Anton Arkhipov, a computational neuroscientist at the Allen Institute speaking to Science. 

The Graz University project leader Wolfgang Maass speculates that the breakthrough could lead to novel applications, such as AI digital assistants that not only prompt someone with the name of a person in a photo, but also remind them where they met and relate stories of their past together.  

   

AI Ethics Are Vital, So Why Aren’t More of Us Talking About It?

NAB

article here

One of the areas the pandemic shocked into life was the rush to deploy AI algorithms in our national health systems. You can understand why: states jumped on anything to get the virus under control, and so we now have AIs that track and trace our health, triggering a new economic sector in the flow of biodata.

In and of itself that may be no cause for concern. What should be a worry for all of us is whose hand is on the tiller. The lack of progress on AI governance should be setting off alarm bells across society, argue a pair of esteemed academics at the Carnegie Council for Ethics in International Affairs.

Anja Kaspersen and Wendell Wallach, directors of the Carnegie Artificial Intelligence and Equality Initiative (AIEI), say that despite the proliferation of interest in and activities surrounding AI, we humans have been unable to address the fundamental problems of bias and control inherent in the way we have developed and used AI. What’s more, it’s getting a little late in the day to do much about it.

In their paper, “Why Are We Failing at the Ethics of AI?” the pair attack the way that “leading technology companies now have effective control of many public services and digital infrastructures through digital procurement or outsourcing schemes.”

They are especially troubled by “the fact that the people who are most vulnerable to negative impacts from such rapid expansions of AI systems are often the least likely to be able to join the conversation about [them], either because they have no or restricted digital access or their lack of digital literacy makes them ripe for exploitation.”

This “engineered inequity, alongside human biases, risks amplifying otherness through neglect, exclusion, misinformation, and disinformation,” Kaspersen and Wallach say.

So, why hasn’t more been done?

They think that partly it’s because society only tends to notice a problem with AI in the later stages of its development or when it’s already been deployed. Or we focus on some aspects of ethics, while ignoring other aspects that are more fundamental and challenging.

“This is the problem known as ‘ethics washing’ — creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.”

Another issue that is blocking what they would see as correct AI governance is quite simply the lack of any effective action.

Lots of hot air has yet to translate into meaningful change in managing the ways in which AI systems are being embedded into various aspects of our lives. The use of AI remains the domain of a few companies or organizations “in small, secretive, and private spaces” where decisions are concentrated in a few hands, all while inequalities grow at an alarming rate.

Major areas of concern include the power of AI systems to enable surveillance, pollution of public discourse by social media bots, and algorithmic bias.

“In a number of sensitive areas, from health care to employment to justice, AI systems are being rolled out that may be brilliant at identifying correlations but do not understand causation or consequences.”

That’s a problem, Kaspersen and Wallach argue, because too often those in charge of embedding and deploying AI systems “do not understand how they work, or what potential they might have to perpetuate existing inequalities and create new ones.”

There’s another big issue to overcome as well. All of this chatter and concern seems to be taking place in academic spheres or among the liberal elite. Kaspersen and Wallach call it the ivory tower.

The public’s perception of AI is generally of the sci-fi variety, where robots like the Terminator take over the world. Yet the influx of algorithmic bias into our day-to-day lives is more of a dystopian poison.

“The most headline-grabbing research on AI and ethics tends to focus on far-horizon existential risks. More effort needs to be invested in communicating to the public that, beyond the hypothetical risks of future AI, there are real and imminent risks posed by why and how we embed AI systems that currently shape everyone’s daily lives.”

Patronizingly, they say that concepts such as ethics, equality, and governance “can be viewed as lofty and abstract,” and that “non-technical people wrongly assume that AI systems are apolitical,” while not comprehending how structural inequalities will occur when AI is let out into the wild.

“There is a critical need to translate these concepts into concrete, relatable explanations of how AI systems impact people today,” they say. “However, we do not have much time to get it right.”

Moreover, the belief that incompetent and immature AI systems once deployed can be remedied “is an erroneous and potentially dangerous delusion.”

Their solution to all of this is, as diginomica’s Neil Raden critiques, somewhat wishy-washy.

It goes along the lines of urging everyone — including the likes of Microsoft, Apple, Meta, and Google — to take ethics in AI a lot more seriously and to be more transparent in educating everyone else about its use.

Unfortunately, as Raden observes, the academics’ broadside on the AI community has failed to hit home.

“It hasn’t set off alarm bells,” he writes, “more like a whimper from parties fixated on the word ‘ethics’ without a broader understanding of the complexity of current AI technology.”

 


Why (and How) You Have to “Think Digital”

NAB

article here

In 1995, Nicholas Negroponte — the founder of MIT’s Media Lab — wrote a book predicting how information, entertainment and interactivity would merge. He called it Being Digital, as accurate a title as you can wish for in an age where we must learn to see, think, and act in response to a world driven by data and powered by algorithms.

Now, a new book urges us to develop a digital mindset. That does not necessarily mean that we all need to master the intricacies of coding, machine learning and robotics, but it does urge a rethink in our approach to collaborating with machines.

In The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI, researchers and professors Paul Leonardi and Tsedal Neeley suggest that most people can become digitally savvy if they follow the “30 percent rule” — the minimum threshold that gives us enough digital literacy to understand and take advantage of the digital threads woven into the fabric of our world.

However, if business leaders in particular actually want to be successful they need to go further and develop “digital awareness.”

“Lacking a digital awareness would make it difficult to participate in the digital economy,” says Neeley, a professor of business administration and the senior associate dean of faculty development and research strategy at Harvard. “This also means we don’t have the capability of running organizations that are impacted by digital technology.”

To be successful, business leaders need to understand the basic tenets of coding, programming languages, scripts, algorithms, compiling, and machine language.

“This knowledge is crucial for understanding how digital applications are programmed and how computers are made to execute,” says Leonardi, a professor at the University of California.

For example, how do you collaborate successfully with machines? Perhaps counterintuitively, the authors say we should treat machines as machines and resist the temptation to anthropomorphize them.

“A digital mindset requires a shift in how we think about our relationship to machines,” they write in an excerpt from the book published by Engadget. “Even as they become more humanish, we need to think about them as machines — requiring explicit instructions and focused on narrow tasks.

“Advances in AI are moving our interaction with digital tools to more natural-feeling and human-like interactions,” continue Neeley and Leonardi. “What’s called a conversational user interface (UI) gives people the ability to act with digital tools through writing or talking that’s much more the way we interact with other people. Every ‘Hey Siri,’ ‘Hello Alexa,’ and ‘OK Google’ is a conversational UI.

“Interacting successfully with a conversational UI requires a digital mindset that understands we are still some ways away from effective human-like interaction with the technology. Recognizing that an AI agent cannot accurately infer your intentions means that it’s important to spell out each step of the process and be clear about what you want to accomplish.”
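To make that concrete (a toy sketch, not code from the book; the command phrasings and function name are invented), a conversational interface is ultimately matching against narrow, explicit intents, which is why spelling out each step works while vague phrasing fails:

```python
# A toy intent handler (invented commands, not code from the book): it only
# recognises explicit, narrowly specified requests, which is why spelling out
# each step works with a conversational UI while vague phrasing fails.

def handle_command(text: str) -> str:
    text = text.lower().strip()
    if text.startswith("set a timer for") and text.endswith("minutes"):
        minutes = text.removeprefix("set a timer for").removesuffix("minutes").strip()
        if minutes.isdigit():
            return f"Timer set for {minutes} minutes."
    if text.startswith("play") and " by " in text:
        track, artist = text.removeprefix("play").split(" by ", 1)
        return f"Playing {track.strip()} by {artist.strip()}."
    return "Sorry, I didn't understand that. Please spell out exactly what you want."

print(handle_command("Set a timer for 10 minutes"))       # explicit request: handled
print(handle_command("Remind me about the thing later"))  # vague request: rejected
```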

A related knowledge set that business leaders (or anyone) should understand is to spot bias in the algorithm. Data is not necessarily truth; it’s information that must be analyzed and challenged, the authors say. Someone lacking a digital mindset can easily be fooled into accepting data as gospel.

Data will never be unbiased, Neeley says, because biased humans gather data, interpret data, and sometimes build models that don’t take into account potential risks and harms from technologies derived from misunderstood or incomplete data.

“A digital mindset requires us to fully understand how to think about data, how to analyze data, and how to ask all of the right questions to ensure that no harms or risks are embedded in them as well,” she says.

So, we also need to arm ourselves with the ability to challenge data. Leaders need to ask how data was produced, who had access to it, and how well it represents the behavior organizations hope to understand.

Digital leaders must also be in a perpetual state of inventing, reinventing, and transitioning, the authors stress.

“Perhaps most of all, achieving a digital mindset means overcoming a fear of technology,” Neeley says.

“People cannot be afraid of technology. They cannot be afraid of data work. They cannot be afraid of entering an era where they have to learn something new every day. You have to understand how machines learn because otherwise, you won’t be the one leading your organization.”

 

Global SVOD Market to Hit $171 Billion in Five Years

NAB

article here

There will be 1.8 billion subscriptions to SVOD worldwide by 2027, making the market worth $171 billion, according to a new report from Rethink TV. The video research team also forecasts that the AVOD market will be worth $91 billion in ad revenue by 2027, based on 8.6 million monthly active users.

Contrary to recent popular reporting, Netflix will not fade, but will retain its top spot, with Disney second and HBO Max in third. But only because Netflix will have successfully launched an ad-supported tier. If the ranking takes only subscriptions into account, Disney+ comes out on top by 2027 in Rethink’s analysis, with Netflix second and HBO Max in third.

A key point from the survey is that we’re seeing the beginning of AVODs’ collective expansion into subscription models. Through the five-year period (2022-27), Rethink believes there will be no crossover between the two camps, but it does pose the question of whether AVOD platforms will attempt to challenge the traditional SVOD realm in the future.

Netflix’s recent confirmation that it would be exploring an advertising strategy is “news that has sent the market into a frenzy. Depending on the route that Netflix takes, the lines between SVOD and AVOD could be completely obliterated,” Rethink suggests.

ARPU from subs is expected to be relatively flat through 2027 due to increasing competition between SVODs.

“A speculative factor might be the move away from rolling annual subscriptions,” notes Rethink.  “Once consumers reach a pain point, with regard to the size of their monthly streaming bills, they will begin to cancel services.”

SVODs are likely to offer discounts to tempt these viewers back, which will suppress ARPU. If viewers cannot be counted on for an entire year’s subscription, SVODs might have to consider increasing their prices. The change would arise on the basis that subscribers are not going to be active for an entire year and will instead be transient, staying for only a few months of the year.

When it comes to AVODs, Rethink predicts that most services are set to see an increase in both Monthly Active User (MAU) hours watched and revenues.

YouTube makes up 40% of AVOD MAUs, with this proportion only set to grow further. “We expect that YouTube will become ever more dominant in the lives of web users outside of China, and increasingly recognized as a source of legitimately ‘premium’ video,” says Rethink. “This will only serve to fuel the viability of its premium tier — more YouTube users will mean more people who are willing to cough up the cash for an ad-free experience, especially as Google pushes monetization of its video outpost to the limit.”

The survey also examined the impact of the rise in SVOD and AVOD services on pay TV and broadband operators. Unsurprisingly, it does not foresee pay TV increasing in value among consumers. Instead, Rethink suggests that the fears pay TV providers have about becoming dumb pipes will intensify.

Another future avenue to explore is the comparative penetration of sports-focused streaming services (DAZN, fuboTV, etc.) — a market that Rethink believes “is going to be severely disrupted by sports leagues moving into direct-to-consumer services.”

 


Monday, 23 May 2022

Behind the Scenes: Top Gun: Maverick

IBC

article here


Having raced motorcycles, been strapped to the outside of an Airbus A400M at lift-off, put a helicopter into a controlled spin and signed on to film aboard the ISS, there was no way Tom Cruise was going to be chromakeyed into an F/A-18 jet for his return as Maverick in Top Gun: Maverick.

“The bar was set impossibly high every day of filming,” says editor Eddie Hamilton, who began on the project in summer 2018. “You can’t put average shots in Top Gun. You are constantly combing the dailies looking for something better.”

For all its cheeseball machismo, the original Top Gun set a new fast and furious template for action movies on its way to $350m at the box office in 1986. The film’s aerial dogfights didn’t just have to be matched in the sequel but bettered and that meant shooting as much practically as possible.

The Cruise ‘brain trust’ assembled for the project included director Joseph Kosinski (Oblivion), co-writer Christopher McQuarrie (who has written and directed Cruise in one Jack Reacher and two Mission: Impossible films, with another two on the way), Jerry Bruckheimer, who co-produced the 1986 smash hit, and Hamilton himself. The British editor previously collaborated with the actor-producer on Mission: Impossible – Rogue Nation and again on Mission: Impossible 7, as well as earning blockbuster spurs on Kick-Ass, X-Men: First Class and Kingsman: The Secret Service.

For Maverick’s pre-credits sequence they deliberately leaned into Top Gun’s over-the-top style, down to using the same gradient filters, music (by Harold Faltermeyer) and title font as the original. The first sequence they filmed was a high-speed tracking shot of Cruise on a Kawasaki motorcycle, dressed in Maverick’s leather jacket and Aviator sunglasses, racing an F/A-18 down a runway, framed against a classic Tony Scott sunset.

Pilot training

Resisting pressure from Paramount to shoot the movie’s aerial sequences first, Cruise campaigned for the actors to go through five months of intensive Navy flight training.

“I can’t just stick an actor in an F/A-18,” Cruise explains in the film’s production notes. “Not only are they going to pass out, there are so many things happening in that airplane. You have cameras. You have lighting. You have performance. They’ve got to be pulling Gs. They’ve got to be low. They have to have that experience in that aircraft. You see it. You feel it. You can’t fake it.”

So the actors trained in basic aerobatics and short-duration G-forces, then acclimated to the longer-duration G-forces experienced when manoeuvring an L-39 jet trainer, before graduating to the F/A-18 Super Hornet.

They flew in some of the most challenging and picturesque landscapes in America, including Rainbow Canyon, on the western edge of Death Valley National Park, and Washington’s Cascade Mountains.

Aerial filming

While the ground-based story was covered with the usual two to three cameras, the aerial work was astonishingly complex. Each twin-seat F/A-18 had six Sony Venice cameras fitted inside the cockpit – four on the actor seated behind the pilot and two over the shoulder of the pilot. The pilots wore the same wardrobe as the actors they were flying (each character identified by a different helmet, colours and insignia).

The four cameras on the actor were focussed slightly differently: one wide and one tight front-on, and two at a more three-quarter angle from either side.

DP Claudio Miranda used the Rialto extension of the Venice to fit inside the cramped space. This system separates the sensor block from the camera body tethered by a cable. The sensor can record a full 6K in large format suitable for IMAX presentation and way beyond the quality that action-cams like GoPro could achieve.

The actors were responsible for turning the cameras on mid-air, perhaps 20 minutes into the flight when they’d reached their filming location. With no intercom to the ground, they were also charged with directing themselves: saying their lines when the plane was at the correct altitude and positioned with the sun over their shoulder, and checking that their visors weren’t fogged up and their mics were set correctly.

“Actors would rehearse lines on the ground with Joe but in the air they were responsible for each take, asking the pilots to fly the same route again if their line or action could be improved,” says Hamilton, who was working out of an edit trailer in the same hangar as the F-18s.

A typical filming day would begin at 7am on the naval base (one of six used during the shoot: China Lake, Fallon, Lemoore, North Island, Whidbey Island and Norfolk) for a two-hour briefing. By 09.30 the planes would be in the air, landing an hour later having recorded 20-40 minutes of footage.

“One thing we discovered was that the locked-off cameras weren’t producing much visual energy. Tony Scott used rear projection in 1986 so could move his cameras around the cockpit, but our cameras were bolted rock solid (for obvious safety reasons). One way we could generate energy was by having the actors move their heads more when looking around the cockpit.

“Some were being supercool and taking their cues from the fighter pilots by doing as little as possible to conserve energy. After the first week of aerial filming Tom invited everyone into his room on the carrier and explained to the actors why they had to exaggerate their movements.”

Another technique was to choose shots with a moving horizon. “Jet pilots are trained to fly level to the horizon but we asked them to break that rule so that there’s always a bit of movement behind the actor. In the film you’ll notice that we picked shots with good energy where possible.

“The skill of the pilots is extraordinary when flying so close to mountains or the bottom of a canyon at 700mph. The margin for error is incredibly thin. You can see the adrenalin pumping through the actors. Their fear and excitement is real.”

Mountain of footage

Some jets were fitted with additional cameras outside the craft. Other jets would fly the same route to capture POVs and shots looking backwards and sideways. They would make four flights a day often with a ground-to-air unit shooting simultaneously. One day, they rolled 27 cameras.

That made for an astonishing amount of footage: over 813 hours, more than was shot for all three Lord of the Rings movies combined.

First, all camera footage from each flight was synced by timecode. Then Hamilton’s editorial team logged every significant detail, from lines of dialogue to when an actor’s head moves – and in which direction. Then it was a case of chipping away at the mountain.
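For a rough sense of what that syncing and logging groundwork involves (a hypothetical sketch; the clip names, fields and frame rate are invented and not the production’s actual tooling), grouping each flight’s camera files and ordering them by start timecode is the kind of step that makes a multicam review possible:

```python
from dataclasses import dataclass

# Hypothetical sketch: bucket each flight's camera files together and order
# them by start timecode so the cockpit angles can be lined up for review.

@dataclass
class Clip:
    flight_id: str
    camera: str     # e.g. "A-wide", "B-tight", "pilot-OTS-left"
    start_tc: str   # "HH:MM:SS:FF" timecode, assumed 24 fps

def tc_to_frames(tc: str, fps: int = 24) -> int:
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def group_and_align(clips: list[Clip]) -> dict[str, list[Clip]]:
    """Bucket clips by flight, then order each bucket by start timecode."""
    flights: dict[str, list[Clip]] = {}
    for clip in clips:
        flights.setdefault(clip.flight_id, []).append(clip)
    for group in flights.values():
        group.sort(key=lambda c: tc_to_frames(c.start_tc))
    return flights

clips = [
    Clip("flight_012", "A-wide", "09:31:02:10"),
    Clip("flight_012", "pilot-OTS-left", "09:31:02:10"),
    Clip("flight_013", "B-tight", "11:05:44:00"),
]
for flight_id, group in group_and_align(clips).items():
    print(flight_id, [c.camera for c in group])
```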

“I’d spend entire mornings looking for the perfect shot of a jet turning left - or flying inverted through a narrow canyon. Sometimes I’d add only a few seconds to my timeline each day. I felt the weight of expectation every day for the year it took to film and another year to finalise editorial.”

Sequence assembly was complicated because he didn’t receive any exterior shots of the jets to show the geography of the aerial action until months later.

“Joe prepped meticulously with storyboards and we did previsualise some dogfights. This was a useful thought experiment, but it’s hard to film a real jet to match the previs closely.”

Hamilton would place X-Men action figures against his 65-inch monitor to get some sense of big screen scale.

“It’s a trick I learned from [editor] Walter Murch,” says Hamilton. “It helps you imagine you’re in a cinema. I wasn’t necessarily cutting slower for an IMAX presentation. Each shot has its own life and you cut when the energy expires. Every single angle had to look amazing.”

Ideal wingman

Eventually overwhelmed by the volume of footage, Hamilton requested help and got it in the form of Chris Lebenzon, who was Oscar-nominated with Billy Weber for his work on the original film. On Maverick, he worked principally to shape the climactic dogfight.

Lebenzon told IBC365 that Top Gun too had been largely built in the edit. “The only script we had was ‘they engage the Russians and win the fight’,” he recalls. “We had to piece together a story of the battle that made sense in terms of the geography of the planes and where the actors were looking. Since Tony shot the actors wearing pilot masks we could record any line of dialogue over the top. Tom was frustrated one day and there’s a shot of him in the cockpit with his mask on and he’s shouting for his PA. Of course, you can’t hear that, but I loved the intensity in his eyes so we kept that shot.”

Lebenzon also dismissed the idea that outtakes from Top Gun could be used to make a sequel saying that they had used every good shot of jets in the first movie.

Likewise, Hamilton testifies that every single great shot of a jet culled from hundreds of hours of footage is in the film.

“I can honestly say we didn’t compromise Maverick at all. Visually and sonically everything we want you to see and hear throughout the movie is up there. I’m thrilled that audiences finally will ride back into the Danger Zone.”