Monday, 22 May 2023

Why Everyone (Not Just Tom Hanks) Should Maintain Control of Their Digital IDs

NAB

Generative AI is going to profoundly change the way in which we create content, “because ultimately it is a hundred times cheaper than using 3D modeling and traditional VFX and setting up a camera,” said Tom Graham, the CEO of Metaphysic, in a keynote at TED 2023 in Vancouver and in a conversation with Jesus Diaz at Fast Company.

article here 

Evidence of this is already here. Metaphysic, for instance, has developed AI technology that captures the biometric profile of any human so they can be deepfaked in real time. The company signed an agreement with talent agency CAA to create AI-powered biometric models of its clients.

That includes Tom Hanks, who stars in the new Robert Zemeckis film Here.

“We did a lot of de-aging of the characters because the movie covers their entire lifetimes. It’s both happening live on set while they’re actually acting, and then obviously it comes out in the movie and looks amazing,” Graham explains.

“Our partnership with CAA enables actors to own and control their data from the real world — their hyperreal identities [the biometric AI model made of photographic information captured in extremely high definition].”

Graham said it would mean that while you would still have to contract with the real Tom Hanks, “maybe Tom Hanks wouldn’t have to turn up on set to actually film.”

He said, “That’s definitely happening today, particularly in advertisements that involve sports figures, who have way less time to be in content than, say, actors. There are lots of applications in which we are beginning to decouple human performance from the physical locality and the time.”

All of this means that there’s a need to “empower” individuals to own and control their real-world data.

“We have agency over our bodies in the real world and our private spaces. People can’t come into our homes,” Graham said. “We need to extend that set of rights into a future that’s powered by generative AI. We need to democratize control over reality. Because if the means of production are controlled by big tech companies, then it’s the opposite of democratic norms and institutions that we experience today [in the physical world].”

For the record, Graham claims here that his own company has no interest in owning our data. “But we are the people who are definitely going to be pushing this discussion forward, trying to create tools and institutions to empower individuals.”

He also thinks that it’s only a matter of time before generative AI is able to spit out hyper-realistic video content.

“I would say two years from now it will be super accessible and at the level of full video where you really struggle to tell the difference with reality. It’s a very short period of time for us to prepare ourselves both psychologically as individuals and as governments and regulators.”

However, while industry jobs will change, Graham does not think there will be an imminent bloodbath.

“I honestly think that there’ll be more people hired to create the content of telling stories than there are today. What will be interesting, however, is how that works with unions and collective action.”

He added, “I think that the biggest category of job growth for the future of generative AI will be people who capture data from the real world and make that accessible to large AI models. If you think about what’s inside those models today, it’s not very good.

“We need to bring a thousandfold more data into those models to really be able to do stuff with the finesse that filmmakers want to do today. People who contribute to stock photography today will just migrate to contributing to these models in exactly the same business model.”

He ends the conversation with a prophecy that was previously imagined in “The Entire History of You” episode of Black Mirror.

“You can capture data from your experiences in the real world,” Graham said. “Maybe it’s your kid’s fifth birthday party. In the future, you can have that major event in your life in your catalog of life events, download it, render it out with AI, and fully relive that experience with exactly the same fidelity of the experience you lived the first time you were there. That’s a lot of what we’re talking about.”

Hollywood’s Strongest Weapon Against AI is Understanding an Audience… But for How Long?

NAB

The single biggest impact of generative AI for large content producers and distributors isn’t about disrupting the media-making process. It’s that it gives their fiercest competitors — content creators on YouTube and TikTok — more tools to eat further into the daily video consumption time that the media industry is battling for.

article here 

According to a fresh report by studio-funded thinktank ETC, “AI and Competitive Advantage in Media,” generative AI “potentially disrupts the already unfortunate economics of the media business: stable demand (never more than 24 hours in a day) and exploding supply.”

In the report, Yves Bergquist, ETC’s resident data scientist and AI expert, argues that what’s happening in the media industry parallels what already happened in manufacturing: the automation of the craft of making a product (i.e., making the product computable).

By computable, ETC means that content is produced in volume and is “machine readable”: every aspect of it, from creation through distribution to audience feedback, is data and therefore available for dissection.

Traditional media content is currently not “computable” in the sense that companies produce it linearly, one product at a time. It is scarce, whole, long-form (not conducive to being sliced and diced by an online audience) and unstructured (its narrative DNA is not yet machine-readable).

This is going to have to change if studios and streamers want to be part of the bigger picture in a few years’ time.

ETC divides the creative process into three parts. Bergquist dubs the ideation part, where creatives “sense” what an audience wants to see, “zeitgeist intelligence.”

Then there’s the core of the creative process, where creatives define their voices and make strategic decisions about what product will be crafted.

Finally, the product is made.

AI’s immediate impact is on that final phase. But by automating production, “Generative AI not only puts more emphasis on Zeitgeist-sensing and creative decision-making, it gives creative decision-makers tools to quickly and cheaply tinker, experiment, and prototype.”

At the same time, traditional media companies “risk losing their monopoly on the craft of high-quality content.”

Generative AI empowers social creatives to quickly and cheaply craft “studio quality” content, threatening the status of traditional media. They can do this because their knowledge of what the audience wants is crowdsourced via links, likes and recommendation algorithms. The content produced is computable in the sense that it can all be digitally mined. And the scale of content production means there’s enough supply to cater to every audience whim.

But ETC spots a weakness. Social media platforms and content creators reliant on those platforms lack any real understanding of their audience, claims ETC. It is just “basic content match-making”.

Instead, studios, and especially streamers, can strike back against pure AI content generators by using the data they have at their disposal more intelligently.

“Programmatic content distributors like TikTok match content with audiences without any semantic understanding of why this content resonates. It’s just a programmatic marketplace that computes the content de facto.”

With generative AI bringing high production value tools to social creators, we can expect a new category of “short-form linear content” to emerge on social platforms.

Studios, on the other hand, “have the longest experience and the largest dataset available to not only develop an intelligence of their audiences, but to draw them into a deep relationship with their franchises.”

Media organizations, “especially those with a streaming service,” have both the data and a unique capability to understand the cultural zeitgeist. They can use AI to better “know” what audiences want, Bergquist says.

ETC also suggests that it’s the large media organizations that have the financial backbone “to create highly integrated and replicable AI-driven virtual production workflows.”

It contends that traditional media players will need to differentiate through immersive, multi-platform, world-building franchises, a trend they are already pursuing of course.

This, says ETC, “is the greatest opportunity for large media organizations to leverage virtual production and generative AI together to quicken and cheapen the cost of producing these multi-format immersive pieces. This new form of computable content will run on game engines.”

In so doing, this “revolutionizes the way stories are told,” with integrated narratives spun across linear and immersive media products.

There are warnings, though.

“Media organizations don’t have a software culture, nor can they support large AI R&D assets. They could partner with (or acquire) key AI research organizations to leverage their data to create their own proprietary content and audience intelligence models, but this is a heavy lift.”

ETC also identifies a need for intuitive “human-ready” and “business-ready” interfaces for AI models, which continues to be the greatest bottleneck for AI in enterprise. Too often, says Bergquist, organizations can’t connect models and business needs.

“Whoever can redesign their organizations and workforce needs to best create a ‘culture’ of AI and data will move faster than its competitors.”

Education, insists ETC, is the largest opportunity in AI today.

While everyone seems to agree AI represents a big financial opportunity to automate some production and postproduction workflows, it raises a question: Does taking knowledge of the craft out of creative work affect creative decisions and creative output overall? Or, put another way, does knowing the craft make a creative a better decision-maker? ETC has no answers for this, and perhaps we’ll only find out in time.

More globally, what the media industry needs right now is a distinct and actionable AI vision.


Thursday, 18 May 2023

Advice for Content Creators Who Want to Be Content Entrepreneurs

NAB

article here

In his podcasts for creators, Joe Pulizzi dissects the anatomy of what makes creators and influencers tick.

He breaks down the findings from the 2023 Creator Economy Benchmark Research from The Tilt, and talks with Jay Clouse, the founder of Creator Science.

Everyone agrees that “content entrepreneur” is a more apt term for what successful creators do.

Even though 40 to 45% of creators’ time is spent on content creation, a lot of distribution, marketing, sales and promotion, business administration and operations goes on in the background.

From launch it takes about five months to earn the first dollar, per The Tilt’s research, and about 12-18 months before revenue exceeds expenses in some way.

Clouse shares some advice for budding content entrepreneurs. One piece is that successful content creation does not necessarily depend on how much you publish — provided that what you publish is consistent.

“If you set a schedule you want to follow through on that because when you follow through, that’s consistent with the promise that you’re making. You want to consistently be delivering a good experience,” he says.

“If you are publishing too frequently to create a consistently good experience, then that’s a negative overall. We’re living in a noisier and noisier world… you have to earn the right to keep people’s attention.

“But when you have people’s attention, it’s more important that you honor it and value it and give them a good experience than it is to show up every single day.”

Clouse advocates focusing your energies on one or two platforms at the beginning of your creator career and resisting the pressure to be across more.

The Tilt confirms this, finding that leading content entrepreneurs focus on being amazing at one core place that they call home, before diversifying.

Email, though, is recommended as one means of contacting followers — not least because it is a constant when other platforms are at risk of change or extinction.

“I would have email as part of my strategy because to me, email is like this valuable asset that really de-risks your business against third-party discovery platforms changing their rules or getting bought by a billionaire and going to zero,” Clouse says. “I would build an email list and I would focus on one discovery platform or one discovery style.”

Clouse himself has grown his business sufficiently to be able to employ other people to take on specialist jobs, such as video editing. However, this operations side of the business remains hugely important for his overall business growth.

“I have a basically full-time video editor and a part-time research assistant, mostly on the YouTube side,” he says. “I have two contract thumbnail designers and a contract audio engineer, and I outsource accounting and any legal work that I have.”

Wednesday, 17 May 2023

How to Build a Business in the Creator Economy

NAB

If you want to make money as a content creator you’ll need to think and work like a business. Not only does it take about five months to earn your first dollar and just over a year to begin working full time as a creator, but you’ll most likely need upwards of $10,000 in the bank to support yourself before the dimes roll in.

article here

“It’s a business, not a freelance gig, and it requires a business approach to revenue generation, management, operations, etc. — even at a small scale of one — to be successful.”

That’s according to new research into the creator economy from The Tilt. Specifically, its research asked creators themselves what it actually took to do their jobs.

“A content enterprise is not a get-rich-quick scheme… it’s not even an ever-get-rich scheme for most,” the report’s authors say.

Even if creators go full time on their content business, there’s no bonanza in earnings. The average full-time creator earned $86,000 in 2022. On average, full-time creators expect to bring in approximately $108,199 in revenue in 2023 and will pay themselves $62,224 — a gross margin of 59%.

Most creators (86% in this report) say they think of themselves as entrepreneurs, with non-creative tasks taking up nearly half of their time.

Content entrepreneurs spend a little less than half of their week on creative efforts. The rest of the time, they’re knee-deep in operations, marketing and sales, content distribution, and other unglamorous tasks.

As one creator reports, “I spend most of my time managing people, doing accounting, talking to sponsors, managing editorial calendars, fixing equipment, etc. People think being a creative is all puppies and rainbows — and no one wants to hear you complain.”

The biggest challenge, experienced by 64% of respondents, was growing their online audience.

Tilt has some advice: The more niche the audience and the narrower the topic, the better the odds for success. Build relationships with your audience by responding to comments, asking for feedback, and creating more of the types of content they respond to, it suggests.

Focus on building connections with your audience and other creators. Someone following you is the start of a relationship, not the end result.

“It’s not actually about how good your content is, it’s about how you leverage it and monetize it. And that mostly comes down to marketing and publicity. The so-called ‘best’ creators are often those who are just best at doing their own marketing.”

 


Tuesday, 9 May 2023

Comparing the Technologies for XR, MR, and Virtual Production

NAB

One of the biggest benefits of virtual production is the reduction of costs in actors’ time, props, setup, travel, and outdoor shooting time. But according to graphics technology vendor Brainstorm, scenes shot in an LED wall environment are fixed and can’t be changed later unless the scene is shot again.

article here

In a white paper that details the various technology set-ups for virtual production, Brainstorm favors virtual production based on chromakeying.

It argues that when scenes are filmed in a volume, using background plates on LED walls in part to light the scene, the backgrounds are “baked in” should any issues require tinkering in post.

“We may have to re-shoot or enter in long and complex postproduction, meaning all the time and cost savings of virtual production disappear,” it says.

“Using LED-based XR will still maintain some of these benefits, but at the cost of not being able to alter shots easily in post, so if we need changes in the scene, the background needs some adjustments, etc, we will still need to reshoot the scene.”

The paper continues, “Of course, rehearsals in live productions can help with these issues, however, some other changes may not be possible to make because of scheduling, availability or change of minds after production, so they will require going into postproduction.

“On the other hand, chroma keying, when used with tracked cameras and multilayer shooting, can perform any changes in post with total ease.”

For film and drama productions, shooting the background “as is” leads to a “significant loss in flexibility when postproduction is required,” such as compositing, VFX, environmental grading, particles, etc.

As the image is “fixed,” rotoscoping or other techniques may be required to isolate parts of the image prior to applying effects, “which makes no sense in complex productions, whereas using chroma keying will allow VFX operators to easily achieve all that is required, as the elements are already shot separately and stored independently.”


What To Do If Your IP is Being Stolen By Generative AI

NAB

The meteoric rise of AI applications has left the industry and onlookers wondering how this rapidly developing technology will interact with copyright law and whether the law can keep up. The legal landscape is muddy, but there is legal advice available for developers of AI tools and for artists working with them or who believe their work is being stolen.

article here

There are two main questions to consider about AI art. The first is, “Can AI art be copyrighted?” The other question surrounds the legal status of artists who claim to have had their art stolen (euphemistically called “sampled”) to supply the data for AI diffusion models.

Thuan Tran, associate at Dunlap Bennett & Ludwig, answers the first question, stating that the US Copyright Office will reject a request to allow an AI to copyright a work of art. This is because it will not register works “produced by a machine or mere mechanical process” without creative input or intervention from a human author.

Courts interpreting the Copyright Act, including the Supreme Court, have consistently restricted copyright protection to “the fruits of intellectual labor” that “are founded in the creative powers of the [human] mind.”

However, this interpretation is being tested. In a proceeding currently before the Copyright Office, artist Kris Kashtanova is contesting its decision not to register a copyright for a graphic novel that she created using an AI.

Kashtanova is emphasizing how she “engaged in a creative, iterative process” that involved multiple rounds of composition, selection, arrangement, cropping, and editing for each image in her work, which, she argues, makes her the author of the work.

“While the outcome of the proceeding is not yet finalized and Kashtanova has a chance to appeal its decision, many are eagerly awaiting what may be very precedential for the future of AI art.”

The second question is also taken up by Tran, and is also being tested in the courts. There are several cases of artists suing generative AI platforms for unauthorized use of their work.

Image licensing service Getty, for example, has filed a suit against the creators of Stable Diffusion, alleging improper use of its photos and violation of both the copyright and trademark rights it holds in its watermarked photograph collection.

The outcome of these cases is expected to hinge on the interpretation of the fair use doctrine. This is the legal concept that allows for the use of copyrighted material without permission from the copyright holder, in certain circumstances.

Tran explains that fair use is determined on a case-by-case basis, and courts consider four factors: (1) the purpose and character of the use; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used; and (4) the effect of the use upon the potential market for or value of the copyrighted work.

“One argument in favor of AI-generated art falling under fair use is that the use of copyrighted material by AI algorithms is transformative,” he says. “Transformative use is a key factor in determining fair use. It refers to the creation of something new and original that is not merely a copy or imitation of the original work.”

AI algorithms create new works by processing and synthesizing existing works, resulting in a product that could be considered distinct from the original. “As a result, AI-generated art can be seen as a form of transformative use, which would weigh in favor of fair use,” Tran says. “On the other hand, this argument is not without its limitations. Many argue that AI-generated art is simply a recombination or manipulation of existing works, without adding significant creative output.”

There is also the larger philosophical debate as to whether a machine can give “creative input” to its work. In such cases, it may be more difficult to argue that the use of copyrighted material is transformative and subsequently falls under fair use.

All this uncertainty presents a slew of challenges for companies that use generative AI. There are risks regarding infringement — direct or unintentional — in contracts that are silent on generative AI usage by their vendors and customers.

The Harvard Business Review gives some advice for AI vendors, their customers, and artists.

“AI developers should ensure that they are in compliance with the law in regards to their acquisition of data being used to train their models,” advise Gil Appel, Assistant Professor of Marketing at the GW School of Business, Juliana Neelbauer, partner at law firm Fox Rothschild LLP, and David A. Schweidel, Professor of Marketing at Emory University’s Goizueta Business School. “This should involve licensing and compensating those individuals who own the IP that developers seek to add to their training data, whether by licensing it or sharing in revenue generated by the AI tool.”

Developers should also work on ways to maintain the provenance of AI-generated content, which would increase transparency about the works included in the training data. This would include recording the platform that was used to develop the content, tracking of seed-data’s metadata, and tags to facilitate AI reporting, including the specific prompt that was used to create the content.

“Developing these audit trails would assure companies are prepared when customers start including demands for them in contracts as a form of insurance that the vendor’s works aren’t willfully, or unintentionally, derivative without authorization.

“Looking further into the future, insurance companies may require these reports in order to extend traditional insurance coverages to business users whose assets include AI-generated works.”
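To make the audit-trail idea concrete, here is a minimal sketch of what one provenance record could look like, written in Python. It is purely illustrative: the field names, the build_provenance_record function and the "ExampleGenTool" platform name are assumptions for the example, not part of any standard or of the authors' advice.

```python
import hashlib
import json
from datetime import datetime, timezone


def build_provenance_record(asset_bytes: bytes, platform: str, model_version: str,
                            prompt: str, training_data_tags: list[str]) -> dict:
    """Assemble one illustrative audit-trail entry for an AI-generated asset."""
    return {
        # fingerprint of the generated file, so the record can be tied to the asset
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        # the platform that was used to develop the content
        "generator_platform": platform,
        "model_version": model_version,
        # the specific prompt that was used to create the content
        "prompt": prompt,
        # tags describing the seed data / training sources, to aid AI reporting
        "training_data_tags": training_data_tags,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    record = build_provenance_record(
        asset_bytes=b"\x89PNG...",        # stand-in for the rendered image bytes
        platform="ExampleGenTool",        # hypothetical generator name
        model_version="v2.1",
        prompt="desert canyon at dawn, 35mm anamorphic look",
        training_data_tags=["licensed-stock", "studio-archive"],
    )
    print(json.dumps(record, indent=2))
```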

Creators

When it comes to individual content creators and brands, the onus is on them to take steps to protect their IP.

Stable Diffusion developer Stability.AI, for example, announced that artists will be able to opt out of the next generation of the image generator.

“But this puts the onus on content creators to actively protect their IP, rather than requiring the AI developers to secure the IP to the work prior to using it — and even when artists opt out, that decision will only be reflected in the next iteration of the platform. Instead, companies should require the creator’s opt-in rather than opt-out.”

According to Appel, Neelbauer and Schweidel, this involves “proactively looking for their work in compiled datasets or large-scale data lakes, including visual elements such as logos and artwork and textual elements, such as image tags.”

Obviously, this could not be done manually through terabytes or petabytes of content data, but they think existing search tools “should allow the cost-effective automation of this task.”
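As an illustration of the kind of cost-effective automation the authors have in mind (they name no specific tool), one common approach is perceptual hashing: fingerprint your own images, then scan a dataset for near-duplicates. The sketch below is an assumption-laden example, not their method; it assumes directories of PNG files and uses the open-source Pillow and imagehash libraries, and the find_matches function and distance threshold are illustrative choices.

```python
# pip install pillow imagehash
from pathlib import Path

import imagehash
from PIL import Image


def find_matches(my_work_dir: str, dataset_dir: str, max_distance: int = 6):
    """Return (original, candidate) pairs whose perceptual hashes are close."""
    # fingerprint every original work once
    originals = {p: imagehash.phash(Image.open(p))
                 for p in Path(my_work_dir).glob("*.png")}
    hits = []
    for candidate in Path(dataset_dir).rglob("*.png"):
        cand_hash = imagehash.phash(Image.open(candidate))
        for original, orig_hash in originals.items():
            # a small Hamming distance between hashes suggests a near-duplicate
            if orig_hash - cand_hash <= max_distance:
                hits.append((str(original), str(candidate)))
    return hits


if __name__ == "__main__":
    # "my_portfolio" and "scraped_dataset" are hypothetical directory names
    for original, candidate in find_matches("my_portfolio", "scraped_dataset"):
        print(f"{candidate} looks like a copy of {original}")
```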

Content creators are also advised to monitor digital and social channels for the appearance of works that may be derived from their own.

Longer term, content creators that have a sufficient library of their own IP on which to draw “may consider building their own datasets to train and mature AI platforms.”

The resulting generative AI models need not be trained from scratch but can build upon open-source generative AI that has used lawfully sourced content. This would enable content creators to produce content in the same style as their own work with an audit trail to their own data lake, or to license the use of such tools to interested parties with cleared title in both the AI’s training data and its outputs.

Customers

Customers of AI tools should ask providers whether their models were trained with any protected content, review the terms of service and privacy policies, “and avoid generative AI tools that cannot confirm that their training data is properly licensed from content creators or subject to open-source licenses with which the AI companies comply.”

Businesses

If a business user is aware that training data might include unlicensed works or that an AI can generate unauthorized derivative works not covered by fair use, a business could be on the hook for willful infringement, which can include damages up to $150,000 for each instance of knowing use.

Consequently, businesses should evaluate their transaction terms to write protections into contracts. As a starting point, they should demand terms of service from generative AI platforms that confirm proper licensure of the training data that feed their AI.

Appel, Neelbauer and Schweidel add that they understand the real threat generative AI poses to part of the livelihood of members of the creative class, but “at the same time both creatives and corporate interests have a dramatic opportunity to build portfolios of their works and branded materials, meta-tag them, and train their own generative-AI platforms that can produce authorized, proprietary, (paid-up or royalty-bearing) goods as sources of instant revenue streams.”

 


Behind the Scenes: Kandahar

IBC

A Gerard Butler action movie is the first Hollywood production to be filmed entirely in Saudi Arabia and the first to shoot in the country’s majestic, barren and hitherto off-limits AlUla region.

article here

With a plot set in Afghanistan involving the CIA, the Taliban and Pakistan’s intelligence service ISI and a lot of guns, terrorism and explosions, the Gerard Butler vehicle Kandahar might seem an odd choice for Saudi Arabia’s first servicing of a foreign film production. But if nothing else it showcases the spectacular canyons, sand dunes and oases of the country’s vast north-western desert.

Shot between December 2021 and February 2022, Kandahar is also the first big-budget production to shoot in Saudi since the conservative Muslim Kingdom began to culturally open up in 2018. That year also saw the lifting of a 35-year ban on commercial cinemas and the introduction of a generous 35% location rebate on films shot there.

“The idea was for a road movie with the epic quality of Lawrence of Arabia in locations that nobody in cinema has seen before,” explained Miguel de Olaso, the Spanish director of photography who goes by the moniker Macgregor.

“This region has been closed to tourism for so long, so to get the opportunity to shoot here was - from a cinematographer’s perspective - one not to be missed.”

He described the UNESCO World Heritage Site at AlUla as “a version of Utah and Jordan times ten.”

“It’s a vast area full of archaeological sites and older civilisations mixed with amazing rock formations.”

For all that, Saudi was a stand-in for a story set in Afghanistan across which Butler’s CIA agent and his Afghan interpreter must travel hundreds of perilous kilometres to safety at the border.

To that end, one of Macgregor’s main visual references was the iconic photographs of Afghanistan and its people shot in the 1980s by Magnum photographer Steve McCurry.

“I grew up with those images which transport you to a world that might as well be happening 200 years ago or 200 years into the future,” he said. “AlUla does look like another world.”

BTS: Kandahar – desert experience

Director Ric Roman Waugh (Angel Has Fallen) appreciated the fact that Macgregor had prior experience of filming in desert conditions. The DP had shot and directed the 2018 documentary short Mauritania Railway: Backbone of the Sahara, tracing the transport of iron ore over 700km to Africa’s Atlantic coast.

 

“One thing I learned was that the desert looks more beautiful at sunset for sure but it looks more like the true desert when the sun is higher. For much of our story we needed to portray the desert as a miserable and desolate place. Our main characters endure a very rough journey so it didn’t make sense to shoot everything to look perfect in the magic hour of dusk and dawn.”

Most scenes were shot two-camera on the Alexa Mini LF, with additional Sony FX6 and FX3 bodies as crash cams and rig cameras. He selected Panavision E Series and Primo 35mm anamorphics but found that after weeks on the road their optics became a little warped.

“We were extremely careful with lenses, especially when changing them, but this was a very demanding shoot. We mostly shot chronologically and towards the end I could see how the glass inside was starting to become a bit misaligned. The fall-off of the focus was completely different to how they started out. That worked great for the aesthetic since it suits the battering our characters have taken by the end of the journey.”

BTS: Kandahar – cameras and drones

A major cat and mouse sequence set at night in the desert was accomplished using a Sony FX3 mirrorless camera with infrared sensor conversion.

“We would shoot Gerard Butler’s point of view night vision footage during the day using the FX3 which makes the sky look dark and the vegetation appear infrared and then the same scene at night with huge lighting set up on the Alexa.”

Drone shots were made using the Mini LF with Vazon full frame anamorphic lenses. All the gear was kept wrapped in plastic to avoid exposure to the sun and to keep out dust. Macgregor applied thin filters around the camera vents to keep out sand while maintaining airflow.

He estimates that 95% of Kandahar was filmed in AlUla with additional shots including airport scenes in Jeddah, Dubai in the UAE and some sequences of Butler driving across the desert shot in a studio using conventional back projection.

BTS: Kandahar – varying conditions

“There are a variety of landscapes within a 15km radius [of their AlUla base] including the market town which stood in for Herat and an area of black volcanic lava. We all had Green Cards and got full freedom to do anything we wanted.”

The finale features a massive practical FX explosion that generated a sandstorm that lasted for five minutes and “would have been impossible to recreate with fans and Fuller’s earth. It threw up sand taller than the Empire State building and affected the light - even the local weather.”

The camera crew were mostly from Spain, with the camera electrician and gaffer from Mexico and additional crew from Dubai. Macgregor is an experienced commercials DP and wanted a smaller crew who would be able to move quickly scene to scene. The local Saudi crew are still learning the ropes.

Since Kandahar the Saudi Arabian psychological thriller feature Matchmaker has shot in the country for Netflix. The Saudi region of Neom, a sponsor of the Media Production and Technology Show, is also marketing itself as a media hub and has mooted plans to legalise alcohol.

As The Hollywood Reporter pointed out, while the country continues to combat negative perceptions about its human rights record — including the 2018 murder of journalist Jamal Khashoggi — this has all been backed by a significant promotional campaign that has helped AlUla become a dominant presence across most major festivals.

BTS: Kandahar – A love for Scotland

Macgregor’s passion for moving pictures was evident aged 9 when he got his first video camera, but on leaving high school his parents dissuaded him from a career in film.

“I went to study mining engineering but it was the most boring thing ever. Within six months I knew I needed to be a filmmaker.”

He left to go to film school in Madrid only to be kicked out “for not being focussed enough.”

Instead, he went to the European University of Madrid to study AV communications: “things like art history and advertising which they don’t teach you as a filmmaker but which I found very, very useful,” he said.

“I had a late start to my career and no contacts. Even though Spain produces a lot of content and has a lot of very talented people there’s a lot of competition for work.”

Moving to LA opened those doors up. “There’s less ‘show us your CV’ and more ‘show us what you’ve got’ attitude in the US. I wish I could have started there in my 20s rather than my mid-30s.”

He has shot a handful of features including Fall, the 2022 breakout hit which succeeded in bringing a terrifying sense of vertigo to the tall story of two friends who scale a 2,000-foot-high TV mast.

Of his unusual nickname, which he has mischievously trademarked on his website, Macgregor said he has been called this since he was at school.

“I was and am in love with Scotland,” he said, “And now I’ve made a movie with a great Scottish star.”