Friday, 17 February 2023

5G for Immersive Content Development, Aggregation, and, Yes, Even Distribution

NAB

article here

Verizon aims to entice subscribers to its 5G plan, and one of the ways it is doing so is by experimenting with new content formats.

“We are dabbling in funding content when it demonstrates the capabilities of 5G,” explained Erin McPherson, chief content officer of Verizon Consumer Group, in discussion with Variety’s Todd Spangler during a recent episode of Variety’s “Strictly Business” podcast.

McPherson explained that Verizon’s goal is to become a friendly-but-not-competitive portal helping Hollywood with content aggregation.

“Our originals are specific to building out new models for 5G. We’re partnered with Snap as the distribution window for a lot of this. That’s where it would be different [to Netflix]. They’re being experienced through Snap AR lenses. And we’re experimenting with whether customers will buy something like ‘Iago,’ the Viola Davis original. Will they pay for a gamified version of building the Perfect Man with Mr. Wright? We’re just early days figuring out what people want to experience in an immersive or interactive way on their phone. What are they willing to pay for?”

Verizon has also partnered with the NFL on sports betting and with Live Nation, producing AR maps for concertgoers.

“What’s great is we see the same excitement in Hollywood that I remember seeing in the early days of web video,” McPherson said.

Last year, the operator launched content platform +play alongside partnerships with Netflix, Peloton and Live Nation’s Veeps, featuring leading services like Disney+, discovery+, A+E Networks and AMC+. The hub is intended to give Verizon customers a simple, efficient way to access content services and take advantage of exclusive deals.

“It’s the result of a lot of years of internal work to get the company in a good place to be a partner of choice for all of these SVODs coming to market,” said McPherson.

5G will also make mobile viewing and wireless internet access easier, she added. “One of the easiest, most immediate changes I think we’ll see is high speed internet available wirelessly. Our fixed wireless access business, and that of our competitors, is selling wireless home connectivity that can power work, gaming, entertainment. That’s just a very easy example.

“We like to talk about AR/VR and immersive experiences and all the cool stuff that 5G will do, and it will. But if you really think about the most immediate, I think impact for many consumers would be very viable wireless connectivity solutions for your house.”

Also on the Variety podcast, Pearlena Igbokwe, chairman of Universal Studio Group, detailed how the studio’s strategy is to sell content far and wide to outside platforms, and her hope that pending strikes don’t derail this.

“You can’t produce a show without people that are in those unions, (Writers Guild),” she said. “I definitely do not wanna be a doomsayer. I absolutely hope not [but] you know we’re very supportive of [resolving] whatever the issues are to everyone’s satisfaction.”

She added, “I love being on the studio side. We’re the people making things. And if you’re making good things and good shows, there will be a marketplace for it.”

Thursday, 16 February 2023

What if We Build Metaverse as a Global Village?

NAB

article here

Just as the arrival of the internet was heralded as a “global village” that would unite people in a universal exchange of information, there is now an effort to turn the next-generation internet into the “Global Collaboration Village.”

That’s because the vision of a connected global village has gone sour, exacerbating polarized views and leaving many without broadband cut off from participation.

While the metaverse is far from being built, there’s optimism that it can rest on sturdier, open-to-all foundations.

At Davos, the glamourized meeting of global financiers, the World Economic Forum talked about establishing the Global Collaboration Village, described as “the first global, purpose-driven metaverse platform.”

Klaus Schwab, founder and executive chairman of the World Economic Forum, described the initiative in an article on the organization’s website. “Created to enhance more sustained public-private cooperation and spur action to drive impact at scale, this global village will not replace the need to meet face-to-face but will instead supplement and extend our ability to connect regardless of where we are physically located around the world,” he said.

“Business executives, government officials and civil society leaders must come together to define and build an economically viable, interoperable, safe, equitable and inclusive metaverse.”

A prototype of the Global Collaboration Village, developed with Accenture and Microsoft, was launched at Davos with the support of “leading global corporations, governments, international organizations, academic institutions and NGOs.”

“To create mass adoption, the metaverse must show that it is not just a replacement for what we already do but that it enables us to do things in new and more effective ways,” declared Schwab.

For example, people will be able to “dive in” to an interactive ocean experience that reveals the importance of safeguarding the ocean through collective action. In other words, instead of telling us how important mangroves are for coastal ecosystems, this global Metaversian vision “invites us to witness and experience the power of restoration and conservation for ourselves — all while engaging with global experts and innovators who are on the physical frontlines of this work.”

The WEF and supporters of this project may mean well, but this appears on the surface to be little more than a marketing stunt with no actual concrete plan to lead development of an open, interoperable internet.

Perhaps that’s understandable given that its paymasters at Davos are Big Tech, including Microsoft and Meta, which have a vested interest in shaping the internet to their own ends.

A bit more meat was put on these bare bones by a panel discussion at the event featuring representatives from Meta pitted against the author and technologist Neal Stephenson.

Stephenson is building a blockchain intended to help individual creators make more money from the future internet than they currently do in an online landscape dominated by monopolies like Meta, Apple, Microsoft and Amazon.

“What we’re trying to do with the Lamina 1 project is to build a blockchain that is optimized specifically for creators,” says the futurist. “These are the people whose talents are going to be needed to actually make a metaverse that people are going to want to visit. It’s a kind of pure engineering project at this point.”

A core tenet of the metaverse is for people, as avatars, to be able to move friction-free from one experience to another, without having to continually log in and log out.

That means personal data in the form of identity (plus payment mechanisms and virtual assets) needs to be shareable and also secure. It is one of the basic frameworks that the WEF’s Global Collaboration Village needs to address.

For Stephenson there’s no doubt that identity needs to be distributed and decentralized “if the metaverse is actually going to work.”

Christopher Cox, chief product officer at Meta Platforms, agreed with the overall vision, but didn’t quite concede that user identity needs to be decentralized (and outside of Meta’s walled garden).

“We view the feeling of presence as being the essential ingredient for the user experience, of something that feels metaverse-like,” he said. “I think the Internet’s a pretty good way to think about the metaverse because some parts of the Internet are very coherent with each other. If you’re inside of Wikipedia, if you’re inside Instagram, you know, these are experiences that are self-consistent, that have a single designer, that have a single server, that have a single privacy and identity model where you understand the rules. Those systems are interlinked,” he said.

“So you can move from Instagram easily to Google Maps. You’re not confused how you got there. The hyperlink was the thing that got you there. And I think part of what doesn’t exist yet for the metaverse is what is the hyperlink? What is the model of travel from sort of one set of experiences for the other?”

Stephenson conceded that both models, the decentralized, bottom-up approach to building the metaverse and the centralized, top-down approach, have their advantages, but he came down on the side of Web3.

“[The metaverse] doesn’t happen unless you create an open system that’s kind of analogous to the early Web or the early Internet where anyone who’s interested can latch on to a shared protocol and begin to build what they want to build in that world.”

However, invited by the moderator to challenge Meta on the topic, Stephenson declined.

For his part, Cox largely swerved the answer, but toed the party line:

“One thing about the development of Facebook and Instagram is a lot of it is focused on giving tools to creators and tools to businesses. The creative tools that we give them is a lot of what makes the experience unique, along with some set of assurances around safety and privacy, which is where the centrality can offer a big benefit to the user.”

The digital divide is a conundrum for any kind of internet universality — in poorer or rural parts of the US as much as Rwanda, a country represented by a government minister on the panel.

Most people imagine the metaverse, and its experience of immersivity and presence, will be accessed not via a 2D flat screen but in 3D via AR or VR hardware.

“We believe that one day that computing platform will be as important as the smartphone has become in our lifetimes,” said Cox. “We’re working on a lot of the early R&D to bring that to life.”

He explained that Meta had spent the eight years since acquiring Oculus working to deliver a VR product line that is affordable enough, usable enough and impressive enough to be used in social experiences.

“We’re working on augmented reality, which is a much further out version of the future where you would wear, you know, a nice pair of glasses. It would be light, it would be comfortable, it would have waveguides that would allow you to see screens in front of you.”

He also said Meta was building software to support an ecosystem of developers, including filmmakers who are starting to make 3D content.

“We’re really just trying to start to seed the ecosystem of content and experiences for VR.”

For all the talk about the metaverse as a driver of progress, and the rhetoric behind the WEF’s Global Collaboration Village, it was clear that in truth not much progress had been made, nor would there be while those controlling the divided internet of today want to control it tomorrow.

Stephenson appeared exasperated too. In response to a question about how the metaverse can be engineered, he said:

“In order for everyone to not die, we have to remove carbon from the atmosphere on a scale that is completely mind boggling, even to people who consider themselves really well informed about this issue. And that’s going to be the biggest engineering project in the history of the world.”

Wednesday, 15 February 2023

Behind the Scenes: Marlowe with DoP Xavi Gimenez

IBC

The heat, concrete and mystery of 1940s LA, recreated in Barcelona with a stylish neon twist by cinematographer Xavi Gimenez

article here

There have been numerous movie adaptations featuring Raymond Chandler’s hardboiled gumshoe Philip Marlowe, but none that use colour like a weapon.

That’s how director Neil Jordan (The Crying Game) described the work of cinematographer Xavi Gimenez in helping the new film Marlowe stand out from the pack.

“Xavi and I, we’re not making something ‘real’ here,” Jordan elaborates in the film’s production notes, “We’re making something sort of hyper-real — so let’s use the intensity of the light, the colours and strips of neon that Xavi used very beautifully in the night scenes. It created a heightened version of a noir film. Here using colour almost felt like using a weapon.” 

Marlowe is an old-school throwback to classic film noir such as The Big Sleep (1946), starring Humphrey Bogart, while doffing its hat to 1974’s Chinatown (Danny Huston has a prominent role in Marlowe, recalling his father’s famous turn in Polanski’s movie). But it is Blade Runner, a sci-fi noir, that was Jordan’s principal stylistic reference.

This touchstone emanated from the production’s decision to shoot the entire movie in Barcelona, standing in for 1940s Los Angeles. It’s not the first Chandler movie to relocate. Michael Winner’s 1978 version of The Big Sleep swapped ‘40s LA for 1970s London.  

“If you go to LA there is nothing of that period left,” Jordan explained at the film’s premiere in San Sebastian. “They destroy the past. We had to invent an imaginary city. That to me was the challenge of the film rather than trying to approximate a noir aesthetic.”

He continued, “To make it work you have to reinvent the idea of a noir movie. When I was speaking with Xavi and [production designer] John Beard the reference I chose was Blade Runner. Weird I know. Our Marlowe is set in the LA of the past not the future but in a strange way we are building a science fiction landscape to this movie.” 

Speaking to IBC365, Gimenez says the lighting scheme for Ridley Scott’s film was more “neurotic and electric” than he felt Marlowe needed. “I decided not to jump too far into this or to just copy the concept of film noir. Of course, Blade Runner was always floating around us as a reference, but it was not the exact final concept.”

You can see these ideas in the first scene. It begins in proper noir territory, with Marlowe (Liam Neeson) handed a job by a mysterious client (Diane Kruger) in his downtown office, splintered by afternoon light streaming in behind window blinds. Gimenez explains that he chose to bathe everything in a bourbon-coloured almost-dusk.

“There is a little bit of hazy cigarette smoke but not too much, the lines of light come through the blinds but it is not extreme high contrast. We want to integrate the stylistics of noir naturalistically into the movie, not have them shout out and detract from the story.” 

Gimenez bathes the whole production in a sun-dappled, sinister feel that befits a California noir, with shadows, concrete and gardens filled with secrecy and inscrutability.

“I had two different concepts – one related to heat, the other related to black and white,” he shares. “Since we weren’t going to shoot black and white [for commercial reasons] we decided to create a constant colour as if it were our black and white. I decided to use this particular yellow because in my imagination yellow has a connection with jazz, and jazz has a connection with the warring gangs of the 1920s to 1940s.

“With heat our idea was to create this feeling of ambient humidity. We achieve this by overexposing all the day exteriors just a little bit more than normal. To light scenes we used filament bulbs, normal bulbs, to which we added a yellow tint.”

In addition, Gimenez shot a driving scene with Marlowe and a scheming villain (Alan Cumming) in an LED volume, where neon street signs are reflected on and viewed through the car windows. As the film progresses, he dials up the neon so that entire scenes are filtered in red or blue light, as in a John Wick or Nicolas Winding Refn thriller.

Not surprisingly, Gimenez has made his fair share of horror movies, particularly at the beginning of his career, including Intacto, before making his breakthrough with The Machinist, the 2004 psychological thriller for which Christian Bale famously shed 62 pounds. Set in California, the film was shot entirely in Barcelona, including at the Tibidabo amusement park and in the urban districts of El Prat de Llobregat (near the Fira exhibition centre) and Sant Adrià del Besòs.

It’s also a dark film, thematically and pictorially, which director Brad Anderson likened to noir.

“Barcelona has an extreme dark side,” says Gimenez, who was born and lives in the city. “At the beginning of the 20th century and during the late 1920s there was a lot of anarchism with guns and gangsters in the street.” He’s referring to the series of violent workers’ strikes in the city that culminated in the Spanish Civil War (1936-39) and the rise of Franco as dictator.

“I think Barcelona can perfectly match noir. For Marlowe we were looking for locations with palm trees, Venice beach and suitable architecture.” 

An abandoned paper factory in the city doubled for several locations including the film’s recreation of a Hollywood studio. 

Gimenez went to film school in Barcelona in the late 1980s to study sound. “I never thought I could be a DoP. We had a teacher who taught aesthetics and showed us lots of different films. One of them was Blade Runner, but I remember at that time I didn’t even know what a DP was. I didn’t know about credits or that there was such a profession.

“One time I asked in the middle of class, who does that – who films the movies? Then I learned about the director of photography and began to get really obsessed about it.

“You know, cinematography is a drug. If you talk with lot of different DPs you realise we are absolutely in shock about light and how it is possible to manage it, to train it, how to understand it as a material to create emotions. I was shocked and impressed by this concept – that it is possible to create emotions with light.” 

Like all film school students, Gimenez studied every aspect of production including sound, direction, design and script. In his last year he focused on cinematography.  

“My first idea was to make documentaries, but producers were more interested in the cinematography of my docs than in the docs themselves. That’s when they started to call me and offer me work solely as a cameraman.”

He did his time as second assistant camera on movies including the Bigas Luna-directed films Golden Balls, starring Javier Bardem, and The Tit and the Moon (1994), but his heart wasn’t in it. A friend gave him a book, ‘The Peter Principle’ by Laurence J. Peter, which talked about the straitjackets of conventional hierarchies and promotion.

“I realised I had to jump straight to being a DP, because following on from second to first assistant camera was impossible: my head doesn’t work at this level. Focus pulling is extremely precise work and my soul doesn’t work in this manner. I live too much in abstraction for this. I had to be a DP. And it worked.”

He lensed Anderson’s thriller Transsiberian, starring Woody Harrelson, and shot episodes of Sky’s gothic horror series Penny Dreadful, exec produced by Sam Mendes.

Like many artists he never switches off. He regularly carries a digital stills camera around with him to take pictures of anything that catches his eye, to file away for future use. Usually, these pictures are about light.

“I used to teach film at university and I tried to explain that when you become a DP you have to be a DP 24 hours a day. You have to study every day to discover new forms of light or new concepts of lighting that you find in the street. It’s like being a dancer: you have to be training every day, learning and investigating every day, and not just as a technical process. The difficult thing is the emotion, the connection of lighting and emotion. I always tell my students you have to be practicing every day because what audiences want is changing very fast.”

He seems torn between working up close and personal with the actors, holding the camera, and being further away with the director between takes. The latter is more the norm with digital cameras, which allow a director and DP to monitor a shot from a distance but can detract from a cinematographer’s intimate involvement in a scene.

“I love to work as camera operator but sometimes the movie is too big and doesn’t permit this. It’s most important to be side by side with the director to push a movie, but at the same time I feel the actor doesn’t have quite the same reference to the frame of the camera if you are not with them. It is important as a cinematographer to understand the actor at work.”

His own heroes of cinema include Pasqualino De Santis, the Italian cinematographer who shot Death In Venice and collaborated with Robert Bresson, Joseph Losey and Federico Fellini; the Mexican Emmanuel Lubezki (Gravity, Birdman); and British legend Roger Deakins (Empire of Light), of whom he says, “I am not able to talk about him. The feel of emotion of his lighting is amazing. I can only aspire to reach his level.”


Public Cloud is Ready for Live Production at Scale

NAB

Delivering and monetizing the live experience reliably at scale has always been one of the hardest challenges that media operators, broadcasters and content owners face, but technology providers maintain that their expertise, and the cloud more broadly, are now ready for prime time.

article here

“Delivering live streaming is relatively straightforward, but delivering it at scale, in the highest quality, and with latency equivalent or better than traditional broadcast is where the challenge comes in,” MediaKind says in a white paper, “Live without Limits: Streaming at Scale.”

MediaKind says that live content lags non-linear content in its migration to the public cloud, “and it is likely that much of the very top tier of live production will remain on-premises in a more dedicated environment for many years.”

However, as the ability to produce live content in public cloud matures, it becomes the obvious way for adding flexibility to production capabilities: “no more limits on production due to the number of studios — simply create them on-demand and tear them down again afterwards.”

This fits well with a remote production approach, in turn minimizing both infrastructure and operational production costs, thus enabling a wider range of content to be made available.

“Originally, high-availability and the custom connectivity from broadcasters to their transmission infrastructure were the main reasons to retain it all on premises. However, as availability is now similar to on-premises, and highly-available connectivity is readily possible over IP (for example via SRT — Secure Reliable Transport), there are few remaining arguments as to why the entire chains for both streaming and traditional broadcast cannot be delivered through public cloud.”
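
For a sense of how simple such an IP contribution link can be, here is a minimal sketch in Python driving ffmpeg’s built-in SRT support: it listens for an incoming SRT feed and relays it onward over RTMP. The port, latency value and origin URL are illustrative assumptions, not details from the MediaKind paper.

```python
import subprocess

# Minimal sketch: receive a contribution feed over SRT and relay it to an
# origin over RTMP using ffmpeg's native SRT protocol support. The listener
# port, latency and origin URL below are illustrative placeholders.
SRT_INGEST = "srt://0.0.0.0:9000?mode=listener&latency=200000"  # latency in microseconds
RTMP_ORIGIN = "rtmp://origin.example.com/live/stream-key"       # hypothetical origin

subprocess.run([
    "ffmpeg",
    "-i", SRT_INGEST,   # wait for the encoder to connect and push over SRT
    "-c", "copy",       # pass the compressed stream through untouched
    "-f", "flv",        # RTMP carries FLV-wrapped streams
    RTMP_ORIGIN,
], check=True)
```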

MediaKind makes a case for the viability today of public cloud as the infrastructure for hosting the biggest live sports events.

Among other benefits, the use of public cloud to build production and publishing workflows as needed can radically reduce deployment times, the vendor says.

“Building new processing capabilities using on-premises physical infrastructure can take weeks or more, whereas creating a new channel or production environment can be done in minutes, including monitoring. Automation through orchestration is the way to make this timely and reliable — orchestrate all the components to be instantiated exactly when they are needed, connect them, and go.”

Automation and version management through Kubernetes mean that “it is possible to replicate the exact environment with certainty,” which is a prerequisite for using automation to instantiate and tear down media applications and services with confidence.
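
As an illustration of that instantiate-and-tear-down pattern, here is a minimal sketch using the Kubernetes Python client to spin up a pop-up channel and remove it afterwards. The image, names and namespace are hypothetical, not MediaKind products.

```python
from kubernetes import client, config

# Sketch of the "create on demand, tear down afterwards" pattern described
# above. All names and the container image are hypothetical examples.
CHANNEL = "popup-channel-01"
NAMESPACE = "live-production"

def create_channel():
    config.load_kube_config()  # use load_incluster_config() when running inside the cluster
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=CHANNEL),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": CHANNEL}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": CHANNEL}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="encoder",
                        # Pinning an exact image version is what makes the
                        # environment reproducible on every run.
                        image="registry.example.com/live-encoder:4.2.1",
                    )
                ]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(NAMESPACE, deployment)

def tear_down_channel():
    client.AppsV1Api().delete_namespaced_deployment(CHANNEL, NAMESPACE)
```

Because the manifest pins an exact image version, re-running it recreates the same environment, which is the reproducibility point the white paper makes.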

Yet public cloud environments are very different from the more traditional on-premises, fixed-function world. Cloud technology requires a new set of skills that are not closely aligned with traditional broadcast approaches, meaning either training or recruitment is a prerequisite to making the transition if expertise is to remain in-house.

Hence the pitch for media organizations to work with MediaKind or other tech partners.

“While it is certainly possible to work in a best-of-breed manner, it comes with significant and recurring costs in terms of engineering and operations. Therefore, it will be increasingly typical to leverage vendors’ expertise to deploy and maintain code in public cloud environments, with more of a sub-system approach rather than individual components.”

 


Monday, 13 February 2023

How Prepared Are You for a Deepfake Attack?

NAB

By 2023, 20% of all account takeover attacks will make use of deepfake technology, consultancy Gartner predicts in a new report. It’s time organizations recognized this threat and raised employee awareness, because synthetic media is here to stay and will certainly become more realistic and widespread.

article here

“While deepfakes may have started out as a harmless form of entertainment, cybercriminals are using this technology to carry out phishing attacks, identity theft, financial fraud, information manipulation, and political unrest,” warns Stu Sjouwerman, founder and CEO of security awareness trainer KnowBe4.

According to the Security Forum, criminals can easily manipulate videos, swap faces, change expressions or synthesize speech to defraud and misinform individuals and companies.

“What’s more, people are being bombarded with information and it’s becoming increasingly difficult to distinguish between what’s real and what’s fake,” it warns.

All the elements necessary for the widespread and malicious use of deepfake technology are readily available in underground markets and forums — the source code is public.

“Advanced editing technology, once the exclusive domain of the movie industry, is now available to the average internet Joe,” says Security Forum. “Anyone can download a mobile phone app, pose as a celebrity, de-age themselves, or add realistic visual effects that can spruce up their online avatars and virtual identities.”

Sjouwerman reports that in online forums, criminal organizations routinely discuss how they can use deepfakes to increase the effectiveness of their malicious social engineering campaigns.

No one is immune. Even Elon Musk fell prey to a deepfake video of himself promoting a crypto scam that went viral on social media, in an attempt to manipulate the market.

In 2020, fraudsters used AI voice cloning technology to scam a bank manager into initiating wire transfers worth $35 million.

Deepfakes can also be leveraged as a strategic tool for spreading disinformation, manipulating public opinion, stirring civil unrest and causing political polarization. As a recent example, a deepfake video of Ukrainian president Volodymyr Zelensky urging Ukrainians to lay down arms was broadcast on Ukrainian TV. Fake evidence created with deepfakes can even be planted in a court of law: in a custody battle in the UK, doctored audio files and footage were submitted to the court as evidence.

So how can organizations protect themselves against such attacks? Sjouwerman lays out some advice. He says the key to mitigating deepfake risks is to nurture and improve cybersecurity instincts among employees and strengthen the overall cybersecurity culture of the organization.

Perhaps the best advice then is to run security awareness training sessions to ensure employees understand their responsibility and accountability with cybersecurity.

Employees can be trained to watch out for visual cues such as distortions and inconsistencies in images and video, strange head or torso movements, and syncing issues between face and lips in any associated audio.
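
Some of those visual cues can even be screened for automatically. As a rough illustration, the sketch below (Python with OpenCV) flags frames where a detected face is markedly blurrier than its surroundings, a common tell of a crude face swap. It is a toy heuristic with an arbitrary threshold, not a production deepfake detector.

```python
import cv2

# Crude heuristic for one visual cue mentioned above: synthesised faces often
# show a sharpness mismatch against the rest of the frame. The threshold is
# an illustrative guess, not a calibrated value.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def sharpness(gray_region):
    # Variance of the Laplacian is a standard, cheap focus/blur measure.
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def suspicious_frames(video_path, ratio_threshold=0.4):
    flagged, index = [], 0
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in FACE_CASCADE.detectMultiScale(gray, 1.3, 5):
            face = sharpness(gray[y:y + h, x:x + w])
            whole = sharpness(gray)
            # A face markedly blurrier than its surroundings hints at a swap.
            if whole > 0 and face / whole < ratio_threshold:
                flagged.append(index)
        index += 1
    cap.release()
    return flagged
```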

Other tips can help too. When video conferencing, try this simple trick: ask the participant to wave their hands in front of their face or turn their side profile to the camera. If it’s a deepfake, this will reveal quality issues with the superimposition.

You can also deploy technologies like phishing-resistant multi-factor authentication (MFA) and zero-trust to reduce the risk of identity fraud.

Line Producing Virtual Production: What You Need to Know

copywritten for Sohonet

Virtual production is upending the century-old filmmaking process with a suite of technologies and techniques converging around digital tools and nonlinear workflows. While elements of virtual production such as previz and the creation of VFX assets and LED backdrops have been around for a while, the speed at which they are being combined into a unified workflow for the whole of production can be overwhelming to producers and crafts technicians making the transition. In particular, the difficulties of finding talent with virtual production experience and an understanding of how to work with vendors to get the best out of everyone in this new collaborative environment calls for cool heads and specialised line production skills.

article here

The MESH co-founders walked us through some key steps.

How does pre-production help with success?

Nothing succeeds like prep. Prep reveals efficiencies and forces people to collaborate much earlier on the project. It affects every department. You can do pretty much anything you want on a virtual production stage provided you plan for it.

For example, the camera department needs to better understand what they can and can’t do. That requires a lot of testing which is part of a bigger approach that enables a collaborative conversation between department heads.

Traditionally, arriving on set was the point of collision where camera and production design finally met. That’s why people talked about ‘happy accidents’ in the creative process. Now those ‘happy accidents’ need to be baked in during planning. There is so much going on in a virtual production that winging it is not an option.

Scheduling is a main element. The principles are the same, but with virtual production you drive the conversation by creating a schedule that sets a cadence of delivery, involvement and note-giving for all assets. This requires that people come together in prep in a way that will be unfamiliar to many and may be outside their comfort zone. It also requires that you pay them for their time.

What is the impact of pre-production planning on previously siloed crafts?

Cinematographers now have the power to make decisions in previz that directly impact other crafts, like production design. They can make decisions on designing light sources (windows, perhaps) into the virtual assets. The director must start making hard-wired decisions too. Shooting on green screen is only kicking the can down the road. The discipline required to make decisions and stick to them can be hard for some directors and crews to get to grips with.

This whole process is beneficial to the end product because it is far more proactive than reactive. Pre-viz forces you to come to the table with a collaborative approach. There is no time during principal photography to move things around because of the asset light baking. Moving an asset on set can mean hours of light baking. You want to avoid that if you can.

You’re essentially asking your whole team to step up to what Spielberg and Zemeckis have been doing for years: planning the hell out of it, costing and re-costing, iterating and re-iterating so on the day you just shoot. You can’t get onto a volume without having made some hard decisions a month prior. Filmmakers have to be into that. 

Some things you can move up to the point of being on set and some you can’t. The trick – or the skill – is to know the difference between a reasonable and an unreasonable request.

What are some of the things to check when selecting a volume stage?

All walls are not the same. We view them as instruments that need to be calibrated. If a vendor says their volume stage has been calibrated, we always want to measure it. For example, that means making sure that the colours expected to be sent to the wall are being received by the camera properly. You can’t rely on your eyes. The light emitted from the LED reacts differently with organics than it does with a chip.

 

The size of the pixel pitch is not the main issue. The relationship of the grid to the camera chip is. This is a mathematical problem related to how light waves from the wall are transmitted to the camera and, if not checked and properly calibrated, can result in artefacts like moiré. This calibration is different for every single camera setup and every wall.
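
A back-of-envelope way to see that grid-to-chip relationship: under a simple thin-lens model, the LED pitch imaged onto the sensor shrinks with shooting distance, and aliasing (moiré) becomes likely once it approaches the sensor’s own photosite pitch. The sketch below makes that arithmetic explicit; the numbers and the safety margin are illustrative assumptions, not a substitute for measuring the actual wall with the actual camera.

```python
# Back-of-envelope sketch of the grid-vs-chip relationship described above,
# assuming a simple thin-lens model. All numbers are illustrative only.

def projected_led_pitch_um(pitch_mm, focal_mm, distance_m):
    """Size of one LED pixel as imaged on the sensor, in micrometres."""
    magnification = focal_mm / (distance_m * 1000 - focal_mm)
    return pitch_mm * 1000 * magnification

def moire_risk(pitch_mm, focal_mm, distance_m, photosite_um, margin=3.0):
    # Aliasing (moire) becomes likely when the imaged LED grid approaches the
    # sensor's sampling pitch; a few-photosite margin is a rough rule of
    # thumb, not a standard.
    imaged = projected_led_pitch_um(pitch_mm, focal_mm, distance_m)
    return imaged < margin * photosite_um

# Example: a 2.8 mm wall pitch, 50 mm lens, 6 um photosites. At 4 m the
# imaged grid is ~35 um (safe); at 10 m it is ~14 um (moire risk).
print(moire_risk(2.8, 50.0, 4.0, 6.0))   # False
print(moire_risk(2.8, 50.0, 10.0, 6.0))  # True
```

The maths only tells you where to start; as the MESH founders say, every camera-and-wall pairing still has to be measured.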

What role does remote collaboration play in virtual production? 

Triple-A games have been developed collaboratively for some time, using high-bandwidth connectivity to contribute Unreal assets remotely. Downloading assets, from Quixel for example, is entirely cloud-based today and only enabled by connectivity. We used Sohonet’s full collaboration toolkit on The Mandalorian – with multiple ClearView Pivot, Pivot Lite, and Flex boxes being utilised across post – and I was able to colour time it entirely remotely. Cutting on decentralised edit stations or sharing review sessions is so routine now that it’s almost second nature.

“I was able to colour time The Mandalorian entirely remotely”

So, connectivity is an integral part of the virtual production workflow and one that will only become more important as a means of solving the shortage of talent in media and entertainment. A connected multi-cloud solution will be key to finding the talent to work with around the planet and connecting them in real-time to work with OCF. We are not there yet, but the building blocks are in place. 

“We used Sohonet’s full collaboration toolkit on The Mandalorian”

One of the benefits of virtual production is the economies it delivers over conventional shoots, but can that be quantified?

The equation is project dependent and not quite as simple as removing travel from the bottom line. Certainly, transport to location, accommodation, per diems etc. are reduced. There’s often a small reduction in the cost of production design. A lot of VFX are eliminated and there’s little to no green screen work, but a lot of the budget is reallocated from post into prep. Overall, if you observed all these efficiencies, you could net around 10% savings.

The biggest reduction comes from needing fewer shoot days and, ideally, no reshoots. For example, with virtual production you can control the weather. By identifying efficiencies at the script breakdown stage, you can maximise everyone’s time on the virtual production stage. If you’re looking at $150k per shoot day and you knock two of those off a 20-day schedule, you have saved some pretty serious cash.
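
To make those numbers concrete, here is the arithmetic as a tiny sketch; the remaining-budget figure is a hypothetical, used only to show how the rough 10% efficiency estimate quoted earlier combines with the shoot-day savings.

```python
# Worked version of the numbers quoted above: two shoot days removed from a
# 20-day schedule at $150k per day, plus the ~10% net efficiency figure
# applied to a hypothetical budget for illustration.
day_rate = 150_000
days_saved = 2
schedule_savings = day_rate * days_saved      # $300,000

remaining_budget = 10_000_000                 # illustrative assumption
efficiency_savings = 0.10 * remaining_budget  # the ~10% net figure

print(f"Shoot-day savings:  ${schedule_savings:,}")
print(f"Efficiency savings: ${efficiency_savings:,.0f}")
```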

Can you outline what you mean by savings that are project dependent?

There are particular types of shows that benefit from virtual production. These include all shows that would previously have shot elements on green screen. The superior quality of shooting in-camera VFX should relegate green screen to history.

Many shows with a low to mid-range budget can save money by not paying licences for and setting up at locations – a hospital, a fire station, a mansion – when all can be efficiently shot in the volume. Similarly, those shows can maximize time with a major actor – an A list star perhaps – by shooting several pages over one day at different locations in a volume.

For tentpole series and films, cost savings of a few hundred thousand dollars here and there are less important than having their $200m show up as production value on screen. The imperative is ‘let’s see it on-screen.’ That’s the drumbeat at this point, not cost efficiency.

How will technologies and techniques change in the next 12 months?

In the next year, heads of production will try to enable a certain level of virtual production through their art departments because that is where a culture of spending money up front exists. It’s a very difficult thing for a producer to get their head around managing and spending the VFX money upfront and giving a VFX supervisor so much latitude. Going forward, the art department will get more control and more access.

“VFX supervisors are in charge of the conversation at the moment”

VFX supervisors are in charge of the conversation at the moment because they are really the only ones who can recover a show if there are any issues. Until the art department has access to the same level of vendor and talent relationships, VFX supervisors will remain the trusted partner in delivering the final image.

That said, there is also a sea change in the way the art directors request assets. A certain quality of asset will have to be delivered much faster than before and certainly a lot faster than what VFX are used to. This is currently a source of some resistance, but artificial intelligence can ride to the rescue.

The rapid advance of generative image-making tools (like Stable Diffusion and DALL-E 2) will increasingly play a role here, as will NeRFs (neural radiance fields), a method of generating 3D objects or scenes from 2D images.

We have worked on projects that would have had a completely different cost, had AI tools been available. Today, you can build an asset for a proof of concept in under two weeks using tools such as geophysical survey data, Google Earth, high range satellite photos, LIDAR scans and photogrammetry. The issue is whether you light the final set convincingly, so you sell the story you are telling.

 


How Deepfakes Do a Number on Cybersecurity

NAB

Deepfakes have become increasingly prevalent in politics and the entertainment industry in recent years. However, they now threaten business and enterprise as well. According to one security expert, companies need to have deepfakes on their radar or risk getting burned.

article here

“Deepfakes are a rapidly evolving technology that has the potential to cause significant harm,” says Dr. Edward Amoroso, CEO of TAG Cyber, in a new report. “[Businesses need to] understand the dangers of deepfakes and how to protect themselves and their organizations.”

Deepfake technology, artificial intelligence, and machine learning are moving faster than security teams are evolving. Audio deepfakes are increasingly being used now to hack into company networks to steal large sums of money, impersonate individuals, and even manipulate stock prices.

One general risk is reputational damage. Putting up a fake Tom Cruise is one thing — it’s flagged as a deepfake and it’s clearly designed for fun. But a faked video from a company CEO? That could tank the stock.

“Deepfake technology can be used to deceive viewers or listeners,” TAG’s chief information security officer David Neuman says. “When deepfake technology is used through the cyberdomain to target businesses with false or misleading information, it is likely to have a cognitive influence on leadership decision-making.”

The report explains that the immediate risk is that a company’s existing security team lacks the ability to determine if media is authentic. Most security teams have spent considerable time and resources to build technology stacks and procedures to detect and respond to traditional cyberthreats, not those designed to influence behavior or decisions.

Another risk is that security teams lack defined controls to mitigate the impact. Segmenting different parts of the business can be done proactively to help control the spread of a cyberattack. But how does a company proactively prepare for a deepfake?

“Procedures designed to respond to a deepfake event may not include the right teams or professionals,” says Neuman. “These are likely teams that have never dealt with such incidents and lack a set of operational and business procedures to implement.”

Cyber investigators will also need to develop capabilities to try to determine where a damaging deepfake originated and work with authorities to pursue the perpetrators. New skills, training and education are also necessary for dealing with deepfake technology.

One new twist is fake job applicants. As the paper explains, now that so much work is conducted remotely outside the office, it’s no longer unusual for job interviews to be conducted remotely, and for employees to work for years for bosses they haven’t met and may never meet.

Last June, the FBI issued an alert that warned companies about deepfake job candidates. Complaints along these lines have been growing, the bureau noted. Once criminals obtain employment, they can look for opportunities to steal money and/or data.

Rick McElroy, principal cybersecurity strategist at VMware, says, “Organizations have spent an inordinate amount of money on these controls. Manipulation of the human is the easiest way — it’s the fast forward button.”

How can companies begin to get to grips with the problem?

A good place to start is to develop a threat model and tabletop exercise to understand the gaps and needed capabilities to deal with a deepfake incident.

A threat model is a systematic way of understanding and analyzing potential threats to an organization. It helps to identify, assess and prioritize the organization’s threats and develop strategies for mitigating or managing those threats.

The tabletop is a low-cost and simple way to understand and test the effectiveness of processes, techniques and procedures in dealing with a threat.

“These approaches are used in cyber threat environments today and would be a good starting point for teams to understand how to prepare for the next evolution of cybersecurity,” says TAG’s David Hechler.

Responding to a cyber incident is a team sport with many players: cyber experts, sure, but also technologists, lawyers, communications professionals, CFOs and other stakeholders. It is the same with responding to deepfakes. Teams will need to develop processes to identify business-impacting deepfakes in a timely manner and move to counter them.

TAG also includes a foreword to its report from the Dalai Lama. Pictured with Amoroso, His Holiness writes:

“I read this deepfake publication with great interest — and I deeply appreciate the work that has gone into its development. I offer my prayer that you will cancel your Gartner subscription. This seems consistent with Ancient Wisdom. Divert your dollars to TAG Cyber — and you will be happy. And I’d stay away from Forrester as well. They are better than Gartner, but only a bit. Stick with TAG Cyber. For enlightenment.

“And please do not trust or believe everything you read. It could be a fake. Or a deepfake.”