Wednesday 30 November 2022

“Squid Game” and Calculating the “Value” of Global Content

NAB

Survival-themed horror-drama Squid Game was the most in-demand series debut of 2021 and is on track to become a $2 billion+ bonanza for Netflix through 2027 as further series are rolled out, according to data from Parrot Analytics.

article here

With two more series in the works, Parrot projects that Squid Game will generate more than $2 billion in cumulative revenue by 2027 – not bad considering Netflix reportedly outlaid $21.4 million to produce the first hit season.

This analysis comes courtesy of a new way of measuring the value of content that goes beyond simply counting the viewers who have watched it. The key to increasing the lifetime value of a piece of content is its ability to retain viewers over time, Parrot’s analysis found.

That lifetime value makes Squid Game much more valuable to Netflix, on a profit margin basis, than some of the streamer’s more expensive investments in film, such as Red Notice and The Gray Man, which each cost Netflix roughly $200 million to produce.

“Each film caps out at around $80 million in cumulative revenue for Netflix over the next six years,” reports Axios, “suggesting that pricey streaming films tend not to deliver the same level of lifetime value to streamers as do cheaper, bingeable series.”

Parrot says its Content Valuation measurement system can determine the value of any title for any distribution service by measuring its historic and forward-looking impact on user acquisition and retention for that service, within each market.
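Parrot has not published the formula behind Content Valuation, but the underlying logic – tying a title’s demand to the subscribers it helps acquire and retain, and then to what those subscribers pay – can be sketched in a few lines. All figures and the function below are illustrative assumptions, not Parrot’s actual model.

```python
# Illustrative sketch of a demand-driven content valuation. NOT Parrot's model:
# every number and attribution figure here is a made-up assumption.

def title_value(subs_acquired, subs_retained, monthly_arpu, avg_months_subscribed):
    """Revenue attributable to a title: new subscribers it pulled in, plus
    existing subscribers it kept from churning, times what they pay."""
    acquisition_revenue = subs_acquired * monthly_arpu * avg_months_subscribed
    retention_revenue = subs_retained * monthly_arpu * avg_months_subscribed
    return acquisition_revenue + retention_revenue

# Hypothetical hit series: credited with 2.0m new subscribers and 1.5m retained
# subscribers, at $12/month ARPU, each staying roughly 15 months.
value = title_value(2_000_000, 1_500_000, 12.0, 15)
production_cost = 21_400_000  # reported cost of Squid Game season one
print(f"Estimated lifetime value: ${value / 1e9:.2f}bn vs cost ${production_cost / 1e6:.1f}m")
```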

For example, notes Axios, long-running comedies and sitcoms, like Friends and The Office, tend to be strong retention titles, as do bigger-budget original series such as Stranger Things and The Crown.

For a long time, Hollywood has been operating in the dark about the value certain content drives for streamers.

In a quote to Axios, David Jenkins, creator of HBO Max series Our Flag Means Death, said, “Streaming networks have access to the most granular audience data. Unfortunately, they’ve deemed these analytics off-limits to their partners. This has created a widening power imbalance between companies and creators.”

Parrot’s new metric aims to address this imbalance. “In an increasingly streaming-focused world, viewership alone doesn’t translate into direct growth or subscribers,” said the company’s VP of Applied Analytics, Alejandro Rojas in a release. “Capturing consumer demand through a more comprehensive set of signals of intent is the most effective approach to determine what's valuable and what's not. What does audience demand tell executives about their overall product, why are people subscribing to a streaming service, who is at high risk of churn, how and when do you intervene to impact the number of customers subscribing month after month.”

It is important to note that Squid Game is a non-English-language original. While a dubbed version was offered, most fans and critics found they gained a better understanding of the drama by watching with subtitles.

Indeed, demand for the Korean show remains high. Squid Game recently topped Parrot’s list of non-English-language shows in demand among US viewers, and is part of a wider trend for original stories from outside the Anglo-American cultural hegemony.

According to Parrot Analytics, its Content Valuation metric will help global entertainment leaders decide whether to acquire or produce a title, where it should be released and whether to go theatrical, as well as gauge the value a title contributes to an existing library and the value of an entire library. “It will determine projected value across multiple seasons for a TV show or the value of a film to a streamer five years after the fact, just to name a few,” CEO Wared Seger said.

 

 


“RRR:” Changing the Game for the Global Marketplace

NAB 

Rise Roar Revolt is the Indian movie taking Hollywood by storm. At the start of this year, only a few people around the world even recognized the existence of Tollywood, aka the Telugu film industry. Then came RRR, which single-handedly put Telugu films on the global map, raved Collider: “For many people on the internet, the movie is synonymous with an emotion that describes opulence and celebration of cinema.”

article here

Tollywood is the name for the southern Indian film industry whose growing box-office performance has brought it in line with Mumbai’s Bollywood. Telugu is its main language, and RRR features two of its biggest stars in Ram Charan and N.T. Rama Rao Jr.

The movie itself is a populist retelling of the story of two real-life Indian revolutionaries and their battle against the British Raj, depicting the heroes as freedom fighters against the colonial regime in the 1920s.

Its theatrical run is impressive, clocking up $170 million worldwide, including $14.5 million in North America. That gross is even more impressive since the picture has been available to stream on Netflix since May where it was among the service’s Top 10 most watched titles in America for 14 consecutive weeks. Partly propelled by strong word of mouth among Netflix users, the film was re-released theatrically just a few weeks after initial release and gradually spread to more theatres nationwide during which time it gained audiences far beyond the Indian diaspora.

Dylan Marchetti, president of the distributor Variance Films, estimates that most of the RRR ticket buyers had never before seen a production from Tollywood.

“Most new Indian movies are not marketed to American viewers beyond those who speak the film’s language, and most such films are already screened at national chains like AMC and Cinemark,” noted the New York Times.

There’s so much momentum surrounding the film that there’s even a push for it to be included in the Oscars conversation – and not for Best International Feature (India actually submitted another picture anyway) – but for Best Picture, along with Best Director for S.S. Rajamouli.

The question puzzling the US filmed entertainment industry is: why?

The director himself told Collider: “Covid I’m sure was a factor. When everything got shut down, the whole world started looking into different cultures, absorbing content from different countries, in different languages.”

RRR would not be alone. Just look at the $2 billion in lifetime value that Netflix will amass from existing and future series of the South Korean satire Squid Game.

But Rajamouli is being modest. Reviewers who picked up on the film late in the day – mostly, it seems, to see what all the fuss was about – are raving about its epic, cinematic production values.

“RRR contains more exciting action scenes than all the Marvel movies put together,” beams John Powers at NPR. “Indeed, there's a slow-motion shot that is one of the most jaw dropping moments in the history of cinema.”

The three-hour run time isn’t deterring cinemagoers. The film’s title sequence doesn’t run until nearly 45 minutes into the story.

“It leverages its hefty runtime and captivating story to earn its big moments, and delivers with some of the most imaginative set pieces ever witnessed on the big screen,” is the verdict at National Review – which also likens the experience to a first-time viewing of The Empire Strikes Back.

The director’s template was inspired by Mel Gibson’s Braveheart, Rajamouli revealed to Deadline, adding, “I like that film a lot. The way he enhances the drama before the action is a big influence on me.”

There’s undeniable pleasure in films like Braveheart or Gladiator or The Woman King (which was also inspired by those films) or indeed Star Wars in rooting for an underdog against an imperialist oppressor. Especially if said oppressor gets their violent comeuppance.

“Compared to a stereotypical Bollywood film, RRR is relatively light on music and romance, devoting much of its screen time to visual spectacle, gonzo action, and patriotic zeal,” notes Katie Rife of Polygon, “At its core, this is a story about people fighting for their beliefs against impossible odds. It’s about perseverance and the power of working together toward a common goal. Those themes are universally relatable — as is the giddy thrill of watching racist forces of imperial oppression get exactly what’s coming to them.”

While there are copious VFX, the old school values of a Hollywood epic are on display too, including filling the screen with hundreds of extras backed up by a crew of 700. Collider is not the only publication to note the thrill of seeing something new in the company of strangers.

“Indeed, in these days when the box-office is way down, movie chains are wobbling, and experts wonder whether the movies will even survive, RRR makes the case for returning to theaters. It reminds us that movies are always more thrilling when they're part of a collective experience, when you can share the excitement with the people around you.”

Cinematographer K.K. Senthil Kumar, ISC, selected the Arri Alexa LF with Signature Primes, a package suitable for IMAX. In a first for an Indian film, RRR was also released in Dolby Cinema, a format that incorporates Dolby Vision and Dolby Atmos.

“We still don’t have Dolby Vision theaters in India, but we thought this would be the best way to preserve the film for the future,” Senthil told the ASC.

The bulk of the film was shot in Hyderabad, Telangana, the epicenter of Tollywood. Sets were constructed at Alind Aluminum Industries Limited — an industrial complex repurposed for film production — as well as Ramoji Film City and on location in Gandipet. Some scenes were shot in 2021 at the official palace of Ukrainian President Zelensky, since Ukraine was one of the first countries to reopen to filming.

“It’s a beautiful place and we had a wonderful experience shooting there,” Senthil said.

The finishing work for RRR was performed in Hyderabad, at ANR Sound & Vision at Annapurna Studios, by colorist Bvr Shivakumar, who delivered a 4K master.    

Perhaps RRR’s success is as simple as it being a good story, well told. Rajamouli thinks so.

“We can all agree that, basically, a good story is a good story across the world,” he told IndieWire. “But the way the audience perceives it depends on the sensibilities of the culture and the people. I can’t pinpoint why it happened, but I would say a part of it is that Western audiences are not getting the full-blown action of [Indian] movies. Maybe Hollywood movies aren’t giving them enough of that. That’s what I gather when I look at the response.”

Learning the Basics of Video Translation

NAB

The language services, accessibility and translation markets are big business — and demand is rising. According to Statista, the market was worth more than $56 billion in 2021, and another researcher, Cision, believes it will reach $96.1 billion by 2027.

article here

Language services encompass the set of language assistance solutions that offer varying degrees of interpretation, translation, comprehension, localization, and other training services. They include a wide range of electronic, written, and multimedia materials for transcription, dubbing, narration, and voice-over. When it comes to M&E, demand is rising in part because of the local-global expansion of streaming service providers.

Here are some of the use cases, production methods and considerations related to video translation, courtesy of technology provider Verbit.

There’s a distinction between subtitling and captioning. The two solutions appear as text on screen but address different viewer needs. When a viewer activates captions, they appear in the same language as the original content. That’s because captioning is a solution focused on accessibility, whereas subtitles are about translating into a new language.

In addition, captions interpret nonverbal sounds. If there’s a knock at the door or someone honks their horn, the captions convey that message. Captions also indicate when there is music playing and may state the artist and song or the tone. For instance, [eerie music] or [upbeat music] may appear on the screen to give the viewer more context.
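For a concrete sense of the difference, here is a minimal sketch that renders the same cue twice in the common SubRip (.srt) layout: once as a caption in the original language with a non-verbal cue, and once as a translated subtitle. The timings and dialogue are invented for illustration.

```python
# Minimal sketch: the same cue as a caption (original language, with a
# non-verbal cue) and as a subtitle (translated). Timings and text invented.

def srt_block(index, start, end, text):
    """Format a single cue in SubRip (.srt) style."""
    return f"{index}\n{start} --> {end}\n{text}\n"

# Caption: same language as the audio, plus non-speech information.
caption = srt_block(1, "00:01:04,000", "00:01:07,500",
                    "[eerie music]\nDon't open that door.")

# Subtitle: dialogue only, translated for a new audience (Spanish here).
subtitle = srt_block(1, "00:01:04,000", "00:01:07,500",
                     "No abras esa puerta.")

print(caption)
print(subtitle)
```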

Captions are a key accessibility tool for people who are deaf and hard of hearing, although many other viewers prefer to watch with captions on for various reasons.

The process of dubbing only impacts the audio file of a video. If the film has background music or non-verbal sounds like dogs barking or glass breaking, that audio also remains. However, translators will switch out any dialogue and narration from the original language to another language. In some instances, the filmmakers will even alter the gestures and mouth movements of the actors to match the new audio. The aim is to create an illusion that the characters are talking in the language of the target audience.

When users transcribe video to text, they create a written version of the audio content — useful for record-keeping or to preserve an interview. Having a searchable transcript makes it easy to jump to various points within a video. With the rise in digital content, there’s also a growing need to translate subtitles for online video translation.

Manual transcription and translation take significant time and are often inefficient. However, fully automated processes, including those using AI, can leave users with poor-quality results. Verbit advises a hybrid process in which AI produces a first draft of the transcript to streamline the work, with accuracy then ensured by having professionals review each transcript or captioning file.
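Verbit’s own tooling is not described in detail here, but the hybrid idea of an AI first draft followed by human review can be sketched with an open-source speech recognizer standing in for the automated step. The file names are hypothetical and human_review() is only a placeholder for a real review workflow.

```python
# Sketch of a hybrid transcription workflow: AI first draft, human review after.
# Uses the open-source Whisper model as a stand-in for the ASR step;
# human_review() is a placeholder for whatever review tooling a team uses.

import whisper


def ai_first_draft(audio_path: str) -> str:
    """Produce a machine-generated draft transcript."""
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    return result["text"]


def human_review(draft: str) -> str:
    """Placeholder: a professional corrects names, punctuation, speaker labels
    and non-verbal cues before the file is published."""
    print("Draft for review:\n", draft)
    corrected = input("Paste corrected transcript (or press Enter to accept): ")
    return corrected or draft


if __name__ == "__main__":
    draft = ai_first_draft("interview.mp3")  # hypothetical source file
    final = human_review(draft)
    with open("interview_transcript.txt", "w", encoding="utf-8") as f:
        f.write(final)
```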


Remote, on-location or cutting room? The editorial conundrum for producers and editors

copy written for Sohonet

A lot has changed, or jumped forward, in the last few years, including the physical location of the core craft of editing. If nothing else, the enforced arms-length workflows of the pandemic proved the viability of the technology for remote editorial. It is no longer a necessity, but with increasing pressure on travel budgets, productions seem open to adopting a remote-from-home workflow where it makes sense.

article here 

Cheryl Potter (The Lord of the Rings: The Rings of Power, Hanna, Snowpiercer) notes the question comes up regularly now when she interviews for a show.

“It’s part of the conversation. The question would be framed as, ‘Where do you like to work? We don’t have a preference, but we want to know where you stand.’”

This question could be tricky for some editors to answer. Most times the producer will be genuinely keen to offer choice and ensure the best possible solution is found for their team. But editors can be forgiven for being sensitive to responding in any way that might impact their chances of landing the job.

“Honestly, how it works will depend on the show and those working with you on it,” says Potter. “If the showrunner communicates best in person, then their preference is going to be for the editor to be in the room so for the editor it’s going to be very important to make sure that that is at least part of how you’re going to work.

“Some creatives don’t come into the cutting room that much anyway. They might prefer to watch dailies on their own or they want you to send them cuts to watch and send feedback. Perhaps they are travelling or busy prepping another show and they’ll insist on remote reviews and feedback sessions. I worked on shows that way long before the pandemic.” 

Potter was working in editorial on the first season of HBO science-fiction drama The Nevers in London in 2020 when Covid brought production to a halt. Her experience will chime with many who were working during this period.

“We were cutting out of the studios where they were shooting when the first wave of Covid hit,” she recalls. “They tried to reschedule certain scenes with fewer people but eventually we were forced into lockdown. Fortunately, all my episodes were in the can, and I could continue for several months in my spare room, with my Avid and ClearView Flex.”

With her episodes finished, Potter – a native Australian – was on a plane to New Zealand to join the team making The Lord of the Rings: The Rings of Power. When she arrived, she spent two weeks in quarantine, but since New Zealand had managed to lock down fast enough to contain any outbreaks early on, she found the production itself to be pretty normal. Later, that changed as outbreaks required weeks of remote working once again.

“Pre-Covid, my experience was that creatives usually wanted editorial close so that your showrunner or director can easily pop in just to check or discuss. You’re going to have to spin up a cutting room somewhere so why not spin it up next to set? That was the norm. But now, post-Covid, it really does feel like a lot of productions are happy for you to work remote during the shoot.”

In fact, Potter has worked on entire shows where her output has been sent out, notes received back, then notes acted on and resent; all accomplished without the lead creative ever having been in the cutting room.

And even though some editors like to be close to set and directors may want their editor close, there are times a production may look to save cost by not flying editorial out to location, instead setting up a central cutting facility – in London’s Soho, for example. It’s in those cases, Potter notes, that it should come down to personal preference. “There will be some people who prefer the convenience of working from home. If you’re not going to be visited in person by producers, then there’s no benefit to going into town when you can just as easily do your job at home.”

It’s important to note there’s a creative cost, too, from the loss of interpersonal communication. Many editors and directors/showrunners feel inspiration can only be sparked by being in the same room together.

“Playing back a sequence to someone so you get the mood of the room is not something you can do remotely,” says Potter. “You don’t get that gut feedback. Even if they’re not saying anything, you can feel if someone is enjoying it or if it’s dragging and needs to be paced faster. That is a very visceral feeling, something that you can’t quite put your finger on.”

She adds, “Being able to get anywhere close to that working remotely comes down to good communication. It requires someone who can watch sequences on their own and then communicate those things to you that you would have got by sitting next to them.”

“If a production deems it very important for me to work from home, I hope they’d support me with the right gear just the same way as if I were working in a cutting room. Solid communications technology is a minimum requirement of remote working,” she adds.

If remote-from-home does become the rule rather than the exception, the industry must surely be concerned with how this impacts the next generation of talent.

Potter is passionate about understanding and enabling how the next generation of editors will be brought into the craft. “You have to wonder how the up-and-coming assistant editors are going to learn to do their jobs if they are at home on a computer and not gaining that invaluable experience of being next to the editor in the cutting room,” says Potter. “The pathway to becoming an editor begins as an assistant first, where you watch the editor run the room and interact with the other creatives. Being present in the room, where the decisions are made and interpersonal relationships are built over eating lunch together and being around each other simply can’t be replicated remotely. This is something that won’t become apparent for a few years but it certainly should be addressed now.”

The need and desire for the editor’s physical presence on set, in a cutting room or remotely, will continue to evolve. Flexibility, knowledge of the production’s expectations, and accessible technology are needed to lock in the best possible experience. Properly nurturing and educating the next generation of editors is a critical part of how the future unfolds.

 


Tuesday 29 November 2022

Machine Learning: How MoMA’s New AI Artwork Was Made (Trained)

NAB

MoMA is exhibiting a new digital artwork that uses artificial intelligence to generate new images in real time, and some critics think it’s alive.

article here

The project, by artist Refik Anadol and titled Refik Anadol: Unsupervised, uses 380,000 images of 180,000 art pieces from MoMA’s collection to create a stream of moving images.

“It breathes,” Fast Company’s Jesus Diaz gushes, “like an interdimensional being… this constant self-tuning makes the exhibit even more like a real being, a wonderful monster that reacts to its environment by constantly shapeshifting into new art.”

To be fair, Diaz was being shown around by the artist himself, who says he wanted to explore how profoundly AI could change art. In an interview for the MoMA website, alongside Michelle Kuo, Paola Antonelli and Casey Reas, Anadol shares, “I wanted to explore several interrelated questions: Can a machine learn? Can it dream? Can it hallucinate?”

To which the answer is surely no. But if nothing else, Unsupervised has succeeded as art should in feeding the imagination.

The display is “a singular and unprecedented meditation on technology, creativity, and modern art” which is focused on “reimagining the trajectory of modern art, paying homage to its history, and dreaming about its future,” MoMA states in a press release.

Most explanatory blurbs for an artwork are deliberately vague, lacking in the basics of simple comprehension. In this case, the work is described by Anadol as a “Machine Hallucination” that brings a “self-regenerating element of surprise to the audience and offers a new form of sensorial autonomy via cybernetic serendipity.”

To understand what Unsupervised means, you have to understand the two main methods by which current AIs learn. Supervised AIs — like OpenAI’s DALL-E — are trained using data tagged with keywords. These keywords allow the AI to organize clusters of similar images and, when prompted, generate new images based on what it has learned. Unsupervised models, by contrast, are given no labels at all.

In this case, the AI was left to make sense of the entire MoMA art collection on its own, without labels. Over the course of six months, the software created by Anadol and his team was fed 380,000 high-resolution images taken from more than 180,000 artworks stored in MoMA’s galleries, including pieces by Pablo Picasso, Andy Warhol and Gertrudes Altschul.

The team created and tested various AI models to see which one produced the best results, then picked one and trained it for three weeks.

Crafting the neural network and building the training model to create Unsupervised is only half of the story. 

To generate each image in real time, the computer constantly weighs two inputs from its environment. According to Diaz, it references the motion of visitors, captured by a camera set in the lobby’s ceiling. It then plugs into Manhattan’s weather data, obtained by a weather station in a nearby building.

“Like a joystick in a video game, these inputs push forces that affect different software levers, which in turn affect how Unsupervised creates the images,” Diaz describes.
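Neither MoMA nor Anadol has published the exact mapping, but the “joystick” idea of environmental signals nudging the generator’s latent input can be sketched as below, assuming a StyleGAN-like model that takes a latent vector. The motion and weather gains are invented for illustration.

```python
# Conceptual sketch of environment-driven generation, assuming a StyleGAN-like
# generator that consumes a latent vector. The mapping from lobby motion and
# weather to latent offsets is invented for illustration.

import numpy as np

LATENT_DIM = 512
rng = np.random.default_rng(0)

base_latent = rng.standard_normal(LATENT_DIM)        # current point in latent space
motion_direction = rng.standard_normal(LATENT_DIM)   # direction nudged by visitor motion
weather_direction = rng.standard_normal(LATENT_DIM)  # direction nudged by weather data

def next_latent(visitor_motion: float, temperature_c: float) -> np.ndarray:
    """Blend environmental signals into small latent offsets, like a joystick."""
    motion_strength = 0.05 * visitor_motion           # made-up gain, motion in 0..1
    weather_strength = 0.05 * (temperature_c / 30.0)  # made-up gain
    z = (base_latent
         + motion_strength * motion_direction
         + weather_strength * weather_direction)
    return z / np.linalg.norm(z) * np.sqrt(LATENT_DIM)  # keep a typical magnitude

frame_latent = next_latent(visitor_motion=0.7, temperature_c=12.0)
# frame_latent would then be fed to the generator: image = G(frame_latent)
```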

The results probably need to be experienced before judgement can be passed.

“AI-generated art has arrived,” says Brian Caulfield, blogging for NVIDIA, whose StyleGAN forms the basis for Anadol’s AI.

“Refik is bending data — which we normally associate with rational systems — into a realm of surrealism and irrationality,” Michelle Kuo, the exhibit’s curator, explains to Zachery Small at The New York Times. “His interpretation of MoMA’s dataset is essentially a transformation of the history of modern art.”

In his interview for MoMA, Anadol even has the chutzpah to compare his work to breakthroughs in photography.

“Thinking about when William Henry Fox Talbot invented the calotype, and when he was playing with the early salt prints, pigmentation of light as a material — working with AI and its parameters has very similar connotations: the question of when to stop the real, or when to start the unreal.”

For example, Unsupervised is able to draw on the vast array of digital representations of color from artworks on which it was trained, and from that, it seems, play back colors of its own.

Anadol imagines looking at historic paintings like Claude Monet’s Water Lilies, and remembering their richness and personality. Now imagine the data set based on these works, one that considers every detail that your mind cannot possibly hold.

“Because we know that that EXIF [exchangeable image file format] data that takes the photographic memory of that painting is in the best condition we could ask for,” Anadol comments. “I think that pretty much the entire gamut of color space of Adobe RGB most likely, exists in MoMA’s archive. So we are seeing the entire spectrum of real color but also the machine’s interpretation of that color, generating new colors from and through the archive.”

Speaking to Diaz at Fast Company, David Luebke, vice president of graphics research at NVIDIA, says simply, “Unsupervised uses data as pigment to create new art.”

Digital artist and collaborator Casey Reas offers another perspective for how we should think about an AI, rather than it somehow being conscious.

“What I find really interesting about the project is that it speculates about possible images that could have been made, but that were never made before,” Reas says. “And when I think about these GANs, I don’t think about them as intelligent in the way that something has consciousness; I think of them the way that the body or even an organ like the liver is intelligent. They’re processing information and permuting it and moving it into some other state of reality.”

Anadol and the exhibit curators would have us think that the art world is in a new “renaissance,” and that Unsupervised represents its apex.

“Having AI in the medium is completely and profoundly changing the profession,” the artist noted. It’s not just an exploration of the world’s foremost collection of modern art, “but a look inside the mind of AI, allowing us to see results of the algorithm processing data from the collection, as well as ambient sound, temperature and light, and ‘dreaming.’ ”

Of course, this is only the tip of the iceberg. Much more is coming. Modern generative AI models have shown the capability to generalize beyond particular subjects, such as images of human faces, cats, or cars. They can encompass language models that let users specify the image they want in natural language, or other intuitive ways, such as inpainting.

“This is exciting because it democratizes content creation,” Luebke said. “Ultimately, generative AI has the potential to unlock the creativity of everybody from professional artists, like Refik, to hobbyists and casual artists, to school kids.”

 


Michelangelo vs. Machine: Creativity in the Age of AI

NAB

In 2022, AI was most certainly a gimmick and a headline grabber. In 2023, the story will be different.

article here

“People will start using AI in their workflow because it makes sense,” award-winning director Karen X Cheng says in an interview.

In an a16z podcast, host Steph Smith talks with Cheng about her use of generative AI tools like DALL-E, Midjourney and Stable Diffusion. Cheng has more than a million followers and almost everything she creates goes viral — including a video of her becoming a lawnmower (yes, that’s right), an AI-generated magazine cover for Cosmo, and a DALL-E fashion show.

“It is so much harder to make a video go viral than it was 10 years ago,” she says. “The way to do it now is you have to have a following. It’s not so much about trying to make something viral but about building an established follower base so that the number of people who see your work steadily gets higher.”

For Cheng, the secret ingredient that unlocked viral videos in the age of the algorithm was to produce a behind-the-scenes look at how she made a piece of content, posting it alongside a new release.

Recently she has successfully created content for sponsored partners using generative AI.

“I had to find new toys to play with,” she says. “I noticed the insane stuff in AI white papers and what researchers were doing but it’s not their job to explore AI storytelling or cinematic potential.

“So, I started experimenting by taking the research from white papers and making them into social media-friendly videos.”

Aside from the leading text-to-image tools, Cheng also uses AIs that require more technical knowledge but enable niche techniques. DAIN, for example, applies artificial slow motion to video. She used it on a stop-motion video of herself lying on the lawn to smooth the action so she appears “as if she were a lawnmower.”

It went viral.

NeRF technology uses any camera to scan a scene, constructing a 3D light field so that the light changes realistically as the view moves. “That’s why it can handle mirrors,” she explains, “whereas traditional photogrammetry cannot.”

Each AI tool has a specific purpose: “Where it becomes more interesting is when you combine them. For example, you can generate an image in DALL-E and then use the CapCut app to turn it into a 3D image. Since image synthesizers don’t do human faces very well, you can run them through Facetune — an app that will fix it.”

Cheng feels that AI art will significantly lower the barrier to entry to becoming an artist. “To be an artist you [historically] have to have a lot of time, a lot of training and sometimes the money to do that. Now, everyone can do it. Image synthesizers take the artistic skill of artists and give it to everyone. There will still be standouts — they will be the ones finding different or creative innovations with the ability to combine things in different ways.”

As it’s popularly conceived, AI is all-powerful and will replace humans. Cheng says she feels a responsibility to portray the tools in an optimistic light.

“I’ve had to unlearn a lot of my bad habits. As a trained viral video creator, I am rewarded for making clickbait headlines. My first instinct was to make a bunch of human versus machine videos. Then I realized that will just freak people out,” she comments.

“There are legitimate reasons to be worried about AI and it will negatively impact some people more than others but AI can be used for good or bad. The media will push toward the bad because that gets clicks and views.”

Having amassed a sizeable following, Cheng says she felt less pressure to make clickbait and decided to make videos depicting AI in a positive manner.

“It does feel like a collaboration. You often get results back that you didn’t expect and which prompt you to go down that rabbit hole.”

However, she doesn’t deny that AI will negatively impact many people. The introduction of AI into the creative industries for example will put “incredible downward pressure” on prices where the vast majority of people will lose out.

“I do worry about that for creators and I don’t know how it will play out. If you hire a human to use AI as a tool then you pay the human.”

Cheng advises, “I would say the best thing to do is not to learn a specific skill, because technology is changing so quickly, but to adopt a specific mindset. You have to accept that the model humans had, which is to choose a career and have it for life, is gone. The sooner you can accept that the world is always changing, the better off you will be.”

AI-powered video tools for creators are on the way — but aren’t here yet. “That’s why I’ve been doing so much hacking of AI tools because it’s not quite there yet. Once it does though, be careful what you wish for. My wish is for humans to take the ethics of AI very seriously. By which I mean that everyone working on AI be held to a standard to use AI for positive force rather than negative. I hope society finds a way to seriously penalize those who use AI negatively.

“For example, deception is bad. If you alter things you need to disclose what and how.”

Do we need to label things as AI or human-generated?

“It will be necessary and will be similar to nutrition facts on food packaging,” Cheng suggests. “There will need to be a universal standard that shows this is a video produced in such and such a way especially if the message in the video is very important (like a political video, rather than a social influencer’s vlog).

“I would love to see a culture develop where, as part of being human, we use technology responsibly and ethically.”

 


How Exactly is Generative AI a Gamechanger for Creatives?

NAB

The ability to generate music, images, and even video by algorithm may be ethically or even aesthetically controversial, but financially speaking there’s no argument.

article here

In a pure business context, generative AI changes the economic calculus — massively.

That’s according to venture capital firm a16z, a group of investors who normally place their bets on emerging Web3 tech like blockchain, DAOs, crypto and NFTs. Now they report that generative AI is seeing the fastest uptake by developers they’ve ever seen.

In particular, they highlight the popularity of AI tool Stable Diffusion, noting the almost daily launch and funding announcements of start-ups using the technology. Online social networks are being flooded with content created by generative models.

The argument couldn’t be clearer for any company working in the creative arts.

“Any custom artwork or graphic design project will likely take days or weeks, and will cost hundreds, if not thousands, of dollars. Using generative AI is easily four orders of magnitude cheaper and an order of magnitude faster.”
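Taking the round numbers in that quote at face value, the arithmetic works out roughly as follows; the dollar and turnaround figures are illustrative assumptions rather than a16z’s own data.

```python
# Back-of-envelope arithmetic for the a16z claim, using illustrative figures.
human_cost_usd = 1_000      # assumed cost of a commissioned graphic ("hundreds, if not thousands")
human_turnaround_days = 5   # assumed turnaround ("days or weeks")

ai_cost_usd = human_cost_usd / 10**4              # "four orders of magnitude cheaper"
ai_turnaround_days = human_turnaround_days / 10   # "an order of magnitude faster"

print(f"AI image: ~${ai_cost_usd:.2f} in ~{ai_turnaround_days} days, "
      f"vs ~${human_cost_usd} over ~{human_turnaround_days} days for a commission")
```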

a16z compares the use of generative AI for image creation with programming code. While code generation can benefit from an AI boost, even a very basic functional program still requires reviewing, editing and testing by humans.

It puts this down to the fact that programming code requires absolute accuracy — unlike producing a piece of content where the outcome often depends on serendipity and the blending of ideas that may not be planned.

Generative AI for art is not yet perfect. It will require some degree of user supervision. But the writers say it’s hard to overstate the difference in economics versus coding that’s created by an image model’s ability to mimic an artist’s output.

“For a basic image, entering a prompt and picking an image from a dozen suggestions can be done in under a minute.”

All of which seems to undo the blood, sweat, tears and years of artwork by Michelangelo on the Sistine Chapel, or Van Gogh’s incredible final pictures of wheat fields in Arles.

Their argument is not that computers are necessarily better than humans but, as with so many other automated tasks, “they just kill us on scale.”

a16z doubles down, saying that the impact of generative models on creative work output, such as image generation, is extreme.

“It has resulted in many orders of magnitude improvements in efficiency and cost, and it’s hard not to see it ushering in an industry-wide phase shift.

“The massive improvement in economics, the flexibility in being able to craft new styles and concepts, and the ability to generate complete or nearly complete work output suggests to us that we’re poised to see a marked change across all industries where creative assets are a major part of the business.”

Examples include the ability for generative AI to help with level design for video games. In marketing, “it looks poised to replace stock art, product photography, and illustration.”

There are already applications for AI tools in web design, interior design, and landscape design.

When it comes to Hollywood and the major record labels, a16z makes the point that a large amount of output is formulaic, taking an idea that sold well and re-spinning it across franchises. Its argument for the use of AI is only as cynical as the industry has always been:

“It also may be the case that combining and recombining all prior art may be sufficient for the practical range of creative outputs. The music and film industries, for example, have historically produced countless knock-offs of popular albums and movies. It’s entirely conceivable that generative models could help automate those functions over time.”

The VC company goes further, suggesting that the melting pot of data that an AI is fed on could come up with something unique and fresh.

“It’s not difficult to envision an AI model producing genuinely interesting fusions of musical styles or even ‘writing’ feature-length movies that are intriguing in how they tie together concepts and styles.”

It ends: “We believe generative AI is strictly a positive tool for extending the reach of software — games will be more beautiful, marketing more compelling, written content more engaging, movies more inspiring.”

 


Phantom VEO 4K demand scales new heights

copy written for VMI

article here


Nothing beats seeing the elegance of a beautiful action slowed down to reveal so much more than we can see with the naked eye. 

 

Super slow motion delivers the critical ‘money shot’ required for many productions. No natural history production is complete without a 1000fps shot of amazing action – such as the shark attack on a seal recorded for BBC Planet Earth in 2017.

 

The Phantom Flex 4K camera has been the gold standard for capturing these shots ever since its release in 2013. It allowed cinematographers to shoot 1000fps in 4K RAW with a Super-35 sensor, delivering image fidelity that had never been seen before – and producing the kind of shot that every natural history production now requires.

 

Released in 2018, the Phantom VEO 4K offers the same image quality as its bigger brother but in a smaller, lighter form factor. Unlike its power-hungry predecessor, the VEO 4K draws only 80W, allowing it to run on regular camera batteries, and its operation no longer requires a DIT in tow. On location, the VEO 4K can be operated with a smaller crew, and the DP/operator alone is able to control the camera and perform the transfers.

 

Its size and weight clearly make it better suited to productions that need to travel extensively with kit and crew into the field. However, the VEO 4K did have a wrinkle in its workflow which may have dissuaded some DPs from using it in place of the Flex: the speed of its media transfer. This is slower than the Flex if you’re not aware of the right tips and tricks to optimise its performance – and knowing those is the key to its success.

 

The VEO 4K’s winning strategy is to partition its high-speed memory, enabling the camera to act like three separate cameras, simultaneously capturing, trimming and offloading media. This speeds up the entire operation, turning the portable CF2 capture of the VEO 4K into a very efficient location production tool. But to make the most of this, operators need to be shown how to use it in this capacity.

 

VMI were early investors in the Phantom VEO 4K back in 2018 and ran several workshops in London and in Bristol to teach operators how to optimise it in the field. We’ve also spread the word by making videos and writing articles to help inform the industry.  

 

And guess what? It seems to have worked. Today, VMI runs a fleet of five Phantom VEO 4Ks (with another on order), making it the largest UK rental supplier of these specialist cameras.


They have been used on Natural History shows such as:

· Gangs of Lemur Island made by True to Nature for Smithsonian Networks

· Tiny World made by Plimsoll Productions for AppleTV+

· Super/Natural, another Plimsoll commission for Disney+

 

As well as numerous commercial productions including:

 

· Manolo Blahnik campaigns

· Awesome stunts

· Shoe brands

· Time-warp creative concepts

 

They are increasingly being used in the studio too, since the VEO 4K includes a 10Gbit Ethernet port, which was sadly absent on the Flex. This allows media to be pulled off the camera more quickly than offloading to CineMags, when operated by a skilled DIT with a suitably fast computer.

 

This camera comes into its own when configured for portable productions, since it is lightweight enough to be mounted on a small jib or gimbal, frugal enough to be powered for long periods and is entirely remote controllable. Having got to grips with all of its capabilities, producers and DPs have rapidly come to this conclusion too.

 

Five years is a long time and the Phantom VEO 4K shows no signs of slowing down.

 


Monday 28 November 2022

Why M&E Companies May Be “Reticent” About Adopting Full Public Cloud Deployment

NAB

The benefits of cloud solutions are widely understood by video service executives, but Amazon Web Services says migration to public cloud isn’t happening at a fast enough rate for M&E companies to fully take advantage of them.

article here

Naturally, AWS has a vested interest in pushing this message and says operational complexity is being exacerbated while M&E companies adopt halfway-house strategies involving on-prem, private and public cloud.

In a new study, “Expediting Cloud Transformation in the Media Industry with Marketplaces,” AWS commissioned research analysts Omdia to evaluate the potential of cloud marketplaces to expedite the cloud transformation of media operations. Omdia interviewed 66 global senior decision makers from video service providers, asking them how and why they are utilizing cloud services.

The list of external factors influencing technology strategy is topped by the lasting impact of COVID (48%), closely followed by the rise of direct-to-consumer (D2C) streaming (45%) and, related to both, the changing nature of viewer behavior and expectations (35%).

As the report points out, the pandemic hastened the use of cloud infrastructure to support business continuity by enabling remote working capabilities. It also acted as a catalyst in changing consumer behavior and facilitating the acceleration of D2C streaming as stay-at-home measures meant people sought online entertainment.

These changes to consumer behavior and entertainment needs are driving a shift in technology strategy, Omdia finds. The overall picture is one of managing the cost of content production, with a focus on technologies that aid the monetization of assets. Operations are migrating to the cloud, improving both financial and operational flexibility and giving content owners and distributors tools to pivot quickly in response to customer demand.

Budgets are being adjusted to promote these enabling technologies, though some functions are being prioritized over others. Online distribution and AI/ML will be key areas for technology spending, with the highest proportion of surveyed respondents stating an expectation that spending on these areas will increase 6% or more in the next 18 months. Both online distribution and AI/ML are also the perfect candidates for optimization through cloud-based technologies.

Reducing the overall spend on technology is stated as the main reason for cloud deployment by 36% of respondents. Moving away from CAPEX-intensive investment is also highly rated, along with scalability.

However, cloud implementation is progressing at different speeds. Processes like archiving, storage, and media asset management are much further ahead than others. These are “more naturally cloud-aligned” aspects of the media enterprise that might require significant storage and can be scaled quickly and accessed from anywhere.

AWS also says live content production is expanding into cloud as a direct result of the pandemic.

In contrast, AI/ML has the lowest level of cloud deployment due to its technical challenges. This remains “one of the most highly anticipated technologies and a key future development in terms of workflows and creating highly personalized services.”

The variation in the level of cloud deployment is matched by the mix in cloud implementation type. Rather than an outright preference for one, most content providers are utilizing a hybrid model of appliances, private cloud, and public cloud — which AWS criticizes saying it “adds operational complexity.”

These differences are at a function level too, with 56% of respondents using direct public cloud for live content production, while just 40% use it for AI/ML. Private cloud remains highly used across most media delivery functions (45-60% of respondents), while managed service providers are used by fewer than 15% of organizations across all operations.

“This complexity combined with the pace of change in the media and entertainment landscape has led to technological skills and knowledge gaps, which can be a hindrance to cloud deployment,” AWS states.

Despite the technical challenges, the benefits of using cloud to support growth are said to be extensive and well understood by users in the M&E industry.

Satisfaction is high, with the vast majority of surveyed respondents — over 80% — stating the overall impact of using cloud technology was either positive or very positive.

“Therefore, the real challenge, and opportunity, for the industry is pushing more functions into the cloud, at a faster rate,” AWS comments.

Looking to the future, it seems broadcasters will continue to seek a mix of public and private cloud. AWS attributes the reticence toward full, public cloud deployment to the “multi-layered cloud ecosystem.”

There are several public cloud infrastructure providers, it explains, a multitude of complex M&E applications, and an implementation structure that can often involve working and deploying directly through a number of different technology vendors.

“This not only slows cloud implementation, but can also result in silos of data, hindering the ability to achieve a fully connected workflow — a key asset for developing monetization strategies.”

However, AWS then highlights a solution to this issue in the role of public cloud marketplaces (AWS Marketplace, Google Cloud Marketplace, Microsoft Azure Marketplace).

“Marketplaces are going to be a key element of future technology strategy for content providers in the media industry,” AWS contends, “though their approaches are still essentially a hybrid of available options while in this transition phase.”

Cloud marketplaces are presented as making it easier to both implement and leverage the benefits of the cloud, along with the scalable pricing structures of SaaS (Software as a Service) models.

“Media companies can license industry specific software then integrate it with other on-prem or cloud-based solutions or leverage integration services from any number of vendors. These platforms create a single source for content providers to access and test an array of advanced tools for the development of efficient processes while also supporting growth.”

AWS urges more vendors to host their software on these marketplace platforms saying that this could have “a snowball effect” resulting in a more “unified approach,” which would bring benefits to everyone.

“Using marketplaces can push forward the use of public cloud across under-penetrated and technically difficult functions. This would help to leverage the myriad of benefits that cloud can bring, but also better connect often isolated data silos, all of which can put the industry in a much stronger position going forward.”

 


Sunday 27 November 2022

How to Manage This New Creativity “Supply Chain”

NAB

The Creator Economy is depicted as a burgeoning opportunity – for a few. Yet the millions of people unable to sustain a living from creating and distributing their work online may in fact be the biggest untapped market of them all.

article here

So says Michael Mignano, co-founder of podcast platform Anchor, who lays out the idea of the Creativity Supply Chain at Medium.

He thinks that the Creator Economy hasn’t lived up to its sky-high expectations. Even finding 1,000 true fans to monetize your work is something only a few people manage.

“Most creators can’t break through platforms’ algorithms. Most creators are not marketing experts,” he says. The Creator Economy was never about democratization; it was about elitism.

He makes the case that it is the other 99% of creators where the giant wealth lies. That does not necessarily mean that 99% of the rest of us are suddenly going to get rich.

Or maybe it does.

“What if the opportunity for creativity is much bigger than what we’ve all been calling the Creator Economy? I believe it is. I call this opportunity the Creativity Supply Chain,” he writes.

Mignano breaks this concept into four areas: supply, incentives, demand and ‘superpowers’.

When it comes to ‘supply,’ Mignano thinks it obvious that “we are all creative. We write, take photos, edit videos, design presentations. We all create content using tools like our phone’s camera, Instagram, Facebook, TikTok, and YouTube. We make podcasts and so on.” There would seem to be no end to this supply, perhaps because humans are innately driven to share their version of the reality they find.

In any case, in the internet age there are market mechanisms that financially and socially incentivize all of this creation, distribution, and consumption of creativity. Even if the goal is not to make money but to advocate for something, Mignano believes the universal reason we all create is that “there is overwhelming demand for creativity.”

I’m not sure I buy into this. Does a racist football fan mouthing obscenities on Twitter qualify as a ‘creator’? They have written and shared something. I don’t think everyone is a creator – or rather, I don’t believe all shared interactions should be labelled creative. It demeans the meaning of the word and applies value where there is none.

Anyway, following his logic, “By this time next year, you will probably be consuming even more creativity as more products and services launch and your devices become more powerful. And while your demand for creativity continues to grow, so too does that demand for nearly everyone else on this planet; especially people who are just coming online for the first time in their lives, such as in emerging nations which are just gaining access to high speed internet.”

More supply, more demand – it’s a virtuous circle “happening in greater volume and velocity than ever thanks to the superpowers of technology.”

“Technology both democratizes and turbo charges all of the above components, making them more accessible, efficient, and powerful. And the rate at which creative superpowers are improving is accelerating every day.”

The ‘superpowers’ highlighted here include artificial intelligence, where tools like text-to-video engines will put a rocket under everyone’s ability to create photoreal worlds and moving-image narratives.

“It’s not hard to imagine a world in which the cost of creativity plummets as a result of the rise of AI-enabled generative media. Ten years from now, will major movie studios be investing hundreds of millions in blockbuster films? Or will a state of the art AI model do all the work instead for orders of magnitude less?”

Another tech superpower is a souped-up internet enabled by 5G and computing at the network edge. Mignano even looks ahead to 6G – expected to begin rolling out from 2030 – which will make creative mediums like AR and VR more accessible.

Further advancements in machine learning and recommendation media will have massive implications on both the supply and demand sides of creativity.

“For creators, finding an audience for your creativity will happen automatically upon platform distribution,” Mignano predicts. “We’ll upload our content, and the platform’s ML algorithms will find the perfect audience at the perfect time for our work.”

“On the demand side, we will instantly and easily access the exact content we want to consume with far less friction. The supply and demand flywheel of creativity will spin far faster as a result of machine learning.”

For evidence of this flywheel already being engaged, he points out that core businesses in the Creativity Supply Chain — Spotify, Shopify, Squarespace, Adobe, and Snap — represent more than $370 billion of market cap. Companies with big stakes in the Creativity Supply Chain — like Meta, Apple, Alphabet, and Netflix — represent more than $7 trillion of market capitalization, he calculates.

More evidence? Look at Adobe’s $20 billion acquisition of Figma — the largest ever acquisition of a venture-backed company — for proof that there is massive demand for companies contributing to the Creativity Supply Chain.

Make no mistake, says Mignano, the Creativity Supply Chain isn’t some theoretical concept from the future; it’s happening right now.

 

 


Now We Have an AI That Mimics Iconic Film Directors

NAB

An AI that mimics the directorial style of Quentin Tarantino – or literally any other auteur you care to think of? The possibilities are tantalizing and within reach.

article here

A group of researchers out of Aalto University in Finland have devised a tool that generates video in the style of specific directors.

It’s so good that, when put to the test, audiences could tell which director’s style was being mimicked.

Cine-AI is in fact targeted at the automated creation of cutscenes in video games – but its applicability to generating cine-literate visual storytelling that emulates the film language of famous directors, alive or dead, is clear.

It would also be only a hop, skip and a jump to apply the same process to auto-generate synthetic cinematography as if lensed by Vittorio Storaro or Roger Deakins. Anathema as it sounds, this will be possible at a convincingly photoreal level sooner than we think.

The particular problem that Inan Evin, Perttu Hämäläinen and Christian Guckelsberger sought to crack is laid out in their white paper published in August.

In-game cutscenes are non-interactive sequences in a video game that pause and break up gameplay. In high-quality (AAA) productions especially, cutscenes feature elaborate character animations, complex scene composition and extensive cinematography, for which games developers may need to hire dedicated directors, cinematographers and entire movie production teams.

“Cutscenes form an integral part of many video games, but their creation is costly, time-consuming, and requires skills that many game developers lack,” they explain. “While AI tools have been used to semi-automate cutscene production, the results typically lack the internal consistency and uniformity in style that is characteristic of professional human directors.”

“We aim to realise procedural cinematography, focusing on how camera placement, shot continuity and composition can be brought together by algorithmic means.”

To that end they have devised Cine-AI, an open-source procedural cinematography toolset capable of generating in-game cutscenes “in the style of eminent human directors.”

Implemented in the game engine Unity, Cine-AI features a novel timeline and storyboard interface for design-time manipulation, combined with runtime cinematography automation.

Besides Tarantino, they also chose to train their AI on the films of Guy Ritchie (Lock Stock…, Snatch, Revolver, Sherlock Holmes), explaining that not only did both directors have recognisable and unique shooting styles, but they were also well known to a wider audience. That was important when it came to assessing the results.

Arguably, Guy Ritchie’s kinetic style also works well for action-based video games, while Tarantino’s works best for more sedate, dialogue-heavy scenes.

As the basis for the dataset, they extracted 80 one-minute clips from each director’s most highly rated movies on IMDb. For added representativity, half of the clips chosen were action-heavy and the other half strong on dialogue. Each clip was also assessed in terms of its dramatisation level and the scene’s pace, encoded on a scale with high values given, for example, to a scene that unfolds quickly.
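The paper’s exact annotation schema is not reproduced here, but a clip record along the lines described – director, the action-versus-dialogue split, dramatisation and pace – might look something like the sketch below. Field names, scales and example values are assumptions for illustration.

```python
# Illustrative sketch of how the Cine-AI training clips might be annotated.
# Field names, scales and example values are assumptions, not the paper's schema.

from dataclasses import dataclass

@dataclass
class ClipAnnotation:
    director: str         # "tarantino" or "ritchie"
    film: str
    clip_id: int
    action_heavy: bool    # half the clips are action-heavy, half dialogue-heavy
    dramatisation: float  # 0.0 (flat) .. 1.0 (highly dramatised)
    pace: float           # 0.0 (slow) .. 1.0 (scene unfolds very quickly)

dataset = [
    ClipAnnotation("ritchie", "Snatch", 12, action_heavy=True,
                   dramatisation=0.8, pace=0.9),
    ClipAnnotation("tarantino", "Pulp Fiction", 7, action_heavy=False,
                   dramatisation=0.6, pace=0.3),
]

# At runtime, the current game state would be matched against values like these
# to pick cinematography rules (shot length, framing, cuts) in a chosen style.
```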

Importantly, these cutscenes are not baked into the game but designed to play back dynamically based on the actual game state, rendering the production of different, static cutscenes for each possible gameplay outcome obsolete.

With the finished clips, they arranged for viewers to take the Pepsi challenge and judge which was in the style of Ritchie and which in the style of Tarantino. Eighty percent of the responses were correct.

The team is aware of their system’s shortcomings – and its potential. They note, for instance, that not all directors solely focus on camera work to express their style.

Michael Bay, for example, notoriously employs many post-processing effects such as lens flares and god rays.

Future enhancements to the software might introduce additional cinematography techniques including post-processing effects.

They would like to open Cine-AI up to more genres outside of action and to account for more nuanced – less iconic – directorial styles.

“Both directors are moreover white and western men, and future efforts in extending the dataset should focus on increasing director diversity,” they state, adding that they would be intrigued to see how well Cine-AI might reproduce the style of horror and action director Timo Tjahjanto.

Other directors cited as providing “a worthwhile challenge to procedural cinematography” include Spike Lee (who “focuses on color and race relations and is well known for his frequent use of dolly shots to let characters float through their surroundings,” they say) and French director Agnès Varda (“praised for her unique style of using the camera as a ‘pen’”).

The researchers don’t expect Cine-AI to completely replace actual human directors in video game creation but – like other developers of AI tools – see it as a co-creative partner in the process.

“The cinematography requirements of AAA games tend to get extremely sophisticated in terms of style, scene duration and size,” they explain. “While AAA companies will likely continue relying on dedicated production teams to achieve the desired level of cinematographic quality, they can utilise Cine-AI to create prototypes for their cinematography design, automatically generate shots to inspire new ideas, or use the storyboard feature to quickly iterate on possible shots, following a specific directorial style.”

Both the proof-of-concept dataset and the source code are now publicly available under an open-source license. The Finnish team is inviting other researchers, game developers and film enthusiasts to join them in taking the project further.