Monday, 10 June 2024

What Do You Need for a Successful Video Game Adaptation? (Maybe to Have Actually Played the Game)

NAB

After years of trying, have filmmakers unlocked the secret to adapting video games into successful TV shows? The plaudits from gamers and non-gamers alike for hit series The Last of Us and Fallout seem to suggest so.

article here

Graphic artist and video game consultant Tom van der Linden says the missing link is “qualia,” a philosophical concept that refers to the subjective experience of phenomena.

“For me, at least, The Last of Us was not a game about shooting zombies, it was not about moving ladders around to solve environmental puzzles,” he explains.

“It was about spending time with the characters, about going on a journey with them and coming to care deeply about them.”

In a YouTube video, van der Linden goes deeper into analysis of Amazon Prime Video’s Fallout, which he holds up as the best example yet of how to successfully adapt a video game.

There’s a general consensus that a good screen adaptation of a video game has to be faithful to the source material, and that copy-and-pasting the narrative, the set pieces and the iconography alone is not enough.

Films like Doom and Uncharted included recognizable game elements but missed the deeper experience of playing the game. To van der Linden these adaptations felt more like adaptations of the game’s brand rather than the game itself, similar to non-gaming adaptations like Barbie or LEGO.

What’s missing is the one thing that actually defines video games, and arguably the only aspect that cannot be directly carried over to cinema or television. And that’s the gameplay.

It’s the player’s engagement with the virtual world. And how do you faithfully adapt player interactivity to a passive medium?

This is where the true act of transformation or translation has to happen and where the real challenge of adapting video games lies.

Because unlike with literature, for example, which has been the source of cinematic adaptations for virtually as long as movies have been around, there haven’t been any clear principles or proven methods yet for translating gameplay, for translating interactivity.

Until Fallout, which is held up as a classic of the genre and maybe a template for future game-to-screen adaptations.

“Adapting the qualia of video games is about how a game makes you feel, how it puts you into a certain mode of engagement.”

Qualia is a somewhat nebulous term that even dictionaries struggle to define, and that could be a problem for creators trying to capture it. Broadly it means “subjective, conscious experience,” which in van der Linden’s view can’t be achieved by superficially bolting elements of a game into a TV or film format.

The producers of any successful video game adaptation “should have actually played the game they’re adapting, or at least know what it’s like to play video games in general,” he says. “Secondly, and this is the challenging part, it means that you have to figure out how to translate to a passive medium the essence of an interactive one. And to me, this is where Fallout has been particularly revelatory.”

Part of the reason why recent video game adaptations have been more successful is not so much because filmmakers got that much better at adapting them, but rather because video games themselves have become increasingly cinematic and narrative driven.

HBO’s The Last of Us raised the bar “significantly” when it came to utilizing new motion capture technologies that allowed for much more realistic performances and dramatic storytelling, he says.

“In turn that resulted in a narrative driven experience that was hailed as being on the same level as any serious movie or TV show. So when the adaptation eventually came around, it’s fair to say that the game was already meeting it halfway.”

He adds, “The game aspires to high drama, and the show reflects that.”

In contrast to a limited TV series, much of the narrative impact for players of a game like The Last of Us comes from everything that happens in between the big story moments “in those stretches of empty space” that the TV show wasn’t quite able to capture.

“However, I also believe the series was able to improve on the story in some other areas that the game struggled with,” he concedes.

Fallout is an open-world role-playing game, meaning that instead of one linear narrative that players have to follow, there are secondary storylines, quests and errands to complete, dungeons to clear out, items to collect, skills to level up, weapons to craft, armor to create, and the list goes on.

“Even if you have done all of that, in the game you can do it all over again in a different way. Players don’t just participate in a story, they create one for themselves. It always gives them some choice that lets them direct the course of the narrative, which obviously makes games like these harder to adapt. There simply isn’t one definitive storyline here.”

So how did Fallout’s creators pull it off?

Among other things, they took that open-ended player-centric perspective and spread it out over multiple characters. Fallout’s three main protagonists are each layered with the different stages a game player moves through, as well as with different play styles they can assume.

The narrative structure of the episodic format also works well in this case, van der Linden says, “though, there are plenty of games for which I think a movie would be a better fit. Again, it’s about looking at qualia, considering the true essence of your engagement with the material,” he advises.

“The show easily could have been just a lackluster tourist guide through Fallout iconography,” he says. “Instead, in the multitude of different scenarios, dialogues and character moments that each relate to some aspect of what it is like to play the game, Fallout is able to capture the qualia of the game.”

For example, the frustration with low-level weapons, the satisfaction of becoming an apex predator, and the awkwardness of failing a speech check are all captured in the show. These moments resonate with gamers because they reflect the experiential reality of playing the game.

Beyond gameplay, Fallout captures the broader themes and philosophical substance of the game. “The show explores the absurdity of a world dominated by capitalism, even in a post-apocalyptic setting,” he says. “This reflects the game’s satirical take on societal issues and the human condition.”

Van der Linden even argues that successful video game adaptations can enrich and preserve the medium’s artistic value.

“They allow non-gamers to appreciate the stories and experiences that gamers hold dear, bridging the gap between different forms of artistic expression. This broader acceptance helps integrate video games into the larger cultural and artistic conversation.”

 


Robert Tercek and Peter Csathy: When It Comes to Media and AI, Copyright Law Is Not an Open and Shut Case

NAB

If there’s one thing that media and Big Tech can agree on when it comes to AI, it’s that existing copyright laws are outdated and in need of an upgrade.

article here

There’s a gray area being fought over in the courts by artists like Sarah Silverman versus companies like OpenAI over the definition of “fair use.”

Lawyers for Big Tech might be able to pull this line of attack apart by showing that their large language models are not copying copyrighted works at all.

Others, like media and legal expert Peter Csathy, argue a trickier-to-define case: that it is ethically right to reward artists and that doing so is in the interest of the public good.

“It’s not about anti-tech, or pro-big media, it’s about taking the value that’s been created by new GenAI models and recognizing the contribution made by creators in the first place,” Csathy said during a debate with tech entrepreneur Robert Tercek on The Futurists podcast. “It’s about sharing in the pot of the opportunities in some kind of equitable fashion.”

A central issue is the unauthorized use of copyrighted content by AI companies to train their models. Major media entities like The New York Times have filed lawsuits against AI firms such as OpenAI and Microsoft, arguing that their copyrighted content has been used without permission, undermining the creators’ rights and economic interests.

Csathy argues that AI’s reliance on copyrighted content necessitates fair compensation for creators. He said the use of AI to generate content raises concerns about the loss of control and the potential devaluation of creative works.

His argument is that Big Tech, notably OpenAI, has gotten rich — billions of dollars worth of market capital rich — because its models have been trained on millions and millions of pieces of copyrighted works.

It’s only fair then for a licensing fee to be paid to creators by developers of LLMs for use of their copyrighted works.

“The reality is that no generative AI would have any value whatsoever unless it was ingested with copyrighted creative works. That’s what gave it value,” Csathy says.

He believes the outcome of the copyright lawsuits brought by the likes of Sarah Silverman and The New York Times against OpenAI and others will result in some kind of opt-in or opt-out system “where a creator can decide whether they want their work to be available to be licensed.”

With 500,000 open-source models available today, such a forensic tracking and payment system would need policing at a global level, but Csathy doesn’t think the problem insurmountable.

“The goal is to create a system that is more equitable, that can be so much better for the end user that they will gladly pay something to use it rather than something that’s just free.”

Even if a new compensation scheme were agreed between Big Tech and creators going forward, Csathy questions the “billions and possibly trillions of dollars of value” in generative AI that have already been accrued without compensating artists.

“It’s about money. It’s about control and it’s about respect for the artists. And part of being an artist is that you have control over your creation.”

Playing devil’s advocate, Tercek laid out some of the defense that Big Tech will use to justify training on copyrighted work as “fair use.”

It’s an interesting argument, and goes like this:

“The AI reads all the books, or looks at all the images, or listens to all the music, and then that model begins to build parameters. A big question about this is: is it fair use? Is it okay to look or read or listen? There is no law that prohibits reading a book. There’s no law that says you can’t learn by looking at a picture.”

Tercek explains that the tech companies will insist they do not retain any copies of the books they are reading, the images they are looking at, or the music they are listening to. These are mathematical representations only.

“Once the LLM is trained, it is just a mathematical representation of what they call ‘parameters.’ These are different values that are set based on looking at a lot of Van Gogh pictures, for example. They start to arrive at a kind of extrapolation. There’s a pretty strong argument to say that all they’re doing is measuring facts. Facts like what kind of values and colors and hues does Van Gogh use.

“It’s not replicating any of Van Gogh’s paintings, but we have millions of incredibly precise measurements about his paintings. Those are facts, and facts can’t be copyrighted. What we can recreate in an LLM is a factual representation of all those different values that go into creating that work.”

Further, he says that the cases brought by The New York Times and other artists against Big Tech are antiquated because copyright law needs updating to account for the age of AI.

Tercek argues that original copyrighted works like films or books or paintings are “fixed works delivered in fixed media.”

“What LLMs are doing is transforming those fixed works into something that is participatory, that billions of people can interact with to build new creative things,” he says.

“So should the artists who created the original work get paid a royalty off of those things? These are huge open questions right now. But this notion of a fixed work that is copyrighted for a century, because that’s what we have right now, is an absurdly distended copyright term today.

“Maybe that notion is going to change and maybe what media has to turn into is something that is more open, more participatory, something that enables audiences to actually play with and mess with and participate in meaningfully.”

On this point Csathy agrees. He thinks it’s a massive opportunity for artists — so long as the artists themselves want to be a part of it. “They should have control of whether they want to enable their fans to build on top of their work. If they do, there’s a whole new opportunity.”

If they want to take advantage of new AI tools that will make it possible for anyone to find novel ways to express themselves, with new kinds of mashups, then creators need to get to grips with the intricacies of the technology and of new monetization pathways.

“You really need to play with these AI tools, understand them, learn about them, or get people on your team who do understand them,” Csathy advises. “There’s not $100 out there in one bucket — there are little pieces here and there and everywhere. You’re delighting your fans more and maybe getting a chance to be closer to them.”

How ‘Slow TV’ watching moose migrate is accelerating live cloud workflow at SVT

SVG Europe

article here

Slow meandering herds of moose trekking across wintry tundra don’t have much in common with the turbocharged noise and control of rally cars, but right now in Sweden they are virtually joined at the hip.

Swedish broadcaster SVT’s month-long, round-the-clock live broadcast of The Great Moose Migration (‘Den stora älgvandringen’) is the latest test of an organisation-wide digital transformation project that will soon encompass the next leg of the WRC Rally.

“It’s a slightly insane project,” says Dennis Buhr, Head of Production Development at SVT, of the remote production initiated for this year’s Great Moose Migration. “It’s not that common that your show is 500 hours long.”

About the Moose Migration

Every spring, for thousands of years, Sweden's population of moose (or European Elk) in Västernorrland in the north of Sweden migrates across the same tracks in the forest to get to greener pastures. Since 2019, SVT has live streamed the ‘action’ over the course of several weeks.

This ‘great moose migration’ has proved hugely popular with the public, with viewers rising from a million in 2019 to about 9 million this year on SVT Play, alongside more than 300,000 chat interactions - an increase of 30% on 2023.

The show has also been broadcast live on Twitch reaching nearly 15 million views around the world. Men aged 16-25 seemed to enjoy the respite from hardcore gaming.

The livestream is now a national springtime phenomenon and a classic example of ‘Slow TV’ which this year is also being aired on Finland’s YLE and RTL in Germany.

“It’s always been a huge technical struggle to get this to work in a forest with a wild river and at a time of year that turns from winter into spring with huge ice melts,” says Buhr.

SVT lays thousands of metres of cable across the forest area to connect 30 camouflaged cameras and 28 mics. In the past these feeds were all sent to a remote production gallery built on site.

This year the Great Moose Migration has gone truly remote.

New cloud based workflow

“First and foremost we decided to keep all our production staff inside our TV building in Stockholm,” says Buhr. “Immediately that cuts down the costs of having staff on site for a month.”

In this new set-up, the cameras (a mix of Sony PTZ during the day and Axion surveillance cameras at night) and mics are converted from SDI to IP by VideoXLink and transported as H.264 streams over the internet to SVT’s northern hub at Umeå, a three-hour, 250km drive away.

Two cameras in this year’s production were positioned next to a wild bear spot. Unable to run either power or fibre to that location, they used solar power and a Starlink satellite internet connection.

At Umeå the rest of the IP streams are captured into Agile Live, a software-based system jointly developed over a number of years by SVT and Agile Content. Image mixing and graphic overlay is performed by operators in Umeå with rendering done on-prem in Stockholm 650km away.

“The technical cloud workflow of this is not particularly difficult,” says Buhr. “The actual work is the shift in mindset we have gone through at SVT. We have had to rethink production.”

He elaborates, “You can’t stream high-res images everywhere as you are used to. You have to think who needs to see what, when, and in what quality. You need to understand and communicate the advantages of it rather than comparing this workflow to how it used to be done.”

This year’s production has brought together SVT broadcast engineering with its IT teams. Many, it seems, had never met before, yet here they are working to deliver the production together.

“All of a sudden you have meetings and you see people inside the organisation you do not regularly see. They might be responsible for computer services in whole other areas of the business. They are experts in networks, computer hosting and IT security. Everyone gathered to make a TV show. That was a new thing for me – and for everyone.

“I’ve been doing broadcast for twenty two years and this is the first time I’ve seen computer guys getting a credit on the show. It made me extremely happy.”

The culture clash between the IT and broadcast worlds that is often mentioned as a management nightmare didn’t materialise. Or, if it did, everyone treated each other professionally and understood the overall mission.

“We have had our discussions, shall we say, but it has not been a hassle - more of a learning curve,” says Buhr. “Early on we asked for a feature from the network department and gave them four days’ notice. They said, ‘You can’t have this short a time span.’ But when you work in live TV with the red light about to flash, four minutes is a long time, let alone four hours. That was one cultural difference, but people have genuinely been getting on very well and understand it is a joint venture.”

Having previously trialled the arrangement on a local sports production, the 2024 Great Moose Migration is by far SVT’s biggest proof of concept to date of its transformation project Next Generation Online (NEO).

NEO is an organisation-wide, glass-to-glass approach that brings production and distribution under the same roof. It aims to be 100% software, based on COTS servers and standard IT, using the internet.

“It’s a huge scope but we started early, in late 2018, and now we are live with a working product.”

The same set up will be used by SVT to produce the ERC Bauhaus Royal Rally, part of the WRC calendar, on June 13-15 at several different locations in Värmland, near Karlstad. 

“We are also adding in a new product which is our re-invention of the intercom 4-wire system as an entirely software-based system using web technology and standard IT hardware.”

The software-based 4-wire intercom is a collaboration with fellow Scandinavian broadcasters NRK, YLE and TV2.

“We still produce TV today the same way we did in the ‘60s, but in a software environment, if you want to be more efficient with money and cut the carbon footprint, you must rethink who does what and why and when,” Buhr stresses.

“If you want to transform TV production from cables to software and you don’t think about how your staff will work in a facility, you will not only have the cost of investing in new technology but also the same staff costs as before if you adapt it to an old-school way of making TV.”


Friday, 7 June 2024

Playing with light: 3 Body Problem

Definition 

A trio of DOPs discuss their roles in transporting viewers between real and virtual worlds in David Benioff, DB Weiss and Alexander Woo’s sci-fi hit 3 Body Problem

article here

Before Game of Thrones became a global phenomenon, there was a feeling fantasy was something only a niche, if loyal, audience would lap up.

Now the makers of HBO’s dragon and sorcery hit aim to do the same for sci-fi naysayers.

As their source material, showrunners David Benioff and DB Weiss – along with Alexander Woo – chose The Three-Body Problem by Chinese novelist Liu Cixin and turned it into an eight-hour series for Netflix.

“We’ve all seen a thousand alien invasion stories,” Benioff says, “but this one’s different because it focuses on the human response to finding out we’re not alone in the universe — and the others out there are not necessarily friendly.”

The ambitious story spans different decades and planes of existence, from China’s Cultural Revolution in the sixties to contemporary Britain, New York City and a vast, epic virtual reality world.

Naturally, the showrunners turned to the experience of a DOP who had shot 17 episodes of Game of Thrones to help establish the show’s visual language.

“There are many elements to the first season and to what they hope would become subsequent seasons,” according to Jonathan Freeman, ASC, a Canadian DOP who won an Emmy for Boardwalk Empire and has also worked with directors Russell Mulcahy, Richard Loncraine and Robert Lepage. 

His first decision was determining the aspect ratio.

Game of Thrones was presented in the familiar 1.78:1, but for a story at least partly set in outer space, they discussed using several framing formats. “I was keen on the notion of a wider aspect ratio because eventually this story will expand into space,” Freeman says. “A wider-screen format of 2.39:1 or 2.35:1 is used in many successful space odysseys to represent the vastness of space and the distance between elements, whether between two planets or an astronaut floating over a planet. A widescreen format also made sense for the VR game, where we need to convey the scale of individual figures in a landscape, Lawrence of Arabia style.” A storyline in episode 5 about a giant freighter ship also suggested a wider-screen frame.

They decided to stick with 2.35:1, shooting on the ARRI ALEXA LF, but to divide earthbound scenes of relative normality from the hyper-real sequences of the VR world by using different glass.

“Given that the first couple of episodes jump from scenes set during the Chinese Cultural Revolution to contemporary scenes in London, and to lean into the cinematic language of films set in space, my preference was for anamorphic. I felt we needed glass with a vintage look but also a way of making it feel modern at the same time.”

That balance was found in the ARRI ALFA, a lens set purposely detuned for Greig Fraser, ASC, who was at the time shooting them for The Batman.

“It had almost exactly what I was looking for,” Freeman says. “A vintage feel surrounding the edges of the lens, but sharp in the centre so we could cut between any period from late-sixties China to modern-day London smoothly and effortlessly.”

In contrast, the VR scenes appear extraordinarily sharp – a factor of being filmed with ARRI’s spherical DNA range. “Technically, they are a bit purer, which was an advantage to the VFX team who needed to produce some very complex environments in the VR world.”

The VR world, which was engineered by extraterrestrials in the story, takes the player from the Shang dynasty in China to Tudor England to post-apocalyptic deserts. These were filmed against a large 180° wall consisting of ARRI SkyPanel LEDs filtered through and hidden behind a Rosco scrim.

“We spent a long time developing ideas for how to shoot these imaginary worlds,” explains Richard Donnelly, ISC, who also shot episodes 1 and 2, joining the project a little later after shooting The Nevers.

“Our board operators could control any kind of colour we wanted. It was fantastic as it enabled us to light the actors as we wanted to, for instance with the sun rising, instead of it being led by VFX. We augmented the set with many other lights, but that wall was us lighting the actors. It’s almost the reverse of volumetric capture in which you use plates filmed on location to light the live action. Here, it was the other way around.”

For example, a scene set in a VR desert in episode 2 introduces the AI child character Follower and the concept of dehydration. The characters hide under a rock to escape the rising sun. Donnelly lit the scene with the wall and a ceiling rig of Vortex lights. “All the long shadows are real. It’s almost like shooting back in the forties where you’re creating all these shadows in camera on-set and it’s not a heavy FX world.”

Swedish cinematographer Martin Ahlgren, ASC (The Plot Against America) lensed the three-episode block following Freeman and Donnelly.

This included the startling scene in episode 5 in which a container ship and everyone onboard is silently ripped to shreds by ‘nanofibers’.

“It needed a lot of figuring out from a storytelling perspective; how to build up the mystery of what was being done, then also revealing it happening as well as finding the right level of detail.”

A storyboard artist designed ‘some gruesome ways to be sliced’, but the showrunners and director Minkie Spiro dialled that down. “We’re setting it up for a shockingly violent way to die, but letting the imagination do the rest,” he says.

This scene, in keeping with the rest of the series, stretches the boundaries of our known physical world, rooting the fantastical elements in some level of scientific understanding.

Ahlgren plotted camera moves for the scene using LED pixel tape before moving to the backlot where production design had built a large, to-scale section of the tanker – complete with a helicopter pad.

“The idea is that the nanofiber technology is cutting at a molecular level, so unless gravity is doing something to the object or person, we don’t show its effect. We show cutting paper and when the ship hits the bank of the canal it topples like a stack of plates, but the technology itself is not revealed.

“We had to figure out at what speed the nanofiber would move, in relation to the ship’s movement and that of the camera. We decided that it moves slowly enough for someone to run away from it if they can, and that becomes a big part of the drama.”

The series was shot at Shepperton Studios over nine months, ending in August 2022.

Most was filmed in England using locations in and around London, as well as in Portsmouth, Kent, Oxford, Sussex and Bedfordshire.

Other locations included a mountain ridge near Cáceres in Spain, site of the Chinese radar station, and Cape Canaveral in Florida.

Director Derek Tsang (Better Days), a Chinese native, had drawn on his own experiences of hearing stories about people who lived through the Cultural Revolution in order to picture the series’ opening scenes.

“His own memory of that time period is from images that are shot on Ektachrome,” says Freeman. “We opted not to go full Ektachrome in our look since that would give us bright primary colours and everything else would be muted. We go in between to yield that period feeling, pulling back on the primaries, but without becoming a distraction for the rest of the story that follows.”

Europe drives new interest in anime, as Netflix, Crunchyroll boost awareness

Stream TV Insider

article here

Europe is set to drive the next content boom for Anime, the animated content genre predominantly produced in Japan.

Anime titles available outside Japan on SVOD services have doubled from 3,000 to 6,000 around the world since 2019, according to new figures from Ampere Analysis.

International streamers, and in particular Netflix, have made a strategic effort to grow the market outside of APAC and are now reaping the reward: Japanese Anime is ranked as the second most popular content on subscription streaming services globally, behind US content.

This is a ranking unique to Ampere and based on key metrics such as volume of interest, web traffic and box office income from major services.  

“While the number of core Anime fans is small, casual Anime viewers are common across the world,” says Orina Zhao, senior analyst at the research house. “In terms of growth, European markets have seen the fastest rise in those enjoying Anime over the last four years.

“Besides traditional linear licensing, global streamers and Hollywood studios have all tried to seize this opportunity by ramping up their Anime catalogues. This is particularly because Japanese Anime has a long-lasting lifecycle of popularity.”

She also highlights how cost-effective licensing or producing Anime can be in terms of the content’s ability to attract and retain subscribers within a platform.

Long-running TV series and Ghibli movies account for the majority of the most-popular Anime titles.

For example, Wit Studio’s Attack on Titan was the most popular title on SVODs in 2023 despite being first released a decade ago. Studio Ghibli movies Spirited Away (2001), Howl’s Moving Castle (2004) and Princess Mononoke (1997) are also in the top ten.

Globally, Anime fans are typically young, skewing 18-35 years old with relatively lower incomes. They are heavy SVoD users, and spend more time on smartphones and smart TVs.

Ampere says they are likely to be young adults or new parents and the gender split is about 50-50 outside of Japan. That differs from demos in Japan where the genre attracts a mostly male audience (the split is 58% male / 42% female).

According to Ampere’s latest survey of 30 countries, Asia Pacific markets such as the Philippines, Indonesia and South Korea still show the highest interest in Anime. However, seven out of the top 10 markets with the largest growth of interest in watching Anime are in Europe. They include Germany, Finland, Italy, the UK, France, Poland, and Spain which have seen a 3% to 9% increase in Anime enjoyment in the past five years.

The number of Anime titles has been increasing too in these seven European markets, from 1,945 titles in 2019 to 2,755 titles in 2023, a 42% increase in the past five years. This has been driven primarily by Netflix, Amazon Prime Video, and dedicated anime platform Crunchyroll which launched into Europe when parent company WarnerMedia acquired Viz Media in 2019.

Sony has owned the platform since 2021 and earlier this year merged it with Funimation (closing the Funimation brand) to offer a combined 1653 titles to 15.6 million subscribers (who pay from $7.99 a month in the US, with Mega Fan and Ultimate Fan tiers increasing by $2 a month).

“Since April’s merger with Funimation, Crunchyroll has become the single most powerful Anime-focused platform in the West in terms of both its Anime catalogue size and subscriber base,” says Zhao.

Between 294 and 481 new titles (titles produced in the past three years) were made available in the seven European markets in 2023, but this is around one-third the rate of new titles released in Taiwan, leaving ample capacity for importing new content into Europe, Ampere says.

“Crunchyroll’s merger and Netflix ramping up its distribution will undoubtedly increase the visibility and popularity of Anime globally,” says Zhao. “We find European audiences underserved at present by a good supply of Anime and think there is a substantive opportunity for growth.”

She advises, “European local and regional services should leverage the building appetite for Anime and the wide availability of content yet to be exploited in the region to gain a competitive edge and achieve long-term growth.”

Netflix strategic focus

Japanese content overall has become the second largest content type on Netflix and the streamer is now the most important platform globally for licensing and producing Anime. It has ramped up its licensed original titles from 602 in 2019 to nearly 900 in 2023, including an increase in exclusive licensed titles from 45 to 86 and originals from 21 to 76.

It has broadened its Anime genres too. While Sci-Fi & Fantasy and Action & Adventure still make up 70% of its titles, it has expanded to include comedy, children’s, horror, romance and drama.

“Crucially, Netflix has signed production line deals with a number of Japanese studios,” says Zhao.

This began in 2018 when Netflix first signed production deals with Production I.G. (including Wit Studio) and Bones. A year later it made co-production deals with three more leading Japanese studios, Anima, Sublimation and David Production. The three have so far co-produced titles with Netflix such as Altered Carbon, Dragon’s Dogma, and Spriggan.

Netflix further expanded its partnerships with Anime studios by signing co-production pacts with Naz, Science Saru and Mappa. It also signed a similar deal with Studio Mir in South Korea, which produced The Witcher: Nightmare of the Wolf for Netflix.

Two years ago, Netflix signed a film co-production deal with Studio Colorido to expand from TV series to movies. Some co-produced films have also premiered in theatres and on Netflix on the same day.

Moreover, Netflix has quadrupled the number of adult animated titles it has produced from outside of Japan, from 11 in 2019 to 44.

Ampere also believes there is further scope for Netflix to produce live action adaptations of popular anime titles, as it did with One Piece which launched last year.

The exploding interest in Anime is explored at animation conference and festival Annecy in France next week. Crunchyroll’s SVP of Global Commerce Mitchel Berger will discuss anime’s impact on pop culture and Japanese studio Kasagi Labo will announce a financing platform for original anime.

Netflix is in force at Annecy, where it premieres Tokyo-set animated superhero feature Ultraman: Rising, produced by Netflix, Tsuburaya Productions, and ILM. Rising Impact, the first anime adaptation of Nakaba Suzuki’s manga of the same name, is a Netflix exclusive that premieres on June 22.

 

Monday, 3 June 2024

Broadcast Workflows, But in the Real World: Cellular Transport, Cloud Vs. On-Prem, Actual AI Contribution

NAB

Cloud-based workflows are becoming ubiquitous but on-premise solutions continue to play a central role, according to a new industry trends report from Haivision.

article here

The company’s “Broadcast Transformation Report” also found an increased adoption of cellular networks, with 60% of broadcasters using these networks for transport and 80% using the internet, showcasing the industry’s reliance on diverse network technologies.

The findings also underscored the increasing significance of 5G, with 74% of broadcasters either currently using or planning to use 5G for broadcast contribution in the next two years.

When it comes to cloud, the report found 84% of broadcasters use at least some cloud-based technology, but only 22% use it for more than half of their current workflow elements.

While more than half (58%) of broadcast professionals have implemented IP and cloud-based broadcast infrastructure, a substantial amount (86%) of those surveyed continue to use SDI.

Respondents primarily use cloud-based solutions for encoding/transcoding (44%), stream routing (43%), and remote collaboration (39%).

Fifty-nine percent of broadcasters rely on cloud for less than 25% of their workflows, indicating that on-premise technology continues to remain critical to broadcast workflows. Haivision said this finding suggests that broadcasters expect to leverage both technologies for the foreseeable future.

As organizations consider workflows that include cloud and IP technology, those surveyed indicated network reliability (46%), budget limitations (46%), and bandwidth availability (40%) as the primary challenges in their shift to IP or the cloud.

Besides cloud-based workflows, respondents indicated the use of other IP technologies, mainly NDI (40%) and SMPTE ST 2110 (36%).

Cellular Transport is Now Mainstream

Sixty percent of respondents said their organization currently uses 3G, 4G, LTE, or 5G for live video contribution, making cellular the most popular network for transport after the internet (80%).

Nearly three-quarters of broadcasters already use or plan to use 5G for broadcast contribution. Forty-six percent anticipate using 5G with private networks. Additionally, 15% are using 5G only, which is an increase of 5% over last year’s survey. Satellite usage remained steady from last year’s report, and fiber increased by 4%.

More specifically, 29% of respondents are already using 5G for broadcast contribution, a 9% increase from last year’s survey. Greater bandwidth (51%) and lower latency (48%) are touted as the top benefits 5G can offer within live production workflows.

Unsurprisingly, most broadcasters predict AI will have the most impact of any technology on the industry over the next five years, with 49% planning to or already using AI in their workflows. 5G remained a strong runner-up at 57%; however, the number of respondents citing HDR dropped by 5% and ATSC 3.0 dropped by 10%.

The most commonly cited benefits of AI on live production workflows were efficiency gains through automation and automated translation and closed captioning.

The Visual Elements for “Sugar” Make a Different Kind of Hollywood Mystery

NAB

Apple TV+’s detective drama Sugar starring Colin Farrell has an unconventional visual style that includes footage shot on iPhones, multiple camera setups and classic film noir clips inserted into scenes.

article here

Scripted by showrunner Mark Protosevich with Farrell as executive producer, Sugar is directed by Brazilian filmmaker Fernando Meirelles, co-director of City of God and director of The Constant Gardener. The series is shot by cinematographers César Charlone and Richard Rutkowski, ASC, and edited by Fernando Stutz.

“We really work as one,” Meirelles told IndieWire’s Chris O’Falt. “If I can’t take César and Fernando, I won’t be able to [do the project], that was my only condition.”

As O’Falt explains, Meirelles’ approach to directing is to play a scene from beginning to end rather than dividing it into camera setups, while Charlone documents it with two to three and sometimes even up to seven cameras.

“The first thing we learn in film school is never jump the eyeline,” said Meirelles. “I tried on purpose to jump the eyeline just to test [it out], and it really worked. It gives a dynamic to the scene.”

“We have developed what I would call a documentary style,” Charlone said. “We leave the actors very free to move around, my crew does not put [down] any marks, the actors just move around and we follow them with the camera.”

After each take, Charlone changes the camera’s angle and movement. Meirelles said that after two hours of shooting, he will often have 15 to 16 different angles, each a master of the whole scene. This approach often leaves them an hour or two ahead of schedule, which just encourages more experimentation.

This gives Stutz a lot of material to compose with. “I start by looking through César’s lenses because it’s the way he moves through the scene,” the editor told O’Falt.

“When he operates the camera I’m always interested where he’s going, what he’s looking at, and then from there, start to build the sequence.”

One of the most striking aspects of Charlone’s work on Sugar is his use of iPhones to not just capture reference stills but footage for the show.

“I’ve been using iPhones for some time now,” Charlone explained in an episode of the Kingdom of Dreams podcast. “They are incredibly versatile and practical, especially in confined spaces. For Sugar, we did extensive testing to ensure the footage could blend with that from the Sony VENICE cameras.”

The iPhones proved especially useful for filming car scenes, where their compact size and flexibility allowed for unique angles and dynamic shots.

“For all the car scenes, with Colin driving, the iPhone is very practical because I can put it behind the steering wheel, move it around easily, and I use VR goggles to see the image I’m capturing.”

This approach not only saved time but also kept the energy on set, allowing actors to perform naturally without the interruptions typical of traditional setups.

“I try my best not to interfere with the relationship of the actors on set,” Charlone says. “We avoid marks and traditional setups, letting them move freely and following them. This way, they don’t have to follow the camera, and it helps them perform better.”

Cinematographer Richard Rutkowski, ASC, picked up the reins from Charlone for his block, working with director Adam Arkin on Episodes 3, 4 and 7. This included using multiple cameras.

“We’d plant cameras strategically around the set, sometimes hiding them to capture different perspectives,” he explained to Tara Jenkins at American Cinematographer. “This method was particularly effective for scenes involving security camera footage or social media posts.”

He also aimed to evoke classic film noir while incorporating the vibrant, sun-drenched atmosphere of Los Angeles.

“We talked about classic noirs, private detective stories, and films like The Long Goodbye. I also brought up The Constant Gardener because of its selective saturation and beautiful use of color,” Rutkowski recalls.

“We knew we’d be traveling with Sugar in a car through broad daylight in LA, and we wanted that LA blue sky without it looking too candy-colored.

“Digital can be unforgiving with overexposure, so planning the route and managing the iris was crucial. We used a low loader for some shots and relied on natural lighting for others, capturing the authentic look of LA.”

Rutkowski also talked about the “meta” nature of the show, which actually inserts portions of scenes from classic neo-noir films (The Long Goodbye, The Grifters, L.A. Confidential) into the on-screen story.

“It’s sort of a semiologist’s dream that you have a lead character whose own identity is entwined with his search for others,” he told David Philips at Awards Daily.

He said he didn’t know when he signed up that this was going to happen. “It wasn’t known that there were going to be such explicit cuts to classic noirs. We knew we were sourcing visually and tone-wise from those films.

“I’m not sure when it became apparent that they were going to actually cut in scenes from Mike Hammer, although it made sense because we were treating LA very much as a character in the way that it becomes a character in those films.

“And all I can say is, while that would have given me hesitation if they told me we’re literally going to cut in these classic films to your work, it wasn’t there for me to worry so much at the time.”

The show is one of several recent series to use black-and-white photography to tell its story. Others include the Italian-set noir murder mystery Ripley and the third episode of FX’s Feud: Capote vs. the Swans.

Sugar begins with a black-and-white opening sequence before color seeps in. The black-and-white footage was shot in color and converted in post.

“Fernando wanted to start that opening in Japan in black and white as kind of a throwback to [Akira] Kurosawa films, and for it to feel a little otherworldly as well,” executive producer Audrey Chon told Hunter Ingram at Variety. “It just added a whole other dimension to the show and to the character of Sugar.”

As Ingram observes, taking color out of the picture literally opens a whole new world of awareness for a viewer. In these extreme darks and blinding whites, more can also be concealed.

“With black and white,” adds Feud cinematographer Jason McCormick, “you can get away with murder in ways you couldn’t when you are shooting in color.”