Tuesday, 2 July 2024

In-car streaming entertainment starts to rev

Stream TV Insider

The long-anticipated Jetsons-style future of the car as a “living space on wheels” is gathering pace as a fresh wave of announcements brings familiar home entertainment experiences into the vehicle.

article here

In a technology tie-up with software developer Qt Group, LG Electronics aims to create “more innovative, immersive content-streaming services for cars” by integrating the Finnish developer’s framework for application development into its Automotive Content Platform (ACP) for in-vehicle entertainment.

“The promise of in-car streaming media entertainment began ten years ago when the first fully autonomous cars were proposed,” Petteri Höllander, SVP at Qt, told StreamTV Insider. “Since you have to do something in the car while traveling if you are not driving, streaming media was the number one entertainment. There’s been a growing number of demos over the years and we’re finally seeing the first real implementations in cars.”

The news follows new research projecting that the global in-car infotainment market will reach $35.4 billion by 2030.

Höllander said Qt provides “a very flexible and efficient set of collaboration tools to give the auto providers greater choice in development of user interface (UI) and user experience (UX). Increasingly consumers expect their vehicles to offer an experience no different from that offered by other smart devices and that includes video streaming.”
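To make the framework angle concrete, here is a minimal sketch of the kind of video-playback surface Qt enables, written against Qt’s Python bindings (PySide6) rather than the C++/QML stack an automaker would actually ship. The stream URL is a placeholder and the ACP integration layer is omitted; treat this as an illustration of the framework, not LG’s implementation.

```python
# Minimal video-playback sketch using Qt's Python bindings (PySide6).
# The stream URL is a placeholder; a production in-vehicle app would sit on
# Qt's C++/QML stack and integrate ACP services (catalog, accounts, DRM).
import sys

from PySide6.QtCore import QUrl
from PySide6.QtMultimedia import QAudioOutput, QMediaPlayer
from PySide6.QtMultimediaWidgets import QVideoWidget
from PySide6.QtWidgets import QApplication

app = QApplication(sys.argv)

video_surface = QVideoWidget()   # on-screen rendering surface
audio_output = QAudioOutput()    # Qt 6 routes sound through an explicit output
player = QMediaPlayer()
player.setVideoOutput(video_surface)
player.setAudioOutput(audio_output)

# Placeholder HLS manifest; anything the platform's media backend decodes works.
player.setSource(QUrl("https://example.com/catalog/title/master.m3u8"))

video_surface.resize(1280, 720)
video_surface.show()
player.play()

sys.exit(app.exec())
```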

The partnership builds on Qt’s existing support for LG’s open source webOS, which is predominantly seen in consumer electronics, including smart TVs. Previously, LG relied on the Qt framework for UI and UX in devices like smart TVs, signage, smart monitors, and home appliances. Qt Group has existing deals integrating its software into development of the ‘digital cockpit’ with Peugeot, General Motors and Mercedes-Benz.

LG began offering Netflix and YouTube on its ACP running on Hyundai’s Genesis models in its home market of South Korea at the start of this year. It made a similar pact with Kia, another South Korean auto marque, in May, and plans to do the same for more car brands in more territories.

LG explained that “the move is emblematic of LG’s continuing growth as a media and entertainment platform company, and presents a content experience that extends seamlessly from home to vehicle.”

Also in May, Google said it was adding Max and Peacock to its Android Automotive infotainment platform, the OS used by Volvo, BMW, Renault, Stellantis, Polestar and Ford. Xperi’s TiVo is another player working to make a name for itself in in-car entertainment, with its TiVo-powered DTS AutoStage Video Service for connected cars. Xperi scored an expanded partnership with BMW earlier this year, with deployments across various models in the U.S., Great Britain, Germany, France, Italy, Spain and South Korea, and plans for Japan.

Video streaming in cars generally relies on the user’s cellular connectivity, though some auto brands sell additional Wi-Fi connectivity. Tesla, for instance, offers a standard package with every car capable of basic maps and music streaming over Bluetooth, and charges $9.99 a month for a Premium service “for the most intuitive and engaging ownership experience”.

Tesla led the way in 2019 when it made Netflix available within its in-car infotainment system.

“You have to do something while you are sitting in an EV waiting for it to charge, and watching a video is an obvious use case,” Höllander said.

It is illegal in most countries to have video playing on the front display because of safety concerns over driver distraction. The automotive rear-seat infotainment market is forecast to grow by $8.79 billion, at a CAGR of 13.39%, between 2023 and 2028.

Special screen coatings for displays in the front of cars are also being introduced, allowing front-seat passengers to view video without distracting the driver. Screens in certain Mercedes cars automatically dim the passenger display if the car catches the driver trying to look at it, using the driver monitoring system to track the driver’s eyes.
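As an illustration only, here is a toy sketch of the gaze-gated dimming loop such a system implies. Every interface, name and threshold in it is hypothetical; real driver-monitoring stacks are proprietary and considerably more involved.

```python
# Toy sketch of gaze-gated passenger-screen dimming. All interfaces here
# (gaze estimate, thresholds, brightness levels) are hypothetical.
from dataclasses import dataclass

@dataclass
class GazeEstimate:
    yaw_deg: float     # horizontal gaze angle; positive = toward the passenger side
    confidence: float  # tracker confidence in [0, 1]

# Hypothetical tuning constants.
PASSENGER_SCREEN_YAW_DEG = 35.0  # gaze angle at which the passenger display sits
MIN_CONFIDENCE = 0.6             # ignore low-quality frames
DIM_LEVEL = 0.05                 # near-black while the driver is looking over
NORMAL_LEVEL = 1.0

def passenger_brightness(gaze: GazeEstimate) -> float:
    """Return the passenger-display brightness for one camera frame."""
    if gaze.confidence < MIN_CONFIDENCE:
        return NORMAL_LEVEL  # don't flicker the screen on a poor estimate
    if gaze.yaw_deg >= PASSENGER_SCREEN_YAW_DEG:
        return DIM_LEVEL     # driver appears to be glancing at the passenger screen
    return NORMAL_LEVEL

# Example frame: the driver glances across the cabin.
print(passenger_brightness(GazeEstimate(yaw_deg=38.0, confidence=0.9)))  # 0.05
```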

More broadly, and not just counting streaming video apps, ‘software-defined vehicles’ (SDVs) are forecast to create over $650bn in value potential for the auto industry by 2030. According to IBM, in an SDV the vehicle serves as the technological base for future innovations, acting as a command center for collecting and organizing vast volumes of data, applying AI for insights and automating thoughtful actions. Another piece of research valued the global SDV market at $35.8bn in 2022, rising at a growth rate of 22.1% until 2032.
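As a quick back-of-the-envelope check on what that growth rate implies (our arithmetic, not a figure from the research), compounding the 2022 base at 22.1% for ten years lands at roughly $264bn:

```python
# Back-of-the-envelope check of the compound growth implied by the cited
# SDV research: a $35.8bn market in 2022 growing at 22.1% a year until 2032.
base_2022_bn = 35.8
cagr = 0.221
years = 2032 - 2022

projected_2032_bn = base_2022_bn * (1 + cagr) ** years
print(f"Implied 2032 SDV market: ~${projected_2032_bn:.0f}bn")  # ~$264bn
```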

Monday, 1 July 2024

HBS leads XR Sports Alliance to deliver fan experiences “no longer constrained to the limitations of TV”

Sports Video Group

article here

If extended reality (XR) is the next field in sports broadcasting, it could jump into the mainstream come summer 2026, when the FIFA World Cup lands in the US, Mexico and Canada.

HBS is one of three founding members of a new XR Sports Alliance designed to accelerate research and bring to market immersive sports applications for a new generation of consumer devices.

Speaking to SVG Europe, HBS director of digital Johannes Franken points to past VR experiments at the FIFA World Cup in Russia and the AR service offered at the FIFA World Cup in Qatar.

“While these were mostly used as marketing or proof of concept type of use cases, when we look forward we believe that these are the technologies where we need to be successful,” he says.

“It’s no secret that there is [not a huge] amount of headsets in the market but we see that there are some roadblocks to create a viable business case for that. That was one of the reasons why we thought we should found this alliance and bring many parties together, because solving those roadblocks can only be achieved as a group.

“Taking knowledge from content creation and service delivery out of social media productions and transferring that into the XR world is our goal in the alliance.”

While HBS explores ways to produce immersive XR sports content, the other partners in the alliance – chip maker Qualcomm and OTT video solutions provider Accedo – lead on device manufacture and distribution.

Accedo will explore monetisation and data intelligence, and will build the user experience. It will also advise on which features sports rights owners should test and experiment with first, and will create a user testing panel.

“Initially we were thinking about prioritising the in-home use case, but more and more we’re hearing that rights owners and stadium or venue owners are also looking at how to enhance the in-venue experience,” says Lucy Trang Nguyen, business development director, XR, Accedo. “The technology and test framework that we’re building for in-home could potentially also be leveraged for the stadium experience.”

Patrick Costello, senior director for business development at Qualcomm, says sports XR applications are a top three interest among its users.

“We’ve been investing in XR R&D for over 15 years and that manifests itself today in a very robust hardware and software roadmap. We have a dedicated XR silicon roadmap and we address all sorts of types of devices and architectures. The aim is to scale XR to the size and scope of something like mobile at some point.”

Qualcomm is even considering developing “a purpose-built device for XR sports viewing”, says Costello.

“We see the XR market developing a little bit differently than mobile. We are already seeing purpose-built devices for the medical vertical, for defence, and fitness and health devices. We really want to collect input from alliance members and see if there is room for more of a dedicated sports viewing device for this market and run that through the test framework.”

Carving out XR rights

XR is a nascent term in media and requires some pinning down. This is part of the problem that the alliance says it wants to solve.

“XR is the companion to all the immersive technologies available today and something Apple has called spatial computing,” explains Jose Somolinos, solutions strategist and XR lead at Accedo. “Most headsets available now are not only VR or AR but also MR, meaning they are equipped with cameras that let you see the outside world (or passthrough) as if you were wearing glasses and displays that on a screen in front of your eyes.”

There are a host of technical problems from compression to delivery as well as improving the comfort level of XR wearables, but there are commercial issues that the alliance will address too.

“It is very, very hard to define [XR] rights today,” stresses Franken. “How do they differentiate from traditional AV rights? Are they a subset of AV rights? How do you separate volumetric data within data rights? Every big sporting federation has their own definition.”

Not only is there a fragmented definition of XR as a medium, there’s confusion over the marketing implications too.

“How does a virtual can of Red Bull compare to a board advertisement? These kinds of definitions are all very fragmented and scattered over the market. What we are trying to do is help federations and to define a more streamlined approach,” says Franken.


XR at Euro 2024

In an example of the type of solution it brings to the field, Accedo debuted an XR sports streaming application during Euro 2024 as a showcase for Deutsche Telekom’s sports streaming service MagentaSport.

The application is built on Xtend, Accedo’s solution for XR applications, complemented with technology from Ateme and HISPlayer. It integrates live streams and data feeds and includes interactive features such as live statistics, multi-camera feeds, player cards, and 3D sponsored experiences.

In a release, Accedo said the activation introduces fans to a viewing experience “that is no longer constrained to the physical limitations of the TV screen.”

It described how the application will blend fans’ physical and virtual worlds by displaying the match in the space around the fan, while also introducing a new layer of interaction where fans can access supplementary information such as team and player statistics and watch replays from different angles. Additionally, it enables sponsors to interact with fans in new ways to drive revenue.

Nguyen says previous VR/AR experiments were mostly focused on the user experience “because that was the novelty factor” to show the public what next-generation streaming experiences could look like.

“Now we need to take this to the finish line,” she says. “We need to see beyond the user experience.”

To do this the alliance has established a test framework described as an end-to-end program for sports rights owners intended to accelerate time to market.

Within this, HBS is bringing its expertise to bear on producing live immersive video formats. Among other techniques, it is examining capture from monoscopic, stereoscopic and volumetric camera arrays, perhaps married to digital twin renders of stadiums and realtime generated graphics. Coverage could be 180 or 360 degrees.

“There are a lot of learnings that we have accumulated over the years about how to cover sports in a different way,” Franken says. “It’s all very well having fancy and exotic technology but making that work at scale for large events at the efficiency and in the timeliness required to keep that content valuable – that’s a different challenge.”

Another aspect is XR editorial storytelling. “There is a question as to what extent you use static wide shots,” adds Franken. “How much movement is acceptable in a virtual environment? If you take football or soccer as an example, then there is very little depth being captured from the main high camera platform. If you capture that stereoscopically at pitch side with the players running towards you, that immersion is really what we’re after.”

He suggests XR production is likely to sit separately from the main 2D broadcast and that “there will be a different set of production utilities for a different set of deliverables and different size of tournament. Not everybody will have the ability to afford a volumetric capture system.”

Recruitment phase

The fledgling alliance wants to onboard one telco operator in each of several key markets, which Nguyen lists as “Germany, Europe in general, South Korea, Japan, India, China and the US”.

She says: “In each of these important markets we are aiming to get at least one representative telco. They will gain access to the white label test applications that we are building within the alliance.”

Data on XR sports would be used to shape and support business cases. “Together we will contribute by testing in our respective markets so that we can collect insightful data and report back to the alliance so that all the members can share this knowledge,” she continues.

It is anticipated that the first wave of network operators will be made public in three to four months’ time. The alliance is also targeting OEMs and ecosystem technology providers. “The reason why we want to bring the OEMs onboard is that we want to test XR experiences on market-leading devices that are launched today and on those that haven’t even launched yet,” says Costello.

“You can imagine how valuable it would be for these OEMs to get their devices into the hands of premier sports rights owners and to get their feedback for improvement before launch.”

They are also keen to speak to individual teams and brands interested in XR sports activations. “In an ideal world we would cover the entire value chain for XR sports,” he says. “The alliance will facilitate an exchange around how to create future businesses and how to foster development in the industry.”

The alliance has pencilled agreements with unnamed sports federations to develop some initial experiments. “There will be an announcement once we’re able to speak about that publicly but we have the verbal commitment from those federations to start working on that,” says Nguyen.

The likes of Disney/ESPN, Warner Bros. Discovery and NBC are also being approached. “The idea is if an individual entity is spending money on a POC, we would like to leverage the learning from that investment across the entire alliance,” says Somolinos.

“One of the things that we have discovered over the past few months when we’ve been pitching this idea to sports federations, clubs and leagues, is that normally XR is driven internally by one single person or a very small team. Even in large organisations XR is considered as an experiment. We feel the loneliness of these people and how they are struggling to push internally. They are asking us individually at HBS, Accedo and Qualcomm to help them drive that upward. This is where collectively the alliance can step in.

“We can help them drive [XR use cases] internally with much more muscle than what they will be doing with their smaller experimentation and budgets.”

Nguyen calls this “forum” for members a second mission of the alliance. “We are right now in the recruiting phase.”

 


Jac Fitzgerald, Richard Rutkowski ASC, Adam Arkapaw, David Franco / Masters of the Air

British Cinematographer

The new World War II epic from producers Tom Hanks and Steven Spielberg uses cutting-edge virtual production techniques to bring the story of the United States Army Air Forces’ 100th Bomb Group to a new generation. 

article here

Five miles above the ground and behind enemy lines, 11 men inside Flying Fortress bombers battle fleets of German fighters in Apple TV+ series Masters of the Air. Based on the book by WWII expert Donald L. Miller, produced by Tom Hanks (Playtone) and Steven Spielberg (Amblin) with a cast including Austin Butler, the nine-episode drama required considerable location and virtual production shoots.

Block direction and cinematography were by Cary Joji Fukunaga (No Time to Die) working with Adam Arkapaw (The King); Anna Boden and Ryan Fleck (Captain Marvel) with Jac Fitzgerald (Freaky Tales); Dee Rees (Mudbound) with Richard Rutkowski ASC (The Americans); and Timothy Van Patten (Inventing Anna) with David Franco (Game of Thrones).

Each DP and directing team shared a gaffer, grip and camera team for exteriors and another shared camera team for volume work. DIT Mustafa Tyebkhan was one constant across all nine episodes, and he helped each DP follow Arkapaw’s lead in maintaining the show’s visuals and workflow.

Masters of the Air was shot at locations in the UK and on soundstages featuring two large volumes and a smaller volume. At the time of filming in 2021, the technology was just short of being able to produce photo-real volumetric clouds in real time. VP was used to create an immersive environment for the actors, interactive lighting for reflections, historically accurate plane formations and a time-of-day reference.

The DPs were able to interactively light the set to cast reflections on the bomber during aerial combat scenes using pre-viz sequences designed by The Third Floor in conjunction with stills shot by aerial cinematographer Phil Arntz. Plates that he shot over the UK and Northern Europe by helicopter using a RED V-Raptor array were used by VFX to replace backgrounds in post at DNEG. 

Having worked with Fitzgerald on The King (2019), Arkapaw suggested her to block two directors Boden and Fleck, who followed him with episodes four and five. Fitzgerald had never worked in a volume before.

“It was a great learning curve,” she says. “Say, for example, when we’re flying through clouds, we had the ability to undulate the light very easily or change the exposure in beautiful ways. When we were filming scenes of planes descending or coming up through cloud it was really interesting to collaborate on that with my gaffer and also with the computer technicians on the volume to get the best combined lighting result between the LED and physical studio fixtures.” 

In the clouds  

Weather was a key component to many of the actual battles replicated in the series. Arkapaw and Fukunaga were meticulous in creating shots that were above or below the cloud line, using the volume to help bring that reflective lighting to the inside of the cockpit. 

“We also had to take account of the particular direction the planes were flying in and the time of day so we could move our ‘sun’ [an HMI on a crane] and angle it accordingly,” notes Fitzgerald. 

Episode DPs would sometimes combine practical ‘sun’ light with light from the screen which required coordinated moves between the VAD and gaffer alongside the DP. 

The volume shoot involved working within authentic but cramped 1:1 replicas of the actual plane’s cockpit, fuselage, ball turret and tail. On the main volume stage these were rigged on a gimbal 20ft in the air. 

“It was trial and error because there wasn’t time to rehearse in the volume and learn first-hand what was possible or not,” Fitzgerald relates. “We had a creative hub of images and mood boards and could view [block one] rushes to see what they were able to achieve while we shot locations.”

Block one had worked out a “roadmap” for placement of Sony Venice cameras [in Rialto mode fitted with Petzval Primes] in and around set. For example, if the scene required two pilots in the cockpit there were specific places to put cameras. 

“We soon realised that we wanted to do something different,” Fitzgerald says. “Anna, Ryan and I realised that the language from block one was not quite what we wanted. We wanted to be with the actors a lot more. So, we decided to strip away a lot of the rig they had employed and find our own angles to get inside closer to them.” 

She elaborates, “We were just not feeling connected enough to the actors, so we decided to throw out their roadmap and make our own.” 

Because the sets were 20ft above ground, it was physically difficult for actors and crew to get into the space. 

“You are basically crawling and having to twist yourself around and get up into the seats,” Fitzgerald recalls. “In the tail section my shoulders were hitting the set side to side. I couldn’t move in there. Some actors found it claustrophobic.” 

The actors were further bulked up with layers of flight suit clothing and got so hot they had to wear cooling suits. 

“Once the actors went into the plane, all communication was via radio, making it really difficult for the directors to tweak anything. If anything went wrong with the camera or make-up needed adjusting, it would have been a big ordeal to stop.” 

In rehearsals they learned that without a digital replica of the wings (and just seeing clouds) talent would become motion sick. Adding digital wings helped make them feel more grounded and to understand which way the plane was turning. 

Aerial battles 

The show’s narrative arc also necessitated a different editorial approach. Episode five dovetails into the look established in episodes one to four, but Fitzgerald and her directors started to push away from that base as the story evolved. 

“The world was already set up especially in terms of the aerial battles. We have just one main battle sequence in episode five. It was pitched to us as a battle that springboards the characters and the series out into a broader world which is not just on the squadron’s base or in the sky.” 

In episode five, a mission over enemy territory ends disastrously for the 100th after they are intercepted by swarms of fighters. All but one B-17, piloted by Rosenthal (Nate Mann), are shot down while Egan (Callum Turner) parachutes alone into the German countryside. 

“It’s devastating, killing off so many characters,” Fitzgerald says. “Here we’re just getting to understand that they have an impossible task. It is very visceral.”

In episode six the story spends the majority of time on the ground tracking Egan’s progress as a prisoner of war and Crosby’s (Anthony Boyle) visit to Oxford where he meets Westgate (Bel Powley). 

“It was less a radical change of shooting style, more a change dictated by new locations and a focus on character emotions. Instead of seeing our characters on their base in military mode we now see them letting down their guard and responding to each other’s state of mind.

“We were delicate when we need to be and playful when we need to be. For example, when Crosby meets Westgate there’s a connection between them so we can be a bit more open and lighter to their flirtations.”  

When Fitzgerald had shot her block, she went to work with Fukunaga to capture additional material in the volume for block one. 

“Cary had seen our rushes and realised that we can get a lot more physical with the characters, we can get more handheld and just dirty it up a lot. After seeing our rushes, he ended up re-strategising as well. Basically, everyone was trying to figure out what you can do with a tiny space.” 

Rutkowski, who photographed episodes seven and eight, concurs with the challenging shooting experience. “When you hit a roadblock on a volume stage you are frozen,” he says. “Suddenly you are no longer using it and you pivot the whole thing to more CGI and VFX, so having the time to prep well in advance is my big takeaway from the experience. 

“There’s almost a relentless need for real estate too. To sell the realism of daylight in a volume you need height above the stage and space between the camera and the subject.” 

Arkapaw had based the show LUT on 2018 documentary The Cold Blue, which itself was composed of restored 35mm colour footage shot by Hollywood director William Wyler during 1943 aboard B-17 bombers. One of Wyler’s cinematographers, Harold J. Tannenbaum, was killed when the bomber in which he was flying was shot down over France.

Rees and Rutkowski’s portion of the story, however, required new visual looks for the POW camps and the Italian base of the Tuskegee (Red Tails) pilots, distinct from the look of the UK airbase, even though all three were shot on location near Aylesbury.

Rutkowski explains, “We wanted scenes with the Red Tails to look warm, an appearance of honey. These were [Black] men who were not allowed access to certain things back home in the US but in the War, they gained a reputation as notoriously efficient and very brave. We wanted to give that feeling of optimism and heroism by having the sun warm their faces.” 

By contrast, the light in the POW camps was dimmed and grey. Rutkowski’s research found that the Germans lit such camps with the lowest wattage light they could produce. A single source of light from a searchlight [HMI] provided illumination for night exteriors.  

Rutkowski is a qualified amateur pilot and found that experience useful on set. “Flying is a constant scanning, followed by immense concentrated focus on the instruments when necessary. I had a sense of that when helping on set for composition and verisimilitude. I knew how light should fall into the cockpit and what contrast should be. It’s a very contrasty environment, unless you’re flying directly into the sun.” 

Stellar Storytelling: Constellation with Markus Förderer ASC BVK

British Cinematographer

Markus Förderer ASC BVK reflects on his influences and inspirations for the visual language of Constellation, which takes space-set sci-fi to new heights.  

article here

When Markus Förderer ASC BVK was inducted into the ASC in 2019 he was asked which films had made the strongest impression on him as a child. His answer was 2001: A Space Odyssey, Alien and Independence Day, all of which influenced the visual approach to Constellation, a psychological science fiction series for Apple TV+. 

“If they had asked about TV shows, I would have mentioned The X-Files, which had a moody, cryptic and dark atmosphere that Michelle MacLaren and I wanted to inform Constellation,” he says. 

Executive producer MacLaren, a former producer of The X-Files, tapped Förderer to shoot the first two episodes (which she also directed) of the eight-part drama.  

Written by showrunner Peter Harness with echoes of Interstellar and Solaris, Constellation moves from space to Earth, and across time, as we follow Noomi Rapace’s astronaut (Jo) struggling to come to terms with the aftermath of an accident possibly caused by a NASA experiment aboard the ISS.

“Independence Day left an impression on me because it was a big spectacle with tension, something I also wanted to bring into this project,” says the DP, who shot Roland Emmerich’s sequel Independence Day: Resurgence in 2016. “In early conversations with Michelle we discussed how to capture the scope of the show and decided to shoot on location to give the project authenticity and scale. At the same time the core of the story is a mother-daughter relationship so we needed to convey intimacy.” 

This included shooting extensive sequences aboard the ISS on a soundstage rather than a volume. “I love shooting with LED walls but it’s not right for everything. I was fighting to shoot as much as possible in camera because the audience will always feel the integrity if the source is captured for real.” 

Gravity is one of many examples of productions that have successfully created the illusion of zero gravity but Förderer felt he wanted to avoid the “perfect Hollywood floating camera and long, impossible oners”. 

Former ISS astronaut Scott Kelly was a consultant to the show and explained how astronauts video themselves and their experiments. “He shared many of his personal videos and I asked specifically how they get the camera to float. In zero-G you can just push the camera and it moves like some fluid special effect. I wanted the audience to feel like they are with Jo, following her.”

Production designer Andy Nicholson not only built detailed sets for the ISS, he also constructed plywood models for the camera team to rehearse in. 

“We played around with a mix of hi- and lo-tech techniques. Sometimes we had stunt coordinator Martin Goeres flying in a wire rig with a camera on a stabilised remote head so we could remotely operate. Sometimes we used Steadicam or gimbals and a lot of very smooth, handheld. It’s not perfectly smooth in zero gravity, but astronauts do shoot handheld so this was a big influence.” 

Lighting was incorporated into the set to avoid opening it up, with the exception of certain shots where a piece of ceiling was removed to suspend actors on wires.

Gravity-defying shots 

The spacewalk in episode one presented another challenge, in part because of the limited manoeuvrability of the heavy replica spacesuit worn by Rapace, but also because they used an authentic visor featuring triple-layer glass.

Most screen depictions of actors wearing space helmets are shot with glass removed and added as CG to avoid accidental reflections of camera or crew. Here, Förderer leant into the light reflected from the ISS sets. 

“Every exterior module we see Jo come into contact with was a build. Sometimes it’s obviously extended into depth but when she’s exploring the impact of the accident we can see, in the glass, very realistic reflections. We ensured everything on stage was blacked out, the camera crew wore black and worked mainly from cranes to avoid reflections. We shone a super bright LED source onto set to replicate the sun. Over 90 percent of what you see in the reflections are real with occasional VFX of the Earth’s reflection. 

“This was interesting conceptually since the script explores different realities with mirrored storylines so we employed reflections where we could.” 

There are very few windows in the ISS, apart from the giant cupola, but in the story the craft dips in and out of power when the solar panels gain or lose sunlight. 

“Our sunlight was a big LED light source outside on a moveable crane so we could move from eclipse to daylight. Everything was controlled from a dimmer board.  To give the audience a clue as to where we are in space and time when we cut to the ISS post-accident, the lights are always slightly flickering.”

Scenes in the first two episodes are mirrored in later episodes. For example, we see Jo grab an astronaut’s floating hand in episode one and a reverse perspective later, shot by another unit. 

“There was a lot of discussion about how to pull it off,” Förderer says. “We had a good overlap in pre-production but I had to wrap my head around all episodes to make sure we set it up in a way that made sense when we shot from the other side.”

Three units shot in parallel, with DPs Frank Lamm and Yaron Scharf able to review each other’s footage. Scenes that Förderer shot in the ISS on one day could be ‘mirrored’ with the same lighting the next day by a different DP.

He selected the RED V-Raptor because its small size enabled them to mount it onto spacesuits and move in the tight ISS set and Soyuz module. 

“I wanted to give the audience this feeling of being alone in space with Noomi’s character so we needed a system that’s fairly compact. It paired beautifully with Panavision anamorphics and still records at high resolution.” 

Using anamorphic glass, “a good blend of modern and cinematic scope”, focussed on an actor’s face meant up to eighty per cent of the image was out of focus. “It stretches the background vertically and subconsciously imparts a tension to the shot because it’s not quite how our eyes see.”

In the story, Jo’s capsule lands in Kazakhstan, to which the production sent a scout, but Morocco proved logistically easier to access and service. There Förderer shot aerials to capture the descent of the capsule over a wide area and employed FPV race drone specialist Skynamic using a RED Komodo “which cuts perfectly with our A cameras.”

Skynamic were also directed to fly a Komodo through the ISS set. “It was so helpful to be able to intercut from the drone to shots captured by a stunt person so you really feel you are floating.”

Scenes set in Sweden on a frozen lake and cabin in a remote forest were shot on location in Finnish Lapland, close to the Swedish border.

“There’s no way to light a snow landscape in a realistic manner since you will immediately feel the lights’ sources. On the scout I took lots of stills and short videos as a proof of concept to do day for night, and this worked out well.” 

“We filmed at a time of the year where the sun barely touches the horizon giving a very long pitch-black night and a very long blue hour during which I underexposed in camera to embrace the available light without having to artificially light everything.” 

“I knew we had to somehow extend our shootable range and wanted to avoid using a harsh Condor backlight which on snow looks like giant construction lights.”

Instead of directly lighting the set he and Uwe Greiner built a ‘UFO light’ of powerful LED lights on a scissor-lift hidden behind the cabin. “It softly lit the atmosphere above the frame for miles all around, not the actors directly. We gave it a greenish hue into which we could run animations from a dimmer board to give that presence of the northern lights.”

It was the first time Förderer had experienced the actual Northern Lights which displayed for real almost every night but are hard to capture on camera. “There’s some mystery to the phenomena so it felt right to make this a theme in our story. In later scenes I’d use cyan/turquoise as a subtle fill colour to hint at some sense of presence from outer space. Even in a kitchen scene I wanted people to be subconsciously reminded that there’s something bigger at play.” 

With colourist Florian Martin he designed a LUT based on Kodak stock with RED colour science. This glued the whole show together but principal looks were created with lighting.  

Cabin interiors, for example, were shot on a soundstage lit by firelight and three-wick candles, augmented with LEDs in the background.

“You usually have to extend candlelight artificially with lights off-camera but the sensitivity of the V-Raptor meant we didn’t have to over-do augmentation particularly to illuminate a painting in the scene which has narrative significance.” 

He also embraced HDR to show detail in very dark and extreme bright shots. During the spacewalk the sun comes up, Jo puts her visor down “and the audience is hopefully being blinded by the reflection of the sun. It’s nice that your eyes get stimulated with a lot of light.” 

He adds, “I feel a lot of my colleagues are more conservative when it comes to HDR, perhaps worried what executives viewing rushes on uncalibrated screens might think, but I was really excited to explore it. Working for a studio that also makes displays I knew that producers would have an iPad Pro. Put that in calibrated mode and you see exactly what you’d see on a $30,000 grading monitor. It meant we didn’t need to do much dailies grading.” 

MacLaren wanted the opening scene of a car driving through miles of forest to have scope. For this, Förderer shot plates day for night and then replayed them on a small volume stage in Berlin at the end of production. “That proved very efficient since we could pick up all kinds of little pieces which we might have not had time to get on location.” 

Constellation’s narrative is complex, and it took Förderer several readings of the script to understand it from every angle. “A big discussion point was how to give visual clues to the audience about which character’s reality we are in.”

Those worlds collide in “liminal spaces”, the first of which occurs at the end of episode one. Jo is approaching the cabin over a frozen lake when the falling snow slows down around her.

“The audience can always feel if you add some gimmicky VFX so I wanted to do something with lensing that tells you she’s now transitioning into this other reality.”

He unearthed an old lens without a front element that “technically is not supposed to be able to focus, and has very strong distortion. Michelle loved the test so we shot the whole scene with this lens. In later episodes we didn’t use it so much because once it’s established in the audience’s mind that there could be different realities it’s interesting for them to figure out which is which without giving that visual cue.” 

Förderer operated cameras on the ‘liminal scenes’ so he could play with focus “and give a three-dimensional sense of some pulsating energy.”

He has no idea of the origin of the lens. It would seem to have come from another dimension. 

 


What Is AI Superintelligence and, Follow-Up Question, Should We Be Superworried?

NAB

Predictions of Hollywood’s extinction at the hands of generative AI pale into insignificance beside warnings about the fate of humanity if evil scientists get their way in building artificial general intelligence (AGI).

article here

AGI is a hypothetical system that can perform at human or superhuman levels. Some scientists think it either unachievable or many decades away. Others think it could be here in the 2030s.

One of them is Leopold Aschenbrenner, who is behind an AGI startup. He warns that the lust for profit is driving developers to accelerate AGI at such a pace that guardrails are being ignored.

There are even suggestions that western liberal democracy will fall if China gets its hands on the secret to AI superintelligence. It is nothing less than an arms race and Aschenbrenner is here to save us before it’s too late.

“The AGI race has begun,” he writes in a 50,000 word essay. “We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.”

Aschenbrenner is not part of some doomsaying cult. He stands with those, like Elon Musk, calling for greater “situational” awareness of AI’s potential and for intervention at state level to curb the power of those wielding it.

He doesn’t name them but his principal target is OpenAI — where he was part of a team exploring safeguards around the firm’s AGI development until he was fired last April, apparently for questioning the company’s deviation from its society-before-profit goal.

The heads of the governance division where he worked at OpenAI have also quit. Another employee, Daniel Kokotajlo, who also believed OpenAI could be steered toward safe deployment of AI, shares Aschenbrenner’s views.

He told Sigal Samuel at Vox, “OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don’t proceed with care.”

Kokotajlo said he “gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”

Aschenbrenner, a German national who lives in San Francisco, makes clear that his polemic “is based on publicly-available information, my own ideas, general field-knowledge, or SF-gossip.”

The OpenAI whistleblower employs dog-whistle politics in his ‘reds under the beds’ warning about the perils to the US if China gets hold of AGI first.

He thinks US security services will “wake up” and start to develop their own AGI to counter bad state actors from 2027. “The Free World Must Prevail” he says, ignoring the real and present non-AI threat to democracy on America’s own doorstep.

Here is a summary of his red flag predictions:

AGI by 2027 is “strikingly plausible.” GPT-2 to GPT-4 took us from preschooler to smart high-schooler abilities in four years. Tracing trendlines in compute, algorithmic efficiencies and gains (from chatbot to agent), “we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.” (A toy version of this extrapolation appears after this list.)

AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress into a year. “We would rapidly go from human-level to vastly superhuman AI systems. The power — and the peril — of superintelligence would be dramatic.”

As AI revenue grows rapidly, many trillions of dollars will go into GPU, datacenter, and power buildout before the end of the decade in an “extraordinary techno-capital acceleration.”

The nation’s leading AI labs treat security “as an afterthought. Currently, they’re basically handing the key secrets for AGI to the CCP on a silver platter. Securing the AGI secrets and weights against the state-actor threat will be an immense effort, and we’re not on track.”

Reliably controlling AI systems much smarter than we are is an unsolved technical problem, he warns. “While it is a solvable problem, things could easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic.”

Superintelligence will give a decisive economic and military advantage, he suggests. “In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way?”
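Here is the toy version of the extrapolation flagged in the first prediction. The per-year growth rates are illustrative assumptions in the spirit of the essay, not figures reported in this article; the point is only the shape of the argument, that stacking orders of magnitude of “effective compute” is what turns four years into another qualitative jump.

```python
# Toy version of the "count the orders of magnitude" extrapolation behind
# the AGI-by-2027 claim. The growth rates below are illustrative assumptions,
# not figures reported in this article.
COMPUTE_OOMS_PER_YEAR = 0.5      # assumed growth in physical training compute
ALGORITHMIC_OOMS_PER_YEAR = 0.5  # assumed algorithmic-efficiency gains

def effective_compute_ooms(start_year: int, end_year: int) -> float:
    """Orders of magnitude of 'effective compute' gained between two years."""
    years = end_year - start_year
    return years * (COMPUTE_OOMS_PER_YEAR + ALGORITHMIC_OOMS_PER_YEAR)

# GPT-2 (2019) to GPT-4 (2023): the jump described as preschooler to high-schooler.
baseline = effective_compute_ooms(2019, 2023)

# The essay's bet: a similar-sized jump lands again by 2027.
repeat = effective_compute_ooms(2023, 2027)

print(f"2019->2023: ~{baseline:.0f} OOMs; 2023->2027: ~{repeat:.0f} OOMs")
```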

In a reality check on these claims, Axios managing editor Scott Rosenberg says that the wider consensus among experts is that AGI won’t happen this fast or in this direction.

“That’s not pessimism: The consensus sees so much value and utility in AI where it is now, and where it’s headed long before it gets to AGI, that AGI isn’t really the point.”

Rethink Research: It’s the End of M&E As We Know It

NAB

Everyone understands that the industry is in a period of exceptional disruption, but few present the developments of the next decade with as much apprehension as the analysts at Rethink Research.

article here

Let’s look at their reasoning and compare it to our realities.

“2033 is a cliff, over which the media and entertainment industry will barrel,” its report, “The Crash of 2033,” states. 

Here’s the thinking:

“Declining pay-TV operators will not have the market power to secure exclusive content rights. Sports will be lost to OTT video rivals, and at this point, the roof will cave in — prompting the abandonment of legacy video distribution networks, and the collapse of many major operators that cannot extricate themselves from these car crashes.”

Even if, as is inevitable, pay-TV operators shift all of their services to IP-based delivery, it is too little too late, according to Rethink’s analysis.

The analysts predict: “This will lump their video services in with their rivals, and without the ability to use consumer premises equipment to assert a user experience, most will be lost in the wash.”

The Rethink research team settled on 2033 as significant because it marks the end of many current long-term sports rights contracts, which (as of this moment) remain in the hands of studios, networks and pay-TV operators.

But beyond this, legacy TV will not have the financial muscle to compete.

“For traditional pay-TV operators and broadcasters, sports and news are the last two bastions — all that is left between the high-margin halcyon days and total irrelevance.”

Taylor Swift’s “Eras” tour and cinema concert film offer some life for traditional media, Rethink suggests, by illustrating “plenty of scope for using movie theaters to show content other than movies.”

eSports, however, is projected as a dead end: “It has continually disappointed as a salvation for Hollywood. This could change, but is not a safe bet, despite younger consumers playing more games and spending more time watching streamers.”

Younger generations brought up consuming media on mobile devices, coupled with advances in mobile connectivity (5G), will deliver a decisive blow to the old fixed-infrastructure ecosystem of broadband cable and satellite.

“The first consumers that will never take a fixed-line offering have already been born, and so the expansion into cellular will become existential for those with fixed-line offerings,” the paper states.

ATSC 3.0 (the hybrid broadcast/streaming delivery mechanism) “has come too late to prevent the onslaught of cord cutting in the US.”

The Cost of AI for Hollywood

Rethink also has some strong statements about AI, including a prediction that 2026 is the year the waters will break.

That’s when the agreements signed with the WGA in Q4 2023 will end. The current contract stipulates that script writers themselves are allowed to lean on generative AI systems such as ChatGPT in the creative writing process, and then sell these scripts to studios.

“This is a canny move from the Hollywood studios,” the analysts say. “Advancements in GenAI between now and May 2026 will be lethally swift and effective, by which time script writers will be so adept in generative AI taking away the mundane tasks that many content writers will not have a leg to stand on when the next bout of strikes comes around.”

Further, “once AI has mastered a single production, the technology will be tasked with spinning up entire catalogs of content and even linear TV channels — with FAST positioned as a perfect partner.

“As a result, the next decade will be rife with litigation, and these fierce court battles will further deplete Hollywood studios to the point of destitution.”

So… Just a Mid-Level Panic?

There’s a lot of panic in Rethink’s analysis, but some of the messaging is nonetheless insightful.

That M&E should have its eyes on 2026 is worth underlining. GenAI will have advanced considerably by then. We will probably be on Sora 3.0, and the end of agreements with writers and actors will force Hollywood’s hand to engage with AI on a less cautious level.

As to whether the whole industry as we know it has already pressed the self-destruct button by failing to shift to streaming models and widen business portfolios sufficiently beyond film and TV, well… this does sound more than a little alarmist.

There will be change, but it will be evolutionary… nothing is going to drop off a cliff. There are a lot of people in our industry who are too smart to allow that.