Wednesday 17 April 2024

Bridging the gap between the main and small screen

CSI

cover story, Spring 2024, p8-11

article here

The shoppable TV era is here, but the evolution of interactive TV isn’t limited to the adoption of QR codes

Promised for years, the era of shoppable TV has finally arrived. Driven by the penetration of Connected TV (CTV), the drain of audiences from linear TV, new advertising technology and changing consumer behaviour, the path between brand exposure and direct purchase can be short-circuited.

Most importantly, consumers seem to like it. Over half of CTV users wish they could shop online using their TV, according to a recent survey by LG, with 63% wishing they could see store inventory on their TV.

An August 2023 report from the US-based Video Advertising Bureau (VAB) found that viewers who interact with a shoppable ad are inherently more likely to then make a purchase.

The survey found more than one third (36%) of audiences have interacted with shoppable ad QR codes, and 67% have interacted with the “click to receive info to your email or device” ad format.

Another survey, by Lucid, found that 75% of viewers prefer to see an interactive TV ad rather than a standard commercial, particularly when they can find out more information about a product or redeem a special offer.

Disney quoted that stat earlier this year when it launched a trial that allows consumers to purchase products on Disney+ and Hulu by connecting the stream to a shopping cart using Gateway Shop.

Shoppable and interactive ad formats have taken off in the U.S., where FAST channels have soared in popularity. In LG’s poll, 78% of CTV users regularly use FAST apps, and 59% preferred FAST over paid streaming services, “making FAST a critical part of every media plan.”

Having launched its ad-supported Prime Video service, Amazon is predicted by Omdia to generate more than $2bn in incremental ad revenue this year and is in prime position to lead the shoppable ad market. Prime Video experimented with shoppable ads during the Black Friday NFL game in December. Ads featured on-screen QR codes linking to Amazon’s Black Friday deals, enabling viewers to scan codes with mobile devices to go straight to the offers. 

“While shoppable ads were a novelty a couple of years ago, brands are now comfortable allocating big budgets to CTV,” says Roxanne Harley, Head of Strategy & Client Development at Dutch digital media platform Azerion. “The next step in this adoption is testing interactive ad formats in more premium TV and publisher environments. Expect more innovation and testing of interactive ad formats in the future, both in the UK and worldwide.”

The “elite player”, according to Amagi, is the QR code. “During the pandemic people were at home sitting with phones; 87% of folk are watching TV with second screens. It is a perfect captive audience and now an opportunity to reach them,” says James Smith, EVP & GM, Global Ads Sales & Programmatic, Amagi.

LG calls them a ‘must-have’ for ad creatives, citing 70% of CTV users liking TV ads with QR codes and 62% saying they would scan them if exposed. Broadcasters including Sky UK, ITV and NRK have begun to experiment by integrating QR codes into some of their broadcast content.
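
At its simplest, the mechanics behind these integrations amount to encoding a campaign URL as an on-screen graphic. Below is a minimal sketch of that step using the open-source Python qrcode library; the URL and output path are placeholders, and this is a generic illustration rather than any broadcaster’s actual pipeline.

    import qrcode

    # Placeholder campaign URL; a real deployment would carry tracking
    # parameters so scans can be attributed to the specific ad slot.
    OFFER_URL = "https://example.com/offer?campaign=ctv-spot-001"

    qr = qrcode.QRCode(
        # High error correction tolerates the blur and glare of a phone
        # camera pointed at a TV screen from across the room.
        error_correction=qrcode.constants.ERROR_CORRECT_H,
        box_size=16,  # large modules render cleanly at TV resolutions
        border=4,     # the quiet zone the QR spec requires
    )
    qr.add_data(OFFER_URL)
    qr.make(fit=True)
    qr.make_image(fill_color="black", back_color="white").save("qr_overlay.png")

The resulting graphic can then be keyed into the ad creative; the error-correction headroom is what makes scanning from the sofa workable.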

“I’m really interested in how we build bridges between big and small screen products,” Tanja Skjoldborg Lindboe, Head of Product Development, NRK TV, told the Northern Waves conference, adding that QR codes were “the best way we’ve seen yet.”

Shopify, Walmart, Wendy’s and DoorDash are among the retail and commerce partners leveraging Roku Action Ads, where consumer interactivity involves sending a text, scanning a QR code, or making a direct purchase.

Various factors are propelling the technology towards wider utilisation.

“The increased familiarity and comfort with QR codes among consumers is driven in part by their ubiquitous use during the pandemic for contactless transactions and interactions,” says Mike Shaw, Director, International Ad Sales, Roku. “QR code scanning is now faster and more seamless, eliminating previous barriers to adoption. Also, the integration of QR codes with CTV platforms offers a bridge between traditional broadcast media and digital interactivity, presenting a compelling opportunity for advertisers to shorten the purchase funnel between awareness and conversion.”

Taken together there’s every indication that the technology is poised for broader acceptance.

 

Diving into the tech stack

An efficient CTV shopping experience relies on robust back-end technical components, including flawless integration with e-commerce platforms to enable real-time inventory updates and transaction processing.

“The backend system should identify relevant objects within the video content, such as products or brand logos, to avoid an editorial nightmare,” explains Bleuenn Le Goffic, VP Strategy and Business Development, Accedo. The system then needs to pre-filter video frames to highlight shoppable metadata. Last year, Accedo experimented with AWS to create an AI-powered virtual product placement engine. “Having this type of system in place means the backend can do the heavy lifting and lower the editorial investment required to enable shoppable TV across all assets of a specific video,” she adds.
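
To make the pre-filtering idea concrete, here is a minimal sketch, not Accedo’s or AWS’s actual system: frames are sampled from an asset, a detector (stubbed out here; the file path is also a placeholder) is run on each sample, and timecoded candidate metadata is emitted for editorial review.

    import json
    import cv2  # pip install opencv-python

    def detect_products(frame):
        # Placeholder for a real object/logo detection model. A production
        # system would return labels, bounding boxes and confidence scores.
        return []

    def prefilter(video_path, every_n=25):
        # Sample every Nth frame and keep only those with candidate products.
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
        shoppable, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % every_n == 0:
                hits = detect_products(frame)
                if hits:
                    shoppable.append({"time_s": idx / fps, "products": hits})
            idx += 1
        cap.release()
        return shoppable

    if __name__ == "__main__":
        print(json.dumps(prefilter("asset.mp4"), indent=2))

A real system would swap the stub for a trained product and logo model and write the results into the asset’s shoppable metadata, leaving editors to approve rather than annotate.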

Shaw points to advanced data analytics capabilities as essential to track user engagement and optimise shoppable ad campaigns, while integration with content management systems enables dynamic insertion of shoppable elements into streaming content.

Codecs (AVC/H.264, HEVC/H.265, VVC/H.266) are another important element of the underlying technology; they “make it easier and more affordable to use QR codes within more types of content,” says Geoff Gordon, VP Global Marketing, MainConcept.

 

User experience is key

ITV was the first UK broadcaster to introduce a shoppable TV service. Launched around Love Island in 2021, developed by The Take, delivered on LG CTVs with Boots as ad partner, the service was promoted with optimism about scaling it up. Yet ITV quietly abandoned the trial, perhaps because the user experience wasn’t quite right.

“The best user experience is one that is seamless and integrated with the addition of interactive elements, which allows the live content and ad to share the same screen without interrupting the viewing experience,” says Martins Magone, CTO, Veset. “To enhance the user experience through external integrations such as a second screen, content providers need to start considering the ways in which interactive features are best managed.”

Today, purchasing products from TV ads mostly occurs on mobile, as 71% of CTV users are “always” holding their phones while watching TV, according to LG.

If shoppable TV comes as an overlay to the actual video, then Accedo recommends using a second screen for the least intrusive experience. “The remote control can then be used to interact and action shopping for the products that users see and find appealing on the second screen,” says Le Goffic.

Roku is firmly in the remote camp. Since the remote control is the primary interface through which users interact with their TV it “provides a more immediate and seamless experience for consumers to engage with shoppable content,” argues Shaw. “By incorporating shoppable functionalities directly into the remote control, Roku can streamline the shopping journey and enhance convenience for viewers, ultimately driving higher conversion rates for advertisers.”

Voice and gesture activation offer further opportunities. Voice-enabled shopping allows users to verbally command their CTV to add items to their shopping cart or make purchases directly through voice assistants (like Roku Voice). Gesture recognition enables viewers to interact with shoppable content using hand gestures or motion controls, enhancing the immersive shopping experience.

 

Integrating shoppable TV into live streams

Sports is a particularly attractive use case, with often high viewer numbers and huge levels of fan loyalty. However, integrating shoppable TV is a challenge.

“You don’t want to interrupt the live story,” says Smith. “The second screen experience needs to be closely tied to what is happening on the CTV. You can’t take viewers completely away from the show and expect them to do a ton of research and come back to the show. So the creative needs to trigger an activity. Sometimes it could be asynchronous – the user cued up to complete a conversion later. In live that’s a good way to do it, provided you can still demonstrate attribution.”

As well as the complexity of serving ads and products in real-time, live sports such as football have an element of repetitiveness with the same players on the pitch, often with the same goal scorers, “so for the regular viewer you need to create a shoppable journey across many different products and experiences to maximise the opportunities for engagement,” says Le Goffic.

On the other hand, this year’s Super Bowl was the most-watched US broadcast since the 1969 moon landing, highlighting the opportunity in getting live sports integrations right.

“Opportunities include capitalising on the high viewership and engagement levels associated with live sports to drive conversions for advertisers, enhancing the fan experience by offering exclusive merchandise and interactive content during live broadcasts, and leveraging social media integrations to facilitate peer-to-peer sharing and viral marketing opportunities,” says Roku’s Shaw.

Providers and advertisers are already exploring how AI can aid in recognising products and contexts in live video to enable instant shopping interactions. Potential transactional behaviours are perhaps best leveraged during moments of high engagement that could open new opportunities for contextual-based advertising.

“By leveraging the flexibility and scalability of cloud playout, broadcasters can exploit the tech even further with features such as inserting QR codes into live streams, which is crucial for advertisers keen on investing their CTV budgets in interactive ad formats,” says Magone. “Content providers need to provide the tech that allows for real-time changes so ads can be delivered in a range of formats based on data insights and viewer behaviours.”
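
Mechanically, inserting a QR code into a live stream is an overlay operation in the playout chain. The sketch below is a generic illustration, not Veset’s product: file names are placeholders, and a cloud playout system would operate on the live mezzanine feed and switch the overlay with in-band triggers rather than processing a whole file.

    import subprocess

    cmd = [
        "ffmpeg",
        "-i", "programme.ts",    # programme feed (placeholder file)
        "-i", "qr_overlay.png",  # QR graphic from the ad decision system
        "-filter_complex",
        # Pin the QR code 40 px in from the top-right corner.
        "[0:v][1:v]overlay=W-w-40:40",
        "-c:a", "copy",          # pass the audio through untouched
        "out.ts",
    ]
    subprocess.run(cmd, check=True)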

 

The evolution of interactive

Interactivity isn't limited to QR codes. Peacock and Max have started using ‘pause ads’ where ads play when the user pauses the content. Ads are delivered in an unobtrusive way because the ad break is initiated by the viewer.

Advancements in ad tech have also made it possible to insert ads that are displayed alongside the content. This removes the need for a traditional ad break, thereby reducing the chance that viewers will disengage. Veset’s AdWise solution uses dynamic squeezing technology to squeeze the content to make room for the ad to display on the screen alongside the programme.
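
The geometry of a squeeze-back is simple to express as a filter graph. The following sketch is a generic illustration of the technique, not Veset’s AdWise implementation; dimensions and file names are assumptions. The programme is scaled to 75% into the top-left corner and an L-shaped ad graphic fills the revealed region.

    import subprocess

    cmd = [
        "ffmpeg",
        "-i", "programme.ts",    # 1920x1080 programme (placeholder)
        "-i", "l_shape_ad.png",  # 1920x1080 ad frame, transparent over
                                 # the 1440x810 top-left corner
        "-filter_complex",
        # Shrink the programme to 75% and pad back to full raster,
        # then composite the L-shaped ad over the exposed band.
        "[0:v]scale=1440:810,pad=1920:1080:0:0:black[main];"
        "[main][1:v]overlay=0:0",
        "-c:a", "copy",
        "out.ts",
    ]
    subprocess.run(cmd, check=True)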

For Magone, the next logical step from QR codes is to bring interactive experiences directly onto the TV screen itself, creating a “non-intrusive yet engaging shoppable experience”. “In-video style ads can significantly shorten the path to purchase without interrupting the intimacy of the live experience,” he says.

Video overlay ads are an “exciting application” for adding interactive features or placing the ads directly into the on-screen content, says Gordon. “It’s like product placement, but instead what’s delivered will be personalised to the viewer for a more engaged advertising experience.”

Shaw also points to AR overlays allowing users to visualise products in their own environment before making a purchase. He says ‘gamification’ rewards viewers for engaging with shoppable content, encouraging repeat interactions, which ultimately enhances the overall viewer experience and drives higher conversion rates for advertisers.

Disney's Gateway Shop allows viewers to send on-screen products to a second screen without interrupting their streaming session. Harley judges this a good way to think beyond the standard ad spot so audiences can receive information from their TV to their smartphones and shop the products they see on TV.

“As exciting as this sounds, looking at media planning and measurement across these new formats would be beneficial so CTV ads don't have to do it all alone,” Harley adds. “My advice to drive purchases would be to build a full-funnel advertising strategy with omnichannel formats and measurement throughout for cost-effective reach and conversions.”

 

Challenges ahead

Creating a more robust shoppable TV experience entails other challenges. For instance, balancing user experience with advertising content to avoid overwhelming viewers with commercials “is crucial”, deems Shaw.

Addressing privacy concerns related to the collection and use of consumer data for targeted advertising is also high on the agenda. Perhaps the most critical issue is relevance. Without it, the whole shoppable media enterprise is sunk.

“The biggest mistake is failing to deliver the right message to the right user while working within GDPR and privacy controls,” says Smith. “We have a lot of information on users that will help place the right ad in front of them, so there’s no reason to put a retail ad in front of a user who has already purchased that product.”

The reason 70% of CTV users don’t scan QR codes on ads is that the product is not relevant. “The key to success for shoppable TV is relevance, underscoring the need for advanced audience targeting,” states LG.

Shoppable TV experiences should create a higher degree of personalisation and context. Smith says Amagi is focused on pioneering the personalised EPG, which will serve viewers a menu of 100-200 channels tailored to specific interests and curated from potentially 5,000 channels distributed over CTV.

Even then, Smith says the industry needs to join the dots down the funnel. “One of the most important things the industry needs to get better at is attribution. When you get down the funnel, in a situation where the user is trying to convert via mobile phone or CTV, then brand partners need the data to manage that attribution.”

He also suggests there is a way to go before shoppable TV ads are transacted programmatically. “We are not seeing massive CTV budgets for overlays or interactive ad formats that are transacting programmatically.”

Educating advertisers and content creators about the potential of shoppable TV and best practices for maximising ROI is essential for widespread adoption.

CTV platforms with a large user base and registered methods of payment are best positioned for this opportunity.

 

Tuesday 16 April 2024

New Cameras at NAB Show 2024: From the Cine to the Mini

NAB

article here

Imaging technology innovation has continued at a fast and furious pace, with a number of significant new camera releases at NAB Show 2024. Here’s a brief rundown:

High-End TV and Cinema

Blackmagic Design jumped into cameras a decade ago with its first Pocket Cinema Camera and has been upping the ante ever since. The URSA Cine 12K is its new flagship, designed for high end production.

The $14,995 camera features a new sensor that builds on the technology of URSA Mini Pro 12K with larger photo-sites capable of 16 stops of dynamic range. The full sensor area gives customers a 3:2 open gate image enabling post reframing. Alternatively, the larger sensor area can be used to shoot anamorphic and deliver in a range of aspect ratios. Plus, you can shoot in 4K, 8K or 12K using the entire sensor without cropping, retaining the full field of view. There are even 9K Super 35 4-perf, 3-perf and 2-perf modes for full compatibility with classic cinema lenses. An optional Cine EVF is for outdoors and handheld shooting.

There are a variety of ways to work with the media, which at 12K uncompressed will be substantial. For example, the URSA Cine will enable H.264 proxy file creation in addition to the original camera media when recording. “This means the small proxy file can upload to Blackmagic Cloud in seconds so media is available back at the studio in real time,” according to the vendor. “The ability to transfer media directly into the DaVinci Resolve media bin as editors are working is revolutionary and has never before been possible.”

Media can also be streamed direct from the camera (over RTMP and SRT) to major platforms or to clients via Ethernet, WiFi, or even a tethered 5G phone for mobile data.

“We wanted to build our dream high end camera that had everything we had ever wanted,” said Grant Petty, Blackmagic Design CEO. “Blackmagic URSA Cine is the realization of that dream with a completely new generation of image sensor, a body with industry standard features and connections, and seamless integration into high end workflows. There’s been no expense spared in designing this camera and we think it will truly revolutionize all stages of production from capture to post!”

BMD has also upgraded the PCC into a rugged cube design that targets the market currently occupied by the likes of Red Komodo. The full-frame Blackmagic Pyxis 6K costs $2,995, comes with multiple mounting points for camera rigs such as cranes, gimbals or drones, and is available with either L-Mount, PL or Locking EF lens mounts.

Like the URSA Cine 12K, the Pyxis also generates proxy files for instant upload to the cloud and media availability to editors anywhere working in Resolve. It also includes an optional Cine EVF for outdoors and handheld shooting.

“Since the introduction of the original Pocket Cinema Cameras, our customers have been asking us to make it in a more customizable design,” said Petty. “But we wanted it to be so much more than just a Pocket Cinema Camera in a different body. The Pyxis is a fully professional cinema camera with more connections and seamless integration into post production workflows.”

Live Production/Studio

High end camera makers are turning their attention to the broadcast and live events market, where there is growing demand for cinematic imagery (for live, highlights and BTS documentaries) and for the shallow depth of field of digital cine lenses.

In its first outing since being acquired by Japanese camera maker Nikon last month, RED was at NAB Show highlighting broadcast solutions, including a SMPTE fiber cine-broadcast module and broadcast color tools.

The new RED Cine-Broadcast Module integrates the company’s V-RAPTOR camera into live broadcast scenarios such as sports and concerts. It enables two channels of 4K 60P (HDR/SDR) over 12G-SDI and is IP ready with SMPTE ST 2110 (TR-08) and up to a 4K 60P JPEG-XS feed. The module features a hybrid fiber optical cable connector which connects to a rack mountable base station.

Additionally, broadcasters will be able to shoot with slow-motion, AI/ML augmentation and live-to-headset using 8K 120fps R3Ds via RED Connect, available under a separate license.

Complementing these developments is a new firmware broadcast color pipeline for “live painting” of RED cameras in a broadcast or streaming environment.

ARRI has also expanded its Alexa brand of high-end imagers into live broadcast setups, from outside broadcast concerts, sports and esports to studio-based talk shows and game shows.

The Alexa 35 Live Multicam System integrates into existing live production environments providing the full functionality of a system camera while retaining the flexibility of a dockable camera setup, according to a release.

Supporting a current trend in live productions, the Super 35-sized 4K sensor enables shallow depth of field and offers 17 stops of dynamic range for handling extreme lighting situations presented in SDR and HDR.

“As a result, even contrast-y concert lighting is captured faithfully and skin tones are beautiful, so performers always look their best. Low light scenes display minimal noise, and highlights roll off in a natural, film-like way,” ARRI claims.

The full system comprises the new Alexa 35 Live camera (with fiber camera adapter and base station), a Skaarhoj remote control panel (though it can work with other RCPs), and accessories such as base and receiver plates, an adjustable monitor yoke, an extra-long camera handle, a tally light with camera ID display, and a rain cover. A new large lens adapter assists rapid setup with box lenses.

One of 87 pre-made looks from a built-in Look Library can be selected, and producers can also choose from Textures (five multi-cam and eight cine-style) to modify grain and contrast.

Japan’s Ikegami marks its 60th anniversary with an expanded range of HD and 4K UHD cameras targeting bread-and-butter live broadcast.

“Our emphasis at NAB 2024 will be on the advance to 2160p 4K-UHD as the globally preferred standard for producing high-value television content,” said the company’s Alan Keil. The new UHK-X700 model can be used for pedestal-mounted studio operation, tripod-based sports coverage and shoulder-mounted location production and features three 2/3-inch CMOS UHD sensors with a global shutter to minimize artifacts when shooting LED screens or scenes illuminated with flash or strobe lighting.

The UHL-F4000 is a compact and lightweight UHD HDR camera with low power consumption designed for aerial shots from a helicopter. The camera head sports three UHD CMOS global shutter sensors “capturing natural images completely clear of geometric distortion and flash band effects.”

Germany’s PROTON Camera Innovations launched the PROTON CAM, billed as the world’s smallest broadcast-quality camera.

Measuring just 28mm x 28mm and weighing only 24 grams, PROTON CAM is tiny, but it also incorporates market-leading specifications for its class. It uses 12-bit sensor technology and an advanced FPGA to deliver high resolution and dynamic range, capturing details with exceptional clarity. It also offers a wide-angle view of up to 120 degrees and strong low-light performance without image distortion, allowing broadcasters significant flexibility and creative scope in its deployment.

“Crucial to the core proposition of PROTON as a company is our ability to maintain 100% ownership over our research and design process, meaning that we guarantee full control over the innovation and quality standards of our product,” said PROTON CEO Marko Hoepken. “Whilst the tiny size of the PROTON is of course a key USP, it was crucial to us that this was not a gimmick that came at the expense of other deliverables. The exceptional image quality and technical specifications embodied within the PROTON are what will set it apart from the market.”

PTZ and Studio Automation

Production budgets and staff are being stretched like never before. In light of the need for more content to support multiplying distribution channels amid the headlines of economic recession, camera robotics comes into play. Innovations in this vibrant product sector range from higher quality sensors to AI face tracking, expanded SDI and NDI support and more.

Sony’s NAB launch is a PTZ with a 20x optical zoom and AI-driven auto framing primed for sports coverage. The BRC-AM7 is an integrated-lens PTZ remote camera equipped with a 1.0-type image sensor and compatible with 4K 60p; Sony claims it is the smallest and lightest camera of its type in the world. It can also record at up to 4K 120p, another boost for dynamic sports action.


The $1,999 KY-PZ540 PTZ series from JVC comprises the company’s first PTZ cameras to incorporate a 40x zoom. The 4K imagers also feature JVC’s Variable Scan Mapping technology, which scans the sensor to produce a lossless image transition up to 40x in full-resolution HD. The cameras are intended for large event spaces and instances when the need to zoom in from a distance is essential, and they support NDI network connectivity.

“We already have an award-winning PTZ product – so increasing the zoom magnification while keeping the unit affordable made it possible for us to accommodate the needs of a larger segment of customers,” said Joe D’Amico, VP of JVC Professional Video.

Ikegami’s new UHL-43 4K UHD is a compact box-style camera designed for robotic studios, live-event broadcasting and point-of-view image capture. The camera head can be used on practically any support device such as a remote pan and tilt, long-reach arm or overhead mount. An Ethernet interface allows control from practically any distance, making the camera ideal for remotely supervised field production.

Ross Video’s Artimo addresses some common challenges faced with traditional studio camera movement solutions such as pedestals, dollies, and jibs. According to Ross, it offers quiet, fast, programmable moving shots without the limitations of fixed rails, markers, or the need for perfectly smooth studio surfaces. The increasing use of LEDs and the need to have more on-air movement “to engage and entertain the audience” makes a studio robotics solution like this necessary, says Ross. It comes with geofencing and LiDAR so it can be programmed for uninterrupted operation, maneuvering around obstacles with precision.


Phones and Drones

Atomos has released the Ninja Phone, a 10-bit video co-processor for smartphones and tablets that lets you record from professional HDMI cameras. The $399 Ninja Phone is designed for the iPhone 15 Pro and iPhone 15 Pro Max and uses the phone’s OLED display and Apple ProRes encoding to create “the world’s most beautiful, portable, and connected professional monitor-recorder.”

It is also the first time Ninja users will have access to an OLED monitor screen, “which, at 446 PPI, is by far the highest resolution, most capable HDR monitor that’s ever been available to them,” said Atomos CEO and co-founder Jeromy Young.

It is intended for use with many newer, smaller format mirrorless cameras such as Fujifilm’s X100 and G series, Canon’s R5 series, Sony’s Alpha series, Nikon’s Z series and Panasonic’s GH and S series.

Young added: “Ninja Phone is for the thousands of content creators who capture, store, and share video from their iPhone 15 Pro but aspire to work with professional cameras, lenses, and microphones. At the same time, the Ninja Phone is a perfect tool for longer-form professionals who want to adopt a cloud workflow without a complex and expensive technology footprint.”

China’s DJI, perhaps better known as a maker of prosumer drones, has released new versions of its camera stabilizer platform, the DJI RS 4.

The DJI RS 4 costs $869 and is capable of carrying up to 3kg (6.6lbs) of mirrorless camera and lens combinations for comfortable handling and robust power. A redesigned gimbal horizontal plate enables smoother transitions to vertical shooting.

The DJI RS 4 Pro retails for $1,099 and can carry 4.5kg (10lbs). It features an extended battery runtime of up to 29 hours provided the DJI RS BG70 Battery Grip is used, and incorporates the firm’s proprietary LiDAR autofocus system to offer cinematographers precise autofocus and enhanced control in dynamic shooting scenarios.

DJI is also making the LiDAR system independent of its own systems in the new DJI Focus Pro Automated Manual Focus (AMF) lens control system.

With a 70-degree focus field of view, 76,800 ranging points and a refresh rate of 30 Hz, the upgraded LiDAR “empowers cinematographers with intuitive spatial understanding by using LiDAR waveform as focus assistance, enhancing their ability to capture scenes with precision.”

According to the company, the move marks a “significant leap forward” in providing LiDAR technology, once exclusive to the DJI PRO ecosystem, to more creators.

 


NBCUniversal hopes advanced broadcast TV launch will ignite NextGen TV

Stream TV Insider

Broadcast TV viewers in some U.S. markets will now be able to enjoy a degree of personalization and rewind functionality on NBC channels via NextGen TV, but the launch announced Monday also highlights the lagging rollout of ATSC 3.0.

article here

ATSC 3.0 or NextGen TV combines broadcast with IP to potentially transform linear TV from one-way mass-market to two-way personalized interactive experiences. It is available to 75% of Nielsen households in the United States, according to ATSC.

The intent of the whole NextGen TV project is to put over-the-air (OTA) broadcasters on a par with online streaming TV competition and boost the prospects of local stations.

Now, six NBCU-owned stations have become the first in the nation to deliver some of these benefits. Viewers can rewind or pause NBC and Telemundo channels in real-time. The channels will also show localized news or weather pop-ups, provide targeted emergency alerts, and offer viewers a selection of on-demand content.

Created in partnership with Fincons, Ease Live (an Evertz company) and Pearl TV, the new functionality includes the ability for viewers to restart programs including the TODAY show.

Pearl TV is a consortium of partners representing major U.S. broadcasters, including Google, which develops the RUN3TV web platform on which broadcasters can build hybrid TV services.

The return path from ATSC 3.0 tuners enables audience metrics to be used to inform personalized editorial and, in theory, more relevant advertising. Fincons is on board to provide dynamic ad insertion, content promotion and monetization, and real-time audience monitoring.

The four markets involved in this launch are New York (WNBC, WNJU), Los Angeles (KNBC), Philadelphia (WCAU), and Miami (WTVJ, WSCV), with NBCU promising to deploy the application in “major media markets across the nation”.

NBCU also claims this represents “a significant milestone in realizing NextGen TV’s full potential” but some commentators have called it “a clear sign of NextGen TV's obscurity.”

That’s because NextGen TV has been heralded as a game changer for terrestrial TV broadcast since its launch in 2017, but investment and implementation have been slow.

One reason is that the FCC didn’t mandate the switch, as it did for the transition from analog NTSC to ATSC 1.0. Another is that the capability within the standard to broadcast 4K has not been taken up; there are no regular UHD broadcasts in the country. This is compounded by the issue of bandwidth. TV stations are only allocated a certain amount of a limited spectrum to broadcast their channels, forcing them to trade off multiple HD channels against a single UHD (4K) one. Additional bandwidth limitations due to how NextGen is rolling out further restrict how much can be used.
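
The arithmetic of that trade-off is easy to sketch with illustrative numbers; the bitrates below are assumptions for the sake of the sum, not measured figures, and real allocations vary by codec, content and each station’s share of the multiplex.

    mux_mbps = 25.0  # assumed usable ATSC 3.0 payload in a 6 MHz channel
    hd_mbps = 3.5    # assumed HEVC 1080p service
    uhd_mbps = 15.0  # assumed HEVC 2160p service

    print(int(mux_mbps // hd_mbps), "HD services fit in the multiplex, or")
    print(int(mux_mbps // uhd_mbps), "UHD service, leaving",
          round(mux_mbps - uhd_mbps, 1), "Mbps for everything else")

On those assumptions a station can carry seven HD services, or one UHD service plus change, which goes some way to explaining the reluctance to spend the whole multiplex on 4K.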

A more pressing issue is that the encryption (DRM) being added to broadcasts by stations is incompatible with many TVs and tuners. The National Association of Broadcasters (NAB) went so far as to issue a public letter to the FCC a year ago saying the transition to NextGen was "in peril" and urging the FCC to take action.

The NAB wrote, "The single biggest factor in the success of this transition is almost completely out of our control - it is up to the consumer electronics industry to build the devices that consumers will use to access our signals. By signalling support for ATSC 3.0 as the future of broadcasting, the Commission can help ensure these devices get built and marketed. In contrast, a lack of support will slow the pace of deployment, and eventually, we may be stuck."

NBCU is at the NAB Show in Las Vegas this week demonstrating its new ATSC 3.0 capabilities, while ATSC itself was touting more than 40 companies showcasing how ATSC 3.0 standards can be applied. It said more than 100 NextGen TV products would be available to consumers this year.

This includes HDR by Technicolor with SDR backwards compatibility and interactive music video broadcast channels powered by ROXi.

ATSC 3.0 broadcasts are being tested in India, Canada, Brazil and Mexico, while South Korea, Jamaica, and Trinidad and Tobago are on-air or planning near-term launches.

Behind the Scenes: 3 Body Problem

IBC

The cinematographers behind the Netflix sci-fi series explain why the sun was such an important element in the show’s VR game world and how they played the story’s fictional science straight down the line.

article here

Attempting to do for science fiction what Game of Thrones did for fantasy, Netflix series 3 Body Problem mixes a large cast and a constellation of locations with shifting timelines, astrophysics and alien invaders.

Game of Thrones’ showrunners David Benioff and D.B. Weiss were reportedly handed $20m an episode – bigger than GoT’s budget – to adapt the sci-fi novels of Chinese writer Liu Cixin into eight parts of what is hoped will be the first of several seasons.

Like events in Westeros, the arc of the story is one of impending threat centred around a core group of characters, but there the similarities end. It goes from the domestic to the cosmic, from the known to the unknown, and from the stable to the unstable — all in a flash.

“We swing from extreme normality to extreme abnormality,” says Richard Donnelly ISC, who was cinematographer on the first two episodes along with fellow DP Jonathan Freeman ASC. “We have multiple locations, multiple worlds but the main drama takes place in very real everyday life. It was very important to give those everyday scenes a big sense of normality so that once you left and went into the VR game you would really feel the extreme abnormality of aliens travelling from four light years away to invade Earth in 400 years’ time.”

The first scenes are set in 1960s China during the Cultural Revolution and filmed to evoke the colours of the period. Later in episode one, we see a Chinese military facility, ‘Red Coast Base’. The location for this was found near Cáceres in western Spain, at the site of an actual military facility on top of a mountain ridge. Part of the workload for set decorator Andrew McCarthy included stripping out components of WWII aircraft, tanks, and other machinery to lend an era-appropriate and practical feeling to the base.

“The giant radar dish was added in post but it was very important to have this feeling of isolation when viewed from the top. When Ye Wenjie (played by Zine Tseng) looks up she sees a huge area of the forbidden zone and when she looks up at the mountain it looks quite impenetrable and private,” says Donnelly.

It was shot on a hot spring day so Donnelly changed the colour temp in the camera to give the feeling of a harsh winter with cold blues.

“We timed the whole scene in which Ye arrives at the facility so that all the characters were backlit. When she is taken out of the truck and walked up to the entrance she squints and looks up at the radar and gets hit with the sun. I liked the end result because we worked with the weather and it was quite convincing.”

Playing with light

The aliens manifest themselves to humanity in several ways including being able to switch on and off the very universe, placing a digital countdown in someone’s vision and by way of a hyper-real virtual reality game.

The destructive three-star solar system that the VR game recreates for its players meant that lighting was of special significance throughout production.

Freeman, who was lead photographer on the first two episodes, created complex and specific lighting panels to define, differentiate, and manipulate the lighting in the VR shots.

On a stage at Shepperton, he devised a 180-degree wall of LED SkyPanels, hidden behind a scrim, and manipulated lighting changes that play in the story.

“The most critical part of the VR game was the story of light and how even the motions of the characters are influenced by the phases of the sun,” Freeman says. “Most of the imagery was created virtually however the key human element would be our actors performing and reacting to light. We would be on close-up shots of their faces responding to light so we needed those transitions – to mimic sunrise and sunset – to feel as real as possible.”

They could have used a Virtual Production Volume, where the background plates are used for interactive lighting, but one challenge was that they couldn’t find one big enough or cheap enough.

“We have a scene with a hundred naked extras running from one side of the stage to the other which required a very large space; plus we had others walking, talking and travelling. So the space had to be large. In addition, we wanted to use real elements of wind and snow that can corrupt any high-tech Volume stage. Since we didn’t necessarily need high-resolution background images to create interactive light we used hundreds of SkyPanels as a screen array. It is in effect a low-res Volume, still expensive to build but less so than a conventional Volume.”

Operators were able to precisely control light movement and colour on the wall and program lighting differences specifically for the different skin tones of each actor.

“The most complex aspect of it was to create real hard sunlight but we could get away with that here because of the artificiality or heightened reality of it being a VR game world. Also, we are dealing in worlds with multiple suns and on a larger scale than that of Earth’s relationship to the sun so we could be more forgiving.”

DP PJ Dillon, who shot episode 3, put together an LED platform on the floor for when the alien intelligence, the Sophon, walks on lava.

Enter the Sophon

The novels merge science fiction with heavy, practical science, and DP Martin Ahlgren ASC dived into research for his block of episodes 4-6 with director Minkie Spiro.

“The question for everyone was how to do an alien invasion series in a way that is more scientifically rooted than a lot of film versions of this genre,” he says. “That leant itself to a more naturalistic approach.”

Ahlgren was tasked with a scene explaining the background of the aliens (called the San-Ti) and their interest in invading Earth. We learn that the San-Ti are light years ahead of the human race in terms of technology and have created Sophons, quantum devices that allow the aliens to keep track of humans in real time. This scene takes place in the VR game and features a female human embodiment of the Sophon.

“Minkie and I spent a lot of time working out how to tell this story with the show’s science advisor, physicist Dr Matt Kenzi. In some ways I felt I spent as much time researching the science and how to visually tell the story as anything related to cinematography.”

He experimented with a GoPro Max 360 camera mounted on a boom pole as a storytelling device to move between the VR world and flashbacks to previous episodes while the Sophon explains mysteries like why the stars in the night sky seemed to blink.

They swapped the GoPro for an Insta360 Pro which captured feeds from eight high-resolution cameras, stitched together in post. “It also had a very decent close focus so you could even do transitions with close-ups on someone’s eye,” says Ahlgren.

It was important to play the sci-fi straight down the line in order for an audience to believe in the story’s fictional science.

“We are stretching the limits of what is possible according to physics but we also have to leave it a bit fuzzy, or the logic falls apart,” he says.

VR world building

The VR headset is gold mirrored and looks as if it’s made all of one metal. It comes presented in a luxury white box with the name of the player embossed on the top. Production designer Deborah Riley calls it a “hero prop” that set the tone for the virtual reality worlds, which were a mix of VFX and physical sets.

The first of the VR worlds that appears in episode 1 is of a Chinese-style pyramid referenced from the Shang Dynasty. The gothic architecture of Wells Cathedral was used as a template for the VFX build of late 16th century Italy and an audience with the Pope. One of the biggest builds of the season was the observation deck of Xanadu’s pleasure dome in another VR world.

“It was meant to be of the era of Kublai Khan [from Coleridge’s poem],” Riley explains. “We were looking at Mongolian architecture but there was very little by way of research that points particularly to timber construction in Mongolia. The setting was originally scripted as an observation deck, but because that caused VFX far too many difficulties and turned every shot into a VFX shot, I had to turn the deck into a balcony. Adding various levels to that design meant that the structure was able to hug our actors in a way and give us a backing, so that the camera could look into them and make sure that our actors could still look out onto vast Mongol armies.”

VFX producer Steve Kullback led a team of vendors including Scanline VFX, BUF and Cadence Effects, delivering extensive VFX work that included a photoreal Panama Canal through which an oil tanker (also part VFX) passes in ep. 5, as well as its destruction by ‘nano-fibre’ threads that leave it looking like it has been through a meat grinder.

The physical special effects team played a role as well, smashing numerous sheets of plate glass to get just the right look when the Sophon puts one character’s head through a window.

For the prosthetics team, creating the flattened, dehydrated body of the character Follower was one of the most challenging tasks across the show. They made two versions and also created another 80 dehydrated bodies.

Eagle-eyed viewers might be able to spot the 3 Body Problem logo hidden in the visuals throughout the show — on the handles of weapons, woven into the fabric and tassels of costumes, carved in stones, in the architecture, and elsewhere.

Monday 15 April 2024

Why These Movies Are Being Captured With Very (Very!) Unexpected Cameras

NAB

Two decades of digital camera technology and software packages have enabled greater access for more people to the possibility of telling stories on film. The contention now is that such prosumer, even consumer, gear is of such high quality that even A-list filmmakers are using it.

article here

The latest talking point is the extensive use of the $5,000-$6,000 DJI Ronin 4D, an integrated camera, lens and four-axis stabilized gimbal, to shoot $50 million sci-fi action thriller Civil War.

The inexpensive price of the camera was not the reason director Alex Garland wanted to use it. As he explained to Ben Travis at Empire Online, “It self-stabilizes, to a level that you control — from silky-smooth to verité shaky-cam. To me, that is revolutionary in the same way that Steadicam was once revolutionary. It’s a beautiful tool. Not right for every movie, but uniquely right for some.”

It enabled DP Rob Hardy to shoot and move the camera quickly without using dollies or tracks and yet without it feeling too handheld.

Instead, the DJI Ronin 4D offered a distinctly human perspective. It was, notes Garland, “the final part of the filmmaking puzzle — because the small size and self-stabilization means that the camera behaves weirdly like the human head. It sees ‘like’ us. That gave Rob and I the ability to capture action, combat, and drama in a way that, when needed, gave an extra quality of being there.”

Gareth Edwards’ $80 million sci-fi feature The Creator was shot on the $4,000 Sony FX3 by Oren Soffer, guided by Dune’s Oscar-winning cinematographer Greig Fraser, for reasons of compactness and low-light capability.

While neither camera is certified by IMAX as an IMAX camera, both The Creator and Civil War were presented on IMAX screens because they used IMAX post-production tools and a sound design suitable for the giant format. Neither film might look quite as good as Dune: Part Two – which was shot on IMAX-certified ARRI Alexas – but the quality gets close.

And this is the contention of Jake Ratcliffe, technical marketing manager at camera rental house CVP. The gap in image quality between low and high-end cameras is closing, he argues, and the compromises you would previously have had to make with cheaper cameras are diminishing.

With image quality less of a differentiating factor, filmmakers have more and more choice over the tool for the job. RED originally designed the smaller and relatively cheap Komodo as a crash camera, but its light weight, small form factor and image quality matching its bigger brother V-Raptor have seen it increasingly used on productions like Amazon Prime’s Road House.

Ratcliffe thinks these stories are showing that the process of filmmaking is changing. “The democratization of filmmaking equipment is going to allow more and more people to tell the story in a more engaging way than what would have been possible in the past. I think the industry will go a step further in this regard with Unreal Engine in the future too.”

Has camera technology using glass optics and digital sensors reached its natural peak?


David Fincher and Erik Messerschmidt ASC target V-RAPTOR to shoot The Killer

interview and copy written for RED 

In David Fincher’s Netflix darkly comic thriller The Killer, Michael Fassbender is the nameless assassin who goes on an international hunt for revenge while insisting to himself that it isn’t personal.

article here

The film marks the second Fincher-directed feature shot by Erik Messerschmidt ASC, following the Citizen Kane drama Mank, for which he won the 2020 Academy Award for Best Cinematography.

It is also the latest in a long line of Fincher movies since The Social Network to be shot on RED.

“There was not a conversation about using another camera system - there never is with David,” Messerschmidt says. “RED as a partner have been enormously collaborative with us in terms of helping us develop new ideas and solve problems. RED is absolutely creative partners to David’s process and certainly to me.”

Establishing the film’s visual code

The filmmakers play with the rhythm of having the audience watching the contract killer at work or conceptually being inside his head and seeing exactly what he’s looking at. This evolved into a coded set of visuals to convey the objective and subjective sides to the story.

“On Mindhunter and certainly on Mank the visual grammar was very much about putting the audience in the position of observer,” Messerschmidt explains. “They are not participating in the conversation, they are right behind someone, over the shoulder.

“David didn’t want to do that on this movie. He wanted to put the audience in the position of watching this person go through their process, very objectively, until the moment where the Killer sees something and then we are subjected to his point of view, front and center with him. In the beginning we place the camera just off his eyeline, seeing his point of view, starting off with very long Fujinon Premiere PL zooms. That’s established early on, and we use that throughout the entire film.”

A later dialogue scene in an upmarket restaurant where the Killer meets his intended victim (played by Tilda Swinton) would normally have been shot by Fincher and Messerschmidt with a wide, a shot over the shoulder, then the reverse.

“We didn’t do that on this movie. It was very intentionally about subjectivity and what is going on dramatically in the scene. We bring the audience into a space where no one ever gets closer than the character would allow anyone to get to him. In those situations, we have 29mm and 25mm Leitz Summilux-C primes.”

Fincher works with V-RAPTOR

Fincher has been a staunch user of RED’s camera systems over the years, and this continues with The Killer which marked his first use of the RED V-RAPTOR 8K VV.

“When we were doing the initial director’s scouting on The Killer, I was sent a prototype V-RAPTOR. We had shot Mank on the HELIUM 8K S35 [monochrome] and that would have been fine on this film but there were certain things that I wanted in particular for The Killer. For instance, I wanted greater spectral performance in low light. We also wanted to shoot Cinemascope 2.35, to capture the killer and his prey together in the frame.”

“V-RAPTOR was the answer. It is extremely high resolution with fantastic spectral sensitivity. It maintains saturation all the way into the deep end of the exposure - which is a systemic problem with digital cameras - and it allowed us to shoot in 7K with our Super 35 lenses so we could perform an enormous amount of post destabilization for all our handheld work.”

A fight scene in a Florida house involving the Killer and rival assassin ‘The Brute’ was meticulously “art directed” by Messerschmidt to add more shakiness to the handheld camera work.

He explains, “Brian Wells is just too good an operator, so we needed to screw the work up a little bit. We would cover the shot in 5.5K which translates to 24mm wide on the sensor when you put on the S35 lens. Then we’d put the camera in 6K or 7K mode depending on how much bumper we wanted outside of the frame to allow us lots of room for stabilization or destabilization in postproduction according to the pitch of the scene.”

“The V-RAPTOR is uniquely suited to that type of work. It is something we couldn’t have done on another camera, because there just isn’t that sensor room, unless we were to have a substantially smaller shooting size, which was not what we wanted to do. It’s truly an amazing storytelling tool.”
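
The headroom arithmetic behind that choice can be sketched using RED’s nominal 1K = 1024 px width convention; these are rounded illustrative figures, not exact recording dimensions for any particular mode.

    # Nominal widths under the 1K = 1024 px convention (assumption).
    framed_w = 5.5 * 1024  # the 5.5K coverage actually framed: 5632 px

    for capture_k in (6, 7):
        capture_w = capture_k * 1024
        per_side = (capture_w - framed_w) / 2             # bumper, px per side
        overscan = (capture_w - framed_w) / framed_w * 100
        print(f"{capture_k}K capture: {per_side:.0f} px per side "
              f"({overscan:.0f}% overscan) for (de)stabilization")

That works out to roughly 9% of overscan in 6K mode and 27% in 7K mode: the “bumper” dialled up or down according to the pitch of the scene.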

It took a week to stage the multiple set-ups for the Brute fight scene. “Most of it was shot two camera although we put three on the more dangerous stunts such as when they break a table. With David it’s almost exclusively a two-camera shoot. We sometimes tease the idea of doing a single camera set up, but inevitably it will end up as two cameras.”

RED KOMODO was also deployed, notably to record plates in Paris for the opening sequence of the view from the killer’s nest.

“I adore the KOMODO. I’ve shot a lot with it. It’s great because the color processing and science is identical to the V-RAPTOR, so it blends beautifully. In Paris when the Killer is staking out the apartment across the street, all the plates were shot from a six-storey window. We had nine cameras stuffed in this tiny window so KOMODO was really helpful because we could get those lenses extremely close to each other. Also, the camera’s global shutter was helpful in creating sharp, solid images for car plates in this film.”

Designing the look

Although the composition for The Killer owes something to its French graphic novel source material, ‘Le Tueur’, the camerawork is precisely framed and controlled to mirror the confidence and movement of Fassbender’s performance. None of it was storyboarded in advance.

“We don’t storyboard unless there’s something that we need to specifically articulate to a large group of people, such as trying to explain which streets are locked off for a car chase,” the DP elaborates. “When I work with David our approach is more nuts-and-bolts. It might be that I’d cover the scene with a 29mm lens, the actor will come in right to left, maybe sit at a table and we’ll get overs, CUs and POVs. That is usually our workflow, and it was the same on The Killer. The scenes aren't particularly complex in terms of coverage the way they were in Mank.

“David is also really generous with actors and allows them to run the scene and see where it goes. There’s plenty more flexibility to change the way a scene is built out than people realise.”

The rest of the film’s style and much of the tone and palette emerged during location recces with the bulk of interiors shot on stage in Louisiana.

“Even though the film is stylistic our entire approach came from a place of naturalism and available light. We don’t actually use a lot of artificial light. This came out of our experience with David and [production designer] Donald Graham Burt looking at how the locations were going to work together. David was clear that the audience should experience each one as a discrete and different environment.”

They subdivided the picture into three looks, seeking a humidity for the Dominican Republic and a cooler look for Paris. “For me, Paris always looks blue, even in summer,” he says. “Perhaps it is those sodium-vapor streetlamps. To me, a film’s color is informed by so many different parties, from costume and production design to make-up. The DP’s participation is ultimately the icing on the cake. That’s not to say we are not actively involved in the conversation, but I rebel against the idea that it’s exclusively my jurisdiction.”

HDR workflow

Beginning with Mindhunter, Messerschmidt has developed an on-set HDR workflow and evolved it on every movie he has shot since, in collaboration with post supervisor and colorist Eric Weidt.

“I have been a proponent of that idea especially if you are going to finish in HDR. It’s a DIT-less workflow where we build the LUTs in advance. For The Killer we built three LUTs depending on location [Paris, DR and New Orleans/Chicago]. They don’t touch the gamma or contrast at all, but they do adjust color hue. Those LUTs were loaded into the REDs and the images given an HDR transform for on-set monitoring so when Eric does the initial grade, he fires up those LUTs as a baseline for us to start from. We monitored in HDR on-set with Sony 17-inch monitors and had HDR dailies – editorial had HDR as well – in DCI-P3 and Dolby PQ Gamma.

“The dailies are made essentially as a one light print. You apply the LUT and process the footage and there is no grading done at all. It’s very similar to film actually.”
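
What a hue-only LUT means in practice can be pictured with a toy example; this is a generic sketch, not the LUTs Weidt actually built. Every RGB node of a 3D LUT grid is rotated in hue while its saturation and value components are left alone, so contrast and gamma pass through unchanged, and the grid is written out as a standard .cube file.

    import colorsys

    SIZE = 33          # common 3D LUT grid size
    HUE_SHIFT = 0.02   # small rotation, as a fraction of the hue wheel

    lines = [f"LUT_3D_SIZE {SIZE}"]
    # .cube node order: red varies fastest, then green, then blue.
    for b in range(SIZE):
        for g in range(SIZE):
            for r in range(SIZE):
                h, s, v = colorsys.rgb_to_hsv(
                    r / (SIZE - 1), g / (SIZE - 1), b / (SIZE - 1))
                # Rotate hue only; s and v are untouched, so the tone
                # curve (gamma/contrast) is preserved.
                r2, g2, b2 = colorsys.hsv_to_rgb((h + HUE_SHIFT) % 1.0, s, v)
                lines.append(f"{r2:.6f} {g2:.6f} {b2:.6f}")

    with open("hue_only.cube", "w") as f:
        f.write("\n".join(lines) + "\n")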

For all the kinetic and precise action of The Killer, much of the film shows the anti-hero watching and waiting. Messerschmidt says that while he is proud of the fight scene, his favourite shot is one of these silent and still moments.

“I love the car interiors when he is just sitting there, such as the one outside the Expert’s [Tilda Swinton] house. His breath is condensing on the window, and I love that.”

 

NEP battles weather to keep the going good for Grand National coverage

SVG Europe

The 177th running of the Grand National is NEP’s seventh with ITV Racing, and this year the team has had considerable weather challenges during the build, with high winds, wet weather and soft ground to contend with.

article here

Regardless, the NEP technical teams got to work, with three days to build the multiple commentary and camera positions across the four-mile, 514-yard race course.

“During these builds there are always logistical challenges to deal with, as the race courses are preparing for their flagship event,” explains Jon Harris, technical projects manager at NEP. “For example, moving technical kit to remote places such as the famous ‘Canal Turn’ can be challenging and time consuming, as it’s the furthest distance from the OB trucks.”

Coverage of the racing action is pulled together from more than 50 camera sources plus 19 radio links including shared camera feeds from RaceTech (horse racing’s integrity coverage supplier), in addition to the multicamera presentation studio, on-course interviews and reports which take place during the show. NEP Connect is also supporting with the live connectivity.

“Every year, we continue to refine our setup and rig,” Harris says. “Our years of experience in delivering this complex event, combined with our continuity in our crewing, means we have the top specialists when it comes to horse racing broadcast."

Throughout the calendar, NEP provides NEP Equinox as the main HD baseband broadcast truck for ITV Racing, and many of the specific racing requirements can remain set up within the truck, meaning more efficient build and rigging times.

“For events such as the Grand National we are able to scale the operation to meet specific technical requirements, meaning we retain the same work space for the production team and still have the benefit of having the core setup installed in the truck,” Harris says.

By and large the camera set-up and presentation is the same as 2023. This includes a pair of depth of field cameras (used extensively on earlier Six Nations coverage) for “lovely emotion shots of winning jockeys and crowd,” says ITV Sport programme director Paul McNamara.

The experiment last year putting augmented reality (AR) graphics onto drone footage proved less successful. “It took such a long time for us to get it into the pre-record so we’re not doing that this time,” explains Tony Cahalane, technical director, ITV Sport.

However, ITV Sport is deploying AR graphics more extensively from a feed captured by the main wire-cam that runs 180 metres across the course. This Comcat Colibri wire-cam is supplied by specialist camera partner ACS with broadcast graphics partner Alston Elliot (AE) producing the graphics as it does across the show’s whole output.

Two tracking vehicles, also supplied by ACS, keep pace with the riders travelling at 40mph, with a Cineflex on a jib and a Sony F55 for 3x high motion. Another super-slo is positioned on the finish line.

Four minicams are built into fences. At Fence One, for example, a minicam in the middle of the fence shoots towards the grandstand, capturing the runners.
“It’s a really impressive shot with the runners all coming towards the fence and jumping it,” says Harris.

With all the current industry excitement about whether the promise of 5G network slicing might finally be fulfilled, the technology has been on ITV’s radar for some time, but it still might not be appropriate for a large race meet like the Grand National.

Explains Cahalane: “Effectively there are two reasons why it doesn’t help us. At Aintree we really need a guaranteed 5G network and that doesn’t happen without cost, and the cost is quite extreme. Added to which, at this moment there isn’t enough 5G infrastructure in the immediate area. We would need more towers.

“But it’s also a question of control. Radio frequencies are in our control completely whereas that’s not necessarily the case with a 5G network where you are beholden to a telecom provider. The Grand National is one of those situations where we know what we need to do in order to get the best pictures and so we want to maintain RF spectrum to be absolutely in control.”

News coverage is a different matter however. “I do understand 5G’s advantage for pop-up news stories especially in places which aren’t necessarily populated with a hundred thousand fans who are all on their mobile phones,” Cahalane says.

“5G definitely has uses but not yet in large-scale events like the Grand National. It’s not only cost prohibitive, it’s easier for us to do it with digital radio camera technology.”

The average viewing figure for last year’s Grand National programme, aired on ITV’s main channel, was 4.4 million, up from 4.1 million in 2022. The team is hoping to have matched or exceeded that this time around.

The Grand National was won this year by ‘I Am Maximus’, the 7/1 joint favourite.