Friday, 28 April 2023

Shopify decentralizes creative workflows and secures global video production on LucidLink

copy written for LucidLink

Shopify is a leading provider of essential internet infrastructure for e-commerce, offering trusted tools to start, grow, market, and manage a retail business of any size. The Shopify platform offers online retailers services, including payments, marketing, shipping, and customer engagement tools. Founded in Ottawa, Shopify powers millions of businesses in more than 175 countries and is trusted by brands such as Allbirds, Gymshark, Heinz, Tupperware, FTD, Netflix, FIGS, and many more. 

article here

Shopify’s creative powerhouse

To maintain the quality of its leading e-commerce brand, Shopify has built an internal creative team handling marketing communications. Most of the content they produce is centered around video. “We are not a creative agency, but we have the in-house skills to film and produce video content addressing an array of communications needs of our staff and clients,” shares Kevin Luttman, Content Management Lead, Shopify. “Our content ranges from corporate messages and executive talks to training and guides on new features.”

IT is the backbone of their media production pipeline, connecting dozens of creatives across multiple teams. They are primarily based in North America but regularly collaborate with people all over the world.

The challenge of adapting to remote work

Like many other businesses, Shopify saw its established ways of working completely disrupted by Covid-19. “Before the pandemic, most company investments were concentrated on the office, and all of our servers were installed on site,” says Luttman. “There was no need for Shopify to build infrastructure supporting remote video work – no one was asking for it.”

“The pandemic flipped our strategy on its head virtually overnight. We had to find a way to continue working but in an entirely decentralized fashion,” adds Luttman.

Shopify faced the seemingly impossible challenge of having multiple distributed editors working on the same project — simultaneously.  

The company needed a robust, secure solution to connect its creative teams collaborating on shared data-rich assets. “Neither Dropbox nor WeTransfer are approved solutions at Shopify. The last thing we wanted to do was to equip our people with RAID systems or massive local storage. Not only would that be a security risk, but also administering and maintaining distributed hardware was not something we were set up to do,” Luttman explains.

Enter LucidLink

Shopify’s creative team first heard about LucidLink during conversations with Adobe. “We were discussing whether Adobe Productions and our primary editing tool, Premiere Pro, would work better using a VPN or sharing a central SAN over the Internet,” Luttman recalls. LucidLink was recommended to Shopify as the only officially sanctioned solution for use with Productions over the Internet.

“We ran LucidLink through internal validation tests using Adobe Productions and almost couldn’t believe it at first. What initially struck us was how easy it was for creatives to use Filespaces. Sharing files was intuitive and rapid. The response from employees was equally enthusiastic.”

Shopify was able to immediately onboard 100 members of their marketing communications team and now has more than 120 people collaborating on LucidLink every day. These include teams of editors, producers, directors of photography (DoPs), production staff, and heads of departments who require visibility over multiple projects. “Everyone can access media assets far more quickly than before, and team collaboration is enabled from anywhere,” says Luttman.

Efficiency supercharged

The most significant benefit of LucidLink was the amount of time it saved Shopify. “Conventionally, if you’re collaborating remotely, you would go through the laborious process of uploading, downloading, and relinking every single file,” Luttman points out.

It meant that one of Shopify’s creatives could finish up their work on a file or project and save it in Premiere. Immediately, another creative on the other side of the world could access it and begin editing.

Shopify didn’t have to quantify the costs as the results in speed and efficiency were clear. “If you are looking for a tool that lets creatives just be creative, then LucidLink is the way to go. Using LucidLink is a no-brainer,” insists Luttman. Everyone wants technology to be invisible and simply work in the background. “Technology shouldn’t require any complicated setup or complex training. It should just work – and LucidLink does just that. Try it! It will forever change how you work remotely,” Kevin Luttman, Shopify.

 


NorthSouth Productions Speeds Post Operations with FileRunner

copy written for Sohonet 

article here


Makers of comedy, lifestyle, and documentary series, NorthSouth Productions is one of the world’s most trusted content creators. They’ve successfully navigated the ever-changing media landscape for over two decades by putting talent and creators first. This approach has yielded long-running series and created household names, including Impractical Jokers, 100 Day Dream Home and Say Yes to the Dress: Atlanta.

Based in New York City and Knoxville, Tenn., and with its non-scripted shows based everywhere from Tampa, Fla. to Milwaukee, Wis., NorthSouth needs to send files in a timely manner to its hub facilities for postproduction multiple times a day. The company’s Vice President of Technology Kim Pratt shares why Sohonet’s FileRunner has become the solution.

“Prior to the pandemic, our principal way of sending large amounts of files back and forth was to put them onto a drive and send the box by FedEx,” says Kim Pratt, VP of Technology, NorthSouth Productions. This was never quite ideal, fraught with potential delays and the risk of losing media en route. As at many companies, the need to work remotely in the early days of Covid-19 brought about a rethink.

“The first thing we did was use Sohonet ClearView Flex as a way for clients to remote view online sessions,” explains Pratt. “That proved highly successful. Soon afterwards we discovered FileRunner, which we now rely on one hundred percent to send files from the field to Knoxville and between editorial teams in our two offices.”

NorthSouth houses its offline editorial in Knoxville. Its roster of editors switched to working remotely from home during the pandemic, linking their Avids into shared storage. The company bases ingest operations for all its shows in Knoxville, which also houses online ProTools sound mix and colour grade, as well as an extensive LTO archive.

“There’s nothing to download or install on your machine to get started, and you don’t have to invest in expensive server installations.”

NorthSouth has maintained a flexible working environment for its editing teams and clients, enabling them to work from anywhere and use FileRunner to connect and share the media.

“The courier service was intended to be overnight but sometimes that was delayed which pushed us off our deadlines. Also, if the shoot overran then we might miss the courier window. None of that applies when we use FileRunner to share files of unlimited size anytime we like, securely over the internet.

“All of our shows use FileRunner now,” adds Pratt. “It is definitively cheaper, far more secure and a lot quicker. We are able to quantify that since we can compare our current operational costs and efficiencies with earlier seasons of shows like 100 Day Dream Home and Lil Jon Wants To Do What?”

Continuing with their talent-first approach, NorthSouth has new projects in the works with Food Network and Investigation Discovery and is starting 2023 with a robust slate of new formats.

 

Thursday, 27 April 2023

How new laser projection technology delivers huge energy savings for cinemas

Screen Daily

Twelve years on from the big transition from 35mm to digital cinema, projection is undergoing another overhaul — this time to replace high-pressure mercury and xenon lamps with laser light illumination. Not only does the newer technology deliver superior picture quality (colour reproduction, higher contrast ratios, consistent illumination), but it also brings significant energy savings for cinema owners.

article here

“Laser is a natural transition in projection technology as it offers several benefits to cinema owners, including potentially huge cost savings,” says Phil Lord, manager at cinema technology company Christie Digital Systems.

Laser projection was first introduced in 2014 with systems costing around $327,000 (€300,000). Sales were sluggish, with exhibitors content to wait for the end of life of their current systems before making the upgrade. Now soaring electricity costs and industry-wide attention on sustainability have refocused buyers’ minds. Around 13% of the 200,000 cinema screens worldwide are equipped with laser, but with lamp-based products contributing less than 20% of new projectors sold, and with the costs of laser projectors falling to $38,000 (€35,000), the number is expected to tick upward at pace.

“Laser is a key enabling technology contributing to the wider sustainability of the whole industry,” says Carl Rijsbrack, chief marketing officer and head of innovation at projector manufacturer Cinionic.

One environmental and financial gain is that lamps no longer need to be replaced. Xenon bulbs typically last 500-1,000 hours before burning out. Lasers can power light for 50,000 hours before eventually fading below industry-benchmark specifications, provided the system is properly operated and maintained. “This means no lamp delivery and no lamp disposal,” says Lord. “It also means an engineer doesn’t have to visit the cinema to change lamps.”

Laser is more energy efficient compared to traditional lamp-based technology. It generates less heat and does not require external cooling or ventilation systems, reducing energy consumption further. At this year’s UK Cinema Association conference, Mark Williams, director of WTW-Scott Cinemas, which operates in southwest England, demonstrated that in illuminating similar-sized screens, laser used 70% less power than lamps. That is the same figure Cinionic claims theatres powered by its laser projectors can save over a product lifetime.

Investment return

Further data from supplier Sharp NEC suggests that, based on current market costs for energy, it would take 30 months to recoup the investment in a laser projector installed in a standard small screen. The period is less than five years for larger screens.

Manufacturers offer online ‘ready reckoners’ that let exhibitors input screen count, screen dimensions and current electricity costs to compare approximate operating costs between legacy projectors and rival laser products. “The arguments are compelling but a main issue for exhibitors is finding the upfront finance,” says Mark Kendall, business development manager at Sharp NEC.
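To make the reckoner idea concrete, here is a hypothetical sketch of the calculation such tools perform. It is not any vendor’s actual calculator: the 70% power saving and the $38,000 laser price are quoted in this article, while the duty cycle, electricity price, xenon draw and bulb costs are invented placeholders.

```python
# Hypothetical ready-reckoner sketch (not any vendor's real tool).
# Only the 70% power saving and $38,000 laser price come from the
# article; every other number is a placeholder assumption.

HOURS_PER_YEAR = 12 * 365         # assume the screen runs ~12 hours a day
ELECTRICITY_PRICE = 0.30          # $/kWh, placeholder

def annual_energy_cost(power_kw: float) -> float:
    """Energy cost of one projector over the assumed duty cycle."""
    return power_kw * HOURS_PER_YEAR * ELECTRICITY_PRICE

xenon_kw = 4.0                    # placeholder draw for a lamp-based system
laser_kw = xenon_kw * (1 - 0.70)  # the 70% saving cited above

energy_saving = annual_energy_cost(xenon_kw) - annual_energy_cost(laser_kw)
bulb_saving = 2 * 1_500           # placeholder: two xenon bulb replacements avoided per year

laser_price = 38_000              # entry price cited above
payback_years = laser_price / (energy_saving + bulb_saving)
print(f"Annual saving: ${energy_saving + bulb_saving:,.0f}; payback: {payback_years:.1f} years")
```

With these placeholder inputs the payback works out longer than Sharp NEC’s 30-month figure for small screens; the result swings heavily on duty cycle and electricity price, which is exactly why the online tools ask exhibitors to enter their own numbers.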

A xenon kit is cheaper to purchase. While NEC, Cinionic and fellow supplier Barco have all discontinued xenon production, Christie continues to manufacture three xenon models and has even reinvested in the technology. “There is demand in certain territories for xenon,” says Lord. “Some post-production facilities have been using xenon for years and want to carry on; others are in markets where capex is a big issue.”

Since the guts of the digital light processing (DLP) chip set in any projector are fundamentally the same whether a xenon lamp or laser pushes light through the lens, theatre owners can upgrade their current projectors with a new laser source. This can double the working life of machinery, says Rijsbrack. Cinionic also enables cinema owners to lease its laser equipment.

Costs for the purchase of the laser are subsidised by governments in some territories including Italy, Germany, Denmark and the Netherlands, as part of wider green and energy-saving initiatives. There is no such scheme in the UK.

Christie has developed a laser optical system that, according to the company, further enhances system efficiency. “New laser diodes are much more efficient and more field replaceable,” explains Lord. “An engineer can easily swap out a laser module on site as opposed to having to ship the projector back to a lab.”

Panels of direct-view LEDs are an alternative technology that eliminates projectors and projection screens altogether but, according to Kendall, this is far more power-hungry.

“All the major circuits like Odeon and Cineworld are looking at how much everything costs, from projection to sound systems to the Slush Puppie machine in the foyer,” says Lord.

All contend that, among the technologies in the building, a move to laser projection will bring the greatest environmental and financial saving.

 

How 1,065 unique versions of ‘Avatar: The Way Of Water’ were delivered to cinemas

Screen Daily

Mastering and delivering Avatar: The Way Of Water in multiple formats to cinemas worldwide required vendors to come together and execute the film in more than 1,000 versions. Screen talks to the companies about the groundbreaking effort.

article here

A delivery process that involved 1,065 unique versions of the movie has helped propel 20th Century Studios’ Avatar: The Way Of Water to more than $2.3bn in global box office receipts. It makes James Cameron’s Lightstorm Entertainment production among the most logistically complicated titles ever released.

In order to meet the global release on December 16 last year, Disney created new asset management workflows, developed a mastering process in the cloud and collaborated with suppliers on a scale no studio had previously attempted. 

Kim Beresford, The Walt Disney Studios’ vice president of planning and motion picture operations, explained in February at the Hollywood Professional Association (HPA) Tech Retreat 2023 Supersession in Rancho Mirage, California: “The type of experience that Jon [Landau, producer] and Jim [Cameron] wanted the audience to have was partly about the best 3D version, partly about being able to fill the screen — whatever type of screen is at your local cinema — and partly to get the brightest amount of light onto screen based on what each projector could handle. It was all to have the audience really feel immersed. Those were the guiding principles.”

They started with 27 discrete picture formats to meet the basic specifications of theatres including Imax and Dolby Vision. That quickly multiplied with the addition of audio formats (Dolby Atmos, 5.1, 7.1), each in 51 languages supported with subtitles and 28 languages supported by dubbing. The number doubled again with delivery at 48 frames per second (fps), alongside the required combinations of 2D, 3D and 24fps. There were even different colour grades for conventional digital projection systems depending on their light output. The aspect ratio of individual screens was another key variable.

“The first idea of a plan we had was 3,000 versions,” Beresford revealed. “But when we looked at the potential capability that exhibition might have, it turned out we didn’t need all those. Not every exhibitor can play everything in the way we thought they could, not all markets or versions were required. So we ended at 1,065 full-feature versions.” By contrast, a typical Marvel blockbuster has around 500 versions.
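A toy cross-product of the variables quoted above shows why the count had to be planned downward rather than upward. Assuming, purely for illustration, that every variable could combine freely:

```python
# Toy arithmetic only: most combinations are impossible or unneeded in
# practice, which is why the plan shrank from ~3,000 versions to 1,065.
picture_formats = 27       # Imax, Dolby Vision, etc.
audio_formats = 3          # Dolby Atmos, 5.1, 7.1
languages = 51 + 28        # subtitled plus dubbed
frame_rate_variants = 2    # 24fps and 48fps deliveries

print(picture_formats * audio_formats * languages * frame_rate_variants)
# -> 12798 hypothetical combinations before real-world pruning
```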

“While huge, the 1,065 number might not have presented so much of a challenge, but we were also dealing with a huge increase in data,” says Rich Welsh, senior vice president of innovation at Deluxe, one of three vendors on the project. “The more data you have to move, the more time it will take.”

The project is estimated to have amassed 10 petabytes (10 million gigabytes) of data — more than 10 times a regular tentpole feature.

Time crunch

Disney made the unusual decision to break the 192-minute film into 15 reels of varying lengths to allow the mastering and versioning process to begin before the final full feature was locked. Meanwhile, the time to make and check all the versions was slashed from the industry standard 45 days between picture lock and worldwide release to just 16 days between filmmaker approval and delivery to cinemas in the case of the 15th and final reel.

The reason for the tight window is given as the perfectionism of director Cameron. He and Landau had asked Disney ahead of time if they could lock the picture as close to release as possible.

“We had to manufacture more time,” said Mark Arana, vice president of distribution technology at The Walt Disney Studios, at the HPA event. “Since all the data was at Park Road Post Production in New Zealand, our main operations are on the US west coast and our vendors were mainly in Europe, as was our dubbing facility, we had to turn our operation into a 24/7 support model. Moving to a cloud-based workflow enabled everybody to receive content on time.”

After the data was received from Park Road Post Production in Wellington, the studio had to quickly churn out digital cinema packages (DCPs) for each of the 15 reels and send them to Deluxe, Eikon and Pixelogic for creation of local language versions and 3D subtitles, and for quality control (QC).

While this mastering process was automated and managed by Disney software ADCP (Automated Digital Cinema Package), it was only possible after the technology was scaled up to the task. “For media creation and transformation, we leveraged Sundog — a tool that we needed to evolve to be able to scale,” said Arana. “Partnering with Deluxe [which owns the Sundog technology] was a key part of that.”

The September 2022 re-release of the original Avatar film at 48fps was an opportunity to stress test the Sundog engine before it was scaled up to work on Avatar: The Way Of Water. A traffic-light approval system, QC Assist, was created to manage, track and synchronise assets across all vendors. “If any vendor QCed [quality controlled] a reel and failed the picture, then it failed that picture reel everywhere it was being used,” explains Welsh. “Conversely, if a reel was signed off, then it was cross-correlated across all assets.”

This central quality control register enabled Disney to farm out the project to multiple geographically dispersed facilities and vendors and keep track of it all. “None of this existed before,” says Welsh. “The ability to co-ordinate work reel by reel into final conformed versions globally and across multiple vendors was new.”

The localisation process, however, cannot be done automatically. Experienced talent is required for foreign-language work: recording voice actors for dubs, remixing sound in the new language, translating subtitles and positioning them line by line.

Most films will have their 3D subtitle versions derived from the 2D one, but not here. The filmmakers wanted the 3D experience to be prioritised, which meant inverting the normal workflow.

“This was about micro-placement of subtitles, not just top, middle or bottom, but along the Z [depth] axis,” explains Andy Scade, senior vice president and general manager of digital cinema services at Pixelogic Media. “Lightstorm were signing off on every 3D placement to ensure the 3D was as comfortable for the viewer as possible.”

Sharing workflow

Localisation, 3D subtitling and QC were split between vendors, with each performing similar workflows. “It was an incredibly time-compressed project,” says Jonathan Gardner, chief information officer at Eikon Group. “Usually, you would receive the whole finished film, QC it, then distribute it. Here we were doing QC reel by reel. For every single version, we were doing 15 mini QCs, 15 mini validations, 15 subtitle placements — substantially more than doubling the amount of QC.”

For the 3D subtitle placement, Gardner explains, Eikon would map the translations over the stereo picture. “When you translate the subtitles into, say, Portuguese, the text could be longer than the English version,” he says. “Or in Korean the subtitle needs to be placed down the side of the frame. This impacts the 3D experience, so you need to change the offset to make sure you’re mirroring the director’s creative intent for the shot while ensuring the subtitles are legible.”

As the final QCed DCPs came off the production line, they were packaged per territory and sent to the local distribution vendor, which would route them to cinemas by hard drive or electronic delivery.

“The data overhead was enormous,” says Gardner. “We upgraded all our storage systems and increased bandwidth throughout the building. We rewrote parts of our MAM [media asset management] to auto-generate tens of thousands of work orders, and built integrations into third-party systems for QC management. We had to maintain all these assets in a regimented state across all our infrastructure, and the only way to do so is by software and automation.

“We moved forward leaps and bounds in how we project manage at scale, which is a template we can take forward to other projects,” adds Gardner. “It was great for us operationally and solidified the efficiencies we knew we could achieve.”

Similar efforts were made at Deluxe. “We have worked reel by reel before but never with this complexity and timescale,” says Welsh. 

Aside from QC executed in screening rooms, the rest of the vendors’ work was hosted and performed on workstations using media pulled from the Amazon Web Services (AWS) cloud. Park Road delivered directly into AWS, where all mastering of reels was automated by Disney and Deluxe technology.

“The use of cloud is relatively new in post-production,” says Welsh. “Avatar: The Way Of Water points firmly to the way forward for this type of work. It showed we could do a delivery pipe that was entirely in the cloud with huge advantages of scale. That we just did the biggest release of all time should answer any lingering questions about security.”

The achievement also lays the ground for even more technically ambitious projects that put clear blue water between theatrical experiences and the home. “We could go to higher than 48 frame rates and more immersive experiences,” says Welsh. “The door is now open.”

Because co-ordinating QC and mastering is so complex, studios have traditionally trusted work per title with just one vendor — but not this time. As Disney’s Beresford testified, “The thing I am most proud of is the level of collaboration and innovation that everyone brought to the table.”

 

IABM: Broadcasters Need to Clarify Cloud Economics (and Expectations)

NAB

The Broadcast and Media (B&M) technology market was worth $67 billion in 2021 as the industry continues to rebound from COVID. The market will grow between now and 2026 at 1.6% a year, according to the latest annual survey by the IABM (The International Trade Association for the Broadcast & Media Industry) and research company Caretta Research.

article here

Much of the decline in 2020 and the subsequent rebound in 2021 was driven by the production and post-production services industries as the creation of new content was postponed until after the worst phase of the pandemic.

Many areas of the industry are expected to grow at closer to 3-5% CAGR, such as technology used in the production of content. For instance, the report expects rapid adoption of camera-to-cloud tools despite the lack of a common interchange format between vendors. Remote collaboration and IP intercom systems are prime examples of the “enormous improvements” to efficiency forced into being across the industry by COVID.

PTZ cameras with more sophisticated optics and automation are increasingly used for a vast array of events. PTZs with AI, for instance, are being used to assist in flagging offsides and in referee reviews in sports. Distributing more live feeds from an event is now becoming a requirement, yet “there is no prevailing format, metadata or rights management for this presently,” notes the report.

Most concern in the IABM report is placed around the shift to cloud.

“The words ‘cloud’ or ‘the cloud’ raise many different thoughts, prejudices and different meanings among our peers,” writes Lorenzo Zanni, lead research analyst at IABM. “To some, cloud is playout, others distributed computing, and yet some only think of this as offsite public storage within our industry.”

The vision of running everything in an off-prem cloud — whether public or private — still has limitations, the IABM finds. These limitations are typically overcome by using hybrid solutions, and sometimes even by including dedicated hardware. The trend towards object storage is clear, which means this scalable solution is quickly becoming a commodity. However, a widespread lack of understanding about cloud and cloud economics “has mid-size companies hesitant due to cost models and lack of fully scalable storage across various platforms.”

Uncertainties also remain over the understanding of public cloud security, with the concern being less about hackers stealing content and more about whether users have the know-how to configure security correctly.

Overall, though, whether the cloud is public, private or hybrid, the IABM finds the infrastructure continuing to move away from dedicated hardware to more virtualized edge computing.

Although AI/ML are considered mature for closed captioning, script and data generation, AI is not widely used yet for QC and surveillance of networks. The use of AI for creating short-form advertising is underway. In sports AI is being used for player and ball tracking, off-side calls, and sound mixing.

“AI chatbots must be factored into new workflows especially on the creative side,” says Zanni. “AI video creation platforms are bleeding edge and many are finding this fascinating yet not ready for prime use yet.”

The creation of deepfakes with the assistance of AI is a worry, the IABM says, citing the use of Respeecher by sound editors without first securing permission from talent.

“ML systems leveraging complex language models will continue to improve interactive chat and automated creation of editorial content, but also spam, phishing and other security threats, including the building and dropping of malicious code.”

LED wall virtual production using game engines for volumetric productions is no longer considered bleeding-edge according to this study, with traditional studios quickly adopting XR within their sets and production.

There have been no recent dramatic breakthroughs in image sensors, with momentum towards 8K slowed by the COVID hiatus.

Media companies must learn, understand and respect the video game world, the IABM advises, as this will help them to quickly gain newer viewers, which in turn will give them all-important brand recognition within an interactive space.

Sustainability is starting to move beyond “nice to have” toward becoming a requirement. Companies that are merely greenwashing are being called out, and hard facts are becoming a requirement as carbon footprints are becoming part of the RFP process, along with surcharges, though “there is little balance yet between the costs of sustainability vs. profits.”

 


Wednesday, 26 April 2023

AI Copyright Law Must Also Account for the Creators/Users/Prompters

NAB

Instead of pitting artists and content creators against AI technology developers, the debates about the future of creativity and copyright should be more nuanced, argues attorney Derek Slater, founding partner of Proteus Strategies.

article here

Many artists and content creators are users and beneficiaries of AI tools, and so the way these tools are regulated will impact them, too.

Consider the introduction of the camcorder, the mobile phone, and platforms like YouTube. All were demonized in some quarters as a threat to artistic creation by democratizing access to media, yet all have enriched our culture. Generative AI is no different, Slater writes in an op-ed for Tech Policy Press.

“We see a familiar cycle – new technology democratizes creativity and enables a variety of new types of uses; initially, it’s seen at worst as a threat to art and artists, and, at best, marginal; and over time, it helps foster new forms of creativity and opportunities for creators to find audiences and make money.”

The core copyright concern with generative AI is that many tools are trained on massive datasets that contain copyrighted works, where this training has not been specifically licensed.

Slater contends that by keeping the interests of creator-users in mind, we can better arbitrate between what copyright should allow and prohibit.

“No creator develops their craft in a vacuum. Everyone learns by engaging with past works. You might walk around a museum and read painting manuals to learn how to create your own Surrealist art. Or you might watch classic horror movies in order to create your own take on the genre. Copyright has always permitted this sort of behavior, so long as the resulting creative output doesn’t copy directly from past expression or create something substantially similar to preexisting expressions.”

That doesn’t mean all generative AI tools should necessarily be permissible in every circumstance. Legal scholar Mehtab Khan and AI researcher Alex Hanna, in their more critical take on these tools, note a tougher call would be a system trained on a particular singer’s work in order to specifically generate songs like hers.

While style is not generally protected by copyright, the facts of each case will matter. For Slater, the key question is whether the tools are designed to substitute for particular creative expressions, rather than enabling new expressions and building on pre-existing ideas, genres, and concepts.

Someone can use a general purpose tool like Midjourney to create a work that is substantially similar to an existing copyright work. However, that shouldn’t mean the tool itself is infringing per se, as opposed to the user of the tool.

Slater says, “building on existing legal approaches, liability for the tool will depend on whether and how the tool developer or service provider knows about, contributes to, controls, and financially benefits directly from infringement.”

Addressing concerns that AI is reinforcing the existing tech market structure, he argues that extending copyright to further limit training on copyrighted works is unlikely to help and may even hurt creators of all stripes.

In a post examining AI art generation and its impact on markets, author Cory Doctorow and policy advocate Katherine Trendacosta imagine a world in which all AI training on copyrighted works must be licensed, and explain how this would be a “pyrrhic victory” for artists. That’s because media markets are also highly concentrated (in part due to copyright itself), and the licensing fees would accrue to those corporations, not to artists.

Moreover, only those tech companies with substantial resources would be able to afford such licenses, reinforcing concentration in that sector.

“The solution to monopoly concerns in tech is not, then, to beef up the government-granted monopoly of copyright, but rather to apply other policy solutions, such as competition and privacy laws,” Slater says.

“The impact on labor markets is a real concern, but it’s also important to recognize that foreclosing generative AI also has an impact on creator-users of those tools.”

As one example, if you look at artist Kris Kashtanova’s tutorials, it’s apparent that generative AI can involve far more of a craft than simply clicking a button.

“People are right to call out the need to think about the impact of these tools on existing artists and content creators, and the political economy of the current tech sector,” says Slater. “But a full accounting can and should factor in the creator-users of these tools as well, both the ones that are emerging today and those that may come in the future.”

 


Mass, Interactive Live Streaming Needs a Different Strategy/Structure

NAB

As video becomes increasingly interactive, the need to accelerate the speed of online traffic is becoming critical.

article here

Outlining the issue at The Next Platform, Vincent Fung, senior product marketing manager at AMD, says, “It’s starting to put a strain on the infrastructure when it comes to the networking pipe and also in terms of processing on the server side. The previous traditional [infrastructure] model starts to not make much economic sense. It becomes a harder model to keep up to address these use cases.”

The prevailing internet model works for streaming. In a one-to-many on-demand environment driven by companies like Netflix, or for events like the live broadcast of sporting competitions, the video feed starts in a single place and runs through cloud datacenters, content delivery networks (CDNs), and edge servers before landing in enterprise offices or the homes of consumers.

It always comes with a little bit of a delay, given the amount of processing and computing that needs to be done in the datacenter to ensure good quality or because broadcasters are looking for a few seconds of delay for editing purposes — but these delays don’t pose a huge problem for such scenarios.

Sean Gardner, head of video strategy and development at AMD, explains, “Netflix can take 10 hours — and they do — to process one hour of video, and they can do it in off-hours when they have excess capacity. But ‘live’ needs to happen in 16 milliseconds or you’re behind real time, at 60 frames a second.”
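The 16 milliseconds is simply the time budget for a single frame at 60fps:

$$t_{\text{frame}} = \frac{1\,\text{s}}{60\ \text{frames}} \approx 16.7\,\text{ms}$$

Any stage of the pipeline that takes longer than that per frame falls progressively behind the live feed.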

Applications demanding real-time interactivity, on the other hand, cannot tolerate that delay. These range from video game live streaming on Twitch to video conferencing.

Gardner says, “If you think about this scenario, where you could have Zoom or Teams, it could have billions of people using it concurrently. Or Twitch, which has hundreds of thousands of ingest streams. The other aspect with live [streaming], is that you can’t use a caching CDN-like architecture because you can’t afford the latency. This is why acceleration is needed.”

Fung adds, “There’s a lot more processing that needs to be done from a video perspective when we look at these interactive use cases when one-to-many becomes many-to-many. You need high performance because you have a lot of people using it. You want to minimize bandwidth costs because the uptake is large.”

Chip makers like AMD and Intel are aware of the issue and trying out different architectures to boost the throughput of video throughout the pipe. AMD, for instance, has a “datacenter media accelerator” and a dedicated video encoding card that can more than halve the bitrate to save on bandwidth.

According to the experts, it’s not just video applications that will benefit from such acceleration. AI use cases are also on the rise.

The AI-Based Future for Video Compression

NAB

AI is considered an essential new tool in the progress towards future video compression technologies, but the next few years will be dominated by the transition to existing standards, including AV1 and VVC, according to a new InterDigital and Futuresource Consulting report.

article here

“The Evolution of Compression in a Video-First Economy” outlines the development path of video compression codecs, which have proven critical in reducing bandwidth and improving efficiency in the delivery of data-dense video services.

The report restates the case that video dominates internet traffic today, with more than 3.5 billion internet users streaming or downloading some form of video at least once per month, and that, with applications for video expanding, state-of-the-art codecs are needed not only to reduce storage and bandwidth but also to use energy more sustainably.

Spotlight on VVC

Unsurprisingly given its own stake in the development and licensing, InterDigital makes much of Versatile Video Coding (VVC/H.266) as the standout video codec that will take over much of the work from existing lead standard HEVC.

Based on research by codec specialist Bitmovin, H.264/AVC continues to be a popular choice for live and offline encoding, but InterDigital thinks this is likely to be overtaken by both H.265/HEVC and AOM AV1 within the next two years.

VVC (H.266) is based on H.265/HEVC and offers up to a 50% bit rate reduction.

Alongside InterDigital, major semiconductor companies including Qualcomm, Broadcom and MediaTek are among the largest contributors of intellectual property to the VVC standard. They will integrate the codec into Android smartphone chipsets, helping to drive VVC adoption in mobile. More widely, hardware decoders are under development to provide support for VVC on TVs, STBs and PCs.

Chinese technology giants Alibaba and Tencent each have their own implementations of the VVC codec (S266v2 and TenCent266, respectively).

NHK and Spin Digital are also developing real-time software VVC decoders to support UHD live streaming and broadcast applications.

Therefore, InterDigital believes, VVC is likely to become the favored codec as UHD services proliferate, with expectations that it will be universally accepted and used from 2026 onward.

However, it says, “VVC may not replace H.264/AVC and H.265/HEVC entirely, but instead the industry is likely to advocate the coexistence of multiple codecs during the transition.”

Looking further ahead, though, codecs like VVC, HEVC and AV1 will likely be superseded by technologies based on neural networks.

“The days of static, block-based codecs may be coming to an end,” InterDigital notes. “Traditional coding techniques use hard-coded algorithms and, although these are entirely appropriate for saving bandwidth, their advancement is still based on traditional heuristics.

“New coding methods, notably those exploiting the power of AI, are poised to supplant current wisdom within the next five years.”

Enter AI

Machine learning techniques are being researched by the major video standards organizations worldwide. The Joint Video Experts Team (JVET) Ad hoc Group 11 (AhG11) is working on NNVC (Neural Network Video Coding) in an attempt to create an AI-based codec before the end of the decade.

The paper explains that there are three primary areas of focus: dynamic frame rate encoding, dynamic resolution encoding, and layering.

In dynamic frame rate encoding, the AI aims to encode video at the minimum frame rates necessary to encapsulate the content without sacrificing quality. News broadcasts might be encoded at 30fps, whereas live sports content benefits from 60fps.

“Using ML to train AI to identify the type of content, it is possible to significantly reduce the encode compute requirements, approaching a 30% reduction overall for content with limited movement between frames.”

Dynamic resolution encoding extends existing compression techniques that streaming content providers employ today. Here, the resolution-bit-rate choices are determined on a scene-by-scene basis to optimize file storage requirements and streaming bandwidth using encode ladders. Using AI, however, would remove the requirement for encode ladders.

“Replacing this ‘brute force’ approach not only reduces computation, but also helps improve sustainability by banishing unnecessary energy usage,” says InterDigital.

This applies to offline encoding as well. Netflix, for instance, has been using AI to avoid exhaustive encodes of all the parameter combinations, with neural-based methods discovering the optimum selection to reduce file sizes.
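A minimal sketch of the per-scene idea is below. The ladder rungs, quality target and quality model are invented placeholders, and a real AI approach would use a trained predictor rather than this toy formula; the point is the control flow: pick the cheapest rung predicted to clear the target for each scene, instead of encoding the whole ladder.

```python
# Toy sketch of per-scene rung selection (not the report's algorithm).
# The ladder, target and quality model are invented placeholders; the
# dummy model stands in for a trained quality predictor.

LADDER = [(480, 1_500), (720, 3_000), (1080, 6_000), (2160, 16_000)]  # (height, kbps)
QUALITY_TARGET = 93.0  # e.g. a VMAF-style score

def predicted_quality(scene_complexity: float, kbps: int) -> float:
    """Dummy stand-in: quality rises with bitrate, falls with complexity."""
    return 100.0 - scene_complexity * 40.0 / (kbps / 1_000)

def pick_rung(scene_complexity: float) -> tuple[int, int]:
    # Cheapest rung first: stop at the first rung predicted to meet the
    # target, instead of brute-force encoding every rung.
    for height, kbps in LADDER:
        if predicted_quality(scene_complexity, kbps) >= QUALITY_TARGET:
            return height, kbps
    return LADDER[-1]  # worst case: ship the top rung

print(pick_rung(0.2))  # simple talking-heads scene -> (480, 1500)
print(pick_rung(0.9))  # complex action scene -> (1080, 6000)
```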

The third AI focus, layering, is aimed at delivering higher-resolution content. Using scalable techniques, UHD videos are encoded using a base layer at HD resolution along with an enhancement layer that conveys the extra data required to reconstruct UHD frames. HD-only receivers ignore the enhancement data, whereas a 4K-capable product uses the additional layer to decode and reconstruct the entire video stream.
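The structure of the layered approach can be sketched in a few lines. This is an illustration of how the two layers relate, not a codec: small arrays stand in for frames, and real enhancement layers are themselves compressed.

```python
import numpy as np

uhd = np.random.rand(216, 384)               # small stand-in for a UHD frame
base = uhd[::2, ::2]                          # "HD" base layer (downsampled)
upscaled = np.kron(base, np.ones((2, 2)))     # naive upscale back to full size
enhancement = uhd - upscaled                  # residual carried by the enhancement layer

# An HD-only receiver decodes `base` and stops; a 4K-capable receiver adds
# the enhancement layer back to reconstruct the full-resolution frame.
reconstructed = upscaled + enhancement
assert np.allclose(reconstructed, uhd)
```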

AI-derived methods are also likely to extend beyond these traditional techniques. For example, AI could reconstruct video using low-resolution reference images alongside metadata describing the context, “delivering a credible approximation to the original video with a significant reduction in bandwidth.”

While ML and AI have a place in helping define current and future video coding, InterDigital says that the industry isn’t about to drop its existing tools and models.

“The industry concurs that traditional video coding tools presently outperform AI-based alternatives in most areas today,” it states. “There are over 30 years of engineering effort and hundreds of companies involved in perfecting video compression standards; this isn’t easily replicated or supplanted by introducing AI into the discipline.”

For instance, the complexity of neural networks “is often exceptionally high” which leads to a proportional increase in energy usage.

“This leads to the inevitable questions around the scalability of such solutions, and the impact of media on environmental sustainability,” InterDigital says.

There are other challenges with AI-based video development. One of them is measurement. While the video standard is fully described and verified against agreed metrics, “when using AI, it is sometimes difficult to explain exactly how the implementation operates; there must be an understanding on how AI-based algorithms adhere to the specifications and produce standards-compliant bitstreams.”

Part of the work of the JVET AhG11 is to establish clear rules by which AI methods can be assessed and made reproducible.

Then there’s the sheer pace of development in AI, which has resulted in the generation of synthetic media. With synthetic media, instead of transmitting an image for every frame using existing compression techniques, systems can use AI to identify key data points describing the features of a person’s face. A compact representation is then sent across the network, where a second AI engine reconstructs the original image from the point data.
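A back-of-envelope comparison shows the appeal. The 68-point figure is a common facial-landmark convention used here purely for illustration, and real systems would compare against compressed video rather than raw frames, so the true gap is smaller but still enormous:

```python
raw_frame_bytes = 1920 * 1080 * 3    # one uncompressed HD RGB frame
landmarks = 68                       # illustrative facial-landmark count
landmark_bytes = landmarks * 2 * 4   # (x, y) pairs as 32-bit floats

print(f"{raw_frame_bytes:,} bytes of pixels vs {landmark_bytes} bytes of points")
# -> 6,220,800 bytes of pixels vs 544 bytes of points
```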

Consequently, InterDigital thinks it may become unnecessary to send video frames, and instead systems might describe scenes using metadata.

The next evolution is data-driven synthetic media, created in near-real time and used in place of traditional media, which could see hyper-personalized video advertising created and delivered within seconds.

“Cloud and device AI processing capability will undoubtedly need to develop substantially for this to happen at scale,” says InterDigital, “however the future for video coding and transmission certainly seems destined for significant transformation.”


How SVODs Are Shifting Their Business Models

NAB

More than three-quarters of SVOD streamers plan to introduce ads by 2025, according to a new global B2B industry survey by video intelligence company NPAW. And nearly 60% of those will implement a hybrid model that combines an ad-supported tier alongside a premium, subscription-based one.

article here 

All of the survey respondents agree that the main driver for this shift is to lower the price of subscriptions for their viewers. Of the sample of streaming platforms surveyed for this study, 41% currently have a two-tier business model.

In second place comes free services with ads (23%), also known as the FAST model, and subscription-based with ads (22%). The traditional SVOD model, subscription-based pricing without ads, was the business model with the smallest sample representation (15%).

Overall, 85% of the companies NPAW spoke to are using ads as part of their monetization model.

Since NPAW is a provider of video analytics and business intelligence, it spends most of its survey assessing the use case for analytics.

“One of the key advantages of having full visibility into user behavior and experience is that services can identify users at risk of churn early on and address their perceived shortcomings before it’s too late,” the company explained. “This can be done by monitoring drops in usage and Quality-of-Experience issues such as buffering or latency.”

With that in mind, the survey asked respondents using third-party analytics if they used it to pinpoint at-risk users. Seventy-four percent of companies use their third-party analytics tool to identify customers at risk of churn. Eleven percent identify such users through other means, while 10% cannot do so with their tool’s information.

When it comes to measuring ad performance, the most popular method is using the data received from the ad server (38%) — despite 39% of companies believing that their ad server numbers are not fully reliable and accurate.

Third-party analytics tools were favored by a quarter of respondents, and another quarter use a combination of these methods.

“It’s encouraging to see that more and more companies are taking a data-driven approach to running their video business, especially as the industry’s shift to ads brings a unique set of measurement challenges,” said NPAW CMO Till Sudworth. “To truly make the most of their advertising-based streaming business, video providers will need an advanced, third-party ad analytics tool — one that can help them track ad performance from an end-user perspective and correlate that information with insights about user behavior and content preferences.”

 


Why Streaming Has to Become More Social, Interactive, and Immersive

NAB

As streaming video competition continues to intensify, subscription growth rates across the industry have slowed — and churn rates have increased, according to Deloitte’s 17th annual Digital Media Trends report.

article here

On average, US consumers pay $48 per month for SVODs, Deloitte’s survey found. About half of those surveyed agreed that they “pay too much” for SVOD services, while about one-third said they intend to reduce their number of entertainment subscriptions.

Around half of consumers (47%) surveyed said they have made at least one change to their entertainment subscriptions because of their current financial situation, such as canceling a paid service to save money, switching to a free ad-supported version of a service or bundling services together.

Millennials are the most likely to have made changes to digital media subscriptions due to economic pressures. Indeed, millennials spend more than any other generation on paid streaming video services — an average of $54 per month. Nearly 45% of millennials have “churned and returned” with a paid SVOD service, cancelling a paid subscription only to renew that same subscription within six months, according to Deloitte’s study.

The report said that watching TV shows and movies at home is no longer the dominant, “go-to” activity it once was — especially with younger generations that are more evenly dividing their entertainment time across TV shows and movies, user-generated content (UGC) on social media services, and video games. They seek entertainment, connection, immersion and utility.

Deloitte describes millennials as weaving media into a personalized tapestry of immersive, social and vibrant experiences. SVODs, therefore, need to adopt and accelerate new strategies that account for this change in consumer behavior in order to reduce churn.

Kevin Westcott, a vice chair who leads the US Technology, Media & Telecommunications practice at Deloitte, said, “The race to continue to add customers by commissioning and acquiring really high-cost content will not succeed on its own.”

The influence of video games is illuminating. More than half of younger gamers decide to play a specific video game after watching a certain TV show or movie. About 45% of gamers said they want to play games based on their favorite movies and TV shows. More than a third of gamers say they feel better about their self-image when they’re playing video games. In addition, Deloitte said, almost half of Gen Z and millennial gamers say they socialize more in video games than in the physical world. A majority of Gen Z and millennial gamers wish more of their favorite movies and TV shows also had video game experiences.

The report said 32% of people surveyed in the US consider online experiences to be meaningful replacements for in-person experiences. For Gen Zs and millennials, it’s 50%.

Half of consumers say UGC videos help them to discover new products or services to buy, and around 40% of consumers say they are more likely to purchase a product after they watch a creator they follow review it.

Westcott said SVOD services should invest in diversifying the content on their platforms, including considering incorporating user-generated short-form video, music and games.

“Streamers are under pressure to reinforce their core offerings, but they should also be leveraging gaming and social media, especially considering the behaviors we are seeing in younger generations. To stay competitive, SVOD providers should seriously consider how to engage broader audiences, play across diverse media properties that add value, and advance their ad platforms to better support advertisers.”


 


Tuesday, 25 April 2023

Behind the Scenes: Yellowstone

IBC

article here

“It’s all about the land,” editor Chad Galster tells IBC365 about working on Yellowstone and spin-off show 1883.

For anyone unfamiliar with Yellowstone, a shorthand would be Dallas meets Succession set amid scenic country-and-western landscapes. The formula has made the Paramount+ show the most popular scripted series on US television. Now in its fifth (and probably last) season, the franchise has spawned spin-offs including 1883 and 1923, each chronicling the settlement of land in Montana eventually owned by the Dutton family. It’s not too grand to call the show’s story arc a foundational myth of the United States of America.

“Yellowstone is a modern-day western but the writing, the situations and the performances make it Shakespearean in a lot of ways,” says editor Chad Galster ACE, who has worked on virtually every episode with creator and showrunner Taylor Sheridan.

“There is a little bit of soap in there, a bit of melodrama from time to time,” Galster tells IBC365, “but at the forefront of our thinking is the father and son relationship, the husband and wife drama, the public and private politics and, as with all these shows, it is all about the land.”

Amid shifting alliances, unsolved murders, open wounds, and hard-earned respect, the Dutton ranch is in constant conflict with those it borders: an expanding town, an Indian reservation, and America’s first national park.

“We show the land as often as we can. Beautiful wide shots transitioning between scenes act as this gentle reminder that this is what everybody wants, this is what they are willing to kill for and this is what is at stake for everyone. For me, that approach works for every show we do and certainly for Yellowstone.”

Galster has notched up 32 editing credits on Yellowstone, though he will also oversee the work of other editors on the show. Intimate knowledge of the characters’ back stories helps him cut.

Show history enriches storytelling

“One of the beauties of a long-running TV show is that it gets to reference itself from years ago. Just like life, you might remember an exchange you had a few years ago, so when you see that person again you are going to react, your body might tense up. We get to do that with our characters.

“It might just be a look that I will know to include because I remember this exchange that they had in Season 2. It’s not that we wouldn’t get there eventually no matter who was cutting the show, but the benefit of having been involved since Season 1 is knowing this history, so these things come naturally to me. It’s fun for me, and hopefully the audience as well, when these little references to the past can be put in.”

It may be a reprise of music. In Episode 1 of Season 4, for example, when it becomes clear that Dutton’s daughter Beth (Kelly Reilly) is going to take the young boy Carter away, Galster reused the score from a moment in Season 1 when young John Dutton finds Rip for the first time.

“I know this is just for the diehard fans but it is little references like that in the sound design or in the picture that enriches the show and all depends on knowledge of its history. I don’t feel I have to study [the show] to introduce these elements.”

Amid all the romantic rivalries and power games in Yellowstone there are moments of camaraderie, notably when country & western tunes play out around a camp fire or ranch party. These musical interludes are “palate cleansers,” says Galster, giving the audience a chance to breathe and reflect.

“It’s very easy to make music a crutch to signpost what an audience should be feeling, so we try to have the scene resolve dramatically. You know what is going to happen or what has happened with our characters, and so these musical moments help transition you into the next scene. It resets you for a new experience, new dialogue, new characters. So we’re commenting on something after it has occurred but hopefully not dragging you through the scene musically.”

The landscape of sweeping cattle ranches and lush mountain vistas is another big draw for the audience. Galster can select aerial shots to comment on specific points in the story (“perhaps a dark cloud can portend a character’s fate”) or to transition between scenes.

“We also use the soundscape of the b-roll, such as horses and cowboys in the distance, to make you feel you are dropped into that environment.”

 

Going back to 1883

Set more than a hundred years before Yellowstone, 1883 tells the story of how the ancestors of the Dutton family set out into America’s untamed west to create what will one day become their namesake Montana homestead. The wagon train travels from Texas into the Great Plains, enduring desperation, death, and destruction on the way.

“We all have a general knowledge of the settlers and the move west to find land but I’d not seen a story in this level of detail,” says Galster, who cut six of the 10 episodes. “We viewed 1883 as a standalone series but part of the greater mythology of another generation of Duttons.”

Galster was with the crew during principal photography at the Fort Worth Stockyards where the first two episodes were staged and regularly visited Texas (where Sheridan is based) to show Sheridan work in progress.

"We had an airstream trailer with an Avid on set so I could do my work at home in LA and take those scenes to set and refine the cut with Sheridan.  During Covid we learned you can all do a lot remotely but that there’s an in-person aspect to putting a show together that is just irreplaceable.

“If you’re having to hold every session on Zoom you just miss things in people’s body language and tone, the way they breathe and the way they react. You have to do that in person, which is why I travel to Taylor.”

A background as an “amateur classical musician” gives Galster a natural affinity with working with music. In 1883 there are the sounds you’d expect, like horses, and some you don’t.

“We wanted to find ways to play against what you’d expect so, for example, we used silence a lot and ethereal sounds. It’s something we did in Yellowstone Season 4 as well, the last two episodes in particular. Entire sequences are treated in the sound design like a score, with chords for low, middle and high settings. It’s a powerful storytelling tool.”

Working with Taylor Sheridan

Galster had a 20-year career as an editor working principally on docs and reality shows before he met Sheridan.

Through a fellow editor on MTV reality show The Hills, he was introduced to Sheridan, who then called him up when extra help was required on the first season of Yellowstone.

“That was my move from reality to scripted,” Galster relates. “I went off to the Utah Studio in Park City where Yellowstone was being staged, met Taylor, and we hit it off as professionals and as people. I guess he liked my work because from there I became his finishing editor, meaning I’d either cut episodes from scratch to completion or be there as help for the final version should another episode need it. I’ve worked exclusively for him ever since.”

Many editors enjoy good relationships with their directors; it helps all round if both parties share similar interests and creative ideas, but few seem quite as buddy-buddy as Galster and Sheridan.

“We know each other’s families and we just enjoy engaging each other as people, and we have found a lot of success in the work we have done. He is very exciting to work for and a brilliant writer.”

Galster also cut Sheridan’s 2021 feature thriller Those Who Wish Me Dead, starring Angelina Jolie, and half of Mayor of Kingstown, and is currently working on Lioness, another Sheridan-run Paramount+ series starring Nicole Kidman, Zoe Saldaña, and Morgan Freeman about undercover CIA agents attempting to bring down a terrorist organisation.

“The last couple of years with Taylor have been pretty relentless. We’ve done some phenomenal work together, work I’m very proud of, and I wouldn’t trade it, but it means you have to be very present at home when you have that opportunity and take advantage of the breaks.”

In Season 5 he says he is particularly proud of the first episode, which opens on the face of John Dutton (Kevin Costner) as we learn that he has been elected state governor.

“We don’t deal with the election process itself, we just drop the audience in to the aftermath – he is now governor so let’s see what that means to him and those around him.

“I love the inauguration scene when he is being sworn in. It is this ‘holy crap, what have I done to myself?’ moment. One thing I did in the offline editing, which the sound department elevated, was to just have all the sound slowly fade away. Dutton can just hardly believe what he is seeing and how he got there. When the judge asks him to ‘repeat after me’ he can’t do it at first since he is completely lost in this moment. All the sound goes away and you hear these little church bells. I am proud of that sequence. It feels the way I hoped it would feel.”

 

 


Saturday, 22 April 2023

The AI-Generated Search for Creativity is About to Explode

NAB

We are about to experience a huge boost in creativity thanks to the supercharged relationship between humans and artificial intelligence, believes internet blogger Jon Radoff.

article here 

Even if we don’t fully understand how it works, it is the ease of communicating with large language models like ChatGPT that will propel society’s ability to become more “efficient at creativity.”

“We are moving towards a new phase in human civilization: one that involves not only enhancing our own creativity with computers, but working alongside a network of generative models and agents that will help along the path of discovery,” writes Radoff, a self-described adventurer and entrepreneur, blogging about gaming and AI at Metavert Meditations on Substack.

These systems will not only be collaborators, he says. AI will help us “filter through the vast ocean of data, information and applications and practices” of all the creativity that happened in the past.

He prophesies an acceleration in the ease of integrating, linking and combining creative content (so-called “composability”) and an exponential scaling-up in the number of creative actors.

“Rather than think of creativity as something unique to [human] genes or our brains, or divinely inspired, or based on some other vital magic — it may be helpful to think of creativity as a search,” he suggests.

“If the universe is a nearly-infinite number of possibilities, parameters and variables—then perhaps creativity is about applying efficient processes towards this search for effective solutions.”

This search is one that results in all manner of discoveries, he says, not only scientific discoveries, but engineering problem-solving and the production of artistic works and cultural products.

Searching the entire variable-space (let’s call that the multiverse of possibilities) would be impractical since it would require infinite computation and therefore infinite time and energy.

A better means of conducting this search is what we might call intelligence, Radoff suggests.

“As we continue to scale-up the number of minds, network with each other, and create better algorithms for conducting the search, we will produce useful outputs: the kind we call creative,” he says.

Radoff also talks about the concept of “emergence,” which is already well known to game makers. Emergence is the idea that from a set of simple underlying rules, complex systems may emerge. As more inputs are available to the system, it is possible for the game to become far more complex.

For example, what made roleplaying games like Dungeons & Dragons so compelling is that the “emergent complexity” came from the ability for players to add their own creativity and storytelling to the experience. A game like Minecraft gave players the ability to shape the structure of the world, build custom servers, and invent mods that affected the experience of other players.

Multiuser dungeons, virtual worlds, and then massively multiplayer online role playing games added even more emergent complexity: they scaled-up the number of players and their network of social interactions.

He thinks that the simple interface of ChatGPT is a gateway to ever increasing complexity that meshes human-machine creativity.

“Much of the recent excitement in artificial intelligence is that the natural-language interfaces ‘just work.’ And while these systems make mistakes (itself a quality we attribute to humans more than machines), it is a universal interface that allows us to interact with them efficiently.”

Good games, he adds in a side note, are usually those that don’t overwhelm the player with this complexity within the basic rules — otherwise the game becomes too hard to learn.

But when the learning curve is balanced with complexity that’s more emergent in nature, it often makes for long-term fun as players continuously learn new forms of interaction with the environment.