Monday, 29 November 2021

How Should Streamers React to Slowing Subscription Growth?

NAB

From Disney to Netflix, the major streamers are still growing, but subscriptions are slowing, suggesting that pandemic gains are waning as more people return to outside activities and out-of-home work.

https://amplify.nabshow.com/articles/how-should-streamers-react-to-slowing-subscription-growth/

Disney announced that it added 2.1 million subscribers for its fiscal fourth quarter, which ended October 2. That’s down from 12.6 million added the previous quarter.

Slowing growth was also the story at WarnerMedia and ViacomCBS. NBCUniversal’s Peacock added a few million more subscribers according to CEO Jeff Shell during a recent earnings call, but he didn’t reveal a new figure.

AT&T’s WarnerMedia revealed HBO Max added just 570,000 new US subscribers in its last quarter but chalked up a net loss of 1.8 million HBO and HBO Max customers because of its decision to remove HBO from Amazon Channels last year. According to analysis from Television Business International’s Richard Middleton, instead of going direct, many customers just didn’t bother.

Lionsgate’s Starz saw 40% year-over-year growth in streaming subscribers in the quarter, having lost about 600,000 global subscribers in Q3, a decline it attributed to cancellations of the company’s linear service.

Middleton believes Americans have reached a tipping point on the number of SVOD services they need in their lives.

“US apathy towards SVOD was also evident in Netflix’s latest numbers, with just 70,000 new customers in North America joining the streamer,” he says. “That was despite the advent of its giant South Korean hit, Squid Game, dominating headlines across the continent.”

That said, Netflix reported adding 4.4 million subscribers this quarter compared to 1 million adds in its Q2 as it continued to grow faster than other platforms outside the US. It is expecting an even bigger bounce next quarter, forecasting 8.5 million new subscribers on the strength of Squid Game and other buzzy content coming to the service before year end, including Tiger King 2.

Determining who is winning and losing the game isn’t easy. A simple way to gauge that is by looking at total subscribers and average revenue per user, or ARPU, which CNBC does. But not every company reveals those numbers.

Apple, for instance, has not revealed subscriber numbers since its streaming service launched in 2019, and Amazon doesn’t break out ARPU and hasn’t provided updates on Prime Video during Q2 or Q3 (though in April, Amazon said it had 175 million Amazon Prime members, all of whom receive Prime Video as part of the package).

Even where ARPU is available, it is evident that margins are far tighter outside the North American market.

According to CNBC, WarnerMedia’s HBO and HBO Max are delivering a healthy looking ARPU of $11.82 in the US market. HBO Max is in the process of rolling out overseas — where ARPU will be tighter.

The ARPU of Disney+ is currently $4.12 — a figure that includes the millions of subscribers paying for Disney+ Hotstar in Asia. Take those subscribers out of the equation, however, and the figure shoots up to $6.24. Netflix subscribers in the US pay, on average, just under $15 a month for the service, while those in Asia pay an average of $9.60.
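The blended-versus-segment distinction here is just weighted-average arithmetic, and a short sketch makes it concrete. The subscriber counts and per-segment prices below are hypothetical round numbers chosen for illustration, not Disney’s actual split:

```python
def arpu(monthly_revenue: float, subscribers: int) -> float:
    """Average revenue per user: total revenue divided by subscriber count."""
    return monthly_revenue / subscribers

# Hypothetical two-segment service: a core market at $6.00/month and a
# discounted region at $1.00/month (illustrative figures only).
core_subs, core_price = 60_000_000, 6.00
discount_subs, discount_price = 40_000_000, 1.00

total_revenue = core_subs * core_price + discount_subs * discount_price
blended = arpu(total_revenue, core_subs + discount_subs)
core_only = arpu(core_subs * core_price, core_subs)

print(f"blended ARPU:   ${blended:.2f}")   # the low-priced segment drags the blend down
print(f"core-only ARPU: ${core_only:.2f}")  # excluding it, the figure "shoots up"
```

The same mechanism explains why stripping Disney+ Hotstar out of the reported numbers lifts the headline ARPU.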

 


What Unity/Weta Digital Will Mean for the Metaverse

NAB

Unity Software’s audacious $1.625 billion swoop for Peter Jackson’s VFX house Weta Digital is an attempt by the maker of the second-most popular game development platform to close the gap on its rival Epic Games.

https://amplify.nabshow.com/articles/why-unity-weta-digital-means-another-metaverse-maybe/

The deal is predicated on a strategic bet by Unity that there is pent-up demand for the creation of 3D characters, assets and environments on a scale far beyond what is currently generated for big-budget films and AAA games in elite shops like Weta.

In the first instance, Unity aims to open up the creation of photoreal CGI powered by its Unity games engine to fuel virtual production. It’s a market currently dominated by Epic’s Unreal Engine on stages such as Dark Bay at Studio Babelsberg in Berlin.

Longer term, Unity has its eye on the metaverse and what it thinks will be huge demand for professional and non-professional content creators to build assets to populate the 3D internet.

The deal promises to make the tools used to create Gollum for Jackson’s Lord of the Rings trilogy, Caesar from Planet of the Apes, and Pandora from Avatar available to creators all over the world. Indeed, Unity’s move for Weta is intended to stoke the market by allowing access to these tools over a cloud-based platform, though whether this will come to pass as Unity imagines remains a gamble.

Explaining the reasoning behind the deal, Marc Whitten, Unity Create SVP and GM, told VentureBeat, “The key for me [is that] the metaverse is going to need more 3D content. It’s going to need an extraordinary increase in the number of people capable of building in 3D. From a Unity perspective, we really started thinking hard about how we could build something that democratizes content creation.”

Under the deal, Unity is obtaining the Weta Digital suite of VFX tools and technology and its team of 275 engineers, who will join Unity’s Create Solutions division. WetaFX remains a standalone entity (under majority ownership of Jackson and led by CEO Prem Akkaraju) and Unity’s largest customer.

Whitten added, “You had this set of people who had built the most spectacular tools ever for 3D content creation that had never been productized, and then you had Unity, where our bread and butter is packaging and democratizing tools and making them more accessible.”

“Industry observers viewed the buy as a shrewd move by Unity to make gains in the rapidly evolving area of virtual production — a term that describes techniques that enable real-time visual effects production and may include technologies such as LED walls,” The Hollywood Reporter said of the deal. “Most major VFX companies such as Weta, as well as the likes of Netflix and other entities, are exploring or investing in virtual production.”

“This whole space is exploding,” Whitten told her. “This is the beginning… I hope something you see is a substantial shift in our position and our kind of level of commitment to Hollywood and the industry.”

Weta Digital’s tools provide a range of features including advanced facial capture and manipulation, anatomical modeling, advanced simulation and deformation of objects in movement, and procedural hair and fur modeling. All told, Weta Digital’s software assets comprise some 50 million lines of computer code.

More prosaically, IndieWire suggests that means the “secret sauce” behind the facial capture of Caesar will now become more widely available, along with the rendering capabilities of Manuka and Gazebo, the physics-based simulation Loki tool for water and smoke, the Barbershop hair and fur system, the CityBuilder world-building tool, and a Weta VFX asset library in the thousands.

According to Akkaraju, Weta Digital previously evaluated commercializing the tools itself but concluded that selling the technology assets to Unity was the best way to bring them to market.

“There was a gigantic demand for artists and these services that were driven by Amazon, Apple, Netflix, and all the major studios,” he told IndieWire. “But there were so many restraints on specialized hardware and a lot of licenses, and, by putting it in the cloud, you don’t need all these licenses by providing end-to-end service.

“We looked for a partner that could actually bring these tools to life, and fill up the gap between the demand and the supply in the film [and TV] business. Then beyond that, it gets significantly larger as you go into consumer products and you start thinking about this as being the new creation device of 3D content rather than what it is today, which is more 2D content.”

Weta already has an arrangement with Amazon AWS to create a cloud-based VFX workflow, and has also signed cloud-services bundling deals for Autodesk’s Maya and SideFX’s Houdini.

Unity now intends to offer these tools in a cloud-based Software-as-a-Service subscription model, building on the more than five billion monthly downloads of apps built with Unity that the company says it saw in 2020.

Bay Raitt, principal of UX design at Unity (and formerly an animator at Weta), told Variety that Weta Digital’s tools “have been kind of landlocked inside of Weta,” but with cloud-based access, “You can essentially spin up the Weta workstation and summon the power of thousands of computers from anywhere.”

If the deal can help bring down the cost of virtual production content creation, by providing competition to Unreal for example, all well and good. It could also help to bridge the skills gap between traditional content production and the emerging disciplines of real-time digital production, photographing virtual assets live on LED screens and integrating game engine technologies into VFX pipelines.

When it comes to the metaverse, though, there are dissenting voices skeptical of the whole enterprise and its trillion-dollar valuation. Rob Fahey worries that vast amounts of time, money and effort are being thrown at a “massively hyped venture” — the metaverse — that could ultimately end up changing very little about how people interact with their hardware devices, with the Internet, or with one another, because the necessary groundwork hasn’t been done.

“It’s great that Unity is doing some blue-sky thinking about the metaverse and the tools it might require,” he writes in Games Industry, “but companies that can’t afford to spend billions should be far more circumspect, especially since the real value of Weta Digital to Unity is almost certainly going to end up being far closer to its own wheelhouse than to Zuckerberg’s grand and nebulous plans.”

 


Avatar to Web3: An A-Z Compendium of the Metaverse

NAB 

Metaverse, NFT, creator, Web3, avatars: buzzwords that have gone viral in 2021 are all linked to one another. Here’s how.

https://amplify.nabshow.com/articles/avatar-to-web3-an-a-z-compendium-of-the-metaverse/

First, let’s look at the idea of community: the world’s citizens as one digitally connected species. Some 60% of the human race, or 4.5 billion people, are online.

“What’s fascinating is that today’s communities are both the deepest and the broadest in human history,” argues Rex Woodbury of Index Ventures, blogging at Digital Native.

On the one hand, maybe only 1 in 1,000 people like the same things as you—but with 4 billion people online, that’s 4 million people who share your interests. On the internet, no niche is too niche.

At the same time, the internet’s scale unlocks breadth: 142 million Netflix accounts watched Squid Game in its first month—67% of all accounts around the world.

“The pace of internet culture means cultural phenomena have shorter durations but this scale has never been seen before,” Woodbury says.

He cites one Chinese internet company, Bilibili, which has a $33 billion market cap and 202 million monthly active users, a base achieved by building friction into community.

In order to join a Bilibili community, users must pass a 100-question test. A sample question from the quiz to join the Game of Thrones community: “Which of the following is not part of the Faith of the Seven?”

No, me neither. Building in friction means that communities are composed only of superfans; 80%+ of Bilibili’s users are still loyal after 12 months.

Woodbury: “If the 2010s were the decade of performance online—status and signaling in broad brushstrokes, through likes and retweets and follower counts—the 2020s are the decade of deep and engaged digital communities.”

Taking this forward is the concept of authenticity. Admittedly an overused word—especially when tied to Gen Z—"it also captures a collective exhaustion with the narcissistic, image-obsessed internet culture of the past decade,” Woodbury says. “This year’s social media upstarts have leaned into authenticity. Authenticity shows up in the types of platforms we’re spending time on. More ‘authentic’ platforms like Snapchat and TikTok continue to thrive.”

Authenticity brings us to avatars. This might seem a strange connection: aren’t avatars, by nature, inauthentic?

“But for many people, avatars are a vessel for more authentic self-expression. Avatars have evolved into digital representations of who we are or who we hope to be.”

A couple of examples: YouTuber Ironmouse is a creator who streams in the form of a pink-haired anime girl. She became a vTuber (virtual YouTuber) because of an autoimmune disorder that limited her offline life. In one interview she recalled: “I got so sick that I couldn’t go out. My contact with people was very limited and I felt that I couldn’t really be a human. So I started being a vTuber.” She added, “I have never felt more myself than I have in this digital body.”

Equally fascinating is Sam Kelly, a man who spends hours in a virtual world called Stardew Valley. Sam writes:

“In the real world, I am a burly 27-year-old man with a bushy beard. In the video game, I am Olivianne, a strapping blue-haired woman married to Penny.”

Within worlds like Minecraft and Roblox and Fortnite, people are already embodying new digital identities. In Fortnite, you can pay 1,500 V-Bucks ($15) to be Iron Man or a Patriots player.

New companies are bringing avatars to life in new ways, often by building interoperability into digital assets. RTFKT, for instance, is a digital fashion house that sells NFT sneakers. “You can envision one day wearing these sneakers between virtual worlds,” Woodbury says.

Ready Player Me is a cross-game avatar platform. Game developers can integrate avatars into their games, while players can snap a selfie (which generates their avatar) and then use that avatar in 660 supported games.

Woodbury quotes from Ready Player One. Asked why people visit the OASIS—a vast, immersive virtual world—the protagonist says: “People come to the OASIS for all the things they can do, but they stay because of all the things they can be.”

In a piece for Vanity Fair, “The Metaverse Is About to Change Everything,” Nick Bilton envisions what this future could look like:

“In a world where the metaverse exists, rather than hosting a weekly meeting on Zoom with all of your coworkers, you could imagine meeting in a physical representation of your office, where each person looks like a digital version of themselves, seated at a digital coffee table drinking digital artisanal coffee and snacking on digital donuts. If that sounds a bit boring, you could meet somewhere else, perhaps in the past, like in 1776 New York City, or in the future, on a spaceship, or at the zoo, on another planet. You could choose not to be yourself, but rather some form of digital avatar you picked up at the local online NFT swap meet, or at a virtual Balenciaga store. You could dress like a bunny rabbit to go to the meeting. A dragon. A dead dragon. And that’s just one measly little meeting. Imagine what the rest of the metaverse might look like.”

When you can use Epic Games’ MetaHuman Creator to build believable digital humans within minutes, we’re hurtling towards our metaverse future. The full manifestation is still years (decades?) off, but the bricks are being laid. The next steps are VR and AR.

We’re still in the early days of VR and AR, but things are picking up. VR software sales inflected in 2019. By early 2020, over 100 VR titles had broken $1M in revenue. According to Woodbury, VR is finally moving from product to platform.

He cites Snap’s AR platform Lens Studio, which lets developers build their own AR experiences with a set of accessible tools. 200 million Snapchat users interact with AR in the app every day, and ‘AR creator’ is rapidly becoming a new job title.

The metaverse is being built using open-standard software like Universal Scene Description, which allows 3D assets to be read by multiple third-party applications. This is one part of what can also be described as Web3, the successor to our current internet, which Woodbury characterizes as a reorientation of our digital economy.

“Web3 is the internet (finally) owned by creators and communities. This is made possible through blockchains like Ethereum. Smart contracts run on Ethereum as collections of code with specific built-in instructions—there’s no need for a centralized authority and no intermediaries are involved.”

This is in contrast to the first stage of the internet which centered around documents and pages being linked together, with companies like Google and Yahoo! making the world’s information easily discoverable. In Web1, most people were passive consumers.

“If Web1 was about information, Web2 was about social connection and content creation. With Web2, which has run from around 2005 through to today, we became active creators and the web shifted from a reading platform to a publishing platform.  But Web2 had a dark side: the major platforms vacuumed up all of the economics. In Web2, users created the value that the platforms then enjoyed.”

With Web3 the idea is that instead of Facebook owning and profiting from user-generated content, everyone contributes value to the internet, and everyone enjoys the benefits of that value creation.

“In Web1, we browsed. Web2 was about users who were acquired. Web3 is about creators and communities who are owners.”

And then there is crypto, which covers a broad range of new currencies and transaction types that will power the creator economy.

“If you strip away the noise, you get to the heart of why this movement matters: crypto is about removing gatekeepers and providing a more efficient and more egalitarian digital economy. Crypto infuses value into the web’s vast networks of information, people, goods, and services.”

Digital tokens underpin this. Trading volume of NFTs surged this year to $10.7 billion in the third quarter with OpenSea emerging as the go-to destination for buying and selling NFTs, capturing 97% market share.

Brands from Pringles to Gucci to McDonald’s have introduced NFTs. For The Matrix Resurrections, Warner Bros. will offer 10,000 NFT avatars for $50 each, as reported by The Hollywood Reporter. On December 16, NFT holders can choose to take the ‘Red Pill’ or ‘Blue Pill’; if they choose the ‘Red Pill’, their avatar will transform into a resistance fighter.

Now that’s meta.

But despite more companies entering, NFTs remain niche: only 25% of US adults are familiar with NFTs, and only 7% are active users. OpenSea has about 300,000 monthly active traders. By comparison, eBay has close to 200 million monthly actives.

“Across all of Web3, tokens are being used in new ways to influence behavior,” Woodbury says. “Fungible tokens are becoming the currencies of virtual worlds; NFTs denote ownership and scarcity. Above all, tokens inject incentives into the digital economy. They incentivize creation and consumption, investment and governance. They are the architecture behind the complex economies being built.”

More buzzwords to learn: DeFi is decentralized finance, a blockchain-based form of finance that doesn’t rely on traditional financial intermediaries like brokerages, exchanges, and banks. Few industries have more middlemen than finance, and DeFi cuts them all out.

“You don’t need your government-issued ID or Social Security number to use DeFi. By using blockchains—software-based smart contracts—DeFi enables frictionless peer-to-peer transactions with no institution or bank or company facilitating.

“Crypto needs both money and culture. If DeFi ushers Wall Street into a new era, NFTs will do the same for Hollywood, for Fifth Avenue, and for other cultural hubs. Both matter, and both will be massive.”

Finally, creator. In Web3, it is argued, it will become easier for us all to be participants in the creator economy—to earn income from the things we make.

Woodbury poses this scenario: you make a video of a popular dance trend set to Olivia Rodrigo’s ‘Good 4 U’ that uses an AR filter. In Web3, three creators earn income off of this: the person who came up with the dance trend; Olivia Rodrigo; and the person who originally made the AR filter. Each component part lives on the blockchain and value flows to creators frictionlessly and instantaneously.
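The attribution mechanics in that scenario boil down to a pro-rata split among registered contributors. The sketch below is purely illustrative — the contributor names and share weights are invented, and a real Web3 implementation would encode this logic in a smart contract rather than a local function:

```python
def split_revenue(total: float, shares: dict[str, float]) -> dict[str, float]:
    """Divide `total` among contributors in proportion to their share weights."""
    weight = sum(shares.values())
    return {name: total * s / weight for name, s in shares.items()}

# Hypothetical weights for the three creators in Woodbury's scenario.
payout = split_revenue(100.0, {
    "choreographer": 1.0,   # came up with the dance trend
    "musician": 2.0,        # wrote and performed the song
    "filter_artist": 1.0,   # built the AR filter
})
print(payout)  # {'choreographer': 25.0, 'musician': 50.0, 'filter_artist': 25.0}
```

The Web3 pitch is that because each component lives on-chain with a known creator, a split like this can execute automatically every time the video earns money, with no platform taking a cut in the middle.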

“Today, society devalues creative work,” he says. “In many circles, being a lawyer is more prestigious than being a podcaster. Building better monetization will help solve this.”

The market for digitally-native creative work is predicted to grow enormously in Web3.

At its heart, the creator economy is a reorientation of how economics flow to the people who make things, rather than being captured by intermediaries along the way. That is where avatars, authentic online communities, developments in AR/VR and the whole metaverse project are headed — though it remains to be seen whether big tech will take that lying down.

 

 

The Camera in “Succession” Is a Player in the Game

NAB 

The cinematography of Succession is full of flaws. Yes, one of the most popular and critically acclaimed TV dramas of the moment seems to get away with imprecise framing, characters who block other characters and awkward focus pulls – all the things that, in the normal world of TV styling, and especially with the kind of budget HBO puts behind a prestige show like this, would get the camera operators fired.

https://amplify.nabshow.com/articles/the-camera-in-succession-is-a-player-in-the-game/

Of course, the flaws aren’t flaws but carefully designed into the visual grammar of the show. The show is consciously shot as though a real camera were in the room, often at the expense of ideal compositions, and a big part of why that is has to do with how the show treats its camera like it’s a character.

Video essayist Thomas Flight has dissected all this in a video: https://youtu.be/_lU91279xZk

Since the show is mainly driven by dialogue between various people in a room, it could easily be very formulaic and boring, Flight says. Yet many of these conversations feel tense and exciting. It’s also a show about a group of people who aren’t particularly nice, yet you find yourself getting engrossed in their drama. Why?

Because the camera crafts a character that doesn’t exist consciously on screen, but one that sits in the unconscious mind of the viewer and aids in the telling of the story.

This style of cinematography isn’t new. The pilot to the series is titled “Celebration,” a reference to the 1998 Danish film Festen (The Celebration), made by director Thomas Vinterberg. Festen was the first film in the Dogme95 movement, which employed handheld camera work and an approach to filmmaking that attempted to mimic the conditions of documentary filmmaking.

Succession takes cues from Dogme95, cinema verité and other styles that use documentary techniques to create fictional stories.

“Even though it is not a documentary or a mockumentary the scenes in Succession are still shot as if the camera-operator is in the room with the characters attempting to capture things as if they were real events,” Flight explains.

“A more formal narrative show would place the camera between the characters and the actors would pretend it isn’t there. But filming in an ob-doc style the cameras are forced to the sidelines. The camera operators don’t want to get in the way so they end up looking around the people in the room to get the best view. Sometimes the result is less than ideal compositions.”

If it feels like the cameras are actually in the room, it also feels like there’s an actual person operating them in that room. They are not just objective floating observers. Where they look and how they move has a subjective motivation and personality to it, Flight contends. This creates the opportunity for the character of the cameras to express itself.

Flight says the camera acts like a player in the game being played on screen.

Succession is about the schemes and machinations of the family as they each try to achieve what they want. It’s like a game. They have strategies and they talk about making ‘plays’. The board of this game is the space on screen and the conversations between characters. Often the goal is to accomplish what they want while hiding their true intentions.

“The actual lines the characters say are often meaningless while the real meaning is in the looks and glances and expressions of characters caught off guard by the camera in the room.

“All the players know they are playing this game so each character is also trying to understand what the other character’s hidden motivations are. Reactions, hidden subtle expressions, body language are all clues that the character and the audience can use to understand what the character really wants or really means.”

In the same way the characters in the scene are scanning each other for clues that betray their real intention, so are the viewers and the camera operators in the scene.

Breaking form

The show’s style doesn’t always stick to these rules. Once the conventions for a show are established you can break those norms to create contrast for a specific impact. For example, the energy of the cameras often matches the energy of the scene. When the family is scrambling around trying to say the right thing, the camera searches and dives as well. In other scenes where the characters feel safe or in control the camera calms down. At times the cameras use smoother movement, dollies or even slow-motion in contrast to the frenetic handheld movement in the other scenes to build tension.

 

 


Processing the Difference Between AI and Machine Learning

NAB

It’s one of the bugbears of vendors with genuinely artificially intelligent products that so many of their rivals claim the same for inferior technology. Likewise, ‘AI’ and ‘machine learning’ are used interchangeably without due diligence as to their veracity. Putting AI/ML on a press release is often lazy marketing.

https://amplify.nabshow.com/articles/processing-the-differences-between-ai-and-machine-learning/

Emily Yale, Principal Data Scientist at F5 Shape Security, is frustrated too.

“This vagueness is the fundamental problem with simply describing your product as ‘using AI,’” she argues in a column for InformationWeek. “You haven’t told anything about what the product is actually doing, why it qualifies as AI, and how they should evaluate it.”

She attempts to clarify. AI is a broad field that aims to bring human intelligence to machines, while ML is a subset of AI that focuses on learning from data without explicit programming. Use of ML does qualify as use of AI, but use of AI does not imply use of ML.
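Yale’s distinction — explicit programming versus learning from data — can be made concrete with a toy contrast. Both classifiers below flag “suspicious” login traffic by request rate; the scenario and all numbers are invented for illustration:

```python
# AI-but-not-ML: the decision rule is explicitly programmed by a human expert.
def rule_based_classifier(requests_per_minute: float) -> bool:
    return requests_per_minute > 100  # hand-chosen threshold

# ML: the same kind of rule, but the threshold is *learned* from labeled data
# rather than written by hand.
def train_threshold(samples: list[tuple[float, bool]]) -> float:
    """Learn a threshold as the midpoint between the two classes' means."""
    benign = [x for x, malicious in samples if not malicious]
    bad = [x for x, malicious in samples if malicious]
    return (sum(benign) / len(benign) + sum(bad) / len(bad)) / 2

# Labeled training data: (requests per minute, was it malicious?)
data = [(5, False), (12, False), (20, False), (180, True), (240, True), (300, True)]
learned = train_threshold(data)

# Both systems exhibit "intelligent" behavior, but only the second learned it.
print(rule_based_classifier(150), 150 > learned)
```

Both print `True` here, but the provenance of the decision boundary differs — which is exactly the detail Yale argues a product description should disclose.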

Yale also supplies some questions for prospective buyers — and urges them to ask them: “What are the components of the AI system? Why do they warrant classification as AI? How are they established, tested, and updated? If these kinds of details can’t be provided or seem thin and vague, be wary of snake oil and signatures repackaged as ‘AI.’”

She calls out vendors for being economical with the truth.

“If your AI system is AI only because it is using ML, stop diluting its description by calling it an AI system and call it an ML system instead.”

Likewise, there are specific questions to ask about systems described as ML: “How is the data populated, labeled (if at all), and updated? What type of models are being used and how are they trained? What output do they produce, and how can that be tailored to specific performance goals and risk tolerances?”

Using the right language is a critical step forward in navigating the buzzword hype around AI and ML. If AI is the right term to go in a product description, then use it, Yale urges, but be prepared to justify why it is warranted and accurate. If a product description is better served with ML instead, then ditch AI and be precise.

Her broader point applies to any product description caught up in meaningless marketing speak which does no good for the seller or the buyer.

“Product descriptions should cue buyers on what they need to ask in order to understand if a purchase is the right fit for them, and they should enable sellers to easily articulate the product’s strengths and use of technology.”

 


SaaS, IaaS, PaaS: Cloud Computing Class is in Session

NAB

Over the last decade, cloud computing has evolved from a significant new method of provisioning enterprise technology to a cornerstone of delivering IT functionality and content. The technology is moving at pace, taking business models with it. While most companies today straddle a hybrid on-premises/cloud position, a move to a hybrid multicloud scenario is deemed inevitable in the long run by Verimatrix.

https://amplify.nabshow.com/articles/saas-iaas-paas-cloud-computing-class-is-in-session/

The specialist content security company backs up its findings in a new white paper, “Ahead in the Cloud,” written by research firm Omdia.

Growth of SaaS, IaaS, PaaS

Cloud computing has gone mainstream; here’s what you need to know.

There are three widely used categories for discussing cloud services:

Infrastructure as a service (IaaS), which delivers infrastructure components such as compute, network, storage, and virtualization as services

Platform as a service (PaaS), which in addition to the IaaS layers, also provides an application development environment via an application programming interface (API) for developers to write to before they deploy their apps onto the platform

Software as a service (SaaS), in which the entire application stack, including the application itself, is delivered as a service

SaaS requires the least effort from the customer, and this ease of adoption has made it “the early breakout star” of cloud computing; it remains a mainstay of the overall market, according to the report.

Omdia estimates that spending on SaaS services totaled $58 billion in the first half of 2020, a 28% increase on the same period in 2019.

Extrapolate that growth rate through the end of the year (which seems reasonable given the uptick in demand for cloud services as a result of the coronavirus pandemic), and Omdia says the SaaS market was worth around $125 billion for 2020.

At the other end of the spectrum, IaaS requires most effort from the enterprise customer, which becomes responsible for everything from the operating system on which the application will run, through the app itself, up to and including the data.

“IaaS adoption is therefore a weightier undertaking, requiring a higher degree of confidence on the part of an enterprise, and for this reason, its growth has been a more recent phenomenon as companies’ comfort levels have risen.”

That said, IaaS offers the enterprise customer considerably more control and freedom of movement than SaaS does, and for this reason it has now grown to be the largest segment of the market.

Omdia estimates that it was worth $64 billion in the first half of 2020 and extrapolates that to $138 billion for the entire year.

PaaS, meanwhile, can be thought of as a halfway house between the other two, affording more control over the application than SaaS but with less of the heavy lifting required for IaaS. Here, the cloud service provider (CSP) is responsible for the underlying runtime environment and OS.
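The division of responsibility across the three models can be sketched as a simple stack mapping. The layer names and boundaries below are a simplification for illustration — real shared-responsibility models vary by provider:

```python
# Simplified technology stack, from infrastructure up to the customer's data.
STACK = ["network", "storage", "compute", "virtualization",
         "operating_system", "runtime", "application", "data"]

# Index into STACK at which provider responsibility ends and the
# customer's begins, per the report's descriptions of each model.
PROVIDER_BOUNDARY = {
    "iaas": STACK.index("operating_system"),  # provider stops at virtualization
    "paas": STACK.index("application"),       # provider also runs the OS and runtime
    "saas": STACK.index("data"),              # provider runs the application itself
}

def customer_layers(model: str) -> list[str]:
    """Layers the enterprise customer still manages under a given model."""
    return STACK[PROVIDER_BOUNDARY[model]:]

print(customer_layers("iaas"))  # everything from the OS up, including the data
print(customer_layers("paas"))  # just the application and its data
print(customer_layers("saas"))  # only the data remains the customer's concern
```

This matches the report’s framing: IaaS leaves the customer “responsible for everything from the operating system … up to and including the data,” while under SaaS the entire application stack is delivered as a service.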

Omdia’s numbers for the first half of last year put the PaaS market at $32 billion, which it extrapolates to $71 billion for all of 2020. By this calculation, PaaS grew 39% in 2020, compared to IaaS’s 32% and SaaS’s 28%.
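The extrapolation Omdia describes amounts to applying the year-over-year growth rate observed in the first half to the prior full year. The 2019 full-year baselines below are back-solved from the article’s reported figures (they are not numbers Omdia published), so this is only a consistency check on the arithmetic:

```python
def full_year_estimate(prior_full_year: float, yoy_growth: float) -> float:
    """Project the current full year by applying the observed YoY growth rate."""
    return prior_full_year * (1 + yoy_growth)

# (implied 2019 full-year spend in $B, H1-2020 YoY growth rate from the article)
segments = {
    "SaaS": (97.7, 0.28),
    "IaaS": (104.5, 0.32),
    "PaaS": (51.1, 0.39),
}

for name, (prior, growth) in segments.items():
    print(f"{name}: ~${full_year_estimate(prior, growth):.0f}B for 2020")
# Reproduces the article's estimates: SaaS ~$125B, IaaS ~$138B, PaaS ~$71B
```

Worked this way, the figures are internally consistent: PaaS’s 39% growth rate is the fastest, even though IaaS remains the largest segment in absolute terms.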

“PaaS is clearly proving to be the most popular delivery mode for cloud computing of late, enjoying the fastest growth rate overall,” states the report.

Hybrid Cloud to Hybrid Multicloud

Omdia’s separate ICT Enterprise Insights survey for 2020/21 highlighted that almost 18% of industry-specific and enterprise applications — such as media asset management, playout, and multiplatform engagement — will be moved to hybrid cloud in 2021.

However, the trend is toward hosting applications both on-premise and in multicloud, where media companies can take advantage of the strengths of major CSPs.

Per Verimatrix/Omdia: “AWS excels in the breadth and depth of its services, while Microsoft’s dominance in office productivity makes it a favored destination for certain types of workload, and Google’s strength in AI gives it an edge for any analytical application leveraging AI. Such multicloud environments are also quite often hybrid, with at least some functionality still residing on the customer’s premises.”

The white paper continues: “Since most M&E enterprises have invested heavily in on-premises infrastructure, writing off such investments will not be feasible in the short term. [Therefore] the most likely scenario for the immediate future is the development of hybrid multicloud infrastructures, with some functionality remaining on the provider’s premises while other parts move into the cloud, that is, some combination of public cloud/IaaS or PaaS, on-premises, and SaaS.”

Security Remains a Complex Issue

The white paper underscores the benefits of moving to the cloud, all of which are well rehearsed for those involved in M&E tech. These include the ability to scale infrastructure up and down according to business demand; the related cost efficiencies of running on operational expenditure rather than loading costs up front as capex; and faster upgrades to (software) equipment and the ability to customize applications at speed.

Where Verimatrix wants to focus is on security, which is its own specialization. The company believes that each form of cloud computing and/or mix of on-premise computing presents its own type of risk.

“The on-premises private cloud variant, for instance, may look more secure because it is completely under the control of the enterprise, from the physical data center all the way up the stack to the data/content. However, if an enterprise suffers a supply chain breach along the lines of the now infamous SolarWinds attack made public in December 2020, being on-premises or in the cloud will make little difference, because resources in both those environments were compromised.

“Equally… there is scant evidence to suggest that one [cloud platform] is noticeably more insecure than another.”

Yet as the hybrid multicloud approach to delivery becomes the norm across the M&E sector, enterprises “will need to deploy security tools that can not only scale during times of peak activity but ideally also span the different cloud and on-premises environments that make up the customer’s hybrid infrastructure. This will be fundamental if a company is to gain an enterprisewide view of its attack surface and take remedial action across its infrastructure.”

Naturally, Verimatrix has content protection solutions for this.

Security, and especially content protection, has been at the center of media enterprises’ business priorities for decades. Omdia’s ICTEI survey 2020/21 highlighted that end-to-end premium content protection is one of the top three business priorities for 37% of media enterprises globally in the next 18–24 months. Almost the same proportion (34%) stated that application security is fast becoming one of their key investment areas over the next two to three years.

 


Friday, 26 November 2021

DP David Rom / Ted Lasso

British Cinematographer

David Rom discusses avoiding comedy visual stereotypes, a fish-out-of-water cinematography style and pairing the ARRI Alexa LF with Tokina lenses.

Ted Lasso is the hugely successful Emmy award-winning comedy series that follows an American football coach as he helps struggling London soccer team AFC Richmond.

https://britishcinematographer.co.uk/david-rom-ted-lasso/

The AppleTV+ show’s charm struck a chord with international audiences wanting something feel-good, uncynical and optimistic. It is based on a character of the same name that star Jason Sudeikis first portrayed in a series of promos for NBC Sports, and was developed by Sudeikis, Bill Lawrence, Brendan Hunt, and Joe Kelly. British DP David Rom developed the show’s pilot and has been behind the camera for 13 of the 22 episodes in the two seasons to date.

“At the start of Ted Lasso, I’d not actually been sent or seen any of those original NBC promos, and I think that was for a reason,” he says. “My main goal visually was to steer the show away from those NBC clips and to avoid a doco approach and also to avoid the show having a generic network comedy look. Having more of a drama background, this suited me well and some early anxiety about shooting comedy disappeared.”

Rom’s CV includes primetime BBC and ITV drama series Mr Selfridge, Cold Feet, Poldark, Grantchester, Ackley Bridge and Harlots.

He led the visual direction on the nascent comedy, watched a lot of sports films and decided to pitch a Moneyball-style approach. “Filmic and naturalistic, avoiding too many primary colours,” he explains. “I, Tonya was also a big reference for the football. I loved the ice skating camera work and how it was shot for drama and nothing like TV sports coverage. My aim was to have the audience feel as if they were on the pitch with the players. Friday Night Lights was also a reference, especially the locker room where I wanted to capture the energy of team talks and the player interactions.” 

To achieve this, Rom employed a significant amount of handheld work and shot mostly single camera. Two cameras were often used in large ensemble or sports scenes to avoid excessive time spent on coverage. Focus (or occasionally the lack of it) also helped give scenes energy, and this tied into his lens and camera choices.

Show design

“When I joined the show, production designer Paul Cripps had been working on the sets already and many were built. I worked with Paul to mute some of the more primary football team colours. Paul had approached the locker room area very much with an eye for camera and to aid the fish-out-of-water story. While it was clearly a comedy, Ted Lasso has more serious moments and the discussion regarding how the show should look always fell on the side of shooting it more like a drama. We wanted to be wide and close with characters and use huge wides with some of the larger locations as well as examining close-ups; to use framing and lighting to tell the story or capture a feeling whenever possible.

“I felt the interior locations would benefit from a wide field of view, showing off the ceilings with light tiles as well as the drop off in focus. But for the handheld work I knew I needed a smaller camera. Luckily, the ARRI Alexa Mini LF had come out and we were one of the first shows to use it.”

Rom’s next job was to choose the right lenses for the show. “I love vintage lenses and have shot the majority of my shows with a selection. Here I tested both old and new, from K35s and Ultra Primes to Zeiss Supremes and ARRI Signatures. This was a big show, and I knew there’d be times I’d need more cameras, so I wasn’t keen on having a limited lens selection as per many vintage sets. I also didn’t want too clean an image which many modern lens options give. I wanted the lights from the stadium to flare and for there to be a ‘look’ from the lens.”

Tokina Vista out in front

The DP had never used Tokina before and admits he only knew them as a stills camera lens manufacturer. “I was given some to look over during early tests and was really very surprised,” he says. “They had many of the characteristics I wanted – a round, natural bokeh, focus fall off, cool but natural colour, and a flare characteristic with character.

“I set about comparing these to as many modern large format lenses as I could and kept coming back to them. Sometimes being a little ignorant about cost can be helpful to avoid being biased that you’re getting something extra simply by paying more.”

He adds, “For what I was looking for to create Ted Lasso, the Tokina Vista felt perfect. We had found the right visual solution.”

The Tokina Vista lenses are fast (T1.5), allowing for a very shallow depth of field. Isolating Ted with focus would be a powerful tool to underline the fish-out-of-water story. Rom says he was also able to get glorious wides with focal fall-off and even to force his skilful focus pullers to make mistakes when he wanted extra energy.

“Yet the Vistas were sharp enough to allow them to find focus again with ease,” he says. “I loved the wider lenses and the 35mm and 50mm most used in series one.”
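Just how shallow a T1.5 stop is on large format can be sketched with the standard hyperfocal-distance formulas. The focal length, subject distance and circle of confusion below are illustrative assumptions, not values from the production, and the T-stop is treated as an f-stop (ignoring transmission loss):

```python
# Depth-of-field sketch using the standard thin-lens / hyperfocal formulas.
# All parameters are illustrative assumptions, not production values.
f = 50.0      # focal length, mm (a hypothetical pick from the Vista range)
N = 1.5       # aperture, treating the T1.5 stop as an f-stop
c = 0.035     # circle of confusion, mm (a common large-format assumption)
s = 3000.0    # subject distance, mm (3 m)

H = f * f / (N * c) + f                  # hyperfocal distance, mm
near = s * (H - f) / (H + s - 2 * f)     # near limit of acceptable focus
far = s * (H - f) / (H - s)              # far limit (valid while s < H)

print(f"Depth of field at 3 m: {far - near:.0f} mm")
```

Under these assumptions only about a third of a metre is in acceptable focus at a 3 m subject distance, which is exactly the isolating effect the production exploited.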

Interior design

Interiors (built at West London Studios) presented unique issues. The locker room had scale and the players tended to group together, making things easier for the DP to shoot single camera. Ted’s office was tiny so coverage there – especially when filming under COVID conditions – had to be carefully planned. Remote heads were used here, with a Ronin helping keep the operators out of the room. Coverage on the football pitch itself was mostly with a rig built by head grip Anthony Ward. It allowed the Ronin to be attached and to be pushed/pulled at speed, following the players’ feet and rising to their faces.

“Locker rooms are usually lit from overhead and there is something dramatic and right about that,” Rom says. “Trying to soften the look too much detracted from that in tests. Gaffer John Attwood designed a very controllable system where each light could be controlled and coloured. We were able to use this to create areas of darkness and clear the floor of lamps when needed but to also augment with floor lighting when that was more appropriate. The cooler, harsher locker room played against his warmer, less top-lit office, allowing for a nice contrast.”

Most of the actual football match scenes were pre-viz’d with a special football director brought in to assist alongside the VFX team. Crowd replication, from individual extras to real crowd plates shot in stadiums, was used where appropriate.

“Shooting such dramatic football scenes in completely empty fields with no fans or stadium needed a lot of imagination and energy from players, directors and camera operators. My framing had to take into account a full stadium of people cheering to capture some of those moments.”

Tests for season 2

At the start of season 2, Rom decided to redo the lens tests and add in other LF options such as the DNAs and Zeiss Supreme Radiance.

“If I was surprised I’d chosen the Tokinas the first time, I was even more surprised the second. I worked with John Sorapure (the DP who lensed alternate episodes of the series) and we projected all the lenses at ARRI, only to find that again we preferred the Tokinas for this show. Our directors and producers agreed after we presented a selection of options, looking at the key characteristics we wanted.

“One of the biggest issues I had from the year before was that the lens selection in the set had a big jump exactly where you didn’t want one – for example, 35mm to 50mm and 50mm to 85mm. A 40mm and a 65mm had been released just in time and that sealed our decision. We’d also all fallen a little in love with the look, and it had defined the first season, which was something we didn’t want to depart too far from.”

The first season of the show was actually the first time Rom had worked so closely with another DP. “I absolutely loved it. Having a partner to bounce ideas and suggestions off only makes things better, especially when they work collaboratively.”

Scoring a global hit

“As the show went on, John and I would pick up shots from each other’s episodes and help each other out. It was especially reassuring knowing that the alternative block DP was on the same page. At the end of season 1, we both sat down and discussed all the areas we could improve for the following year from each of our separate experiences and most of these lined up.”

You always hope, but never know, when you’re in the midst of a project whether it will be a success. Ted Lasso is a bona fide global hit.

“I just pushed hard to keep a drama aesthetic even though the page count was sometimes against us,” Rom says. “With the show being a comedy at its heart, I wasn’t sure how it would all come out in the wash. All the assemblies were done in the US and when I did eventually see a cut, I was blown away with the performances and how well it worked.

“I still wondered if a UK audience would accept the football inaccuracies and a fake team in the Premiership. How wrong I was! It was from early tweets that I realised just how much people were loving the show and particularly Jason’s performance. And it seemed to somehow arrive just as people really needed a lift.”

Season 3 is already underway, with Rom once again on the team. “I feel that after last year’s deep dive into alternative lenses we have now very happily settled on the look we want and the look of the show. I don’t anticipate needing to change much.”

 

Bringing visibility to the cost of cloud

copy written for Blackbird

https://www.blackbird.video/uncategorized/bringing-visibility-to-the-cost-of-cloud/

The move to the cloud has been an imperative for many companies this past year. But adoption of cloud was always about more than survival.  It brings the prospect of flexible working, scalability, collaboration, efficient upgrading, deeper analytics, lower carbon emissions and lower cost.

That’s the mantra anyway – but as we emerge from the pandemic it is on all of us to substantiate these claims.

The cost of cloud, in particular, can be a minefield. Among other things, costs vary based on where the physical data centres are geographically located. Fees include monthly access, retention time and storage volume. There are even costs for simply deleting content from some cloud providers.

We want to highlight the hidden factors that significantly increase the cost of running traditional video editing platforms in the cloud.

Bringing hidden costs to light

It is possible to find the market costs of traditional video editing platforms that have been adapted for the cloud. However, there are multiple background costs associated with these platforms that are not immediately apparent.

For example, moving desktop applications to the cloud means virtualizing the systems they run on to provide remote access to that machine for users. Virtualized systems accessed through virtual desktops require high bandwidth for acceptable video performance. They also require expensive high-end workstation infrastructure with significant GPU resources.

Moving media and content to the cloud is expensive and time consuming. The same applies to moving content between different types of storage and between regions within the cloud.

Compute costs can be identified relatively easily but the various hidden storage, connectivity and egress costs can be hard to determine. All of this can spiral when scaling up for a new project.

These hidden costs make it difficult for purchasers to understand their true financial commitment which leads to ‘bill shock’ and uncontrolled expenditure.
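To illustrate how those line items compound, here is a deliberately simplified monthly-cost sketch. Every rate below is a hypothetical placeholder, not a real cloud provider's price; the point is that storage and egress can easily outweigh the headline compute figure:

```python
# Illustrative only: a toy monthly-cost model for a cloud-hosted editing
# setup. All rates are made-up placeholders, not actual CSP pricing.
def monthly_cost(editors, storage_tb, egress_tb,
                 workstation_rate=2.50,   # $/hour, GPU workstation (hypothetical)
                 hours_per_editor=160,    # working hours per month
                 storage_rate=23.0,       # $/TB-month (hypothetical)
                 egress_rate=90.0):       # $/TB egressed (hypothetical)
    compute = editors * hours_per_editor * workstation_rate
    storage = storage_tb * storage_rate
    egress = egress_tb * egress_rate
    return {"compute": compute, "storage": storage,
            "egress": egress, "total": compute + storage + egress}

bill = monthly_cost(editors=10, storage_tb=200, egress_tb=50)
print(bill)
```

Even in this toy model, the storage and egress items together exceed the virtual-workstation compute that tends to dominate the quote a purchaser sees up front.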

Limited bandwidth is the ever-present bottleneck in the cloud. With resolutions, bit depths, frame rates and hence file sizes increasing all the time, you can’t just rely on technology’s tendency to speed up to solve the problem. Instead, you have to design a system that doesn’t have the problem in the first place.

Transparency in the Cloud

This is how Blackbird is built. It’s a cloud native video editing system that enables professional editing in a browser. It doesn’t need to move a single large video file, ever.

With our patented codec technology, frame accurate editing is always available. And because the Blackbird codec is so efficient (requiring just 2Mb/s bandwidth to operate), you don’t need heavy duty workstations: any recent computer will work. Nor do you need monolithic software applications, because Blackbird gives full editing in a browser.
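A quick back-of-envelope calculation shows what the 2 Mb/s figure means in practice. The source file size and link speed below are hypothetical examples, not Blackbird specifications:

```python
# Arithmetic on the quoted 2 Mb/s stream versus moving a full-resolution
# source file. File size and link speed are hypothetical examples only.
stream_mbps = 2                        # proxy stream bandwidth, per the text
gb_per_hour = stream_mbps / 8 * 3600 / 1000
print(f"Proxy viewing: ~{gb_per_hour:.2f} GB per hour")

source_gb = 500                        # hypothetical camera-original file
link_mbps = 100                        # hypothetical office uplink
transfer_hours = source_gb * 8000 / link_mbps / 3600
print(f"Uploading the source file first: ~{transfer_hours:.1f} hours")
```

Under these assumptions, an editor streams under 1 GB per hour of work, while simply moving one large source file into the cloud before editing could begin would take the better part of a working day.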

Precisely because of the way it works, we are able to show that the Total Cost of Ownership (TCO) is up to 35% lower with Blackbird than with traditional NLEs. Indeed, infrastructure costs are up to 75% lower with Blackbird. The TCO is lower with Blackbird across a range of live sports and news production scenarios, from 15 to 150 users, as verified by independent research: www.blackbird.video/tco.

You don’t have to take our word for it. Our customers are the world’s leading sports, news, entertainment and government organizations including CBS Sports, NHL, Univision, Cheddar News, BT and the US Department of State.

We have engineered Blackbird so that efficiency, sustainability and low TCO go hand in hand. It’s a virtuous circle that’s good for speed of production, good for sustainability and good for your budget too.

Thursday, 25 November 2021

Agile filmmaking grounds The Suicide Squad in magical realism

British Cinematographer

Filmed entirely with IMAX-certified RED cameras, The Suicide Squad is the explosive return to action of DC Comics’ Super-Villain characters.  

https://britishcinematographer.co.uk/agile-filmmaking-grounds-the-suicide-squad-in-magical-realism/

A completely standalone feature, Warner Bros. Pictures release The Suicide Squad is envisaged by writer-director James Gunn and inspired by the classic 1967 war movie The Dirty Dozen, among others. “The way that movie is shot is the way I’ve wanted to shoot every movie but have not been able to until now,” Gunn declares.

Cinematographer Henry Braham BSC and Gunn found a fluidity of movement for the large format canvas that defies convention. “Nearly every shot in this movie is on the move,” says Gunn. “We also wanted to get up close and move around and between people. The tech has advanced to match what I see with my mind.” 

In the film, a task force of convicts, including Harley Quinn, Bloodsport and Peacemaker, are sent to destroy a Nazi-era facility and laboratory. The ensemble cast includes Margot Robbie, Idris Elba, John Cena, Viola Davis, and Pete Davidson, among others. 

“James conceived the movie as magical realism,” relates Braham, who collaborated with Gunn on Guardians of the Galaxy Vol. 2. “It is a black ops caper with highly dysfunctional Super Heroes. But the flaws in their characters make them highly relatable to an audience. They have a humanity to them which is what James is interested in portraying.” 

A main goal for the filmmakers was keeping the story visceral and real to create a grounded atmosphere for what are over-the-top and sometimes ludicrous characters. “Of course, the story is fantastical,” Braham admits. “We have a walking shark in the movie! So, to make it believable for the audience, we needed a look and feel for the movie that combined fantasy with realism.” 

Braham points out that King Shark (voiced by Sylvester Stallone) was created with special effects and prosthetics in keeping with the desire to keep as much in-camera as possible. Likewise, the filmmakers opted to shoot jungle scenes on stages at Pinewood Studios (now Trilith) in Atlanta and beautiful locations in Panama, rather than use virtual production techniques.  

Braham lit the giant sets to allow Gunn to design shots from any angle. “If you can light truthfully, you can move the camera freely, no matter how large a setting,” he explains. 

Gunn and Braham evolved a dynamic shooting style that they agree wasn’t possible before the creation of RED’s latest camera innovations. “The Suicide Squad is a rollercoaster ride on the big screen,” Braham says. “You want the smallest physical technology possible with the best picture quality you can possibly achieve. That’s the case with RED.” 

The director and DP’s journey with RED began with Guardians Vol. 2, the first feature film captured on the 8K RED DRAGON VV sensor inside the WEAPON camera. “Jarred Land and the team at RED were really engaged with us on Guardians and in the intervening time they’ve taken another big step forward,” Braham notes. “For The Suicide Squad, I needed to bring together two potentially irreconcilable demands: to shoot a large format 70mm movie with a fluidity of movement that feels alive. It is a style of filmmaking that gives total freedom to James. The decision to shoot RED was a slam dunk because the technology serves the idea.” 

Braham selected an array of eight REDs, including RANGER MONSTRO 8K VV and WEAPON 8K VV as well as a KOMODO, each mounted in different ways to offer maximum flexibility on set. “The physicality of these cameras means you can invent entirely new ways to use them,” Braham says. “It’s like having an array of musical instruments all tuned in different ways for different shots. I can put one down and pick another up to achieve the exact shot we need.” 

Braham and his camera team made customised gyrostabilised mounts to enable genuinely stabilised hand-held movement on The Suicide Squad. The RANGER MONSTRO was Braham’s primary camera with the KOMODO, then in prototype, used on select shots. “KOMODO is a great little camera,” he says. “There are shots in the movie we could only get with something that small that comes with high-res imagery.” 

All the camera configurations were made possible by the form factor of the cameras, but the moment large lenses are mounted, the possibilities diminish. Braham’s choice of Leica M-System glass kept image quality and maneuverability in mind. “The decision had a lot to do with the lens geometry combined with the VV sensor, which worked incredibly well for what I needed. I could shoot large format on wide lenses without distortion, or I could make the camera very intimate with the actors when required.”  

Braham partnered with award-winning colourist Stefan Sonnenfeld, co-founder and president at Company 3, to develop the LUT. “I like to use stills and paintings as references,” Braham says. “I’m looking at the quality of colour and tone of contrast, as well as the shape of black and the shape of white. For the core visual idea of The Suicide Squad – which is of a colourful, rich but violent war movie – I wanted a lot of colour and beauty alongside gritty reality.” 

Company 3 also prepped dailies for Gunn and Braham to view projected on set. “Every day we’d build the look of the movie as it would look on a big screen in a theatre,” Braham says. “What RED has done is come up with tech that is so small yet perfect for shooting pictures made for IMAX.” 

Braham asks us to view this in context of the history of moviemaking. “Long ago, technology defined the types of movies that got made. With the invention of sound, the cameras got huge, and the film stock was very insensitive, so that meant movies had to be made in very controlled situations. To me, camera size and image quality are everything. RED is at the vanguard of this. It means that I can begin a creative conversation with ‘these are the requirements of our movie’ and then determine ‘what are the technologies we need to do it.’” 

Braham’s next project is also being captured with RED cameras. “Once you’ve been bitten by the freedom of filmmaking, it sets directors and actors free,” Braham says. “That freedom is something that I find fascinating and, for me, the key to it is the physicality of the camera.” 

Crew credits: B camera operator/Steadicam Chris McGuire; C camera operator Tom Lappin; Second unit DP Patrick Loungway; 1st assistant A camera Taylor Matheson; 1st assistant B camera Will Emery; 1st assistant C camera Max Junquera; chief lighting technician Dan Cornwall; and key grip Kurt Kornemann.

 

Wednesday, 24 November 2021

The Forces That’ll Impact Business in 2022

NAB

Another year, another set of forecasts for the year ahead – only this time the pace of change seems almost out of control. It’s a function of the seismic shock of Covid-19, still unwinding, on how we organize our work and social lives. Most accounts suggest that the world we will inhabit in 2022 is one that would have arrived organically, but many years from now.

Self-proclaimed futurist and influencer Bernard Marr has grappled with the issue and penned eight trends he sees impacting on business in the year ahead.

https://amplify.nabshow.com/articles/the-forces-thatll-impact-business-in-2022/

So here they are:

1 Sustainability

If nothing else, Cop26 drove climate change to the top of news agendas, and awareness of its urgency means companies can no longer pay lip service to sustainability and get away with it.

“Every organization must seek to eliminate or reduce the environmental costs of doing business,” says Marr. “Decarbonizing the supply chain is a sensible place to start, but forward-thinking businesses are looking beyond the supply chain to improve sustainability across all business operations. Any business that ignores sustainability is unlikely to do well in this age of conscious consumption.”

2 Human workers and intelligent robots

Automation will affect every industry, so business leaders must prepare their organizations – and their employees – for the changing nature of work. This leaves employers with some key questions, poses Marr. How do we find the balance between intelligent machines and human intelligence? What roles should be given over to machines? Which roles are best suited to humans?

3 The shifting talent pool

The way we work is evolving, with more younger people entering the workforce, more gig workers, and more remote workers. Marr thinks traditional full-time employment will be a thing of the past, as organizations shift to hiring people on a contract basis – with those contractors working remotely. What this means for worker rights versus capitalism is not debated here, but it is clearly a source of potential friction, as is any attempt to automate jobs out of the workforce without providing new forms of human employment.

4 Flatter, more agile organizations

In part a response to the changing nature of work – particularly the proliferation of freelance and remote workers – more organizations will shift from rigidly hierarchical structures to ones that are “flatter, more agile,” allowing the business to quickly reorganize teams and respond to change.

That also chimes with the underlying technologies of many media organizations, which are shifting from monolithic hardware to software run on commodity machines that can be scaled up and customized at will.

5 Authenticity

We’ve heard a lot this year about how the most successful content creators have a greater degree of integrity with their fan base than ‘influencers,’ who are deemed to be inauthentic marketing outlets. Marr thinks such authenticity helps to foster human connections – “because, as humans, we like to see brands (and business leaders) display important human qualities like honesty, reliability, empathy, compassion, humility, and maybe even a bit of vulnerability and fear. We want brands (and leaders) to care about issues and stand for more than just turning a profit.”

That’s as may be, but I’m not sure it holds true given the rise of political leaders, sustained by the vast base who vote for them, who don’t give a damn about science and don’t fear being held to account for lying.

6 Purposeful business

Linked to authenticity, this trend is all about ensuring an organization exists to serve a meaningful purpose – and not just serve up profits to shareholders.

“Purpose defines why the organization exists,” Marr says. “Importantly, a strong purpose has the promise of transformation or striving for something better – be it a better world, a better way to do something, or whatever is important to your organization.”

What then is Meta’s purpose, we might ask?

7 Co-opetition and integration

If it weren’t already evident from the continuing global shortage of semiconductors, supply chain delays following the grounding of the Ever Given, and sky-high energy prices, the world has never been so integrated. Marr, ever the optimist, thinks this is a good thing “because the need to work together to solve key business challenges (not to mention humanity’s biggest challenges) is great.”

In future, he thinks, it will become increasingly difficult to succeed without really close partnerships with other organizations. In practice, this means greater supply chain integration, more data integration and sharing of data between organizations, and even cooperation between competitors.

Let’s see if that crosses the geopolitical boundaries of EU and Russia, the Gulf states on climate change and if business dealings with China demand any sanction on intervention in Taiwan.

8 New forms of funding

Marr’s final trend is a key one. With physical currencies being slowly phased out and crypto being phased in, the economics of just about anything is undergoing fundamental change. Historically, this is not perhaps something to be afraid of. Cultures have bartered with shells among other forms of currency, of which notes and coins are merely another token having no intrinsic value in and of itself. Marr sees only upsides:

“New platforms and mechanisms have sprung up to connect businesses with investors and donors – think crowdfunding, initial coin offerings and tokenization. Many of these new methods are driven by the decentralized finance movement, in which financial services like borrowing and trading take place in a peer-to-peer network, via a public decentralized blockchain network.”

Some convincing will be needed of people who believe that hoarding physical notes under the mattress is safer than having all their savings locked away on a blockchain.


Four Scenarios for Media Monetization

NAB

Revenue models are crucial for the evolution of the media industry, but it is not clear what media monetisation will look like by the end of this decade.

https://amplify.nabshow.com/articles/four-scenarios-for-media-monetization/

Consultants Deloitte have had a stab and come up with four “extreme but valid” scenarios for the evolution of media revenue models up to the year 2030.

For a full understanding of their methodology, head to the full report, in which Deloitte caveats its predictions by saying there are too many variables to make precise models.

However, in the face of all this uncertainty, it is worth acknowledging what we do know. Its analysis revealed that the following trends are “most likely universally valid” and provide context for the future of the media industry.

So, before we check out the four media monetization scenarios, here is what Deloitte can reasonably say for sure by 2030:

1 Media will be almost exclusively digital and internet based. Consumers will cover their content needs digitally, and acceptance will cover all age segments – even seniors will primarily turn to web-based media.

2 Thanks to the omnipresence of adtech, the effectiveness of digital advertising will most probably be measured with the highest accuracy at the end of this decade. Nonetheless, even in 2030, a common, uniform performance indicator will still be essential for the entire media industry.

3 The willingness to pay for premium content will most likely be strong in 2030. A considerable number of consumers will have come to appreciate quality and curated media, for several reasons. One is becoming accustomed to a high level of quality in VOD and feeling it is necessary. Another is desiring quality news, as a response to the spread of fake news.

4 Micropayments will proliferate: for individual films, series, music tracks or news articles. Consumers will see such a payment model as easy and secure. Blockchain-based, pay-per-use models will complement conventional payment solutions, and they will enable new monetisation options for media professionals, despite being initially complex and fragmented.

5 Screens will be everywhere – encompassing all sizes, from smartwatches to movie screens. Just like speakers, they will all be connected in the year 2030, so media can be streamed extensively. This applies to media consumption at home, as well as on the move.

Four scenarios of media and revenue in 2030

So, onward then to the four scenarios painted by Deloitte. “It is not about predicting the future, per se, but depicting the risks and opportunities of specific strategic options,” the consultant says. “In other words, they are narratives set in alternative future environments that are affected by today’s decisions and trends.”

Scenario 1: Creators’ Heaven (creators win)

Here, the market is characterised by a fragmented and open ecosystem that includes a large number of local content providers who maintain a multitude of paid customer relationships.

“In this highly connected and hyper-digital world, the level of innovation and technological development is extremely high. Customers are used to micropayments and direct, blockchain-based payment methods. Content is cheap and easy to consume in small doses, and subscriptions are easy to cancel instantly. Individual, pay-as-you-go transactions and subscriptions are the dominant revenue models.”

This creator economy allows everyone to pursue their own content and business models. As a result, the media landscape is fragmented and margins are low, due to atomistic competition.

The big global players - what Deloitte calls digital platform companies (DPCs) - cannot leverage their global blockbuster content, instead acting as one of many distributors of platform-as-a-service solutions for smaller media companies.

“In this scenario, local content producers and intellectual property owners are the winners, since they can use their direct access to media consumers in order to grow. They successfully implement e-commerce and in-app purchases as additional revenue models.”
Scenario 2: Guided Freedom (aggregators and local content owners win)

In this scenario, numerous revenue models have prevailed in an open ecosystem, with large DPCs taking on the central aggregator role.

“DPCs provide their technology and set the rules of the game, which funnels the variety available in the open metaverse-ecosystem and allows DPCs to monetise their global content. Local content remains relevant but is supplied by partners. The DPCs’ search and recommendation functionalities provide orientation in the overwhelming content flood but, on the other hand, this shapes a global mainstream media culture in line with DPC preferences.”

Deloitte predicts that data, analytics and AI are omnipresent and freely available to all. As media has become almost entirely digital, smart technologies can predict consumption and pave the way for targeted advertising. Regulation is in place but is unable to break the supremacy of the DPCs. 

The extensive availability of data allows for highly targeted advertisements. More than that, some content is offered for free in exchange for consumer data. 

Subscription models survive as flat-fee access to premium DPC content, but the majority of payments are transaction based. Alongside these, a new generation of blockchain-based technologies and crowdfunding platforms enable small local producers to monetise their content directly.

Local producers benefit from partnerships with the large platform providers but are making themselves increasingly independent through direct customer and payment relationships. In this way, the dominant role of DPCs tends to come under pressure.
Scenario 3: Global Hotel California (slam dunk for the major platforms)

This outcome sees global DPCs commanding the bulk of media revenues through both subscription models and highly innovative forms of advertising.

“In a completely unregulated market environment, DPCs benefit at all levels: They can make best use of their financial power, monetise their global blockbusters, collect user data and leverage their analytics and AI capabilities. DPCs have created their own metaverses and act as central aggregators for all types of content, consequently ‘locking in’ media consumers. The level of technological innovation is high in this scenario, and DPCs set global standards. The outcome is an oligopolistic market structure with a high price level.”

Dominant DPCs rely on two main revenue models: First, there are constant revenue streams from subscriptions in this locked-in market landscape. Second, DPCs benefit from maximally customised and targeted forms of advertising. In addition, they cross-finance content through their e-commerce business. 

Local content providers are pushed into a pure production role and depend on the DPCs for direct customer access. Small local aggregators and content producers have largely been eliminated. 

Media consumers can access a wide variety of different content, but the offerings usually follow a global one-size-fits-all approach that completely ignores country-specific tastes and requirements. In the end, consumer sovereignty is weakened – “You can check out any time you like, but you can never leave,” as the song goes.

Scenario 4: The Incumbents Strike Back (telcos and newspapers win)

In our last (perhaps least likely) scenario, media regulators strongly protect local content and have pushed back large global players. “Instead, traditional media channels, like newspapers, still play a prominent role. The dominant revenue model is subscription based. Advertising is less significant for future media revenues, not least because stakeholders are not incentivised to collect the data needed for targeted advertising. National media houses and telecommunication incumbents are the winners in this scenario: They are aggregators and super-aggregators, and act as content gateways for consumers.”

Deloitte says that due to intensive data regulation, the level of innovation and technological development is low in this outcome. “The market environment does not foster an intense start-up culture and lacks innovative media services, while M&A are also restricted. Regulatory requirements prevent DPCs from contributing their scale and data expertise. As a result, the entire media industry stagnates, and revenue potential cannot be exploited because media consumers face a limited and uninspiring media landscape that lacks international ingredients. Therefore, consumers’ willingness to pay is limited to the bare minimum of information, sports and entertainment.”

Deloitte concludes that the companies that succeed will be alert, aware and planning for all possible outcomes, ready to tweak those plans at a moment’s notice. Because as we look back at our pandemic-occupied past year, and ahead to media in 2030, the only thing certain is that nothing can be foretold with certainty.