Wednesday 31 August 2022

Revamped IBC returns to win back confidence

Broadcast

article here

The enforced cancellation of trade shows put the entire sector into an existential spin from which it has not recovered. Major fairs like CES, NAB and ISE earlier this year were all significantly down on attendance and IBC will be no different. 

But while there is genuine excitement about the return to Amsterdam in IBC’s regular September slot after three years’ absence, it’s fair to say the event itself has been through the wringer. Much of the debate at IBC 2022 will be about the show’s own future. 

This is in no small part due to the last-minute abandonment of the already twice-postponed show last December. Organisers were stung by the criticism that came their way from exhibitors exasperated by poor communication and angry that, in effect, they’d lost their deposits. 

To recap, on November 16, 2021, IBC confirmed that its December event was going ahead only to reverse the decision a week later. Among losses for some exhibitors was the cost of shipping gear to Europe from the UK (or overseas). 

IBC issued a mea culpa. “This late cancellation led to exhibitors incurring unrecoverable costs.” 

Its statement continued, “It has become clear that IBC’s actions and position did not resonate with the industry, especially exhibitors. Perhaps the only mitigation worth repeating is the speed of the deterioration of the macro situation.” 

Based on feedback from the IABM and other exhibitors, IBC says it “recognises the emotional and financial pain the circumstances of 2021 created. There is a strong feeling that IBC’s communications and approach were detached from exhibitor sentiment, and we are extremely sorry about this.” 

It even acknowledges that the “by the industry, for the industry” mantra has become tarnished. 

That is a pretty remarkable volte-face, commendable in its candidness and published in full on its website. But the fallout goes further and impacts the cut-down IBC 2022. 

“There is an opportunity to ‘reimagine IBC’ not only for an adjusted Covid era, but also to address environmental concerns and drive more diversity in the media technology industry,” is one way of saying IBC 2022 will be a lot smaller. 

At the start of this year it set up a new advisory group led by a trio of senior industry marketers to review its processes and communications plan. The group includes Nicolas Bourdon, Chief Marketing Officer, EVS; Grass Valley’s CMO Neil Maycock and Ciaran Doran, Director of Marketing at Rohde & Schwarz. 

Its role is to ensure that IBC “remains in touch and is empathetic with” the market situation. This has led to an overhaul of the exhibitor refund policy, “seen to be harsh” and in need of reconsideration, and a slashing of many regular IBC events so that the show can be run more cost-effectively. 

Gone is the set of cinema-related technology and craft sessions known as the Big Screen. Gone too is the screening of blockbuster movies in the main auditorium, which takes some of the Hollywood sheen out of attendance. 

More sparkle goes with the axing of the ‘red carpet’ IBC Awards Ceremony. The Awards themselves stay but will now be announced virtually in what is euphemistically called “a unique, digital-first online presentation.” 

Also out, the Future Zone – always one of the more intriguing areas of the show, not least in profiling the work of NHK. The Japanese broadcaster can now be found in Hall 10. 

Gone is the frankly outdated Companions Programme which paid for spouses to enjoy canal trips. There’s no return for the eSports Showcase in which pro ESL teams played each other or the Next Gen Hub, both 2019 initiatives. 

More significantly, the Conference has been pruned to just two days but remains a paid-for option (EUR600). IBC’s digital activities have also been reviewed. 

As a result of this so-called “back to basics” approach, IBC says it is able to significantly enhance its Ts&Cs should the show be cancelled again. 

You can expect a greater focus this year on the work of the event’s owners: IABM, IEEE, IET, the Royal Television Society, SCTE and SMPTE. This includes a wide range of thought leadership, spanning the development of technical standards, training and advocacy through to diversity and inclusion. All laudable efforts, but not exactly worth a trip away from home on their own. Nonetheless, these bodies (who fund IBC after all) have taken the opportunity to wrest back control from the razzamatazz of recent IBCs. 

“The value the IBC owners derive from IBC and the massive contribution these organisations make to the industry need to be brought centre stage,” IBC states. “[We are] aware that all this good work is not always visible and so will be promoting this more widely in the future.” 

IBC expects around 1,000 exhibitors for 2022, some 700 down on the 2019 peak, with around 35,000 visitors, down from 55,000 at peak. Those soft figures are to be expected in a year in which travel restrictions in some countries are only just lifting and the threat of Covid remains.  

“It isn’t so much a product supermarket anymore, where people would walk around and touch and feel the kit,” said Michael Crimp, IBC’s Chief Executive Officer in a press conference. “There’s still a large element of that, but there’s an emerging element of people wanting to tell the story.” 

At this stage, there appears to be no stardust booking of an Andy Serkis, Ang Lee or a James Cameron, but there is Studio commitment in the form of senior executives on the technical side. Marvel Studios (Eddie Drake, Head of Technology); Paramount Global (Anthony Guarino, EVP, Worldwide Technical Operations); Universal Pictures (Michael Wise, SVP and CTO); Warner Bros. Discovery (Renard Jenkins, SVP, Production Integration and Creative Technology Services); and Sony Pictures (Bill Baggelaar, EVP/CTO) will all speak about efforts to move Hollywood production to the cloud in an initiative led by MovieLabs, the technology joint venture funded by the Studios. 

Crimp explained the decision to divide the two days of paid-for conference programming, “which has been curated very carefully for those who want it”, from another two days of free sessions about technology narratives, or key issues that IBC believes are important, like sustainability and diversity, “[that] we need to develop and may not attract paying customers.” 

Other conference highlights include an outline of ITV’s plans to “supercharge” streaming, by Deep Bagchee, ITV’s Chief Product Officer, plus an inevitable check on the metaverse, what it is and what it means to M&E, led by Lewis Smithingham of digital marketers Media Monks. 

When it comes to macro themes reflected at the show arguably the quest for efficiency at scale comes top of the list. Huge advances in hybrid-cloud solutions have taken place over the last two years, taking broadcast into a new technological era. As OTT continues to dominate growth markets and linear audiences hold steady, broadcasters are experimenting with cloud-based playout, offering them flexibility to scale their distribution while minimising resource.  

Another titanic area of innovation, catalysed by work-from-home demands, is remote access to workflows across the content supply chain.  

Future of trade shows 

The show has received a boost in the return of one major exhibitor that had originally announced that it would not exhibit at any live shows in 2022. Avid is back albeit with a much smaller footprint than before – a strategy that is likely to be replicated by vendors going forward. Such a move has been in the works for a while as more and more technology moves away from big blocks of hardware toward software, systems and services. 

“The pandemic forever changed our marketing mix,” Avid president/CEO Jeff Rosica explained in an IBC and IABM interview. “Trade shows are an important part of what we're doing but we really learned the value of digital marketing content with many more virtual events because you can get to many more people around the world. We've learned that that's served us well. But obviously, things evolve as the market comes back to whatever that ‘new normal’ is. So I guess it's going to be a mix going forward.” 

IBC itself says it is engaging with exhibitors through 2022 to explore “the future shape of trade shows in our industry”.   

Soho in the Dam 

So, what can we look forward to at the RAI? It’s not all doom and gloom. For a start, it is in Amsterdam, always one of the show’s prize assets. 

“I’m looking forward to meeting colleagues in person again in Amsterdam,” says Maria Rua Aguete, Senior Research Director, Omdia. 

“The thing I’m most looking forward to is being in Amsterdam,” says Paul Robinson, a member of IBC’s content steering group. “Not going to the coffee shops but going to the RAI.” 

Overwhelmingly people are keen to get together and network, a facility at which IBC and Amsterdam do excel. As Jose Puga, CEO of Imaginario.ai, puts it, “Being in front of a Zoom screen for so long can affect your mind.” 

“For the first time we’re back after the pandemic and that of itself is going to make a great IBC where the ideas begin to flow,” says Sandy MacIntyre, former AP vice president. 

Morwen Williams, Director of UK Operations at BBC News concurs, “Shows like IBC are about being inspired by different things, knowing what I don’t know. It’s about going around and seeing what could work for me.” 

Joe Newcombe, Media Technologist at Microsoft has a straightforward reason to return. “The beach, obviously,” he says, in reference to the canal-side bar which thankfully remains in place. 



Are AI Art Models for Creativity or Commerce?

NAB

Last month OpenAI announced that it is releasing DALL·E 2 as an open beta. This means anyone using it will be able to use the generated images commercially. Midjourney and Stable Diffusion, two other comparable AI models, will allow this too. Paid members of Midjourney can sell their generations and Stable Diffusion is open source — which means that very soon, developers will have the tools to build paid apps on top of it.

article here 

“Soon, these paid services will become ubiquitous and everyone working in the visual creative space will face the decision to either learn/pay to use them or risk becoming irrelevant,” says Alberto Romero, an analyst at CambrianAI, blogging on Medium.

AI text-to-image models have risen to the fore this year. Soon anyone will have the chance to experience the emergent AI art scene. Most people will use these models recreationally to see what the fuss is about. Others plan to take advantage professionally and/or commercially, and here’s where the debate starts.

Some artists have recently been vocal about the negative impact of AI models.

OpenAI scraped the web without compensating artists to feed an AI model that would then become a service those very same artists would have to pay for, said 3D artist David OReilly in an Instagram post.

Artist Karla Ortiz argued in a Twitter thread that companies like Midjourney should give artists the option to “opt-out” from being used explicitly in prompts intended to mimic their work.

Concept artist and illustrator RJ Palmer tweeted: “What makes this AI different is that it’s explicitly trained on current working artists. [It] even tried to recreate the artist’s logo of the artist it ripped off. As an artist I am extremely concerned.”

So, do they have a point?

Romero puts this into context. In his view, all AI art models have two features that essentially differentiate them from any other previous creative tool.

First, opacity. Meaning: we don’t know precisely or reliably how AIs do what they do and we can’t look inside to find out. We don’t know how AI systems represent associations of language and images, how they remember what they’ve seen during training, or how they process the inputs to create those impressive original visual creations.

He offers examples in which the same text prompt produced different outputs from the Midjourney AI, and not even Midjourney can explain the exact cause and effect.

“In the case of AI art, the intention I may have when I use a particular prompt is largely lost in a sea of parameters within the model. However it transforms my words into those paintings, I can’t look inside to study or analyze. AI art models are (so far) uninterpretable tools.”

The second feature that makes AI art models different is termed stochasticity. This means that if you use the same prompt with the same model a thousand times, you’ll get a thousand different outputs. Again, Romero has an example of 16 red and blue cats Midjourney created using the same input. They are similar, but not the same. The differences are due to the stochasticity of the AI.

“Can I say I created the above images with the help of a tool? I don’t know how the AI helped me (opacity) and couldn’t repeat them even if I wanted to (stochasticity). It’s fairer to argue that the AI did the work and my help was barely an initial push.”
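
Because Stable Diffusion is open source, anyone can verify this stochasticity first-hand. Below is a minimal sketch using Hugging Face’s diffusers library (assuming a CUDA GPU and downloaded weights; the prompt is illustrative, not Romero’s):

    import torch
    from diffusers import StableDiffusionPipeline

    # Load the openly released Stable Diffusion weights.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a red and blue cat, oil painting"  # illustrative prompt

    # Same prompt, four different seeds -> four visibly different images.
    for seed in range(4):
        generator = torch.Generator("cuda").manual_seed(seed)
        pipe(prompt, generator=generator).images[0].save(f"cat_{seed}.png")

    # Re-running with the same seed reproduces an image exactly; without a
    # fixed seed, each run starts from fresh noise and the output differs.

Fixing the seed is the only way to make a run repeatable; left alone, the sampler starts from new random noise every time, which is exactly the behaviour Romero describes.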

 

But why does it matter if AI art models are different from other tools? Because there’s a distinct lack of regulation on what the companies that own the AI can or can’t do regarding the training and deployment of these models.

OReilly is right, agrees Romero. OpenAI, Google, Meta, Midjourney, and the like have scraped the web to amass tons of data. Where does that data come from? Under which copyright does it fall? How can people license their AI-generated creations? Do they ask for permission from the original creators? Do they compensate them? Many questions with unsatisfactory answers.

Operating models and goals may vary, “but all these tech companies have one thing in common: They all take advantage of a notorious lack of regulation. They decide. And we adapt,” says Romero.

This is a recipe for disaster, because companies, and by extension users and even the models themselves, can’t be judged “when the rules we’d use to judge it are non-existent,” he says.

“To the opacity and stochasticity of AI art models, we have to add the injudgeability of tech companies that own those models. This further opens the doors to plagiarism, copying, and infringement of copyright — and copyleft — laws.”

Inspiration or Copying?

This becomes a live issue if an AI that has been fed an artist’s work for training now spews out near-identical copies of that artist’s work for other artists to sell or pass off as their own.

“AI art models can reproduce existing styles with high fidelity, but does that make them plagiarizers automatically?” Romero poses.

Artists learn by studying greater artists. They reproduce and mimic other people’s work until they can grow past that and develop their personal style. Asking the AI for another artist’s style is no different.

Here, Romero doubles back on his own argument. The two features of opacity and stochasticity that he says make AI art models stand out from other creative tools are also shared by humans.

“Human brains are also opaque — we can’t dissect one and analyze how it learns or paints — and stochastic — the number of factors that affect the result is so vast, and the lack of appropriate measuring tools so determinant, that we can consider a human brain non-deterministic at this level of analysis.”

So according to Romero’s own logic, this puts human brains in the same category as AI art models. Also, and this is key to consider, both AI art models and humans have the ability to copy, reproduce, or plagiarize. Even if not to the same degree — expert artists can reproduce styles they’re familiar with, but AI’s superior memory and computation capability make copying a style a kid’s game — both can do it.

“And that’s precisely what makes us different than AI art models. Unlike them, we’re judgeable because those ‘lines of illegality’ exist to keep us in check. Regulation for humans is mature enough to precisely define the boundaries of what we can or can’t do.”

And, therefore, before we can decide what’s inspiration and what’s plagiarism, “we first have to define impartial rules of use that take into consideration the unique characteristics of AI art models, the pace of progress of these tools, and whether or not artists want to be part of the emerging AI art scene.”

AI art models that are opaque, stochastic, very capable of copying, and injudgeable can’t be subject to current frameworks of thought, Romero concludes.

“The singular nature of AI art models and the lack of regulation is an explosive mix that makes this situation uniquely challenging.”

Recently, Capitol Records signed rapper FN Meka, making no bones about it being a virtual TikTok star. Ten days later the record company had ditched the AI character after an outcry about racial stereotyping.

Yet what should bug us just as much is the attempt to pass off an artificial, computer-generated avatar as having genuine experience of the urban life and culture it promoted.

FN Meka is an AI-generated rapper avatar developed in 2019 by Anthony Martini and Brandon Le of Factory New, with music created by AI at the music company Vydia. The voice was real, but everything from the lyrics to the music was AI.

The character has more than 500,000 monthly Spotify listeners and more than one billion views on its TikTok account, where Factory New sells NFTs and posts CG videos of FN Meka’s lifestyle, including Bugattis, jets, helicopters and a Rolls-Royce custom-fitted with a hibachi grill. Its Instagram account has more than 220,000 followers.

Hours before Capitol fired the “artist,” The Guardian reports, Industry Blackout, a Black activist group fighting for equity in the music business, released a statement addressed to Capitol calling FN Meka “offensive” and “a direct insult to the Black community and our culture. An amalgamation of gross stereotypes, appropriative mannerisms that derive from Black artists, complete with slurs infused in lyrics.”

This included use of the “n” word. While the AI rapper was accused of being a gross racial stereotype, just as pertinently it clearly had no lived experience of what it was rapping about.

For tech commentator Lance Ulanoff, this episode highlights a fundamental flaw in the AI-as-artist experience.

“AI art is based on influences from art the machine learning has seen or been trained on from all over the internet,” he writes in a post on Medium.

“The problem is that the AI lacks the human ability to react to the art it sees and interpret it through its emotional response to the art. Because it has… no… emotions.”

Ulanoff is making a distinction between human-made art and the creations we might call art generated by increasingly sophisticated AI models like DALL·E 2.

Genuine art, he maintains, is not simply representational or an interpretation or — more reductively — a collage.

“An artist creates with a combination of skill and interpretation, the latter informs how their skill is applied. Art not only elicits emotion, but it also has it embedded within it,” he says.

By contrast, AI mimics this by borrowing paint strokes, lines, and visual styles from a million sources. But none of them — not one — is its own.

 That was the flaw with FN Meka. “It could create a reasonable rap but was not informing it through its own experiences (which do not exist), but instead those it saw elsewhere. It’s the essence of a million other lives lived in a certain style and with so many computational assumptions thrown in.”

Another artistic pursuit, AI writing, is no less fraught with false humanity. Any decent AI can write a workable news story, Ulanoff points out. An article in Forbes demonstrates that many AIs already create extremely efficient articles indistinguishable from those written by a person. 

“But these stories offer zero insight and little, if any, context,” opines Ulanoff. “How can they? You only do that through lived, not artificial or borrowed, experiences.”

The critic has titled his post “I Don’t Want Your AI Artist,” but as he acknowledges, it is already difficult to determine at face value whether an image, a story or a piece of music is the creation of a machine (albeit one based on others’ lived experiences) or the genuine article.

Do we then need to label the creations of an AI, just as we would barcode a product in a shop, with details of its fabricator?

And if so (and I’m not delving into the philosophical or practical implications of this here) then we would need to act fast since AI art is populating our cultural landscape as we speak.

 


Thinking About AI (While AI Is Thinking About Everything)

NAB

Google may have fired computer programmer Blake Lemoine and hoped to draw a line under the debate about whether its AI is sentient or not (Google says not). But that’s a mistake.

article here 

Lemoine should be applauded for opening up a can of philosophical worms that will frame debates about intelligence, machine consciousness, language and human-AI interaction in the coming years.

Most thinkers on the topic do not conclude that LaMDA is conscious in the ways that Lemoine believes it to be, seeing his inference as rooted in motivated anthropomorphic projection. At the same time, it is also possible that AI models are “intelligent” — and even “conscious” in some way — depending on how those terms are defined.

“For example, an AI may be genuinely intelligent in some way but only sentient in the restrictive sense of sensing and acting deliberately on external information,” Benjamin Bratton, a philosopher of technology and professor at the University of California, San Diego, and Blaise Agüera y Arcas, a VP and fellow at Google Research, write in an article for NOĒMA.

“Perhaps the real lesson for philosophy of AI is that reality has outpaced the available language to parse what is already at hand. A more precise vocabulary is essential.”

Bratton and Agüera y Arcas argue that we need more specific and creative language that can cut the knots around terms like “sentience,” “ethics,” “intelligence,” and even “artificial,” in order to name and measure what is already here and orient what is to come.

This has come to a head because of the advance in artificial intelligence that LaMDA, the Google AI, has achieved. It is doing a lot more than just reproducing pre-scripted responses. It is instead constructing new sentences, tendencies, and attitudes “on the fly” in response to the flow of conversation.

“For LaMDA to achieve this means it is doing something pretty tricky: it is mind modelling,” explains Bratton. “It seems to have enough of a sense of itself — not necessarily as a subjective mind, but as a construction in the mind of Lemoine — that it can react accordingly and thus amplify his anthropomorphic projection of personhood.”

Put differently, there may be some kind of real intelligence here, not in the way Lemoine asserts, but in how the AI models itself according to how it thinks Lemoine thinks of it.
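
LaMDA itself is not publicly available, but the kind of on-the-fly construction described here is easy to observe with any open conversational model. A minimal sketch with DialoGPT via Hugging Face transformers (a stand-in for illustration, not LaMDA): each reply is generated afresh, conditioned on the whole conversation so far rather than fetched from a script.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
    model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

    history = None  # the running conversation, as token ids
    for user_turn in ["Hello, who are you?", "Do you ever feel lonely?"]:
        new_ids = tok.encode(user_turn + tok.eos_token, return_tensors="pt")
        input_ids = new_ids if history is None else torch.cat([history, new_ids], dim=-1)
        # Sampling constructs a new sentence on each run, conditioned on the
        # accumulated dialogue -- nothing here is a pre-scripted response.
        history = model.generate(input_ids, max_length=256, do_sample=True,
                                 top_p=0.9, pad_token_id=tok.eos_token_id)
        print("Bot:", tok.decode(history[0, input_ids.shape[-1]:],
                                 skip_special_tokens=True))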

Some neuroscientists posit that the emergence of consciousness is the effect of this exact kind of mind modeling. Michael Graziano, a professor of neuroscience and psychology at Princeton is one of them. He suggests that consciousness is the evolutionary result of minds getting good at empathetically modeling other minds and then, over evolutionary time, turning that process inward on themselves.

Put differently, it is no less interesting that a non-sentient machine can perform so many feats deeply associated with human sapience, as that has profound implications for what sapience is and is not.

Here’s a conundrum: Is it anthropomorphism to call what a light sensor does machine “vision,” or should the definition of vision include all photoreceptive responses, even photosynthesis?

And another: At what point is calling synthetic language “language” accurate, as opposed to metaphorical?

The way we talk about and label the world has, well, real-world implications. You don’t have to have studied your Wittgenstein to know this.

As Bratton and Agüera y Arcas put it, “In the history of AI philosophy, from Turing’s Test to Searle’s Chinese Room, the performance of language has played a central conceptual role in debates as to where sentience may or may not be in human-AI interaction. It does again today and will continue to do so. As we see, chatbots and artificially generated text are becoming more convincing.”

Trying to peel belief and reality apart is always difficult. Here the question is not whether the person is imagining things in the AI but whether the AI is imagining things about the world, and whether the human accepts the AI’s conclusions as insights or dismisses them as noise. The philosophical term for this is the Artificial Epistemology Confidence Problem.

It has been suggested, Bratton and Agüera y Arcas note, that there should be a clear line prohibiting the construction of AIs that convincingly mimic humans due to the evident harms and dangers of rampant impersonation.

“A future filled with deepfakes, evangelical scams, manipulative psychological projections, etc. is to be avoided at all costs. These dark possibilities are real, but so are many equally weird and less unanimously negative sorts of synthetic humanism.

“The path of augmented intelligence, whereby human sapience and machine cunning collaborate as well as a driver and a car or a surgeon and her scalpel, will almost certainly result in amalgamations that are not merely prosthetic, but which fuse categories of self and object, me and it.”

In other words, our definitions of me, myself and I, plus it, are about to get a whole lot more pixelated.

 


Sunday 28 August 2022

Platforms Are Using Audiences as a New Means of Distribution (and Here’s Why That’s a Problem)

NAB

It might be the creator economy but creators fuel it rather than drive it, and as long as that’s the case creators lack control, says Mark Mulligan, co-founder of MIDiA Consulting.

article here

In a piece that takes aim at current video sharing platforms like YouTube, Facebook and TikTok (aka Web2), Mulligan says it is time to re-create the creator economy.

First and foremost, platforms like these do not need the creators to find success for their respective business models to work, Mulligan argues. This is because YouTube, TikTok et al monetize creators by harnessing aspiration at scale.

“If there are enough creators — and the pool is growing fast — a multitude of small-scale audiences are enough to drive the platforms’ strategic objectives of driving audience engagement, which, in turn, drives revenue.”

What complicates matters further is the fact that creators are developing platform dependence — merely renting space on the platforms they depend upon, rarely with tenancy rights and often slave to the algorithm.

What has enabled this conflicted set of priorities to become established is the rise of platforms that use audience as the new form of distribution.

“Whereas traditional entertainment services, like Netflix and Spotify, license and create content to distribute to audiences, audience platforms, like TikTok and Twitch, pull their content from the audiences themselves,” Mulligan argues. “Even though most users consume rather than create, the creators come from their ranks. The old paradigm of license/create-distribute-audience has been replaced by audience-create-audience.”

While it’s great that the creator economy is opening more doors for more creators than ever before, as the number of creators grows, fandom and consumption fragment. The longer the tail, the harder it is for creators to cut through, find audiences, and build careers.

“Creators find themselves locked in a perpetual cycle of create/produce/perform/engage, with their host platforms demanding ever higher levels of frequency and volume of output.”

There is a growing awareness among creators that owning their audiences and having direct communication with them is important, and that Web3 platforms like Pico and Disciple Media, which are owned and controlled by their members, are a way of achieving this.

“Yet today’s creator economy is not built this way,” says Mulligan. “The majority of creators have the majority of their audiences on platforms where they are slave to the algorithm.”

Owning audience is just one item on a long list of structural challenges that the creator economy must address, says Mulligan, pointing to MIDiA’s new report, “Re-creating the creator economy,” “if it is to transition from its current phase of undoubted opportunity, into something that can genuinely reshape and redefine the future of entertainment itself.”


Friday 26 August 2022

Discovery + HBO: Merger of Apps Inevitable but not Without Danger

Streaming Media

First signalled last spring, the move to combine HBO Max and Discovery+ into one bundle, down the line as a single app, has been confirmed by Warner Bros. Discovery. 

article here

As of summer 2023 for US customers, the two apps will be one consolidated service, with both an “ad-lite” and an ad-free version becoming available. A LatAm expansion follows, and the European market launch is in 2024. 

It’s in line with Warner Bros. Discovery’s aim to save $3 billion and reach a goal of 130 million global subscribers by 2025, following the $43bn mega merger of the two companies. 

“Every streaming provider has three main buckets of cost:  content, technology, marketing,” says Robert Ambrose, Co-Founder and Managing Director of Caretta Research. “Consolidating apps means an immediate reduction in cost and an increase in efficiency for technology and marketing, and allows the content budget to have much greater impact.  

“In terms of the wider SVOD market, it reinforces the point that all of the big content owners want to emulate Disney and pivot to D2C. The jury is out on whether consumers will ultimately have the appetite to subscribe to multiple services on an ongoing basis to replace their cable bundle.  Anything which cuts costs, increases the clarity of the consumer proposition, and puts more content into the service will help.”  

The company also admitted on an earnings call earlier this month that both HBO Max and Discovery+ had shortcomings from a product perspective. The new, as yet unnamed, product would address them. 

“HBO Max has a competitive feature set, but has had performance and customer issues,” Jean-Briac Perrette, CEO and president for global streaming and gaming, said. “Discovery+ has best-in-class performance and consumer ratings, but more limited features. Our combined service will focus on delivering the best of both, market-leading features with world-class performance.”  

Certainly, Warner Bros. and Discovery each had a fragmented set of apps, which created numerous issues spanning technology, partnerships and marketing. 

“Given the duplication of development and operational effort, especially in the ever-thorny challenge of getting apps to work on streaming sticks and smart TVs plus back-end development costs, it makes even more sense to take cost out by having a single core tech stack that can scale,” says Ambrose. “Ultimately this enables a much better app and UX, on more devices, at lower cost.” 

According to TechCrunch, the HBO Max app in particular had a notorious reputation for bugs: freezing and crashing on Roku, failing to remember subtitle settings on Apple TV, and content being inaccessible at times. The app’s ratings — 3.7 on the Google Play Store and just 2.8 on the Apple App Store — reflect customers experiencing multiple issues. 

Ambrose outlines that further duplication of effort was needed to sign super-aggregation deals with operators “crucial for both HBO Max and Discovery+” as they carry less weight standalone in the market versus Netflix and Disney.  

When it comes to marketing, the analyst says it is much easier to build consumer recognition around a single brand, rather than confusing customers by trying to get them to guess the difference between HBO and HBO Max. 

The integrated offer will combine acclaimed scripted shows such as Succession and House of the Dragon with unscripted shows such as Fixer Upper under one roof, much like Netflix does with its mix of documentary and drama. Disney is also able to bundle sports with ESPN, something Warner Bros. Discovery may seek to emulate with Eurosport. 

“The size of content catalogue, and even more so consumers' perception of that size, is crucial for retention and mitigating churn,” says Ambrose. “If you can hook people with a wider range of content like HBO drama, Discovery factual and Eurosport live sport and apply good recommendation and personalisation across that, it's going to increase engagement.”  

Gunnar Wiedenfels, Discovery’s CFO, told MediaPost: “Right out of the gate, we’re working on getting the bundling approach ready — maybe a single sign-on, maybe ingesting content into the other product so that we can start to get some benefits early on.” 

Globally, the D2C dynamics will be an “exponentially better business” than Discovery’s traditionally linear TV-driven model, Wiedenfels said. 

But building a “blowout” combined D2C product and platform offering a “great consumer experience” will take time — at least months, although “hopefully not years,” he said. 

The company confirmed in its earnings call that “once our SVOD service is firmly established” it would explore a FAST or free ad-supported streaming offering “that will give consumers who do not want to pay a subscription fee access to great library content, while at the same time serving as an entry point to our premium service.” 

The monthly subscription cost of the upcoming combined SVOD has not been released. CNBC suggests the firm may want to raise the price for a combined HBO Max-Discovery+ offering, especially as competitors Disney and Netflix have recently raised prices. Eliminating little-watched content, while adding a slew of new Discovery+ content, could help justify the increase. 

“All the SVODs including Netflix are getting excited by introducing the ad-supported tier as a way of widening their market at lower price points, or free,” notes Ambrose. “What's yet to become clear is whether it will work. Will it be just as hard to attract and retain a $3 customer as a $10 one? Will the ad sales targets be met in what is a largely untried model?   What we do know is that free AVOD is a pretty tough business, with FAST proving a simpler and more attractive option for many.” 

Both HBO and Discovery built their businesses on cable and must now wean themselves off linear to become digital first. At Discovery this has been slowly in train for the best part of a decade. 

“Like all premium content owners, Warner Discovery has to balance the growth of streaming revenue with the loss of the much easier and more lucrative carriage fees,” Ambrose advises. “Affiliates will get ever-more reluctant to pay top dollar for channels when all the good stuff is on streaming first. Disney bet the farm on switching from linear to SVOD. Will HBO/Discovery do the same? Can they convince operators that it is a better deal to bundle the app?” 

Ambrose adds, “In a linear channel world you could be profitable at a smaller scale. In streaming, it's all about building economies of scale on a global business. Hence M&A is inevitable. If they weren't consolidating their D2C offers as a result, that would be mad.”  

The total number of direct-to-consumer subscribers across HBO, HBO Max and Discovery+ was 92.1 million in the second quarter of 2022, up 1.7 million on the 90.4 million at the end of Q1. The company did not break down the numbers individually, so it’s unclear how the subscribers split between HBO Max and Discovery+. 

The goal of 130 million global subs by 2025 is a stepping stone on the path to the 400 million global target outlined by CEO David Zaslav, for example to CNBC last November. 

As it stands, its main competitors are Disney+ (211 million subs reported in the company’s fiscal Q3) and Netflix at 220 million in June. Warner Bros. Discovery is ahead of Paramount+, with 43 million in June, and Peacock’s 27 million (including free users). 

Thursday 25 August 2022

Holding pictures up to the light

InBroadcast 

article here

As filmmakers and broadcasters strive for better digital imaging and colour science, HDR is a core priority across monitoring and content capture.  

 

HDR has grown in popularity due to the increased accessibility of content and its numerous technical advantages, such as increased dynamic range, wider colour gamut and smoother gradients, which help to create more lifelike content. However, HDR adds a layer of complexity to content creators’ workflows, as they need to ensure their images are being faithfully reproduced. 

 

Canon is uniquely positioned to simplify these HDR workflows, from input to output, says Aron Randhawa, European Product Specialist, Canon Europe. 

 

“The Canon Cinema EOS line-up supports creators with capturing HDR content with low noise, which can easily be fine-tuned for different purposes. As well as capturing incredible footage, Cinema EOS cameras are equipped with a number of tools to help make HDR workflows more efficient, including waveform monitors, Canon Log Gamma and PQ/HLG internal recording. It’s for this reason that a number of large and small scale film productions use cameras such as the EOS C70 or C300 Mark III to capture HDR footage.” 
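
For context, the PQ (Perceptual Quantizer) transfer function mentioned above is standardised as SMPTE ST 2084 and maps a normalised signal value to an absolute luminance, up to a ceiling of 10,000 cd/m². A minimal sketch of the PQ EOTF in Python, using the constants from the standard:

    # PQ (SMPTE ST 2084) EOTF: normalised signal value (0..1) -> cd/m².
    m1 = 2610 / 16384        # 0.1593017578125
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875

    def pq_eotf(signal: float) -> float:
        """Decode a normalised PQ signal value to display luminance in cd/m²."""
        p = signal ** (1 / m2)
        return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

    print(pq_eotf(1.0))   # 10000.0 -- the PQ ceiling
    print(pq_eotf(0.75))  # ~983 -- a 1,000 cd/m² display covers code values
                          # up to roughly three quarters of the PQ range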

 

As HDR capture grows exponentially, so too does the need for monitoring tools that enable broadcasters and filmmakers of all sizes to precisely view, analyse and then grade every aspect of their HDR content. That is why Canon is making more of its reference displays HDR compliant with Dolby Vision standards and embedding award-winning HDR monitoring tools in them.  

 

For example, the DP-V1830, which launched last year, delivers 1,000 cd/m² full-screen brightness and a suite of monitoring tools including an HDR waveform monitor, Pixel Value Check and configurable False Colour, driven by Canon’s latest processing platform for class-leading performance. 

 

“These features prove that the DP-V1830 is a versatile tool for demanding industry professionals, supporting efficient workflows in a wide variety of working environments,” says Ricardo Chen, Director, B2B Product Planning and Strategy, Canon USA. “All of Canon’s reference displays feature the built-in HDR Toolkit, which was awarded the Hollywood Professional Association’s 2018 Engineering Excellence Award. These tools help to ensure a finished product that delivers beautiful and vivid HDR imagery.” 

 

Consumer demand for high-quality content rose at the start of the pandemic and has remained steady. As more streaming services have debuted, many providers have leveraged HDR as a content differentiator, especially in live sports.  

 

“HDR provides a richer, more visually stimulating viewer experience but can be challenging to execute behind the scenes,” says Bryce Button, AJA Director of Product Marketing. “Production and post professionals must often manage various HDR and SDR sources and equipment to ensure a final high-quality picture that is consistent and aesthetically pleasing.” 

 

To this end, AJA has continued to build out its HDR offering to support these needs, including a recent v3.0 software update for the FS4 frame synchroniser and converter. The release introduced VPID management improvements, new HDR test patterns, and a Web UI status page to display VPID information.  

 

“We’re also working closely with FS-HDR customers in the field to gather real-world production insights that can inform continued development of the HDR/WCG converter/frame synchroniser. Ultimately, our goal is to make it as simple as possible for professionals to achieve their desired HDR aesthetic as content moves throughout the production chain.” 

 

 

SmallHD’s Vision series monitors offer 1,000 nits to handle both the rigors of set life and the scrutiny of a grading suite. 

 

“Our technical team has spent over three years developing the proprietary local-dimming technology driving these displays, and we believe there’s no other monitor in this form-factor that comes close to the true HDR image quality we deliver with Vision,” explains Greg Smokler, GM of Cine Production at Creative Solutions. 

 

Both Vision Monitors are crafted from lightweight, aircraft-grade aluminum — with the 17” weighing 12.9 pounds and the 24” weighing 22.7 pounds — and feature interchangeable mounting points, as well as a dovetail mounting rail for battery plates and other accessories. Both models offer 4x 12G-SDI inputs and outputs, 1x HDMI 2.0 in and out, and 2x 2pin accessory power outputs. 

 

“Smartphones, TVs, and computer monitors now come with HDR capable displays,” reflects Smokler. “And if we’re already editing, coloring, and consuming content in 4K HDR, it’s more important than ever for creators to reference the broadest dynamic range possible while they're in production. SmallHD Vision Monitors are designed to offer the best of both worlds: finely-tuned post-production quality in a practical, ruggedized design — HDR for all.” 

 

Flanders Scientific is tackling HDR monitoring requirements in a twofold manner. First, for HDR content mastering and other highly color-critical HDR monitoring tasks, FSI continues to develop some of the world’s brightest HDR mastering displays. These displays meet critical performance benchmarks required for proper reference grade HDR viewing. 

“However, for many other tasks a mastering level HDR display is neither economical nor strictly required,” explains Bram Desmet, company manager and CEO. “For these less critical HDR monitoring applications FSI has now rolled out HDR Preview modes on all of its current production monitor lineup. These HDR Preview modes are useful in production, broadcast, and other environments where an HDR signal is being distributed beyond a primary HDR mastering display. These modes provide a rough normalization of HDR signals giving viewers both a more useful and more visually pleasing preview of the HDR signal feed. 

“Operators can use these HDR Preview modes to help ensure that highlights are protected and shots are well balanced. This HDR image normalization is sufficient for the majority of departments on set, providing a very economical alternative to procuring HDR reference monitors for all production departments or requiring more complex workflows distributing both HDR and SDR feeds. A single HDR feed can be shared across the set with even our entry-level (~$3,000) production monitors now providing an HDR preview mode to view those feeds in an acceptable manner.” 
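
FSI does not publish how its HDR Preview modes perform this normalisation, but the general idea (pass shadows and midtones through while rolling off highlights rather than clipping them) can be illustrated with a toy operator; the knee and SDR white values below are assumptions for the sketch, not FSI’s:

    import numpy as np

    def hdr_preview(nits, sdr_white=100.0, knee=0.8):
        """Toy HDR-to-SDR preview: linear below the knee, soft rolloff above."""
        x = np.asarray(nits, dtype=float) / sdr_white  # 1.0 == SDR reference white
        headroom = 1.0 - knee
        lo = np.minimum(x, knee)               # shadows and midtones pass through
        hi = np.maximum(x - knee, 0.0)         # portion above the knee
        return lo + headroom * hi / (hi + headroom)  # compresses, never clips

    # 50 nits is untouched, 100 nits lands near white, and a 1,000-nit
    # highlight stays visible just below clip instead of blowing out.
    print(hdr_preview([50, 100, 1000]))  # ~[0.5, 0.9, 0.996]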

 

Atomos recently released the Shogun Connect, claimed as the first all-in-one device for HDR monitoring and RAW recording as well as advanced network and cloud workflows. The device, costing US$1,299, builds on the company’s Shogun line with a comprehensive set of monitoring tools and recording options. The enhanced 7-inch HDR screen is brighter (2,000 nits), with a slimline bezel that makes it even more of a pleasure to use. Shogun Connect’s range of interfaces includes loop-through 12G-SDI in and out to support SDI RAW, and Atomos Sync timecode technology for seamless camera synchronisation. There are multiple power options to accommodate studio or location shoots, and connectivity options for Wi-Fi 6, Gigabit Ethernet, Bluetooth LE and USB-C. Atomos has also developed a full implementation of the NDI HX standard for the product. 

 

In September the firm will release Live Production, a complete, cloud-based control room for live video and remote collaboration. With Live Production, video creatives will be able to produce a live show of the highest quality at a fraction of the cost. The toolset includes a fully featured video switcher and sound mixer, with video effects, graphics, and talkback. Production for live events and multi-camera shoots has never been this accessible or this easy. 

 

Using the new Zato Connect, Atomos Connect or Shogun Connect devices, camera feeds are streamed to the Live Production system. Using an ultra-low-latency protocol, each stream can be controlled in real-time from a browser, iPad app, or any compatible control panel from anywhere in the world.  


  

Wednesday 24 August 2022

It’s Getting Almost Impossible to Navigate the Metaverse

NAB

Apparently the metaverse is already broken. The reason? Every commentator on the topic, no matter how trusted, is part of the problem rather than the solution. The only person who sees through the blind fealty to Web3 religion is Theo Priestley, CEO at Metanomic and self-proclaimed metaverse and Web3 “agitator.”

article here

In a three-part series of articles posted on Medium, Priestley demystified the concept that decentralization (or democratization) is the answer to the problems of Web2: “Over the last 12 months or so many people have decided it was time to write the rule book on the metaverse and Web3; lay out the guidelines, frameworks, or cookbook that everyone must follow. They are all wrong.”

The gist of Priestley’s argument is this: everyone is so fixated on the prediction that the metaverse will one day be controlled by a single entity (such as Facebook), and so violently opposed to it, that they cannot see the truth. “That is, ignoring the f***ing obvious that single entities can also mean ecosystems and platforms that are used to build it in the first place,” he points out.

Priestley singles out Epic Games and a16z for attack, as well as those he thinks unwittingly support them. Some of this is a welcome antidote to the almost uncritical coverage of such companies online.

a16z (and others) have stated that “Decentralization is the overarching, governing principle of a proper metaverse, and many of the traits that follow depend on or result from this main concept,” he notes.

“By decentralization, we mean not owned or operated by a single entity or at the mercy of a few powerbrokers.”

It sounds good, but Priestley challenges that “if they were really supportive of the development community they’d tell them about all the open source alternatives that exist across the industry. In fact, if VC firms like Andreessen Horowitz were serious about the ideology behind Web3, democratization and decentralization they’d be pumping money into the open-source community. But they aren’t.”

On Epic Games, he notes again — with justification — that the game engines everyone talks about in conjunction with the building of the metaverse are Unreal, followed by Unity.

“But are you even aware of alternatives? So much reporting is done on the prior two major platforms that nobody cares to research whether anything else is worth examining. Not even Matthew Ball.”

He lists 10 other game engines that could potentially become the protocols and platforms for an open metaverse: Godot, Cocos2d-x, Armory, Openage, Spring Engine, Panda3D, Defold, Phaser, Moonstream and O3DE.

“Any one of these could be taken by the community and developed further, their code is open source.”

Epic Games CEO Tim Sweeney “has been very vocal about centralized platforms and walled gardens like Facebook, Apple, and Microsoft taking control, and rightly so. But what about centralized ecosystems?” — like the one Epic Games is building beneath our uncritical noses, Priestley charges.

“Then there’s the open source graphics software packages, sound software, and physics engines…. in fact for pretty much every commercial and centralized tool that can be and is currently used to build the Web3 and metaverse utopia we’ve been promised there is a free and open alternative.”

So why aren’t they being used?

“Convenience and apathy. We have become an apathetic society, indolent and…”

This too is hard to argue against. Web3 apostles seem to believe that — like Marx and Engels — the person in the street will actively want to move to a domain where they are free and their data is under their control and everything is a paradise outside the command of Big Tech.

You’d think. Except most people don’t know about the utopia of Web3 and even if they did — they wouldn’t care.

“This is also the primary reason that Project Solid — Tim Berners-Lee’s attempt to regain control over data privacy — has failed. It was predicated on the notion that societal attitudes in 2022 are the same societal attitudes of 1992; build it and they will come, they will care about their data again, they will launch their own pods and the web will be good again.

“Bless you, Tim. You should have hired a behavioral scientist or at least spent a couple of hours in Primark to understand where the world is today.”

The other issue here is open standards which are fast becoming another set of walled gardens, according to the blogger.

He takes aim at new metaverse standards consortium OMA3, which includes Web3 giants like Animoca, Decentraland and Sandbox.

The Metaverse Standards Forum has around 650 members so far, and its mission states that “the potential of the metaverse will be best realized if it is built on a foundation of open standards,” providing a venue for cooperation between standards organizations and companies to foster the development of interoperability standards for an open and inclusive metaverse.

Sounds decent enough.

Except OMA3’s manifesto goes on to say: “We will build infrastructure to ensure the metaverse operates as a unified system where digital assets (such as NFTs), identities, and data are permissionless and interoperable for all and controlled by users, not platforms. Users will immutably own these assets and transfer them to any OMA3 virtual worlds freely, without needing the platform’s permission.”

As Priestley points out, this sounds a lot like a closed system — as long as you’re part of the group of companies building on their infrastructure.

 “So, we have sets of competing ‘open standards’ being created by commercially driven and centralized agencies, centralized closed ecosystems in favor of open-source software, and we’ve still to tackle interoperability,” he says.

“For an industry that preaches about collaboration and community, there are an awful lot of walls being put up around paradise already.”

In fact, Priestley argues, in order to achieve the scale that the metaverse requires there’s no need to rely on NFTs or other forms of Web3 economics like DAOs to fund it.

No, he says, the internet is inherently distributed. We just aren’t using it and we aren’t being allowed to.

Distributed storage and compute, he argues, are the only ways to handle the processing power and storage requirements of the metaverse and Web3 — meaning “every person on this planet with a connected device capable enough now owns and runs a part of the metaverse passively.”

Your everyday broadband router could become “a metaverse node” — with the power and storage capacity, when sitting within a distributed computing network, to solve “a shit ton of problems” across science, government, manufacturing, pharma, healthcare, banking and, of course, the metaverse.

You tie Universal Basic Income (UBI) to the provision of distributed computing and storage necessary to handle the metaverse and enterprise in general. You don’t need to tie it to a token as capitalism pays for this, Priestley argues.

He even imagines “in the distant future even AWS will be paying you for some of your capacity when they can’t build anything to scale anymore.

“Makes you think about where the real power lies,” he says. “That’s decentralization.”

Why is this not happening? Once again — people don’t care. “People would rather create an Inigo Montoya meme on ImgFlip than download an app that drains their battery for free money.”

Plus, “You always had the power but they kept you doped up on social media dopamine so you forgot.”