Monday, 28 February 2022

“Severance”: Now, About Solving the Work/Life Balance…

NAB

The Apple TV+ series Severance puts a new spin on the work-life balance. A mix of sci-fi and social satire, Severance is gaining plaudits for its vision of a corporate world that’s as sinister as it is sterile.

article here 

As Film School Rejects puts it, the people in Severance can’t simply clock out of work because their work is their life. Or, rather, their work is a perfectly bisected portion of their lives.

The enigmatic series follows the lives of a pod of workers who are employed by Lumon Industries, a powerful company with a long and corporate cult-like history. Each of the series’ main characters willingly underwent a procedure called severance, which involves implanting a device in one’s brain that will wall off work memories while at home, and home memories while at work.

Ben Stiller, who directs six of the nine episodes, optioned the script by Dan Erickson through his production company Red Hour, telling Forbes, “The idea of going to work where you have a chip put in your head and you forget everything about your life when you’re at work and when you leave work, you don’t remember what happened was just such an interesting and kind of tantalizing idea.”

At first, this seems like a quaint premise that could be used for an easily digestible slice of social commentary, but it soon becomes clear that the series has built out every corner of its own freaky world.

“This isn’t just a neat trick to turn people into focused worker bees; it essentially turns them into two people,” explains FSR. “Characters in the show call them ‘innies’ and ‘outies.’ The ‘outies’ don’t remember the classified work they do all day, but the ‘innies’ have it much harder. They don’t remember if they have kids or partners. They don’t experience sleep, or know how to drive, or have any sense of whether or not their experiences at work are normal.”

The jumping off point is fascinating and sets up many interesting questions about the nature of our relationship to work and the work environment. “It was a question of, where should we go tonally with the show? Because we didn’t want it to go to a familiar place, necessarily,” Stiller told Variety.

In the same interview Patricia Arquette who plays one of the company managers says, “there was so much structure” in the fictional company, but also within the making of the show itself, in terms of “the composition of the shots, in the wardrobe,” which helped establish “the behind-the-scenes working of this corporation and all the years we’ve been in it and how it had informed how we communicate with people.”

IndieWire observes the show’s “unsettling symmetry; balancing and unbalancing compositions in order to undercut the inherent comforts of routine and uniformity.”

This is a function of camera work and production design (by Jeremy Hindle). “The oners down an endless maze of white corridors. The way cubicle walls slide up and down to trap workers in tight frames. The contrast between the timeless stasis of antiseptic office life and the bristling cold of a messy existence above ground. It’s a striking, immersive design, if you’re on or off the clock.”

Mark (Adam Scott), Irving (John Turturro), and Dylan (Zach Cherry) sit in a small, four-sided cubicle space within a massive white room, focused on computer screens displaying a series of numbers.

The series is shot with “a sometimes alienating precision,” describes FSR, “moving quickly between the outie memories and the ‘Innie’ experiences to communicate the disorientation of severance. As Lumon’s interests grow more confounding and menacing, the building starts to feel like an inescapable funhouse full of ever-twisting hallways.”

Stiller seems to have approached the scripts as if wondering “What if Hitchcock had directed The Office?” ponders AVClub, adding that Severance’s deadpan humor “is more like Being John Malkovich than Zoolander and Tropic Thunder.”

The reviewer later adds that some scenes feel like a David Lynch movie. “Yet, moments from Mark’s outside life play out like Noah Baumbach comedies with an undercurrent of The X-Files. It’s an instalment of The Twilight Zone that continues past the shock ending.”

The character of Helly, played by Britt Lower, is described by the same critic as a modern version of Patrick McGoohan’s character from The Prisoner. “She won’t be pushed, filed, stamped, indexed, briefed, debriefed, or numbered. You’re led to think that Mark is Helly’s Number 2, but he’s just as much a prisoner as she is.”

Indeed, critics can’t seem to get a fix on the show’s genre. When the series zig-zags into mind-melting corporate thriller territory, for FSR it calls to mind “the better parts of less focused shows about similar subjects, like Homecoming and Mr. Robot.”

Wired suggests Stiller has distilled the sensibilities of writer-director Charlie Kaufman. “Like Eternal Sunshine of the Spotless Mind (which Kaufman wrote), it’s about a man trying to deal with the grief of lost love by messing with his memory via experimental surgery. Like Being John Malkovich, it uses a high-concept mind-control premise to explore knotty questions about identity. Like Adaptation, it’s fond of genre-hopping and piling twists and turns on top of twists and turns. And, as with Kaufman’s best work, it’s at least as funny as it is trippy.”

What gives the show its sting, according to John Powers at NPR, is the way that Mark and his comrades’ story taps so engrossingly into the anxieties that, even in post-COVID working conditions, many people feel about their jobs:

“How companies try to own us, how employees feel like cogs in corporate machines that they fear may be actually ruining the world, how many people bury themselves in work to avoid dealing with the difficulties of their personal lives and how many of us already live a de facto version of Severance. We have two different selves, one for work, where we play a role, and one for home, where, if we don't feel depleted, we can be who we really are.”

Stiller reflects on what these past two years have done to working conditions in Hollywood.

“One of the things I’ve really noticed recently with the pandemic was looking at the unions and talking about the work situation for crews, which is something that has just been entrenched for a long time - with the hours we work and the turnarounds on those hours for crews, basically just to save money,” Stiller told Forbes. “And so, I think that was a really important thing that changed. Everybody was looking at their priorities and looking at really what was important in life and that’s just something that has been a thing in show business for a long time that was just accepted and I think that’s changing, which I think is a really good thing.”

 


TV Ad Sales Have Been Strong For a Decade, But We Need to Think About OTT

NAB

While the shift towards streaming delivery is having an undeniable impact on linear broadcast ad revenue, Imagine Communications believes the decline is much less dramatic than headlines suggest.

article here

The broadcast technology systems vendor predicts that by 2030 linear broadcast will still command a 40% to 50% share of ad revenue. Some analysts are even more bullish, projecting that by the end of the decade linear advertising will still take more than half the ad dollars in certain economies.

Imagine is not dismissing OTT revenue or the rise of connected TV, but it advises broadcasters to take steps toward enabling what it calls ‘true cross-platform monetization’.

The reason Imagine thinks linear TV will remain a major plank of media campaigns is the age-old one: strong linear brands give audiences “reassurance” that they will receive quality programming.

“That same promise of quality content attracts top-tier advertisers; they know they can rely on linear television to deliver the most valuable audiences and the broadest reach to power their brands.”

OTT advertising, on the other hand, tends to be sold at arm’s length by demand-side platforms (DSPs) that manage multiple inventories without ever really matching advertisers and content.

“This disconnect is a key reason why broadcast is still seen as the gold standard for television advertising,” Imagine says. “That said, the one advantage that OTT advertising has is how efficient the trading becomes due to end-to-end automation with minimal manual intervention required.”

Imagine argues that if cross-platform content distribution is the goal for most broadcasters and global media companies, then mass campaigns will not be able to achieve their targets without an efficient cross-platform monetization model.

Its solution is to bring all advertising inventory, linear and nonlinear, together in a single point of sale, “treating it as a single audience for unified campaign planning.”

Pay-TV broadcasters Sky in the UK and Nine in Australia are already moving toward a converged selling approach, it says, “that combines the quality and brand safety of linear with the speed and precision targeting of digital on a single platform.”

To transition from spot-based to audience-based monetization the essential focus must be on the notion “that the audience is the inventory, rather than the spots.”

To elaborate what that means, Imagine has outlined a five-step guide to making the transformation.

This starts from decoupling programs, spots and audiences, followed by the optimization of linear inventory to find audience.

“This is the logical next step. Use all the research and tracking tools — at least once a day and ultimately in real time — to refine placement, making it more fluid. Know exactly how close you are to your audience commitments in volume, demographics and frequency, rather than rely on pure ratings-based audience predictions.”
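As a loose illustration of that tracking idea, knowing how close a campaign is to its audience commitments reduces to comparing delivered against promised figures. This is a hypothetical sketch only; the function name and all numbers are invented for illustration and are not from Imagine’s guide:

```python
# Hypothetical sketch: measure how close a campaign is to its audience
# commitments in volume and frequency, rather than relying purely on
# ratings-based audience predictions. All figures below are invented.

def delivery_status(committed_impressions, delivered_impressions,
                    committed_avg_frequency, delivered_avg_frequency):
    """Fraction of each commitment fulfilled so far (1.0 = fully met)."""
    return {
        "volume": delivered_impressions / committed_impressions,
        "frequency": delivered_avg_frequency / committed_avg_frequency,
    }

status = delivery_status(
    committed_impressions=1_000_000, delivered_impressions=640_000,
    committed_avg_frequency=3.0, delivered_avg_frequency=2.4,
)
# A planner refreshing this at least once a day (and ultimately in real
# time) would use these ratios to refine placement while the campaign runs.
print(status)
```

The point of the sketch is simply that the unit of account becomes the audience delivered, not the spot aired.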

Step three in Imagine’s guidance is to begin selling linear and VoD inventory together and then move away from DSPs by “filling VoD/OTT inventory with campaigns you have sold.”

All this should mean broadcasters can finally start optimizing across inventories to guarantee reach, volume and frequency goals.

 “The audience is fragmenting. Maybe linear is skewing older, OTT skewing younger. For a single campaign, you can optimize across those different platforms to find your audience, maximize your operational efficiencies and achieve your revenue goals.”

Imagine even suggests that, all being well, a broadcaster could consolidate operations for other broadcasters and media companies ― offering advertising management as a service.

“The five-step program is as much cultural change as technical transformation ― and that is always challenging. But without it, you will be in no position to build a strong business in the new, dynamic, multiplatform media environment. This journey of transformation is critical for the survival of our entire industry.”

 


Streaming Aggregation is Coming to Save Us All

NAB

Consumers may not be able to express it, but they want an aggregator. The industry knows this too. In a new survey, management consultant Accenture has determined that streamers who ignore the frustrations felt by consumers and instead focus blindly on subscriber acquisition do so at their own peril.

article here

Based on a survey of 6,000 consumers across the world last October and November, industry consultancy Accenture has released a new report pointing to three big issues that are eroding the streaming experience.

Issue one: Navigating through OTT services is like entering different rabbit holes, each with its own entry and exit — a turnoff for consumers.

This is borne out by the survey, which found that 60% of consumers globally consider the process of navigating among these different services “a little” to “very” frustrating, and nearly half (44%) spend more than six minutes trying to find something they want to watch.

A second issue is encountering inefficient bundles. According to Accenture, 33% of consumers globally say they will “somewhat” or “greatly” decrease spend on media and entertainment across subscriptions and one-time purchases in the next 12 months.

A third issue is that incomplete or inaccurate recommendations and, hence, often irrelevant content, are the norm for most consumers. That’s because algorithms remain scattered across providers.

“Furthermore, the reliance on the algorithm to pitch consumers shows doesn’t allow consumers to tune the model, except through actual show selection,” the report says.

Not surprisingly, a majority of global consumers said they’d like to be able to take their profile from one service to another to better personalize content (56%); and they’d be happy to let a VOD service know more about them to make recommendations more relevant to them (51%).

The analysts’ cast-iron prescription — an inevitability, in fact — is that the streaming landscape needs to consolidate under aggregators. This trend has been on the cards for a few years but it seems that, as streaming services continue to multiply, the need is greater than ever.

Accenture: “For streaming to continue to grow and fulfill its potential, we believe a big change to the ecosystem is needed: the addition of a smart aggregator, sitting across multiple platforms, that dramatically increases viewers’ control over the content they watch.”

For their part, most media executives agree: According to Accenture’s Technology Vision 2021 research, 77% of media executives said their organizations need to dramatically re-engineer the experiences that bring technology and people together in a way that puts people first.

Such an aggregator service would among other things act as a single platform that enables viewers to select exactly what they want to watch, such as categories of specific shows, regardless of who’s providing it.

It would also personalize the experience by providing seamless navigation and curation across streaming services, created in collaboration with and for every individual.

Any of the current streaming ecosystem players — major SVOD services, access devices and connected TVs, major internet onramps and consumer apps, and even traditional cable operators — could become an aggregator.

“Early versions are being assembled, although it’s too early to call the winners. Some might come from the top-tier SVODs that “google up” other apps or make partner apps available inside their service. Others might emerge from access devices that already do basic aggregation of apps on, for instance, a connected TV or a stick that plugs into a TV.”

The obvious question to ask is whether the end game for consumers is just another version of the cable/satellite pay TV bundle. Accenture thinks the new model will be different, if aggregators deliver on the promises of choice, personalization, and convenience.

“Initial incarnations will be bundles of SVOD and AVOD streaming services. But look out for the categories of offerings to expand to include music services, digital books and podcast apps, video games, virtual fitness, food delivery, commerce, and even productivity tools.”

What’s more, expect future evolutions to be the onramps for any form of digital consumer experience — such as the metaverse.

“Aggregators — if trusted — can be enablers and caretakers of digital identity, entitlements, security, currency, and more. Indeed, the battle to be the home of a consumer’s streaming experience may, in fact, be just the first skirmish in the broader battle to be the home of a consumer’s every experience.”

The analyst is in no doubt aggregation is coming.

“Becoming a successful aggregator or surviving as an individual streaming service requires different sets of actions. But what’s clear for all players: A blind focus on driving subscriber counts without taking steps to position the business for the aggregated future, regardless of which route you choose, presents near-certain peril.”

 


Here Is The AI-Driven News

BroadcastBridge

Everyone is trying to do more with less and the newsroom is no different. Automation offers significant benefits, including the ability to quickly make changes and adapt technically to things like work from home and remote production. But to what extent is AI taking over the newsroom?


article here

The latest version of the BBC’s automatic live subtitling software was recently shown to have some major defects, such as getting the spelling of proper names wrong. Head of News Neil Reid was previously embarrassed when, as Controller of Current Affairs, BBC News coverage of the Syrian crisis used a photo of Sting’s wife Trudie Styler instead of Asma al-Assad.

These incidents, from the BBC Broadcast Centre parody W1A (see series 3, episode 3 and series 1, episode 3), are funny because we know they cleave close to the bone: insider knowledge, perhaps, illustrating what can go wrong when automation is out of control.

“Automation is principally about cost-saving and speed,” says Trevor Francis, Grass Valley's business development manager who trained as an engineer at the BBC some four decades ago and spent 17 years at ITN, managing the news editing team and helping digitise its newsroom among other roles.

“When I was at ITN, the chief of news had a row of TV screens in their office displaying CNN, Sky News, BBC, Al Jazeera and so on. And when a story broke of course they wanted to make sure ITN was first with it as often as possible. That’s just as true today for breaking news to social but the same process of quality control, verification of sources and reliability needs to be enacted.”

Automation back then meant using Quantel, Grass Valley, Ross or other vendors’ tools for automating programme output and controlling replay devices (then tapedecks, now video servers) and vision mixers.

A classic use case would be automating news clips to play out in exact sequence with Big Ben’s ‘bongs’ at the start of the 10 O’Clock News. “You want to automate the right video clips in the right place with frame-accurate, split-second consistency so the show opened perfectly, and you couldn’t do this by relying on lots of people triggering replay devices and operating DVEs and mixers.”

That automated output hasn’t gone away. If anything, the need to produce eye-catching TV presentations has got more important.

“Good automation will ensure a consistent look-and-feel for a show, enable a more elaborate production than might otherwise be possible with the resources available, and help to minimise errors,” says Neil Hutchins, CEO at aQ Broadcast. “And, in these times, it will provide seamless support for remote or distributed production teams.”

Automated production systems have been expanded as a result to improve security in more distributed environments through cloud, virtualization, or working from home. The latest version of Ross’ OverDrive for instance, was primarily based around securing all the platforms, UIs, and communication required to work in those environments.

“It’s all about sophistication and speed,” says Mike Paquin, Product Manager, Ross Video. “Automated production systems provide directors and producers with the ability to choreograph complex opening sequences and storytelling devices to present a flawless, engaging segment. Things that would have needed to be pre-taped or edited in the past can now be done live every day with as little as one person to execute. This has also freed up more time for editors and creative people to focus on additional content.”

"It enables production teams to improve their creative output with new graphics and set designs that include elements like LED screens,” says Paquin. “Automation makes it easy to adapt to new OTT outputs or fast-developing news stories – and allows for quick and efficient changes in the heat of the moment.”

“Additionally, news broadcasters now want to get content onto social media as quickly as possible,” says Francis. “Perhaps a piece of news is government embargoed or the story needs to appear on Facebook the split second after the last frame comes off air.”

What has changed to enable this is that automation has expanded back down the chain to encompass postproduction and multi-platform publishing.

“Most major news broadcasters have a review process so that before a story goes live on Facebook a senior journalist would review the finished edit,” says Francis. “Once they give the ‘ok’, a single button press will automatically transcode the clip, bundle it with metadata and send to the CMS or directly to playout and to various online and social platforms.”

The mechanics of the automated process include creating a version of the story/file acceptable in technical terms to each platform.
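A minimal sketch of what that per-platform versioning step might look like. The platform names and technical settings below are hypothetical placeholders, not any vendor’s or broadcaster’s actual presets:

```python
# Hypothetical per-platform transcode presets: each outlet receives a
# version of the approved story that meets its technical requirements.
PRESETS = {
    "broadcast": {"container": "mxf", "resolution": (1920, 1080), "aspect": "16:9"},
    "facebook":  {"container": "mp4", "resolution": (1080, 1080), "aspect": "1:1"},
    "web":       {"container": "mp4", "resolution": (1280, 720),  "aspect": "16:9"},
}

def build_versions(story_id, metadata, platforms):
    """Pair the approved story and its metadata with each target platform's preset."""
    jobs = []
    for platform in platforms:
        preset = PRESETS[platform]  # an unknown platform fails loudly here
        jobs.append({"story": story_id, "platform": platform,
                     "preset": preset, "metadata": metadata})
    return jobs

# One button press, several technically distinct deliverables.
jobs = build_versions("story-42", {"headline": "Example"}, ["broadcast", "facebook"])
print(len(jobs), jobs[0]["preset"]["container"])
```

In a real system the preset table would be far richer (codecs, bitrates, caption formats), but the principle is the same: the journalist approves once, and the automation fans the story out.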

“Automating content production to provide wider access to content, but at lower costs, will be the biggest benefit,” believes Michael Pfitzner, VP, Newsroom Solutions, CGI. “Broadcasters are now dividing their various content outputs between different publishing channels. The earlier approach, which was to produce once and then publish everywhere, did not work out as expected. Audience centricity is key as we head into 2022, as channels have distinct needs.”

Publicly funded broadcasters in particular tend to have operations built with teams of people and often different sets of infrastructure to feed social media and web platforms and other outlets. It’s a workflow architecture that is not sustainable.

“Ideally, news broadcasters want the producers and editor in their newsroom to create a piece of content for repurposing on any platform including radio at the touch of a button,” Francis insists.

“News automation is a key tool to enabling leaner production of consistently high-quality content,” says Ulrich Voigt, VP Product Management, Vizrt. “Producers, directors, and other production staff don’t need in-depth device knowledge, so they are able to perform complex operations flawlessly, quickly, and consistently and are able to focus on the creation of exceptional stories."

He adds, “News automation helps to eliminate production mistakes, ensures a high-quality show, and lowers operational costs for production teams to provide audiences with accurate, timely and engaging shows.”

This is increasingly possible thanks to AI-driven tools like speech-to-text engines, subtitling, facial detection and others. These make searching and locating media content in newsrooms much easier and, given AI’s increasing role in generating metadata about content, this is a virtuous circle.

“In production, AI supports scanning and tracking of material, identifying locations and faces in videos, transcribes audio and much more, to ease the daily journalistic routine,” says Pfitzner. “AI is also helping to diversify content automatically, e.g., specific regional topics that can be created based on simple inputs like sports and weather.”

The use of AI is extending to detection of deepfakes. “Given the increasing polarisation in global politics, the use of deepfakes to stir up controversy is increasing. As these are created by AI, it stands to reason that the tools to detect them should also be AI-driven.”

Even without AI, automated tools to help check the veracity of material are vital. “Credibility remains a core value for broadcasters,” Pfitzner says. “News organisations face an increasing struggle to maintain credibility and public trust amidst an avalanche of misinformation. This is especially true when it comes to publicly funded broadcasters, who are under increasing political pressures.”

Other AI tools can quick-assemble a package of clips and audio from internal and external sources to cut the time it takes a journalist to research elements of a story complete with authentication. AI-editing tools can also assemble a complete video package or write an entire story, though even here fact checking and verification before publishing would be in everyone’s interest.

“AI will have a role in helping directors and producers improve the speed of pre-production and quality assurance during a show,” says Paquin. “This can be done with motion tracking for robotics, video analysis that provides the director a warning to avoid jump cuts, and suggested camera shots or effects.”

Since AI tools deliver the best outcomes when trained on vast relevant data sets, most AI tools are likely to be found hooked into major cloud providers Google Cloud, Microsoft Azure or Amazon AWS in some way. It’s not the only reason news production is moving to the cloud, but it’s one of the cloud’s advantages to the operation alongside gaining benefits in OPEX models and the agility to pop-up special event news channels.

Here Is The AI-Driven News

The evolution of AI-driven news automation is likely to be incremental rather than a quantum leap. “Even more flexibility will be needed to support a greater range of environments and workflows,” thinks Hutchins. “The automation should properly enable different scales of production within the same facility, from true, single-person operation up to a conventional news programme supported by a full technical team, whilst ensuring that each individual can work as efficiently as possible.

“And there should be allowance for multi-platform output, from traditional linear shows to on-demand piecemeal packages, to minimise the effort involved in repurposing content from one distribution mechanism to another.”

Voigt points to multi-platform adaptation to various screen sizes and aspect ratios as the next big innovation in news automation.

“For audiences, seeing content optimized to the screen they are viewing is becoming a need, not a want. Providing all audiences with news and content that is optimized for the device they are on creates more stories, better told for higher viewer engagement.”

Whether AI will drive people – editors, journalists, producers – out of the newsroom altogether seems highly unlikely.

“At least not in any recognisable form,” says Hutchins. “News programmes are living, breathing things – they reflect all aspects of their human creators, from the presenters, producers, editors, directors, production assistants, technicians and the rest of the creative team, to the owners and managers of the channel that they represent. The productions have a heart and a soul, which news automation helps to bring to life for the viewers. Even the smallest news bulletins require at least one person to create the linear production. At that scale, surely AI would not be viable or effective in any form, given that news automation already supports presenter-driven production?”

In any case, if AI were somehow generating news programmes without human intervention, then, by definition, news automation would not be required: the entire function of the automation is to act as an effective interface between people and technology.

“In that case, the AI-generated news would lack any of the natural characteristics of a traditional production,” says Hutchins. “Whilst we live in an era of high demand for instant, unique and compelling streamed content, at least conventional news programmes can appeal by being different.”

While AI is rapidly growing in many sectors of production, the idea of completely replacing human-run news productions isn’t something Vizrt sees coming to fruition either.

“We don’t predict a world where there isn’t at least one person overseeing a production to ensure a near-perfect broadcast,” says Voigt.

Eventually, AI might take over the world… but creative purists might have a say in that fight.

AI + Human = Superhumachine: The Debate at the Center of Deep Learning

NAB

The science of artificial intelligence is new but has already taken a few twists and turns. There’s a debate raging in some quarters that computers alone will never have the smarts to emulate human thought — unless we work in collaboration with the machine.

article here 

The idea requires a brief history of AI, which Clive Thompson charts in the MIT Technology Review. Go back to 1997, when IBM computer Deep Blue made headlines by beating chess grandmaster Garry Kasparov. Game over — or so everyone thought. In fact, not long afterwards, Deep Blue was left out in the cold.

“Deep Blue’s victory was the moment that showed just how limited hand-coded systems could be. IBM had spent years and millions of dollars developing a computer to play chess. But it couldn’t do anything else,” says Thompson.

The reason lay in the AI baked into Deep Blue. It could play chess brilliantly because chess is based on logic: the rules are clear, there’s no hidden information, and a computer doesn’t even need to keep track of what happened in previous moves. It just assesses the position of the pieces right now.

Chess turned out to be fairly easy for computers to master. What was far harder for computers to learn was the casual, unconscious mental work that humans do — ”like conducting a lively conversation, piloting a car through traffic, or reading the emotional state of a friend.”

This requires, in Thompson’s phraseology, “fuzzy, grayscale judgment,” which we do without thinking.

Enter the era of neural nets.

Instead of hard-wiring the rules for each decision, a neural net trained and reinforced on data would strengthen internal connections in rough emulation of how the human brain learns.

By the 2000s, the computer industry was evolving to make neural nets viable and, by 2010, AI scientists could create networks with many layers of neurons (which is what the “deep” in “deep learning” means).
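To make “layers of neurons” concrete, here is a toy sketch of the idea, an illustration only and vastly smaller than any real system: a network with a single hidden layer that learns XOR (a pattern one neuron alone cannot capture) by repeatedly strengthening and weakening its internal connections, as the passage describes:

```python
import numpy as np

# Toy neural net: one hidden layer of 8 "neurons" learning XOR.
# Real deep-learning systems stack many such layers; this miniature
# only illustrates the training principle.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: nudge every connection to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(f"loss fell from {losses[0]:.3f} to {losses[-1]:.3f}")
```

No rule for XOR is ever hand-coded; the behavior emerges from the data, which is exactly the shift from Deep Blue-style systems that the article goes on to describe.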

A decade into our deep-learning revolution, neural nets and their pattern-recognizing abilities have colonized every nook of daily life.

Writes Thompson, “They help Gmail autocomplete your sentences, help banks detect fraud, let photo apps automatically recognize faces, and — in the case of OpenAI’s GPT-3 and DeepMind’s Gopher — write long, human-sounding essays and summarize texts.”

“Deep learning’s great utility has come from being able to capture small bits of subtle, unheralded human intelligence,” he says.

Yet deep learning’s position as the dominant AI paradigm is coming under attack. That’s because such systems are often trained on biased data.

For instance, computer scientists Joy Buolamwini and Timnit Gebru discovered that three commercially available visual AI systems were terrible at analyzing the faces of darker-skinned women.

On top of that, neural nets are also “massive black boxes,” according to Daniela Rus, who runs MIT’s Computer Science and AI Lab. Once a neural net is trained, its mechanics are not easily understood even by its creator, she says. It is not clear how it comes to its conclusions — or how it will fail.

This manifests itself in real world problems. For example, visual AI (computer vision) can make terrible mistakes when it encounters an “edge” case.

“Self-driving cars have slammed into fire trucks parked on highways, because in all the millions of hours of video they’d been trained on, they’d never encountered that situation,” according to Thompson.

Some computer scientists believe neural nets have a design fault and that the AI also needs to be trained in common sense.

In other words, a self-driving car cannot rely only on pattern matching. It also has to have common sense — to know what a fire truck is, and why seeing one parked on a highway would signify danger.

The problem is that no one quite knows how to build neural nets that can reason or use common sense.

Gary Marcus, a cognitive scientist and co-author of Rebooting AI, tells Thompson that the future of AI will require a “hybrid” approach — neural nets to learn patterns, but guided by some old-fashioned, hand-coded logic. This would, in a sense, merge the benefits of Deep Blue with the benefits of deep learning.

Then again, hard-core aficionados of deep learning disagree. Scientists like Geoff Hinton, an emeritus computer science professor at the University of Toronto, believe neural networks should be perfectly capable of reasoning and will eventually develop to accurately mimic how the human brain works.

Still others argue for a Frankensteinian approach — the two stitched together.

One of them is Kasparov, who after losing to Deep Blue, invented “advanced chess,” where humans compete against each other in partnership with AIs.

Amateur chess players working with AIs (on a laptop) have beaten superior human chess pros. This, Kasparov argues in an email to Thompson, is precisely how we ought to approach AI developments.

“The future lies in finding ways to combine human and machine intelligences to reach new heights, and to do things neither could do alone,” Kasparov says. “We will increasingly become managers of algorithms and use them to boost our creative output — our adventuresome souls.”

 


We Are Not In a Computer Sim. Get Real.

NAB

The Turing test, devised by British code-breaking genius and AI pioneer Alan Turing, is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Some people are seriously suggesting that we should be applying a sort of inverse Turing test to ourselves. Are we now, and haven’t we always been, avatars in a giant game played by extra-terrestrial beings?

article here

It’s the sort of existential argument that writer Douglas Adams satirized in his Hitchhiker’s Guide to the Galaxy (a trilogy in five parts). You’d think that would be the end of it, but all this talk of the metaverse is apparently pushing the sanity of some theoreticians over the edge.

One of them is Rizwan Virk, who has the credentials of founding MIT startup incubator Play Labs. He’s also written books, one of them called The Simulation Hypothesis, published in 2019.

In the book he argued that humanity would reach the “Simulation Point,” a sort of collective transcendence where we won’t be able to distinguish our virtual worlds from the physical world, or AI characters that live in those virtual worlds from real humans.

Now that the metaverse has taken off (in print at least) he’s written an article in Scientific American essentially saying he was right, only that the timeframe has jumped forward from a hundred years hence to today.

This kind of argument is so absurd in its apparent face-value seriousness that it’s actually fun — and disturbing.

In his 2019 book, Virk says he concluded that if our civilization could reach this Simulation Point, “then some advanced civilization elsewhere in the real universe had probably already done so, and that we are already inside one of their Matrix-like virtual worlds.”

Yes, really. We are, according to Virk, already trapped, just like the Ryan Reynolds character in Free Guy, only we’ve yet to wake up to that fact.

The metaverse has moved beyond science fiction, he now writes, to become a “technosocial imaginary,” a collective vision of the future held by those with the power to turn that vision into reality.

By those in power he means Zuckerberg, of course, and also Microsoft, which just spent $69 billion buying massively multiplayer online games developer Activision Blizzard.

Virk is no outlier in his thought process. Philosopher David Chalmers has been getting a lot of attention and some flak for his popular science book Reality+, in which he also seems to think that the odds of us all being in a giant computer sim are pretty high.

Chalmers also believes that eventually we will all be spending so much time online wrapped in immersive VR that we won’t be able to distinguish — or frankly care — what’s physically real and what’s computer generated. To get there he talks about brain-computer interfaces (BCIs) replacing VR headgear over the next century.

“BCIs will eventually allow us to not only control our avatars via brain waves, but eventually, to beam signals from the metaverse directly into our brains, further muddying the waters of what is real and what is virtual,” Virk says.

“If Silicon Valley continues its obsession with building the metaverse…[therefore jumping mankind to this ‘Simulation Point’ much faster] then it’s likely that a more advanced civilization (imagine one that is hundreds or thousands of years ahead of us) already got there,” he adds. “They would then create billions of simulated worlds with billions of simulated beings who do not realize they are in a simulation.”

Founded in 1845, Scientific American is an esteemed publication and claims to be the oldest continuously published magazine in the United States. It has published articles by more than 200 Nobel Prize winners. What it is doing airing this nonsense — without any counter to the argument — is beyond me.

Virk doubles down: “As we get closer to building out the full technosocial imaginary of the metaverse, we will be proving not only that [this] is possible, but also that it is likely.”

To be clear, he is saying that we are all, right now, puppets in a computer game, one of a billion such games being played by some god? Super-alien AI? Mice?

“While some of us might be players from the ‘outside’ world, trapped in the metaverse playing characters in this virtual reality, like in the Matrix, most of us, statistically speaking, would be simulated AI characters in a simulated virtual world, thinking that we are actually in the ‘real world.’”

Like Chalmers, Virk seems to suggest that in that case we have no agency; that there is no point worrying about climate change or poverty or politics. Red pill or blue, we’re all locked in a sim from which there’s no escape.

This is disturbing on many levels, not least because right now a democratic country and its citizens are being torn apart by a dictator. That’s real, it is happening, people are dying. Get offline and do something about it because you know what? We can.

 


How Decentralized Networks Can Pose the Biggest Threat to Big Tech

NAB

Exponents of the creator economy believe Facebook, Google, and TikTok have sown the seeds of their own downfall. What’s more, their reversal of fortune has already begun in 2022, the start of a sea change in online behavior that will see the workers finally rise to the top.

article here

There are few more vocal enthusiasts for the “creator economy” than Li Jin, co-founder at Variant Fund and founder of Atelier Ventures.

“Imagine a world in which Facebook is owned and operated by its users,” prompts Jin in an op-ed for The Economist. According to her, it isn’t hard to do. “The next step is for creators to build, operate and own the products and platforms they rely on.”


Jin says the cultural impact a creator has is already surpassing that of traditional media. She cites Ryan’s World, a YouTube channel for children that creates “unboxing” videos of toys, which “has over 30 million subscribers, and its most popular video has had more than two billion views.” By contrast, fewer than a million people watch CNN in prime time.

What she terms “the stark imbalance of power between proprietary platforms and the creators who use them” is on the verge of being upended, finally freeing workers from the tyranny of capitalist monopoly.

“Despite directly contributing to the value of platforms by uploading content that engages users, creators resemble an underclass of workers, lacking the benefits and protections of employees or the share options that would let them benefit from platforms’ success.

“Historically, advances in workers’ rights were driven by collective bargaining through unions,” she continues. The most effective means for today’s creative workers to gain greater reward for their efforts is by taking control of the platform itself. Not Meta, but new platforms built on co-operative ownership. One example is Stocksy, a stock photography library that shares profits between its members, who also vote on Stocksy’s policies.

What gives this utopian idea legs is the simultaneous advance of new technologies, principally the decentralized networks, like those that underpin cryptocurrencies. This allows ownership to be distributed via tokens that are earned for contributions to the network and often confer governance rights.
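The mechanics Jin describes — earn tokens for contributions, then use those tokens to govern — can be sketched as a minimal ledger. This is an illustrative toy, not any real protocol or smart contract; the class and method names are invented for the example.

```python
# Illustrative sketch (not a real protocol) of token-based ownership:
# contributors earn tokens for work on the network, and token balances
# confer proportional weight in governance votes.

class CreatorNetwork:
    def __init__(self):
        self.balances = {}  # member -> token balance

    def reward(self, member, tokens):
        """Credit tokens for a contribution (uploading content, curating, etc.)."""
        self.balances[member] = self.balances.get(member, 0) + tokens

    def vote(self, ballots):
        """Token-weighted vote. ballots maps member -> 'yes' or 'no'."""
        tally = {"yes": 0, "no": 0}
        for member, choice in ballots.items():
            tally[choice] += self.balances.get(member, 0)
        return max(tally, key=tally.get)

net = CreatorNetwork()
net.reward("alice", 100)  # heavy contributor earns more governance weight
net.reward("bob", 30)
print(net.vote({"alice": "yes", "bob": "no"}))  # -> "yes"
```

In a real decentralized network the ledger and the vote would live on-chain rather than in a Python object, but the incentive structure — contribution earns ownership, ownership earns a say — is the same.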

“It may sound futuristic and abstract, but it is already happening,” says Jin. She references Axie Infinity, a pet-battle game in which users earn tokens they can sell and convert into income, which now has 1.7 million daily users who have traded more than $2 billion-worth of game assets. Another example is SuperRare, a digital-art marketplace which also gives users a say in the platform’s future via digital tokens.

“In 2022 new, decentralized networks serving the creator economy will reach a tipping-point. The democratization of wealth-building assets through token distribution is an appealing prospect,” Jin writes. “For innovators, rewarding users with ownership can help attract the enormous user bases that will enable these new platforms to outcompete existing, centralized ones.”

According to Jin, “creator ownership eliminates the conflict between platforms and participants and ensures that growth benefits all stakeholders.” Starting in 2022, she predicts more content creators “will realize and harness their power, leading to the birth of a new set of platforms that confer ownership and control — and treat creators as first-class citizens.”

It’s not as if Meta, Google, Snapchat, or others are blind to this potential undermining of their own business model. It will be interesting to see how they respond. Potentially, they’ll begin buying decentralized platforms, altering their terms and conditions, or even offering tokenized ownership.

 

 



Sunday, 27 February 2022

The Influencers Who Believe the Metaverse Is Already a Thing

NAB

Think the metaverse is years away? Not according to social media influencers, more than half of whom already believe they are active within it.

article here

That’s according to a new survey by influencer-marketing firm IZEA, and it’s not the only surprising finding to emerge. For instance, 70% of influencers believe social media will be replaced by the metaverse.

To be fair, IZEA didn’t define what it meant by metaverse “because the concept is still very nebulous,” IZEA CEO Ted Murphy told Business Insider, so respondents could interpret it in their own way.

The IZEA survey took in the views of 1,000 people, asking them about their behavior and expectations around the metaverse. Sixty-two percent of respondents said they were regular social media users, while 23% identified as “influencers” and 15% said they don’t use social media.

The survey found that 56% of influencers say they currently participate in the metaverse, but only 11% of regular social media users said the same. Top activities of those looking to participate in the metaverse include gaming (68%), exercising (53%), and watching media (48%).

Over half of influencers are actively considering ways to make money in the metaverse and 21% say they are already doing so.

Making money as a creator in the metaverse could mean a host of things, such as creating experiences that feature brands, wearing or using branded objects, hosting virtual events like concerts or parties, and co-creating and promoting NFTs, IZEA suggests.

Crypto is leading the way for preferred payment options in the metaverse. 49% would like to be paid in Bitcoin if they earned money in the metaverse; another 9% of influencers said they’d like to be paid in Ethereum, and 5% would accept another cryptocurrency. Only 31% of influencers said they’d rather be paid in US dollars.

But it may take at least a decade for this to become a reality, Murphy said. The first step toward this, he added, is making the tech more affordable, as VR headsets currently cost up to $800.

When asked what is holding consumers back from joining the metaverse, 20% said they are waiting for VR technology to become more affordable and another 12% are waiting for VR tech to improve. Sixty-six percent of consumers looking to join the metaverse expect to purchase a VR device in the next three years.

Murphy added, “Our research shows that influencers are early adopters of these new platforms and share our excitement around the opportunities available in the rapidly developing metaverse.”

 


Thursday, 24 February 2022

Smaller, Smarter Data Needed to Train and Scale AI

NAB

The current way to train successful AI models is to throw massive data sets at them, but that hits a snag with video. The processing power and bandwidth required to crunch video at sufficient volumes in current neural networks are holding back developments in computer vision.

article here

That could change if smaller, higher quality “data-centric AI” were employed, allowing it to scale much more quickly than today’s current rate.

Data scientist and businessman Andrew Ng says that “small data” solutions can solve big issues in AI, including model efficiency, accuracy, and bias.

“Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system,” he explains in an interview with IEEE Spectrum.

Ng has form, which is why IEEE is interested in what he has to say. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s; he cofounded Google Brain in 2011; and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group.

 “I’m excited about the potential of building foundation models in computer vision,” he says. “I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text.”

The compute power needed to process the large volume of images for video is significant, which is why foundation models have emerged first in audio and text contexts like natural language processing. Ng is confident that advances in the power of semiconductors could see foundation models developed in computer vision.

“Architectures built for hundreds of millions of images don’t work with only 50 images,” he says. “But it turns out, if you have 50 really good examples, you can build something valuable. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”
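One concrete data-centric step — cleaning inconsistent labels rather than gathering more examples — can be sketched as follows. This is a hedged illustration of the general idea, not Landing AI's tooling; the function name and sample data are invented.

```python
# Sketch of a data-centric step: instead of adding more examples,
# systematically clean the labels of a small set. When annotators
# disagree on an example, resolve the label by majority vote so each
# input carries exactly one consistent label.

from collections import Counter

def clean_labels(examples):
    """examples: list of (input_id, label) pairs, possibly with
    conflicting labels for the same input. Returns input_id -> label."""
    by_input = {}
    for x, y in examples:
        by_input.setdefault(x, []).append(y)
    # Majority vote per input resolves annotator disagreement.
    return {x: Counter(labels).most_common(1)[0][0]
            for x, labels in by_input.items()}

# Two of three annotators called the defect in "img1" a scratch.
raw = [("img1", "scratch"), ("img1", "scratch"), ("img1", "dent"),
       ("img2", "dent")]
print(clean_labels(raw))  # -> {'img1': 'scratch', 'img2': 'dent'}
```

With only 50 examples, a single mislabeled or ambiguous one is 2% of the training signal, which is why this kind of label engineering matters far more in the small-data regime than it does at web scale.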

He says the difficulty in being able to scale AI models is a problem in just about every industry. Using health care as an example, he says, “Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic.

“The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge.”

That’s what Ng’s new company, Landing AI, is executing in computer vision.

“In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.”

 


Wednesday, 23 February 2022

CTV Requires Converged Content Protection

NAB

The macro convergence of streaming with linear TV requires content security to merge, according to a new report.

article here

Intertrust’s “2022 Secure Streaming & Broadcast Workflows” report reveals broadcast TV will continue to be viable for the foreseeable future, despite the growth of streaming, and suggests the video industry will evolve to a hybrid approach, where linear broadcasting coexists alongside VOD content and live streaming.

“Our research shows that the convergence of streaming and linear broadcast media delivery has evolved to the point that consumers primarily care about what and where they watch, not whose OTT or Pay TV service that they are using,” said Tim Siglin, the survey report’s author.

As broadcasting and streaming are increasingly blending into a single, unified user experience to support this hybrid approach, user interfaces based on Connected TV (CTV) and operator apps will be key, replacing traditional electronic program guides (EPG) for content discovery and navigation. As a result, a converged security solution with layered protection will become the industry norm.

Report sponsor, the security and anti-piracy services developer Intertrust, highlighted the need for a multi-DRM approach to protect premium content, and that “a comprehensive, layered anti-piracy solution is also vital to protect service providers’ revenue.”

This approach calls for not only geo-blocking and DRM but also proactive application shielding and content web monitoring.

The report also found that, despite dire industry warnings, respondents — 63% of whom work in the streaming industry — indicated live-linear and broadcast TV isn’t going away anytime soon. Responding to a question about their vision of TV’s future, 42% of respondents see significant value in converged services that use both streaming and broadcast delivery via standards such as HbbTV and ATSC 3.0.

When respondents were asked about their vision of broadcast’s future, the top answer overall was “Smart TVs will offer converged solutions (broadcast TV and streaming) using HbbTV or ATSC 3.0.”

This is in keeping with the streaming industry’s belief that smart televisions will take on more and more of the converged media consumption workload, perhaps through the integration of live-event streaming and live broadcast events in a consolidated EPG.

This converged approach is perhaps a bit brighter than the alternate reality: no broadcast television. Slightly more than a third of respondents chose “Broadcast TV has no future; streaming to smart TVs or OTT devices will replace it” as more aligned with their vision of broadcast’s future.

Interestingly, despite all the mainstream press about Sky Glass and the Comcast XClass TV, only around 16% of respondents chose the option of cable set-top boxes being replaced by streaming-only smart TVs as aligning with their vision of the future.

 


Tuesday, 22 February 2022

Your Business in 2025 is Data-Driven, Right? (Right?)

NAB

By 2025, smart workflows and “seamless” interactions among humans and machines will likely be as standard as the corporate balance sheet, and most employees will use data to optimize nearly every aspect of their work.

article here

Wait — that’s less than three years away. Is your business anywhere near becoming data-driven?

Analysts at McKinsey have created a guide that the firm thinks most, if not all, companies — including telcos and broadcasters — should be implementing.

Notable technologies include AI and cloud computing to speed data processing and analytics.

Companies already seeing 20% of their earnings before interest and taxes (EBIT) contributed by AI, for example, are far more likely to engage in data practices that underpin these characteristics, it finds.

By 2025, data will be embedded in every decision, interaction, and process, McKinsey predicts.

“Organizations are [in 2025] capable of better decision making as well as automating basic day-to-day activities and regularly occurring decisions. Employees are free to focus on more ‘human’ domains, such as innovation, collaboration, and communication.”

The data-driven culture fosters “continuous performance improvement” to create what the analyst calls “truly differentiated customer and employee experiences,” as well as enabling the growth of sophisticated new applications that aren’t widely available today.

Right now, only a fraction of data from connected devices is ingested, processed and analyzed in real time due to the limits of legacy technology and the high computational demands of intensive, real-time processing.

Three years from now, vast networks of connected devices will gather and transmit data and insights, often in real time. It’s not spelled out, but presumably this is dependent on the rollout of 5G networks and wider deployment of cloud infrastructure.

“Even the most sophisticated advanced analytics are reasonably available to all organizations as the cost of cloud computing continues to decline.”

We can also look forward to leveraging more flexible ways of organizing data, particularly unstructured and semi-structured data. This accelerates the discovery of new relationships in the data to drive innovation, McKinsey says. “This enables sophisticated simulations and what-if scenarios using traditional ML capabilities or more-advanced techniques such as reinforcement learning.”

There will be a bigger role for the chief data officer in organizations. Their remit will widen from tracking compliance to running a fully fledged profit-and-loss division.

“The unit is responsible for ideating new ways to use data, developing a holistic enterprise data strategy (and embedding it as part of a business strategy), and incubating new sources of revenue by monetizing data services and data sharing.”

None of this can happen if data remains siloed and inaccessible to sharing. By 2025, data-driven companies will actively participate in a data economy that facilitates the pooling of data to create more valuable insights for all members.

 “Data marketplaces enable the exchange, sharing, and supplementation of data. Altogether, barriers to the exchange and combining of data are greatly reduced, bringing together various data sources in such a way that the value generated is much greater than the sum of its parts.”

McKinsey’s final note is around data protection. It forecasts that organizations will have fully shifted toward treating data privacy, ethics, and security as areas of required competency, driven by legislation such as GDPR and the California Consumer Privacy Act (CCPA).

Automated, near-constant backup procedures ensure data resiliency; faster recovery procedures rapidly establish and recover the “last good copy” of data in minutes, rather than days or weeks, thus minimizing risks when technological glitches occur.

Also, AI tools will become more effective at data management — for example, by automating the identification, correction, and remediation of data-quality issues.
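The kind of automated identification and remediation McKinsey describes can be illustrated with a minimal sketch. This is a toy example of the general pattern, not any vendor's tool; the function name, value ranges, and sample readings are invented.

```python
# Toy sketch of automated data-quality remediation: flag readings that
# are missing or outside a plausible range, then impute them with the
# median of the valid values so downstream analytics stay usable.

def remediate(values, lo, hi):
    """Replace missing/out-of-range readings with the median of valid ones."""
    valid = sorted(v for v in values if v is not None and lo <= v <= hi)
    median = valid[len(valid) // 2]
    return [v if v is not None and lo <= v <= hi else median
            for v in values]

# A sensor feed with one dropout (None) and one impossible spike (999.0).
readings = [21.0, None, 22.5, 999.0, 20.5]
print(remediate(readings, lo=-40.0, hi=60.0))
# -> [21.0, 21.0, 22.5, 21.0, 20.5]
```

Real data-management tooling would learn the plausible ranges and correction strategies from the data itself rather than hard-coding them, which is where the AI comes in.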

“Altogether, these efforts enable organizations to build greater trust in both the data and how it’s managed, ultimately accelerating adoption of new data-driven services.”

 


Where the Metaverse Is Moving: 5G, AI, NFTs and Social Audio

NAB

Rather than a singular, encompassing metaverse, we may see many “verses” in the future, according to ETC@USC, the Entertainment Technology Center at the University of Southern California’s School of Cinematic Arts.

article here 

ETC has rounded up key trends from the CES show in Vegas at the start of the year into a new CES 2022 Report. It suggests that AR is poised for wider consumer adoption, and that NFTs could potentially become “a backbone of CRM and understanding for commercial, marketing, and experiences.”

Perhaps the most intriguing trend, which ETC is not alone in spotting, is that of “social audio.”

Its feedback matters since the ETC is supported by the major studios — Universal, Disney, Paramount and Warner Bros. — and other entertainment industry stakeholders including Epic Games, Google, Microsoft and Technicolor.

Social Audio

ETC suggests that the human voice could be described as the original mass medium and, as social audio, it is now the latest trend.

United Talent Agency executive director of audio Kristin Myers moderated a CES 2022 panel discussion on the topic with Clubhouse head of community and creators Stephanie Simon and Audio Collective co-founder Toni Thai Sterrett, who offered up the definition of social audio as “live group audio,” adding, “the key is the sense of participation and interactivity.”

At the show, German R&D powerhouse Fraunhofer demonstrated a capability that consumers have craved for years, the ability to adjust the volume of dialogue separately from music and background audio. The MPEG-H Audio technology will allow consumers to choose from presets or create their own settings for both broadcast and streaming audio. It is now possible, for example, to switch between different languages, adjust the volume of a sports commentator, enhance the dialogue, and choose from several audio description options.

The ABCs of NFTs

CES held its first-ever panel discussion on NFTs — non-fungible tokens, for the uninitiated — with two experts who have grown up with the nascent industry sector. United Talent Agency head of digital assets Lesley Silverman noted that her company established the new division in March 2021. Her guest was Art Blocks founder and CEO Erick Calderon, who first got involved with NFTs in 2017 by following a thread on Reddit.

Both had advice for newbies: Don’t let the jargon intimidate you and store your seed phrase in a cold storage device.

“The moment you register that seed phrase, you enter a new phase of your life, and you have to treat it like the valuable asset you have,” warned Calderon. “And be careful with your private keys. This is still the Wild West.”

Calderon noted that, because a smart contract is operated by computer, “it’s also the most ruthless contract in the world.” He pointed out that it’s the secondary market for NFTs that allows artists to participate in their own success. “OpenSea is the largest market for the secondary market, with creator royalties built into their contracts,” he said. “We hope that as other marketplaces pop up, they also respect that.”

5G, the Metaverse and AI

Going into 2022, 5G will be the connective tissue, said Steve Koenig, VP of research at the Consumer Technology Association. He noted that standards group 3GPP will publish protocols and requirements for industrial IoT applications, which will open the floodgates to new case studies.

5G emerges as the “Edge of Everything” solution for smart device delivery, writes ETC correspondent Paula Parisi. She quotes Qualcomm CMO Don McGuire, who did tours at Intel, NBC Universal and Dell, as saying that 5G is “pervasive… this unifying connectivity fabric.”

Koenig pointed to the metaverse as another trend that is “closer than you think.”

“All the building blocks are already present and in play — cloud, 5G, haptics, volumetric video,” he said. “Now it’s about assembling them into an experience. The metaverse is the next generation Internet that will provide immersive digital experiences which will become inextricably linked with our physical reality. For now, it’s an evolving story.”

The framework for what ETC calls the Multiverse has been growing steadily for decades. While there is no single definition for the Multiverse, it encompasses the confluence of many formerly disconnected aspects of media and entertainment combined with advancing technologies and a cultural shift from passive consumption of entertainment to active participation and immersion.

VR and AR markets are expected to grow to nearly $600 billion by 2025, according to industry research quoted by ETC, and announcements of new products from big-name companies like Sony, Lenovo, HTC, Panasonic and others at CES suggest robust consumer demand for “reality-altering” products.

“Commercial applications have gained acceptance, eyewear is vastly improved, and several products suggested that eyeglass and contact lenses are around the corner,” ETC says.

Another “key ingredient technology” for 2022, said Koenig, will be AI. This will impact all sectors.

Among consumer applications for AI is that of computational photography. The Cinematic mode on the new iPhone 13 series, for example, points to what might be possible if we rethink these ideas as automated camera mount systems did just a decade ago.

“For a creator or one-man band, these tools are getting close to a prosumer-level solution that features a range of near-professional capabilities of capture, cinematic art, instant production and delivery,” observes ETC.