Thursday 29 September 2022

IBC 2022: The value of live is richer than ever

 copy written for Blackbird


article here

As the old saying goes, ‘you never know what you’ve got until it’s gone’. It was great to be back at IBC again.

There’s been much industry soul searching about the value of trade shows since attendance was abruptly curtailed in 2020. There has been criticism too of the cost of exhibiting, and questions over whether there are better and more productive means of targeting customers and communicating the corporate message.

Face to face is always valuable

Some of these points remain valid. Many live events are returning with a strong hybrid element of live streaming and IBC was no exception. The entire industry will be far more judicious about which events it attends in future, not least to cut back on its carbon footprint. Video conferencing and remote workflows are now ingrained in everything the media and entertainment industry does.

But you could feel it in the build up to the show itself. There is an excitement about meeting face to face that can’t be replicated online, and that played out with a genuine buzz in Amsterdam.


I’m sure anyone who went to IBC felt this too. It was great to be back at a show that meant business. The haphazard Brownian motion of serendipitous networking is a business benefit that only physical presence can bring. Plus, there was a delightful humour to connections and conversation that is lacking in the more straitjacketed schedules of a Zoom meeting.

It helped that there was a lot to talk about too.

The big Clouds innovating

Clearly the major theme trending in the conference and on the show floor was the tremendous gains made in delivering end-to-end production workflows in the Cloud. All three major public Cloud providers were at the RAI in force to showcase their support for the broadcast and media community.

Google Cloud Platform was urging the industry to use data, leveraging AI and ML, to hyper-tailor the streaming experience down to the individual level.

Amazon Web Services claimed to be the first Cloud provider to have achieved five of the goals of the MovieLabs’ 2030 Vision to move all Hollywood production to the Cloud.

Strengthening partnerships

Meanwhile, Microsoft was focussed on live and the results are stunning. Blackbird was excited to partner with Microsoft on its IBC booth and to work with Evertz and MediaKind in demonstrating a live end-to-end production of an NBA match running on Azure.

Blackbird also enjoys a close working relationship with EVS, which was demonstrating Blackbird’s lightning-fast browser-based editing integrated with its end-to-end live production asset management platform, MediaCeption.

We partnered with our friends at LiveU to demonstrate how production teams can send high-quality video from anywhere and remotely edit, enrich and publish their live and VOD content to any destination in an instant.

Cross-vendor collaborations like these show that live production in the Cloud is robust, super-efficient and reliable today for any size and scale of production. What’s more, when it comes to putting together the tool sets that work best in the open, IP-based environment of the Cloud, the best-of-breed choice for professional editing is Blackbird.


Exciting times as we explore new markets

I’d like to take this opportunity to thank every member of the Blackbird team on site and back home who helped make it such a successful show for us. I know that Sumit Rai, our new Chief Product Officer, was impressed by what he saw and what we achieved at IBC – not least in being nominated for the Best Stand of the Show Award! Well done everyone.

As you may know, part of Sumit’s role with us is to lead the strategic development of our product portfolio into fast-growing video markets. One of those is the Creator Economy of social media influencers and independent artists, which is already estimated to be worth over $100bn and which remains underserved not just by IBC but by pretty much all trade shows. It’s a market primed for our cloud-native, ultra-efficient tools and one we will be reaching out to further.

Wednesday 28 September 2022

Inclusion and accessibility take centre stage

InBroadcast

Over the past few years, the film and television industry has finally begun to focus wholeheartedly on inclusion and accessibility. As a result, the demand - and the viewer expectation - for high-quality captions and subtitles across all platforms and content continues to increase.  

article here p36

Broadcasters, producers, content creators, and streamers are looking for expert service, fast turnarounds, superior customer service, and a partner who can help them keep current with regulations.

To meet this rising need, Take 1 recently became part of the Verbit family of companies and has partnered with VITAC, North America’s largest captioning provider, to provide greater access – captions and subtitles as well as transcription, audio description, and dubbing – in the media and entertainment sector. 

“The combination enables us to offer a variety of new, enhanced, and expanded services and products, including live captioning, keep clients up to date with the latest technologies and innovations, and work as a true one-stop shop for all access needs,” says Louise Tapia, Take 1 CEO. 

“Getting all your accessibility requirements from one provider means working with one point of contact, one billing department, and most importantly, receiving consistent quality in a single workflow from deliverable to deliverable and from project to project. Using one vendor also can save money as many offer volume discounts for larger projects or when multiple services are ordered.” 

Stanza is the captioning and subtitling software application from Telestream. It was created to address the challenges of the high initial cost of obtaining a broadcast-quality captioning tool, explains Ryan Irons, Captioning Product Manager. 

“Stanza provides a low-cost entry point for organizations requiring high-end captioning capabilities by offering a subscription-based business model.” 

To help address the challenges of remote working, the client-server deployment model of the product allows captioning editors to work from any location from a simple browser-based editing console, regardless of where media files are stored.  

“Stanza uses the Telestream GLIM engine to play back original high-res media instantly, without any need to waste time transferring huge files across networks (on-prem or remote) or to spend time and energy creating proxies.”

It includes optional access to the AI-powered Timed Text Speech auto-transcription service which supports over 100 languages. Stanza also integrates with the Vantage Timed Text Flip text transcoder and processor to provide automation for captioning workflows. 

Stanza is built on the Telestream Media Framework, the same technology that underpins several Telestream products and services such as the Vantage Media Processing Platform and Telestream Cloud Transform. The Media Framework includes format and container support developed from over twenty years of experience, having been tested in some of the most challenging broadcast use cases around the world. 

Stanza uses the advanced IMSC 1.1 profile of TTML as its native format, and supports complex Unicode scripts, bidirectional text, vertical text layouts, ruby text, and other features needed for subtitling in all languages.

Stanza also supports all modern export formats, including captions embedded into media files, subtitle overlays and burn-ins, SCC and MCC caption files, and TTML, WebVTT, SRT, and EBU-STL subtitle files.
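For readers unfamiliar with these formats, the differences are mostly a matter of syntax and encoding. A minimal WebVTT file, for instance, carries each caption as a timed cue; the sample below is a generic illustration, not output from Stanza:

    WEBVTT

    00:00:01.000 --> 00:00:04.000
    Good evening, and welcome to the programme.

    00:00:04.200 --> 00:00:07.500
    <i>Tonight: captioning moves to the cloud.</i>

SRT is almost identical but numbers each cue and uses comma decimal separators, while SCC encodes caption data as hex byte pairs and EBU-STL is a binary format – hence the value of a tool that converts reliably between them.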

Most broadcasters looking to utilize captioning and subtitling have three main goals, outlines Bill McLaughlin, Chief Product Officer, Ai-Media. 

Firstly, they want to caption more content than before as they expand their offerings into over-the-top streaming. Secondly, they want to power this through APIs and the cloud, without increasing on-premises infrastructure and human workflows. And thirdly, they want to leverage new technologies to reduce per-hour caption production budgets. 

Ai-Media’s end-to-end captioning and subtitling solutions allow broadcasters to tick all these boxes. Its iCap Alta IP video encoder provides a resilient workflow for captions and subtitles across both compressed and uncompressed IP video, and it’s a fully virtualised, API-powered, pure software system. iCap Alta integrates with Ai-Media’s Lexi automatic live subtitling solution, which offers high accuracy and reliability at a compelling price point.

“Broadcasters can use Lexi as a 100% automated captioning solution through a SaaS subscription,” McLaughlin says. “Or with Ai-Media’s Smart Lexi, they can leverage a hybrid automated captioning solution with added quality enhancement, management and review from our experienced broadcast services team. Lexi and Smart Lexi are also available across pre-recorded or VOD content with a simple API workflow.  

“When you add these solutions together, broadcasters finally have a complete end-to-end solution that offers full coverage, high quality captions and subtitles at low cost. And not only that, one that supports a modern cloud-based approach that allows broadcasters to fully leverage automated workflows. 

“Trusted by the world’s leading networks, Ai-Media is the perfect partner for broadcasters looking to caption and subtitle their content. Since acquiring EEG Enterprises in 2021, we have supercharged our service processes through automation, cloud and IP video to deliver ever-increasing captioning accuracy and cost-efficiency. Ai-Media is today a one-stop shop of captioning solutions and the only vendor that offers all the software, hardware and human services broadcasters need, in one place.” 

A growing, aging population means more deaf and hard-of-hearing people, who consume media across all kinds of platforms beyond just the regulated over-the-air model. People expect to have their content captioned, regardless of where and how they consume it.

To aid content creators and distributors in their efforts to make their content as broadly accessible as possible, ENCO is building out a powerfully scalable Cloud version of enCaption, its Automated Speech Recognition (ASR) product. This introduces a microservices-based, containerised processing and caption management environment designed to scale and flex to myriad Cloud-based captioning workflows (while ENCO also continues to improve its on-premise and new hybrid-Cloud offerings).

“What’s more, a new and highly robust API allows for third-party integration of automatic captioning into various automation architectures, MAMs, and more,” says Bill Bennett, Media Solutions & Accounts Manager, ENCO Systems. “As always, custom word libraries can be added to support uniquely spelled names or terms, from both manual and automated ingest methods.”
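ENCO has not published endpoint details here, so the Python sketch below is purely hypothetical – the URL, routes and field names are invented for illustration – but it shows the general shape of how an automation system or MAM might drive an ASR captioning service over a REST API:

    import requests

    # Hypothetical service and credentials - not ENCO's actual API.
    API = "https://captioning.example.com/v1"
    AUTH = {"Authorization": "Bearer <api-key>"}

    # Register a custom word library so uniquely spelled names are recognised.
    requests.post(f"{API}/word-libraries", headers=AUTH,
                  json={"name": "newsroom",
                        "terms": ["Schiphol", "MediaKind"]}).raise_for_status()

    # Submit a media file for automatic captioning.
    job = requests.post(f"{API}/jobs", headers=AUTH,
                        json={"media_url": "https://example.com/bulletin.mp4",
                              "word_library": "newsroom",
                              "output_format": "scc"}).json()

    # Poll for the finished caption file.
    result = requests.get(f"{API}/jobs/{job['id']}", headers=AUTH).json()
    print(result["status"], result.get("caption_url"))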

Automatic transcription is much like automatic captioning with ASR, and brings the benefit of searchable text files. These are helpful for on-the-spot live interviews, letting commentators and producers instantly call up the transcript and skim for the keywords needed to dive deeper into a story, or find those unique sound bites hidden deep within a recorded interview.

Automatic translation is also becoming increasingly crucial in an ever-shrinking world, so much so that ENCO recently acquired TranslateTV, a company specializing in fast and accurate on-premise English-to-Spanish translation. With many more languages available via the Cloud, ENCO’s enTranslate product can concurrently generate dozens of different language versions of what’s said, live and in real time, for an incredibly diverse worldwide audience.

VoiceInteraction has been continually developing its core speech processing technology, while also expanding its coverage to new production and distribution workflows. In addition to an advanced new decoding strategy that allows for increased accuracy on unprepared speech, speech translation is now produced for live sources, enabling multiple subtitle languages per source stream.

“Our underlying proprietary speaker identification and language identification modules were overhauled, for new classifications produced with lower latency and higher accuracy,” says head of marketing Marina Manteiga. 

Audimus.Media has been traditionally associated with an external closed caption encoder/Teletext inserter device to add the automatic subtitles to the SDI video signal. For markets with restricted budgets, CTA-708 captions can now be encoded into SDI signals while still offering a caption monitoring output, depending on the card used. Given the ongoing transition to IP-based production workflows, Audimus.Media can now operate as an ST 2110-40 captioning stream generator, with native SMPTE 2110 support added. 

“To cope with market specificities, VoiceInteraction has been expanding the native formats that can be produced and sent as a contribution to MPEG-TS multiplexers or muxed by Audimus.Media into an MPEG-TS stream: DVB-Teletext, DVB-Subtitling, ARIB B24, SMPTE ST 2038, and ETSI EN 301 775. New transport protocols are also supported: SRT and RIST can be used as input or output.

“One of the longstanding challenges for our customers has been the distribution of live captions in their VOD platforms, with seamless audio and video synchronisation. Our latest product update combines encoded video recording with an embedded editor, exporting the subtitles into any NLE with automatic clip markings.” 

Digital Nirvana recently announced upgrades to its Trance self-service SaaS application to improve ease of use and keep pace with the latest trends in the media production environment.

Trance 4.0 can be either integrated within a media company’s workflow or used as a standalone platform to generate and review transcripts with the aid of ASR, and to export them in various formats for a variety of use cases.

Russell Vijayan, director of product at the company, explains that professional captioners can now easily convert the transcripts to time-synced closed captions, using a combination of parameters and NLP technology to adhere to grammatical requirements.

Users can further use a combination of MT, lexicon algorithms, and different presets to localise the captions into different languages, and view them in a dual-pane tab for review. Enterprises will largely benefit from new features including elaborate account management and real-time account monitoring.

Vijayan says the enhancements to Trance are based on customer requests and give users greater capabilities and a better user experience.

For instance, the new stand-alone Transcription app can be used to upload media assets and quickly access highly accurate, time-coded, speaker-segmented, automatic transcripts in the transcription window. Users can now get their content quickly transcribed, reviewed, and exported as a simple SRT for display as captions or time-coded VTT, JSON or other formats to ingest into various Web platforms or MAM systems. 
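Field names vary from product to product, and the snippet below is an invented illustration rather than Trance’s actual schema, but a time-coded, speaker-segmented JSON transcript of the kind described typically looks something like this:

    {
      "media": "interview_ep12.mp4",
      "segments": [
        {"speaker": "S1", "start": 12.40, "end": 15.85,
         "text": "Thanks for joining us today."},
        {"speaker": "S2", "start": 16.10, "end": 21.30,
         "text": "Glad to be here. Shall we talk captioning workflows?"}
      ]
    }

Each segment’s start and end times (in seconds) are what allow the same data to be re-exported as SRT or VTT cues, or ingested by a MAM for search.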

Automatically synced timecodes can be readjusted using the spectrogram and manual inputs, and proper nouns and grammatical elements can be adjusted automatically as required.

Considering that different languages have different display parameters, Trance adds the ability to define a separate set of caption-splitting parameters for languages other than the source language. Users can also import an existing caption file to generate localised text.

The new version comes with automatic checks against a list of parameters, so users can identify any non-compliance with publishing platforms’ guidelines.

AI Motion Pictures Are Almost Ready for Their Close Up

NAB

Independent filmmakers are experimenting with AI tools today. While these tools are not yet ready for their big-screen close-up, it won’t be long until the technologies are widely adopted in Hollywood.

article here

The most high-profile text-to-image AI is DALL-E 2, released by OpenAI. The model does not offer motion picture sequences – but the odds are that it soon will. OpenAI is likely working on this as we speak.

LA-based director Paul Trillo has been creating stop-motion animations using it.

AI art is in its infancy, making fledgling attempts at ‘temporal coherence’: the ability to make something move as we expect it to move in film and video (not forgetting that film is a set of still images replayed 24 times a second).

Deforum is a text-to-motion AI tool based on the Stable Diffusion model (by Stability AI). AI artist Michael Carychao has used it to show how AI tools can re-create famous actors.

“In a couple of years, we’ll be able to write ‘Brad Pitt dancing like James Brown’ and be able to have a screen-ready coherent result,” reckons Pinar Seyhan Demirdag, co-founder of AI-based content developer Seyhan Lee, blogging on Medium.

Another example using Deforum is provided by an artist known as Pharmapsychotic. The animated sample in this tweet is claimed to be raw output with no post processing or interpolation.

“Give it a couple of years, and you’ll be able to film a scene of four random people walking down an aisle, and to turn them into Dorothy and the gang in the Wizard of Oz,” comments Seyhan Demirdag. “Arguably, you can do this right now, but it will be wonky, smudgy, and missing 8K details, so not ready for the mass audience.”

There are two ways of transferring a style right now: one uses a pre-defined style, for example a Van Gogh painting; the other uses text-to-image models such as Disco Diffusion and VQGAN+CLIP, where you guide the style with words, referred to as ‘prompts.’

“These prompts are your most significant creative assets, and many people who make art with text-to-image tools also call themselves ‘prompt artists’.”
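To make the mechanics concrete, here is a minimal sketch of prompt-driven generation using the publicly released Stable Diffusion weights via Hugging Face’s diffusers library. The model ID and prompt are illustrative; tools like Deforum and Disco Diffusion wrap similar pipelines with animation controls:

    # pip install diffusers transformers torch
    import torch
    from diffusers import StableDiffusionPipeline

    # Load the open-source Stable Diffusion checkpoint (illustrative model ID).
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
    pipe = pipe.to("cuda")

    # The prompt is the creative asset: wording, style cues and artist
    # references steer the output.
    prompt = "portrait of a dancer, in the style of Van Gogh, oil on canvas"
    image = pipe(prompt).images[0]
    image.save("dancer.png")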

There are even sites suggesting the best prompts to work with specific AIs – like this one for DALL-E 2.

Considerable work is being done to incorporate generative art models into games engines.

Daniel Skaale, who works for Khora VR, has posted a sample in which he carried a 2D image created in the text-to-image AI Midjourney into the Unity games engine.

As good as this is, generating in real time in Unity or Unreal Engine remains unexplored territory with huge potential, says Seyhan Demirdag.

Face replacement

@Todd_Spence made a mashup of Willem Dafoe as Julia Roberts in Pretty Woman. Just for fun of course, but examples like this, using AI apps like Reface, give us a glimpse into how AI will help optimize production in future.

“Soon, studios will simply need to rent Brad Pitt’s face value rights for him to appear in the upcoming blockbuster film without having to leave the comfort of his couch,” says Seyhan Demirdag.

Similar models have already been used. For example, Focus Features’ Roadrunner: A Film About Anthony Bourdain used a deepfake voice to have Bourdain say things he never actually said (controversial mainly because the AI wasn’t acknowledged up front). The Andy Warhol Diaries also used AI to mimic Warhol’s narration, but since this was credited in the title sequence up front, the Netflix doc received plaudits for its innovation.

As with any other technology in its infancy, AI art still lacks temporal coherence: the capacity to render movement, like doing jumping jacks or walking down the street, consistently over time.

“Right now, you can produce mind-bending, never-before-seen sequences with AI, but you cannot do everything (yet),” says Seyhan Demirdag.

She adds, “In a few years, we’ll be able to generate coherent and screen-ready full features that are entirely generated. If you are a producer, director, studio owner, or VFX artist who wants to stay ahead of the curve, now is the time to invest in this technology; otherwise, your competition will be generating headlines, not you.”

Advanced Multimodal AI is Busting Out of the Lab and Into Your Life

NAB

Apple announced the iPhone in 2007. Now, we can no longer fathom a world without a smartphone in our pockets. The same happened with social media. Facebook and TikTok govern our virtual relationships and how we are informed about news. We’re on the verge of a third technology revolution, which will blend with and be fueled by the ubiquity of devices and algorithms.

article here 

AI has “world-shaping potential,” Alberto Romero, who runs the newsletter The Algorithmic Bridge, writes in an excerpt posted on Medium.

It’s not any old AI that will impact us in ways we can only imagine: it’s the large AI models, and the integration of those technologies into the Internet of Things.

Romero lists in The Algorithmic Bridge the rise of various AI tools and focuses — particularly concentrating on the tremendous gains made in the field of large language models.

From 2012 to 2022, the AI field evolved at an unprecedented rate.

Today, generative large language models, together with multimodal and art models, dominate the landscape. Tech giants, ambitious startups, and non-profit organizations aim to leverage their potential, either for private benefit or to democratize their promise.

These include OpenAI’s release of GPT-3 — arguably the best-known AI model of the decade — and Google’s own LaMDA, the AI that earlier this year was claimed to be sentient by former Google engineer Blake Lemoine.

Even this has been superseded at Google by PaLM, published in April. PaLM currently holds the title of the largest dense language model and has the highest performance across benchmarks. Romero believes it’s state-of-the-art in language AI.

However, the next major advance is already in training. This phase is focused on building AI tools that mimic our other senses, notably hearing and sight — but also human creativity.

OpenAI’s DALL-E 2 is the most well-known AI art model (also known as diffusion-based generative visual models). Others include Microsoft’s NUWA, Meta’s Make-A-Scene, Google’s Imagen, Midjourney, and Stable Diffusion.

“These models, some behind paid memberships and others free-to-use, are redefining the creative process and our understanding of what it means to be an artist,” Romero says.

But that’s no longer news. Projecting the evolution forward, Romero assumes that these AI models combining language, multimodal, and art-based features are going to become our next virtual assistants.

Advanced AI is going to be a “truly conversational Siri or Alexa”; your next search engine will be an “intuitive and more natural Google Search or Bing”; and your next artistic tool “will be a more versatile and creative Photoshop.”

The large-scale AI models are emerging from the lab to find a home in consumer products.

“This shift from research to production will entail a third technological revolution this century,” Romero maintains. “It will complete a trinity formed by smartphones, social media, and large AI models, an interdependent mix of technologies that will have lasting effects on society and its individuals.”

How is it all going to redefine our relationship with technology and with one another?

We’ll find out sooner rather than later.

 

Tuesday 27 September 2022

Working in the cloud allows creative people to be creative

copy written for Mission 

Article here 

It was good to be back at IBC, which had a real sense of business being done and genuine excitement for the future of our industry. No-one will have missed the big theme trending on the show floor and in the conference: the move to Cloud. All of the major public cloud vendors – GCP, AWS and Azure – were at the Amsterdam RAI in force, as were the CTOs of all the major Studios. High on their agenda is the wholesale move of creation-to-distribution workflows to the Cloud, as outlined in the MovieLabs 2030 Vision.

We were proud to have played a small but significant role in demonstrating, with AWS, camera-to-cloud workflows, where we also had a chance to learn how our cloud-based platform Origami plugs into the future of media content creation to meet MovieLabs’ Vision.

Built on the philosophy that creativity should fuel technology, Origami is one of a growing suite of technologies designed to reduce the technical constraints for feature film and TV drama so creatives can focus on what’s important.

If it wasn’t clear already, IBC2022 underlined the seismic changes that have accelerated over the last couple of years: we are moving forward fast as an industry into the Cloud as the logical evolutionary step.

Let’s consider where the industry has come from:

The traditional method for making film and TV, spanning nearly a century, was for film negative to be processed in a lab and for sound and picture to be assembled, laboriously (remember Steenbecks?), and synced in editorial.

Twenty years ago, the digital intermediate process greatly enhanced this by allowing greater manipulation of footage across editorial and VFX before conform, grade and final master, but post-production remained a rigidly linear workflow. It could be no other way: the technology had reached its limits.

Not any more. The transition of the entire post-production chain to the Cloud is in full swing and represents a paradigm shift from Post 2.0.

You can call it nonlinear if you like, but a more accurate term for Post 3.0 is collaborative. Once media is in the Cloud, everyone can access it simultaneously and work on different aspects of post in parallel. This not only speeds production by smoothing away the inefficiencies of moving media from A to Z, but also enriches the potential for creative collaboration.

This is exactly what we were demonstrating with AWS at IBC.

Using Cloud for post is far from new – the industry has been using servers held in data centres off premises for aspects of the post workflow for well over a decade. VFX was among the first areas of production to use bursts of Cloud compute to speed rendering. More recently, camera-to-cloud tools like QTake have enabled early viewing of footage in proxy form.

The difference between that way of working and Post 3.0 is that you can now send Original Camera Negative (OCN) to the Cloud and work with optimised images as soon as they are captured – irrespective of where your creatives might be.

Traditionally the DI has been done on premises with dedicated hardware, but the move to Cloud means you can all but divest yourself of your machine room, with all the headaches of capex, maintenance and heat/power costs that entails.

This game-changing advance could not come a moment too soon. The sheer volume of content being commissioned by studios and streamers together with the heightened demand to hit a succession of tight deadlines presents several challenges to facilities.

The first is that, with so much content coming down the pipe, there are not enough vendors in any local market to actually deal with it. Consequently, the post-production work on tentpole features and major episodic TV needs to be spread internationally. The challenge is how to ensure that file sharing and communication are seamless.

A second issue is that even when you go from facility to facility the experience is inconsistent. This is the case even locally, when hiring multiple shops in London, for example, let alone when exporting that model across territories where some vendors won’t have the experience of delivering into Hollywood.

We are seeing a lot of facilities having to step up and deliver on expectations of quality they might not have had to manage before.

These are the challenges that Origami is designed to address. Origami is a suite of tools for post-production; the first product released to market, Phoenix, automates the delivery of VFX files.

We’re not the first to automate VFX and DI/drama pulls, but Phoenix, running on Origami, is the first to take advantage of the scalability and global reach that Cloud brings. A unique feature of Origami is that it goes to your media, eliminating unnecessary replication, and then delivers to the defined vendor, in keeping with the MovieLabs 2030 Vision. You simply submit your cut file of choice (EDL, ALE, XML), which Phoenix converts to ACES-compliant OpenEXR files (or DPX for legacy workflows) and delivers to designated stakeholders as and when needed.
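Phoenix’s internals aren’t public, but the first step of any automated pull – reading the cut file to work out exactly which frames are needed from the OCN – can be sketched with the open-source OpenTimelineIO library. The file name below is a placeholder, and the actual ACES/OpenEXR conversion would be handled by a separate image pipeline:

    # pip install opentimelineio
    import opentimelineio as otio

    # Read the editor's cut; OTIO ships adapters for CMX 3600 EDLs among others.
    timeline = otio.adapters.read_from_file("episode_101_cut.edl")

    # List each clip and the source range to pull from the camera negative.
    for clip in timeline.each_clip():
        tr = clip.trimmed_range()
        print(clip.name,
              otio.opentime.to_timecode(tr.start_time),
              otio.opentime.to_timecode(tr.end_time_exclusive()))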

There are not enough skilled people and not enough hours in the day to cater for the scale and speed of today’s production output. Trying to do this manually will burn time and money.

Cloud-native tools like Origami erase those inefficiencies and free talent to do the tasks they actually want to do: creating art. Just because you can work in the Cloud doesn’t mean your workflow or the tools that you use need to change. Also demonstrating Cloud capabilities with AWS at IBC were Moxion, Pixitmedia, Filmlight, Adobe, Autodesk, Blackmagic Design, Qtake, Colorfront and more.

Editors and colorists, for example, can still work as they did before, but linking high resolution media and masterfiles in the Cloud will open up new creative opportunities. This includes the opportunity to work in parallel with other departments and to introduce AI/ML to enhance production. Already, highly repetitive manual tasks like rotoscoping are being driven by AI tools in the Cloud.

Enabling parallel workflows will accelerate production. With Origami, Mission Digital is a part of this pan-industry forward momentum.

 

How the Metaverse Is Actually Gaming By Design

NAB

And the nerds shall inherit the Earth. Gaming companies like Roblox, Epic Games (Fortnite) and Minecraft (owned by Microsoft) are building the Metaverse in their own image, and we are all going to be players.

article here

“We have reached the point in which the constraints to simulation fidelity and functionality have relaxed enough that the expertise in gaming can be applied … to the Metaverse,” says Matthew Ball, a venture capital investor and respected commentator on all things Metaverse, in this video.

For that reason, he argues, the leaders of tomorrow are today's gaming companies.

Ball is usually on the money in delivering insight, but his vision is blunted here, perhaps because in this video sermon he is ultimately promoting the media platform Big Think. Rather than presenting any big ideas, he rehashes one we’re all pretty accustomed to: that gaming technology and gameplay are front and centre of the experience in the spatial internet.

Game development basically means creating an entire new universe from scratch, so it stands to reason that this expertise has a drastic effect on the Metaverse.

Roblox has, on the average day, about 55 million users; Minecraft is about 80% that size; Fortnite has 70 to 80 million estimated monthly users, and even more engagement time [figures quoted by Ball].

His reasoning for why gaming companies are most likely to be masters of the metaverse is two-fold. One is that the ‘super-fidelity’ necessary to deliver an ultra-real experience is now available to mainstream consumer entertainment. Previously, graphical ‘virtual’ reality was only possible for industries like medicine and the military, which had the money to put behind the computing muscle.

Now, Ball reports, the US and British militaries are using Epic’s Unreal Engine for simulation training for active combat, and Johns Hopkins University is performing live-patient spinal surgery using game engine-rendering technology.

The second reason is that gaming development has, over decades, worked through many of the issues confronting those inhabiting, trading in, navigating and socializing within, virtual worlds.

“All of the expertise that is now relevant for the Metaverse has been built and incubated in the gaming sphere. That’s not just design principles – obviously they’re best at building a virtual world – but it’s also more nuanced.

“They have constructed complex marketplace economies, and most importantly, all of the hardest technological problems for the Metaverse - the challenges of networking globally, the constraints of having affordable but super powerful hardware to actually produce a real-time-rendered simulation. The world's best expertise comes from the gaming sector.”

These platforms focus not on game-like objectives – win, kill, shoot, defeat, score – but on non-game-like ones: identify, express, socialize, build, explore.

“That's one of the ways in which many of us have belief that this is a scalable experience, because it meets a human want, and it demonstrably brings many people together.”

But as one social media user comments on Ball’s video, gaming and gamers should not escape critical scrutiny.

“The big problem is that the gaming industry, while highly creative, has also fallen into a lot of dark patterns which manipulate the gamer,” posts a user called hekette. “These patterns need to be reduced or eliminated or we're likely to end up with a depressing dystopic future. They will be used for more than just making money.”

 


Monday 26 September 2022

Gamers Shall Inherit the Metaverse

NAB

The metaverse may be many things to many people, but one thing everyone agrees on is that a lot of people will come together in its virtual worlds to play video games.

article here

“Video game developers are likely to be the primary designers of the metaverse because at its core the metaverse is a video game,” Janine Yorio, head of real estate at online investment platform Republic, and Zach Hungate, director of gaming at Everyrealm, a metaverse innovation and investment company, declare in CoinDesk.

Video games, they say, are the primary activity that will bring us to the metaverse in the first place, and then have us coming back over and over and over again.

“Video games have become a primary form of socialization,” they argue. “The metaverse generation of children up to age 18 today has very different expectations from technology — even compared to millennials. They spend time in interactive environments where they play games, socialize with friends, build small businesses, and buy and sell things.”

Today’s youth want the socialization of playing together online. Yorio and Hungate believe the next set of metaverse video games will not be afterthoughts or mini games but the main experience.

This will include shooters like Fortnite as well as old-school arcade games that resemble Super Mario Bros. or Pac-Man. Other times we will procrastinate with more mindless pattern games like Candy Crush, or build worlds as in The Sims or Second Life.

If that’s beginning to sound a lot like Ready Player One, then Yorio and Hungate would concur. Author Ernest Cline’s vision of the metaverse is of a grandiose virtual reality where people spend the majority of their time, offering an escape from a reality battered by social, economic and political strife.

The metaverse, in their conception, will consist of robust and highly customizable video game worlds. The games will be of an extremely high quality, often built by AAA gaming studios. They may also be built on the blockchain and it is video game developers who will be the architects of this world.

“When a person turns a door knob in the metaverse, the door swings open. That is not just a 3D architectural model, but a world with cause and effect — and few coders outside of gaming studios know how to program that world.”

Video game developers are typically among the best and brightest graduates of computer science educational programs because video game development is highly complex.

“The person writing the code must think in 3D,” they say. “These developers cannot be created rapidly in coding schools the way that HTML developers are, which means there will be a greater demand for game development talent from remote locations and emerging markets, causing a new economic opportunity for those who are crafty enough to teach themselves game development.”


 


Sunday 25 September 2022



Charting the Influence of… Influencer Culture

NAB

More than a quarter of Gen Z in the US plan to make a career as a social media influencer, according to new research which highlights the aspirational quality of a job that blurs the line between ‘influencer’ and ‘celebrity’.

article here

Digital consultancy Higher Visibility commissioned Censuswide to survey 1,000 people aged 16-35 across the US during July. 

Social media influencing is a two-way street between audience and brands. The study revealed that over 1 in 4 (26%) Gen Z trust influencer reviews more than product page reviews, while some influencers can now make over a million dollars for a single social media post.

The desire to become an influencer is fairly uniform across the States, with 30% of those interviewed in the Midwest expressing an interest, rising to 37% in the Northeast. In New York State alone, 41% of local Gen Zs intend to become an influencer in the future, whilst 30% from LA feel the same way.

Perhaps even more shockingly, the study found that Gen Z males (20%) are more likely than females (13%) to believe that being a social media influencer is the only choice of career for them.

Additionally, over 12% of Gen Z said that they would quit college to become an influencer.

According to the results, nearly 1 in 4 Gen Z believe there should be social media influencer training in school, with 6% of Gen Z actively choosing not to go to college to become an influencer.

Naturally most parents don’t agree – or perhaps don’t understand what the job would entail. Nearly a quarter of Gen Z claimed that their parents follow them on social media, yet almost half say their parents would prefer them to go to college than become an influencer.

It’s not as if most Gen Z are blinkered when it comes to the money that can be made as an influencer.

Asked to guess how much an influencer makes in a year on average, respondents most commonly chose $75,001 – $100,000, followed by $50,001 – $75,000. Surprisingly, 10% of respondents thought influencers could earn between $5,001 and $10,000 per year, with just 2% selecting ‘over $100,000’.

Instead, Gen Z are attracted to the other perceived benefits of being an influencer: free products, free holidays, and being a “celebrity.”

The most popular social media influencer according to this cohort is Charli D’Amelio, beating out both Kim Kardashian and Kylie Jenner. D’Amelio gained popularity for her dance videos and has amassed 193.9 million combined followers on TikTok and Instagram.

Unsurprisingly, TikTok is the social platform carrying the most weight with this demographic. Nearly 40% of Gen Z said they would choose TikTok as their primary platform, with over half believing it is easier to be a social media influencer on TikTok than on any other social media platform. YouTube is in second place with 21.68%, while Instagram follows in third with 21.39%.

Just 7.13% of Gen Z responded that they would not want to be a social media Influencer.

“It is safe to say that over the years, the line between ‘influencer’ and ‘celebrity’ has blurred,” says Adam Heitzman, co-founder of the consultancy.

With influencer culture permeating the younger generations and becoming more prominent as time goes on, it is a movement unlikely to falter any time soon.


Eat the Rich: Class Warfare in the Current Cinema (and Possibly Elsewhere)

NAB

From Squid Game and Parasite to Cannes Film Festival prizewinner Triangle of Sadness, the super rich are getting their comeuppance from everyday people. In the wake of these successes there is a clear global appetite for exposing and satirising the huge gaps in wealth and status.

article here

‘Super rich’ here is relative. In recent films such as Jordan Peele’s Us or Todd Phillips’ Joker, the target of revenge is anyone perceived as more privileged by those who believe they have the right to take it.

Contrary to the meritocratic ideal of the American Dream, Peele was suggesting that class (not just race) is responsible for division in the United States today.

“There is a certain horrific, physical element used to undermine the rich in these stories that taps into a well of anger against the system,” says film critic and producer Jason Solomons in The Guardian. “I think filmmakers are intuiting the levels of anger and frustration out there, the frustration of trying to break through and earn a living, and offering audiences the pleasure of some catharsis.”

In the same article, Vanessa Thorpe highlights two more recent films challenging the received social order: The Forgiven and I Came By. The former stars Jessica Chastain and Ralph Fiennes as rich travellers to Morocco. The latter has Hugh Bonneville [best known as the genial Earl in Downton Abbey] as a wealthy London philanthropist who is not all he seems. In both films the comfortably-off are revealed to be callous, hedonistic and detached, and in the case of Bonneville’s Sir Hector Blake, very dangerous.

Like Us, director Jessica M Thompson’s The Invitation, released last month, takes class war firmly into the realms of horror.

The Invitation centers on Evie, a struggling artist in New York who has just lost her mother to cancer after losing her father as a teenager, and is feeling lonelier than she ever has before.

“I really identify with Evie,” Thompson explained in the film’s production notes. “When I was 24, I moved to New York City to become a filmmaker. I didn’t know a single soul. I struggled for quite a while – working survival jobs, figuring out how to thrive in this incredible city, how to fight for what you want, how not to feel lonely. Of course, things go awry. But through that [Evie] finds her strength, her conviction of character, and literally gets to stick it to the man.”

The motivation of Triangle of Sadness director Ruben Östlund is similar. He told The Hollywood Reporter: “Quite often I feel trapped in the culture that I live in. I want to be somewhere else, but cultural expectations are forcing me into a corner. There’s the dilemma between what I want to do and what I feel that I have to do. I write the scenes to make it as hard as possible for the characters to deal with the situation.”

Triangle of Sadness is set on a luxury cruise with a rogues’ gallery of super-rich passengers, including a Russian oligarch and a British arms dealer. The cruise ends catastrophically and the passengers find themselves marooned on a desert island, where the hierarchy is suddenly flipped upside down. The lowly housekeeper now has power, since she is the only one who knows how to fish.

The ship’s captain, played by Woody Harrelson, is a Marxist who quotes from The Communist Manifesto while his passengers puke with seasickness. Yet Östlund is as interested in the tawdry economic value of beauty as he is in inverting class structure.

“You know, if you are born beautiful, it can be something that can help you climb up in society, even if you don’t have money or an education,” he tells THR. “Most of us are brought up by our parents saying, ‘Looks aren’t important,’ but it’s so obvious we live in a world where looks are very important, maybe even more important today in this digital image world than they had been before.”

One of Östlund’s most obvious influences is director Michael Haneke, whose most extreme satire of European bourgeoisie is Funny Games. Here, a well-off family are brutally attacked without mercy or provocation, other than being symbolic of wealth and privilege.

The callousness of the attack in Funny Games, and the fact that the attackers are dressed all in white, deliberately recalls Stanley Kubrick’s 1971 adaptation of Anthony Burgess’ satire A Clockwork Orange. Three years earlier, lead actor Malcolm McDowell had also starred in Lindsay Anderson’s Cannes Palme d’Or winner If…, about a group of pupils staging a savage insurrection at a boys’ boarding school.

The renegades of If… (see also the anti-heroes of Easy Rider, Bonnie and Clyde and The Wild Bunch) died violent, bloody but romantic deaths, as if their revolt were not in vain.

Fast forward to now and the serfs, the servants, the commoners, the poor and the less than rich are turning over the established order and surviving to rule the roost.

In one of Östlund’s previous films, Force Majeure, a supposedly exemplary family man flees to save himself instead of his wife and children at the first sign of an avalanche.

“It has become a universal and caustic indictment of the proclaimed values of a democratic society and capitalism,” finds Movieweb (https://movieweb.com/ruben-ostlund-films-satire-triangle-of-sadness/).

Östlund himself appears more nuanced in his feelings about the ultra-rich. Putting himself in their shoes, he says he is interested in how we all react when we are spoilt.

“For example, when I fly business class, I behave differently to when I fly economy. I sit there and read more slowly and drink more slowly as I watch passengers heading for economy class. It is almost impossible not to be affected by privilege.”

He adds, “Successful people are often very socially skilled, otherwise they wouldn’t be so successful. There’s an ongoing myth that successful and rich people are horrible, but it’s reductive. I wanted the sweet old English couple [in Triangle of Sadness] to be the most sympathetic characters in the film. They are nice and respectful to everyone – they just happen to have made their money on landmines and hand grenades. It’s probably a more accurate description of what the world looks like.”

In The White Tiger, Ramin Bahrani’s Netflix adaptation of Aravind Adiga’s novel, Balram comes from a poor Indian village and uses his wit and cunning to escape poverty – by learning from, and plotting against, his far richer employers. Balram is the hero because his employers are not only rich but rude and abusive to him, treating him as less deserving.

So from South Korea to India to the US and beyond, class and class warfare are a universal phenomenon. But surely nowhere is it more entrenched than in the UK.

James Cameron’s Titanic leaned none too subtly on a romance about love being blind to class. The film’s poor – the Irish, the Leo DiCaprios – are forced below decks while those on the upper (class) deck enjoy fancy dinners, ballroom dances and the Captain’s presence. Billy Zane takes the role of pantomime villain, and posh girl Kate Winslet lives to tell the tale.

It’s no coincidence that The Invitation is set in aristocratic England too (albeit filmed in Hungary). You don’t need a PhD in sociology to know who the real bloodsuckers are in this vampire story.

The film’s costume designer, Danielle Knox, says that when heroine Evie goes to England, she is contrasted with another world.

“We’re going back into the past – an era that is her complete opposite. That’s the introduction of the horror: putting her in a rich, stuffy environment.”

Anyone who caught even a glimpse of the pomp and ceremony attending Queen Elizabeth’s funeral will realise that this rich, stuffy environment is alive and well in the UK. The Queen herself may have been a decent sort, but the institution of a hereditary monarchy upholds the wealth, power and privilege of an elite.

 

Friday 23 September 2022

The DACH Territories: Global recession clouds AV fightback

AV Magazine

All AV sectors have been experiencing supply chain problems since pandemic restrictions were relaxed in many parts of the world.

Article here

The supply chain crunch has replaced Covid as the chief inhibitor to business across much of Europe. Large projects are being placed on hold due to a shortage of components.

“Sometimes it comes down to one part of the specified solution causing the delays, such as the displays or audio end points,” says Kai Ellingsen, senior sales manager, Atlona.

“The sector faces challenges, with supply often struggling to keep up with demand and a lack of facilities able to deliver repairs,” reports Niels Lubbers, sales manager, CVP. He adds that there are still plenty of big projects moving forward, with the region’s economic climate “currently very active and stable”, but there’s a cloud on the horizon.

“The order books seem to be full and installations secured until the end of this year,” concurs Volker Unland, sales director for Germany North and Austria at Hypervsn.

“However, the overall mood and outlook for the upcoming year doesn’t seem to be that optimistic. The economic recession and inflation expected in the global economy … will definitely negatively affect the DACH market too.”

Hybrid world
With back-to-the-office and hybrid working on everybody’s mind, vendors of conference and meeting solutions technology are reporting positive business, a trend expected to continue for the next 12 to 18 months.

“The DACH market is growing strongly, especially in video conferencing and hybrid meetings,” says Doug Remington, GM head of EMEA at video collaboration and device manufacturer, DTEN. “In the meeting area, there are still complex installations, but companies are moving towards setting up small, multifunctional rooms that are less complex because they are used by many users. These spaces will combine audio, video, wireless content sharing and collaboration.”

Ralf Kalker, regional sales director at conferencing specialist, Konftel says corporates are particularly keen to offer their employees the possibility of hybrid working. “Mercedes Benz explicitly states that employees are allowed to work hybrid or via video meetings wherever the tasks allow it. Similarly, insurance companies, bank advisors, real estate agents, lawyers, architects – anyone with counselling needs are setting up conference technology,” he says.

Kalker spots a broadening of the user base for video conferencing systems. “Whereas it was once about high-level equipment in really big conference rooms with digitalisation of the home office, today the emphasis is exactly between these two poles. The focus has shifted to ad hoc meetings, short team meetings, digital brainstorming or digital learning group support. But they are all meetings with two to ten, maybe fifteen participants and where we see the strongest demand for VC equipment because standard kit such as laptops and webcams are obviously not sufficient.”

“Retail continues to grow, especially in digital signage with indoor and outdoor display applications,” reports Michaela Hirsch, sales director, Germany, Peerless-AV. “Whether it’s a direct view LED installation or an LCD display solution depends on the application as well as on the budget.”

The tech trend for meetings and collaboration is moving strongly in the direction of hybrid meetings, confirms Remington. “Two verticals in particular are worth mentioning here – education and medical. Universities are increasingly offering their students the possibility of hybrid learning and in medical, VC is increasingly being used for training and diagnosis. Doctors no longer have to be on site and can make diagnoses from anywhere.”

Digital pact for education
Bielefeld University in Germany, for example, redesigned its teaching systems last year to offer students hybrid learning. Since the beginning of the winter semester 2021/22, 70 rooms have been equipped with DTEN D7 55in systems and some group rooms with 75in versions.

In 2019, the German government unveiled a €6.5 billion ‘Digitalpakt Schule’ to digitise 40,000 schools, but a report from May this year criticised the rollout as too slow. By the beginning of 2022, only ten per cent of the funds from the programme had actually reached schools, concluded a research group from the University of Hildesheim and the Berlin Social Science Centre (WZB).

“We see the rapid digital transformation of education (and justice departments) with federal states and municipalities investing in the digital educational infrastructure,” reports Mark Bultinck, sales director, Crestron.

“So far only a minor part of the budget has been invested,” confirms Unland. “Covid and a lack of infrastructure to host classes online have exposed how far behind German schools are in terms of digitalisation. This experience will hopefully give the final push for a fast and significant development in this area from which the pro AV businesses in the region will benefit.”

As you would expect, main AV activity is located in the major cities (Berlin and the Ruhr area, Vienna, Graz, Bern, Hamburg, Munich, Stuttgart and Frankfurt) “where large universities, financial centres, government districts and industrial companies are particularly concentrated,” says Matthias Wolff, sales manager, Lightware Visual Engineering.

Many enterprise-level firms are headquartered in the Frankfurt Rhein/Main Region, often because of its proximity to the international airport, notes Ellingsen.

Atlona’s regional HQ is in the Frankfurt area, which is also home to its strongest dealer base. A recent project for the Deutsche Bahn HQ in the Bahntower in Berlin saw 300 rooms deployed with Atlona Omega switchers and extenders.

The ski areas in the region, as well as the coasts of northern Germany, are hotspots for tourist and sports activities and related DooH. In Switzerland, Zürich is central to the corporate market, and Basel is important for the chemical and medicine industries.

“Even smaller municipalities in the DACH region are increasing their efforts to stay or become more attractive,” says Unland. “AV installations are an essential part of everyday life, regardless of whether it’s work or leisure related.”

AV adoption doesn’t generally differ from other European countries, but there are certainly application considerations owing to regulations in each country. Hirsch points out that the installation of a digital signage/DooH solution on a main road or Autobahn is heavily regulated in Germany, but this differs in neighbouring countries.

“DACH is treated as a unit due to the predominantly German-speaking regions. Thinking big also makes sense in many projects and avoids unnecessary cross-deliveries, additional costs, and CO2,” she says.

CVP recently established a new warehouse, engineering centre, and sales infrastructure (in Belgium) to better serve the continent, especially post Brexit. Says Lubbers: “This ensures the largest selection of the most sought-after equipment is held in stock locally, improves delivery times, removes price barriers (duty and VAT free where applicable), and expands our consultation expertise.”

The region also differs within countries in terms of mentality and dialects. “The Bavarian spoken in Germany is closer to Austrian than to the rest of Germany,” informs Hirsch. “In the north is the beautiful Frisian region, between Nordics and Benelux. In Switzerland, three languages are spoken and they are not a member of the EU.”

Hirsch sees the region as a unit “because we feel connected via the predominantly German language, as well as the direct proximity of the countries. Is it fair to treat DACH as a single entity? That’s probably a personal or business decision that everyone must make for themselves.”

Data protection laws can differ from other European countries, “which makes implementation of cloud services that are hosted abroad more difficult,” says Bultinck.

“There are also some differences between the countries. Projects in Austria are typically smaller and require intensive support from local companies. In Switzerland we see more international companies and projects, which are sometimes initiated and coordinated from abroad.”

Focusing on the Swiss

According to Lightware’s Swiss general manager, Giuseppe Rizzo, the Swiss market is still suffering from the health crisis. “Many projects which were planned after Covid are still pending installation because the companies that operate in the sector are experiencing financial issues,” he says. “What strikes this small but strong market harder is that, if there is a project to install, integrators are not getting products because of the chip supply crisis.

“Even worse, during Covid, many pro AV technicians moved to the IT industry, which looked more stable, so we’ve not had enough human or financial resources to proceed with installation. Despite this, we expect a thirty per cent plus growth compared to last year. This could have been higher without the Corona and chip crises.”

Switzerland has the highest penetration of Apple products in Europe, and with Logitech a local brand, Swiss customers require compatibility with both, he says.

“It makes it hard for the pro AV industry to find its place because of the lack of integration and compatibility of products and solutions.”

Not only do 60 per cent of the population speak German, but “the market and channel structures are very similar to each other,” says Unland. “The major players are more or less similar.”

Corporate meeting room collaboration technologies are more in demand than ever; companies have realised after Covid that their meeting rooms need videoconference solutions.

In Switzerland, the education market “is still developing, moving from projectors to touch screen solutions,” suggests Rizzo.

“Offering solutions for home study is a big topic. We’re even getting requests from primary schools.”

Switzerland lags behind Germany in the transition to AV over IP and “is still not top of mind for Swiss integrators,” says Ellingsen.

A final note: across DACH, interest in the FIFA World Cup in November is anticipated to be high, with a number of special events looking to equip with screens and audio solutions.