Wednesday, 16 March 2022

Why 5G’s Impact on Broadcasting Could Be Insane

NAB

If you’re a broadcast CTO then you’re always working with at least one foot in the future. And according to one new survey, 5G will have the biggest impact on broadcasting within the next five years.

article here

“5G is making broadband internet ubiquitous, opening all sorts of possibilities for remote contribution and mobile collaboration,” states Haivision in its 2022 Broadcast IP Transformation Report.

Mobile networks, including cellular bonding and 5G, have overtaken satellite as a contribution method since last year’s report.

“Meanwhile AI and machine learning jumped from fourth to second place in terms of industry impact. With more workflows adopting IP and cloud, the possibilities for AI in broadcasting are starting to take shape. 4K UHD remains a major future trend for the industry followed by SMPTE ST 2110.”

IP Transition Slow But Sure

Most of the report is taken up by the transition from SDI to IP. IP transformation has been a major focus for broadcasters over the past few years. Nevertheless, only about 17% of respondents to this report have made the complete leap to IP. SDI is still widely used across the industry.

That’s because broadcasters have important legacy investments in kit such as cameras, monitors, and switches that rely on SDI inputs and outputs. Although just over a third of respondents continue to rely solely on SDI infrastructure, almost half are adopting a hybrid approach that adds newer IP equipment while continuing to leverage existing SDI investments.

So, it’s not a shock that only a small percentage of broadcasters are 100% cloud enabled. Almost half though have moved at least a quarter of their workflow elements to the cloud. These findings suggest that most broadcasters are deploying hybrid on-prem/cloud workflows. Sixteen per cent of respondents have not adopted cloud technology at all, although the majority plan to adopt IP and cloud technology in the future.

The trend towards the remote production of events continues, while production workflows are becoming increasingly decentralized through IP transformation. Only 15% of those surveyed believe that their organization will go back to the way it was pre-pandemic.

The majority see hybrid workflows as the way of the future with a growing mix of on-premise and cloud technologies for both on-site and remote staff. Almost a quarter see their organizations becoming even more decentralized in the future.

A key challenge on the road to IP for live production is reducing latency. “Reducing latency at the first mile, for live contribution, can benefit the entire broadcast chain,” the report states.

Other challenges cited by about a quarter of respondents include budget constraints, network security, and the ability to hire qualified staff in today’s competitive job market.

HEVC and JPEG-XS Gains

Although most broadcasters continue to rely on the H.264 codec, HEVC usage has significantly increased from last year’s survey, up to 59% from 50%. This may be attributed to the growing demand for 4K UHD content as well as newer broadcast workflow components that support HEVC.

As more high-quality video in 4K and HDR is streamed over all types of IP networks, including the internet, Haivision says we can expect HEVC to continue to gain ground, given its ability to deliver higher quality at the same bitrate or the same quality at lower bitrates.
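As a back-of-the-envelope illustration of why that bitrate efficiency matters at scale, consider the data volume of an hour of 4K streaming. The figures below (a 32 Mbps H.264 stream and a 45% HEVC saving at equivalent quality) are illustrative assumptions, not numbers from the Haivision report:

```python
# Back-of-the-envelope data volumes for a 4K stream. Both figures below
# are illustrative assumptions, not values from the Haivision report.
H264_4K_BITRATE_MBPS = 32.0   # assumed H.264 bitrate for 4K UHD
HEVC_SAVING = 0.45            # assumed HEVC bitrate saving at equal quality

hevc_bitrate = H264_4K_BITRATE_MBPS * (1 - HEVC_SAVING)

def gigabytes_per_hour(mbps: float) -> float:
    """Convert a bitrate in megabits per second to gigabytes per hour."""
    return mbps * 3600 / 8 / 1000

print(f"H.264: {gigabytes_per_hour(H264_4K_BITRATE_MBPS):.1f} GB/hour")  # 14.4
print(f"HEVC:  {gigabytes_per_hour(hevc_bitrate):.2f} GB/hour")          # 7.92
```

Multiplied across millions of concurrent viewers, halving the per-stream volume is what makes the codec choice a network-cost question, not just a quality one.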

The legacy MPEG-2 codec is still needed for digital television and cable services, though usage continues to slowly decline, down to 36% from 39% in the company’s previous survey.

JPEG-2000 remains a commonly used codec, likely for primary broadcast contribution over dedicated high-bandwidth networks. JPEG-XS usage has more than doubled from last year, likely a result of the increase in SMPTE ST 2110 deployments, while usage of the newer VP9 and AV1 codecs remains small for now.

Unsurprisingly for a survey sample composed of Haivision customers and prospects, SRT (Secure Reliable Transport), the open-source protocol designed by the company, is now employed by 63% of the broadcasters surveyed, overtaking the legacy RTMP protocol as the most commonly used method of transporting video over IP.

 


Monday, 14 March 2022

Minecraft and Web3: So Basically We Already Have a Working Metaverse?

 NAB

There’s impatience among metaverse watchers that the thing hasn’t been built yet.

article here 

Vice calls out The Sandbox and Decentraland as being bare-bones, largely empty and janky, and yet, according to Decrypt, still overrun with corporate promotions from brands such as Adidas.

There are also groups and consortia attempting to agree on standards as the first building blocks to scaling between metauniverses, as detailed by Streaming Media.

But Vice, and others, have latched onto a product — a game — that’s been under our noses all along.

Minecraft.

As digital currency market data provider FTFX points out, some of Minecraft’s software is open source, meaning that anyone with the right technical knowledge can build on it. And Minecraft doesn’t have an established economy like competitor Roblox, which has a robust virtual marketplace and its own (non-crypto) digital currency called Robux.

In recent months a project called NFT Worlds has established itself on Minecraft’s servers, offering gamers the chance to buy one of 10,000 unique NFTs related to Minecraft.

An NFT Worlds white paper describes the entity as “a fully decentralized, fully customizable, community-driven, play to earn gaming platform,” and explains that each NFT contains a “world seed,” which is a code that generates a Minecraft world.

In other words, you can now buy and own, via NFT, a virtual piece of land in Minecraft. These assets vary in appearance from snowy tundra to forest islands to massive volcanoes.
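The “world seed” mechanic can be pictured with a toy procedural-generation sketch: seeded pseudo-randomness is what makes a short code a compact, tradeable stand-in for an entire world. The biome names and grid size below are invented for illustration and have nothing to do with Minecraft’s actual generator:

```python
import random

# Invented biome names, for illustration only.
BIOMES = ["snowy tundra", "forest island", "volcano", "plains", "swamp"]

def generate_world(seed: int, size: int = 4) -> list:
    """Deterministically generate a size x size biome grid from a seed."""
    rng = random.Random(seed)          # seeded RNG: same seed, same world
    return [[rng.choice(BIOMES) for _ in range(size)] for _ in range(size)]

# The same seed always reproduces the identical world, which is why
# owning the seed is effectively owning the world it generates.
assert generate_world(1337) == generate_world(1337)
```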

Vice explains that if you want your world to be a metaverse destination, you can host your own server. NFT Worlds claims to have “verified” builders on tap to help NFT holders build up their Minecraft experiences.

To get verified, teams of builders must purchase a world at the floor price (currently $45,000) to show they’re serious. FTFX pins the cost at 14.5 ETH, or about $38,150, but the Vice article is more current. The ante was just $26,000 in February, showing how quickly virtual property prices are inflating.

If that seems like a lot of money to invest in a piece of real estate that only exists online, then consider that The Sandbox (a competing online game) often commands much higher prices. Back in December, someone paid $450,000 for a small piece of virtual land next to rapper Snoop Dogg’s property in The Sandbox, reports FTFX.

According to NFT Worlds’ co-founder, who goes by the moniker ArkDev, Minecraft was the obvious choice to build the metaverse on top of because it works, and because it already has a “thriving ecosystem” of mods, user-generated game modes, cosmetic items, and maps.

Quoting the project’s documentation, ArkDev tells Vice, “We didn’t want to have to ‘reinvent the wheel’ by creating our own unproven game from scratch, while also having to innovate on the NFT integration and decentralized metaverse side of the platform we envisioned. This would take far too long to deliver on.”

Cunningly, NFT Worlds has layered its own cryptocurrency called $WRLD on top of everything. According to Vice, “the idea is for $WRLD to be the plug-and-play currency for all NFT Worlds, which players can earn in bespoke ‘play-to-earn’ games built in Minecraft and pay to world owners for various things.”

NFT Worlds isn’t alone in seeing Minecraft as a shortcut to the metaverse. Critterz is an NFT project where token ownership lets users buy plots of land in an “exclusive” Minecraft server and earn more tokens for in-game time.

Another project called “Survival Game NFT” invites players to purchase unique tokens in order to participate in play-to-earn games in “our private, masterfully-crafted Minecraft Server,” according to a post on Medium.

So what does Microsoft have to say about all this? Remember, the technology giant acquired Minecraft’s developer Mojang Studios for $2.5 billion in 2014. Since then, the game’s player base has grown to more than 141 million monthly active users.

Microsoft appears to be taking a wait-and-see approach, content to watch how this experiment unfolds, perhaps judging whether NFT Worlds could complement its $69 billion acquisition of Activision Blizzard.

At the bottom of its website, NFT Worlds features the disclaimer: “NFT Worlds is in no way associated with, endorsed by, or a partner of Minecraft, Mojang, Microsoft or any related parties.” The whitepaper adds that the team believes NFT Worlds falls under “transformative fair use.”

NFT Worlds co-founder Temptranquil told FTFX, “They’re watching us from the sidelines — not like a formal green light — but I think in their eyes, we’re the best case scenario for someone using their product.”

 

Power to the People: The Development of DAOs

NAB

Most of us today make money on a simple 9-to-5 “work-to-earn” basis, but the future of income is “x-to-earn” — play to earn, learn to earn, create to earn.

article here

What’s more, it would cut out the middlemen (read: Facebook) and reset the balance of the division of labor in favor of the working masses, not the few owners of capital.

“In the future, it’s likely that the average person will not work for a company,” Ben Schecter, who works at crypto platform RabbitHole, writes at Future a16z. “Instead, people will earn income in non-traditional ways by taking actions such as playing games, learning new skills, creating art, or curating content.”

If that sounds like the sort of post-capitalist utopia that Karl Marx and Friedrich Engels dreamed of in the 1840s — well, that’s intentional.

That’s because Schecter’s vision of the future of work is being built on Decentralized Autonomous Organizations (DAOs). These are a set of crypto protocols which are emerging as new ways of coordinating, measuring, and rewarding contributions.

“The idea that most people would be employed by large corporations would have seemed crazy to someone in the year 1800,” he says. “This shift [to DAOs] is already beginning to unlock new earning potential for individuals, and it is leading toward a growing transfer of value capture from organizations to people participating as individuals in crypto networks.”

Imagine, “if we have chosen the position in life in which we can most of all work for mankind, no burdens can bow us down, because they are sacrifices for the benefit of all. Then we shall experience no petty, limited, selfish joy, but our happiness will belong to millions, our deeds will live on quietly but perpetually at work.”

Thus wrote Karl Marx. Are DAOs able to bring about the societal change he called for without a revolution from below?

Leaning heavily on Schecter’s work, this article is a primer on DAOs.

What’s the Problem with Work Now Anyway?

Schecter’s central contention is that traditional corporate employment is rapidly becoming outdated; he points to the rise of alternative forms of earning among influencers, contractors, creators, gig economy participants, and more.

“These ways of earning don’t necessarily feel like ‘work,’ but they are all examples of people participating as individual value providers in complex networks, and earning income for their contributions,” Schecter says.

“However, these non-traditional opportunities are limited in number, and when available, often under-reward a contributor’s value. That’s because these jobs are still based in a web2 paradigm in which corporations continue to control the business model.”

DAOs, on the other hand, are core to web3, the suite of technologies underpinning the metaverse and the next generation of online interactions.

“The model of a company having strict boundaries between internal and external may have made sense in the Industrial Age, but in the Information Age, this leads to misaligned incentives and unsustainable extraction,” says Schecter. “In our world of complex information and orbital stakeholders, companies are no longer suited to help us coordinate our activity. Crypto networks create better alignment between participants, and DAOs will be the coordination layer for this new world.”

What’s a DAO Again?

An article from Utopian’s Derick David at Medium breaks down what DAOs are: purely internet-native; digitally owned; operated through code; and decentralized (no single central source of power).

“The mechanism behind these internet-native organizations enables people to form and coordinate economically, from the comfort of their own homes on their computers and phones.”

DAOs matter because they create user-centric networks: in a web3 network, a user-centric governance structure can better align incentives between the DAO and its members.

Under the hood, DAOs run on two main technologies: Blockchain and Smart Contracts.

A blockchain can be thought of as a network, ledger, spreadsheet, or record of transactions. For smart contracts, think of Kickstarter, the crowdfunding platform: funds move automatically only when pre-agreed conditions are met.
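The “ledger” idea can be sketched in a few lines: each block commits to the hash of the block before it, so rewriting history is detectable. This is a toy illustration of the tamper-evidence property, not how any production chain is implemented:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    """Tampering with any block breaks every later prev_hash link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append_block(ledger, [{"from": "alice", "to": "dao", "amount": 10}])
append_block(ledger, [{"from": "bob", "to": "dao", "amount": 5}])
assert is_valid(ledger)

ledger[0]["transactions"][0]["amount"] = 999   # rewrite history...
assert not is_valid(ledger)                    # ...and the chain breaks
```

Real blockchains add consensus across many machines on top of this hash-linking, which is what makes the record decentralized rather than just tamper-evident.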

“Basically, DAOs bind people together through the rule of code and the use of advanced blockchain technology,” says David. “By leveraging such technology, members are able to collaborate on a whole different level.”

Unlike traditional companies or corporations, DAOs don’t have CEOs or board members. Decisions about spending money (how capital will be managed and deployed) and about matters such as project proposals and hiring require the votes of members.

Instead of executives, these functions are enabled by technology. Governance is claimed to be democratic “through highly participatory processes or algorithms,” enforced by the rule of code or software rather than written agreements, and operated with no requirement for a physical office.

According to the Antler Insights newsletter, DAOs also claim to be: permissionless (a broader base of people can participate); autonomous (decisions are enforced via self-executing smart contracts, i.e. software, rather than human intervention); and resistant to censorship (DAO decisions are transparent and cannot be suppressed).
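What self-executing, token-weighted governance might look like can be sketched as a toy model: a proposal passes only if voting weight above a quorum supports it, and code, not a board, decides the outcome. The quorum rule, names, and balances below are invented assumptions, not any real DAO’s contract:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0
    executed: bool = False

@dataclass
class ToyDAO:
    balances: dict                       # member -> governance-token balance
    proposals: list = field(default_factory=list)

    def propose(self, description: str) -> Proposal:
        p = Proposal(description)
        self.proposals.append(p)
        return p

    def vote(self, member: str, proposal: Proposal, support: bool) -> None:
        weight = self.balances.get(member, 0)   # one token, one vote
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def execute(self, proposal: Proposal) -> bool:
        # "Self-executing": the rule is code, not a board's discretion.
        quorum = sum(self.balances.values()) // 2
        proposal.executed = proposal.votes_for > quorum
        return proposal.executed

dao = ToyDAO(balances={"alice": 60, "bob": 30, "carol": 10})
p = dao.propose("Fund a community build team")
dao.vote("alice", p, True)
dao.vote("bob", p, False)
assert dao.execute(p)   # alice's 60 tokens exceed the 50-token quorum
```

Note how the sketch also exposes a common critique: token-weighted voting is plutocratic by construction, since one large holder can outvote everyone else.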

 

Does this mean that DAOs are going to replace traditional corporations?

“Not entirely,” writes David in another piece, “DAOs Explained To a 12-Year-Old.”  

“Not everything has to be a DAO. Some organizations are better off as a traditional corporation and some are better off as a DAO. We’re most likely going to see both types complement each other.”

Power To The People

To underline the utopian (socialist?) leanings of some exponents of web3, David points out that what makes DAOs attractive and powerful are the “billions of dollars getting lost or being wasted by power-hungry and maniacally greedy executives” in the prevailing division of labor.

The structure of a DAO is inherently open and accountable, “a forcing function to share value with the participants who create it,” writes Schecter. “The openness of crypto economies will allow people to participate in several DAOs and crypto-networks, mixing and matching different income streams and ownership returns.”

He adds that the best DAOs distribute ownership to their participants through their own native token or NFT.

It is theorized that open economies will make work more flexible, fluid, and playful than the 9-to-5s we are accustomed to.

People’s income will be a mix of things we already do in our lives (such as playing games), things we think of as traditional work (like contract work), and things that are currently accessible to only a small percentage of the population (like investing and passive income from things like rent).

“To think of it another way, DAOs will expand the type and quantity of opportunities that are open to several types of participants, including token holders, bounty hunters, and core contributors,” Schecter says.

Further, in this new future of work, jobs will be more transient and dynamic. The cost of switching jobs, it is claimed, will be lower; opportunities will be more visible; work will be broken down into more atomic units; and the entire world will be unified into a single workforce with access to all opportunities.

“We will discover new opportunities based on our on-chain history, ownership, and reputation, and we will be matched to contribute where we have the best comparative advantage.”

For comparison, here is Marx’s classic — if vague — idea of a post-revolutionary future: “In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.”

Network Participants

The exciting part of the future of work, per Schecter, is the idea of “participate-to-earn.” Within any given DAO, this is where the majority of people will fall.

The idea is a critique of the current work/reward imbalance. “Networks gain strength with more activity and additional participants, yet, for years, users, consumers, and participants have been adding value to networks without capturing their share of value (app developers for Apple, creators for YouTube, and drivers for Uber, for example),” Schecter argues.

Functioning more like open economies than closed organizations, DAOs will reward each individual contribution based on the value it provides, regardless of who it comes from. This means that everyday actions that are valuable to a network will be turned into income-earning opportunities.

“Nearly every single person will earn some income from simply living their lives online, using products, and participating as a user. For people receiving compensation for their own participation in networks, earning an income will feel a lot like a game.”

Play-To-Earn

Play-to-earn is a new type of gaming model that rewards players for playing and unlocking achievements within a game. The traditional gaming model involves a one-sided transfer of value towards the game creators or platforms, whereas play-to-earn games reward users as well.

According to Schecter, play-to-earn games function like an economy: players provide labor (their time and energy) and capital (often purchasing NFTs to participate in the game), and are rewarded with fungible tokens for their achievements and progress within the game.

Earning currency from games isn’t new, but instead of rewarding players with in-game currencies confined to usage within the game, play-to-earn games distribute crypto rewards that are swappable for other tokens or fiat currencies.
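The reward loop Schecter describes can be sketched as a toy model: achievements map to fungible token payouts, and a market exchange rate turns those tokens into fiat. Every reward size and the exchange rate below are invented for illustration:

```python
# Invented reward schedule and exchange rate, purely for illustration.
ACHIEVEMENT_REWARDS = {"win_battle": 5, "breed_creature": 12, "daily_quest": 2}
TOKEN_TO_USD = 0.08   # assumed market rate for the game's fungible token

def tokens_earned(achievements: list) -> int:
    """Sum the token rewards for a session's achievements."""
    return sum(ACHIEVEMENT_REWARDS.get(a, 0) for a in achievements)

session = ["daily_quest", "win_battle", "win_battle", "breed_creature"]
tokens = tokens_earned(session)                      # 24 tokens
print(f"Earned {tokens} tokens, swappable for ${tokens * TOKEN_TO_USD:.2f}")
```

The crucial difference from traditional in-game gold is the last step: because the token trades on open markets, the payout’s fiat value floats with demand for the game, which is both the opportunity and the risk.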

“This means that video game players can literally pay their bills through their in-game achievements, particularly for people in countries with lower wages and living expenses. This phenomenon is already a source of income for millions of people, most notably through blockchain game Axie Infinity.”

Described as “Pokémon on the blockchain,” Axie Infinity is operated by a Vietnam-based company called Sky Mavis, and has made more than $3 billion in total since launching in March 2018.

 

Possible Pitfalls

Schecter is careful to caveat his thesis, admitting that it is unclear how much income can be earned through DAOs. X-to-earn does not mean every single person will be able to make art and play video games for a living.

“X-to-earn is about rewarding value where it is created,” he says. “DAOs make these non-traditional paths more sustainable and available for more people, but the market will not reward everyone. Market dynamics are still relevant, and to be rewarded, you will need to provide value. Creators will need to find audiences, game players will need to achieve outcomes, and bounty-hunters and contributors will need to create an impact.”

That said, Schecter believes in the fundamental (Marxist) idea that creating value should be rewarded, and that DAOs will coordinate the value reward within crypto networks enabling new income earning opportunities.

On one hand, DAOs allow people to choose how they work and associate with communities where they are value-aligned. On the other, by reducing much of work into atomic units and purely financial incentives for actions, we risk reducing people’s meaning to purely financial rewards.

“We risk turning work into discrete, meaningless tasks, where labor is reduced down to a commodity service.”

 

 


Friday, 11 March 2022

Behind the Scenes: The Batman

IBC

For cinematographer Greig Fraser, lighting the image and keeping darkness within the character was the key challenge in this latest incarnation of Gotham’s avenging angel.

article here

When your main character is covered head to toe in a dark suit, in a city that’s dark even in daylight, that presents a unique problem for the cinematographer.

“It was the biggest lighting challenge of my career,” says Greig Fraser ACS, ASC, who has shot Zero Dark Thirty, Rogue One, Lion, Vice and Dune.

“Very early on I recognised that if we were going to make a movie we all believed in we could not make the guy in the bat suit too bright because if you look at all the comics Batman is an enigma, a silhouette, a shape, a shadow against the wall.

“The problem with that is that you can only create an enigma for so long,” Fraser tells IBC365. “You have to start to see emotion but not give away the mood. Robert Pattinson is an inspired casting choice, so I can’t say to [director] Matt Reeves or Rob that I’m always keeping him in silhouette.

“I love lighting dark scenes but asking an audience to watch a film for 2.5 hours that’s so oppressively dark? I felt we couldn’t do that. I needed to find the right balance between light and dark. How do we light his eyes without lighting the cowl?”

The DP researched thousands of images to put together a document for Reeves called ‘Light for Dark’. “They are pictures that you’d call dark but were easy to look at,” he explains. “They had big areas of light but shadow in the foreground or pockets of darkness so you knew it was a dark place. It took a lot of referencing and a lot of study. Every frame in The Batman is teetering on the edge of unreadability and on the edge of being too light. It was a really tricky line to walk.”

The Batman’s storyline and aesthetic were particularly inspired by the 1987 comic book ‘Batman: Year One’, written by Frank Miller and illustrated by David Mazzucchelli. Fraser was fascinated by the way the artist had decided to draw frames a certain way.

“There are many ways to light Batman,” he says. “If you have Batman standing in a doorway you can choose to backlight him by putting a rim light to see his shape, cowl and ears and flowing cape, which many great cinematographers have done. Or you can silhouette Batman by lighting an area behind him and keep him dark. For the most part we elected not to backlight him because we felt the film we were making was an urban noir. I didn’t want it to feel like we’d put in lights that didn’t belong in our Gotham. All the lights that illuminated the backlot were effectively built into the set, so at any given time I could change the look of the set through turning on lights that existed already. That was a very big bonus for us, because it meant wherever we looked, it felt real.”

Fraser says he and Reeves never discussed making their film look different to any other Batman. “We never said Zack Snyder’s is this, therefore let’s do this. The discussion was around ‘remember in Chinatown how this felt, or in Klute how the New York City streets felt?’ That’s how we built our Gotham.

“That’s not to say that the other Batman films don’t deserve referencing. I’ve talked to directors about Christopher Nolan’s The Dark Knight in so many ways for many other movies - just not for this.”

Production designer James Chinlund wanted to counter the broader palette, which skewed toward dour and gloom, by creating a different tone in the red-light district, where Catwoman lives. “We were inspired by some of the films of Wong Kar-Wai, in terms of textures and patterns,” Chinlund says. “There’s a romantic palette in some of those movies that we loved, like neon and a lot of colour from the light in the street. Our world is grim in a lot of places, and that was an environment where we could let some colour pop.”

Using VR in pre-viz

Reeves was keen to create a Gotham that was at once plausible and unrecognisable to most audiences. Location teams looked at several American cities, including Chicago, Pittsburgh, Cleveland and New York, but decided to base the main shoot in the UK.

Chinlund admits to some early doubts, but once he started scouting in Manchester, Liverpool and Glasgow, he recognised the potential. “We noticed a decayed Gothic layer that we just don’t have in the States,” he says. “It gave us a real opportunity to combine practical set builds and some Chicago location work with this amazing rich tapestry of architecture from the UK, and to try and weave all that into an American city you’ve never seen before.”

Leavesden and Cardington Studios, near Bedford, provided stages and backlot space for the huge set builds including the Batcave and Wayne Tower. Second unit shooting also took place in Chicago.

“We didn’t want to have Times Square standing in for Gotham Square,” Reeves says in the film’s production notes, “so we added skyscrapers and an elevated train to the gothic architecture of central Liverpool, with the idea that you look at it and think, ‘Where is that?’”

All the sets were modelled in 3D and viewable in Virtual Reality for Reeves and Fraser to make decisions on blocking, camera position and lighting.

“If I knew Matt wanted the camera to look in a certain direction I could ask James to put a key light in a certain position in the physical set,” explains Fraser of the VR process. “If there’s a shot of someone coming up a hallway, now we know we need a light in that hallway. So, knowing how we were going to light it on a large scale and knowing what the frames were helped us be more efficient on the day. We knew exactly what the lens would be and where we needed to stand.”

That said, Fraser is no slave to a computer and is willing to use his own instinct to change focal length, lens or camera position on set.

“The computer may have pinpointed this spot but now you’re about to shoot I think we need to be on a different lens. VR doesn’t give you the emotionality of lenses - it just gives you the mathematics of the field of view. Any cinematographer knows that lenses create an emotional connection to the audience and on the day if the lens that we had pre-vized didn’t give that emotionality we changed it.”
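The “mathematics of the field of view” Fraser refers to is simple trigonometry for a rectilinear lens (anamorphic lenses behave differently, which is partly his point about emotionality). A quick sketch, assuming the Alexa LF’s roughly 36.7mm-wide sensor; focal lengths are arbitrary examples:

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal angle of view of a rectilinear lens on a given sensor."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# The Alexa LF sensor is roughly 36.7mm wide; focal lengths are examples.
for focal in (24, 35, 50, 85):
    print(f"{focal}mm lens: {horizontal_fov_deg(36.7, focal):.1f} degrees")
```

This is exactly what a VR pre-viz tool can compute for you, and exactly what it cannot judge: two lenses with matching fields of view can still render depth, distortion and faces very differently.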

Shooting in the Volume

Fraser had spent 10 months prepping The Mandalorian with director/showrunner Jon Favreau, helping establish the new methodology of Virtual Production photography against computer-generated visuals. Elements of The Batman were also shot on an LED stage.

“One of the things a Volume gives you is consistency of light,” Fraser explains. “If there’s a scene that has changeable light then putting that in a Volume is a smart idea.

“On The Batman, it gave Matt the ability to do whatever take he wants predictably, without having to rush to catch a certain light on location or be at the mercy of the weather.”

He adds, “Where VP is really useful is basically tweaking mother nature. It’s taking the best that mother nature has and taking away all the negatives of shooting in the elements.”

The state of the art of Virtual Production at present means that filmmakers need to specify in advance where the camera will be looking in a Volume. It’s more time- and cost-efficient to build the specific digital assets that will be shot in the games engine rather than create a full-scale photoreal digital construct of the entire virtual world. But that is where VP is heading, Fraser predicts.

“In theory you could build an entire world for your film in the games engine much like Fortnite,” he says. “For example, if we did that for Gotham City, it would allow a director to choose anywhere in that city they wanted to shoot on any given day. You might decide to shoot on the corner of First and Second Street. Or high up on the Empire State. You can change the light, change the props and shoot. That’s what the future could be once the processing speeds up.”

The film was recorded on the Alexa LF with anamorphic lenses, largely because of Reeves’ preference for digital. He had shot two of the Planet of the Apes movies digitally and felt that his last experience shooting on film, Let Me In, didn’t give him enough control.

“He had video taps on Let Me In that were a bit grainy and he couldn’t really see the performances on set,” said Fraser, who shot the vampire picture. “On the Apes movies I think he loved the fact that he could see performance in high res straight out of the camera. We discussed shooting film briefly and he was adamant that this was a digital film.”

The camera in The Batman rarely pans or tilts or moves frenetically. Fraser calls the movement “delicate”.

He says, “I’d ask why would you need to move the camera when what you want to focus on are Rob Pattinson’s eyes? Why move the camera if you just want to take in the design and glory of that Batmobile? If we try to move the camera too much it detracts from these incredible vistas, fantastic action pieces and amazing characters. You don’t put sugar on ice cream.”

 

 

Too sunny in Liverpool

Several sequences of the film were shot in Liverpool. One of them, beginning on the rooftop of the Gotham City Police Department, combined elements of the city’s Liver Building and the Chicago Board of Trade Building.

Wide aerials of Batman standing on the parapet were also shot in Liverpool, then altered in post to increase the height of the building and replace Liverpool’s waterfront with Gotham City. The shot of Batman leaping from the building was filmed on a partial set at Leavesden, with a camera strapped to the back of a stunt performer on wires. The stuntman only had eight feet of travel before he reached the bottom of the set, so the shot handed over to a digital GCPD Building and Batman as the wingsuit inflated and began to take flight. Batman’s descent through the urban canyon was based largely on LaSalle Street in Chicago, where the production shot extensive plates from a drone.

Other major sets included Gotham City Hall, whose interior was constructed at Cardington, while the neo-classical, Grade I-listed St George’s Hall in Liverpool doubled for the exterior.

“Liverpool was fantastic and we spent a lot of time in prep before we visited,” Fraser says. “We made a number of light studies and 3D maps of St George’s Hall to try and tell when the sun was out. Gotham doesn’t really get sunny until there’s a bit of hope at the end of the film. For the mayor’s memorial in the film we needed to be super dreary and to shoot in the shade. A local might say Liverpool is the right place to go but sod’s law, when we were there in Spring 2020 the weather was pretty good. We had to make sure we protected ourselves from the sun and shoot in the shadows on the steps of the hall.”

Read more: BTS Dune with Greig Fraser https://www.ibc.org/features/dune-behind-the-scenes-with-cinematographer-grieg-fraser/7992.article

Oscar contention

Earlier in his career, the Australian was a mentor to up-and-coming cinematographer Ari Wegner. She assisted him on the set of outback drama Last Ride in 2009 and later this month goes head-to-head with Fraser for first-time Oscar glory. (He was previously nominated for Lion.)

“To watch Ari over her entire career is extraordinary and I am really excited by the fact we get to share a nomination together. Win or lose is not relevant. What is relevant is getting to celebrate together.

“You know, the camaraderie of cinematographers is the reason I became one. God’s honest truth. I used to be a stills photographer and I didn’t find that same feeling among photographers. But among cinematographers we are one. We are one group and for the most part we applaud everyone’s work and we learn and grow from one another.”

 

 


 

Wednesday, 9 March 2022

The Four Most Important Media Technology Issues (Right Now)

NAB

The carbon cost of streaming, industrialized AI, hybrid cloud computing and unceasing piracy — these are the issues which CTOs of every broadcaster and streamer are grappling with, according to Viaccess Orca.

article here

VO’s own CTO, Alain Nochimowski, identifies in a blog post the key technology trends that should be firmly on the roadmap of any video service provider.

Chief among them is the rise of “green streaming” and the importance of being able to measure carbon footprints for different components of the video chain.

“That’s critical if we ever want to navigate the complex operational trade-offs towards ‘greener streaming,’” says Nochimowski.

For example, for an equivalent perceived quality-of-experience, will the expected gain in network bandwidth usage (and related reduction in energy consumption) associated with next generation codecs be sufficient to offset the increase in energy consumption due to more CPU-intensive compression techniques (compared to legacy codecs)?
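That trade-off can be framed as simple arithmetic. A minimal sketch, using entirely hypothetical figures (none come from the report): if a next-generation codec halves the delivery bitrate but triples the encoding energy, the network saving dominates once enough hours are streamed.

```python
# Back-of-envelope comparison of total delivery energy for a legacy
# codec vs. a next-generation codec. All numbers are illustrative
# assumptions, not measurements from the report.

def delivery_energy_kwh(bitrate_mbps, hours, network_kwh_per_gb, encode_kwh):
    """Total energy = network transfer energy + one-off encoding energy."""
    gigabytes = bitrate_mbps / 8 * 3600 * hours / 1000  # Mbps over N hours -> GB
    return gigabytes * network_kwh_per_gb + encode_kwh

# Hypothetical: the next-gen codec halves the bitrate but triples encode energy.
legacy = delivery_energy_kwh(bitrate_mbps=8, hours=1000,
                             network_kwh_per_gb=0.1, encode_kwh=5)
nextgen = delivery_energy_kwh(bitrate_mbps=4, hours=1000,
                              network_kwh_per_gb=0.1, encode_kwh=15)

print(f"legacy: {legacy:.0f} kWh, next-gen: {nextgen:.0f} kWh")
# legacy: 365 kWh, next-gen: 195 kWh
```

At low viewing volumes the heavier encode cost dominates and the legacy codec can come out ahead, which is exactly the kind of operational trade-off Nochimowski describes.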

Other related and hard-to-answer questions include: How do you assess the impact of different architecture designs destined to run in data centers? Will companies, perhaps even consumers, be willing to pay extra to reach carbon neutrality? Or will they prefer to compromise on their Quality of Experience in order to save some carbon emissions?

“Ultimately, there is the question of choice. We need to be able to assess the data across the whole chain so that we can present the option of an optimal route for our customers.”

There’s been a lot of hoopla about the overwhelming drive to public cloud, but the reality is that solutions will be at best hybrid for some years to come. Workflows will typically be a composite of on-premises equipment and one or more cloud providers.

“The key to future success is how we manage services that are deployed across these multiple platforms and orchestrate or scale them in an efficient and automatic manner,” says Nochimowski. “Both the business approach and the technology approach have to be optimized to fit this hybrid future.”

VO also points to the advent of MLOps. If you’ve not come across that acronym then be prepared to hear it more often.

Machine Learning Operations is “industrialized AI,” and it’s being rolled out to unlock the full value of TV data monetization.

Deloitte calculates it will be a market worth $4 billion by 2025. VO sees it accelerating the deployment of machine learning in the industry even further, “effectively doing for ML what the implementation of DevOps did for software development.”

Says Nochimowski, “Experience proves that applying MLOps best practices to the specific context of a TV platform production environment involves many operational trade-offs. Navigating these architectural, cost or performance trade-offs requires a great deal of familiarity with the specifics of TV data and a deep understanding of the various deployment (sometimes regulatory) constraints.”

The fourth tech trend reckoned to be keeping CTOs on their toes is also AI-related, but this time it centers on AI-generated media, with all the inherent risks of political misinformation attacks and deepfakes that entails.

“For content service providers, there’s no doubt the age of AI-enhanced media will bring about new threats as well as new opportunities,” VO warns.

Keeping Up With the Video Pirates

Content security is of course Viaccess Orca’s main business, and the company points out that instead of going away, piracy is now easier than at any other time. In part that’s because of the sheer volume of content and number of access points which hackers could attack.

In the first nine months of 2021, there were 132 billion visits to pirate sites worldwide, up 16% on the same period in 2020, according to Torrentfreak. Sixty-seven billion of these visits were related to TV piracy, making it roughly 50% of all pirate site traffic.

Alongside this, the actual business of being a pirate has become “modular.”

“Pirates can simply chain easily located software tools and techniques together, delve into some specialist but still public message boards, and eventually they will come up with a combination that works,” says Pierre-Alexandre Bidard, VP Partnerships and Security Products Management at VO in another blog post.

The consequence of this is that when one platform is breached in any manner somewhere in the world, before you know it, that same technique is being used to create another breach in another service elsewhere.

“Piracy is interconnected and international, and fighting it is a constant battle that is only going to get more difficult, but also more necessary, as the year moves on.”

VO reckons broadcasters have around 15 minutes to take down a hijacked stream if they want to move effectively against pirates.

“Pirates are fast. To fight them effectively you have to be faster,” he says.

Weapons of defense include dynamic watermarking and AI, which can monitor huge amounts of data, whether broadcast or streamed, in real time and quickly detect anomalous events.
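The article doesn’t describe VO’s actual detection methods, but a minimal illustration of real-time anomaly detection is a rolling z-score over a stream metric such as concurrent viewer counts — flagging any sample that strays too far from the recent average.

```python
from collections import deque
import statistics

def make_anomaly_detector(window=30, threshold=3.0):
    """Flag a sample as anomalous if it deviates from the rolling mean by
    more than `threshold` standard deviations. Illustrative sketch only --
    not VO's actual method."""
    history = deque(maxlen=window)

    def check(value):
        if len(history) >= 5:  # need a few samples before judging
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
            anomalous = abs(value - mean) > threshold * stdev
        else:
            anomalous = False
        history.append(value)
        return anomalous

    return check

detect = make_anomaly_detector()
# Steady readings, then a sudden spike (e.g. traffic shifting to a pirate restream).
readings = [100, 102, 99, 101, 100, 103, 98, 100, 500]
flags = [detect(v) for v in readings]
print(flags)  # only the final spike is flagged
```

A production system would run many such detectors across channels and metrics, feeding alerts into the rapid-takedown workflow the article describes.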

But that’s not enough. Any security specialist will argue that having multiple layers of action and deterrent is the only way to effectively counter the moving target of piracy.

“We cannot talk in too much detail about our latest anti-piracy initiatives as we are in a constant arms race with the pirates,” says Bidard. “We develop a method to stop piracy, they find a way to circumvent it; they find a new way to force a breach, we find a way to plug the gap. That way we limit, sometimes dramatically, the losses that our customers incur in terms of lost revenue.”

Tuesday, 8 March 2022

An Open Metaverse is More Than Just Interoperability — It’s About Accessibility

NAB

Gaming is the gateway to the metaverse. Brands know it and marketers know it, so it’s no surprise that history was made when Microsoft spent $69 billion on Activision Blizzard in the biggest all-cash acquisition to date. Or that Take-Two Interactive, the video game publisher that owns Rockstar and 2K Games, would acquire social game developer Zynga in a deal valued at $12.7 billion.

article here

“For the industry, these two momentous deals are bursting with clues of what the future might look like,” says digital media agency Media.Monks. “Brands are moving into or doubling down on gaming as they seek to tap into that community, bring interactive features to their business and create new virtual worlds to connect with consumers.”

But it’s not just consumers who will be there — these experiences can also extend to the brands’ prospects and their own workforce. So, what exactly does the gaming trend mean for the future of work?

Media.Monks suggests that gaming “drives the desire for cooperation, which becomes easier and more engaging in immersive worlds.” For one, gaming “erases the notion of borders and physical distance,” meaning two people can be present in the same virtual space in a matter of seconds.

Game features might also improve diversity and accessibility in the workplace. Many games let players customize their experience through settings that benefit those with low vision, with other options focused on fine motor control and hearing. The same level of personalization can be extended to virtual workstations.

“People with chronic medical conditions or disabilities can personalize their setup according to their own needs and preferences instead of adapting to the one-size-fits-all kind of equipment they would find anywhere else. Horizon Workrooms’ settings, for instance, include color correction filters that help color blind people better distinguish elements.”
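Filters of this kind are typically simple linear color transforms. The sketch below applies a commonly cited 3x3 approximation for simulating protanopia (red-blindness) — the matrix is an assumption for illustration, not Horizon Workrooms’ actual filter.

```python
# Illustrative only: a linear color transform of the kind used by
# accessibility "color filter" settings. The matrix is a widely
# circulated protanopia-simulation approximation, assumed here --
# it is not taken from any product mentioned in the article.

PROTANOPIA = [
    [0.567, 0.433, 0.000],
    [0.558, 0.442, 0.000],
    [0.000, 0.242, 0.758],
]

def apply_filter(rgb, matrix=PROTANOPIA):
    """Apply a 3x3 color matrix to an (r, g, b) triple in [0, 1]."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in matrix)

# Pure red collapses toward the same values as green would,
# showing why red/green UI cues fail for protanopes.
print(apply_filter((1.0, 0.0, 0.0)))
```

Simulating how an interface looks under such a transform is one way designers verify that a “color correction filter” actually restores distinguishability.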

Immersion can also be extremely powerful when it comes to networking. Media.Monks suggests that there’s a “special level of focus that comes from having one’s hands on a controller, which pushes you to be present in the moment.” Where regular video conferences lack some of the most engaging elements of an in-person meeting, immersive worlds reduce the possibilities for distraction, leading to higher productivity.

Boiled down, the virtual workspace offers the possibility of being designed and redesigned for everyone. With personal assistive technology, workers might focus more easily on their tasks instead of wasting energy on working around the same old barriers.

As Media.Monks’ Catherine D. Henry says, “An open metaverse is more than just interoperability; it’s about accessibility.”