Thursday, 28 April 2022

Coffee & TV use ClearView to finish Peaky Blinders VFX

Copy written for Sohonet

Coffee & TV is a leading Soho, London-based creative and finishing studio that has moved effortlessly from high-end commercial grading, VFX and titles design to long-form TV post while retaining a boutique sensibility. Its latest work is on the climactic season of the international hit drama Peaky Blinders, with VFX done while artists worked remotely using Sohonet’s ClearView Flex for critical review sessions. Co-Founder and Technical Director Jon Trussler discusses how Coffee & TV used ClearView to finish Peaky Blinders VFX.

article here

When did you start using Sohonet’s ClearView Flex?

The minute the pandemic hit we understood that we would have to change course. We knew there were some things you could do at home and how some bits of kit might work in that environment. No one really wanted to go that route unless they had to. Well, the pandemic was that pivot. Now we had two weeks to get everyone remote and ClearView Flex was a big part of that.

Had you tried other solutions?

I remember at the time [Spring 2020] I was finishing a commercial on Flame and I’d move the title a shade to the left and send the client a QuickTime. They’d report back asking for a tweak to the right. Which I’d do, and then send another QuickTime. This was like moving through treacle. Telegrams would be quicker.

We also screen shared using things like Zoom but the picture quality is woeful, and playback isn’t there. It’s just not premium, which is what you get with ClearView. Especially if you’re grading. There’s no way you can use Zoom for a director to judge their work. It’s got to be 10-bit true colour, which ClearView is.

How have you deployed ClearView Flex?

We’ve got one box in the studio into which we can connect any of our machines over SDI or NDI, and a further three boxes that live with our colourists at home. They can grade remotely but equally send that feed directly to directors. We use a mix of Resolve, Baselight, and Flame workstations plus Maya and Houdini among other software for CGI and motion graphics.

What has it enabled you to do?

It’s just the best solution we’ve found in terms of colour depth and fidelity. It’s also so easy for the client – that’s a big thing. They don’t have to download anything. We just give them a link and it’s all very secure. Clients love it because they get instant live feedback on all inputs. The main benefit has been in facilitating remote workflows. We’ve got directors who are so busy they can’t get into town, or somebody somewhere has Covid and needs to isolate, so it’s just been brilliantly helpful for us. We use it all the time on every project.

What work have you done recently?

Our biggest long-form project to date is the 180 VFX shots we completed for the BBC drama Peaky Blinders. This included 2D, 3D, buildings and digital matte painting work.

We also designed the titles for Sky’s F1 season, and we are incredibly proud and very excited that the Sky Sports Lions Tour 2021 title sequence that we designed has been nominated for a BAFTA Craft award. Coffee & TV’s Steve Waugh and Danny Boyle also co-directed the title sequence for Jimmy Savile: A British Horror Story which launched on Netflix. We used ClearView Flex extensively on each one.

How will working life change going forward?

A lot of our artists are still remote or coming in only two days a week. Apart from team building, they love working from home. They are more efficient and can manage their own time, take the kids to school and work the hours that suit them. I don’t think anyone wants to go back to what it was before, not now that we’ve proved it all works. ClearView Flex sessions are so good, why would you ever go back?

What do clients think?

I think the days of clients attending every session are over. They’ve learned the same lessons as us. Why struggle against the traffic to look at a screen for 30 minutes when you don’t have to? It makes their day easier. For them, ClearView remote sessions are amazing. Clients have learned that they can run two jobs at once and they can be, in effect, in two places at once remotely.

If a director is in the room with the artist but some of their creative team cannot be physically present, there’s this lovely crossover of attended and unattended whereby everyone is on the same page. It’s just the sheer flexibility of the solution that works so well.

We’re using it now on a project where the director is in LA. We can have a live session with someone in the States which is not something we could ever do before. It opens up the whole globe.


Tuesday, 26 April 2022

How NFT Marketplaces Can Work as the Web3 Version of Google or Amazon

NAB

article here

The promise of Web3 is that creators and users can finally own the upside from their work because trust can be set by code. But we’re not quite there yet. Exploring Web3 today is fraught with bad user experience, high fees, confusing terms, and scams.

Peter Yang, product lead at Reddit, outlines a three-point plan to get regular folks like you or me onto Web3 in a blog post sponsored by Index Coop, a provider of on-chain crypto indexes. His plan includes an upgrade to the exchanges, wallets, and NFT marketplaces that support decentralized finance models.

Ultimately, the combination of these three elements could act like a social network and a recommendation and search engine for users to navigate and engage with the next-gen internet.

Getting Onto Web3 Today

Why would a regular person care about Web3? Yang thinks the most common reason is that they want to make money, and he’s probably not wrong. So, to get into Web3, there’s a three-step process:

1. Use an exchange to buy cryptocurrency.

2. Set up a wallet and transfer crypto there.

3. Transact – such as buying an NFT.

Yang thinks each of these steps could be significantly improved in order to onramp the next one billion users. Here’s his recipe, taking each in turn.

Exchanges

Exchanges are close cousins of the way we currently transact online. With an exchange, you:

1. Log in with a regular username and password.

2. Deposit money from a bank or credit card.

3. Use that money to buy and sell crypto.


However, many people only use exchanges to buy and sell crypto. They move their tokens to a standalone wallet to do anything else (such as buying an NFT). That’s why exchanges are diversifying into wallets and NFT marketplaces.

Coinbase, for example, already has 89 million accounts linked to a bank or a credit card, and has a Coinbase Wallet from which users can buy NFTs from a Coinbase marketplace.

Wallets

Yang identifies two problems with wallets today.

“Moving assets to a wallet is nerve-wracking. A wallet’s public address is a long string of random characters. One typo and your assets are gone forever. Secondly, seed phrases just aren’t that secure. Scammers will go to great lengths to phish you into sharing it. One mistake and your assets are gone forever.”

To counter this, wallets can let people buy crypto directly with a credit card or bank account, skipping exchanges altogether. They can also improve UX and security.

“There’s a lot to improve in wallet usability,” Yang says. “For example, wallets can warn people when they’re signing a smart contract from an unverified source.”
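One concrete guardrail against the typo problem Yang describes is address checksum validation: Ethereum addresses carry a mixed-case checksum (EIP-55), so a wallet can catch most typos before funds are sent. A minimal sketch using the ethers.js library (the helper function is hypothetical, not from Yang’s post):

```typescript
import { ethers } from "ethers";

// Hypothetical wallet helper: validate an EIP-55 checksummed address
// before any transfer, so a single typo doesn't send assets to an
// unrecoverable address.
function toSafeAddress(input: string): string {
  try {
    // ethers v5: getAddress() normalizes the address and throws
    // if the mixed-case checksum doesn't match.
    return ethers.utils.getAddress(input);
  } catch {
    throw new Error(`"${input}" failed checksum validation, refusing to send`);
  }
}
```

A wallet would run every user-entered recipient address through a check like this before signing a transaction.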

But wallets can do so much more. They can be your identity in Web3.

“Wallets can ask you to set up a public address that’s much easier to remember (such as peteryang.eth). They can also let you create a public Web3 profile page to highlight your NFTs, on-chain credentials, and more. Wallets can make it easy for you to find, follow, and message other wallets. Wallets can show you what people you follow are actually doing on-chain and recommend top trending NFTs and apps in your network.”
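The “easier to remember” addresses Yang mentions already exist via the Ethereum Name Service. A sketch of how a wallet might resolve such a name, again with ethers.js (the RPC endpoint below is a placeholder, not a real service):

```typescript
import { ethers } from "ethers";

// Resolve a human-readable ENS name (e.g. "peteryang.eth") to its
// underlying hex address. Resolves to null if nothing is registered.
async function resolveEnsName(name: string): Promise<string | null> {
  // Placeholder RPC endpoint; a real wallet would use its own provider.
  const provider = new ethers.providers.JsonRpcProvider(
    "https://example-rpc.invalid"
  );
  return provider.resolveName(name);
}

// Usage: resolveEnsName("peteryang.eth").then((addr) => console.log(addr));
```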

In this sense, Web3 is a social network. Every dapp (decentralized application) and NFT project wants to onboard more users. The social context is incredibly powerful for discovery.

“Today, wallets acquire customers primarily through other NFT projects and dapps. In the future, wallets will be the central hub for your Web3 identity, social network, and app discovery.”


NFT Marketplaces

If wallets will help drive discovery through social context, where does that leave NFT marketplaces such as OpenSea?

“Most NFT transactions come from a small handful of traders that are chasing the next flip. But marketplaces should focus on the next 1B people who can buy NFTs but don’t.”

Specifically, Yang thinks they should make it easy for people to buy NFTs directly with credit cards, which OpenSea already supports.

Yet OpenSea’s collector profiles are missing basic social features such as a follow button and the ability for collectors to customize their profile.

“Making people care about their profile and social graph will help marketplaces build stickiness.”

Yang also thinks NFT marketplaces need to evolve discovery from focusing on top trending projects to personalized recommendations and search based on each user’s on-chain activity and preferences.

“To put it simply, NFT marketplaces should be the Web3 Google or Amazon.”

Dark Horse Candidates for Web3 Onramps

Exchanges, wallets, and NFT marketplaces are the obvious onramps, but let’s not forget the dark horse candidates: games such as Axie Infinity, which already has a thriving in-game trading environment, and existing social networks.

Twitter, for example, has shipped basic features like setting an NFT as your profile pic. “It’s already the go-to platform for Web3 people to connect with each other and discover cool projects. Why not let people add a wallet and track on-chain activity as well?”

The Next Phase

The next phase of Web3 could target many more people and look like this:

1. Buy crypto and NFTs directly with your credit card or bank account.

2. Set up a profile to showcase your NFTs and on-chain credentials. Follow and message other users, join interest groups, etc.

3. Through your network, discover and use “amazing” Web3 dapps and NFTs.

Says Yang, “I truly believe that wallets are the closest to making this vision a reality.”



Nina Kellgren BSC / Young Soul Rebels

British Cinematographer

article here

Nina Kellgren BSC, director Isaac Julien, BFI curator William Fowler, and BFI technical delivery manager Douglas Weir discuss the process of restoring 1991 British indie feature Young Soul Rebels as well as sharing details of the film’s creation.

In 1991 a low budget British indie feature won the Critics Prize at Cannes. The Hollywood Reporter review of the time said: “The movie vibrates with a youthful vigor (sic) that’s contagious, aided enormously by Nina Kellgren’s slashing views of back-street London.”

This was Young Soul Rebels, the first (and only narrative) feature directed by Isaac Julien, which has been treated to a restoration and 4K digital remaster by the BFI.

“It is strange to see your work of 30 years ago,” says Julien. “The film curiously occupies two camps. On the one hand the subject of the film is about what the generation were doing in the 1970s. Viewing the film today also says something about the perspective of films being made in the 1990s on the earlier era. What I am really excited about is the prospect of a whole new generation being able to view it. Young Soul Rebels encapsulates those things in quite a unique way – even to its director.”

Co-scripted by Julien, the film is set in 1977 in London during the week of the Queen’s Silver Jubilee when punk and the Pistols weren’t the only ones giving two fingers to the establishment. Young Soul Rebels centres on a black DJ (Valentine Nonyela) who is arrested when a friend is murdered in a London park. What the killer doesn’t realize is that the victim was carrying a cassette recorder with the sounds of their death captured on tape.

“It was important for Black British cinema when it came out and remains a fascinating piece of work,” says BFI curator William Fowler. “Isaac is also a very successful gallery artist exploring experimental and self-reflective ideas in all his films. Young Soul Rebels is important in terms of an artist’s filmmaking practice and its engagement with queer history.” 

The film’s rights are owned by the BFI as part of the Institute’s Production Board, which funded projects between 1962 and 2000. The original print, duplicate neg and original neg prints stored at the BFI Master Film Store near Gaydon were in excellent condition.

“Each reel was inspected for any damage or issues that need manual repair which could be broken perforations or bad splices,” explains Douglas Weir, technical delivery manager, BFI. “Those are repaired and fixed. The whole ethos of film restoration is not to detract in any way from the film image itself. In that sense it’s like restoring a painting.”

After removing dust with an ultrasonic cleaning unit, the neg was scanned to 4K on a Scanity machine at the BFI National Archive then delivered on hard drives to Silver Salt Restoration as a series of DPX files – one file for every single frame of film.

Silver Salt’s team go through every one of those DPX files and remove any tiny defects, ranging from positive density dirt (dust, to you and me) to tiny spots or scratches, using digital techniques derived from VFX.

“It’s painstaking work but Silver Salt find ingenious ways of patching up frames and replacing anything missing using restoration software,” says Weir.

At this point Julien and the film’s cinematographer Nina Kellgren BSC supervised the grade with colourist Steve Bearman to match it to the original.

“We want to be faithful to how we shot and lit the film and what is remarkable is what we can do now as a continuation of those aspects,” Kellgren explains. “The degree of control you have frame to frame to bring up tone and colour is extraordinary. But we don’t want to pretend it was not shot on 35mm. We retain the grain.”

YSR was shot on Kodak EXR Colour neg. 5296 and 5245, processed at Metrocolour and supervised by grader Clive Noakes “with a look designed to have strong contrast and strong key colours. Purple, blue, and red feature specifically,” Kellgren says.

It was Kellgren’s second film with Julien after Looking for Langston and they’ve collaborated on multiple projects since, most recently on Lessons of the Hour (2019).

“We shot [YSR] the summer of 1990,” she recalls. “We had a substantial number of night exteriors and one thing I remember was that we kept running out of darkness. It was only fully dark around 22.30 and dawn arrived at 04.00, with a dawn chorus well before that!”

The film’s soundtrack was remixed and remastered in 2009 and will be heard alongside the upgraded picture, hopefully in 2022, when the BFI finalise exhibition plans.

Julien says there’s additional strong interest from US film channel Criterion. “For me, it’s fantastic to see how the younger actors in our film, like Sophie Okonedo, have gone on to have major careers,” he says. “It’s also terrific to see how many talented black British actors and directors are at work today. I think it’s interesting to see how the opportunities for black British talent have developed over the last three decades.”

Memristors: Quantum computing breakthrough could take us back to the multiverse

RedShark News

It could be right out of Back to the Future, but a device known as a ‘quantum memristor’ has been invented that opens up the possibility of building a ‘brainlike’ supercomputer. Let’s call it Orac, Blake’s 7 fans.

article here

Detailing the creation of the first prototype of such a device in the journal Nature Photonics (“Experimental photonic quantum memristor”), scientists say the breakthrough could help combine quantum computing with artificial intelligence and aid the development of quantum neuromorphic computers.

A memristor or memory resistor is described as a kind of building block for electronic circuits that scientists predicted roughly 50 years ago but created for the first time only a little more than a decade ago.

These components are essentially electric switches that can remember whether they were toggled on or off after their power is turned off. As such, they resemble synapses—the links between neurons in the human brain—whose electrical conductivity strengthens or weakens depending on how much electrical charge has passed through them in the past.

In theory, memristors can act like artificial neurons capable of both computing and storing data. As such, researchers have suggested that neuromorphic computers would perform well at running neural networks, which are machine-learning systems that use synthetic versions of synapses and neurons to mimic the process of learning in the human brain.
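To make the synapse analogy concrete, here is a toy classical memristor model — my own illustrative sketch, with made-up constants, not anything from the Nature Photonics study. Its conductance is a function of the total charge that has flowed through it, so the device “remembers” past activity even after the power is cut:

```typescript
// Toy classical memristor: conductance depends on the history of
// charge that has passed through the device (a loose synapse analogy).
// All constants are illustrative.
class ToyMemristor {
  private charge = 0; // accumulated charge: the device's "memory"

  constructor(
    private gMin = 0.1, // minimum conductance (illustrative units)
    private gMax = 1.0, // maximum conductance
    private k = 0.5     // how quickly past charge shifts the state
  ) {}

  // Apply a voltage for one time step; the resulting current both
  // depends on and updates the stored state.
  step(voltage: number, dt: number): number {
    const current = this.conductance() * voltage;
    this.charge += current * dt;
    return current;
  }

  conductance(): number {
    // Squash accumulated charge into [gMin, gMax]: more past charge
    // means higher conductance, like a strengthened synapse.
    const s = 1 / (1 + Math.exp(-this.k * this.charge));
    return this.gMin + (this.gMax - this.gMin) * s;
  }
}

// Repeated pulses in one direction strengthen the "synapse"; pause
// the simulation (power off) and the state is still remembered.
const m = new ToyMemristor();
for (let i = 0; i < 10; i++) m.step(1.0, 0.1);
console.log(m.conductance()); // higher than its initial value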

Exponential growth in machine learning

Using computer simulations, the researchers suggest quantum memristors could lead to an exponential growth in performance in a machine-learning approach known as reservoir computing that excels at learning quickly.

“Potentially, quantum reservoir computing may have a quantum advantage over classical reservoir computing,” says study lead author Michele Spagnolo, a doctoral student in quantum physics at the University of Vienna.

The advantage of using a quantum memristor in quantum machine learning is “the fact that the memristor, unlike any other quantum component, has memory,” he adds.

Among the more profound benefits, quantum computers could be used to simulate quantum physical processes for much faster drug and material design, to accelerate AI development, and to provide new levels of security in information and communication. But they could also be used to break public-key encryption, to amplify current AI risks at a faster pace, or be misused in biotechnology to design bio-weapons.

“We now live in a ‘Wright brothers’ moment’ in the history of quantum computing,” Ibrahim Almosallam, a consultant for the Saudi Information Technology Company, writes at World Economic Review. “When a commercial jet version arrives, it will deliver a new leap in information technology similar to what classical computation delivered in the 20th century, and, just like with any general-purpose technology — such as the internet, electricity, and, for that matter, fire — alongside great benefits, comes great risks.”

Then there’s more prosaic stuff like a super-AI “creating” the latest Pixar feature. This is where quantum can turbo-charge machine learning, “improving the ability of AI to derive useful information from photos and videos,” according to a recent report in the Harvard Business Review, “Quantum Computing for Business Leaders” (hbr.org).

However, building and scaling a stable quantum computer is not easy. Photons and electrons are delicate; “their behaviour defies our ingrained view of how the physical world operates,” says HBR.

“One of the most formidable obstacles to building functional quantum computers is that qubits don’t stick around very long,” the article elaborates. “Vibration, temperature, and other environmental factors can cause them to lose their quantum-mechanical properties, resulting in errors. Today, the rate at which errors occur in qubits limits the duration of algorithms that can be run.”

“Scientists are working to build environments in which many physical qubits act together to create error-protected logical qubits, which can survive for much longer periods of time — long enough to support commercially viable applications.”

Still, the most advanced quantum computers today have 50 to 100 physical qubits; it will most likely take ten times that to make a single error-protected logical qubit.

Flux capacitor – yes really

It is the state of flux (known as superposition) in which photons exist that causes the inherent instability of quantum systems. Superposition means they can essentially be located in two or more places at once (or spin in two opposite directions at the same time).

The breakthrough in the new study, as outlined by IEEE Spectrum, is a technique that relies on a stream of photons existing in superpositions “where each single photon can travel down two separate paths laser-written onto glass. One of the channels in this single-qubit integrated photonic circuit is used to measure the flow of these photons, and this data, through a complex electronic feedback scheme, controls the transmissions on the other path, resulting in the device behaving like a memristor.”

In other words, while memristive behaviour and quantum effects are not expected to coexist, the researchers appear to have overcome this apparent contradiction by engineering interactions within their device to be strong enough to enable memristivity but weak enough to preserve quantum behaviour.

Taking another leap into the theoretical, this could also have implications for our understanding of what it means to live in the multiverse.

Stay with me here. Yes, the multiverse is currently in vogue among storytellers as a means to spin more canon fodder out of tired IP franchises. Looking at you, Marvel, and your upcoming Doctor Strange in the Multiverse of Madness. Even season 2 of Netflix comedy Russian Doll loops its protagonists back to 1982 and riffs on Back to the Future.

Back to the multiverse

The multiverse, as depicted in the movies, is a world full of endless potential: multiple parallel universes spinning in synchronicity, and the possibility of alternate, powerful, seemingly better versions of ourselves.

At Vox, a mathematical physicist at the California Institute of Technology says this is possible – in theory.

Spyridon Michalakis is no random boffin – “I’m the science consultant for Ant-Man and I introduced the quantum realm [to Marvel],” he explains. 

Having established his credentials, Michalakis then explains that basically the multiverse is grounded in quantum mechanics.

“Space and time are one single, singular construct,” he explains in a 101 of Einstein’s theory. “It’s not like you have space and then time; it’s space X time. Moreover, quantum space time is a superposition: a quantum superposition of an infinite number of space times, all happening at the same time.”

That word again: superposition.

“This illusion — basic physical reality — is the fact that human beings have very specific points of view, ways of observing the superposition.”

He makes this startling observation by mixing science with a cinematic metaphor.

“The frame rate of the human mind is so low relative to the frame rate of the universe,” he says. “Let’s say we only perceive 100 frames per second. We can be aware of our lives and choices we make, but then the frame rate of the universe (where you could be flicking between different timelines) is 40 orders of magnitude above that.

“We’re all trying to figure out the plot of the universe by just watching the beginning and the end of the movie, the first and last frame. We’re just reconstructing the in-between the best we can. That’s where the multiverse hides; it hides there in between frames. Honestly, I think that the frame rate of the universe truly is infinite, not even finite, very, very large. And we’re so far away from that.”

So that means we’re stuck observing just one reality, not the multiplicity of them – but we could if only we had a brain the size of a planet.

If only we could build one…



You May Not Want a New Friend, But the Metaverse Is Determined to Be Yours

NAB

article here

Even skeptics admit the metaverse is coming. Not soon, but soon enough to take the juggernaut seriously.

“Recognizing that some of the technologies [are] in the early stages, it seems like the market forces and market dynamics are aligned, such that the metaverse is coming,” says David Truog, VP and principal analyst at Forrester. “This is the time to start getting used to it, and practicing and experimenting and piloting.”

The analyst is co-author of a report published in March that concludes that the metaverse — despite the hype and investment — is “years from actualization.”

The research group also surveyed adults in the UK recently and found just 28% of them excited about the prospect of the 3D internet.

Bloomberg Intelligence estimates the metaverse market could be worth $800 billion by 2024 — though to get to this figure, they have classified a lot of existing technology as being part of the metaverse. (According to this logic, Bloomberg valued the size of the metaverse market in 2020 at $478 billion.)

Forrester, meanwhile, says the metaverse doesn’t actually exist yet, and won’t until users can move seamlessly from one virtual environment to another, much as users click a hyperlink to move from one website to another: “The metaverse will manifest over the next decade, in stages. The fully federated metaverse will contain standard protocols for the presence, persistence, and transfer of identity and assets.”

As Richard Kerris, Nvidia’s VP of Omniverse Platform Development, told Forrester: “The metaverse requires connective tissue for it to be a reality” — and that’s a long way off.

Another skeptic is tech journalist Eliot Beer. He joined a virtual press conference hosted by the IT company DXC in the metaverse, during which he had to don an avatar and was shown into a virtual boardroom for the presentation.

“DXC believes enterprise innovation is the ‘surprise route’ to metaverse adoption — and despite my skepticism, following my experience in its virtual world, this makes sense,” Beer reports at The Stack. “With open spaces, names above every avatar, and plenty of spaces to talk in private, there is a clear metaverse business use case for collaboration.”

Forrester says organizations across many different sectors have been quietly trying out virtual environments, not necessarily for demonstrable return on investment today but because it’s clear where things are going, even if it’s going to be some time before it’s fully actualized.

“I think CIOs need to consider bringing this into their company as another collaboration tool, not as a replacement for traditional video conferencing but as a complementary tool to bring new experiences to your employees, because there are some experiences that you cannot have with traditional video conferencing,” Nathalie Vancluysen, head of XR and a distinguished technologist at DXC, tells Beer.

She points to DXC’s recent EMEA sales event, which more than 1,000 people attended as an interactive virtual event — something that wouldn’t be possible with Zoom or Teams, where interaction would be severely limited. And virtual worlds also allow for much more spontaneity.

Says Vancluysen, “Teams and Zoom are scheduled meetings, you go from one meeting to another, we know upfront who we’re going to meet. But there’s no room for casual interactions and social collisions — in the virtual world, you can bump into a colleague that you haven’t seen for many years and just start a new conversation.”

There are many technical roadblocks ahead such as connection issues, improving audio quality, designing less clunky interfaces and making virtual world hopping seamless — but these are the sorts of things the tech industry is good at resolving, given time and a viable business use case.

But the question remains, will people actually use these tools?

Forrester says yes — eventually. And before we get there it’s time to experiment and help shape it.



An Alarming Bias Is Growing With AI

NAB

article here


More money and more capability, yes, but more bias and more men. That’s the questionable state of Artificial Intelligence and Machine Learning in 2022, according to the latest annual report from a team at Stanford.

The “2022 AI Index Report” measures and evaluates the rapid rate of AI advancement from research and development to technical performance and ethics, the economy and education, AI policy and governance, and more.

It is compiled and analyzed by a group of experts from across academia and industry at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

Here’s a short summary of key findings.

Private Investment in AI Soars

The amount of money pouring into AI is mind-boggling — and most of it is coming from private investment. Private spending more than doubled from 2020 to 2021, to around $93.5 billion, but the number of newly funded AI companies continues to drop, from 1,051 in 2019 and 762 in 2020 to 746 in 2021.

As IEEE puts it, “It’s a great time to join an AI startup, but maybe not to found one yourself.”

Language Models are More Capable but More Biased

Large language models like OpenAI’s GPT-2 are setting new records on technical benchmarks, but new data shows that they are also more capable of reflecting biases from their training data.

While language systems are growing significantly more capable over time, the Stanford team conclude, “so does the potential severity of their biases.”

Their term for this bias is “toxicity.” As an example, the report notes that a 280 billion parameter model developed in 2021 shows a 29% “increase in elicited toxicity” over that of a comparable model in 2018.

[Chart: results of running the language model GPT-2 through three different detox methods. Credit: Stanford Institute for Human-Centered Artificial Intelligence]

A number of research groups, including OpenAI, are working on the toxic-language problem, with both new benchmarks to measure bias and detoxification programs.

Related: The report shows that algorithmic fairness and bias have shifted from being primarily an academic pursuit to becoming “firmly embedded as a mainstream research topic with wide-ranging implications.” Researchers with industry affiliations contributed 71% more publications year over year at ethics-focused conferences, the report found, though the “wide-ranging” implications are not defined.

AI Becomes More Affordable and Higher Performing

Since 2018, the cost to train an image classification system has decreased by 63.6%, while training times have improved by 94.4%. The trend of lower training cost and faster training time appears across other machine learning task categories such as recommendation, object detection and language processing, and favors the more widespread commercial adoption of AI technologies.

A Plateau in Computer Vision?

The AI Index shows that computer vision systems are tremendously good at tasks involving static images such as object classification and facial recognition, and they’re getting better at video tasks such as classifying activities.

But there are limits: As spotted by IEEE’s analysis, computer vision systems are great at identifying things, but not so great at reasoning about what they see.

The report notes that performance improvements have become increasingly marginal in recent years, “suggesting that new techniques may need to be invented to significantly improve performance.”

AI Needs More Women and People of Color

AI research is dominated by men. The report finds that the percentage of new AI and Computer Science PhDs who are female has moved only a few points over the last decade, at least in North America. Not much has changed since 2021. The data for AI and CS PhDs earned by people of color tells the same story.

“The field of AI needs to do better with diversity starting long before people get to Ph.D. programs,” IEEE urges.



In The Metaverse Who Creates the Creator?

NAB

article here

Astronomers tell us that our universe is expanding. Astrophysicists theorize that there are an infinite number of stars in the universe, that there are multiple universes, perhaps connected via wormholes, those folds in time and space. Scientists would dismiss any notion of a creator guiding all of this, while the question of what exactly the universe is expanding into remains opaque.

All this by way of introduction to another #metaverse story. I hope we’ve found a new angle.

“It’s easy to talk glibly about how the metaverse is an interconnected nexus of 3D worlds without asking some pretty fundamental questions,” muses David Shapton at RedShark News. “One of these is ‘who’s going to make the metaverse?’”

Shapton thinks that a true metaverse — that is one with no boundaries — is a thing of infinite complexity, not just in a physical sense, but in the sense of interactions and outcomes. So, the problem boils down to how we design something infinitely complex with finite tools.

Perhaps evolution can provide the answer? In contrast to an intelligent “watchmaker” creating the complexity of living beings in one masterplan, evolutionary theory argues that only by natural selection — the blind watchmaker — has life on earth reached its current state.

“Evolution leads to absolutely staggering complexity and results in seemingly impossibly complicated biological machines,” says Shapton. “So maybe that’s what we need for the metaverse: evolution.”

Evolution of the metaverse can of course be sped up by the trial and error of artificially intelligent computers, but Shapton doesn’t think this will work.

“Natural evolution went through a phase like that. It was called the Cambrian period, and it was characterized by an explosion of weird and sometimes wonderful life forms. Unfortunately, few of their descendants exist today because they were so odd and, well, speculative, that they just weren’t destined for this world. Because of the trend towards oddness in the short term, we can’t rely on artificial evolution to give us a convincing metaverse. Instead, we need more organization and purpose than that.”

That implies human input — and some level of supervision. None of which solves the problem of where all the detail in this all-embracing, immersive virtual world will come from.

“Every object in the metaverse will have to contain and share its own data. That’s data about the physics of the object — texture, softness, rigidity, elasticity: any number of physical characteristics that will need to be able to interact with other things.”

He looks to games for an answer: “What if we can distil the essence of a 3D world to a set of procedural rules?” Shapton ponders. “Like the rules for designing a city? You could teach a generative metaverse program what a city is ‘like.’ What a forest is ‘like’ or what an alien planet is ‘like.’ If we can distil that ‘essence of the experience’ into a set of rules, we can generate fully authentic experiences.”

I’ve no doubt there are teams of programmers on this very case at MIT, elsewhere in academia, or in the labs at Meta and Microsoft.

Incidentally this goal, or something like it, has been envisaged by cinematographer Greig Fraser as the ultimate in virtual production. Fraser is the leading cinematographer engaged in virtual production, having established the template with Jon Favreau for The Mandalorian and using volume stages most recently on The Batman.

The state of the art of virtual production at present means that filmmakers need to specify in advance where the camera will be looking in a volume. It’s more time- and cost-efficient to build the specific digital assets that will be shot in the games engine than to create a full-scale photoreal digital construct of the entire virtual world. But that is where virtual production is heading, Fraser predicts.

“In theory you could build an entire world for your film in the games engine much like Fortnite,” he says. “For example, if we did that for Gotham City, it would allow a director to choose anywhere in that city they wanted to shoot on any given day. You might decide to shoot on the corner of first and second street. Or high up on the Empire State. You can change the light, change the props and shoot. That’s what the future could be once the processing speeds up.”


Monday, 25 April 2022

Why the Podcast Medium Keeps Shapeshifting

NAB

article here

Podcasting continues to grow in the US, with 177 million Americans aged 12 and over having at least sampled a podcast in 2021 — roughly equivalent to the number of Americans that used Facebook last year, according to the Infinite Dial 2022 report.

Much of that trial of the medium happened as we were spending more time at home, especially in early 2021, according to the report compiled by Edison Research, Wondery, and ART19.

By the end of 2021, however, those patterns had changed — as more of us went back to work and school, restricting the time spent listening to podcasts.

Those who listen to weekly podcasts consume eight episodes in an average week, a number that remains unchanged in 2022.

Podcasting as a term is more familiar than ever as the word continues to penetrate the consciousness of America, with awareness rising to 79% of the population.

That high figure doesn’t necessarily translate into listeners — showing that there is huge room for growth in the sector.

“There is certainly a great deal of confusion out there among the Americans who have heard the term, but have yet to listen to a podcast,” say the report authors. “Fortunately for the medium, that percentage continues to shrink — and this year, we report a significant increase in the percentage of Americans 12+ who have at least tried a podcast at some point in their lives.

“So, the encouraging news is that there is at least more sampling of podcasts happening, which leads to a greater chance that new listeners will, in fact, find that perfect show for them.”

When podcasting first started attracting a measurable audience in the mid-2000s, listeners were primarily white and male. Now, however, the gender split is slowly approaching that of the US population (currently, 48% male and 51% female).

Presently, podcast listeners are at least as diverse as the US population, the report finds.

“This is a great story for the medium, and with the resources being made available to podcasters who serve Black and Latino audiences, and the commitment to creating more diverse content among the largest producers in the medium, it is not hard to imagine the podcast audience becoming more diverse than the US population in just a few years.”


The Metaverse Will For Sure Change the Way We Work. But How?

NAB

article here

The remote distributed workplace of today is already vastly different from what we could have imagined just a couple of years ago, but this is nothing compared to the changes being ushered in by the metaverse.

The 3D internet and the technologies surrounding it promise radical new levels of social connectedness, mobility, and collaboration inside a virtual workplace, a promise that still sounds too much like science fiction to actually come true.

“Imagine a world where you could have a beachside conversation with your colleagues, take meeting notes while floating around a space station, or teleport from your office in London to New York, all without taking a step outside your front door,” invites Mark Purdy, an economics and technology advisor writing in the Harvard Business Review.

The implications of the emerging metaverse for the world of work have received little attention, he contends, yet companies everywhere need to get ready or get bypassed by talent and innovation.

He identifies four major ways future work will morph. These are: new immersive forms of team collaboration; the emergence of AI-enabled colleagues; the acceleration of learning and skills acquisition; and the eventual rise of a metaverse economy with completely new work roles.

I confess to being deeply skeptical that many of these ideas will actually contribute to a better working environment. Being stuck inside a virtual office interacting with an AI-bot seems neither fun nor productive, nor particularly conducive to great team building. In fact, it seems to mask the drudgery of work, serving only to save companies the cost of real-life infrastructure to house their workers and to facilitate round-the-clock surveillance.

But Purdy has amassed quite a collection of activity in support of his more outlandish claims, which do seem to point in the direction of significant change.

So, let’s take a look.

Teamwork and Collaboration in the Metaverse

The metaverse promises to bring new levels of social connection, mobility, and collaboration to a world of virtual work. For evidence we can look to NextMeet, an avatar-based immersive reality platform based in India.

With it, employee “digital avatars” can pop in and out of virtual offices and meeting rooms in real time, walk up to a virtual help desk, give a live presentation, relax with colleagues in a networking lounge, or roam an exhibition using a customizable avatar.

Participants access the virtual environment via their desktop computer or mobile device, pick or design their avatar, and then use keyboard buttons to navigate the space: arrow keys to move around, double click to sit on a chair, and so forth.

Speaking to Purdy, Pushpak Kypuram, founder-director of NextMeet, gives the example of employee onboarding: “If you’re onboarding 10 new colleagues and show or give them a PDF document to introduce the company, they will lose concentration after 10 minutes. What we do instead is have them walk along a 3D hall or gallery, with 20 interactive stands, where they can explore the company. You make them want to walk the virtual hall, not read a document.”

Other metaverse companies are emphasizing workplace solutions that help counter video meeting fatigue and the social disconnectedness of remote work.

One of them is PixelMax, a UK-based startup that is developing technology to mimic the social interactions you’d experience in a real office. For example, it facilitates those chance encounters with colleagues in the corridors or at the water cooler, and provides a “panoramic sweep” of the office floor so you can quickly see where colleagues are located (and whom to avoid…)

The ultimate vision, according to PixelMax co-founder Andy Sands, is to enable work-based avatars to port between virtual worlds.

“It’s about community building, conversations and interactions,” he explains. “We want to enable worker avatars to move between a manufacturing world and an interior design world, or equally take that avatar and go and watch a concert in Roblox and Fortnite.”

Purdy argues that virtual workplaces can provide a better demarcation between home and work life, “creating the sensation of walking into the workplace each day and then leaving and saying goodbye to colleagues when your work is done.”

What’s more, why should a virtual office mimic most people’s experience of a “drab, uniform corporate environment” when you can have a beach location, an ocean cruise, or even another world?

Outlandish? VR platform Gather is already offering ‘dream offices’ that allow employees and organizations to ‘build their own office’, whether that’s a “Space-Station Office” with views of planet Earth or “The Pirate Office,” complete with ocean views, a Captain’s Cabin, and a Forecastle Lounge for socializing.

Introducing Your Digital Colleague

It’s almost a given that future work (and social) interactions will be carried out by digital avatars of ourselves. Increasingly, these digi-selves will be joined by an array of automated digi-colleagues — “highly realistic, AI-powered, human-like bots.”

One example is UneeQ, a “digital humans” creator behind Nola, a digital shopping assistant, and Rachel, an always-on mortgage adviser.

AI-bots are also developing human-like emotions (using expression rendering, gaze direction, and real-time gesturing) “to create lifelike, emotionally-responsive digital humans.”

Purdy reckons these AI agents will act as advisors and assistants, doing much of the heavy lifting of work in the metaverse and, in theory, freeing up human workers for more productive, value-added tasks. In theory.

Ultimately, they could force humans out of work because “they don’t take coffee breaks, can be deployed in multiple locations at once and can be deployed to more repetitive, dull, or dangerous work in the metaverse.”

Faster Learning in the Metaverse

The aspect of virtual work life that is already taking off and proving its worth is training and skills development. Purdy claims further advances and greater adoption will “drastically compress the time needed to develop and acquire new skills.”

“In the metaverse, every object — a training manual, machine, or product, for example — could be made to be interactive, providing 3D displays and step-by-step guides. VR role-play exercises and simulations will become common, enabling worker avatars to learn in highly realistic, ‘game play’ scenarios.”

There is research in the piece “VoRtex Metaverse Platform for Gamified Collaborative Learning” that suggests virtual-world training can offer important advantages over traditional instructor or classroom-based training, as it provides a greater scope for visually demonstrating concepts, a greater opportunity for learning by doing (the game becomes the lesson), and overall higher engagement through immersion in games and problem-solving through “quest-based” methods.

New Roles in the Metaverse Economy

Just as the internet has brought new roles that barely existed 20 years ago — such as digital marketing managers, social media advisors, and cyber-security professionals — so, too, will the metaverse likely bring a vast swathe of new roles that we can only imagine today: “avatar conversation designers, holoporting travel agents to ease mobility across different virtual worlds, metaverse digital wealth management and asset managers,” says Purdy.

Not to mention the plethora of creative activities geared around the creator economy. IMVU, an avatar-based social network with more than seven million users per month, has thousands of creators who make and sell their own virtual products for the metaverse — designer outfits, furniture, make-up, music, stickers, pets — generating around $7 million per month in revenues.


Challenges and Imperatives

Significant obstacles could still stymie any or all of this. The computing infrastructure and power requirements alone need to be upgraded for everyone to participate. The metaverse also brings a maze of regulatory and HR compliance issues, for example around bullying or harassment in the virtual world.

Purdy has the following guidance for companies that want to take heed.

Make portability of skills a priority: For workers, there will be concerns around portability of skills and qualifications: “Will experience or credentials gained in one virtual world or enterprise be relevant in another, or in my real-world life?” Employers, educators, and training institutions can create more liquid skills by agreeing upon properly certified standards for skills acquired in the metaverse, with appropriate accreditation of training providers.

Be truly hybrid: Enterprises must create integrated working models that allow employees to move seamlessly between physical, online, and 3D virtual working styles, using the consumer technologies native to the metaverse: avatars, gaming consoles, VR headsets, hand-track controllers with haptics and motion control that map the user’s position in the real world into the virtual world.

Yet this is only the start. Companies like Kat VR are developing virtual locomotion technologies such as leg attachments and treadmills to create realistic walking experiences.

Learn upwards: In designing their workplace metaverses, companies should look particularly to the younger generation, many of whom have grown up in a gaming, 3D, socially connected environment. “Reverse intergenerational learning — where members of the younger generation coach and train their older colleagues — could greatly assist the spread of metaverse-based working among the overall workforce.”

Only in passing does Purdy mention that “the metaverse will only be successful if it is deployed as a tool for employee engagement and experiences, not for supervision and control.”

While we might concur that the future of work is bound to change with the introduction of many more digital online components, the idea that all of this is progress is worth further examination.



Is Web3 Magic Money or Actually a Global Internet-Native Economy?

NAB

Cryptocurrency, blockchain and tokens are more than just magic internet money. These digital finance technologies are maturing into a full-blown parallel economy that is native to the internet.

article here

“What we’re actually seeing is the birth of the world’s first global internet-native economy,” says Patrick Rivera, Product Engineer at Web3 developer Mirror.xyz.

Rivera argues that Web3 products perform specific functions in the digital economy that regular financial mechanisms just can’t fulfil.

“Crypto tokens are the first digitally native asset and an entire digital economy is developing around them that is parallel to the traditional economy. This is the real significance of Web3.”

To understand what he means, we need to contrast his understanding of Web3 with the two stages of the internet that have come before:

Web1: Read-Only. Period: 1980s-2003. The internet consisted of static websites like Geocities, Yahoo, and AOL. Users read online content or information without interacting with or generating content themselves.

Web2: Read/Write. Period: 2004-2022. UGC is enabled on social networks like Instagram, Twitter, YouTube, Facebook, and TikTok. Data is “written” to centralized databases owned by the platforms, which can sell it to advertisers or arbitrarily take down content and ban users without their consent.

Web3: Read/Write/Own. Period: now emerging. It leverages blockchain technology to enable users to own digital assets and their data. Web3 also represents a return to open-source protocols that cannot be altered or manipulated “according to the whims of centralized companies like Google and Apple,” Rivera notes.

“What distinguishes Web3 from the two previous eras is ownership,” he underscores. “That ownership is enabled by tokens, which are the fundamental unit of value in crypto economies.”

There is more than one type of token in Web3: fungible (interchangeable one-for-one, such as one bitcoin for another) and non-fungible (unique). They play different roles in the digital economy.

“Tokens stored on public blockchains distinguish Web3 from previous eras of the web. These tokens enable digitally-native property rights which for the first time will unlock the ownership layer of the internet.”
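The practical difference between the two token types shows up in the shape of the ledger itself. A simplified sketch (my own types, not from Rivera’s post; real tokens follow on-chain standards such as ERC-20 for fungible and ERC-721 for non-fungible tokens):

```typescript
// Fungible ledger (ERC-20 style): only balances matter, and any
// unit is interchangeable with any other.
type FungibleLedger = Map<string, number>; // owner address -> balance

// Non-fungible ledger (ERC-721 style): each token ID is unique and
// has exactly one owner.
type NftLedger = Map<number, string>; // token ID -> owner address

// Transferring fungible tokens just adjusts two balances...
function transferFungible(
  ledger: FungibleLedger,
  from: string,
  to: string,
  amount: number
): void {
  const balance = ledger.get(from) ?? 0;
  if (balance < amount) throw new Error("insufficient balance");
  ledger.set(from, balance - amount);
  ledger.set(to, (ledger.get(to) ?? 0) + amount);
}

// ...whereas an NFT transfer reassigns ownership of one specific item.
function transferNft(
  ledger: NftLedger,
  tokenId: number,
  from: string,
  to: string
): void {
  if (ledger.get(tokenId) !== from) throw new Error("not the owner");
  ledger.set(tokenId, to);
}
```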

With that in mind, what are the constituent parts of the global internet-native economy?

Pieces of the New Digital Economy

Tokens: Internet-native assets that incentivize participants to work together without requiring a human intermediary or central authority. Instead, the operating rules are encoded at the inception of a protocol, enforced by smart contracts and can’t be changed without the consent of network participants.

Smart Contracts: Programmable open-source contracts, running on public blockchains, that automatically execute when preset conditions are met. These are the productive assets of the digital economy, allowing for the mass production of digital goods like NFTs and fungible tokens (a toy example follows this list).

Decentralized Exchanges: These are to the Web3 economy what the stock market, retail stores, or ecommerce are to the traditional economy. Examples include Uniswap (a crypto trading protocol) and OpenSea (a peer-to-peer marketplace for NFTs). Just like you can buy physical goods at Walmart or online at Amazon or eBay, you can buy crypto-native digital goods on OpenSea.

DAOs: Decentralized versions of traditional companies which allow people who work or participate in the project to own it and make collective decisions using smart contracts. DAOs are to the internet-native economy what joint-stock corporations were to the traditional economy — a new way of organizing people to fractionalize ownership, engage in joint enterprises, pool together capital, produce products or services, and make collective decisions.
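To make “execute automatically when preset conditions are met” concrete, here is a toy escrow written in plain TypeScript rather than an on-chain language; the names and logic are illustrative only. The rules are fixed at creation and the outcome follows mechanically from them, with no intermediary deciding:

```typescript
// Toy escrow: the "contract" rules are baked in at inception, and
// settlement depends only on those rules, not on a trusted middleman.
// Everything here is an illustrative simplification.
class ToyEscrow {
  private settled = false;

  constructor(
    private buyer: string,
    private seller: string,
    private amount: number,
    private deadline: number // unix ms; a preset condition
  ) {}

  // Anyone can call this; the outcome is determined entirely by
  // the encoded conditions.
  settle(deliveryConfirmed: boolean, now: number): string {
    if (this.settled) throw new Error("already settled");
    this.settled = true;
    if (deliveryConfirmed && now <= this.deadline) {
      return `pay ${this.amount} to ${this.seller}`;
    }
    return `refund ${this.amount} to ${this.buyer}`;
  }
}

const escrow = new ToyEscrow("0xBuyer", "0xSeller", 100, Date.now() + 86_400_000);
console.log(escrow.settle(true, Date.now())); // "pay 100 to 0xSeller"
```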

None of these elements work in isolation. “Only by putting it all together, are we able to see that Web3 enables digitally-native products to be produced, owned, and traded via blockchain technology in ways that were not previously possible,” says Rivera.

The Birth of the Crypto-Native Creator Economy

A crypto-native creator economy is emerging in which creators generate revenue through NFTs. This might happen with different types of NFTs:

· 1-of-1 NFTs — things like Beeple’s “Everydays: The First 5000 Days” artwork, which was purchased for $69 million in 2021. Only one person can collect it, with the downside that this usually prices out most people.

· Open edition NFTs — an emerging model. These can be priced at different tiers according to their relative scarcity and can be any type of media: art, content, music, writing, videos, and more.

· Tiered subscription NFTs — instead of buying a subscription, you buy something you can display and play with, collecting subscriptions to unlock other benefits. Subscription NFTs will be tradable on global 24/7 marketplaces and enable early backers to share in the upside of a creator’s work.

What To Expect Next?

Given the early stages of Web3, Rivera speculates on future developments for the internet-native economy. He thinks there could be “massive adoption” of the following:

Crypto Social Networks: Crypto-native versions of YouTube, Spotify, Instagram where people are rewarded for creating content with tokens. Instead of relying on algorithmic curation of feeds and lists, people with tokens get to decide how content is curated. Moreover, things like “likes” and “shares” can be turned into portable tokens owned by users themselves. This could be used to completely change how social interaction happens online.

Crypto Gaming: The ability to own in-game items that are portable across different gaming environments and tradeable on a global 24/7 marketplace.

Crypto Work: DAOs with freelancers getting paid in stablecoins and equity tokens with streaming payment solutions. This includes the emergence of DAO tools for specific verticals like musicians, artists, NFT collectors, investors, and writers.

Crypto Firms: New types of organizations built without a CEO or board of directors, and governed by tokens and small, focused committees (i.e., DAOs).

Crypto States: Buy digital real estate (it’s pixels, folks) via tokens, and have decisions made on ownership (or rent?) through an on-chain governance process.

Rivera notes that “today the experience of onboarding to Web3 products is still pretty janky,” especially in the initial stages, like transferring money from your standard bank account to crypto in your digital wallet, but he thinks this will change as Web3 begins to mature.

As it stands there are only about 10 million Ethereum users worldwide, which is just 0.2% of the total number of internet users. Nonetheless, Rivera thinks Web3 is past the point of no return.

“The internet-native economy is in its infancy…. Adoption is likely to massively scale over the coming decade. Builders and creators now have a generational opportunity to meaningfully impact the next era of the internet.”

It’s just a matter of time.



Is the Creator Economy Under Threat from a New Copyright Law?

NAB

article here

A new bill on copyright law is being introduced to Congress and those who support creators aren’t happy about it. To some it upholds Big Media — with Disney being explicitly called out — versus the bottom-up creator economy.

The “Strengthening Measures to Advance Rights Technologies (SMART) Copyright Act of 2022” would force every digital platform or website that allows user-generated, uploaded content to use content monitoring software designated by the Copyright Office in order to avoid facing copyright infringement claims.

Those opposing the bill include Public Knowledge, a group that says it promotes freedom of expression, an open internet, and access to affordable communications tools and creative works.

“This bill is the latest example of legislation that threatens the vibrant, open, and innovative internet in the name of intellectual property protection,” Nicholas Garcia, Policy Counsel for Public Knowledge, said in a statement.

“This bill will force digital platforms and websites to implement technical measures that monitor all content that users upload, automatically scrutinizing everything we write, create, and upload online for the sake of copyright protection.

“What is worse is that the details of these technical measures don’t even exist yet, and Congress has decided to give authority over these still-unknown and untested monitoring programs to the Copyright Office, which has very little technical expertise and a known history of prioritizing corporate interests over the interests of internet users and individual creators.”

To be clear, Public Knowledge declares that the bill “opens the door to online censorship on a massive scale.”

While believing that protecting copyrighted works is essential for promoting creativity and protecting the livelihoods of creators, Garcia argues, “this bill threatens the very values it claims to protect and would be disastrous for a free, creative, and culturally rich internet. Unfortunately, the SMART Act is anything but.”

Among the contentious issues is the appointment in January of a former Disney Deputy General Counsel, Suzanne Wilson, as the US Copyright Office General Counsel.

“If there is any one company that should obviously be kept from influencing copyright, it’s Disney,” says the Revolving Door Project — which lists all the ways the Mouse House has used copyright litigation to defend its turf. “This is not just because it seems to be perpetually entangled in intellectual property disputes… they weaponize their copyrights in abusive ways.”

Instead of appointing copyright lawyers who have worked only for “Big Content,” the government should also bring in experts with experience in the creator economy, they argue.

In recent times, of course, the internet has evolved to potentially disintermediate media behemoths like Disney, Warner Bros. and so on in favor of one-person-band creators who can directly monetize their creativity.

Opponents argue that the proposed revisions to the Copyright law not only do not acknowledge this but actively swing law-making back decades in favor of media monopolies.

“Is one creative sector superior to another just because it has well-connected lobbyists and better-paid political fundraising operations?” poses Joshua Lamel, executive director of Re:Create, which argues for a balanced copyright system to benefit creators, users and innovators.

“After all, every consumer is also a creator in the digital age. Every single user-contributed tweet, photo, comment, and post is a creative work — and therefore subject to copyright law. Policy decisions about copyright and how the internet works have widespread impacts on all creators, especially the rest of us.”

Re:Create points to existing legal frameworks like the Digital Millennium Copyright Act, which helped facilitate the creative economy in the first place by supporting start-up platforms while protecting creators’ copyrights.

Freedom of speech advocates worry about proposals to impose filtering technology online, and fear that enforcement would be in the hands of big media.

Such filters would put startups at a disadvantage, they argue, and threaten internet users’ ability to post free speech online.

Lamel urges, “Policymakers must think for themselves about how stronger, more restrictive copyright laws might protect a small group of traditional creators but hurt the rights, livelihoods, and interests of all Americans.”