Tuesday, 7 September 2021

Why Big Data Is The Key To Tech R&D

NAB 

With macro leaps in technology from AI to 5G converging at pace, perhaps the only way to understand how to act is to apply big data. Applying big data to tech R&D is the key to staying relevant in today’s highly competitive tech market, writes Natasha Lane at the appropriately titled online publication insideBIGDATA.

https://amplify.nabshow.com/articles/why-big-data-is-the-key-to-tech-rd/

From a purely business perspective, using big data in R&D has several important benefits.

For one, it’s an effective way to save both time and money, particularly when using already available information or investing in the continuous collection and analysis of data.

Secondly, it’s a more accurate way to collect, interpret, and apply information. Well-made collection systems allow businesses to gain access to more relevant data.

Thirdly, using big data for research and development moves businesses from historical to predictive decision-making. This allows them to stay ahead of the market. Moreover, it encourages R&D departments to develop solutions relevant to the near future instead of the rapidly passing present.

In other words, “Using the right data prevents brands from spending their money and energy on products and services that are predestined to fail,” says Lane.

As an example, she cites Tesla’s use of in-car sensors that track user behavior and car performance. Analysis of this data helped the company successfully diagnose an overheating issue in 2014 which it then resolved with a firmware update.

“Big companies like Tesla or Apple are not the only ones who can utilize data to develop relevant products. Thanks to the wide availability of data sources, almost any player in the tech industry can do the same.”

Software solutions like eye tracking add-ons can help web designers develop UX features fully optimized for emerging consumer behaviors. Similarly, product developers can keep an eye out for relevant automation shortcuts on service websites like IFTTT.

“However, to get the absolute most out of the available information, organizations must understand the importance of consistent collection methods, expert interpretation, and the concept of the margin of error,” Lane says. “Only then can they look for ways to integrate big data into their R&D processes.”

 


US OTT Subs to Hit 277 Million by 2026

NAB News

More evidence of the rise and rise of OTT: Subscriptions in the US will increase another 20% to 277 million nationwide over the next five years.

https://amplify.nabshow.com/articles/us-ott-subs-to-hit-277-million-by-2026/

In a new report, The Evolving Digital Media Landscape, Parks Associates and partner Everise reveal that in Q1 2021, average OTT subscription length in US broadband households correlated strongly with age: older consumers subscribe to fewer services but keep them for longer, while younger consumers subscribe to a larger number of services but are more likely to churn through them.

Generational changes may prove to be a challenge to OTT companies going forward. Although a majority of Gen Z adults own and use televisions, adoption is declining — only 72% of Gen Z householders report owning and using a TV, compared to 77% of Millennials, 88% of Gen X, and 93% of Boomers.

Smart TV ownership is likewise lower — less than half of Gen Z householders report owning and using a smart TV, compared to 56% of US broadband households overall. “Companies must make multiplatform support a priority,” the researchers conclude.

Dave Palmer, President of Everise says, “The emergence of multiplatform viewing further drives the need for [media companies] to protect both themselves and their customers with a multichannel content moderation and omnichannel support strategy.”

Catching Service “Hoppers”

Further complicating customer retention is that roughly one-quarter of OTT service subscribers are “hoppers” who switch between services and re-subscribe multiple times.

Hoppers are unique in that they stay with their services for less time, the report says, hold a higher average number of subscriptions, and have cancelled more services over the past 12 months. They do not necessarily subscribe to a service intending to cancel it, but they are certainly more willing to cancel a service and move to another one offering a desired program. These customers are among OTT services’ most demanding subscribers and contribute disproportionately to churn.

“Hoppers are younger than average OTT subscribers, well-educated, and earn higher incomes than average OTT subscribers. Certain services may offer seasonal content or have small catalogues occasionally refreshed with blockbuster shows. For these services, customer churn is less of a threat and more of a way of life. Their primary goal is to make sure that these customers return again later in the year.”
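As a rough sketch of how an analyst might flag such “hoppers” in subscription data — the event format and the two-re-subscription threshold below are hypothetical illustrations, not details from the Parks Associates report:

```python
from collections import defaultdict

def find_hoppers(events, min_resubscribes=2):
    """Flag subscribers who re-subscribed to the same service multiple
    times within the observed window (a hypothetical rule of thumb)."""
    resubs = defaultdict(int)
    seen = set()
    for user, service, action in events:  # events in chronological order
        if action == "subscribe":
            if (user, service) in seen:
                resubs[user] += 1  # returning to a known service counts as a re-subscribe
            seen.add((user, service))
    return {u for u, n in resubs.items() if n >= min_resubscribes}

events = [
    ("ana", "svc_a", "subscribe"), ("ana", "svc_a", "cancel"),
    ("ana", "svc_a", "subscribe"), ("ana", "svc_a", "cancel"),
    ("ana", "svc_a", "subscribe"),
    ("ben", "svc_b", "subscribe"),
]
print(find_hoppers(events))  # -> {'ana'}
```

A real retention team would work from billing records and a sliding 12-month window, but the segmentation logic is the same: count returns, not just cancellations.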

One emerging challenge for companies in the video entertainment space is a growing competition for consumers’ free time. Video gaming is a prime target and may be one reason why Netflix is launching a games unit, with a Stranger Things spin-off one of its first releases.

“OTT companies are increasingly recognizing games as their new competitor…Brands will need to change their retention strategies, offering value to consumers both at-home and on-the-go. Long term, gaming will either prove to be a challenge — or an opportunity — to players in this space.”

Live TV Remains a TV Zone

The live TV space has yet to embrace alternative, non-TV platforms. Per the report, live TV consumption remains heavily tilted towards television sets, while VOD is consumed on a mix of platforms and more closely mirrors the online video space overall. It is still difficult for viewers on these platforms to access the content they’re looking for, concludes Parks Associates, with many apps and services not offering users any way to watch content live.

“This is one of the reasons young consumers consume relatively little traditional live TV.”

 


The Future of Data Science in Six Parts

NAB

Everything is now a data point. We are all sensored. Machines are networked to the internet of things. Data collection is automated and decisions based on its aggregation are taken machine to machine in real time.

https://amplify.nabshow.com/articles/the-future-of-data-science-in-six-parts/

Here’s an overview of how this usage is evolving from futurist Bernard Marr — six signposts that point the direction of travel between where we are today and where data science will take us tomorrow:

AI As-A-Service

AI has been pushed into the mainstream by cloud-based as-a-service solutions, where the infrastructure is sitting in a data center and companies pay for what they use.

“The evolution of AI as-a-service means it is no longer simply helping us to automate repetitive workloads such as data entry or language translation. Increasingly it will help us make data-driven decisions such as setting strategic targets and creating smarter products and services.”

Content Creation

Marr thinks machines are increasingly giving us a run for our money when it comes to creativity, particularly in less ambitious endeavors — such as writing product descriptions or creating highlights videos for sports events.

“One huge advantage that AI has over human creatives is that the speed it can work at means it can far more efficiently produce targeted, personalized content. Product descriptions on websites can be tailored for the person that the AI predicts will be reading them, and adverts (or even movies) could have a personalized soundtrack, algorithmically created to appeal to a specific individual.”

Small Data

The events of 2020 upended many of our old customs and practices and, to an extent, wiped the slate clean on what we think we know. This is why people are now talking about “small data” — technology and practices that enable data-driven decision-making to continue when the amount of information we have is limited.

“Although it sounds like the literal opposite, small data practice is closely linked to big data concepts and will increasingly be brought into play when data becomes unexpectedly outdated due to unforeseen events or is otherwise incomplete or unavailable.”

Edge Analytics

With edge, the computational heavy lifting is carried out as close as possible to the point where the data is collected, often within the data-collecting device itself. Edge computing means decisions can be taken more quickly and reduces the bandwidth taken up sending information back and forth to the cloud.

Perhaps the trend most familiar to M&E, edge computing is, Marr believes, “undoubtedly starting to gain traction in the real world.”
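The edge principle can be sketched in a few lines: apply the decision logic on the device and forward only the readings that matter, so most samples never consume cloud bandwidth. The threshold and sensor values here are invented purely for illustration.

```python
def edge_filter(readings, threshold=75.0):
    """Run decision logic at the edge: forward only anomalous readings
    instead of streaming every sample to the cloud (illustrative rule)."""
    to_cloud = [r for r in readings if r > threshold]
    bandwidth_saved = 1 - len(to_cloud) / len(readings)
    return to_cloud, bandwidth_saved

# Hypothetical temperature samples collected on-device
samples = [62.1, 70.4, 88.9, 65.0, 91.2, 74.9, 68.3, 77.5]
anomalies, saved = edge_filter(samples)
print(anomalies)                      # only the out-of-range readings leave the device
print(f"{saved:.0%} fewer samples sent upstream")
```

The same pattern scales up to video: analyse frames locally, ship only events or proxies, and keep the round trip to the cloud out of the decision loop.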

 

Citizen Data Scientists

If data analytics are key to the survival of any company in any industry, the shortage of people who can actually understand the data is a cause for concern.

“This lack of capacity to capitalize on opportunities to leverage data is undoubtedly causing a large amount of inefficiency within many organizations, purely through missed opportunities,” says Marr.

Hence, the rise of the “citizen data scientist” (a term coined by Gartner) — someone who is not necessarily academically trained as a data scientist or employed as a data analyst but has the ability to work with and implement data solutions as part of their day-to-day work.

“Closely related to the AI as-a-service trend, a big driver for the explosion in popularity of this trend is the emergence of ‘no-code’ and natural-language data science platforms, allowing anyone to have a stab at creating smart applications even if they don’t know anything about software development.”

Ethical and Responsible AI

Our understanding of ethics applied to AI is evolving alongside the technology itself. It is understood that human or systemic bias can lead to automated, large-scale machine bias. “This means mitigating AI’s potential [for harm],” Marr says, since if left unchecked, AI could magnify prejudice and accelerate inequality.

AI technology is expensive, and there are only a limited number of humans with the skill to deploy it. The Partnership on Artificial Intelligence to Benefit People and Society — founded by Google, Microsoft, Apple, and others — is one organization aiming to ensure equitable AI resource sharing. Marr says most forward-thinking organizations involved with AI now often have ethics boards “dedicated to ensuring that nothing they do could be perceived as having harmful effects, and this will become increasingly common.”

 


Monday, 6 September 2021

Timeline TV launches virtual studio broadcast centre in Ealing

SVG Europe

Timeline’s new broadcast facility in Ealing has been hosting Channel 4 and Whisper’s live coverage of the Paralympics, but the build is part of a wider expansion by the west London facility group.

https://www.svgeurope.org/blog/headlines/timeline-tv-launches-virtual-studio-broadcast-centre-in-ealing/

Timeline already has five large galleries and small studios in Ealing for live sport but had been missing a sizeable studio. Planning for the Ealing Broadcast Centre (EBC) began last year, kicking off in earnest in November.

After scouting for a location nearby, Timeline found a new build with a high ceiling and space for a lighting grid and green screen set plus edit suites. Building and integration commenced in April 2021 with the contract for the Paralympics accelerating time to completion.

“The Paralympics is a full remote operation where we bring all signals in,” explains Daniel McDonnell, Timeline CEO. “We’ve outfitted a studio in Leeds [from where Paralympics Gold Rush is presented by Clare Balding] with eight cameras controlled from Ealing. On the ground in Tokyo we’ve a remote camera operation [with host Rosie Jones] plus six roving cameras plus a presenter for additional coverage.”

Another remote controlled-from-UK camera is installed in the Team GB athletes’ village. The new studio in Ealing is used for guests in the London area and daily live show The Last Leg with Adam Hills, Alex Brooker and Josh Widdicombe. All feeds, plus 20 signals from OBS, are switched at the EBC gallery.

VR studio

The 900 square metre space is set over three floors, the centrepiece of which is a 185sqm virtual reality studio with a 4.5 metre high lighting grid, green cyc and a fully configured multi-camera VR system using Unreal Engine and Brainstorm InfinitySet delivered in partnership with Moov. A Mo-Sys StarTracker system enables full 3D VR tracking on all types of cameras for virtual studios.

“We wanted to kit out the studio for high-end virtual reality using Brainstorm processing and six cameras,” says McDonnell. “In concert with Moov TV we will design some VR set building blocks so a production can come here and get a real headstart. Normally a VR design might take two months of development. We’re also offering, with Moov, experienced creatives and technicians. The aim is to bring down the time and expense of working in a VR studio so productions can turn up and go the next day.”

Physical sets are also an option via black cyc, but McDonnell feels that virtual sets are a great way for productions to reduce their carbon footprint: there’s no need to build sets from precious resources or transport them.

Also part of the EBC are dressing/make-up rooms, production offices, green rooms, full online edit suites and VO booths. There’s a triple-row production control room and large VT replay room plus tier-three data centre. Connectivity to all major hubs includes BT Tower, NEP Connect and Tata. Satellite downlink capability is also available.

A large car park can accommodate an OB truck – which is what Timeline is currently doing for the Paralympics. Its UHD2 vehicle is connected to the facility “to give us an extra gallery to work on the programming”, McDonnell says. “It’s a useful feature, especially in London, and means we can upscale to provide more facilities if needed.”

After the Paralympics there are more bookings though it’s too soon to publicise these. “The proposition does seem to be taking off. We think there’s a shortage of studios in the UK with good connectivity and live galleries that can do EVS, graphics and Piero and that this fills the vacancy.”

Currently the facility is SDI-based, a decision McDonnell puts down to timing. “IP is in our long-term plan. We will have two more galleries open at the EBC by the end of this year and plan to expand to another two. At some point on that journey we will migrate to IP.”

Timeline has also implemented a large NDI network mainly for monitoring around the facility.

“It’s an IP-based bridge between an IPTV system and broadcast monitoring for green rooms and voice over rooms. It enables us to have a very expandable monitoring system using IP. It also enables us to do software-based multiviewing and timecode inserts.”

Medialooks software convertors are used to ingest NDI and Sienna’s broadcast toolkit is used to make multiviews and timecode inserts running on virtual machines.

“We’ve experimented and dipped our toes in the NDI world and the latest version is a game changer,” McDonnell says.

The whole set up is also capable of hosting live broadcast workflows on-prem or in the cloud.

“We have a large data centre in our new facility. It’s a key attribute. More and more clients are remotely producing programmes from their location using virtual machines and systems in our data centre. For me that counts as cloud broadcast.”

Of course, the facility can also connect to third-party data centres depending on show and region to expand its scope globally.

“If you’re doing a show in the Middle East and need a local hop to Amazon you can still bring the feeds back here. We have direct connection to Oracle Cloud [employed for its SailGP production]. One hundred percent it is the direction of travel for the industry to broadcast in the cloud whether that’s remotely from their own broadcast centres or using public cloud or a mix of both.”

 

BT Sport world-first standalone 5G broadcast for MotoGP at Silverstone is next step to more

 SVG Europe

https://www.svgeurope.org/blog/headlines/bt-sport-world-first-standalone-5g-broadcast-for-motogp-at-silverstone-is-next-step-to-more/

Trials of sports media innovations like augmented reality (AR), 8K and 5G have been delayed by the pandemic, but as crowds and crews get back to the stadiums tests are starting up once more.

Leader of the pack – in Europe at least – is BT Sport, which has plans for all three technologies in the works.

At the recent British Grand Prix, MotoGP at Silverstone [Sunday 29 August], BT Sport was able to advance the 5G part of its plan in concert with long-standing motorsport partner, MotoGP, and bag a world first in the process.

“We’ve been talking with MotoGP and rights holder Dorna about exploring 5G and how it can transform some of their workflows,” explains BT Sport chief engineer Andy Beale. “There’s a real-world problem to solve. Increasingly the wireless bandwidth traditionally used for radio cams and mics is being squeezed out and so reclaiming some of the public 4G and 5G spectrum is potentially a good way of solving reduced frequency count.”

BT Sport embarked on a project with Dorna to explore whether 5G was a viable use case as replacement technology for radio cameras in the pit lane and paddock.

Also in the consortium were RF partner Vislink and the University of Strathclyde, which has a 5G lab. The aim was to build a standalone 5G network and explore its potential during live broadcast.

“We didn’t believe the main 5G network at Silverstone would support the required quality of service,” Beale says. “With 60,000 fans, a large circuit area and just one 5G cell we didn’t think it would be reliable.”

Bespoke network

So, the University of Strathclyde built a bespoke one. “A standalone network uses the same principles as a non-contested private network, which is why they think it’s a world first,” Beale says. “It’s part of a wider project basically looking at 5G tech longer term as a replacement for traditional coded orthogonal frequency division multiplexing (COFDM).”

For the British GP 5G trial, Vislink customised a version of its H-Cam transmitter that clips to the back of a camera and built a 5G transponder on that for use in the pit lane before the race. These pictures went to air on the International Programme Feed (world feed) and were also clipped into BT Sport’s live coverage.

“That was used very extensively and successfully up and down the pit lane,” says Beale.

The small cell built by Strathclyde was rigged on the outside of the circuit and, unlike the traditional hardware-based, specialised and proprietary cells from vendors such as Nokia, Ericsson and Huawei, this one was built entirely in the CPU as a software-defined network.

Beale adds: “This is a CPU-run 5G cell running virtually (apart from the physical antenna) on local equipment, and that’s important as we do more and more remote productions and start taking feeds into the cloud.”

As a next step, BT Sport will take it fully into the cloud.

Vislink also built a ‘militarised’ version of the transmitter/transponder for mounting on the back of the media bike, pictures from which are made available for rights holders to use for features.

For BT Sport at Silverstone, former world superbike champion-turned-pundit, Neil Hodgson, rode around the track on the Thursday before race day commentating on the lap.

“5G cells have a smaller footprint than 4G [meaning more of them are required to cover large areas] so we knew the 5G coverage wouldn’t work for the whole lap, but it did work for the section the 5G cell covered,” Beale says.

That’s important because the tests were conducted at speeds ranging from 100kph to 180kph.

“We’re pleased it still did a pretty good job,” Beale continues. “The signal has to authenticate when it arrives on the cell so when the bike is going top speed you do lose a bit of track while that handshake takes place, but it kept a clean signal all the way around the complex before it was lost into the start of turn one.”

For Dorna, the issue is that at every circuit it attends around the world, the bandwidth available for radio cameras and mics is wildly different. So the team has to carry kit that works across all these frequencies, and as that bandwidth gets squeezed it becomes harder and harder to make everything work reliably.

Dorna deploys more than 90 radio cameras, not to mention radio mics and talkback, each weekend. Beale says: “What we’re hoping as an industry is that, with 5G, we can use more consistent frequencies which should make it easier operationally for rights holders and broadcasters. The 5G spectrum has a much more consistent specification worldwide.”

Network slicing still to come

Extensive 5G live production will require network slicing, which is the ability for a broadcaster to licence and hive off a part of the spectrum for the duration of an event.

“That’s still not available unfortunately,” says Beale, who attributes the tardiness of its standardisation to COVID. “Slicing would enable us to safely join a public network without contention but we’re still waiting for that to arrive. That’s why we’re using a standalone network to give us the same effect.”

That said, the standalone model offers little advantage for either editorial or cost reduction. Those benefits will only come with full network slicing.

“At the moment we are still rigging a standalone network much like a COFDM network. The benefits will come when we can licence the public 5G network,” he adds.

“But we’ve proved we can build a standalone network and successfully connect cameras to it.”

BT Sport is set to perform a further trial as part of the IBC Accelerator programme in December and also has its eye on use cases pitchside in rugby.

The IBC Accelerator programme includes Al Jazeera, BBC Sport, BT, Olympic Broadcasting Services, MultiChoice and SuperSport. It hopes to showcase elements of a live sports production over a 5G sliced network, “with glass-to-glass latencies that can match those of a traditional broadcast solution, in the region of 100ms-120ms”.

There is, however, still plenty of work to be undertaken before those targets can be reached.

 

Friday, 3 September 2021

Even in the cloud, managing assets doesn’t get any easier

TV Technology pp14-16 Sept issue

https://issuu.com/futurepublishing/docs/tvt465.digital_september_2021?fr=sM2QyYTM4NzgxNjI#

As the cloud is increasingly applied to media production, the lines between ground-based and cloud-based media environments are becoming blurred. The asset management system provides the glue between them but perspectives on the best way to use it are changing. 

“Even if you think you know what cloud storage might cost, be sure you’re updated on all the recent facts, variables and combinations of services before making a move in either direction,” advises Karl Paulsen, CTO of Diversified. “Like SAN storage, cloud is an evolving and frequently changing environment.”

We asked experts at a number of vendors to offer guidance on migrating assets, the cost of storage, and MAM’s crucial role in orchestrating it all. All agree that, whatever the long-term advantages of cloud production, a ‘big bang’ move is unlikely to pay dividends.

Big bang or piecemeal approach 

“Anyone involved in technology migration projects will probably tell you that it’s much easier building systems from scratch than migrating existing workflows to new systems piecemeal,” says Raul Alba, Avid’s director of solutions marketing. “But the reality is different. From our deployment experience, we’ve found a ‘tabula rasa’ approach isn’t always right because a return needs to be made on past investments, or because entirely new workflows will cause too much operational disruption. This means hybrid workflows will be important for several years, allowing every company to transition to the cloud at their own pace.”

A hybrid scenario implies that the media lives, and is processed, both on-prem and in the cloud. In some situations, hybrid operations will be required regardless as it may be more cost effective to work on-prem before assets are moved to the cloud. 

“Migration could begin with archive or short-form digital workflows, then move onto complex workloads once the organization understands the new ‘physics’ of working in the cloud,” says Lincoln Spiteri, VP Engineering, Dalet Flex. “Users should be aware of both the advantages and limitations that come with working on the cloud. Lessons learnt should be used to optimise workflow.” 

SDVI’s Simon Eldridge similarly recommends taking a specific use case and getting that up and running before starting a process of iterating and optimizing.  

“Don’t try to do everything at once – iteration leads to transformation,” he says. “The value of that quick win, coupled with the experience the team will gain from that initial migration will accelerate the process of implementation for the next use case, which will be logically selected based on its returned value and business impact.” 

Tedial says the cloud has had a major impact on traditional MAM workflows and makes the case for a Media Integration Platform. CTO Julian Fernandez-Campon explains that this allows broadcasters to run the same workflows on-prem or in the cloud, as the applications are integrated and multiple storage locations are transparent to the platform.

“A hybrid cloud deployment using a Media Integration Platform provides a low-risk transition by moving operations in line with business needs, allowing them to pivot between local and remote operation depending on requirements.”

 

Cost considerations 

The cost of cloud can be a minefield. Among other things, costs vary based on where the physical data centers are geographically located. “It’s a bit like going to the smorgasbord,” says Paulsen. “Many items are à la carte.” 

Fees include monthly access, retention time, storage volume and/or the use of inherent capabilities of the store itself. There are also hidden costs for deleting content.

Paulsen says, “Use it once and quickly—a short term ‘put it there and take it back out’—and you’ll have one price. Leave it there for a lengthy period of time—another price. Need rapid access to an archive? You can watch previously expected low-budget costs to take off like SpaceX launching multiple satellites one at a time.” 

It stands to reason that M&E organizations need a good understanding of the assets they ingest and how those assets will be used. This information drives decisions about how to allocate content across hot, cold, and archive storage tiers, which directly impacts costs.

“MAM platforms should fully support cloud storage to balance out storage costs and timely media access,” says Spiteri. “For example, low-bitrate proxies enable low-cost access to archives. Source assets should be pushed to archive storage as soon as possible while lower bitrate mezzanine formats are a good option for editing.” 

Geoff Tognetti, SVP & GM of the Content Management Business Unit at Telestream, offers four questions each organization must address to “accurately estimate” cloud storage costs:

  • What percentage of your existing content footprint should be migrated? (Typical cloud migrations we’ve seen range from 30-75% of current LTO storage, not counting copies.)
  • How do your cloud provider’s tiering policies map to your on-premises lifecycle rules?
  • What media format do you want to use for the assets you store in the cloud — your production format, a separate mezzanine format, proxies, etc.?
  • What is the impact on your current production workflows (number of drives/data movers dedicated to migration, off-hours scheduling, etc.)?
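The first two questions lend themselves to a back-of-the-envelope calculation. This sketch uses entirely invented per-GB prices, tier split, and footprint — real figures vary by provider, region, and tier, which is Paulsen’s point about the à la carte menu:

```python
def monthly_storage_cost(total_tb, migrate_pct, tier_split, price_per_gb):
    """Estimate monthly cloud storage cost for a partial migration.
    tier_split and price_per_gb are hypothetical, keyed by tier name."""
    migrated_gb = total_tb * 1024 * migrate_pct
    return sum(migrated_gb * share * price_per_gb[tier]
               for tier, share in tier_split.items())

# A 500 TB LTO library, half migrated, spread across three tiers
# (all prices and percentages below are made up for illustration)
cost = monthly_storage_cost(
    total_tb=500,
    migrate_pct=0.5,
    tier_split={"hot": 0.1, "cool": 0.3, "archive": 0.6},
    price_per_gb={"hot": 0.023, "cool": 0.01, "archive": 0.002},
)
print(f"${cost:,.0f}/month")
```

Note what the sketch deliberately leaves out: egress, retrieval, early-deletion and API-request fees — exactly the “hidden costs” the article warns about, and the reason a simple per-GB estimate is only a starting point.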

 

The essence of metadata   

Metadata plays a key role in giving content owners visibility of their assets, so they can quickly locate relevant media before it is moved around.

“Having media stored anywhere is pointless if you can’t find it quickly when you need it,” says Alba. “If you can easily find and retrieve only what you need, you’re saving both time and money.” 

Having an adjustable data model, “with time-based metadata and a sophisticated search interface is key to making media management systems efficient,” he says. 

In addition to the object and technical metadata usually considered in a cloud storage environment, you also have to consider descriptive metadata such as transcripts and visual logs.

“Cloud access makes it easier to generate and preserve this data, and the ability to relate it with media files can save a lot of time in post,” says Russell Vijayan, Business Manager, Digital Nirvana. “While such metadata is enhancing process speed, reducing time to market, and improving overall efficiency, there’s still little data available to determine whether the current price points of storage can be justified. For the most part, high-volume users with a clear cognitive metadata plan and usage would stand to benefit.”

“Strong” metadata gives organizations the ability to “track and understand the value of assets, drive asset lifecycle to help manage cost and ensure that correct assets are located and moved through production workflows,” according to Spiteri. 

Generally speaking, the more metadata that can be added automatically the better. The most important point, says Fernandez-Campon, is the concept of metadata aggregation from multiple sources: “This provides a standardized metadata model to combine multiple metadata sources and provide business value.”
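A minimal sketch of that aggregation idea: technical, editorial, and ML-generated metadata merged into one standardized record, with higher-priority sources winning and lower-priority ones only filling gaps. The field names and the precedence rule are assumptions for illustration, not Tedial’s actual model.

```python
def aggregate_metadata(*sources):
    """Merge metadata records in priority order: earlier sources win,
    later sources only fill missing fields (hypothetical precedence rule)."""
    merged = {}
    for source in sources:
        for key, value in source.items():
            merged.setdefault(key, value)  # keep the first (highest-priority) value
    return merged

# Hypothetical metadata from three sources, highest priority first
editorial = {"title": "Paralympics Day 5 Highlights", "language": "en"}
technical = {"codec": "ProRes 422", "duration_s": 1820}
ml_tags   = {"title": "untitled_clip_0042", "topics": ["athletics", "swimming"]}

record = aggregate_metadata(editorial, technical, ml_tags)
print(record["title"])  # curated editorial title beats the ML placeholder
```

The design choice worth noting is the precedence order: automated sources add breadth cheaply, but curated fields should never be silently overwritten by machine guesses.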

Eldridge asserts that as soon as you move to the Cloud, the data universe of “what can be automatically generated and collected by the system just explodes.” 

Contextual, time-based metadata is generated by QC or ML tools that have analyzed the content, he says. That metadata can assist operators looking to segment or make compliance decisions. Time, cost, and performance data can inform decisions around which tools you use and for what purpose.  

“Tagging data can be used to assign costs at a granular level based on process, asset, project, or network. Rather than being considered as a sidecar to the content, metadata becomes a critical driver of your supply chains and an output that helps you further optimize.” 

The evolution of MAM 

Eldridge is specifically referring to a “cloud-native supply chain model” which is the natural evolution of MAM, according to SDVI. It argues that traditional MAMs were built around fixed capacity infrastructure and heavily customized around use cases – therefore unfit for the agility required of cloud production.  

“Today’s media enterprises don’t want to worry about capacity planning or constraints, and they want to pay only for the services they consume,” he says. “Perhaps most importantly, they need to be able to predict cost easily in order to support good decision making. They need to allocate those costs accurately to fully understand the profitability of the products they make.” 

Dalet’s Spiteri agrees: “MAM systems are morphing into media supply chain management systems, where a wide range of media types are managed with a wide range of automated workflows. Once these systems sit in the cloud, they become collaboration platforms, expanding their utility.”

He adds, “Modern MAM systems cannot always fulfil every use-case and this is where it’s important that they offer integration tools and open APIs to create a truly business-driven ecosystem.” 

Avid’s Alba acknowledges that the term ‘MAM’ can be associated with “complex, expensive and hard to maintain systems” but says that’s not the case in practice. “In fact, our users tell us these solutions are more important than ever both because of the amount of media and metadata being managed, and the number of outlets to which it needs to be delivered. MAM is an essential part of making media production workflows efficient.” 

A further point worth considering is the assumption that cloud relegates the once painful, time-consuming, and labour-intensive process of media migration to history. That’s a misconception, asserts Tognetti. “Just as moving from one LTO version to another was inevitable, so is moving from one cloud vendor to another. Cloud storage providers have no incentive to facilitate any migration outside of their products.” 

The solution, he says, is to look for agnostic content management that provides tools “to automate the migration process for all vendors” through cost-effective, non-disruptive workflows across the lifespan of the content. 

M&E organizations need a scalable, elastic platform that allows them to rapidly build, deploy, and iterate on their supply chains as business needs evolve. Call it a MAM if you like, but asset management has never been more important.

Thursday, 2 September 2021

This is the world's first standalone 5G network for live broadcast

RedShark News

5G is being unleashed in phases but it has now reached a level of sophistication where it can be applied to production in earnest.

https://www.redsharknews.com/this-is-the-worlds-first-standalone-5g-network

Elements were trialled around the edges by the Olympics host broadcaster in Tokyo and greater live deployment can be expected during the Beijing Winter Games next spring.

That’s left the field open for motorcycle racing series MotoGP, BT Sport and technology partners including Vislink to claim the world's first standalone private 5G network in action.

This past weekend’s broadcast of the British GP at Silverstone was accomplished in part using live feeds streamed from race bikes.

Live broadcast with 5G

Live pictures were broadcast from a 5G handheld camera which was on the grid before each of the races. An onboard 5G camera also beamed back pictures from a test bike.

Standalone is distinct from non-standalone, where 5G networks are supported by existing 4G infrastructure. Here, the network is 5G-enabled and 5G-reliant all the way across the chain.

Vislink supplied two products for the trial: the first a 5G version of its tried and tested H-cam handheld wireless camera transmitter, the second a brand-new 5G onboard transmitter fitted to the media bike. These were connected to a private standalone 5G network, provided by the University of Strathclyde, covering the pitlane, paddock and part of the circuit. The pictures were supplied to the Dorna production team producing the host feed, which was then shared with rightsholders including BT Sport.

BT Sport also had the feed in their production gallery to cut into their event coverage.

It is still badged a trial, but 5G is expected to find its way into more and more outside broadcast and news coverage. It is certainly a step up from demos such as the one made by the BBC, ITV and others at IBC last year, which suffered glitches – albeit over contested public mobile networks.

A 5G network gives productions greater flexibility than tethered, wired cameras; it also reduces costs and setup times.

In another example, a 360-degree spherical camera can be mounted on a drone. This could communicate with a receiver on the ground to send 8K video to a server in the cloud.

“People with AR glasses can then enjoy live video broadcast as if they were present at the venue,” suggests technology vendor Rohde & Schwarz in its white paper.

BT Sport has also been tracking this. It’s exploring how to capture volumetric video and deliver interactive immersive experiences over 5G, both within sports stadiums and to augment the live broadcast at home. Ideas include streaming a real-time virtual volumetric hologram of a boxing match onto a viewer's coffee table, simultaneously with the live feed. It is all at the proof-of-concept stage, but is part of a nearly $40 million U.K. government-funded program to develop applications that will drive 5G take-up. 

The future of 5G and 6G

5G standardisation is at release 16 with further specification releases in the works.

XR services are being evaluated as part of release 17 while, longer term, a 6G spec is being scoped. A 6G wireless network will feature data rates in the terabits per second and latencies as low as tens of microseconds.

5G live production will require network slicing – the ability for a broadcaster to license and hive off a part of the spectrum for the duration of an event.

6G could take these concepts to the extreme, Rohde & Schwarz suggests, allowing customized network slices tailored to an individual’s needs and applications to create a truly personalized quality of experience.

And while 5G could permit the introduction of holograms, 6G would likely enable high-fidelity holograms on a massive scale.

“High-fidelity holographic communications, pervasive artificial intelligence and multisensory communications (e.g. touch, taste and/or smell!) could become part of our daily lives,” it speculates.