InBroadcast
p58
http://europe.nxtbook.com/nxteu/lesommet/inbroadcast_202106/index.php?startid=58
https://inbroadcast.com/magazines/managing-migration
Vendors offer solutions oriented to on-premise, cloud and hybrid working environments but media organizations must judge the right time to move
From both a technical and a business perspective, customers are currently at a design crossroads concerning library and archive storage management. The dilemma boils down to on-premise, cloud or hybrid solutions. There is no single right answer, since investment and timing depend on the organisation's existing infrastructure and business requirements.
VSN’s Global Sales Director, Roberto Pascual, lays the groundwork. On the one hand, he says, an on-premise system generally requires an up-front investment in equipment and technical solutions that guarantee maximum control, availability and access. In return, it demands greater manual supervision and maintenance by the client.
A cloud model avoids this initial investment in exchange for a periodic operating cost structure. Cloud is generally held to provide greater flexibility to scale usage up or down with workload, without impact on human resources.
Nonetheless, says Pascual, this is subject to reduced availability and access to content, since it is hosted on third-party equipment. “That is why hybrid approaches, if well constructed, take advantage of both models to create a single solution.”
He says, “It’s important to bear in mind that the success of a hybrid solution will not depend solely on the selected technical resources, but rather on adequate prior consulting work to detect the specific needs of the project and deploy the most suitable solution.”
“The goal is for clients to have all the available options at their disposal and to be able to move from one environment to another once the solutions have been deployed and working,” Pascual explains. “In this sense, VSNExplorer and VSNCrea are focused on offering the flexibility that current workflows require to support our customer’s business success.”
Telestream also points to the challenges of managing video in a hybrid environment and suggests that users want application-agnostic storage. Users of media processing applications neither care nor want to know where the content lives on storage, the firm says. Abstracting the storage layer from the application layer lets users choose best-of-breed storage technologies for different departments and requirements. Another challenge is validating video quality after transfer: as content moves from camera card to local disk to cloud to archive, the quality of the video file must be preserved at every hop.
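Checksum-based fixity checking is the usual way to confirm a file survived each hop intact. As a minimal sketch, assuming plain SHA-256 file hashes rather than any vendor's specific verification method, this is the kind of check such tools perform:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 8 * 1024 * 1024) -> str:
    """Hash a file in chunks so multi-gigabyte media files never load into memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(source: Path, destination: Path) -> bool:
    """True if the copy is bit-identical to the original."""
    return sha256_of(source) == sha256_of(destination)

# Re-verify after each hop (camera card -> local disk -> cloud -> archive)
# before the upstream copy is deleted.
```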
Telestream’s DIVA Solutions portfolio now includes Kumulate (formerly part of Masstech, which Telestream acquired). DIVA stores media in what Telestream calls “content-aware objects” and manages the storage of those objects through policies.
“Policy-based asset movement allows for easy automation of media movement between storage tiers including disk, tape, cloud, and removable media on shelves,” explains Geoff Tognetti, CTO, Telestream Content Management Unit. “Telestream will leverage the strengths of DIVA and Kumulate to provide the market with the most advanced content management solution.”
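DIVA's policy language itself is proprietary, but the idea of policy-based movement is easy to illustrate. Here is a minimal, hypothetical sketch, with invented tier names and thresholds, of how a policy might map an asset's access pattern to a storage tier:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    days_since_last_access: int
    size_gb: float

def choose_tier(asset: Asset) -> str:
    """Map an asset to a tier; the rules here are invented for illustration."""
    if asset.days_since_last_access < 30:
        return "disk"   # hot: fast random access for active work
    if asset.days_since_last_access < 365:
        return "cloud"  # warm: shareable, pay-as-you-go
    return "tape"       # cold: cheapest per terabyte at rest

print(choose_tier(Asset("ep101.mxf", days_since_last_access=400, size_gb=120.0)))  # -> tape
```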
DIVA can sync content across multiple data centers, on-prem or in any cloud, and offers “unlimited” storage when integrated with media platforms from the likes of Avid, Grass Valley or Pebble.
From Tedial’s perspective, there are two main challenges: the selection of technology, and the management of new media formats such as UHD and component-based media (IMF, camera formats and so on).
“The first challenge relates to the selection of robust technology that is cost-effective and future-proof and simultaneously offers the reliability required for preservation in a long-term archive,” says CTO Julian Fernandez-Campon. “If broadcasters are storing on-prem they need to implement security mechanisms to guarantee that content won’t be lost, such as keeping duplicate copies and defining a policy for migrating from one storage technology (e.g. LTO) to another.”
When archiving in the cloud all these problems disappear, he says, as redundancy is guaranteed and the underlying hardware is transparent. “AWS provides almost 100 percent durability of objects over a given year across its whole storage portfolio. The downsides of this might be cost and vendor lock-in.”
For the second challenge, Fernandez-Campon says storage systems don’t provide an efficient way to group and describe all the elements that belong to a specific piece of content. “This means some logical structure is required that groups them and stores them in the most efficient way, depending on the underlying technology.”
Tedial’s content management solution, aSTORM, addresses both issues by providing a media abstraction layer in which all content is identified by a unique ID and archived in a storage group.
“The underlying technology is transparent and the applications on top simply request that a piece of content be archived,” he explains. “aSTORM then selects the correct storage technology (on-prem, cloud) according to the defined policy. Most importantly, if migration is required, whether because of technology obsolescence or because the price of a cloud storage service has increased, it will be done automatically without disrupting the service.”
aSTORM also offers a set of microservices covering storage, technical metadata and media files. These provide the abstraction layer required to describe the technical metadata of the content at component level, along with the details of each track (language, video/audio class, etc.), “offering a unified management of all these elements.”
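Tedial has not published aSTORM's API, but the media-abstraction pattern Fernandez-Campon describes, in which applications archive by unique ID and a policy resolves the physical backend, can be sketched roughly as follows (all names, and the policy callable, are hypothetical):

```python
from typing import Callable, Dict, Protocol

class Storage(Protocol):
    def put(self, content_id: str, data: bytes) -> None: ...
    def get(self, content_id: str) -> bytes: ...

class MediaAbstractionLayer:
    """Hypothetical sketch: applications archive by unique ID and never
    see which backend (on-prem, cloud) actually holds the content."""

    def __init__(self, backends: Dict[str, Storage],
                 policy: Callable[[str], str]) -> None:
        self.backends = backends
        self.policy = policy               # maps content ID -> backend name
        self.locations: Dict[str, str] = {}

    def archive(self, content_id: str, data: bytes) -> None:
        target = self.policy(content_id)
        self.backends[target].put(content_id, data)
        self.locations[content_id] = target

    def migrate(self, content_id: str, new_target: str) -> None:
        """Move content between backends without the application noticing."""
        data = self.backends[self.locations[content_id]].get(content_id)
        self.backends[new_target].put(content_id, data)
        self.locations[content_id] = new_target
```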
Of course, media files vary greatly in size and complexity, growing as quality and resolution improve. GB Labs points to an often-ignored issue: each frame in an image-sequence movie is an individual file, so an archive container file must be created for every frame (at up to 30 frames per second).
“This has the potential to create millions of archive container files and can turn a theoretical six-hour complete tape write into 30 hours or more,” asserts Chief Business Officer (Asia) and Co-Founder Ben Pearce. “While backup is two or more copies of your data, archive is a single asset moved from online or nearline storage into a more energy-efficient and cost-effective resting place. However, since it may be required again, the time it takes to get the asset back, and the provision of suitable storage for working with it again (archive storage is not suitable for editing in place), must be considered.”
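Pearce's arithmetic is easy to sanity-check. Assuming, purely for illustration, that a tape holds ten hours of 30fps footage and each archive container takes about 80ms to create (our assumption, not a GB Labs figure), the per-frame overhead alone adds roughly 24 hours:

```python
# Why per-frame archive containers slow a tape write.
# All numbers here are illustrative assumptions, not GB Labs figures.
hours_on_tape = 10                      # assume the tape holds 10 hours of footage
fps = 30
frames = hours_on_tape * 3600 * fps     # 1,080,000 individual files
per_file_overhead_s = 0.08              # assumed cost of creating each container
extra_hours = frames * per_file_overhead_s / 3600
print(f"{frames:,} files add {extra_hours:.0f} hours")  # -> 1,080,000 files add 24 hours
# A 6-hour streaming write plus ~24 hours of overhead is close to the 30 hours quoted.
```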
In terms of cloud storage, the cost of egressing files back onto suitable storage must also be considered. GB Labs has an LTO archiving platform, ideal for archiving large amounts of data, that uses its ‘Hyperwrite’ technology to write a full tape’s capacity of millions of files at close to the native speed of the tape. Its new ‘Unify Hub’ solution links cloud and on-premise storage and uses intelligent caching to massively reduce repeat file egress charges.
Facilities using LTO tape to store project files run into several challenges “which often result in workflow compromises,” argues Tim Klein, President & CEO, ATTO Technology. “Aside from running daily backups of project files and assets, many studios transfer files and assets from outside the studio via LTO tape. It’s not uncommon for the files and assets for a single project to span multiple tapes and that number increases as the project develops.”
Tape drives housed in server rooms create challenges: post-production artists need to make frequent trips into the server room, interrupting their creative process. Constant traffic in and out of the server room also raises security concerns and disrupts its regulated environment.
Klein explains that a simple redesign of the storage infrastructure using an ATTO XstreamCORE intelligent bridge eliminates these workflow and security problems while retaining network availability. XstreamCORE, he says, enables LTO tape drives to be relocated out of the server room and into the studios near the workstations, using the existing Ethernet network, while the drives remain available to the rest of the facility.
XenData is mining a similar theme. The pandemic has put even greater emphasis on supporting distributed working, which includes making archived media files available across multiple facilities and to remote workers. XenData notes that organizations with multi-petabyte quantities of media files have historically benefited greatly from the much lower cost of on-prem LTO tape archives, but the cloud makes it much easier to share files. Consequently, hybrid approaches that combine on-prem LTO and cloud are becoming attractive.
“XenData Multi-Site Sync is a cloud service that provides highly scalable file sharing across multiple sites and remote users, ensuring that everyone uses the same media file-folder structure,” explains Philip Storey, XenData CEO. “It boosts the productivity of distributed teams by enabling them to seamlessly synchronize files across all locations.”
The latest version of XenData Multi-Site Sync was released in April and includes optimizations for conventional Adobe Premiere Pro Shared Projects. Storey says this means that editors do not have to switch to Team Projects when working remotely.
“The XenData service instantly propagates project and project lock files to all users, no matter whether they are connected to the network or are working remotely.”
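XenData has not detailed its mechanism, but the general idea of prioritising lock-file propagation can be sketched as a simple watch-and-copy loop (the paths and cloud-backed mount are hypothetical; a real service would push changes instantly rather than poll):

```python
import shutil
import time
from pathlib import Path

# Hypothetical paths: Premiere Pro marks open projects with .prlock files;
# copying those ahead of everything else stops two sites opening the same project.
LOCAL_PROJECTS = Path("/media/projects")
SYNC_TARGET = Path("/mnt/cloud-gateway/projects")   # assumed cloud-backed mount

def propagate_locks() -> None:
    """Copy new or updated lock files before any other sync traffic."""
    for lock in LOCAL_PROJECTS.rglob("*.prlock"):
        dest = SYNC_TARGET / lock.relative_to(LOCAL_PROJECTS)
        if not dest.exists() or dest.stat().st_mtime < lock.stat().st_mtime:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(lock, dest)

while True:
    propagate_locks()
    time.sleep(1)   # a real service pushes changes instantly rather than polling
```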
All organisations face the challenge of protecting data at scale, but for M&E organisations the data they are protecting is their “crown jewel”, in the words of Maziar Tamadon, Product & Solution Marketing, Scality.
“Storing and managing digital media assets, whether during the production process or afterwards, is critical to smooth workflow and maximising asset revenue,” he says. “Security and reliability are therefore of paramount importance, and must be balanced with availability and performance to ensure that data is always and rapidly available when, where and in the format in which it’s needed.”
Software-defined Scality RING scalable storage enables enterprises and cloud service providers to run petabyte-scale, data-rich services such as web applications, video on demand, active archives, compliance archives, and private storage clouds. Acting as a single, distributed system, the RING scales linearly across thousands of servers, multiple sites, and an unlimited number of objects.
Scality says data is protected with policy-based replication, erasure coding and geo-distribution, “achieving up to 14x9s of durability and 100% availability”, adding that RING provides high performance across a variety of workloads at up to 90% lower TCO than legacy storage.
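To see why erasure coding is attractive alongside replication, compare the raw storage overhead of the two schemes. The parameters below are illustrative, not Scality RING defaults:

```python
# Raw storage overhead: plain replication vs k+m erasure coding.
# Parameters are illustrative, not Scality RING defaults.
def overhead_replication(copies: int) -> float:
    return float(copies)            # bytes stored per logical byte

def overhead_erasure(k: int, m: int) -> float:
    return (k + m) / k              # k data fragments + m parity fragments

print(overhead_replication(3))      # 3.0x raw storage, survives losing 2 copies
print(overhead_erasure(9, 3))       # ~1.33x raw storage, survives losing any 3 fragments
```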
Trust is also the theme at Rohde & Schwarz. “We now accept the storage, transfer and security of digital content, but for some the same fears remain: ‘where’s my content, is it backed up, is it safe?’” observes Marketing Director Ciaran Doran.
The firm’s SpycerNode solutions come into play here. Doran explains that a recent project demanded a scalable networked system with ultra-high-speed access or ultra-capacity modes connected to remote backup. Users don’t notice whether their content is located in London or New York, Doran says.
Another project required a “bare metal, single-setup with small/medium capacity with the ability to decide whether speed or capacity is the priority, together with easy add-on expansion of flash or disk”. This sort of solution with up to 22GB/s access speed is often critical in high-end post.
Doran also points to networked storage deployed across two sites 10 miles apart. More than 300 clients were registered to a graphics network “with consistently high user demand for engaging graphics to be available to all users at all times while using several software creation tools.” The storage network is kept in perfect synchronisation with HQ via a third-party ISP 100Gig connection under an SLA guaranteeing <1ms latency between sites.
“A top-class storage solution should act like a top-class restaurant waiter,” he says. “You shouldn’t even notice it. The content should be available and appear exactly when needed without having to think about it. However, the waiter serving your meal is unlikely to deliver at 22GB/s or with a latency to the kitchen of less than 1ms!”
Quantum’s Marketing Director Skip Levens is seeing a huge surge in demand for content creation. Instead of developing a television episode a week, or working on a single feature at a time, content creators need to deliver entire seasons and juggle multiple ongoing projects in different stages of production simultaneously.
“If teams want to keep up with the pace of production but also thrive, they need the agility to adapt their production and content workflows,” he says. “The sheer scale and complexity of these demands has required many customers to rethink how their production workflows ingest, create, and in particular, archive and retrieve content to optimise performance, capacity, and economics.
“Many customers find they need a mix of on-premise shared storage with cloud or tape, with a safety copy on tape or cold cloud storage. They rely on their content management application and their file sharing and collaboration platform to work seamlessly with that storage to ensure content is exactly where it’s needed.”
Quantum’s solution is the StorNext 7 shared file system platform. Levens says StorNext lets customers fine-tune their mix of performance and capacity, from high-performance shared storage collaboration to petabytes of object storage or cloud.