Tuesday 31 October 2017

Winter is Coming: Grab a Production Deal


Content Marketing for VMI

http://vmi.tv/news/article/237

It’s that time of year when the nights are drawing in, the Halloween lanterns are primed and Christmas parties are being planned. It’s also traditionally the season when the TV and film industry is at its least busy as productions wind down for the winter months.
But if you think about it, that doesn’t make financial sense. With the majority of productions packed into the summer months, every line in the budget will be stretched as projects compete for time, equipment, crew and space. In the autumn and winter, however, smart-thinking production managers may be in for a pleasant surprise.
We’ve come to expect better rates if we book during off-peak periods in pretty much every other industry – transport, holidays, hotels. Why should the same not apply to filming?

Camera rental companies are bound to have more inventory on their shelves. The same goes for lighting and hire of all the standard auxiliary shooting kit. Will DPs, camera operators, DITs and sound mixers feel able to command as high a fee as the rest of the year? Studio space, usually at such a premium, will also be impacted positively by simple supply and demand. All of which could add up to you getting much more for your budget.
Not all productions need 15 hours of daylight, and the smartest producers and line managers will always examine every angle to make their budget go further.
'Choosing to shoot and hire equipment in quieter months of the year has been one of the wisest moves we've made - VMI have not only given us a wonderful service, but they have been able to provide us with the very best resources and equipment for our latest movies Fox Trap and 12 Deaths of Christmas - which we shot January last year and January this year! It's a great time for any filmmaker to be shooting and has truly benefitted our projects.'
Rebecca Fletcher, Proportion Productions
Clearly, if you’re thinking about this in December then it’s already too late. Which is why now is just the right time to consider greenlighting a project with principal photography through the winter months.
Don’t wait until the New Year with everyone else to scrabble for a bargain, when there are great tailored deals to be had today.
Image courtesy of Proportion Productions, shot on location during the production of "Fox Trap"  

Discovery and Eurosport Unveil Digital First Winter Olympics

Streaming Media

European rightsholder marks first Olympics coverage with major social media, mobile and interactive overhaul.
http://www.streamingmediaglobal.com/Articles/News/Featured-News/Discovery-and-Eurosport-Unveil-Digital-First-Winter-Olympics-121439.aspx

With just 100 days before South Korea opens the Winter Olympics, European rightsholder Discovery promises a record-breaking online viewing experience. It will make every minute of coverage available online for viewers across Europe via its sports platform Eurosport. This will make it the first fully digital Olympics for Europe, Discovery claims.
Its live and on-demand coverage will deliver to 48 markets with all the action collated on Eurosport Player available online on mobile, tablets, and connected TVs. More than 4,000 hours of coverage and 100 events will be available, including 900 hours of live action, more than ever before across the continent.
Editorially, Discovery are offering fans the chance to personalise their own Olympic viewing. Beyond choice of stream, few details are offered at this stage but we expect this to mean users can curate their own Olympic programme by selecting to follow countries, athletes, events, and other criteria.
Social media is a significant new approach to previous Olympics coverage. Discovery’s partnership with Snap Inc will see UGC and behind-the-scenes content from PyeongChang published to Snapchat users in Europe.
Discovery and Snapchat already work together in the U.S., where Discovery creates mobile shows for Snapchat’s Discover platform. During Discovery’s Shark Week, 17 million viewers watched Shark Week content on Snapchat.
Snapchat is used by 57 million people daily in Europe.
A dedicated mobile digital studio on the ground in PyeongChang will work with digital influencers, embedded into editorial teams, to create bespoke content for social media focused on engaging millennials.
Jean-Briac Perrette, Discovery Networks International President and CEO, said in a statement: "PyeongChang 2018 marks the first Olympic Games of our ground-breaking long-term partnership with the International Olympic Committee. We want to redefine the Olympic Games experience for the viewer with immersive storytelling, unrivalled expert talent and an all-screen strategy reaching new and younger audiences. We know that viewers want more: more access to their local heroes, more expertise and more ways to watch the Olympic Games, and this is exactly what we are bringing to fans across Europe."
Augmented reality technology will be used to share a greater understanding of winter sport events. For example, Sport Explainers will use AR and data to create films "that delve deeper into the technical explanations of winter sports, showing how the world’s best athletes win gold," the company said.
Discovery paid €1.3 billion ($1.45 billion) for pan-European rights to four Olympic Games from 2018 to 2024. Under the terms of the deal, broadcasters in some territories, including the UK and France, also retain rights to free-to-air coverage of those tournaments.
A year ago Discovery inked a deal with digital streaming specialist BAMTech to launch BAMTech Europe, a new digital technology provider, intended to re-invigorate Eurosport’s digital streaming offer as well as provide expertise to third parties.
Since then, ESPN parent Disney has taken full control of BAMTech. It is not clear how this has impacted the deal with Discovery. In Europe, Eurosport and ESPN would not necessarily be seen as competitive, but other sports broadcasters may not want to share a tech partner with a potential rival.
Ahead of the BAMTech announcement, Discovery poached Ralph Rivera from the BBC to lead Eurosport’s digital transformation. At the BBC, Rivera was responsible for all of the corporation’s digital media services, including delivering the first truly digital Olympic Games for London 2012.

Monday 30 October 2017

Sales of 4K Playback Devices Belie the Lack of Content

Streaming Media

4K was always a format pushed onto the market by consumer electronics vendors, and the relative lack of consumer interest in the higher resolution along with distribution bottlenecks are restricting adoption.

http://www.streamingmediaglobal.com/Articles/Editorial/Featured-Articles/Sales-of-4K-Playback-Devices-Belie-the-Lack-of-Content-121417.aspx
The volume of Ultra HD source content and the number of end displays capable of showing the higher resolution are at an all-time high, yet the amount of 4K content that makes its way to an audience—either via SVOD, Blu-ray Disc, pay TV, or theatrical release—is remarkably small.
Ampere Analysis calls this the content gap, and Futuresource Consulting states that the gap is widening.
"There are the usual claims of 4K availability via pay TV operators and OTT SVOD operators, but realistically these are marketing claims and produce a largely sub-par experience both in terms of the resolution actually received (owing again to file sizes) and volume of content," says Richard Cooper, research director at Ampere. "The whole in-home 4K experience is very dependent on having a fully compatible setup from end to end."
Anecdotal evidence reported by Ampere suggests that achieving the full 4K environment requires no small amount of expertise on the part of the consumer, and even those with all the kit may not actually be experiencing 4K if one aspect or component is set up incorrectly.
The number of UHD devices in consumer homes continues to rise as prices fall. Some 35% of TVs sold globally this year are expected to be 4K UHD, a total of 79 million, bringing total penetration to 8%, according to Futuresource data. This is expected to reach a global average of 21% by 2021.
The UHD streaming market is on the rise, with worldwide shipments of 19.5 million expected through 2017 and accounting for 36% of all media streamers sold.
"There remains a widening content gap for 4K UHD content," says Futuresource analyst Tristan Veale. "While there is a significant quantity being shot, produced, and stored in 4K, a small proportion of that is reaching consumers despite the strong hardware sales."
Apple has launched a 4K Apple TV as well as an iTunes store with UHD titles. Titles are bought without selecting a resolution, so users can stream 4K content if they have the correct AV setup and fast enough broadband. Aggressive pricing of $20 per title means UHD movies on iTunes are $10 less than the physical equivalent.
"Other major providers are removing or simplifying their pricing options," says Veale, citing moves by Google and Amazon to reduce the cost of UHD titles. "18 million homes worldwide had a UHD SVOD subscription and a UHD TV on which to watch it in 2016. This will almost double to 33 million by the end of 2017 as not only device penetration increases but as services add UHD content making it either free or on a higher priced tier, which an increasing number of people are subscribing to."
The principal issue for 4K and higher resolutions lies not in the number of screens but the issues of in-home capabilities and content distribution.
"The processing of high-resolution images through post-production lowers the resolution largely due to the file sizes and processing times involved," says Cooper. "The result is an absence of 4K resolution in the content pipeline."
That is changing, though very slowly; most 4K movies and TV shows currently available have simply been upscaled at some stage rather than held in native 4K throughout.
Half of all content shot for cinema and high-quality drama in the UK or U.S. in 2016 was recorded with 4K-capable cameras, yet just 22% was subsequently mastered in 4K, states Futuresource.
Broadcasters aren't prepared to invest in the uplift required for content production and distribution of 4K UHD in part because consumers aren't demonstrating too much interest in the format.
UHD adoption is considerably slower than the take-up of HD from SD.
"Broadcast is currently displaying the most significant content gap with a high number 4K UHD device owners not yet having access to content. This is partly because the high capital outlay for the contribution and distribution of 4K UHD means that most operators see little incentive to shorten replacement lifecycles in order to provide 4K UHD STBs to consumers," says Veale.
Additionally, historical content originally shot in 4K is often not held at that resolution long term; once post-production is done, the original 4K footage is typically deleted due to the higher storage commitments.
"There is a 4K content gap in catalogue titles from the beginning of the digital revolution even through to the present day in many cases," says Cooper.
Broadcast 4K content remains relatively limited. By the end of 2017, Futuresource estimates there will be just over 100 UHD channels in operation across the globe, with Asia Pacific accounting for nearly 40% of those.
Where 4K content does exist, the only effective commercial way of getting it into homes is through physical media, says Ampere. Even here, the analyst finds 4K Blu-ray Disc is niche even within the Blu-ray market and likely to remain so for the foreseeable future.
"Sales of Blu-ray players, again anecdotally, have been low even among households with 4K TVs," says Cooper. The number of titles (upscaled and native) available on the format hit the 200 mark this year; almost exclusively studio content supplemented by the extreme sports and panorama footage on manufacturer-supplied disc. "This rate of release places BD 4K way behind BD (about a third of the number) at a similar period after the format’s launch," he says.
Dynamic range is considered by many to matter more to the perceptual quality of an image than resolution. Even here, rollout is not fast. Just 7% of production companies are being asked to deliver in HDR despite HD HDR providing an increased quality of picture with just a small increase in bandwidth requirement, says Futuresource’s Veale. "However, HDR is a more difficult consumer message to convey, and therefore monetise, than 4K resolution."
It may require the impetus of an all-UHD/HDR FIFA World Cup, produced and delivered live from Russia next summer, to kick-start enthusiasm for the format and for the necessary upgrades to internet connections into the home.

Thursday 26 October 2017

Pro AV trumps tariffs

AV Magazine

Not even withdrawal behind borders threatens to stop the juggernaut that is the US’s massive $53 billion pro AV market.
https://www.avinteractive.com/features/market-sectors/pro-av-trumps-tariffs-25-10-2017/
There’s no way around it. The United States’ AV market is the world’s biggest by some margin. Relative to China – the world’s second biggest AV market – the US generates roughly twice as much demand. In 2016, the nation’s AV industry was worth nearly $53 billion. Through 2022, the industry will grow by four per cent year-on-year, according to AVIXA.
Here are more superlatives: the US accounts for over 80 per cent of the overall pro AV spend in the Americas region (encompassing Canada, Brazil and Mexico), which is tipped to reach $83 billion and retain its number one status by 2020.
That’s notwithstanding any road bumps of contraction as a result of erratic White House policies, or a more sustained withdrawal of trade behind its borders. “The United States’ relative importance to the Americas market is not anticipated to erode,” says IHS Markit analyst Merrick Kingston.
Jim Vasgaard, who is described as spectacular projects manager for Daktronics, says pro AV there is growing “at a very aggressive rate in multiple different niches and applications”.

“Opportunities are continually presenting themselves in retail, digital signage, billboard, live event and commercial markets,” he says.
The market “is vibrant, dynamic, and growing” reports Jeff Singer, executive director, product marketing for Crestron. “All indicators point to increased market growth and adoption. Video is rapidly becoming an integral part of business, education, government, and across all vertical markets. It’s no longer a luxury.”
Jay Rohe, vice-president of US sales at Milestone, says its integrators “can’t find enough people to hire because they have so much work,” and Nick Belcore, executive vice-president, Peerless-AV, finds the market very robust, adding: “we expect dynamic expansion in the coming years.”
Tariff threat
Foreign manufacturers, however, might be more alarmed by President Trump’s threatened protectionism. “Any EU manufacturer could be affected by import tariffs if the EU were targeted with these policies,” notes Josh Radin, US director of sales at Italy-based K-array. “At this time, very few of the AV products going into most installations are built in the US, so protectionism will increase the cost of installations and/or profit margins for suppliers and integrators across the entire industry.”
When it comes to signage, Belcore says that typically, LCD and LFD panels have been driven by components that have generally been imported. However, recent shifts in manufacturing of these within North America “should allow the US to retain a competitive edge should protectionist regulations come in.”
Howard Newstate, vice-president of Experience Innovation at Holovis, predicts pricing pressure due to the tariffs, but most respondents quizzed by AV Magazine on this topic preferred not to speculate.
Tim Albright, founder of AVNation, has no such qualms. “The economic situation is in a slow growth mode,” he suggests. “Most integrators we talk with are relatively busy and have been experiencing sustained growth for the last three years. That said, there are several indications that the economy will take a dip in the next five years. As much as they are growing they are not in a hiring frenzy. They have added staff but not to the level that they might have in the past.”
He believes import tariffs will have a negligible effect given that most US-based integrators use primarily US-made products. “With control and video distribution, the main players are Crestron, Extron, and AMX – companies which do use parts from overseas, but their main production plants are in the US.”
Digital signage growth
A significant area of growth is digital signage, particularly incorporating interaction. “Screens are bigger, thinner and brighter. Outdoor displays are more versatile and robust, as are the mounts and media players,” notes Belcore. “The ability for content to learn about the target audience engaging with it is growing at an exponential rate.”
Daktronics says its ADFLOW business segment is trending in colleges, universities, grocery stores, and quick-serve restaurants. “The idea of connected devices is driving a lot of experimentation and creative solutions,” says Vasgaard. “Our customers are looking to capture data to target their customers with intelligent solutions. Live event and retail locations want to be able to track and understand more about their customers’ habits in multiple environments, looking to see any noticeable patterns based on metrics gathered by these technologies.”
He points to “an ever-quickening race” to higher and higher resolution in LED displays leading to Near Pixel Pitch (NPP) technology. “With viewing distances as close as three feet from the displays, these are being used in applications where viewers can walk right up to the displays,” he says. “Of course, 4K is continually sought out and the industry is even looking into 8K or even 12K solutions. From the boardroom to the university lunchroom customers are looking for content at higher resolutions and larger sizes. They are looking to social media integration as the viewing experience is going from passive to active and engaging.”
Albright attributes a rise in demand to “bad experiences with IT companies doing a poor job and giving integrators an opportunity to shine in these instances.”
2028 Olympics ahead
Although a decade out, this sector is likely to be galvanised by LA’s 2028 hosting of the Summer Olympics. The country is also in the running – in a joint bid with Canada and Mexico – to hold the 2026 FIFA World Cup, which will be the first tournament with an expanded line-up of 48 teams.
Milestone also signposts growth. “Customer demand is driving it,” says Rohe. “The ability to access information digitally in the marketplace, airports, or anywhere… it’s just where the consumer expectation has gone.
“There’s also a lot of ‘one-to-one stuff’ in the classroom with notebooks, Chromebooks and other new interactive technologies,” he explains. “Unified communication is another big area. Conference room technology and the ability to connect through Skype is definitely changing the way we communicate with each other.”
This is a trend noted by Polycom. “Smaller organisations especially want a videoconferencing solution without having to invest in hardware,” informs Tim Stone, vice-president of marketing EMEA. Polycom partners with cloud-based video communications provider Zoom in North America. “This partnership transforms any huddle room or conference space into a productive collaboration environment.”
The need for flexible working is also driving the growth of AV solutions in North America, Stone observes. “With more and more businesses moving into co-working spaces or encouraging a more flexible working approach they need to have the technology in place to ensure this is possible. Huddle rooms in co-working spaces are huge in the US and they need to be equipped with collaborative technology to work effectively.”
The firm can name NATO (the North Atlantic Treaty Organisation) and credit management firm TransUnion as users of its solutions.
Coast to coast
Rohe characterises the overall user base as wanting the cutting edge of technology. “I believe they take mitigated risks, but they are conservative in their approach to make sure that their customers get a quality product that exceeds their expectations.”
Albright, though, cites Fortune 500 companies as “pretty conservative”. “Education wants cutting edge,” he says. “This is an over generalisation, but an accurate one.”
There’s a danger in treating the 50 states as a homogenous market, although the differences aren’t that pronounced.
Major metropolitan areas will naturally have a higher concentration of AV. “The age of the city, geographic size and population will decide if AV is more spread out or concentrated,” Belcore observes. Chicago differs greatly from New York in terms of digital signage, for example.
“The AV market is not differentiated by geographic region,” reckons Singer. “Regions have different cultures, but those differences may influence messaging and style, but not necessarily type of technology or solutions. Different cities or states may have different economic drivers. Each local economy may rely more heavily on a different industry, for example, healthcare, energy (oil and gas), pharmaceuticals, technology, manufacturing, or even tourism. Some have a high concentration of universities while others may not.”
In the market for more than 40 years, Crestron has an extensive internal sales force, organised by regional territory and then by dealer size and vertical market. It subsequently sells through a vast dealer network. According to Singer, “the market is highly complex, rapidly changing, and very competitive. The main challenges are differentiating from the competition and reaching such a large and highly segmented market.
“Obviously, New York, Las Vegas and Los Angeles account for more of the high-end hospitality/dining installations, but otherwise the AV is pretty consistent across the US,” says Radin.
What’s odd, according to Albright, are cities like St. Louis which have “an inordinate amount of AV companies. I would not characterise the States as homogenous by any stretch.”
Orlando’s concentrated AV
Newstate points out that Orlando has the highest concentration of media-based attractions “making that a very desirable location for manufacturers looking to flourish in that area.” He predicts a growth in attractions across the rest of the nation.
“Every (themed attraction) operator wants to be first to market with new technologies and drive storytelling through new solutions that are as impactful as possible. Current technology of interest includes 3D LED tiles and methods for intelligent interaction that go beyond the standard notion of ‘point and shoot’.”
Vasgaard reckons the market is “fairly homogenous” with an equal interest in the technology. “Cost is a much larger driving force than the geographic region,” he says, a point reflected by Holovis. “The biggest competition is white noise from companies who base their entire value proposition on price,” states Newstate. “This has a negative impact on true innovators in the AV space whose promise of value is to provide high quality products that deliver superior results. The ability to educate and manage the expectations of the market is muted by the white noise of price leaders who only copy and follow.”

Tuesday 24 October 2017

5G trials hot up – but what does it mean for DTT infrastructure?

Rohde & Schwarz

While the business benefits and standards are still being thrashed out, the 5G mobile network is currently receiving its first public trial in Europe.
https://www.rohde-schwarz.com/uk/solutions/broadcast-media/always-on-blog/posts/10-17-5g-trials_231622.html

Led by Arqiva, the central London trial is testing the performance of Samsung base stations in delivering data over the 28GHz spectrum licensed by Arqiva to business and residential premises. The idea is that 5G fixed wireless access (FWA) such as this will have significant cost and performance advantages over alternate means of delivering ultra-fast broadband using ‘last mile’ fibre-optic cable.  
Arqiva has reported establishing a stable two-way mmWave link with downlink speeds of around 1Gb per second. Just to give an idea of this level of performance, it would allow for the simultaneous streaming of more than 25 UHD 4K TV channels.
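As a rough sanity check on that figure, a sketch of the arithmetic (the per-channel bitrate is an assumption, not a figure published by Arqiva; HEVC-encoded UHD streams are commonly quoted in the 25-40 Mbps range):

```python
# Back-of-envelope check: how many UHD channels fit in a 1 Gbps downlink?
# The per-channel bitrate is an assumed figure for illustration.
LINK_MBPS = 1000         # ~1 Gbps mmWave downlink reported by Arqiva
UHD_CHANNEL_MBPS = 40    # assumed bitrate for one broadcast-quality UHD channel

channels = LINK_MBPS // UHD_CHANNEL_MBPS
print(f"{channels} simultaneous UHD channels")  # 25 at 40 Mbps each
```

At a lower, more aggressively compressed 25 Mbps per channel, the same link carries 40 streams, which is consistent with the article's "more than 25" claim.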
Though only a proof of concept at this stage, mobile operators, fixed broadband providers, broadcasters and media companies are interested.
A final 5G standard is on track to be passed by the ITU in mid-2020. According to analyst Ovum, more than 50 operators will be offering 5G services in 30 countries by the end of 2021.
Powering growth in mobile video streaming
Since Arqiva operates the UK’s broadcast TV network and most of the country’s radio transmitters, together with renting 8,000 sites on which mobile operators EE, Three, O2 and Vodafone install their own signalling equipment, the company is betting that 5G FWA will power huge growth in demand for mobile video streaming – and eventually replace digital terrestrial transmission (DTT) in the home.
A mobile network does not however spell the end of physical infrastructure. On the contrary, among the as yet undetermined costs of 5G rollout are wireless base stations which the 5G Infrastructure Public Private Partnership (a consortium of manufacturers and telco operators led by the EU Commission) reckons will need very dense deployments of links to connect over 7 trillion wireless devices serving over 7 billion people worldwide.
Base stations every 150 metres
Most research is concentrating on Massive MIMO (multiple input, multiple output), a technology that uses antennas located at both the transmitter and receiver and incorporated into wireless standards including 802.11ac (Wi-Fi), HSPA+, WiMAX, and LTE. There are calculations that this means installing a base station every 150 metres.
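To get a feel for what 150-metre spacing implies, here is an illustrative calculation (the coverage area and simple square-grid model are assumptions for the sketch, not figures from the 5G-PPP; real radio planning depends on terrain, demand density and antenna design):

```python
# Illustrative only: base-station count at 150 m spacing on a square grid
# over an assumed coverage area roughly the size of Greater London.
SPACING_M = 150
AREA_KM2 = 1572

sites_per_km2 = (1000 / SPACING_M) ** 2   # ~44 sites per square kilometre
stations = AREA_KM2 * sites_per_km2
print(f"~{stations:,.0f} base stations")
```

Even under these crude assumptions the count runs to tens of thousands of sites for a single metropolitan area, which is why the 'high tower – high power' alternative mentioned below is attractive.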
The most efficient model, it has been suggested, is a 'high tower – high power' approach on which current broadcast networks are built.
Over and above physical antennas, if 5G is used as a terrestrial substitute, it would likely require upgraded TVs and set-top boxes. It is also theorised that a 5G broadcast would compete with other data connections for bandwidth, unless it had a dedicated bandwidth assignment.
With video over mobile forecast by almost everyone to multiply exponentially in the next five years, representing 70-80% of all traffic by 2021, something, somewhere has got to give. Mobile spectrum is finite, after all.
East Asian trials
While the Arqiva test will be eyed with interest it is operators in East Asia that are leading the charge to 5G with major public trials being timed to coincide with sports events. Japan will feature 5G at the 2019 Rugby World Cup and at the Tokyo Olympics the following year.
The technology will take another leap forward at the Beijing Winter Olympics in 2022. Before that, though, in February the spotlight will be on PyeongChang, South Korea, host of the 2018 Winter Games.
5G Media Initiative
In June 2017, the 3rd Generation Partnership Project (3GPP) international standardisation body finalised Release 14, which supports critical prerequisites for broadcast content delivery in large-cell 4G and 5G networks. At its third conference in Munich, the 5G Media Initiative, a special-interest group made up of leading corporations and organisations, announced that with these enhancements, Release 14 offers characteristics that approximate those enjoyed with classical terrestrial broadcast methods.
With Release 14 finalised, implementation in devices, services and networks can be started. The extensions to the 3GPP standard include numerous improvements to the existing evolved Multimedia Broadcast Multicast System (eMBMS). These improvements provide the technical framework for economical program delivery and unrestricted access to TV programs.
The first version of the new 5G standard is expected to be available by 2018, after which it will be continually enhanced to become a universal system for high-bandwidth data applications. Starting in 2020, additional enhancements for broadcast applications are expected as part of 5G; these could be available by 2025 as popular broadcast services for the mass market.

How new technologies and careful planning can ‘fix the live stream’

SVG Europe

http://www.svgeurope.org/blog/headlines/how-new-technologies-and-careful-planning-can-fix-the-live-stream/

High-profile glitches with live-streamed sports indicate that, if the net isn’t broken, it is in need of a fix. The Mayweather-McGregor bout in August was targeted by 239 pirated streams (identified by security specialist Irdeto), but many viewers turned to illegal sites when the official pay-per-view stream failed to keep pace with demand. SVOD sports aggregator DAZN had to manage the ire of NFL fans in Canada when audio and video problems dogged the September launch of its service in the territory. Twitter’s live stream of 10 NFL matches last season was considered pretty successful in terms of quality, but suffered badly from negative reaction to Twitter’s own integrated social feed, which was often many seconds ahead of the video.
“Last Super Bowl, 110 million people tuned in to watch the Patriots’ comeback win,” says Conrad Clemson, SVP and GM of service provider platforms at Cisco. “It was watched live online by 2 million people. Two million out of 110 million—that’s small. Why? Because the Super Bowl experience is simply better on satellite or cable delivered in HD.”
He relayed his experience trying to stream Boston Red Sox games this summer while travelling abroad.  “Sometimes the video wouldn’t start. Other times it would pause. And sometimes the resolution would be so low you couldn’t tell what was happening. Consumers have come to expect a pretty high standard for video experiences.”
Cisco plans to fix this with Cisco Media Blueprint, presented as a set of IP-based infrastructure and software solutions that help media companies automate much of the workflow in the cloud.
Customers on board with this approach include outside broadcast provider Arena TV (which has based its IP-only UHD trucks around a Cisco IP switch); BBC Wales (which is building the corporation’s first all-IP broadcast hub in Cardiff around Cisco fabric); Sky New Zealand; CANAL+, Fox Networks Engineering & Operations, and NBCU.
This makes the problems with live streaming sound simple, but in fact they are anything but…
Live stream complexity
“The internet does not have enough capacity to stream (unicast) to everyone,” says Charlie Kraus, senior product marketing manager at Limelight Networks. “It grows in capacity every year, compression is improving every year – but so does traffic. Most CDNs, including ourselves, work with optimisation strategies to provide the best we can do with that amount of bandwidth available.”
Internet delivery is subject to many constraining factors, including CDN capacity, congestion on the route to the ISP peering point, lack of quality of service to the end device, Wi-Fi connectivity at home and, for mobile networks, occasional poor connectivity due to either poor coverage or too many users on the network.
For live use cases, additional issues exist. Most commonly, resources in the network might not be sufficient. A major event such as the World Cup or Olympics must be planned one year ahead by Content Delivery Networks (CDNs).
“For mass events like a local basketball game in the U.S., the network sometimes collapses as the consumption is unbalanced,” says Thierry Fautier, Harmonic’s VP of video strategy.
It varies between countries, but it’s becoming increasingly difficult to categorise typical bottlenecks and day-to-day limitations when it comes to live streaming.
CDN Akamai still sees the majority of last mile networks (from the exchange into the home) running contention ratios. “It means that, depending on the volume and scale, you will always hit a bottleneck there,” explains James Taylor, director of media services EMEA. “However, thresholds vary by country. The UK, for example, is able to service huge multi-terabit events with no systemic issue, whereas in other EU markets, such as Italy, 1-2 Tbps is a lot for the local infrastructure to handle.”
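The contention arithmetic Taylor alludes to is easy to sketch. A minimal illustration follows; the 50:1 ratio and the 80 Mbps line rate are assumed round numbers for the example, not Akamai figures.

```python
# Illustrative only: how a contended last-mile link caps per-viewer bandwidth.
# The 50:1 contention ratio and 80 Mbps line rate are assumptions.

def worst_case_per_user_mbps(line_rate_mbps: float, contention_ratio: int) -> float:
    """Bandwidth left per subscriber if every contended user pulls data at once."""
    return line_rate_mbps / contention_ratio

# An 80 Mbps line shared at 50:1 leaves 1.6 Mbps each in the worst case --
# well below the ~5 Mbps a single HD sports stream typically needs.
print(worst_case_per_user_mbps(80, 50))  # 1.6
```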
The other prevalent area for bottlenecks lies at the point at which content is ingested, wherever it is hosted. Explains Taylor: “You get efficiencies when a single piece of content or live event is streamed, but there are incremental loads put on the origin in a non-linear fashion, and if that isn’t actively designed and thought through, then issues on scale events will occur, impacting all users as a result.”
On top of that, users now expect a ‘broadcast-like’ experience when watching streamed content. Premium sports content attracts large audiences, which stresses the distribution network. The content is usually detailed and highly dynamic, which requires HD to be delivered at a high frame rate (50p or 60p), necessitating higher bitrates, at least when watching on large screen TVs. Higher bitrates add to the network load.
When it comes to video fidelity, Akamai research finds that viewers watching content at 5Mbps showed 10.4% higher emotional engagement than viewers watching the same content at 1.6Mbps.
Latency challenges
Just as important as overcoming these issues is reducing latency. As Peter Maag, CMO at Haivision puts it, “live sports depends on the immediacy of the programme to assure contextual experience with all information delivery (to second screen, social media, etc.). If the programme stream is delayed by over 5 – 10 seconds end-to-end, the experience falls apart.”
Harmonic reckons the latency of an end-to-end OTT distribution system is typically between 20 and 70 seconds behind that of broadcast. Fautier points to new streaming formats like CMAF which he says will allow the industry to get much closer to the 3 to 5 second delay typically experienced in a broadcast chain (Harmonic demonstrated this at IBC with Akamai.)
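The gap between segmented streaming and broadcast is largely a matter of arithmetic: a player must buffer several complete segments before playback starts. A back-of-envelope sketch, in which the segment durations, buffer depths and encode/CDN times are illustrative assumptions rather than Harmonic’s measured figures:

```python
# Back-of-envelope OTT latency budget (all figures are illustrative assumptions).

def glass_to_glass_s(segment_s, player_buffer_segments, encode_s, cdn_s):
    """Rough end-to-end delay: encoding + CDN propagation + the player's
    buffer, which must hold several complete segments before playback."""
    return encode_s + cdn_s + segment_s * player_buffer_segments

# Classic HLS: 6 s segments, 3-segment buffer, ~10 s encode path, ~2 s CDN.
print(glass_to_glass_s(6, 3, 10, 2))   # 30 -> within the cited 20-70 s range
# CMAF low-latency: 1 s chunks, 2-chunk buffer, faster encode and delivery.
print(glass_to_glass_s(1, 2, 2, 1))    # 5 -> close to broadcast's 3-5 s
```

The point of CMAF’s smaller chunks is visible in the second call: the buffer term shrinks from 18 seconds to 2.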
“The gap is definitely closing between streaming and broadcast capabilities,” says Chris Michaels, communications director at Wowza Media Systems. “Online streams are increasing in quality and stability. But scalability will be a challenge while we have it at the cost of latency, and sports fans won’t accept that for long.”
The lawsuit against Showtime takes this issue to a whole new level. Zack Bartel of Oregon paid $99.99 to watch the Mayweather-McGregor fight only to find his expectations dashed, as this extract from the suit filed against the cable company outlines:
“On August 26, 2017 at 6pm PST, like thousands of other fight fans across the country, plaintiff turned on defendant’s app in anticipation to watch the Mayweather fight. To his extreme disappointment and frustration, plaintiff (and thousands of other consumers) quickly learned that defendant’s system was defective and unable to stream the Mayweather fight in HD as defendant had advertised. Instead of being a ‘witness to history’ as defendant had promised, the only thing plaintiff witnessed was grainy video, error screens, buffer events, and stalls.”
“This demonstrates that it’s not just about providing a good video experience – it’s about viewers missing out on a major bonding event that millions of people had been eagerly awaiting,” says Stuart Newton, VP strategy & business development, Telestream. “Paying a lot of money for content that doesn’t arrive on time and in good shape obviously doesn’t sit well with viewers, but this lawsuit takes dissatisfaction to a whole new level. Content providers will now have to invest more in quality assurance and risk mitigation if they want to continue moving premium content over.”
The challenge of figuring out where the video feed is going wrong is highly complex, and takes a combination of passive monitoring (watching ‘on the wire’) and active testing of the availability of streams in different regions.
“Ideally, we want to provide an early warning system for video problems such as bad picture quality, accessibility errors, buffering and outages,” explains Newton. “This includes testing immediately after the content is produced and packaged, and then periodically at multiple geographic locations after it leaves the content delivery networks (in data centres, on premise or in the cloud). Sampled coverage testing at the edge of access networks – whether broadband cable, Wi-Fi or cellular – must also be part of it.”
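The active-testing side of what Newton describes can be sketched in miniature. The following is a toy availability check run against a fetched HLS media playlist; it is not Telestream’s product, and the playlist text and warning rules are fabricated for illustration.

```python
# Minimal sketch of an active HLS probe check: given a media playlist
# fetched from a CDN edge, flag conditions that often precede buffering
# or outages for viewers. Playlist and thresholds are illustrative.

def check_live_playlist(manifest: str) -> list[str]:
    """Return a list of warning strings for a supposedly-live HLS playlist."""
    warnings = []
    lines = manifest.strip().splitlines()
    if "#EXTM3U" not in lines[0]:
        warnings.append("not a valid M3U8 playlist")
    if any(l.startswith("#EXT-X-ENDLIST") for l in lines):
        warnings.append("ENDLIST on a live stream: feed has stopped")
    segments = [l for l in lines if l and not l.startswith("#")]
    if len(segments) < 3:
        warnings.append("fewer than 3 segments advertised: players may stall")
    return warnings

live = "#EXTM3U\n#EXT-X-TARGETDURATION:6\n#EXTINF:6.0,\nseg1.ts\n#EXTINF:6.0,\nseg2.ts\n#EXT-X-ENDLIST"
print(check_live_playlist(live))
```

A real probe would run checks like these periodically from multiple geographic vantage points, as Newton suggests, and alarm on the results.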
Monitoring, accountability, scalability
“The trigger source could be the existing monitoring system, artificial intelligence (AI) from cloud-based client analytics, or a trigger from equipment or a virtual function in the network itself,” says Newton. “Whatever the trigger mechanism, the ability to be able to diagnose root cause and analyse impact severity in near-real time will be a major factor in not only detecting, but dynamically repairing video delivery problems in future.  This will allow better scaling of the systems, and at the same time provide more intelligence for targeting and reducing latency across the networks.”
Is it then possible, or desirable, to pinpoint the exact point of failure during a particular live stream and therefore for the rights holder to hold that vendor or service partner to account?
“It is possible with the right due diligence,” says Taylor. “Over time it will likely become mandatory for a vendor or service provider to be held to account. The challenge is that it’s not a like-for-like comparison between traditional TV and online. Today, the inability to measure a user’s quality at the point of consumption for TV distribution in real-time means the focus is on availability of a channel or specific programme. OTT also has multiple third parties and technologies involved that are very interdependent, resulting in a much more complex problem to solve.”
Increasingly, Akamai is also finding that quality is subjective, with social platforms and direct feedback from viewers becoming a growing source of insight.
“Being able to scrape negative Tweets and feedback from a customer’s social feed and parse out the insights can enable issues to be flagged as they arise,” says Taylor. “Social media has great potential as an early warning system for poor streaming quality.”
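The idea is simple enough to sketch. A toy version follows; the keyword list and posts are fabricated, and a production system would use a trained sentiment classifier rather than keyword matching.

```python
# Toy sketch of a social early-warning filter: scan viewer feedback for
# streaming-quality complaints so that spikes can be flagged as they
# arise. Keywords and messages are illustrative only.

QUALITY_TERMS = {"buffering", "frozen", "pixelated", "lag", "stalled", "grainy"}

def flag_complaints(posts: list[str]) -> list[str]:
    """Return the posts that mention a streaming-quality term."""
    return [p for p in posts if any(t in p.lower() for t in QUALITY_TERMS)]

posts = [
    "Great match tonight!",
    "Stream keeps BUFFERING during the penalty, unwatchable",
    "Picture is grainy on my TV app",
]
print(flag_complaints(posts))  # the two complaints
```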
Recently, Akamai’s platform peaked at 60Tbps, which is the equivalent of 20 million concurrent 3Mbps streams. During the 2016 Olympics it delivered 3.3bn streaming minutes to over 100m unique users. Set against total TV viewing, with hundreds of millions of concurrent viewers around the world at any moment, this is still quite small, but the internet as a medium for video distribution has shown it can scale.
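The capacity claim is easy to verify:

```python
# Sanity check: concurrent streams supported by an aggregate throughput
# at a given per-stream bitrate.

def concurrent_streams(aggregate_bps: float, per_stream_bps: float) -> int:
    return int(aggregate_bps // per_stream_bps)

# 60 Tbps aggregate at 3 Mbps per stream:
print(concurrent_streams(60e12, 3e6))  # 20000000
```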
Newton stresses: “If the industry can work together to enable more transparency and interaction across the video delivery chain, we will be able to avoid, or at least rapidly mitigate, premium event problems for future viewers.”

5G trials show just how powerful the technology will be

RedShark News

While the business benefits and standards are still being thrashed out, technical advances to the fifth-generation mobile network are coming thick and fast. The successor to today’s 4G mobile network promises to transform wireless communications by transmitting data at least 10 times faster and with next to zero delay.

No longer theory, 5G has moved from lab to practical test with German telco Deutsche Telekom (DT) leading the way by announcing Europe’s first public 5G connection, this week, in Berlin.

Not so fast, Germany. Arqiva had already kicked off what it says is Europe’s first public trial of 5G in August. The central London trial, which runs for another few weeks, is testing the performance of Samsung base stations at delivering data over spectrum licensed by Arqiva to select business and residential premises.
The idea is that 5G fixed wireless access such as this will have significant cost and performance advantages over alternate means of delivering ultra-fast broadband using ‘last mile’ fibre-optic cable of the type being rolled out by BT and Virgin.
It matters little who was first, since all the world’s leading mobile operators are testing the tech. What matters is that the results are in — and they’re impressive.
Arqiva has reported establishing a stable two-way link with downlink speeds of around 1 Gigabit per second (Gbps). To give an idea of this level of performance, it would allow for the simultaneous streaming of more than 25 UHD 4K TV channels.
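That channel count checks out if you assume roughly 40 Mbps per HEVC-encoded UHD channel, which is an illustrative figure; actual bitrates vary with codec, frame rate and content.

```python
# Checking the "25 UHD channels" claim: channels carried by a link at an
# assumed ~40 Mbps per UHD channel (illustrative figure, not Arqiva's).

def channels(link_mbps: float, per_channel_mbps: float) -> int:
    return int(link_mbps // per_channel_mbps)

print(channels(1000, 40))  # 25
```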
DT is reporting download speeds of 2 Gbps at a latency of just 3 milliseconds (ms). That compares with an average latency of 50-60 ms for some of the best current souped-up 4G networks.
Though the technology is still only at proof-of-concept stage, mobile operators, broadband providers, broadcasters and media companies are interested. According to Arqiva chief executive Simon Beresford-Wylie, “This trial will be particularly interesting for this audience as it looks to a future of ubiquitous UHD, and the file sizes that go with it.”
Since Arqiva (currently looking for a multi-billion pound sale) operates the UK’s broadcast TV network and most of the country’s radio transmitters — together with renting 8,000 sites on which mobile operators EE, Three, O2 and Vodafone install their own signalling equipment — the company is betting that 5G will power huge growth in demand for mobile video streaming and eventually replace digital terrestrial transmission (DTT) in the home.
EE is backing this too. The mobile arm of UHD sports broadcaster BT is looking for media and entertainment applications that would suit the power of 5G. The ultra-low latency would make live Virtual Reality streams and multi-angle viewer-selected switching of live sports practical.
DT’s demonstrations included a live transmission of UHD video and an augmented reality (AR) application around a car track (actually a slot car track — think Scalextric — but nonetheless the intention is there).
Meanwhile, in the South Korean city of PyeongChang, preparations are gathering pace for AR and multi-switched live streams over 5G during next February’s Winter Olympic Games. A limited number of spectators will be handed special 5G-enabled mobile devices to view the content while at the event.
Future public tests, again built around sports and in Asia, include the 2019 Rugby World Cup in Japan, the Tokyo Olympics the following year and at the Beijing Winter Olympics in 2022 — by which time the International Telecommunications Union (ITU) will have ratified the first 5G standard.
Long before then, perhaps as early as 2019 in the U.S, carriers like Verizon may have launched the first commercial 5G network.
Verizon has just partnered with chip-maker Qualcomm to collaborate on the development and over-the-air trials of 5G New Radio millimetre wave technology. The trials will be based on specifications being developed by 3GPP, the body drafting the global 5G standard, and will use Qualcomm’s Snapdragon X50 5G modem chipset.
This is the piece of silicon small enough for average smartphones onto which Qualcomm has managed to cram all the necessary processing power for 5G — a feat thought unlikely within such a short timeframe only a year ago.
Qualcomm says it managed to reach 1.24 Gbps using the chip, beating rivals Intel and Huawei to the punch. It is also touting a smartphone reference design, for phone manufacturers to play with, with plans for a commercial launch of a phone by 2019.
While a network will be rated 5G if it offers a 1 Gbps download speed, this is at the lower end of expectations. Connections could reach 10 Gbps — some 1,000 times faster than current 4G speeds — in a few years. Some operators have already claimed to have achieved this and more in their labs.

At what cost?

Whatever the case, the benefit for consumers and industry (the fourth industrial revolution is predicated on 5G) will be huge. Yet, at what cost? Therein lies the biggest question. No matter how impressive the speed, how much of a premium will consumers want to pay to receive HD or 4K images of Real Madrid v Tottenham Hotspur on their mobile? Some may, but most will want to watch such major sports events on the biggest screen possible, and we’re a long way from 5G replacing or even augmenting fast fibre broadband or DTT to the home.
Rights holders of premium sports and movies, like BT or Facebook, will be conjuring up compelling new applications for the way we watch and interact with content in order to sell 5G to the customer.
Operators too will need some way of recouping the large sums invested in the R&D and antenna infrastructure that will bring 5G to the masses. A clearer use case is reaching people in rural areas underserved by fixed fast broadband access.
The value appears to be there. An IHS Technology report suggests that 5G could attain a global value of £2.7tn by 2035. Qualcomm itself reckons 5G and related industries will result in a $12tn net gain by the same date. According to a study released by mobile operator O2, 5G could add £7bn annually to the UK economy by 2026. 5G is on the UK government’s agenda too, with regulator Ofcom setting a timetable for 5G services to be launched in Britain by 2020.
However, don’t throw away your 4G phone yet. Upgrades are being variously marketed as 4.5G, 4.9G, LTE Advanced Pro, LTE+ or Gigabit LTE. Nokia’s 4.5G, for example, can boost speeds up to 1Gbps, which is the nominal 5G target.

Friday 20 October 2017

LVDC projects pave the way for standardization

IEC

A number of low voltage direct current (LVDC) trials are preparing the ground for a wider use of the technology, both in developed and developing countries.
https://iecetech.org/issue/2017-07/Tried-and-tested
LVDC is seen increasingly as a green and efficient method of delivering energy, as well as a way of reaching the millions of people without any access to electricity. It’s fully in line with the UN’s Sustainable Development Goal 7, of providing universal access to affordable, reliable and modern energy services by 2030.
In direct contrast to the traditional centralized model of electricity distribution via alternating current (AC), LVDC is a distributed way of transmitting and delivering power. Today electricity is generated mostly in large utility plants and then transported through a network of high voltage overhead lines to substations. It is then converted into lower voltages before being distributed to individual households. With LVDC, power is produced very close to where it is consumed.
Using DC systems makes a lot of sense because most of the electrical loads in today’s homes and buildings – for instance computers, mobile phones and LED lighting – already consume DC power. In addition, renewable energy sources, such as wind and solar, yield DC current. There is no need to convert from DC to AC and back to DC, sometimes several times, as a top-down AC transmission and distribution set-up requires. This makes DC more energy-efficient and less costly to use.
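The efficiency argument can be illustrated with rough numbers. The stage efficiencies below are assumed round figures for the sake of the example, not measured values.

```python
# Illustrative comparison of conversion losses: a solar-fed DC load
# served via an AC system needs DC->AC and AC->DC stages that a native
# LVDC network avoids. Efficiencies below are assumptions.

def end_to_end_efficiency(stage_efficiencies: list[float]) -> float:
    """Multiply per-stage efficiencies to get the end-to-end figure."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return round(eff, 3)

# Solar (DC) -> inverter (95%) -> AC wiring -> device PSU rectifier (90%)
print(end_to_end_efficiency([0.95, 0.90]))   # 0.855
# Solar (DC) -> single DC-DC regulator (96%) in a native LVDC network
print(end_to_end_efficiency([0.96]))         # 0.96
```

Each avoided conversion stage compounds, which is why the savings grow with every extra AC/DC hop in the chain.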

IEC expertise comes in handy

The environmental gains from using a more energy-efficient system supplied from renewable sources make LVDC a viable alternative for use in developed countries as well as in remote and rural locations where there is little or no access to electricity. “The potential benefits of LVDC already have been demonstrated by a number of pilot projects and niche studies in developed nations. For example, a pilot data centre run by ABB in Switzerland running on low direct current power has shown a 15% improvement in energy efficiency and 10% savings in capital costs compared to a typical AC set-up. This is interesting because data centres consume so much power,” comments Dr Abdullah Emhemed from the Institute of Energy and Environment at Strathclyde University in the UK. Dr Emhemed leads the University’s international activities on LVDC systems. He is a full member of the IEC’s new Systems Committee (SyC) on LVDC and LVDC access. He is also a member of the IEC UK National Committee (NC).
According to Emhemed, further standardization work is required on “voltage levels, as well as safety and protection issues” amongst other things. The IEC is leading efforts to promote the benefits of LVDC and to assist in the specification and ratification of these new Standards. SyC LVDC has begun standardization work through a systems-based approach, identifying gaps where International Standards are needed.
Many of these gaps can be filled by adding provisions about DC into existing Standards. The IEC has also published a number of Standards and Technical Specifications (TS) already relevant to LVDC. They include IEC 62031 on the safety specifications for LED modules for general lighting, published under the remit of IEC Technical Committee (TC) 34: Lamps and related equipment, for instance.
IEC TC 82: Solar photovoltaic systems, provides another example. It has published a number of TSs on rural electrification, the IEC TS 62257 series, which make a huge raft of recommendations for renewable energy and hybrid systems.

Trial and error

Japan is one of the countries in which DC trials have mushroomed. More than ten different projects scattered across the country rely on DC power. They include the hybrid AC/DC Fukuoka Smart House inaugurated in 2012, which utilizes energy supplied from a number of different DC sources.
In Europe, one of the most advanced projects is in Finland. LVDC RULES began in October 2015. It is led by the Lappeenranta University of Technology (LUT) and financed by the Finnish Funding Agency for Technology and Innovation (TEKES). The project aims to take the final steps towards the industrial scale application of LVDC in public distribution networks by building on the data gathered from laboratories and research sites and transferring the technology into everyday use in Nordic distribution companies. The data is drawn from trials which started in Finland as early as 2008.
“The LVDC RULES project consortium has put together complete specifications for LVDC equipment optimized for public power distribution, especially in a Nordic environment,” explains Tero Kaipia, one of the researchers from LUT involved in the project. “The development of the equipment is in good progress and the critical tests have been completed. The construction of the pilot installation into the distribution network will start in 2018. Design methods and practical guidelines have been developed to enable the economic utilization of LVDC networks as part of a larger distribution infrastructure,” he adds.
While this project demonstrates a workable LVDC system, a number of key outstanding challenges have been identified by the researchers involved. Chief among them is the lack of appropriate Standards. “Standardization at system and equipment level is an essential prerequisite for the wide-scale rollout of LVDC in Finland,” says Tero Kaipia. “Without standardization there will be incompatible components and it will be difficult to construct systems using components from different manufacturers. And most of all, the network companies will not buy LVDC systems, if the certified components and standard design guidelines are not available.”

Indian summer

In India LVDC is seen as one of the solutions for bringing electricity to the millions of homes which still have no or only intermittent access to power, as is the case in many other developing nations. The Indian government’s Ministry of Power and the Rural Electrification Corporation (REC), a public infrastructure finance company in India’s power sector, are piloting a number of projects.
One of these is the Solar-DC initiative led by the Indian Institute of Technology Madras (IIT-M). As a result, an ecosystem for DC appliances and DC microgrid projects is emerging. As part of this global effort, IIT-M has been working in collaboration with Telangana State Southern Power Distribution Company Ltd and REC to bring uninterrupted power to four hamlets in rural Telangana, which had been going without electricity for six to eight hours a day. The technology in this particular case comprises a 125 W solar panel, a 1 kWh battery, an inverterless controller unit and DC loads operating on a 48V DC internal distribution line, installed in each small hamlet.
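The reported figures allow a rough sanity check. The 60 W load below is an assumption for illustration; the panel, battery and bus voltage are as reported.

```python
# Rough sizing check for the Telangana installation. Battery capacity
# (1 kWh) and bus voltage (48 V) come from the article; the 60 W load
# is an illustrative assumption.

def battery_ah(capacity_wh: float, bus_voltage: float) -> float:
    """Amp-hour rating of the battery on the DC bus."""
    return round(capacity_wh / bus_voltage, 1)

def runtime_hours(capacity_wh: float, load_w: float) -> float:
    """Hours of autonomy at a constant load, ignoring losses."""
    return round(capacity_wh / load_w, 1)

print(battery_ah(1000, 48))      # 20.8 Ah on the 48 V bus
print(runtime_hours(1000, 60))   # 16.7 h for an assumed 60 W of LED/fan load
```

On those assumptions the battery comfortably bridges the six-to-eight-hour daily outages the hamlets previously endured.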
Other similar trials have also been taking place in the Indian states of Bihar, Assam, Rajasthan, Karnataka, Odisha and the city of Chennai. The Bureau of Indian Standards (BIS), which is the IEC's Indian NC, has been drafting documents based on these trials aiming to standardize 48V for microgrids.
“India is in the process of finalizing a 48V standard for electricity access suited to local needs. It is my hope that this new standard will be presented soon to the IEC community, as an input for further discussions to formulate a universally accepted IEC Standard for electricity access,” says Vimal Mahendru, member of the IEC Standardization Management Board (SMB) and Chair of the IEC SyC LVDC.

Is this the solution to the ultimate sound experience in VR?

RedShark News

George Lucas once said that “sound and music is half the entertainment in a movie”, but he may want to up that percentage when applied to VR. As anyone who has popped on a headset may testify, the feeling of disorientation is amplified by sensory deprivation. Sound amplifies emotions and adds realistic depth to otherwise hollow visual experiences. If the sound doesn’t faithfully match or fully immerse you in the picture as you turn your head, your ability to navigate and enjoy the experience suffers.
https://www.redsharknews.com/audio/item/4950-is-this-the-solution-to-the-ultimate-sound-experience-in-vr
This is even more the case with so-called full VR, otherwise known as 6DOF (six degrees of freedom), the ability to walk around within a 360-degree environment. Without convincing audio, the audience can’t feel fully embedded in a virtual reality story.
Start-up G’Audio Lab thinks it has the answer. It has devised a format that it claims provides a superior sense of localisation and sound quality compared to any other current technology.
The founder members of the LA-based outfit contributed to international audio standardisation efforts, including the binaural rendering aspect of MPEG-H 3D Audio. One of them, Dr Oh Henney, holds more than 1000 patents.
While MPEG-H 3D Audio was developed to support channel, object, and/or Ambisonics signals, it is not optimal for VR, they argue. At its inception, of course, VR applications weren’t really on MPEG’s radar. MPEG-H 3D Audio was created as a standard for situations in which there may be many loudspeakers used in the audio presentation. The focus became UHD-TV and then multi-channel configurations like 22.2 surround sound.
As they point out, having audio replayed through headphones or through defined speakers works well if the content is pre-rendered and viewed from a fixed position, as in 3DOF, but far less well for 6DOF, since it can’t accurately reflect the movement of the VR user.
Additionally, they point out that MPEG-H 3D Audio is essentially focused on receiving, delivering or playing a signal but not about recording sound in the first place — and it neglects post production.
G’Audio Lab’s proprietary format, GA5, incorporates channel, ambisonics and object-based audio formats. It supports object tracks to provide pinpointed sound and ambisonics for ambience, while metadata contains the positional information for playing and rendering the respective object, channel, and ambisonics signals.

Spatial audio

Its spatial audio post production solution, Works, can be added to Avid Pro Tools as an AAX plugin and assists in the positioning of object sounds in a virtual environment. It has also built a renderer SDK, called Sol, to support GA5, which can be integrated into any web player, HMD, or standalone app.
Its process combines binaural rendering with a head-related transfer function (HRTF) — which is a response that characterises how an ear receives a sound from a point in space.
“Because spatial audio uses HRTF, sound can be placed anywhere in a 3D space, with elevation and distance also being taken into account,” the firm explains. “Using binaural rendering and HRTF, even if spatial audio is consumed through headphones, it's possible to hear sounds as if they were coming from external sources.
“When content created using Works is played on an HMD, sound objects change according to the users’ interactions,” it states. “What they see is synchronised with what they hear.  When each sound source is delivered to the playback side as an individual object signal, it can truly reflect both the environment and the way the user is interacting within the environment.”
And since the format supports the simultaneous use of three different kinds of audio signals — object (mono), channel and ambisonics — this combination “enables new levels of freedom and intuition to deliver the most realistic sounds possible”.
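The binaural step at the heart of all this can be sketched in a few lines: convolve each mono object signal with a left- and right-ear impulse response (HRIR) derived from the HRTF. This is a generic illustration, not G’Audio’s implementation, and the two-tap HRIRs are toy placeholders standing in for measured data.

```python
# Minimal sketch of binaural rendering (generic, not G'Audio's code):
# convolve a mono object signal with per-ear impulse responses. The
# two-tap HRIRs below are toy placeholders for measured HRTF data.

def convolve(signal: list[float], ir: list[float]) -> list[float]:
    """Direct-form convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

mono = [1.0, 0.5, 0.25]      # a mono sound object
hrir_left = [0.9, 0.1]       # source to the listener's left: louder, earlier
hrir_right = [0.0, 0.6]      # right ear: quieter, delayed by one sample
left = convolve(mono, hrir_left)
right = convolve(mono, hrir_right)
print(left, right)
```

The level and timing differences between the two output channels are what the brain reads as direction; re-deriving the HRIRs as the listener turns their head is what keeps the sound field stable in VR.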
Making a VR user feel present in the virtual world is the key to alternate reality experiences. From an audio perspective, that feeling of real presence only happens when users can hear the action the way they see it. Anything which can bring higher fidelity and sophistication to sound production and reception is a step in the right direction.

Thursday 19 October 2017

Should CGI performance capture be recognised as an acting art?

RedShark News

When Gary Oldman walks away with the best actor Oscar next March, spare a thought for fellow thesp Andy Serkis. Oldman probably deserves the gong for his powerful incarnation of Churchill in Darkest Hour as well as for his back catalogue of crazies (Sid & Nancy, Leon, True Romance), but his former Planet of the Apes co-star can point to an arguably more impressive recent CV — albeit behind the mask of CGI.
https://www.redsharknews.com/post/item/4973-should-cgi-performance-capture-be-recognised-as-an-acting-art
In 2014, when Oldman was paired with Serkis’s Caesar for Dawn of the Planet of the Apes, studio 20th Century Fox mounted a campaign to get the latter Oscar-nominated. It failed, not because of any lack of admiration for the actor’s art but because of confusion as to whether or not Serkis alone deserved to be honoured for the animation that overlaid his original performance.
Performance capture is still a relatively infant creative process, despite rapid advancements since the efforts of Robert Zemeckis in 2004 to animate Tom Hanks for The Polar Express. Many in the industry remain dubious of the blurred line between performance and technology and the artists and animators — usually from New Zealand’s VFX shop Weta — that contribute to the digital character. The matter has not been helped by Serkis’s own shorthand for what he does as “digital make-up”.
The well-received finale to the trilogy, War for the Planet of the Apes, should change things — at least as far as recognition by various critics circles is concerned, and by BAFTA, which tends to be less conservative in these matters than bodies like the Golden Globes or the Academy. As with ‘Dawn’, much of the performance capture in ‘War’ was shot outside the studio at Canadian locations, making the look of the picture more naturalistic and essentially helping the audience to suspend disbelief while watching a talking ape. Ironically perhaps, the animation is so good that Serkis’s performance — and consequently the emotion of the character — comes through more than ever before.
“What Andy is doing is acting and performance capture is recording it,” director Matt Reeves told Indiewire. “In this story, we pushed Caesar to a place where you’re able to empathise with his desire for revenge and then question how you’ve been provoked and implicated. And what these effects represent is a high water mark. It takes tremendous artistry on both sides (actor and animator).”
Speaking to the Independent, Serkis claimed performance capture is, “no different” from any other kind of acting.
“An actor in a performance-capture role receives a script, works on psychology, emotions and motivation, and goes on set to be shot in exactly the same way as any other character,” he said. “That performance is used to cut the movie and it’s that performance that creates emotion, pace and drama. The visual effects render the character, just like putting on makeup, except here it happens after the fact.”

No discrimination

Serkis emphasised that he doesn’t want to deny the “brilliance” of the visual effects team. “But the awarding bodies should not discriminate about this being different,” he says. “If they don’t think Caesar is good, that’s fine, but it’s a different issue.”
The performance capture pipeline features many similarities to those carried out on an everyday film set. However, the main importance still lies with the motion capture system and its ability to capture the highest quality data possible.
Serkis is more likely to win some kind of special achievement award from the Academy sooner than the best acting nod, but a performance captured role is more likely to win awards in future as the technique becomes more familiar.
JJ Abrams’ Star Wars reboot and the forthcoming The Last Jedi feature Serkis’ work as Supreme Leader Snoke. Pretty much every recent superhero film from Marvel and Universal features it. Spielberg is a convert, using it extensively (and dubiously) on The Adventures of Tintin (with Serkis as Captain Haddock), with Mark Rylance on The BFG and for 2018 release Ready Player One. Serkis is in production on a version of Animal Farm at his Ealing-based performance-capture studio, The Imaginarium. James Cameron can be counted on to push possibilities further still with the Avatar sequels which began shooting last month and for which the director has reportedly experimented with performance capture underwater.
Such ‘cyber-thespianism' or 'post-human acting' goes hand in hand with production techniques for visualising the animated characters and CG background in realtime on set — processes Cameron pioneered while making Avatar. The virtual production process was showcased by the stunning CGI work on last year’s VFX Oscar winner, The Jungle Book.
According to The Foundry, maker of VFX tool Nuke which was used to composite The Jungle Book, the most immediate future scenario is “that we manage to create CG characters so realistic we can’t tell which performances are given by a real-life human and which by their digital replica.”
Although, as Serkis himself has observed, what would be the point of trying to replicate humans with human performance capture?
To which one might add — no point at all. Nonetheless, what if one day we could do without human actors altogether by melding visual effects with AI? Might Hollywood create its own believable, fully digital actors?
As The Foundry points out, in large part, this will come down to how well emotion can be realistically digitised. Currently, we’re not quite there yet. Every CGI character you see in a film or a game that gives a truly realistic emotional performance does so because there was a real actor who gave that performance.
And even then, we’re often led into the ‘Uncanny Valley’ — the place where human replicas which appear almost (but not exactly) like real human beings elicit feelings of eeriness and revulsion. Cracking this has been a perennial challenge for CGI artists.
Encouragingly, reckons The Foundry, the last three years have seen huge progress in this field. The reception — particularly by younger viewers — of a digital Peter Cushing in Rogue One: A Star Wars Story indicates VFX technology has reached a stage where we can create a human likeness to a compelling degree of accuracy.
“Truly conquering the Uncanny Valley will mean mastering human emotion to the point where we can create fully digital actors who can give convincing pathos-laden performances, indistinguishable from the real thing,” states The Foundry.
It believes a key to this will be rendering. The VFX industry has made huge strides in rendering surfaces and lighting, which is why digitally created humans are looking more and more realistic: “As rendering improves in the future, it will become even easier to make things ‘look right’, which in turn will make it even more difficult for us to distinguish between digitally created human faces and the real thing.”

Quality control for OTT

Digital Studio

http://www.digitalstudiome.com/article-11278-quality-control-for-ott/
As video viewers continue to shift towards OTT video consumption on smart TVs and mobile devices, they expect broadcast-level quality, and with the advent of 4K/UHD and HDR streaming, expectations now even exceed it.
At the same time, many OTT video providers do not own or operate the infrastructure they rely on to deliver the quality that attracts and retains viewers and secures revenue. This shift has created a growing need for flexible, scalable measurement tools that address the new reality of video delivery from both a business and a technical perspective.
“There’s never been a time like this in the broadcast industry, with so many fundamental changes in technology and workflows coming together at the same time,” says Charlie Dunn, general manager, video product line, Tektronix. “An industry driven by rapid change requires more than technical evolution. It requires what we call ‘revolutioneering’. This means we’re doing more than just accommodating change and new technologies – we’re helping to lead the charge.”
IP introduces new technical and skills challenges. These include jitter, latency and the risk of dropped packets, and network asymmetry that results in different path delays upstream and downstream.
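To make the jitter challenge concrete, the interarrival jitter that IP monitoring tools typically report is the smoothed estimator defined in RFC 3550 for RTP. A minimal sketch (the function name and list-based interface are illustrative, not from any particular product):

```python
def interarrival_jitter(send_times, recv_times):
    """Smoothed interarrival jitter estimate per RFC 3550 (same units as inputs).

    For each consecutive packet pair, d is the change in transit time;
    the running estimate moves 1/16 of the way towards |d|.
    """
    jitter = 0.0
    for i in range(1, len(send_times)):
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        jitter += (abs(d) - jitter) / 16.0
    return jitter

# Perfectly paced packets: transit time never changes, so jitter is zero.
print(interarrival_jitter([0, 1, 2, 3], [0.5, 1.5, 2.5, 3.5]))  # → 0.0
```

Any variation in packet spacing on the receive side pushes the estimate above zero, which is exactly the symptom a video engineer moving to IP has to learn to read.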
“Deploying IP for video production applications is effectively the collision of the two worlds of video and network engineering,” says Mike Waidson, application engineer for Tektronix’s video business division. “Video engineers are comfortable with the use of SDI, coax cable, patch panels, black burst, and tri-level sync for timing and, above all, monitoring signal quality. The challenge for the video engineer is to understand IT technologies and the impact of an IT infrastructure on the video.”
On the other hand, network engineers are familiar and comfortable with IP flows, protocols, network traffic, router configuration, precision time protocol (PTP), and network time protocol (NTP) for timing.
“The biggest difference is that in most data centre applications, lost data can be re-sent. This is not the case with high bitrate video,” says Waidson. “The challenge for the network engineer lies in understanding video technology and its impact on the IT infrastructure.”
Tektronix offers PRISM for broadcast engineers and IT professionals to monitor and analyse IP streams and the associated content in real time. This enables early identification and diagnosis of network or content issues such as intermittent loss of video, audio, or data content.
The latest capabilities in this area include analysis of PTP synchronisation timing, support for SMPTE ST 2022-7 redundancy, IGMP V3 and new API support for system integration into network management systems, and IP stream capture for even deeper analysis.
“An ‘all IP’ infrastructure is the vision for most broadcasters, and is already starting to happen in many facilities,” says Waidson. “The reality is, however, that the transition won’t happen overnight, leading to the need to manage hybrid SDI and IP infrastructures, and thus a need for IP and video engineers to work closely together to ensure seamless operation and quickly track down faults.”
Adding UHD to the mix
Added to this complexity is the increasing amount of 4K UHD content being put through the chain, with requirements to ensure compliance with enhanced colour and dynamic range attributes.
New software for Leader Instruments’ LV5490 waveform monitor is claimed to be the world’s first “direct-digital 4K noise measurement processor”. The LV5490 module measures noise in luminance or RGB component chroma channels.
“Camera noise measurement is a complicated and long-winded operation requiring digital to analogue conversion of signals that could also introduce noise to the signal,” explains Leader’s European regional development manager, Kevin Salvidge. “Now, camera owners can simply connect the SDI output of their camera directly into the LV5490 and measure the noise levels.”
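As a rough sketch of what such a measurement involves (the LV5490’s actual processing is proprietary; this is only the underlying idea), sensor noise can be estimated as the per-pixel standard deviation across several frames of a static scene:

```python
from statistics import pstdev

def temporal_noise(frames):
    """Estimate sensor noise from frames of a static scene.

    frames: list of equally sized 2-D lists of pixel values.
    Returns the per-pixel temporal standard deviation, averaged over the frame.
    """
    # Flatten each frame, then group corresponding pixels across frames.
    flat = [[v for row in f for v in row] for f in frames]
    per_pixel = zip(*flat)
    sds = [pstdev(p) for p in per_pixel]
    return sum(sds) / len(sds)

# A perfectly static, noise-free scene measures zero.
clean = [[[100, 100], [100, 100]]] * 3
print(temporal_noise(clean))  # → 0.0
```

Because the analysis stays in the digital domain, no digital-to-analogue conversion stage can add noise of its own — the point Salvidge makes above.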
The LV5490 4K is the company’s flagship waveform monitor, which can be twinned with the LV7390 4K rasteriser to reap the full benefits of cameras operating in UHD and HDR. The LV7390 can be used to measure up to four 3G-SDI, HD-SDI, or SD-SDI sources simultaneously. An HDR option for the LV5490 also enables this instrument to measure both 4K and HD HDR in ITU-R BT.2100 Hybrid Log Gamma, Dolby PQ, and Sony S-Log3 protocols.
The Ultra XR is Omnitek’s UHD waveform rasteriser, which addresses HDR and wide colour gamut compliance requirements at frame rates up to 60Hz in an SDI data stream. Alongside the high resolution waveform monitor, traditional instruments such as a vectorscope are included. Beyond these standard tools, additional instruments (such as a BT.2020 colour space waveform display, histogram, and CIE gamut charts) are provided to verify that the image meets the UHD BT.2020 colour requirements.
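The gamut check such CIE charts support can be illustrated with a simple point-in-triangle test on CIE 1931 xy chromaticity coordinates; the primaries below are the published Rec.709 and BT.2020 values, while the function itself is only a sketch of the principle:

```python
# CIE 1931 xy chromaticities of the R, G, B primaries.
REC709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

def in_gamut(x, y, primaries):
    """True if chromaticity (x, y) lies inside the triangle of the primaries."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    # Same sign of the cross product against all three edges => inside.
    signs = [cross(primaries[i], primaries[(i + 1) % 3], (x, y)) for i in range(3)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

# D65 white sits inside both gamuts; a highly saturated red fits BT.2020 only.
print(in_gamut(0.3127, 0.3290, REC709))  # → True
print(in_gamut(0.70, 0.29, REC2020))     # → True
print(in_gamut(0.70, 0.29, REC709))      # → False
```

Colours that pass the BT.2020 test but fail the Rec.709 one are exactly the out-of-gamut pixels a compliance chart flags for legacy delivery.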
Blackmagic Design’s SmartScope Duo 4K model features displays for waveform, vectorscope, RGB and YUV, histogram, audio phase, and level scopes. It is possible to monitor video on one LCD while running a scope such as waveform, histogram, or audio on the other screen – a useful combination that enables the video being monitored to be viewed alongside a real-time scope measurement, all on the same device.
Phabrix is also majoring in 4K/UHD and HDR in its most recent tools. Its Qx 12G is offered as a single device for “essential product development, infrastructure compliance testing, and 4K/UHD content monitoring”, using a hybrid of IP and SDI video formats.
A recent official convert to AIMS, Phabrix says the unit will support TR03, AES67, and PTP (ST 2059-2), as defined in the draft SMPTE 2110 specification, in upcoming software releases.
“Qx 12G offers the fastest 12G-SDI physical layer testing, with its real-time Eye technology instantly highlighting any SMPTE compliance issues, including eye under/overshoot,” explains Phabrix’s managing director, Phillip Adams. “Advanced 12G/6G-SDI physical layer tools include Jitter waveform, Jitter insertion and FFT analysis, and pathological test patterns. Built-in automation control allows testing to be performed faster, more reliably, and at lower cost.”
For HDR and wide colour gamut applications, Qx 12G provides new generator patterns plus a CIE 1931 X Y chart with Rec.709 and BT.2020 overlays for measuring chromaticity. There’s also a programmable HDR heat-map to highlight luminance zones of a signal, as well as HDR vectorscope and waveform tools. All these HDR instruments support the Dolby PQ standard. Future software releases will support the HLG and SLOG3 HDR standards.
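The idea behind a luminance heat-map can be sketched as a simple zone classifier over absolute luminance values. The zone boundaries below are illustrative examples only, not the zones used by Phabrix or any other vendor:

```python
import math

# Illustrative luminance zones in nits — example thresholds, not a product's.
ZONES = [
    (0.0, 100.0, "SDR range"),
    (100.0, 1000.0, "HDR highlight"),
    (1000.0, math.inf, "specular"),
]

def classify(nits):
    """Map an absolute luminance value (nits) to a zone label."""
    for lo, hi, label in ZONES:
        if lo <= nits < hi:
            return label

def heat_map(frame_nits):
    """Label every pixel of a frame of luminance values."""
    return [[classify(v) for v in row] for row in frame_nits]

print(classify(50))    # → SDR range
print(classify(4000))  # → specular
```

In a real instrument each zone would be painted as a false colour over the picture, letting an operator see at a glance which parts of the image sit in highlight territory.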
IneoQuest Technologies, part of the Telestream stable since March, has evolved its FoQus Delivery service to help clients migrate to “all-software” system architectures that can be deployed as virtualised solutions in cloud data centres. Akamai, the world’s largest CDN, is one such customer.
“The FoQus Delivery service provides a pay-as-you-go subscription-based business model that is prevalent in the OTT market, supporting both network-less video providers that are offering pure OTT services, as well as video providers that are extending beyond their on-network offerings with OTT,” explains Calvin Harrison, CEO. “What we can do is test, monitor, and fault-detect OTT data streams right the way through a distribution network, from the broadcast centre to the point of consumption on literally any device or platform.”
Other solutions for monitoring the transport stream include DekTec’s StreamXpert, available in both full and “lite” versions for real-time analysis, monitoring, and recording; and Axon Digital’s SMART 90-series. The latter blends the capabilities of the SMART DVB 10-series/50-series and the SMART DVB IP Viewer. Supporting up to 8 MPTS or 40 SPTS, the SMART 90-series also provides black and freeze-frame detection, running continually on all services as a background process.
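Black and freeze-frame detection of the kind described can be sketched as two per-frame checks; the thresholds and function names here are illustrative, not taken from any vendor’s implementation:

```python
def is_black(frame, luma_threshold=16):
    """Flag a frame as black if its mean 8-bit luma is at or below threshold."""
    pixels = [v for row in frame for v in row]
    return sum(pixels) / len(pixels) <= luma_threshold

def is_frozen(prev, curr, diff_threshold=1.0):
    """Flag a freeze if the mean absolute difference between frames is tiny."""
    diffs = [abs(a - b)
             for prev_row, curr_row in zip(prev, curr)
             for a, b in zip(prev_row, curr_row)]
    return sum(diffs) / len(diffs) <= diff_threshold

dark = [[4, 8], [6, 2]]
print(is_black(dark))         # → True
print(is_frozen(dark, dark))  # → True
```

Running cheap checks like these continually on every service is what makes background monitoring of dozens of streams practical.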
Sencore offers the CMA 1820 compressed media analyser that allows engineers, system integrators and network operators to verify standards compliance, identify media interoperability issues, develop products around new codecs, and troubleshoot transmission issues. There’s an OTT option that allows direct examination of manifest files and the corresponding media. Other options for this product include closed caption, subtitle, SCTE35-DPI, and PTS/DTS alignment analysis.
Automated and manual QC
Broadcasters and content owners spend a large portion of their revenue on acquiring content, but this content cannot be monetised until it has successfully made it through the complete production workflow. Making quality control (QC) part of the ingest and production process ensures that only high-quality content is delivered across multiple platforms.
Until recently, this was a tedious task: an operator had to view all the content to detect errors, a costly and time-consuming procedure in which metadata could not even be checked. The explosion in file-based workflows means traditional manual methods are no longer effective. With tens, even hundreds, of deliverables needed for each media asset, human operators using traditional video and audio test equipment struggle to cope without help. Automated QC (AQC) has become a commercial imperative.
“AQC will save a lot of manual effort and actually do a better job in checking all the technical parameters of the different versions,” says Thomas Dove, director of technology, QC products for Telestream. “With the correct workflow, in many cases manual QC need only be done on a master file and output files may not require any manual QC at all.”
“Efficient QC requires consistency,” stresses Ian Valentine, director of business development, video test, at Tektronix. “This becomes especially important as the reliance on automation increases. We have found that measurements and checks made during acquisition and post production ultimately propagate throughout the whole workflow.”
Beyond QC, Interra Systems talks about QA (quality assurance). The latest version of its Baton solution enhances AQC by allowing users to add manual (eyeball) checks. With this hybrid approach, operators can detect certain issues, such as lip sync, which AQC solutions cannot.
“While automation has been transformative for the broadcast community in terms of speeding up workflows, AQC tools do not fully implement an organisation’s overall QC policy,” says Ashish Basu, vice president of global sales and business development. “Relying on a combination of AQC and manual intervention is ultimately the best approach to understanding false negatives and positives, detecting critical issues that aren’t yet detected by automation alone, and taking appropriate corrective measures.”
In the next few years, artificial intelligence will make significant inroads into these areas: initially these tools will be limited in functionality and value, but they will gradually replace much of what must now be done manually.