Wednesday 28 November 2018

AI and machine learning make inroads in broadcast – but ROI will need to be seen in 2019

Video Net
If 2018 was the year when AI/ML (artificial intelligence and machine learning) took off in video production, then 2019 has to be the year the industry starts to see some payback – otherwise it may go the way of other recent ‘flash in the pan’ tech trends like 3D and VR.
That is unlikely given that AI/ML – no matter how you slice the definition – is not a niche product but a global tool for vendors to develop media-specific applications. According to the IABM’s latest bi-annual Buying Trends survey, which tracks technology trends, actual AI/ML adoption in the broadcast and media industry is up from 2% to 13% in just six months from April to September this year.
The survey data shows that it is larger organisations that are much more likely to deploy AI technology, with adoption varying across different segments of the content supply chain. Another 68% are likely or very likely to deploy it in the next 2-3 years.
While broadcasters are undoubtedly experimenting with AI/ML tools to automate and augment current workflows, as one unnamed broadcaster told the IABM, “The challenge is monetising it, or creating real commercial value”. In other words, the use of AI/ML demands a real business case, otherwise costs can quickly escalate to outweigh any potential advantage.
Alongside the cost/benefit equation, there are also other challenges that may prevent media companies from taking full advantage of AI/ML. Namely, it is best suited to working with large amounts of data.
“While subscription-based broadcasters and media companies already have significant viewer data to work with, FTA [free-to-air] broadcasters who are moving into OTT need to build new data to better understand their customers; this is why the uptake is slower than it might be,” says IABM CEO Peter White. “Other challenges relate to data management [i.e. training and updating the data] and data gathering. Also, media companies need to manage different types of data together in a single pool – avoiding data silos – to create real value from AI/ML algorithms.”
IABM expects AI/ML adoption to increase in 2019 as current solutions mature and broadcasters build their databases, enabling them to drive more automation and liberate resources within their organisations. There is definitely a growing interest in the many potential applications of AI/ML, but this is tempered by an overriding need – as with any new technology – to prove its business value.
The trade body, which represents the interests of equipment manufacturers, also charted a slight rise in adoption of IP technology in the production sector this year. An example of this trend is UHD production: IABM data shows that companies that intend to move to UHD are much more likely to invest in IP technology for its format independence.
IABM does observe that most broadcasters have a cautious culture that is holding back transition. “Many broadcasters have been reluctant to abandon SDI and have so far been inclined to adopt a ‘wait and see’ approach,” notes White. “Maybe the floodgates are now beginning to open.”
The finalisation of the SMPTE transport protocol ST 2110 earlier this year has given IP roll-out an uptick. But this is a slow burn. As the IABM underlines, “IP not only impacts technical facilities and workflows, but also the business and cultural environment.”
Culturally, too, the convergence of IP and broadcast is very difficult to put into practice at traditional organisations. The move to IP is as much about transforming culture as it is about changing technology.

Tuesday 27 November 2018

Post production in the cloud

PostPerspective
Talked about for years, using the cloud for the full arsenal of post workflows is now possible, with huge ramifications for the facilities business.
Rendering frames for visual effects requires an extraordinary amount of compute power for which VFX studios have historically assigned whole rooms full of servers to act as their renderfarm. As visual quality has escalated, most vendors have either had to limit the scope of their projects or buy or rent new machines on premises to cope with the extra rendering needed. In recent times this has been upended as cloud networking has enabled VFX shops to relieve internal bottlenecks to scale, and then contract, at will.
The cloud rendering process has become so established that even this once groundbreaking capability has evolved to encompass a whole host of post production workflows from previz to transcoding. In doing so, the conventional business model for post is being uprooted and reimagined.
“Early on, global facility powerhouses first recognized how access to unlimited compute and remote storage could empower the creative process to reach new heights,” explains Chuck Parker, CEO of Sohonet. “Despite spending millions of dollars on hardware, the demands of working on multiple, increasingly complex projects, simultaneously, combined with decreasing timeframes stretched on-premise facilities to their limits.”
Public cloud providers (Amazon Web Services, Google Cloud Platform, Microsoft Azure) changed the game by solving space, time and capacity problems for resource-intensive tasks. “Sohonet Fastlane and Google Compute Engine, for example, enabled MPC to complete The Jungle Book on time and to Oscar-winning standards, thanks to being able to run millions of core hours in the cloud,” notes Parker.
Small to mid-sized companies followed suit. “They lacked the financial resource and the physical space of larger competitors, and initially found themselves priced out of major studio projects,” says Parker. “But by accessing renderfarms in the cloud they can eliminate the cost and logistics of installing and configuring physical machines. Flexible pricing and the option of preemptible instances mean only paying for the compute power used, further minimizing costs and expanding the scope of possible projects.”
Milk VFX did just this for rendering its Oscar-winning sequences on Ex Machina. Without the extra horsepower, the London-based house could not have bid on the project in the first place.
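To make the economics concrete, here is a rough, purely illustrative sketch of the kind of sum a facility might run; the hourly rates, core counts, render times and preemption overhead are hypothetical placeholders rather than quoted cloud prices.

```python
# Back-of-envelope cloud render cost model (illustrative only; all rates and
# render times below are hypothetical placeholders, not quoted prices).

def render_cost(frames, core_hours_per_frame, cores_per_node,
                node_hour_rate, preemption_overhead=1.0):
    """Estimate cloud render cost for a sequence.

    preemption_overhead > 1.0 models frames re-rendered when preemptible
    (spot) nodes are reclaimed mid-task.
    """
    total_core_hours = frames * core_hours_per_frame * preemption_overhead
    node_hours = total_core_hours / cores_per_node
    return node_hours * node_hour_rate

frames = 24 * 60 * 2            # a two-minute sequence at 24 fps
on_demand = render_cost(frames, core_hours_per_frame=4, cores_per_node=32,
                        node_hour_rate=1.60)
preemptible = render_cost(frames, core_hours_per_frame=4, cores_per_node=32,
                          node_hour_rate=0.35, preemption_overhead=1.15)
print(f"on-demand:   ${on_demand:,.0f}")
print(f"preemptible: ${preemptible:,.0f}")
```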
“The technology has now evolved to a point where any filmmaker with any VFX project or theatrical, TV or spot editorial can call on the cloud to operate at scale when needed — and still stay affordable,” says Parker. “Long anticipated and theorized, the ability to collaborate in realtime with teams in multiple geographic locations is a reality that is altering the post production landscape for enterprises of all sizes.”
Parker says the new post model might look like this. He uses the example of a company headquartered in Berlin — “an innovative company might employ only a dozen managers and project supervisors on its books. They can bid with confidence on jobs of any scale and any timeframe knowing that they can readily rent physical space in any location, anywhere in the world, to flexibly take advantage of tax breaks and populate it with freelance artists: 100 one week, say, 200 in week three, 300 in week five. The only hardware (rental) costs would be thin-client workstations and Wacom tablets, plus software licences for 3D, roto, compositing and other key tools. With the job complete, the whole infrastructure can be smoothly scaled back.”
The compute costs of spinning up cloud processing and storage can be modelled into client pitches. “But building out and managing such connectivity independently may still require considerable CAPEX — one that might be cost-prohibitive if you only need the infrastructure for short periods,” notes Parker. “Cloud compute resources are perfect for spikes in workload but, in between those spikes, paying for bandwidth you don’t need will hurt the bottom line.”
Dedicated, “burstable” connectivity speeds of 100Mbit/s up to 50Gbit/s with flexibility, security and reliability are highly desirable attributes for the creative workflow. Price points, as ever, are a motivating concern. Parker’s offerings “move your data away from Internet bandwidth, removing network congestion and decreasing the time it takes to transfer your data. With a direct link to the major cloud provider of your choice, customers can be in control of how their data is routed, leading to a more consistent network experience.”
“Direct links into major studios like Pinewood UK open up realtime on-set CGI rendering with live-action photography for virtual production scenarios,” adds Parker. “It is vital that your data transits straight to the cloud and never touches the Internet.”
With file sizes set to continue to increase exponentially over the next few years as 4K and HDR become standard and new immersive media like VR emerge into the mainstream, leveraging the cloud will not only be routine for the highest budget projects and largest vendors, it will become the new post production paradigm. In the cloud, creative workflows are demystified and democratized.

Monday 26 November 2018

Gramco.studio plays games with Phantom VEO 4K for Manolo Blahnik

Content marketing for VMI


As the fashion world revolves more and more around social media, so creative directors like Graeme Montgomery have cut their cloth accordingly. As one of the world’s leading photographers and moving image makers for luxury brands, Montgomery creates high-end, product-focused social media imagery for clients including Graff, Harvey Nichols, GHD, Missguided and Primark from his London-based Gramco.studio.
Shoe designers Manolo Blahnik have taken to Instagram in particular with campaigns that showcase their chic but quirky image.
Delighted with Montgomery’s creation of a series of short films for Instagram based on a circus theme, the brand invited him back to help promote this season’s new collection.
“I’d just shot a huge production with a Phantom FLEX 4K camera for Essie nail varnish which was the first super slow-motion project I’d done, and I wondered if we could do something similar but on a fraction of the budget.”
He explains, “The Phantom FLEX 4K is a terrific camera but it’s expensive to rent, you need loads of lights and a specialist operator. Since the Manolo Blahnik campaign was for social media we needed to achieve the same high production value they’d expect for any other media but without a massive production budget.” As he was casting around for options, VMI invited Montgomery to an open morning workshop all about the Phantom VEO 4K-PL.
“I often get my kit from VMI so myself and my DP Joe Dyer decided to go along and find out more. It would be far cheaper to rent but would it work for our project?”
“It was great,” he continues. “They showed us the camera and went through its use step by step. The VEO immediately felt much simpler to use than its big brother and we came away thinking we should just go for it.”
Montgomery came up with the concept of games for the new series of vignettes, showcasing various shoe designs worn by a model interacting in colourful and funky ways with Scrabble, hula hoops, Jenga, playing cards and Connect 4.
“The trick is to find a concept that you can hold together for six or seven quite different looks. We felt the games idea could accommodate different styles of shoes and a mix of storylines that would be stylish, fun and quirky.”
They shot 4K at 500 fps, the oversampled resolution proving useful for cropping into the frame in post.
“If we shot 1000 fps – for instance, of knocking over the Scrabble board – the clip would be too long and boring when played back, so 500 fps is about right. We played around with it in the edit, speeding the clip up and using the super slo-mo to keep it interesting. We also looped most of the clips back so that they can play as long as the viewer wants.”
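The arithmetic behind that choice is simple enough; a quick illustrative sketch (assuming a 25 fps delivery rate, a common choice for UK social work, not a figure given by Montgomery) shows why a 1000 fps capture quickly outstays its welcome.

```python
# Rough slow-motion arithmetic (illustrative; assumes 25 fps playback).

def playback_seconds(action_seconds, capture_fps, playback_fps=25):
    """How long a real-time action lasts on screen once slowed down."""
    return action_seconds * capture_fps / playback_fps

action = 1.5  # seconds of real time, e.g. the Scrabble board going over
for fps in (250, 500, 1000):
    print(f"{fps:>4} fps -> {playback_seconds(action, fps):5.1f} s on screen")
# 500 fps turns 1.5 s of action into 30 s of footage; 1000 fps makes it 60 s,
# which is why the longer clip can start to drag.
```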
Montgomery says he found the Phantom VEO itself easy to operate. “You just have to remember to cut as soon as the end of the action happens in order to stay within the camera’s memory buffer but that’s not hard to do with a touch of practice.
“You get everything that you’d technically get with the higher end Phantom but I’d say be aware that the workflow is quite a lot slower because instead of downloading as you go, you need to allow time to download the data. That doesn’t matter so much in still-life photography or product shoots such as this where you don’t have that pressure of time and talent on set that is costing you a lot of money.”
Montgomery edited, added audio and finished the series on Final Cut Pro.


Why Jaunt’s VR exit is good news for Virtual Reality

RedShark News
The sale by Virtual Reality production pioneer Jaunt of its VR equipment division should not be seen as another nail in the VR coffin. On the contrary, VR – like stereoscopic 3D – will return bigger and better than before with augmented reality a means to that end. It is AR to which Jaunt is turning its attention.
Jaunt is the Californian company launched in 2013 which helped drive the recent wave of ‘cinematic’ VR. It not only produced news, documentary, sports and narrative 360-degree content from its studios in Santa Monica (including a VR mini-series directed by Jumper director Doug Liman) but it built its own VR rig, Jaunt ONE, and post pipeline on the back of $65 million in funding. It is this equipment, services and R&D which is being offloaded (with London-based VR developer Spinview a reported suitor).
The writing has been on the wall for the current incarnation of VR almost since it began. While Hollywood seemed to go overboard about its potential for usurping 2D cinema, many observers recognised the commercial risk of piling money into head-encasing hardware. After all, this was a key part of stereo 3D’s failure to take root. Uncomfortable wearable experiences aside, the bulk of VR content hasn’t so far been good enough, hampered by low-resolution graphics and video, disorientating navigation and a steep learning curve among developers about working in the medium’s new audio-visual language.
A year ago, Nokia exited the VR business after realising it wasn’t going to make its money back any time soon on its Ozo VR camera, and sales of headsets seem to have stalled.
However, the smart money for some time has been on AR, the overlaying of graphics and internet-driven communications on our regular vision. Most people don’t see it stopping there, but view AR as a stepping stone to mixed reality – where simulated or narrative-driven objects and characters intermingle with the real world. Another term for that is extended reality, but the future of the internet – and arguably of entertainment and media – is likely to be accessible on a spectrum from mobile AR to mobile 360 to full-blown VR.

Sidelining VR

Jaunt has sidelined VR to concentrate on delivering AR content at large scale. Specifically, it means to focus on the Jaunt XR Platform, described as a VR and AR (or Xtended Reality) distribution system.
According to Jaunt, the platform “allows augmented reality assets, virtual reality content, and 2D assets to be delivered across devices and live side-by-side with existing media libraries.”
It has already been jumped on by one of Jaunt’s own investors, Sky, which has been using XR to deliver VR content to consumers via the Sky VR app since the beginning of the year.
Jaunt is also integrating Personify’s Teleport system into XR. The software, which Jaunt acquired in September, captures and streams 3D AR footage of objects and people.
There are far bigger players than Jaunt with their sights trained on AR. Facebook has hived off a chunk of its development on the head-mounted VR gear Oculus to build AR glasses. Google continues to develop its Glass, principally for business rather than consumer applications, where wearers will be less minded about wearing goggles for their job than for walking down the high street.
Amazon-backed Canadian company Thalmic Labs (being rebranded as ‘North’) is releasing Focals – smartglasses with Amazon Alexa inside. Microsoft continues to pump HoloLens and Apple has made AR a fixture of the iPhone, recently buying AR hardware developers Vrvana and Akonia to boost development of its own headset.
Magic Leap has perhaps the most ambitious and certainly most financially muscular approach. On the back of $2.44 billion and five years of development it just released version one of its glasses and software for content creators with a consumer model due next year. It says its spatial computing software – which in many ways sounds similar to Jaunt XR – won’t be restricted to Magic Leap hardware.
John Gaeta, the VFX legend behind Bullet Time and now on the board of Magic Leap, told me last month, “VR is a car wreck.
“You have to make VR first to understand it, which is what headset developers do not understand. I am concerned that VR could become isolated. On the other hand, it could be the most mind-bending computer interface. But MR is an easier path to imagine use cases emerging.”

Turn your camera into a light field device, simply by adding a new lens!



RedShark News
The demise of camera maker Lytro in March hasn’t stopped optical engineers finding new ways of changing the focus of an image after the event. But while most previous attempts to solve this have rested on using either arrays of moving cameras (as is currently the case at Google) or on micro-lens arrays (Lytro’s approach) plus a tonne of processing, German start-up K-Lens has developed a lens that can give any standard camera the attributes of light field capture.
What began as a research project at the Max Planck Institute for Informatics and Saarland University is expected to launch commercially by the end of next year.
Capturing depth information makes possible extended depth of field, adjustment of focus or blur in post-processing, functions like depth-based segmentation and the reproduction of 3D images – possibly for viewing as a hologram on future displays. All these benefits, though, have come with high acquisition costs and extremely cumbersome workflows. Ultimately, this is what floored Lytro.
The Saarbrücken start-up’s core component in its K | Lens is the so-called Image Multiplier, a system or tunnel of internal mirrors that, like a kaleidoscope, produces different perspectives of the same scene, which are then simultaneously projected onto a single camera sensor.
It not only offers complete control over focus and blur, fully automatic segmentation, depth-based segmentation, perspective change and 3D images, but also complete access to the depth planes of the recorded image.
It is optimised for full-frame sensors (36mm x 24mm) and works “in principle” with all major DSLRs and lens mounts.
A worldwide patent has been applied for on the K | Lens, which will shortly be released in prototype and for which K-Lens, the firm, received the Photokina Startup Award 2018.
K-Lens claims there is no other product on the market that can match its possibilities - and we can’t disagree, although we note that it’s still experimental.
The start-up has also written a software plugin for generating the light field image and achieving effects such as realising focus and blur within the same image plane and digital simulation of lens bokehs. Based on depth information you can also segment the image into different layers which can be edited or colour graded separately.
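As a rough illustration of the depth-based segmentation idea (this is not the K-Lens plugin itself, just the general principle), an image plus a per-pixel depth map can be split into separately gradable layers in a few lines.

```python
# Minimal sketch of depth-based segmentation: split an image into layers
# using a per-pixel depth map so each layer can be graded separately.
# Illustrative only; not K-Lens's software.
import numpy as np

def split_by_depth(image, depth, boundaries):
    """Return one masked layer per depth band.

    image: (H, W, 3) float array; depth: (H, W) array (assumed in metres);
    boundaries: ascending depth values marking the edges between bands.
    """
    edges = [0.0] + list(boundaries) + [np.inf]
    layers = []
    for near, far in zip(edges[:-1], edges[1:]):
        mask = (depth >= near) & (depth < far)
        layers.append(image * mask[..., None])  # zero out everything else
    return layers

# toy example: foreground nearer than 2 m, background beyond 5 m
img = np.random.rand(4, 4, 3)
dep = np.random.uniform(0.5, 10.0, size=(4, 4))
foreground, midground, background = split_by_depth(img, dep, boundaries=[2.0, 5.0])
```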
What else do we know? Well, the K | Lens has an adjustable aperture, is about 25cm long and weighs around 1000g, similar to standard hand-held zoom lenses. 
Videographers will also be able to control focus and blur post-event and perform multi-layer editing in addition to being able to shoot content for 3D, VR, AR and “holo”. There’d be no need for green screens either.
Worth noting, though, that video functionalities “will probably not be available” with the first version.
The company is also researching a commercial light field camera targeting the professional film industry. Supported by Germany’s Federal Ministry of Education and Research, this project is also seeking a partner like Sony, Nikon, or Canon. 
Dr. Klaus Illgner is the brains behind technical development. He has a Ph.D. in engineering and extensive experience in image and video technology from time at Texas Instruments, Siemens and research institute IRT.
A little digging by Light Field Forum confirms the 2016 U.S. patent application titled Plenoptic imaging device (US20160057407A1), which details how such an image multiplier works.

Friday 23 November 2018

The future of TV is very, very thin

RedShark News
Go up and down the housing estates of this glorious land and you might think that TV set size is chosen in inverse proportion to that of the living room. TV displays tend to dominate any room given the way we arrange our sofas toward them, but anything more than 40 inches in your average habitation feels like excess, especially when the screen is switched off and you’re left with a gaping black plastic rectangle.
This is the market which display makers are angling towards with new technology that renders the TV wafer thin – and rollable.
LG have been one of the leaders in this space, revealing successive prototypes over the last few years, and the technology will form a centrepiece of its presence at the Consumer Electronics Show in Vegas come January, if Engadget is to be believed.
There are no details on the internal documents the site claims to have seen but there are more than enough clues from what LG showed at CES last January.
This was a 65-inch 4K display that could be rolled up into a box the size of a soundbar.
It’s OLED, of course – or ROLED – the organic bits removing the necessity of a rigid back panel providing light to the screen. The company claims that a ‘high molecular substance-based polyimide film’ on the rear of the screen is the key to its flexibility. This also provides the additional benefit of allowing thinner construction.
In 2014, LG exhibited an earlier version which was 18 inches and HD only, but so flexible it could be rolled into a tight tube just 3cm in diameter – a size that makes it inherently portable and clearly another useful application for the technology. Instead of reading the Metro on the Tube into town, why not unfurl CNN?
LG returned in 2017 with a 55-inch concept weighing just 1.9kg which could be mounted on a wall and held in place using magnets.
LG’s 2018 0.18mm thin ‘wallpaper’ OLED is also light enough to be carried although quite how this is possible when the power unit (and speakers?) are contained within the box is not clear.
A neat gimmick is that as the display is rolled away into its box it exhibits different aspect ratios, the display switching from 16:9 to 21:9. The 2018 demo showed the panel leaving a strip about a quarter of the screen’s total size to show time, news updates or whatever smart home information you set it for.
It seems such screens need to be rolled, not folded flat. Perhaps that’s obvious, being less damaging to the underlying pixel structure, although if this is to become a commercial product this glitch may need ironing out (pun intended).
Folding it flat would permanently damage it, and therefore the screen doesn't represent a chance for something many have lusted over for a while, an interactive video newspaper that feels just like the paper product.
Ironically, a wafer-thin cylinder would also make carrying mammoth displays through the front door and into a living room more possible, though they could always be put away afterwards.
A short throw projector could do the same black plastic saving trick but would arguably come with a bigger hardware footprint (especially if you include the projection screen).
Bendable tech like this is not new, nor is it exclusive to the Korean manufacturer. Sony and Samsung for example have had curved OLEDs on sale for a while and there’s a veritable battle among mobile handset makers to bring out ultra-flexible cell phones.
Apple, Huawei, LG and Motorola all have foldable phones in the works, with Samsung touting its 2019 Galaxy X as a device that transitions from a phone to a tablet by combining two 3.5-inch screens into a single 7-inch one.

Tuesday 20 November 2018

Night Sight on the Pixel 3 is a step change for mobile photography

RedShark News
Night Sight is a new feature of the Pixel Camera app that lets you take sharp, clean photographs in near darkness. It works on the main and selfie cameras of all three generations of Pixel phones, uses a single shutter press, and does not require a tripod or flash.
How dark is dark? Well, imagine it’s so dark you can’t find your house keys on the floor.
Google rates this light level as 0.3 lux. Technically, lux is the amount of light arriving at a surface per unit area, measured in lumens per square metre. So, 30,000 lux is direct sunlight, 300 lux is typical office lighting, 50 lux is your average restaurant, 10 lux is the minimum for finding socks that match in your drawer, 3 lux is a pavement lit by street lamps and 0.1 lux is so dark you need a torch to find the bathroom.
According to Google, smartphone cameras that take a single picture begin to struggle at 30 lux. Its boffins have deployed a barrage of software tweaks to improve this performance.
HDR+ with a machine learning twist
The technology builds on HDR+, a computational photography technique introduced a few years ago that captures a burst of frames, aligns them in software, and merges them together to improve dynamic range.
As it turns out, merging multiple pictures also reduces the impact of noise, so it improves the overall signal to noise ratio in dim lighting.
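A toy demonstration makes the point: the sketch below simply averages pre-aligned noisy frames (skipping the alignment step HDR+ performs) and shows the signal-to-noise ratio improving roughly with the square root of the number of frames merged. The burst sizes and noise level are illustrative, not Google’s figures.

```python
# Toy demonstration of why merging a burst reduces noise. This omits the
# tile-based alignment HDR+ performs; it just averages pre-aligned frames.
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((256, 256), 0.1)           # a dim, flat grey scene
noise_sigma = 0.05                         # illustrative read/shot noise level

def noisy_frame():
    return scene + rng.normal(0, noise_sigma, scene.shape)

for n in (1, 4, 15):                       # burst sizes are illustrative
    merged = np.mean([noisy_frame() for _ in range(n)], axis=0)
    snr = scene.mean() / merged.std()
    print(f"{n:2d} frames merged -> SNR ~ {snr:.1f}")
# Residual noise falls roughly as 1/sqrt(n), so a 15-frame burst buys close
# to a 4x improvement in signal-to-noise over a single exposure.
```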
To combat motion blur that Google’s existing optical image stabilisation can’t fix, the Pixels use ‘motion metering’, which measures recent scene motion and chooses an exposure time that further minimises blur.
All three phones use the technique in Night Sight mode, increasing per-frame exposure time up to 333ms if there isn’t much motion. On the Pixel 3 the function uses Super Res Zoom (whether you zoom or not), which also works to reduce noise since it averages multiple images together.
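How an exposure might be chosen from a motion estimate is sketched below; this illustrates the idea rather than Google’s implementation, and the per-frame pixel ‘blur budget’ is an assumed figure.

```python
# Hedged sketch of the motion-metering idea: pick the longest per-frame
# exposure that keeps motion blur inside a pixel budget, capped at 333 ms.
# The blur budget and motion values are illustrative assumptions.

MAX_EXPOSURE_S = 0.333        # cap quoted in the article for static scenes

def choose_exposure(motion_px_per_s, blur_budget_px=1.5):
    if motion_px_per_s <= 0:
        return MAX_EXPOSURE_S                       # no motion: take the cap
    return min(MAX_EXPOSURE_S, blur_budget_px / motion_px_per_s)

for motion in (0, 10, 60, 400):                     # pixels of image motion per second
    print(f"{motion:4d} px/s -> {choose_exposure(motion) * 1000:6.1f} ms")
```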
A machine learning-based enhancement to auto white balancing has been trained to discriminate between a well-white-balanced image and a poorly balanced one. In other words, the colours of a scene shot in extreme low light should appear more neutral.
A related problem is that in very dim lighting humans stop seeing in colour, because the cone cells in our retinas stop functioning, leaving only the rod cells, which can't distinguish different wavelengths of light. Scenes are still colourful at night; we just can't see their colours. 
It hasn’t gone so far as making night scenes ‘day for night’ (which for most people who want that shot-at-night look might be pointless), although Google intimates it probably could. Instead, it’s employed some tone mapping tricks to lend colour to the night-time shot while ensuring it reminds you when the photo was captured.
Now, Night Sight can't operate in complete darkness, so scenes do need some light falling on them.
Also, while Night Sight has landed on the Pixel 2 and the original Pixel, Google says it works best on the Pixel 3, in part because its learning-based white balancer is trained for the Pixel 3 and so will be less accurate on older phones.
Below 0.3 lux, autofocus begins to fail. If you can't find your keys on the floor, your smartphone can't focus either.
Below 0.3 lux you can still take amazing pictures with a smartphone, and even do astrophotography, but for that you'll need a tripod, manual focus, and a third party or custom app written using Android's Camera2 API.

“Eventually one reaches a light level where read noise swamps the number of photons gathered by that pixel,” explains Google in a blog by imaging engineers Marc Levoy and Yael Pritch. “Super-noisy images are also hard to align reliably. Even if you could solve all these problems, the wind blows, the trees sway, and the stars and clouds move. Ultra-long exposure photography is hard.”

Monday 19 November 2018

What’s Driving the Mass Adoption of Remote Production over the Internet? The Clue is in the Title

content marketing for Haivision
Anyone for Segway polo? Fancy checking out international dodgeball? Catching up on the latest drone racing league? Now you can – thanks to the internet. Dozens of niche, minor and so-called second-tier sports like these are rapidly growing a fanbase online. They can look to esports as the benchmark – the global competitive video gaming phenomenon was built on live streaming.
It’s not just new or high-tech sports benefiting either. Curling – which dates back to medieval Scotland – and surfing, which has never enjoyed much broadcast coverage, are among dozens of established events now thriving on the oxygen of the internet.
For the rights holders, clubs, athletes and media partners of these sports, online distribution has opened up unprecedented revenue-generating and audience-reaching opportunities.
But it’s not just about ubiquitous connectivity to consumer devices. Pent-up demand for live events is being unlocked by new technology enabling reliable, high-quality and cost-efficient at-home production over the public internet. In turn, broadcasters are finding the flexibility to produce and distribute more video content with fewer resources.
Outside broadcast production, with its attendant multi-million dollar mobile trucks, uplink vehicles and legions of engineers, has always been a premium exercise – even for news and music events. It stands to reason that the best way to streamline costs is to reduce the burden of its most expensive resources: the facilities required to capture, process and produce at a remote venue and the crew needed to set up, operate and manage it.
Correctly implemented, the remote-integration model (REMI) can reduce the movement of people and equipment; increase the utilisation of kit; reduce on-site setup times; and maximise the efficiency of production teams.
The REMI concept is not new, but the cost of contribution and onward delivery has remained a pretty significant barrier to implementing it for all but the largest live events.  
A decade ago, for example, NBC was clipping feeds sent back over satellite from the Beijing Olympics at its base in New York. Broadcasters have since moved to transmit more and more of the raw (ISO) feeds, audio, and equipment control from venue to a central studio facility. However, they still relied on satellite, telco infrastructure, private fibre or other managed networks to contribute signals – all options which are often so expensive they negate the incentive to go remote in the first place.
Making remote production cost-effective
What has finally tipped the balance in favour of the mass adoption of REMI is the introduction of technology capable of giving clients confidence in using the public internet for live contribution video.
There have been justifiable concerns which have made some video providers reluctant to commit to this path. Technologies that transport high-bandwidth, low-latency video streams over unmanaged networks must be able to handle large amounts of packet delay variation (jitter) and be able to recover packets that have been lost in transmission. Any delay or out-of-sync audio is commercial suicide for the live event producer and its partners – their subscribers won’t hesitate to vent their frustration on social media.
Such concerns should now be assuaged with one of the biggest game changers in broadcast production and the internet video production industry today.
The open-source SRT (Secure Reliable Transport) protocol solves the problem, which everybody has, of transmitting video at very high quality over a bad or dirty internet line.
In simple terms, the video is encrypted so no third party can listen (Secure). It automatically recovers from severe packet loss with implementation of techniques like error correction (Reliable). And it is dynamically adaptable to changing bandwidth conditions (Transport).
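For a feel of what ‘Reliable’ means in practice, the sketch below illustrates the general ARQ pattern behind selective retransmission – a receiver spotting gaps in sequence numbers and asking the sender to resend from a short buffer sized to the latency window. It is emphatically not SRT’s actual implementation, just the underlying idea.

```python
# Illustrative sketch of the retransmission idea behind 'Reliable' transport.
# This is NOT the SRT codebase; it shows the general ARQ pattern: the receiver
# detects gaps in sequence numbers, NAKs them, and the sender retransmits from
# a short buffer that only holds packets young enough to still be useful.
from collections import OrderedDict

class Sender:
    def __init__(self, buffer_size=128):
        self.buffer = OrderedDict()          # seq -> payload, bounded window
        self.buffer_size = buffer_size
        self.next_seq = 0

    def send(self, payload):
        seq = self.next_seq
        self.next_seq += 1
        self.buffer[seq] = payload
        if len(self.buffer) > self.buffer_size:
            self.buffer.popitem(last=False)  # too old to matter at this latency
        return seq, payload

    def retransmit(self, seq):
        return self.buffer.get(seq)          # None if it aged out of the window

class Receiver:
    def __init__(self):
        self.expected = 0
        self.missing = set()

    def receive(self, seq, payload):
        if seq > self.expected:
            self.missing.update(range(self.expected, seq))   # gap detected
        self.missing.discard(seq)
        self.expected = max(self.expected, seq + 1)
        return sorted(self.missing)          # sequence numbers to NAK back
```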
Whether production is performed in the cloud or at a traditional studio, you can use SRT to contribute low-latency feeds where bandwidth at the venue is unpredictable or you can transmit high quality video 24/7 from point A to point B on a limited budget using the public internet.
For example, ESPN has deployed SRT-equipped devices to 14 collegiate athletic conferences that have been used to produce more than 2,200 events via low-cost internet connections, in place of using traditional satellite uplink services that would have cost $8-9 million.
Changing the landscape for new and traditional media players
What’s more, as an open-source, royalty-free and flexible specification it performs as well as, or better than, proprietary solutions and is supported by a growing range of applications including IP cameras, encoders, gateways, OTT platforms and CDNs.
REMI over the internet is now a viable avenue for broadcasters to face down fierce competition from new media players. OTT distribution provides a more cost-efficient alternative to dedicated distribution platforms for delivering richer content to a global audience.
REMI over the internet not only enables broadcasters to reach audiences with niche content, it allows them to increase coverage of a major event by permitting more feeds from multiple cameras around a venue. From fan-cams in the bleachers to player and bench cams, and from streams overlaid with real-time stats to video with bespoke commentary, the viewer gets more personalised viewing options and the video service provider great potential for targeted advertising. With no cost restrictions around broadcasting time, providers have greater flexibility in building programming around an event.
Reduced cost, voracious demand, IP maturity and low-latency jitter-free streaming have combined to make the perfect storm for mass adoption of remote production.
Now, what’s the score in that segway polo match?

Wednesday 14 November 2018

Robotic Camera Systems


InBroadcast

Camera robots can empower the cameraman, motion control operator and director of photography to get the camera exactly where it needs to be for unique, tightly choreographed camera angles


Broadcasters increasingly rely on robotic camera systems to drive operational efficiencies through automation. While automated camera moves have been used in news studios for many years, in highly choreographed productions where consistency of product is key, robotic camera moves have become integral to overall workflow automation.
With the rise in quality of small-format cameras, advancing motor technology and motion control software, together with new levels of product design aesthetics, robotic camera positions are bringing a level of motion usually associated with manually controlled fluid heads, and in positions that add further value to productions. Such advances are allowing robotic camera moves to be cut live to air (rather than just used for replays) and, through a wider range of payload and mounting options, open up the possibility of more camera positions without compromising venue audience space.
Key buying criteria include speed, smoothness, payload capacity, safety and reliability. Live productions have the added challenge of making cameras as invisible as possible to the live audience.
“Another area of growth for robotic cameras has been the rise of remote productions,” says Mark Roberts Motion Control CEO Assaff Rawner. “With the increasing availability of stable high-bandwidth networks, the control of camera robotics over IP is an attractive proposition to lower production costs and minimise travel.”
MRMoCo has standardized on IP control for all of its robotic range with built-in features such as network diagnostics, IP video encoding at the camera head and localised user client applications for full feature remote control.
Automating camera motion in sports can be achieved by using machine vision to analyse ball and player positions in real time, feeding those positions to the robotic cameras and automating the camera motion. Advanced algorithms working in real time are used to frame the shots in a fluid and highly adaptive way to provide this level of automation. MRMoCo’s Polycam Player, for example, provides automation for certain camera positions in football using robotics.
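A stripped-down illustration of the geometry involved (not MRMoCo’s Polycam algorithm, and with made-up positions) is pointing a pan/tilt head at a tracked ball and smoothing the move so it stays broadcast-friendly.

```python
# Minimal sketch: point a pan/tilt head at a tracked ball position and smooth
# the move. Illustrative geometry only; not any vendor's algorithm.
import math

def target_angles(ball_xyz, camera_xyz):
    """Pan/tilt (degrees) needed to centre a point given in metres."""
    dx = ball_xyz[0] - camera_xyz[0]   # across the pitch
    dy = ball_xyz[1] - camera_xyz[1]   # along the pitch
    dz = ball_xyz[2] - camera_xyz[2]   # height difference
    pan = math.degrees(math.atan2(dx, dy))
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return pan, tilt

def smooth(current, target, alpha=0.15):
    """Exponential smoothing keeps the robotic move fluid rather than twitchy."""
    return current + alpha * (target - current)

pan, tilt = 0.0, 0.0
for ball in [(10, 40, 0), (14, 38, 0.5), (20, 35, 1.0)]:   # tracked positions (m)
    tgt_pan, tgt_tilt = target_angles(ball, camera_xyz=(0, 0, 6))
    pan, tilt = smooth(pan, tgt_pan), smooth(tilt, tgt_tilt)
```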
“Integrating robotics in existing workflows allows for the best of both worlds – great storytelling and emotive shots from manned camera positions, and consistency of coverage, space saving and unique angles from automated cameras,” says Rawner. “However, within any automated live event camera workflow there needs to be, in our experience, a level of human intervention that is seamless to the operation.”
Telemetrics designs robotics products and remotely controlled systems for broadcasters to get the most out of the available studio space. Small studios get that ‘big’ look with a single operator and automation technology such as the RCCP-1A STS Control Panel, which includes software that locks cameras onto the talent and automatically trims the shot, and the PT-LP-S5 pan-tilt head system, which enables multiple cameras to be quickly configured as pan, tilt and pedestal parameters are accessed via buttons on the camera base (eliminating the need for a dedicated control panel). In addition, LED indicators provide real-time warning if the load is out of balance.
Twenty years ago, when Mo-Sys introduced the industry’s first automatic, real-time optical camera tracking system, it was working with active LEDs and a fluorescent optical amplifier for a sensor. The latest version of StarTracker works with low-cost reflective stickers on the ceiling for tracking, while software provides enhanced smoothness, larger tilt capability, faster recovery, IP data output and a networked user interface.
It argues that competitive systems are triple the cost and employ several high-cost infrared cameras, while StarTracker uses a single optical sensor on one navigation camera and retro-reflective stickers that are peeled from a roll and placed as a ‘constellation’ on the studio ceiling. LED light reflected by the ‘stars’ is detected by the navigation camera. The camera transmits the location data to the graphics engine which produces the virtual environment. Initial set-up can be achieved in hours, after which no further calibration is required. It works equally well mounted on pedestals, cranes, jibs or for use with handheld cameras.
Ross Video offers a wide range of robotic camera systems including track-based dollies, free-roaming pedestals, and standalone pan & tilt solutions. Furio Live, for example, has the capacity to support full-sized cameras and teleprompters (unlike most jibs or other specialty camera systems), giving it the flexibility to deliver beauty shots while also serving as the primary production camera. All Ross studio robotic solutions communicate natively over IP, making its robots easy to implement and manage and simplifying installation into new and existing studios.
Shotoku’s TR-XT is a camera control system which can store a virtually unlimited number of shots, since they are held on the hard drive of a computer at the heart of each system. Shots are stored with a thumbnail image, captured in SD/HD-SDI, and may be displayed in small, medium or large format depending on operator preference. Shots may be displayed in a random mode (any grid position for any camera shot) or column mode (shots from a camera arranged vertically below the relevant camera selection button). A unique single-camera mode enables live video of the selected camera to be displayed on the touch screen, along with all stored shots for that camera.
As on all Shotoku control panels, a three-axis joystick enables smooth control of the pan, tilt and zoom axes, with pan and tilt response speeds automatically compensated according to the zoom angle so that even on very narrow-angle shots head movement is smooth and under close control.
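The zoom-compensation idea can be sketched in a few lines – scale the joystick response by the current field of view so a tight telephoto shot doesn’t whip across the frame. The rates below are illustrative placeholders, not Shotoku’s figures.

```python
# Sketch of zoom-compensated joystick response (the idea described above,
# not any manufacturer's implementation). Numbers are illustrative.

FULL_RATE_DEG_PER_S = 60.0       # assumed maximum head speed on a wide shot
REFERENCE_FOV_DEG = 60.0         # field of view at which full speed applies

def pan_rate(joystick, fov_deg):
    """joystick in -1..1; a narrower field of view gives a proportionally slower head."""
    scale = fov_deg / REFERENCE_FOV_DEG
    return joystick * FULL_RATE_DEG_PER_S * scale

print(pan_rate(0.5, fov_deg=60))   # wide shot: 30 deg/s
print(pan_rate(0.5, fov_deg=5))    # tight telephoto: 2.5 deg/s
```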
The TR-XT connects to all Shotoku camera systems via an Ethernet network connection, and to third-party devices either by serial or via Digiport protocol convertors. Paired with the firm’s automation software, users can control the system from an external third-party computer, managing everything from lights, cameras and graphics to video rendering.
Shotoku also offers the SmartPed Robotic Pedestal, a fully robotic XY pedestal, for on-air environments; the SmartRail which supports floor or ceiling operation and can optionally provide tracking data for AR/VR graphics applications; and the Free-d2 Absolute Tracking System which uses simple ceiling markers and video processing algorithms to determine the exact position and orientation of the studio camera.
Already claiming the world’s fastest robotic camera system in Bolt, MRMoCo debuted a junior version earlier this year. The Bolt JR rig is a compact six-axis camera robot arm developed for film studios, photographic studios and table-top work where studio space and budget are key criteria.
Available in pedestal and on-track versions, the Bolt JR cinebot has an arm reach of 1.2 metres and can move at high speed on track at over 3m per second with a camera payload of up to 12kg. The firm’s Flair software offers a variety of automated functions and precision repeat functions, and Bolt JR can also automate lighting and trigger synchronised SFX timecodes, rig movement or model movement.
PTZ systems have entirely replaced a manual operator in many situations such as news reporting or interview spaces, where automation allows for a consistent product day in, day out. The new range of three-chip PTZs offers many of the same features as broadcast cameras at a lower price point and, with robotic movement built in, creates the opportunity for lower-cost studio production.
Panasonic, for example, has partnered with Tecnopoint and Movicom to create robotic systems, new protocols and tracking systems for easier studio integration. Put that together with a PTZ like its new AW-UE150, which offers 4K 60p capture, and fresh possibilities open up – though it does cost €11,000.
It has also partnered with AREPLUS to link Panasonic PTZ cameras to motion control robotic camera systems like the UR10 for broadcast applications. Panasonic’s entire remote camera line-up can be controlled via the AW-RP50 controller, which includes a joystick and additional operating enhancements, allowing up to five remote cameras to be controlled via serial and up to 100 cameras via a switching hub. On top of this, Panasonic now offers full integration support of NewTek’s video-over-IP protocol NDI Version 3, allowing users to connect Panasonic cameras directly into an NDI network.
Sony’s first 4K PTZ is the BRC-X1000 for remote capture of broadcast quality images. Sony describes the large Exmor R CMOS sensor creating “beautiful 'bokeh' effects with a shallow depth of field to suit any artistic intention.”
“When integrated with control systems that support not only joystick control, but also named and saved presets, motion presets, dynamic and programmable auto-zoom and other sophisticated features, PTZ cameras can perform as well as many human operators,” claims Rushworks president Rush Beesley.
A number of facilities create one central control station for camera operation. For instance, using AJA RovoControl software, one person can control multiple RovoCams through an easy-to-operate GUI on a single PC. AJA’s $2,500 RovoCam offers a Sony UltraHD sensor that gives users the ability to extract an HD raster from source and even explore pans and tilts from a stationary camera as you scan the original UltraHD raster.
Mobile Viewpoint’s NewsPilot uses artificial intelligence and either PTZ or fixed-lens cameras to automate the low-cost delivery of content from remote locations. It consists of three PTZ cameras and the firm’s Automated Studio control box. It also includes CameraLink, a robotic arm which can move a 3kg PTZ camera much like a traditional dolly arrangement, offering the same camera control normally associated with high-quality news productions.
VR and camera-op robots
VR filmmakers and video production crews face the unique challenge of needing to hide the crew and equipment in a location away from the set. Traditional camera dolly systems often require human operators and tracks. A solution to this has been devised at Double Robotics. Claimed as the world's first robotic camera dolly made for 360 filmmakers, the Universal 360 Camera Mount attaches to the Double base using an industry-standard ¼”-20 bolt. An iPhone is secured in the mount and the camera operator can drive the Double wirelessly from behind the scenes via LTE/4G, Wi-Fi or Bluetooth. A package costing $3,000 gets you the Double 2, Universal 360 Camera Mount and a Travel Case. It’s primarily aimed at education and telemedicine users, but why not media and entertainment?

Soloshot has an automated system that lets you film yourself as you move through a scene – no human camera operator required. Its functionality is pretty straightforward: you wear a wireless transmitter and the Base unit automatically pans, tilts and sends zoom commands to the camera to keep you in the shot at up to 2,000 feet away.
With a load capacity of less than 1.5 lb, the Soloshot3 isn’t really meant for use with DSLRs or camcorders but, rather, for use with the Optic25 and Optic65 cameras, which are custom designed for use with Soloshot Bases. These interchangeable cameras are lightweight, attach directly to the Base, and provide automatic zoom and focus tracking. The Optic65 features a massive 65x optical zoom range and records up to 4K video at 30 fps and 1080p video at up to 120 fps. The system also features built-in automated editing software and Wi-Fi to connect to a mobile app.




Fake news? China has an AI humanlike news presenter

RedShark 
In the week that Republican cheerleader and Fox News anchor Sean Hannity actively campaigned for President Trump while condemning all journalists as "fake news", the Chinese have unveiled an AI-driven humanlike news reader.
That would be the same week that the White House has unashamedly attempted to support its banning of another TV news journalist with a doctored video of an incident that blatantly contradicts the truth.
You couldn’t make this up.
Except that of course you could. That’s the point.
The Manchurian news candidate has been manufactured by China’s state press agency Xinhua with search engine developer Sogou.
To be fair, they weren’t hiding its origins – although with such an obviously robotic delivery, reminiscent of straight-to-video animation, they can’t yet pull the wool over anyone’s ears or eyes.
“Hello, I am an English Artificial Intelligence Anchor,” the digital presenter says at the beginning of its first English-language broadcast. "I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted.”
He’s apparently modelled on Zhang Zhao, a human Xinhua presenter, although Chinese audiences are fed the face of another virtual anchor modelled on Zhao’s colleague Qiu Hao.
Neither are quoted anywhere giving their impression of what this means for their actual jobs.
That’s despite the cost savings, which are ostensibly what Xinhua is aiming for.
Xinhua says the virtual presenters can “work” round the clock on its website and social media channels, “reducing news production costs” learning from live broadcast videos and reading “texts as naturally as a professional news anchor.”
Not yet perhaps, but eventually.
AI and ML are being introduced into newsrooms everywhere, not just China. Bots can scour social media, for example, and clip up stories far quicker than any human, but sections of Chinese media seem to have embraced the technology more than most. Perhaps that’s a function of the vastly larger audience it needs to reach.
Xinhua is rebuilding its entire approach to news gathering and dissemination around a “Media Brain” which integrates cloud computing, the Internet of Things and AI into news production, with potential applications “from finding leads, to news gathering, editing, distribution and finally feedback analysis”, it stated in a release [http://www.xinhuanet.com/english/2018-01/09/c_129786724.htm] which may or may not have been written by a human.
Dreamwriter, an automated newswriting programme developed by Chinese web giant Tencent, uses speech-to-text software to turn conference speeches into stories. It apparently churns out 2,500 stories daily.
The Press Association in the UK has been doing similar. It has worked with Urbs Media to deliver hundreds of semi-automated stories for local newspaper clients.
Of course, it couldn’t really happen here. The BBC is the most trusted news brand among American viewers, according to research by Brand Keys published in August.
The same research disturbingly found that Fox News was the most trusted US news channel by a country mile with Hannity’s programme the most watched news show.
Ofcom says that nine out of ten British people said it was important that they can trust news from UK public service broadcasters.
“Amid the volatile seas of politics and technology, our public service broadcasters remain a trusted port of call for people seeking fairness, accuracy, insight and impartiality,” said Ofcom boss Sharon White in March this year.
That’s all well and good but conspiracy theorists will already be saying that ‘charismatic’ BBC stalwart Huw Edwards is in fact an AI that has been perpetuating the BBC’s liberal agenda for a decade.


The 8K future arrives in December

content marketing for Rohde & Schwarz
At the beginning of December 2018, the world’s first regular broadcasts in 8K begin, the climax of a 23-year development programme for Japanese broadcaster NHK. The question is whether it is truly the start of something that will eventually sweep the industry or will remain niche. Let’s not forget that it is NHK boffins who led development and implementation of High Definition.
While 8K began life as an exotic science project and was given renewed impetus by the Japanese government following Tokyo’s award of the 2020 Olympics, the Ultra HD standard is being cherry picked for application by creatives and broadcasters and promoted by vendors worldwide.
The exceptionally high-resolution raw material is being used with some regularity by cinematographers making top-end Netflix and Amazon drama, with Lost in Space (shot 7K on a Red Helium 8K-chipped camera) one example. Even downscaled for an HD or 4K delivery, the additional super-sampled data provides headroom in post to zoom into the shot or add VFX.
Las Vegas’ Consumer Electronics Show in January promises yet more 8K consumer displays, including a remarkable wallpaper-thin and rollable 8K OLED from LG, despite the fact that outside of Japan there is literally no content to watch at the full-fat 8K 120p HDR that NHK plans to air.
Meanwhile Turkey’s satellite operator Türksat recently test broadcast 8K pictures of Istanbul’s “historical and natural beauties” to showcase the prowess of the country’s broadcast and tech business.
Astronauts and cosmonauts are even wielding an 8K camera onboard the International Space Station with footage downscaled for NASA’s video channel.
Kit manufacturers have developed an ecosystem of products that can support Super Hi-Vision in time for the Tokyo 2020 Olympics. This includes a series of 8K capable systems cameras from Sony and a camcorder from Sharp.
IntoPix, the Belgian compression experts, have worked with NHK to devise a version of the TICO codec which makes it possible to transport 8K 4:2:2 60p 10-bit on a single 12G-SDI cable.
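A back-of-envelope sum shows why a codec is needed at all; the 12G-SDI payload figure used below is an approximation, and real links also carry audio and ancillary data.

```python
# Back-of-envelope: why roughly 4:1 compression squeezes 8K 4:2:2 60p 10-bit
# onto a single 12G-SDI link (payload taken as approximately 11.9 Gbit/s).
width, height, fps = 7680, 4320, 60
bits_per_pixel = 10 + 10   # 4:2:2: 10-bit luma plus 10 bits of chroma per pixel on average

raw_gbps = width * height * bits_per_pixel * fps / 1e9
sdi_payload_gbps = 11.9

print(f"uncompressed: {raw_gbps:.1f} Gbit/s")                        # ~39.8 Gbit/s
print(f"needs about {raw_gbps / sdi_payload_gbps:.1f}:1 compression")  # ~3.3:1
```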
Even compressing 8K video by a factor of 4:1 is going to require some serious storage management. 4K UHD has taken a while to take off, in part because of the premium cost in both time and equipment of handling the larger data volumes.
NHK has amassed the world’s largest library of 8K content, including matches from the 2018 FIFA World Cup in Russia and landscapes of Yellowstone national park, but knows full well its limited programming cycle from December isn’t about to kick-start an 8K viewing revolution. As elsewhere, most Japanese viewers are still watching in HD.
The content budgets of Netflix, Amazon or Apple may be in the billions but even they won’t be subsidising internet bandwidth capacity to get 8K content into homes.
So, 8K is a slow burn, but it has applications today. Come the hi-tech shop window of Tokyo 2020, with the insatiable business imperative to get us to upgrade and, crucially, the arrival of the 5G communications network, 8K will become a fixture sooner than you think.
After all, the Tokyo Games is less than 1000 days away.