Friday, 28 February 2020

BT Sport Brings 8K Broadcast Closer to Home

StreamingMedia

BT Sport's dramatic raising of the bar by making the UK's first public live 8K sports broadcast is more than just a proof of concept, according to sources close to the broadcaster. It could herald the launch of a commercial 8K service sooner than anyone thought possible only a few months ago.
Last night, BT Sport teamed with Samsung to screen the UEFA Europa League match between Arsenal and Olympiacos live from the Emirates Stadium in North London.
The technical setup was similar to the closed demo which BT Sport showcased during the IBC show last September. On this occasion, an 8K picture with HDR10+ was screened to a small audience inside the stadium watching on Samsung QLED 8K TVs.
BT Sport has long since taken over the mantle of the UK's—if not Europe's and arguably the world's—most technically progressive broadcaster. Last year it launched BT Sport Ultimate, the world's first service to feature regular programming in High Dynamic Range (HDR), as well as 4K UHD and Dolby Atmos.
This trial was another example of its boundary-pushing prowess, but Paolo Pescatore, an analyst who runs PP Foresight and who has close professional links with BT Sport, hints that there is more to this than meets the eye.
"This is more than a proof of concept," he tells Streaming Media. "8K is happening a lot faster than 4K ever did. It's similar to the transition to mobile 5G networks. This will proliferate when forthcoming big global sporting events are produced in 8K, including the Olympics."
BT Sport, he says, is proud of its heritage of taking fans closer to the action and does not want to lose this technical leadership.
This latest broadcast over IP enhances the potential of BT Sport Ultimate by providing viewers with the best viewing experience possible on the platforms and devices they watch on, BT Sport claimed.
"In this rapidly converged world, it is becoming increasingly hard to differentiate beyond price alone," Pescatore says. "BT is uniquely placed with its network assets to provide sports fans with the best experience on any device. People can choose whatever device, knowing they'll get the best experience. This is the future of watching programming, all about personalisation.
"Fans will need to upgrade their existing packages (of TV, broadband line etc) to superfast fibre. Therefore, expect to see some interesting bundles in the future … delivered to any 8K TV." 
While most content of this resolution can be expected to reach viewers via an upscaling engine, this was not the case here. This was a pure native 8K production. 
"It was impressive as the image on screen looked to the eye as good if not better than watching the match in the stadium," reports Pescatore.
There's also an added benefit of one master ultra-high resolution plus HDR workflow, which will improve images for 4K, HD, and HDR-compatible services and devices downstream.
BT Sport is already trialling 8K 360° video, allowing users to pinch and zoom into the picture on their mobile device.
Any 8K channel, even a pop-up one around a major live event, would require sufficient 8K device penetration in the market. Even by 2023, 8K TV shipments are predicted to reach just 5.6 million globally, of which North America will have 1.4 million and China 2.1 million, according to Futuresource Consulting.
The highest margin product which BT has within its portfolio is broadband—an asset tricky to upsell, since many consumers treat it as a utility.
"Telcos are finding fibre is a difficult upsell: Once consumers have 50-70 Mbps to the home over copper then there is presently only a limited need to upgrade," says Futuresource market analyst Simon Forrest.
"However, if you are to introduce 8K—which requires full end to end fibre or fixed wireless 5G—then this becomes a reason to sell the higher cost bundle. So telcos will be looking towards new video and smart home technology that supports the infrastructure investment."
Other technical components in the 8K trial include Fujinon 8K lenses mounted on Sony 8K HDR cameras, a Socionext quad HDMI 2.0 to HDMI 2.1a converter, Blackmagic Design's quad 12G-SDI-to-HDMI converter, and Beamr's 8K HEVC encoder. Telegenic and Timeline TV were facilities partners.

Thursday, 27 February 2020

Sony’s latest Xperia is also a broadcast terminal

RedShark News
A proposed new mobile handset from Sony will double as a wireless video transmitter for beaming live over 5G. In fact, so intentional is the design function of the device that it’s worth rephrasing its description as a professional broadcast links module that doubles as a smartphone.
The Xperia PRO is still in development but it’s pretty clear that Sony plans to launch it soon. What we know about it – from Sony’s website and a ‘virtual’ press conference held in lieu of the cancelled Mobile World Congress - is that it has all the trimmings of a flagship Android device with specs including a Qualcomm Snapdragon 865 chipset, 8GB of RAM, 512GB of storage and a 6.5-inch 21:9 4K HDR OLED display.
What sets it apart will be the ability to livestream images over high-speed mmWave 5G connections from professional cameras via a micro HDMI port.
Sony teased this development at the back of last year by capturing video of a Houston Texans vs New England Patriots football game from Sony’s shoulder camcorder PXW-Z450 streamed through Sony’s prototype transmitter box and Xperia 5G mmWave device, via Verizon’s 5G network to a production room in the stadium. Sony and Verizon then repeated the trick at CES 2020 in January, emphasising how 5G can allow for more creative and untethered camerawork while also reducing set-up time and costs.
“It’s a game-changer for working on location,” says Sony, reinforcing its targeting of video and broadcast professionals - an important but nonetheless niche market for a smartphone.
5G mmWave is “a new era for business broadcasting,” Sony goes on to shout. “Capable of exceptionally fast uploads and downloads, 5G mmWave also offers remarkably low latency—essential if you’re broadcasting live content such as sports or news.”
It is aided in this by a 360° antenna design, which covers the four sides of the device with 16 antenna pieces, plus beamforming technology to direct 5G mmWave signals most effectively. In addition, what is described as "a low dielectric constant material" (graphite sheet, vapour chamber, and air gap) enables radio waves to pass through more easily.
Together, Sony says, these technologies help ensure the best 5G mmWave connection, no matter how you’re holding your device or where it’s positioned.
Its OLED display, widely considered the Xperia's best feature on previous models, would in this version work as a high-resolution, colour-accurate external camera monitor.
It's specified with a cinematic 21:9 aspect ratio, measures 6.5 inches and is capable of 4K HDR. New motion blur reduction technology reduces the in-between frame lag for clearer image quality. It's coupled with Dolby Atmos sound. There's a USB-C port available, too, for power, and it's protected with Gorilla Glass.
In addition, Xperia PRO has a unique monitor function that displays the connection direction of 5G mmWave and data transmission/reception speed on the screen.
“Xperia PRO supports professional broadcast video transmission workflows by visualizing and confirming communication status,” Sony confirms. “The HDMI2 connection allows the device to be connected to virtually any camera with an HDMI output. While using the camera as a monitor for interchangeable lens DSLR cameras or professional camcorders, it is possible to transmit broadcast video data during shooting to a server or cloud via 5G connection.”
Should dedicated broadcast links vendors like LiveU or Dejero be concerned? The Xperia PRO isn't yet ready to go on sale and won't have anywhere near the smarts of products like Dejero's EnGo or LiveU's backpacks, which bond 3G, 4G and 5G networks as well as satellite for best-effort connectivity. But as 5G inexorably widens its spread, the Xperia device could find favour as an additional contribution link for news and sports coverage, or by anyone wanting to live stream. While pricing details have not been given, it can be expected to sit at the top end for a smartphone but a lot cheaper than a conventional backpack or camera-back link – plus it's got that monitoring component too.
You’ll find many of the same features, minus the HDMI and 5G mmWave, on the Xperia 1 Mark II landing in Europe later this Spring. There’s no price confirmation but as a guide the Xperia 1 cost just under $1000.
Like the Xperia PRO, the Xperia 1 Mark II includes features from the company's latest Alpha 9 series cameras, such as eye-tracking autofocus. Real-time Eye AF, which locks focus on the subject's eye for portrait shots, is now available for animals as well as humans. Sony says this is the first smartphone that can do 20 fps bursts with AF tracking – especially useful for shooting moving subjects such as animals and sports.
Equipped with three lenses (main, ultrawide and a telephoto ZEISS lens), all 12MP, there's also an 8-megapixel selfie camera. It houses a 5G modem via the Snapdragon 865 but only transmits over sub-6 GHz networks, which aren't quite as fast as 5G mmWave. Headphones can be connected to the 3.5mm jack, or wireless headphones via Bluetooth.
Sony also announced a less fully featured counterpart. The Xperia 10 II's OLED display is only 1080p and the phone isn't 5G ready, since it's powered by the Snapdragon 665 with 4GB of RAM and 128GB of storage. It does still have a triple camera setup like the Xperia 1 II, but the resolution of the ultra-wide and telephoto cameras is only 8MP.

Wednesday, 26 February 2020

New Intel chip could accelerate the advent of quantum computing

RedShark News
The marathon to achieve the promise of quantum computers has edged a few steps forward as Intel unveils a new chip capable, it believes, of accelerating the process.
Called ‘Horse Ridge’ and named after one of the coldest places in Oregon, the system-on-chip can control a total of 128 qubits (quantum bits) – which is more than double the number of qubits Intel heralded in its Tangle Lake test chip in early 2018. 
While companies like IBM and Microsoft have been leapfrogging each other with systems capable of handling ever greater numbers of qubits, the breakthrough in this case appears to be one of efficiency: allowing a single chip to handle more control tasks should lead to more efficient quantum computers. It is therefore a step toward moving quantum computing out of the lab and into real commercial viability.
Applying quantum computing to practical problems hinges on the ability to scale, and control, thousands of qubits at the same time with high levels of fidelity. Intel suggests Horse Ridge greatly simplifies current complex electronics required to operate a quantum system. 
To recap why this is important, let's take it as read that quantum computing has the potential to tackle problems conventional computers can't by leveraging a phenomenon of quantum physics: that qubits can exist in multiple states simultaneously. As a result, they are able to conduct a large number of calculations at the same time.
This can dramatically speed up complex problem-solving – from years to a matter of minutes. But in order for these qubits to do their jobs, hundreds of connective wires have to be strung into and out of the cryogenic refrigerator where quantum computing occurs (at temperatures colder than deep space).

Harnessing the power

The extensive control cabling for each qubit drastically hinders the ability to control the hundreds or thousands of qubits that will be required to demonstrate quantum practicality in the lab – not to mention the millions of qubits that will be required for a commercially viable quantum solution in the real world.
Researchers outlined the capability of Horse Ridge in a paper presented at the 2020 International Solid-State Circuits Conference in San Francisco and co-written by collaborators at Dutch institute QuTech. 
The integrated SoC design is described as being implemented using Intel’s 22nm FFL (FinFET Low Power) CMOS technology and integrates four radio frequency channels into a single device. Each channel is able to control up to 32 qubits leveraging ‘frequency multiplexing’ – a technique that divides the total bandwidth available into a series of non-overlapping frequency bands, each of which is used to carry a separate signal.
With these four channels, Horse Ridge can potentially control up to 128 qubits with a single device, substantially reducing the number of cables and the rack instrumentation previously required.
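As a rough illustration of how frequency multiplexing yields that 4 x 32 = 128 figure, the Python sketch below carves a hypothetical block of control bandwidth into non-overlapping per-qubit slots. The bandwidth figure is invented for illustration; Intel has not published a Horse Ridge frequency plan here.

```python
# Illustrative sketch only: divide a hypothetical block of control bandwidth
# into non-overlapping sub-bands, one per qubit, across four RF channels.
# TOTAL_BANDWIDTH_MHZ is an assumed figure, not an Intel specification.
TOTAL_BANDWIDTH_MHZ = 1000
CHANNELS = 4                 # RF channels on the Horse Ridge SoC
QUBITS_PER_CHANNEL = 32      # qubits frequency-multiplexed per channel

per_channel = TOTAL_BANDWIDTH_MHZ / CHANNELS   # bandwidth per channel
per_qubit = per_channel / QUBITS_PER_CHANNEL   # slot per qubit

for ch in range(CHANNELS):
    start = ch * per_channel
    slots = [(start + q * per_qubit, start + (q + 1) * per_qubit)
             for q in range(QUBITS_PER_CHANNEL)]
    print(f"channel {ch}: {len(slots)} non-overlapping slots spanning "
          f"{slots[0][0]:.1f}-{slots[-1][1]:.1f} MHz")

print("qubits addressable by one device:", CHANNELS * QUBITS_PER_CHANNEL)  # 128
```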

Giving with one hand, taking with the other

The paper goes on to argue that increases in qubit count trigger other issues that challenge the capacity and operation of the quantum system. One such potential impact is a decline in qubit fidelity and performance. In developing Horse Ridge, Intel optimised the multiplexing technology that enables the system to scale and reduce errors from ‘crosstalk’ among qubits.
“While developing control systems isn’t, evidently, as hype-worthy as the increase in qubit count has been, it is a necessity,” says Jim Clarke, director of quantum hardware, Intel Labs. “Horse Ridge could take quantum practicality to the finish line much faster than is currently possible. By systematically working to scale to thousands of qubits required for quantum practicality, we’re continuing to make steady progress toward making commercially viable quantum computing a reality in our future.”
Intel’s own research suggests it will most likely take at least thousands of qubits working reliably together before the first practical problems can be solved via quantum computing. Other estimates suggest it will require at least one million qubits.
Intel is exploring silicon spin qubits, which have the potential to operate at temperatures as ‘high’ as 1 kelvin. This research paves the way for integrating silicon spin qubit devices and the cryogenic controls of Horse Ridge to create a solution that delivers the qubits and controls in one package.
Quantum computer applications are thought to include drug development (high on the world's list of priorities just now), logistics optimisation (that is, finding the most efficient way from any number of possible travel routes) and natural disaster prediction.

Tuesday, 25 February 2020

Xiaomi beats its competitors to the 108MP punch

RedShark News
Chinese electronics brand Xiaomi is on the verge of launching the first smartphone with a 108 Megapixel camera beating Samsung’s latest flagship Galaxy S20 series by a couple of weeks.
Samsung won’t lose too much sleep, though, since both models carry the same 1/1.33” large image sensor which was co-developed by both companies, and only Samsung’s has the power of Qualcomm’s 5G and 8K ready Snapdragon 865 chip.
The ISOCELL Bright HMX is the first mobile image sensor in the industry to go beyond 100 million pixels. Put another way, you can now get single picture resolutions of 12032 x 9024 on a phone that were previously available only in a few high-end DSLR cameras.
It has 0.8 micron pixels or photo-sites — the same size as current 48MP and 64MP cameras. The sensor also uses ‘pixel-binning’ to deliver shots that are comparable to a 27MP 1.6 micron pixel camera.
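To make the binning arithmetic concrete, here is a minimal numpy sketch (illustrative only; real sensors bin in the analogue domain and the colour mosaic is ignored) showing how a 12032 x 9024 capture with 0.8 micron photo-sites reduces to a roughly 27MP output with an effective 1.6 micron pitch:

```python
import numpy as np

# Minimal 2x2 pixel-binning sketch, assuming a 12032 x 9024 (~108MP) frame
# captured with 0.8 micron photo-sites, as described above.
SENSOR_W, SENSOR_H, PIXEL_PITCH_UM = 12032, 9024, 0.8

frame = np.random.randint(0, 1024, size=(SENSOR_H, SENSOR_W), dtype=np.uint16)

# Average each 2x2 block into one output pixel; real sensors combine charge
# on-chip, but the resolution trade-off is the same.
binned = frame.reshape(SENSOR_H // 2, 2, SENSOR_W // 2, 2).mean(axis=(1, 3))

print(binned.shape)                                    # (4512, 6016)
print(binned.shape[0] * binned.shape[1] / 1e6)         # ~27.1 megapixels
print("effective pixel pitch:", PIXEL_PITCH_UM * 2, "microns")  # 1.6
```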
Of course, just because you have mega megapixels doesn't automatically equate to better pictures. Optics play a major role in the quality of photos. A fantastic sensor coupled with poor optics can be worse than a mediocre sensor with fantastic optics. Sharpness, vignetting, chromatic aberration, distortion (barrel, pin-cushion), lens flare and so on all affect the quality of a photo.
Xiaomi has put the sensor into Mi Note 10, the latest of its premium flagship line-up. With the 108MP wide-angle lens photographers can now produce ‘billboard-level’ prints up to 4.24 meters high.
The Mi Note 10 adds four other lenses – 5MP and 12MP telephotos, a 20MP ultra wide-angle lens, and a macro lens. This setup enables a zoom from 0.6x to 50x.
Aside from the penta rear camera setup, the phone has a 32MP front camera, which offers AI modes for 'beautify', 'portrait selfies' and 'scene detection', as well as AI face unlock, panorama selfie and palm shutter features.
The Mi Note 10’s rear camera supports a Night Mode which allows for greater light capture and combines multiple shots of the same scene for increased versatility.
There’s a 960 frames-per-second macro slow-motion video function and a 4K video option with an ultra-wide angle.
The Mi Note 10 has a 6.47” 3D edge-to-edge AMOLED display and a 400,000:1 contrast ratio for deeper blacks and higher colour fidelity. Battery life is claimed to last for more than two days. The device houses the Qualcomm Snapdragon 730G processor.
The Xiaomi Mi Note 10 comes in two models: a 128GB version costs £459 from 25 February; a 256GB model costs £100 more.
If you wait until 13 March but are prepared to shell out twice that amount, you can pick up one of the Galaxy S20 series.
All three S20 models offer three main cameras—ultra-wide, wide, and telephoto—with the S20+ and S20 Ultra adding in a special depth camera but it’s the S20 Ultra that has the 108MP wide angle.
Using a combination of optical zoom and AI-enhanced ‘Super Resolution Zoom,’ the Galaxy S20 and S20+ can use their 64MP telephoto cameras to produce up to 30x zoom. The S20 Ultra ups the stakes using folded optics and the ISOCELL Bright HMX to deliver 100x maximum zoom.
All of these models can shoot 8K video thanks to the Snapdragon 865, all support 5G and all feature 3200 x 1440 AMOLED displays that are HDR10+ certified.

Friday, 21 February 2020

New LED technology is completely transforming displays

RedShark News 
At the corporate communications trade event Integrated Systems Europe (ISE) last week, one of the more obvious trends was the rise and rise of LED screens. Some of the exhibition halls felt like you had wandered into Times Square, so overwhelming were the displays awash with hyper-vibrant imagery.
Only five years ago, LED technology was best known for large format outdoor installations but in recent years, the miniaturisation of LED components has made it possible to obtain increasingly fine pitch at affordable prices.
This makes it possible to offer LED walls with 8K resolutions and therefore to realise high-quality indoor installations even at short distance.
"With flexible and freeform LED, but also with projection mapping, the creative possibilities are virtually limitless," says Michel Buchner from creative technology provider, Nexxt Technology. "The only problem is that the majority of designers and architects are not aware of this yet. Once they think beyond the frame and more about animated wallpaper, patterns, and textures blended as elements in their designs we expect a large rise in the use of aesthetic media with projection mapping and flexible LED."
The first examples of flexible printed circuit boards (PCBs) shown in a ribbon configuration were presented at ISE some years ago. Arguably, the real shift was the custom fabrication of PCB shapes which permit the design of any cut shape. From circles to complete logos and even domes - all made to spec.
LED 'sheets' exhibited at ISE by Spanish company Flying Screens can be combined to create custom-size screens for installation in unconventional locations, such as on curved surfaces. The company claims the technology allows designs and features which are currently impossible to achieve with traditional LED panels.
One of the key technologies is MicroLED which offers far higher total brightness than OLED as well as far greater power efficiency.
Amazingly, it is the same technology behind both Samsung's massive direct-view cinema screens, such as one in a Malaysian multiplex measuring 45.93 feet wide and 16.4 feet high, and microscopic display prototypes by Mojo Vision, based in Saratoga, California, which has made a display with 14,000 pixels just 0.48mm wide.
MicroLEDs are based on gallium nitride (GaN) LED technology which is being backed as the future of AR/VR display but they are difficult to manufacture at scale. A standard 55-inch panel, for example, requires millions of MicroLEDs. Any imperfection could create uneven colours and lighting.
A potential solution to mass fabrication may have been cracked by Compound Photonics, a US-based firm which makes compact high-resolution microdisplays. Working with Plessey Semiconductors, partly at a factory in Plymouth, it says it has produced the first fully addressable microLED display modules.
The proof of concept aims to produce the world’s highest performance microLED display modules that deliver “improved brightness at the smallest pixel sizes, higher frame rates, with extended bit depth at the lowest power consumption” to be fitted into AR/MR smart glasses and Head mounted VR applications. Initial samples of a 0.26 inch diagonal, Full HD 1080p resolution microLED display module are expected to be available this summer.
The minute we talk about screens not being standard shaped or oriented, content creation becomes a challenge, even though the procedure is largely the same as before. You can spend a million quid on the most amazing LED screen, but it's worthless if you don't invest in the content.
Questions like ‘Is it a commercial or primarily artistic design?’ and ‘What should your spectator experience?’ are paramount. Do you want active attention or does it complement the space in an artistic way like a wallpaper or light sculpture?
"Every space or building has its own approach with visual media, but with numerous solutions and financial ramifications," Buchner says. "It is absolutely important to drop the convention of the regular screen as most people look at it today."

Friday, 14 February 2020

Ericsson claims 5G streaming record from Huawei

Streaming Media 


Ericsson's decision not to attend Mobile World Congress was the first domino in the toppling of the trade show (which was due to start on 24 February), but the company is also among the first out of the bag with MWC news.
Given the sidelining of Huawei in the United States as well as in Australia and European countries like France for 5G infrastructure contracts it was always going to be the case that Ericsson and fellow telco equipment giant Nokia would make headway.
Ericsson’s headline grabbing announcement at a hastily reconvened ‘virtual’ press event was that it had trumped Huawei's 5G speed record of 2.92Gbps by clocking up 4.3Gbps over 800MHz of millimetre wave spectrum.
In context, 4.3Gbps is the equivalent of downloading one hour of UHD 4K content from a streaming service in just 14 seconds.
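A quick back-of-envelope check of that claim; the assumed per-hour file size below is based on a typical UHD streaming bitrate and is an assumption, not a figure from Ericsson:

```python
# Back-of-envelope check of the "one hour of UHD 4K in ~14 seconds" claim.
# The streaming bitrate is an assumed, typical figure, not Ericsson's.
link_gbps = 4.3
assumed_4k_stream_mbps = 16                               # assumption
hour_of_content_gb = assumed_4k_stream_mbps * 3600 / 8 / 1000   # ~7.2 GB

download_seconds = hour_of_content_gb * 8 / link_gbps
print(f"{hour_of_content_gb:.1f} GB downloads in {download_seconds:.1f} s")  # ~13.4 s
```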
The telco equipment and networks vendor used a technical specification made up of 8 component carriers (8CC) aggregating 800 MHz of mmWave spectrum to set the new record. The test was made using a 5G smartphone “form factor test device” powered by Qualcomm’s Snapdragon X55 5G Modem-RF system.
Head of product area networks at Ericsson, Per Narvinger said in a release, “The 8CC aggregation solution we have successfully tested will enable not only higher speeds but also large-scale 5G deployments and new business opportunities.”
The commercial solution, including network and terminal support, will be available to 5G consumers during 2020, it confirmed.
The Swedish vendor views the 4.3Gbps speed as further proof of 5G's ability to replace fibre, as mmWave has advanced from 1Gbps to 2Gbps to 4Gbps peaks, quadrupling the top broadband speeds offered by most cable providers. Beyond video streaming, the company also expects mixed reality and multi-player online gaming to benefit from the speed advances.
It is believed, not least by politicians on both sides of the Atlantic, that Ericsson and Nokia are at least 18 months behind Huawei’s state-subsidised lead on 5G tech.
Ericsson denied that this was the case with CEO Börje Ekholm previously pushing back against the idea that US pressure on Huawei is giving the equipment vendor a “free ride.”
Instead, it is “creating uncertainty in the market, reducing investments overall,” Ekholm said during an interview with CNBC [https://www.cnbc.com/2020/01/21/davos-ericsson-boss-says-no-one-ahead-of-us-on-5g-not-even-huawei.html] during which he said that Ericsson is seeing “very little effect” on its order books as a result of Huawei discussions.  
US Attorney General Bill Barr has also argued that the US should take controlling stakes in Nokia or Ericsson, or both, to battle Huawei's dominance.
“The main concern about these suppliers is that they have neither Huawei's scale nor the backing of a powerful country with a large embedded market like China,” Barr told the Center for Strategic and International Studies in Washington. “Putting our large market and financial muscle behind one or both of these firms would make it a far more formidable competitor and eliminate concerns over its staying power.”

Cisco, the most obvious US-based suitor, has however just ruled out this idea.
CEO Chuck Robbins reiterated in a Q2 financial report on Wednesday that technologies such as 5G, 400-gigabit Ethernet, WiFi 6 and the cloud are big opportunities for the networking giant, but that customers are still cautious about some of the tech transitions. Cisco is continuing to "pause" spending in light of a fall in company revenue of 4% from a year ago and lacklustre growth forecasts.
Ericsson said it expects 100 million 5G subscriptions worldwide by the end of the year with most demand coming from China. Ericsson itself has 81 5G commercial agreements with 49 customers and 25 live 5G networks worldwide.
Data from analyst firm Omdia (the new brand for Informa/IHS Markit) suggest 5G-enabled smartphone shipments will grow eight-fold by 2021.
While 5G is set to become a common feature on premium phones released this year (except foldables), Samsung’s latest flagship Galaxy S20 brings 5G into the mainstream.
Omdia says a key emphasis for Samsung is gaming performance. The new devices are using a display with an extremely high refresh rate of 120Hz with a 240Hz input sensor which, when coupled with 5G, “would give gamers the quickest reactions of any mobile gaming solution,” says Daniel Gleeson, Principal Analyst, Consumer Technology at Omdia.
Samsung is so confident in the gaming capabilities of the S20 that it has announced a partnership around the game Forza Street, with cross-play with PC gamers enabled.
“5G will drive the rapid growth of game streaming and esports, with the bundled-service revenue market of such partnerships expected to grow to $2.6 billion by 2024,” according to Omdia’s latest forecast.
The Galaxy S20 also puts the focus squarely on camera technology. The S20 Ultra model features a 108MP camera with a 100x zoom and 8K video recording. 8K footage recorded on the device soaks up 600MB per minute, according to Samsung, meaning just five minutes of 8K footage would take up around 3GB of space. That's one reason why a 5G upload connection to cloud storage is important.
“The headline feature should nonetheless grab attention and reassert Samsung’s technology leadership in the space,” says Gleeson.
It's not just Samsung, though. Chinese brands like Xiaomi and OnePlus are using 5G as a way to build partnerships with operators around Europe. Xiaomi already has a major presence in the big five European markets and Omdia expects that to grow. Apple's iPhone 12 will have 5G too, but release dates have not been made public.

Thursday, 13 February 2020

ISE2020: Communication technology rampant

IBC
With more than 80,000 attendees and 1,300 exhibitors (minus LG and a few others from Asia Pacific which elected to stay home due to virus fears), there's a lot to take in at the world's largest exhibition for AV and systems integration professionals.
Pro AV, in a nutshell, is any video or audio application that isn’t media and entertainment. Most of the tech is designed to communicate information.
It’s a diverse market ranging from battle-ready military simulators to the digital advertising screens on any high street as reflected in the $85 billion that Avixa, the trade association and co-organiser of ISE, thinks Europe’s pro-AV market will be worth by 2024.
The XR opportunity
VR, AR and MR, lumped together as XR (extended reality) and given their own 'summit' at ISE, represent a massive opportunity for the AV industry.
IDC’s Worldwide Augmented and Virtual Reality Spending Guide forecasts that global spend on AR/VR will top $18.8 billion this year – up 78.5% since 2019, with nearly two-thirds of the spend by businesses.
Another recent Research and Markets report pins the global XR industry at $209 billion and predicts it will grow 65% over the next five years.
“While XR is undeniably the future of business, many businesses aren’t quite sure where to start,” suggests Kathryn Bloxham, head of innovation at XR Intelligence.
Hilary McVicker of immersive projection specialist Elumenati said the potential of “spatial XR” was to “bridge the physical and digital worlds” to provide “a social, collaborative way to experience immersive content.”
An obvious market for XR is museums, theme parks and other entertainment attractions. A 'VR at ISE' feature exhibited an aquatic theme park ride. Designed by Lightspeed Design, this allows the rider to 'touch' virtual objects, including a jellyfish and dolphin, using mid-air gestures. The VR content is projected onto a curved Vioso Panadome screen by Digital Projection hardware so the rider doesn't need to wear a headset. Other spectators, also on motion seats, could watch with head-mounted devices.
XR is being increasingly used in corporate training, design, aviation, medicine and education.
Volkswagen, for example, is using AR for indoor navigation at some of its factories. AR also helps companies to train new employees.
“Virtual prototyping and simulation will be a necessity for a huge number of companies,” said Frank Reynolds, marketing manager at Antycip Simulation. “Virtual content can open doors for countless procedural training tools addressing a multitude of professional uses.”
The AR cloud – conceptualised as a machine-readable, 1:1 scale model of the real world – is expected to transform the way that businesses operate and communicate with customers, according to ABI Research. It thinks the AR Cloud will be worth over $100 billion by 2024.
Key technologies making this happen include computer vision, AI and Simultaneous Localisation and Mapping (SLAM) which will combine to precisely localise devices and deliver sharable, persistent AR content.
Haptics can’t be discounted either: For example, a helmet designed by researchers at Carnegie Mellon University gives firefighters the ability to ‘sense’ directions in pitch black, smoke-filled environments. The haptic helmet sends a buzz to the front, back or the sides of the head, precise signals which indicate whether the user should move forward, stop and turn left or right.
Esports get an education
Colleges in North America have been offering scholarships in esports for years and now UK colleges and universities are following suit. That's partly because of the growing market in esports-related jobs and partly because higher ed establishments believe an esports course can more than pay its own way in graduate interest. There's evidence too that gaming helps STEM studies, particularly in attracting women to areas where they have been traditionally underrepresented.
Staffordshire University introduced the UK’s first esports degree in 2018. The University of York has a partnership with Esports group ESL. Other institutions following suit include Chichester, Roehampton, Sheffield Hallam and Teesside.
These degrees focus on the business and event management aspects of esports, building skills in budgeting, marketing, and casting – not to mention the production skills of a live broadcast. Staffordshire Uni, for example, has built a dedicated esports lab and ‘pro-gamer training facility’.
“There is a lot of crossover between pro video and AV in esports,” says Kieron Seth, marketing director at kit distributor Holdan. “For us, it’s about providing a broadcast-level live event experience designed both for spectators in the arena and for viewers over the internet.”
The facility’s IT network is another critical component to building an esports program, interconnecting powerful gaming stations, minimising latency, and ensuring a smooth spectator experience.
The installed base of esports devices in North American and West European universities is forecast to reach 64,000 by 2021, according to Futuresource. A survey by Extreme Networks found that more than 70% of schools across K-12 and higher education in North America, Asia Pacific and Europe are considering an esports curriculum.
Smart buildings improve wellbeing
The smart buildings market is predicted to grow to $92.5bn globally by 2025, according to Rethink Technology Research, up from around $4.2bn in 2019.
“The design of buildings with renewable energy sources and with smart energy systems is an increasingly important trend,” said architect Aryanour Djalali, CEO of architectural practice DNA Barcelona, who gave a keynote at ISE’s Smart Building Conference. “Automation can be fully integrated into buildings so that all systems feed into the same cloud-based network.”
Erik Ubles, CTO at EDGE, said the four key elements of smart buildings are wellbeing, design, sustainability and technology. He argued that 85% of workers aren't engaged in the workplace because of the physical environment.
“While heat, lighting and air quality all play a part in producing happier (fitter?) and more productive occupants we want to design spaces that connect people to the building,” he said.
If you want to use smart building technologies to save costs or increase margins, the main use case you should be targeting is human productivity, advised Rethink.
On the ISE show floor were companies showing how an open-plan office could be integrated with 'smart' technology to enhance and improve the work experience. Products in this area include human-centric lighting systems which use tuneable LEDs to offset typically cold corporate illumination, immersive audio, voice and gesture control, and products using 5G and Wi-Fi 6. For example, architects could quickly beam a 3D rendering of a building to a conference meeting over 5G for interactive AR/VR design collaboration.
Screens: thinking beyond the frame
Digital signage is expected, by Avixa, to become the largest pro-AV solution area by 2024, eclipsing conferencing and collaboration.
Like ‘TV everywhere’ digital outdoor signage is all about context: who is watching and when, where and why they’re watching. As a result, content management software is evolving to digital experience platforms (given its own acronym DXP) to give greater control over tailoring content by time of day and device, or even individual passer-by.
Think Blade Runner 2049 and other future cityscapes where giant holograms react to your presence. Low-latency 5G connections are set to enable superfast, immediate facial recognition, customised for users via geo-fencing. It's a data-driven approach that will see tech giants Google and Microsoft playing an increasingly large role.
Another clue to how this will evolve was in a towering holographic display at the entrance to one of ISE’s halls made possible with an ‘invisible’ projection surface from Novaline and CarbonBlack Technologies, playback servers from disguise, Panasonic projectors and visuals created by Belgium’s NTTRB.
The next question is how do you make content stand out in a world of screens? Digital screens used to be flat and square but new flexible LEDs are freeing designers to get creative with format and to showcase more artistic content.
“With flexible and freeform LED, but also with projection mapping, the creative possibilities are virtually limitless,” says Michel Buchner from creative technology provider, Nexxt Technology. “The only problem is that the majority of designers and architects are not aware of this yet. Once they think beyond the frame and more about animated wallpaper, patterns, and textures blended as elements in their designs we expect a large rise in the use of aesthetic media with projection mapping and flexible LED.”
Combined with improvements to contrast ratio and more precisely controlled backlighting, LED's main advantage is its large footprint with no discernible gaps (bezels). Video walls composed of panels with virtually non-existent bezels (down to 0.5mm) spread across the show floor.
“The complexities of installation are becoming less of an issue too, as the rise of interchangeable tiles and out-of-the-box solutions pave the way for simplicity,” says Claire Kerrison, analyst at Futuresource.
LED 'sheets' exhibited at ISE by Spanish company Flying Screens can be combined to create custom-size screens for installation in unconventional locations, such as on curved surfaces. The company claims the technology allows designs and features which are currently impossible to achieve with traditional LED panels.
The Abu Dhabi tourist board recently won a Guinness World Record for the size of an AR-enhanced digital billboard on display at Piccadilly Circus. The 548 sqm display captured images of people near the statue of Eros and digitally placed them, using 3D AR overlays, among some of Abu Dhabi's tourism landmarks.
Off to Barcelona
The latest research from Avixa shows pro AV revenues growing at 5.7% a year, or twice expected global GDP growth, over the next five years. As a result, next February ISE relocates to the far larger Gran Via venue of the Fira de Barcelona. Not bad for a show that was attended by 3,500 people on its debut in Geneva only 16 years ago.

Sunday, 9 February 2020

AI gives iconic Lumière train movie a 4K make-over

RedShark News
It's one of the most iconic films in all cinema, a 50-second shot of a train arriving at a station, now resurrected in 4K using AI interpolation.
L'Arrivée d'un train en gare de La Ciotat, directed and produced by Auguste and Louis Lumière, made its debut in 1896. It was filmed using the Cinématographe, an all-in-one camera which also served as a printer and film projector, and was remade in the 1930s by Louis in stereo 3D – a moment to which Martin Scorsese paid homage in the opening sequence of his 3D feature Hugo.
In much the same way that Peter Jackson breathed new life into footage of the first world war with his documentary They Shall Not Grow Old, the grainy stuttering film footage has been restored, upscaled to 4K and uplifted to 60 frames a second.
YouTuber Denis Shiryaev had the idea of putting the footage through a pair of AI programs, DAIN and Gigapixel AI: the first performs video frame interpolation, synthesising non-existent frames in between the original frames, while the second upscales the image.
DAIN (Depth-Aware Video Frame Interpolation) imagines and inserts frames between the keyframes of an existing video clip, and added enough frames to increase the rate to 60fps. Gigapixel AI "analyzes the image and recognizes details and structures and 'completes' the image," according to developer Topaz Labs.
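DAIN itself is a depth-aware neural network and is not reproduced here, but for intuition the sketch below shows the crudest possible form of frame interpolation, a linear cross-fade between neighbouring frames, which is the baseline such tools improve on. All figures are illustrative.

```python
import numpy as np

def blend_interpolate(frame_a: np.ndarray, frame_b: np.ndarray, n_new: int):
    """Insert n_new synthetic frames between frame_a and frame_b by linear
    blending. Naive baseline only: DAIN instead estimates depth and motion,
    so objects move between frames rather than simply cross-fading."""
    out = []
    for i in range(1, n_new + 1):
        t = i / (n_new + 1)
        out.append(((1 - t) * frame_a + t * frame_b).astype(frame_a.dtype))
    return out

# Lifting hand-cranked ~16 fps footage towards 60 fps needs roughly three
# extra frames between each original pair (illustrative arithmetic only).
a = np.zeros((1080, 1920, 3), dtype=np.uint8)
b = np.full((1080, 1920, 3), 255, dtype=np.uint8)
mid_frames = blend_interpolate(a, b, n_new=3)
print(len(mid_frames), mid_frames[0].mean())
```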
Shiryaev has added in some sound for good measure. While the result is certainly cleaner and captures the unnerving feeling of watching something filmed a century ago as if it were yesterday, it’s not perfect either. There are jumps and odd ripples but if someone were to do a professional job on, say, Chaplin's first masterpiece, The Tramp from 1915, or to polish other historical film artefacts, then algorithmic assistance might be the most economical way to do it.
AI has previously been used to restore Orson Welles' unfinished feature The Other Side of the Wind. Originally made in the 1970s but unreleased, the film had its resolution improved for its release on Netflix in 2018.
"AI can create art," says Alex Zhukov, chief technology officer of Video Gorillas, whose AI tools were used on the project. "The real question is when AI will create art that is indistinguishable from that of a human."

Friday, 7 February 2020

VOD makes us anti-social – but don’t expect streamers to admit it

Videonet
It should come as no surprise that our seemingly insatiable appetite for streaming video is making us all more anti-social – even in our own homes. A fresh study suggests that the ability to view on connected devices is driving consumers to watch TV independently of other family members.
The increasing trend towards personalised viewing, something that consumers are said to desire, is morphing our behaviour so much that we are increasingly identifying as solo viewers.
Recent research by Ampere Analysis found that solo viewers no longer find watching TV with other members of their household particularly attractive – in fact they actively disagree that watching with others is important. It found that the numbers of solo viewers are greatest in those markets with highest OTT video usage, indicating that it is specifically the rise of VOD, and the huge variety in content choices that it enables, that is driving the phenomenon.
It’s only going to continue. As new streaming services launch, the content pot grows larger, while the ability to serve content tailored to individual preferences is improving all the time.
Ampere believes this won't make much difference to how content is marketed. "I don't think SVOD platforms will want to position themselves as advocating watching things on your own all the time, because watching TV is always seen as a communal activity that brings people together," says Minal Modha, Consumer Research Lead and report author. "Also, in an age when people are allegedly becoming more insular due to social media and smartphones, SVODs wouldn't want to contribute to anything that could be perceived as having a negative impact on mental health."
Despite that, the more people a streaming service can get to watch different assets in its library, even from the same account, the more information it can collect about individual profiles. For the growing number of AVODs launching into the market, that can only be good for slicing, dicing and serving up targeted ads.
“With personalisation, I think it’s hard for SVOD platforms to know whether people are solo viewing or viewing with other people through a single profile on the account,” suggests Modha. “Ampere envisages that, as long as those profiles are engaging consumers, SVODs won’t change their personalisation process.”
Ampere found a clear correlation across countries including the U.S., Sweden, Denmark and Australia between SVOD usage and the proportion of consumers identifying as solo viewers. The relationship is especially clear once demographic effects are taken into account, and strongest in adult-only homes. The pattern breaks down among households with children. For this group, family time is still important. Regardless of whether the households have older or younger children, adults in these homes are less likely than their peers to engage in solo viewing, despite their high SVOD usage.
Live broadcasts were not included in the research, but Ampere concludes that live will encourage different viewing patterns due to the need to watch the content… live. “Therefore [live] aligns itself more with communal viewing and we would not expect that to change any time soon.” Indeed, last Sunday’s Super Bowl scored its first ratings increase in five years, with just under 100 million viewers, on average, watching Fox’s broadcast.

There is evidence that scheduling the release of 'must-see' content, rather than dropping all episodes into an on-demand offer at once (as a box set) for binge-viewing, can still encourage communal experiences. The BBC scored its biggest new drama launch in more than five years with His Dark Materials, transmitted over eight successive Sundays before Christmas. For the first episode, 7.2 million tuned in, with the rest of the run averaging around 4 million and another million-plus watching on catch-up.

Microservices Poised To Take Self-Serve To Another Level

BroadcastBridge
Microservices enable broadcasters to find new ways to adopt, engineer, operate and maintain the value of their solutions. For vendors, microservices provide the opportunity to offer what is essentially a self-serve menu for clients rather than building bespoke workflows internally. The impact on the service that will be delivered by broadcasters five to 10 years from now could be dramatic.
Traditionally, broadcasters have had to wrestle with managing 'heavy' workflows and often bespoke code and integrations. These were usually 'single use' and not typically reusable elsewhere. Implementing microservices provides the opportunity to construct workflows from a variety of functional components and services and to reuse them across all the workflows within an operation.
Properly implemented, microservices deliver several key benefits: re-usability, speed of implementation, and increased flexibility, in that functional parts of a workflow can be swapped out almost on the fly without the need to completely re-engineer the changed workflow. Another benefit is the simplification of regression testing, applied to integrations but also within internal code, where there can be thousands of microservices within a product's function library.
One example is a user needing to deliver content to a non-linear distribution point. Craig Bury, CTO at consultancy and software services developer Three Media, explains that five years ago this process would likely have been one workflow with a 'loose' integration to a transcoder and perhaps a file transfer manager, maybe even watch folders.
“The user would have had little variance of options in that delivery,” he says. “A change to the deliverables to a platform, even a minor one, would typically mean the workflow would have to be completely re-started from scratch, manually, as there were no branched options to manage the changed area only.”
The microservice concept offers the opportunity to build an automated workflow with very granular steps and branching, with the re-use of similar services across each step and tight API integration to all platform components. Now, when a change to a deliverable is required, only that area needs to be re-run; there is no total re-work, and this can be controlled via API, with the request made directly by a user.
“For example, this would give a user the ability to easily and quickly shuffle the order of operation, change priorities, start and end dates, bumpers or packaging, delivery file formats, etc,” Bury says. “The fine-grained self-serve user control enables change quickly and cost effectively, which is where the benefits lie.”
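To illustrate what that fine-grained, API-driven self-serve control might look like in practice, here is a hedged sketch; the base URL, endpoint paths and field names are invented for illustration and do not describe Three Media's or any other vendor's actual API:

```python
import requests

# Hypothetical sketch: re-run only the changed step of a delivery workflow
# via a vendor's REST API. URL, endpoints and payload fields are invented.
BASE = "https://mam.example.com/api/v1"

def rerun_delivery_step(workflow_id: str, step: str, **overrides):
    """Re-run a single workflow step (e.g. packaging) with new parameters,
    leaving every other completed step of the workflow untouched."""
    resp = requests.post(
        f"{BASE}/workflows/{workflow_id}/steps/{step}/rerun",
        json={"overrides": overrides},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# A platform changes its delivery spec: swap the packaging settings only.
job = rerun_delivery_step(
    "wf-1234", "package",
    container="mp4", video_codec="hevc", bumper="sponsor_v2",
)
print(job["status"])
```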
Prior to the rise of OTT, the TV industry had largely focused on performance and trying to squeeze more TV channels onto a constrained network. This was traditionally a CAPEX play, with investment in various physical hardware appliances to meet a tightly defined set of infrastructure requirements.
The last five years have seen content providers and operators start to recognise that flexibility and scale are vital to operating in an agile media landscape. This has led to more use of commercial off-the-shelf (COTS) hardware along with a software-centric approach that enables a mix of datacentres, on-prem, private or public cloud with unified management and tools.
"As soon as the industry embraces the use of COTS hardware as a foundation infrastructure, it becomes possible to use an IT toolset that enables automation and replication," says Arnaud Caron, Director, Portfolio Transformation, MediaKind. "This process also applies to the software application stack when it is transformed to true microservices, as it is containerised and orchestrated."
A clear example is the expansion of a video headend to add more channels for satellite distribution. Caron says that in recent years, operators have undergone capacity planning: dedicated encoding hardware, network expansions/routing adjustment, manual reshuffling of statistical multiplexer, adaptation of the monitoring layer for expansion, careful configuration of the video service on hardware, to name a few.
“With cloud and orchestration, these operations can be automated and rationalised. The process of adding a node is a standard operation but the process of adding it to the video cluster and adjusting the video workflow is now increasingly automated. This relies on orchestration.”
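As a minimal sketch of that kind of orchestrated expansion, the snippet below uses the official Kubernetes Python client to scale out a containerised encoder deployment; the deployment and namespace names are placeholders, and this is not MediaKind's tooling:

```python
from kubernetes import client, config

# Minimal sketch: add encoding capacity to a containerised video headend by
# scaling a Deployment, instead of racking and cabling a new appliance.
# "video-encoder" and "headend" are placeholder names, not vendor-specific.
def add_encoder_capacity(extra_replicas: int = 1,
                         deployment: str = "video-encoder",
                         namespace: str = "headend") -> int:
    config.load_kube_config()               # or load_incluster_config()
    apps = client.AppsV1Api()
    scale = apps.read_namespaced_deployment_scale(deployment, namespace)
    target = scale.spec.replicas + extra_replicas
    apps.patch_namespaced_deployment_scale(
        deployment, namespace, body={"spec": {"replicas": target}}
    )
    return target

if __name__ == "__main__":
    print("encoder replicas now:", add_encoder_capacity(extra_replicas=2))
```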
The key benefit of microservices-designed architectures for broadcasters is flexibility. The industry is well aware of the fast-paced nature of broadcast content, especially in live environments, where content needs to be distributed to linear TV channels and IPTV streams quickly, efficiently and at a high enough quality to meet the quality of experience demanded by the average viewer.
“By using microservices to create modularity in application development, something that wasn’t available under the monolithic approach of the past, it allows broadcasters the opportunity to work in an environment with a software development community, pushing forward a more collaborative and flexible environment,” says Joop Janssen, CEO at Aperi.
“Broadcasters can now get to innovation software speeds that were previously defined by delays in hardware deployments. Microservices are the next logical step for broadcasters.”
There are a number of key challenges to overcome before we can realise the potential of microservices and the cloud in the broadcast sector. Although IP adoption has accelerated, industry-specific interfaces and protocols such as SDI are still the dominant transport technology for production and facilities, particularly for uncompressed video. Dependence on SDI impedes the ability to scale and grow operations efficiently, and leads to separate broadcast and IT infrastructures that further inhibit flexibility.
Another impact is at the human and organisational layers; “the need to build the right skillset within the organisation, potentially shifting from engineering towards managed services; removing siloed organisations to shared practices,” says Caron.
“New market entrants that have purely started as OTT, SVOD, VOD services are able to offer compelling services with a shorter time to market because they are built in the cloud,” he says. “They are based on both IT technologies and IT practices, which have been specifically adapted to media.”
Barriers to Adoption
Initial barriers to adoption of microservices tend to surface when a process improvement initiative identifies one or more functions within a monolithic system which requires a change or upgrading.
"The desired changes often require a top to bottom replacement or upgrade of the entire application stack rather than simply replacing or changing functional components of the application," Bury says. "In most instances a top to bottom replacement would involve significant time and cost, and require significant integration work to deliver new workflows and integrations and ensure continued functionality of critical external business systems such as scheduling, contract management, and finance and accounting. The cost and time to change would need to be assessed alongside the commercial pressures to deliver new workflows and functionality for clients. Unless significant investment is made, the client deliverables will more often than not take priority."
Perhaps the key development impacting the evolution of microservices is the rapid increase in the adoption of containerisation (such as Docker and Kubernetes) and the maturing of serverless functions. This transition will further drive down the costs of integrating and hosting microservices, either in the cloud or in on-premises implementations. It should also increase the breadth of services and types of functionality, increase flexibility, and decrease time to deploy, while driving down costs.
In Five Years’ Time
By 2025, thousands of microservices could exist within a product, Bury suggests, "all exposed externally as well as internally via a well-behaved and well-documented set of APIs.
"Clients and their users will be able to build their own workflows to drive the vendor system simply by calling the various system APIs, thus eliminating the burden of near-total vendor lock-in. This dramatically changes the vendor landscape and offerings, as there will be little or no ongoing bespoke work required. User interfaces will be built to provide views to support this flexibility, supported by extended data schemas to accommodate this approach. This takes self-serve to another level, with full control having moved to the client."
Microservices will likely give broadcasters access to ubiquitous infrastructure, full flexibility, service agility and endless scalability. The impact on the service that will be delivered five to 10 years from now could be dramatic.
"With cloud elasticity, you're not tied to just one linear broadcast channel, but you can create as many OTT variants as you wish to better match audience demographics," says Eric Gallier, Vice President, Video Customer Solutions, at Harmonic. "Obviously, it is easy to create event-based, pop-up channels that have already been used extensively during major sports events. The rights for premium sports content are becoming so expensive and complex that distribution of the program directly to end users or to super aggregator MVPD partners is now calling for sophisticated methods to describe per-event distribution rights and potentially enforce those rights at the edge by either blacking out the content or replacing it with alternate programming."
Gallier adds, “Advertising monetization is moving to targeted models, but somewhat slowly because of flexibility and scalability issues. But the issue of scalability is going to disappear over time thanks to cloud distribution. Only cloud deployment can enable the kind of elasticity that is required by targeted advertisement workflows during the ad break of a premium sports event. Advertisers want to see more efficient use of spending dollars, so the quest for highly accurate targeting is only going to accelerate. The operation of replacing black-out and targeted advertising content requires cloud elasticity and scalability.
"Video quality is another area where the cloud can be leveraged to generate (i.e., transcode) different variants of the same program to match the end-user device capability, which seems to be more and more fractured in terms of codec support (i.e., HEVC, AV1, VVC, EVC), resolution up to 4K and even 8K, and HDR with HLG, HDR-10 and Dolby Vision. Combine that with the different versions of streaming formats (i.e., HLS or DASH) and the different DRMs and you end up with an ever-increasing number of combinations that only a cloud solution can manage.
"Five years from now we'll see a significant increase in the number of people watching streaming broadcast content. It is very likely that the streaming service will be much more personalized than today. The main question facing broadcasters then will not be whether cloud is a good solution, but which cloud solution is the best to manage a service that requires delivering personalized content based on analytics and other data to mass audiences. Only an extremely agile solution will be up to the challenge."
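To give a feel for the combinatorial fan-out Gallier describes, the short sketch below enumerates renditions for one source programme across codec, resolution, dynamic range and packaging; the specific lists are illustrative only, not a recommended encoding ladder:

```python
from itertools import product

# Sketch of the combinatorial fan-out described above: one programme rendered
# per codec x resolution x dynamic-range x packaging combination. The lists
# are illustrative assumptions, not any broadcaster's actual ladder.
codecs = ["h264", "hevc", "av1"]
resolutions = ["1080p", "2160p"]
dynamic_range = ["SDR", "HLG", "HDR10"]
packaging = ["HLS", "DASH"]

variants = [
    {"codec": c, "resolution": r, "dr": d, "packaging": p}
    for c, r, d, p in product(codecs, resolutions, dynamic_range, packaging)
]

print(len(variants), "renditions from one source programme")  # 36
for v in variants[:3]:
    print(v)
```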