Monday 30 September 2019

Facebook now wants your thoughts, literally

RedShark News
When Facebook made a two billion dollar play for Oculus it triggered the first rush to mainstream VR. Now it’s spent half that amount buying a company that makes mind-reading technology. The brain-computer interface race is on.
New York start-up CTRL-Labs is being incorporated into the social media giant’s augmented and virtual reality division, with the idea of bringing its mind-reading technology into consumer products.
Similar technology (not least one already being developed at Facebook) is worn on the head. CTRL-Labs’ twist has been to create a wristband that can decode ‘musculoneural’ signals into machine-interpretable commands. By wearing the wristband you'll be able to control a computer using just your mind.
“You have neurons in your spinal cord that send electrical signals to your hand muscles telling them to move in specific ways such as to click a mouse or press a button," Facebook VP Andrew Bosworth said in a Facebook post. "The wristband will decode those signals and translate them into a digital signal your device can understand, empowering you with control over your digital life."
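To make the decoding idea concrete, here’s a toy sketch of how windows of wrist-worn EMG samples might be classified into commands. It is purely illustrative – CTRL-Labs’ actual pipeline is proprietary, and the channel count, window size, features and classifier below are all my assumptions.

```python
# Toy sketch: turning windows of surface-EMG samples into commands.
# Illustrative only -- the real pipeline is proprietary; channel count,
# window size, features and classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 200   # samples per decision window (e.g. 100 ms at 2 kHz)
CHANNELS = 16  # electrodes around the wrist

def features(window):
    """Classic EMG features: per-channel RMS energy and zero-crossing rate."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    zcr = np.mean(np.abs(np.diff(np.sign(window), axis=0)) > 0, axis=0)
    return np.concatenate([rms, zcr])

# Stand-in training data: 500 labelled windows of multi-channel EMG.
rng = np.random.default_rng(0)
X = np.stack([features(rng.normal(size=(WINDOW, CHANNELS))) for _ in range(500)])
y = rng.choice(["click", "swipe", "rest"], size=500)

clf = RandomForestClassifier().fit(X, y)

# At run time, each fresh window becomes one decoded 'intent'.
live = rng.normal(size=(WINDOW, CHANNELS))
print(clf.predict([features(live)])[0])
```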
Quicker than a reflex, critics carped that Mark Zuckerberg was making one more land grab for our collective digital soul.
Erin Biba’s tweet to Bosworth was typical: "I mean honestly who do y'all think you are. Don't you own enough of our data already? Holy s--- this is gross."
Facebook is already up to its ears in invasion-of-privacy issues, but the idea of a company like it owning your thoughts, potentially able to pre-cog your own intellectual property, remains for now a far-fetched dystopia.
A more profitable train of thought is to ask why Facebook – and Elon Musk’s Neuralink would be another – is turning its attention to solving the brain-computer interface at all.
“Being able to recognise even a handful of imagined commands, like ‘home’, ‘select’, and ‘delete’ would provide entirely new ways of interacting with today’s VR systems – and tomorrow’s AR glasses,” Facebook blogged a year ago in relation to the prototype headset that can transfer one’s thoughts directly onto a computer screen.
The intent is hands-free communication without saying anything.
“Imagine a world where all the knowledge, fun, and utility of today’s smartphones were instantly accessible and completely hands-free... A decade from now, the ability to type directly from our brains may be accepted as a given. Not long ago, it sounded like science fiction. Now, it feels within plausible reach.”
The CTRL-Labs acquisition builds on this by targeting hands-free communication without physical movement.
“It’s the intention [to move], not the movement” itself that controls the avatar, explained Thomas Reardon, CEO of CTRL-Labs, at an industry conference last December.
In June, CTRL-Labs itself was in the acquisition business when it bought patents related to a wearable that enables control of robotics and PCs via gestures and motion.
If something can measure neural signals to determine the movement a person is thinking about, even if they aren’t physically moving, and translate that into movement on a digital screen, it would truly be something out of the movie Avatar, in which the paraplegic lead character pilots another body by thought alone.
According to CTRL-Labs, whose newly minted multi-millionaire founders both hold PhDs in neuroscience from Columbia University, measuring signals at the wrist rather than the head makes development of a commercial device easier. The intent of neurons, it seems, is harder to decipher in the brain than at the hand, where the signals suffer less interference.
“There’s consistency across people due to the layout of the muscles [in the wrist],” said co-founder Patrick Kaifosh.
The next step will be to tie this into the core operating system for Oculus and pave the way for computer games played at the speed of thought.
In this scenario even the mixed reality futures imagined in Ready Player One seem anachronistic since the video game players still need full body haptic suits to participate. They are still tethered in rather ungainly real world fashion to the machine.
When the starting point is to conceive of the human brain as a computer processor, the ultimate brain-computer interface is one so transparent you’d hardly notice it.
CTRL-Labs’ aim is also to short-circuit what it believes to be the brake on the brain’s system – the body itself. We can (currently) think faster than any AI, the company suggests, but it takes an age for those signals of intent to route to our hands or legs or eyes and so into action.
The only people ahead of Facebook, Google or Apple in this area will be the military who will no doubt be plotting thought-powered war games and techniques to hack the cortex of opponents.
If our thoughts can be read and turned into actions for the playing of games, they can surely be altered for nefarious intent. See The Matrix for details.

Forget 8K, are you ready for 32K?

RedShark News
With 8K suddenly all the rage and specifications for 8K tellies just agreed, there are already concrete steps to double and even quadruple the number of pixels on display.
Earlier this year Sony installed a 16K screen in the front of a cosmetics store in Yokohama, south of Tokyo. The 63ft-wide (19.2m) screen is believed to be the largest 16K display yet.
Sony has plans to make the product available, in custom sizes, for well-heeled consumers.
The screen is based on micro-LEDs, the same technology behind Sony and Samsung LED cinema screens, which use tiny, non-organic LEDs – three per pixel – to deliver colour and contrast quality with extremely high brightness on a par with OLED, but seemingly without the manufacturing issues or lifespan deterioration of their organically emissive cousin.
While noting that 16K screens are likely to remain a corporate niche for many, many years, respected pundit David Mercer told the BBC that “even 8K on a big display is almost mesmerising.
"When you get to this resolution it delivers almost a quasi-virtual reality experience as your eyes perceive there to be depth to the content."
That’s exactly the impact we are promised with new technologies such as light field. Developer Light Field Lab is targeting large-scale theatrical and experiential venues with its holographic displays first.
It’s also the target for a new music and entertainment venue proposed for the east end of London by the Madison Square Garden group. If plans for the MSG Sphere get the nod from London’s mayor later this year, it will house the “largest and highest resolution LED screen in the world” at a resolution this reporter understands to be 32K.
The screen will also be curved to fit the structure’s golf ball design and allow for “immersive” performances (think Black Mirror’s ‘Rachel, Jack and Ashley Too’ with a giant holographic Miley Cyrus) and augmented reality.
There will also be an adaptive acoustics system that delivers “crystal-clear audio to every guest”, a haptic system that will convey bass so the audience can “feel” the experience and wireless connectivity that delivers 25 megabits per second for every guest (presumably based on 5G).
What’s more, the 90ft-high sphere will be clad in LED panels to project ultra-high-def footage, perhaps live from the event inside, over 150 metres away.

The arena

The 4.7-acre site near Stratford’s Westfield shopping centre is currently inaccessible to the public as it is surrounded on all sides by active railway lines. It was last used as a temporary coach park during the 2012 Paralympic Games.
The 21,500 capacity will make it the largest concert arena in the UK.
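Combine that capacity with the promised 25 megabits per second for every guest and the aggregate bandwidth required is striking. Back-of-envelope (my arithmetic, not MSG’s):

$$21{,}500 \text{ guests} \times 25\ \text{Mbit/s} \approx 537{,}500\ \text{Mbit/s} \approx 540\ \text{Gbit/s}$$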
“London has a population of nearly 8.8 million, which is almost identical to New York City at 8.5 million, but while the London market has two large-scale capacity venues, the New York market has seven,” MSG explained.
Technically, there is no upper limit to content resolution. If money were no object, content at 50K could be produced – NHK, the Japanese broadcaster now expanding into 8K drama production, has admitted as much.
There are also stills cameras capable of recording triple the megapixels of the ‘regular’ 8K 7680×4320 pixel count – 8K works out at around 33 megapixels, so think 100-megapixel sensors.
Quite whether there is any point given that 8K is widely understood to be at the limit of human perception is another matter.
32K in an experiential setting, though, especially if offset by some form of giant-scale AR or mixed reality animation, could make for such a wildly different experience that it would pull in punters from passively grazing Netflix at home.
Worth noting that it’s not just music that MSG promotes either. It also has stakes in a couple of sports teams – basketball and ice hockey – and in esports.
What’s more I understand that a Soho post house is already prepping content for display at 32K.
8K is coming. The Consumer Technology Association this week laid out the official specs for 8K TV. These include at least 7,680 x 4,320 resolution, support for up to 60 frames per second, 10-bit colour and HDR function. Displays will also have to upscale any video to 8K. Official 8K logos will start appearing on sets from January.
A word of caution in all of this though: according to the latest figures from analyst Screen Digest, barely a fifth of Japanese homes will have a 4K UHD TV by next year and only 62,000 homes in Japan (less than 0.1%) will watch the Olympics in 8K.
“There is zero correlation between content and product,” said Maria Rua Aguete, executive director at IHS Markit, speaking at IBC 2019. In other words, the industry can pump as many 8K screens as it likes into the ether but content producers are not taking the bait.

Friday 27 September 2019

Politics live: Hansard of the airwaves or reality TV car crash?

IBC
The live broadcast of the UK’s Parliament is proving increasingly popular with British viewers as debates over Brexit and legal challenges intensify. 
https://www.ibc.org/publish/politics-live-hansard-of-the-airwaves-or-reality-tv-car-crash/4982.article#.XY4KI_e-pYw.twitter
After heated debate in the Commons on Wednesday night, speaker John Bercow, who plays the part of on field referee, said the atmosphere in the Chamber was “worse than any I’ve known” and called it “toxic”.
Such drama is one of the reasons that Freeview channel 232 is suddenly the hottest button on the box.
BBC Parliament’s broadcasts of proceedings from the House of Commons, House of Lords and various committees are usually watched only by a few hundred thousand die-hard political junkies, but Brexit has sparked a ratings spike for the channel and its complementary website Parliamentlive.tv.
A number of video clips from the chamber have also gone viral, recorded by MPs in contravention of Parliament’s own rules.
A record audience of 2.6 million viewers tuned in during the week of 2-6 September, peaking on Tuesday 3 September when 727,000 viewers (contributing to 1.5 million for the day) watched as more than a dozen Tories defied their leader by voting in favour of seizing control of the Commons timetable.
That broke the record set earlier this year, when the week beginning January 14 drew 2.2 million viewers.
It is the element of unpredictability, the political point-scoring and the ticking clock of deadlines that has made recent broadcasts as appointment-to-view as a major league football match or UFC bout, with Bercow playing both on-field referee and VAR.
Politicians had refused cameras entry into Parliament for years, afraid of what the public would learn.
“My concern is for the good reputation of this House,” Prime Minister Margaret Thatcher told the Commons during a 1988 debate on the topic. In response, Labour MPs chanted “frit, frit, frit” (“frightened”).
Those fears have come to pass. BBC Parliament is either democracy laid bare or a reality TV car crash.
Peter Knowles, the BBC Parliament controller has a more nuanced view. “People like the idea that this quirky TV channel is showing this drama and that it’s something they have got in common,” he recently told BBC Radio 4's Media Show. “People may hold extreme opposing views but love to find common ground in sharing their experience of watching it.”
Staying neutral
While news programmes and channels have long dipped into the feed from Westminster, adding commentators, opinion and analysis, the Parliament channel has to tread a studiously neutral line.
Aside from Westminster it also covers proceedings from the Welsh and Northern Ireland Assemblies, the Scottish Parliament and the European parliament, and offers a continuous largely unmediated feed.
“There is constant mediation on screen with captions telling you who’s speaking and what they are talking about,” Knowles says. “At special moments (such as Black Rod’s ceremonial proroguing of Parliament) or in gaps in proceedings we do try and guide people.
“One or two of the audience maybe think they know it all - they’ve got Erskine May [treatise on parliamentary practice] costing a few hundred pounds sitting on their knee at home - but most are jolly grateful for guidance.”
He stresses: “We are absolutely squeaky clean as far as commentary [is concerned]. We have to be impartial through and through.”
Knowles, a former managing editor of BBC TV News, has been with BBC Parliament since 2001.
The Commons sets strict rules for what can and cannot be shown.
“There used to be a lot more [restrictions]. Mercifully that list has reduced over the years. At the start of parliament broadcasts (in November 1989) what you got was a close-up of somebody speaking (with occasional wide shot) but you couldn’t see anything going on around them. There were no reaction shots (of MPs listening).”
Parliament has since relaxed its stance on that, and more recent changes have helped give audiences a far greater sense of being there, he says.
“Eye level cameras next to the speaker’s chair have transformed things so you can see when the tellers approach you or see the PM on his feet opposite the leader of the opposition. It’s much, much closer to a feeling of being there.”
Nonetheless, when the session officially ends so too must the channel’s live coverage. “It goes to a colour picture of a clock,” is how Knowles describes the end of session place holder.
That’s tricky, because often the action continues on the green seats and spills out into the lobby, where his cameras can’t go.
For example, immediately after parliament was prorogued the SNP filmed a group of Plaid Cymru MPs singing Calon Lan and posted it on social media.
“It wasn’t part of the feed,” says Knowles. “It does raise the interesting and difficult question which is why do the MPs – who have set the rules – go off and break them?”
Parliamentary rules
“The immediate change [we want] is that if an MP or the PM themselves brings in a guest – which may be another country’s PM – into the viewing gallery and the MPs are clapping and pointing and talking about them, the audience cannot see it. It’s not filmed and that is a real frustration.
“The bigger question is whether we can get some glimpse of the voting lobbies. At the moment the only chance we get is if MPs tweet a picture out. But we can’t see them.”
Protesters in the public gallery are not allowed to be shown either.
“That’s common practice across all parliaments so as not to give much attention to protestors. That said the Scottish parliament might give you one shot, briefly. So there’s a balance they are striking between openness and being seen to encourage troubled behaviour in the house.”
Photos from the voting lobbies are banned – but Speaker Bercow has not rapped any knuckles.
The ability for MPs to reach their constituents or wider audiences online, even by clipping Parliament channel coverage, is a significant development.
Video of Sikh Labour MP Tanmanjeet Singh Dhesi demanding that Boris Johnson apologise for controversial comments he made about the appearance of Muslim women has been viewed over 2.5 million times on Twitter.
MPs are well practised at creating a sound bite targeting TV bulletins, but the more media-savvy MPs (or their teams) are now versed in making montages to share online. The PM or leader of the opposition will have a ready-made question or quote to publish instantly on their Facebook page or Twitter feed to target their electorate.
Even before the cameras were allowed into the Chamber, then Prime Minister Thatcher was taking advice on where to look and how to stand, to convey her arguments not to the sitting MPs but to the televisual audience.
MPs’ behaviour continues to be modified and amplified by social media. It’s unlikely we will catch Jacob Rees-Mogg or anyone else reclining in quite the decadent way he was snapped doing again.
The Prime Minister has even begun arranging curated Q&As with the public on Facebook Live – literally putting into practice the narrative of a government speaking direct to voters over the heads of parliament.
One side effect of the heated debates has been a growing fanbase on gaming platform Twitch. The site was streaming the UKParliament channel to 200,000 viewers on the Thursday afternoon I checked and has attracted almost 27,000 followers.
One reason given is the real-time chat interaction which the Twitch platform offers to participants alongside the live feed. Viewers can pepper their contributions with emotes (Twitch’s term for emojis) of Johnson, Corbyn and the Speaker.
Broadcasting and streaming
Although Parliamentary proceedings have been broadcast since 1989, the BBC only took over the channel and its production in 1998. Bow Tie Television’s involvement with the broadcasts began in 1989 and it has been the principal contractor since 2001.
It supplies multi-camera coverage of Chambers and Committees, operating over 170 camera channels concurrently to deliver over 7,000 hours of manned unscripted content each year. Now part of the NEP Group, Bow Tie’s staff operate mic switching and sound mixing across 50 room set-ups using over 1,250 microphones every day, providing the sound reinforcement in meeting rooms for all capture and relay. The teams produce the broadcast output, both directing and producing content and adding metadata markers to facilitate search and discovery.
On the Parliamentary website there are sometimes up to 20 live streams of Select Committees available (operated by Bow Tie).
One direction the channel could take was outlined last year by John Grogan, the Labour MP for Keighley and vice-chair of the All Party Parliamentary BBC Group.
“Is it possible to imagine that, using the latest digital technology, the BBC could do for Parliamentary coverage what they have done for the Olympics or Wimbledon? As Tony Hall himself told the Select Committee: ‘Could we work with the parliamentary website to allow people to search more easily by topic, to have notifications when things are being brought up in the House?’”
BBC Parliament’s annual content budget is £1.6m and its transmission costs are reportedly around £7m – a cost that could be slashed if the channel went totally OTT, although given the massive public service remit this is not likely soon.

Thursday 26 September 2019

Google claims era of quantum supremacy

RedShark News
Google claims to have built the first quantum system capable of a calculation that cannot be done by any normal computer. This means that it’s passed some sort of threshold called ‘quantum supremacy’ and while we don’t need to worry about transcending to the digitalverse just yet, it’s a milestone that’s worth reporting.
Simply put, quantum computing might actually be able to do stuff that even supercomputers with a brain the size of a planet just cannot.
We’re not going to get desktop quantum PCs soon, but the theory is out of the lab. Schrödinger’s cat is out of the bag.
“As a result of these developments, quantum computing is transitioning from a research topic to a technology that unlocks new computational capabilities,” state the mathematicians behind the breakthrough. “We are only one creative algorithm away from valuable near-term applications.”
Blimey.
Google, Microsoft, Intel, IBM and others including Alibaba are in a race to build a reliable quantum system that can vastly outperform the bricks and mortar of silicon-chip based processors.
Quantum computers work at the atomic scale, using the power of atoms and molecules to perform memory and processing tasks.
Unlike digital computing’s requirement that data be binary, qubits (quantum bits) can be in multiple states at the same time. In theory, circuits can be programmed into a state known as a superposition, where they are equal to neither 1 nor 0, but some combination of the two. This fluidity means an unfeasibly large number of calculations can be done on unfeasibly large numbers extremely quickly to supercharge developments in artificial intelligence.
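For the mathematically inclined, superposition fits on one line. In standard quantum notation (nothing Google-specific), a qubit’s state is a weighted blend of 0 and 1 whose weights obey a single constraint:

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$

Chain $n$ qubits together and the joint state carries $2^n$ amplitudes, so a 54-qubit machine is juggling roughly $2^{54} \approx 1.8\times10^{16}$ of them – which is why classical simulation runs out of road so quickly.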
Decades ago, the physicist and computing visionary Richard Feynman laid down two tasks to kickstart the quantum era: first, to engineer a quantum system that can perform a computation in a large enough computational space and with low enough errors to provide a quantum speedup; and second, to formulate a problem that is hard for a classical computer but easy for a quantum computer.
Google reckons it has done both.
Using a quantum processor named Sycamore, with 54 programmable superconducting qubits, the Google AI Quantum team ran repeat experiments of a maths problem (something to do with proving the randomness of numbers, which we won’t delve into here).
This took about 200 seconds, trumping the world’s fastest traditional supercomputer, Summit, by about, oh… 10,000 years.
Summit, which is capable of 200 petaflops, is built by IBM.
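To put the claimed gap in plain numbers (my arithmetic, taking the paper’s estimate at face value):

$$\frac{10{,}000\ \text{years} \times 3.15\times10^{7}\ \text{s/year}}{200\ \text{s}} \approx 1.6\times10^{9}$$

– a speedup of roughly 1.6 billion times.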
What’s more, Google’s researchers used an algorithm called Schrödinger, which simulates the evolution of the full quantum state.
“Quantum processors based on superconducting qubits can now perform computations beyond the reach of the fastest classical supercomputers available today,” they declare. “To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor. Quantum processors have thus reached the regime of quantum supremacy.”
Scientists have been trying to make a quantum computer for years. Part of the issue is that – surprise, surprise – quantum states are very sensitive to any kind of disturbance from the environment, such as heat, radiation and magnetic fields. Quantum computer chips must be protected by several levels of shielding and cooled almost to absolute zero, and even then they are tricky systems to keep ticking over.
Even Google’s boffins agree: “Realising the full promise of quantum computing still requires technical leaps to engineer fault-tolerant logical qubits.” In fact, the experiment was performed in part at a nanofabrication facility in California.
Nonetheless, “In reaching this milestone, we show that quantum speedup is achievable in a real-world system and is not precluded by any hidden physical laws,” the mathematicians state.
With Moore’s Law, a pretty consistent predictor of computing power, about to hit the shelf life imposed by classical physics, the hope is that quantum compute power – governed by quantum mechanics – will grow even faster.
But while quantum computing has great potential, the field is in its infancy. And it will take many generations of qubit increases for quantum computers to begin solving the world’s challenges. It will likely be a decade before the industry gets to tackling engineering-scale problems, and it will likely require 1 million or more qubits to achieve commercial relevance.
One wrinkle: the Google paper ‘Quantum Supremacy Using a Programmable Superconducting Processor’ was posted on the website of a NASA research centre and subsequently removed – but not before the eagle-eyed folk at SpaceRef had downloaded it.
Whether or not the results of this particular test stack up, there’s no doubt that the breakthrough into quantum computing is just around the corner.
For one thing, the similarity between IBM’s own quantum computer design and the time-dimension travelling power console of the Tardis can’t be dismissed out of hand.

Tuesday 24 September 2019

Behind the Scenes: The Goldfinch

IBC
Complex, dense and 800 pages long, the 2014 Pulitzer Prize-winning bestseller The Goldfinch by Donna Tartt has become this year’s prestige cinematic literary adaptation.
John Crowley, who directed the adaptation of Colm Tóibín’s novel Brooklyn, has taken on the task and given himself the best possible shot at pulling it off by teaming with Roger Deakins BSC ASC. It’s the British cinematographer’s first film after winning the Oscar for Blade Runner 2049.
“I was attracted to the story because it’s character driven,” Deakins tells IBC365. “I’ve never been into the action genre or superhero movies and it’s getting increasingly hard to find movies that have something that’s really motivated by people in normal situations.”
The coming-of-age adventure focuses on Theo Decker (played at different ages by Oakes Fegley and Ansel Elgort) and the painting of a chained goldfinch he takes from a museum after a terrorist bomb attack that kills his mother (Nicole Kidman). The painting becomes a symbol of hope as Theo grieves, grows into adulthood, turns to crime, and then tries to right his life.
Tartt’s writing is rich with description about even the most circumstantial detail. The Guardian called her approach ‘cinematic’ observing that the narrator of The Goldfinch “catalogues the world’s visual clutter as greedily as any unblinking movie camera.”
“The danger is that details can overwhelm the main story points when you begin to visualise it,” Deakins says. “In a way, you’ve got to start from scratch. I’ve not done many films that have been adaptations. No Country for Old Men comes to mind, where to me it felt important to have Cormac McCarthy’s image of that world in mind when shooting. Similarly, here, it’s more important for me to get a feeling of the book, something that’s not in the script. It’s nothing concrete, more a feeling of place and character.”
The script by Peter Straughan (who adapted the novels Tinker Tailor Soldier Spy and Wolf Hall) handles the sprawling story by omitting a chunk of time to concentrate on two periods, featuring the hero aged 13 and then jumping to his twenties.
“The script changed the book from being a linear narrative into something more fractured. That was what I talked about with John in terms of treatment. It’s one big flashback in a way bookended by the same traumatic event.”
Creating the atmosphere
Some DPs may have opted for a different lighting or colour scheme to depict stages of the central character’s life. Not Deakins. “I don’t think the past is any different to the present. I don’t like that technique where you use a specific lens or treatment on the image to make it look like a different time. It should be more about capturing a mood.”
Crowley presented him with a whole series of images at the outset of the film to describe the atmosphere he wanted to create.
“These were his feelings of the darkness and the light which we used as talking points really. We didn’t reference other movies, just this collage of different images.”
Location scouting, which Deakins undertook with the director and production designer KK Barrett, proved invaluable in translating these ideas to the screen.
“I wanted to use natural light where possible but due to shooting 10-12-hour days, including in New York in winter, there was no way we could do that and shoot a full day’s work. It was frustrating but we found work arounds.”
One was to find locations in the ground floors of apartments or houses with access to the street so that Deakins could ring a building with a lighting rig. “Then I had total control.”
Scenes set in a suburb of Las Vegas were shot in Albuquerque, New Mexico “on the edge of the desert where the sunlight was hot, harsh and hostile.”

The only set was the Metropolitan Museum, out of necessity since the building is blown up. Despite this major action scene, Deakins dislikes multi-cam, favouring shooting on an ARRI Alexa XT with a set of Zeiss Master Primes in 3.4K Open Gate and an aspect ratio of 1.85:1.
By contrast, night interiors of scenes in New York were coloured warm, while those in Amsterdam were more classically designed “with a whole range of colour based on the time of day and of the city.”
He persuaded the production to visit Amsterdam on location rather than, as planned, shoot scenes set there in a studio against blue screen.
“It was very important that we conveyed a feeling of time passing and the darkness of Theo’s struggle with himself in this hotel room,” he says. At this point in the film Theo is contemplating suicide. “We have to have the reality of him seeing the world from inside this box.”
Painting a shot
The painting of the film’s title is real. It was made in 1654 by Carel Fabritius, a pupil of Rembrandt, and hangs in The Hague.
“We went to see the painting. The museum made fantastic reproductions of it, then KK had them painted over to make them even better.”
The painting in itself, Deakins feels, is fairly insignificant. “It’s just a bird,” he says. “What’s important is the idea that the painting is Theo’s physical connection with his mother.”
The director of photography on any film is in a privileged position when it comes to being up close and personal with actors. Deakins has photographed dozens of stars giving arguably their best screen performances, including Russell Crowe, Javier Bardem, Frances McDormand, Sean Penn and Tim Robbins, but he still gets a thrill from seeing a performance take shape in front of his eyes.
“I love operating the camera mainly because I am seeing the performances for the first time,” he says. “I am the closest person to the performance, and I know when I’m watching something pretty remarkable.”
In the case of The Goldfinch he singles out Fegley, Sarah Paulson and Luke Wilson, who plays Theo’s emotionally abusive father. “Luke was quite a revelation. I didn’t know him very much from his previous work but just seeing his performance was remarkable.
“What I don’t get to see, until a preview or premiere, is how the performances jig-saw together with each other. For obvious reasons we never film Oakes and Ansel in the same shot. The character development from child to man is, I think, a pretty brilliant translation of a book that many said was unfilmable.”

BT Sport discusses creative benefits of remote production over 5G

SVG Europe
BT Sport bagged another ‘world first’ at IBC last week by successfully demonstrating 5G-enabled multi-location live remote production — but the application throws up as many questions as it does opportunities.
The advantages of remote production – reduced cost, better work-life balance for employees, a smaller carbon footprint and the ability to produce more games – are all good for fans, but not game-changing.
“The real gamechanger is when you combine remote production with producers who understand how to exploit the new technology; how to make the most of the new freedom, including the creative flexibility of wireless cameras,” said Matt Stagg, director of mobile strategy at the pay TV broadcaster.
Jamie Hindhaugh, BT Sport COO, added: “I am excited about the creativity. When you start untethering cameras you don’t need to book RF points, and the perception is that you have 14 cams when you’re only using ten. We are very interested in how 5G’s high bandwidth network can be integrated with the classic OB van.”
“Rights owners should be looking at remote production more closely,” urged Paolo Pescatore, analyst at PP Foresight who chaired a Q&A at the demonstration. “It represents a great way for them to improve the quality of their asset, reach sports fans and keep them engaged in a much more efficient way.”
The showcase followed BT Sport’s and EE’s ‘world first’ two-way remote broadcast over 5G of the EE Wembley Cup between Wembley Stadium and the Excel exhibition centre last November.
This time, live feeds from three stadiums in the UK, where matches in the FA Women’s Super League were being played, were connected over EE’s 5G consumer network to the broadcaster’s production hub at Stratford and routed on to the OB park at the RAI. Andy Beale, BT Sport chief engineer, switched the feeds live.
Reporters at the venues – Stamford Bridge (Chelsea), The Emirates (Arsenal) and the Etihad (Manchester City) – carried 5G HTC dongles and conducted a live four-way broadcast between each other and the RAI.
“5G allows broadcasters to pay for what they need, with agile specifications that don’t exist with 4G,” Beale said.
Even 4G cellular bonded links have been a “best effort” from a network perspective, Beale said, especially in congested places like stadia – even with relatively low attendance figures of 6,000, let alone 60,000 fans.
“You couldn’t throw much at it; it was never your priority feed,” said Stagg. “5G will be a rock solid service. It will guarantee performance.”
Network slicing guarantees broadcasters a minimum standard of speed and throughput – 100 megabits per second – and lower latency. As a result, OBs will become far more efficient.
“5G is superior to satellite for connectivity,” asserted Stagg. “The ability to network slice — to take a part of the network and put a service wrapper around it – will enable a broadcast grade network.”
Provisioning the network for something like breaking news should be as quick and easy as going to a web front end and entering the postcode, specifying an amount of bandwidth and an amount of time. “That’s our aim,” Stagg said.
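To picture what that self-serve aim could look like, here is a sketch of a slice-booking request. No such EE/BT interface is public; the endpoint and fields are invented purely to illustrate the postcode/bandwidth/duration idea:

```python
# Hypothetical slice-booking call; the endpoint and fields are invented.
import requests

booking = {
    "postcode": "E20 2ST",    # where the crew needs guaranteed coverage
    "bandwidth_mbps": 100,    # uplink reserved for the slice
    "duration_minutes": 90,   # how long the slice should live
}
resp = requests.post("https://example.invalid/v1/network-slices", json=booking)
print(resp.json())  # e.g. a slice ID and activation time
```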
Gemma Knight, football match director for BT Sport, expounded on the benefits. “For lower-tier sports like the National League we might start with the manager at home, then travel on board the team bus, film arrivals and interviews in the changing room. Then you can reposition the cameras for the match itself. It makes it look like you have a lot more facilities.”
She added that she felt trust in the reliability of the feed. “You want to know that when you cut, it is going to be stable. That’s huge for a director. Plus, it means I don’t have to travel long distances or overnight at a venue and I can spend more time with my son. That opens up a world of opportunity for me.”
BT Sport’s pitch-side reporters were able to follow video of all live games on their mobile devices, enabling them to deliver more up-to-date and informed goal updates to viewers.
The broadcaster is developing apps including in-stadia AR, such as instant playback of a penalty goal that lets spectators watch a holographic view of the match “with the ability to walk around the field of play and see replays from every conceivable angle.”
It envisages that stadia could be ringed with a vast array of remotely operated cameras catching every possible angle and giving a total field of view.
Such exotic consumer applications are being lined up for introduction of the full next generation 5G core network, enhanced device chipset capabilities, and increased availability of 5G-ready spectrum from 2022.
Amid all the excitement there are multiple challenges and wrinkles. For a start, 5G is not likely to be the main connectivity solution for every event for some time, possibly ever.
“We want to bring pictures back in the best quality we can, so if there’s fibre to the venue we will use it. If we want more flexibility with cameras we will mix in 5G, and if we need to use 4G we will bond it.”
Switching between two or more cameras at one location over 5G is another hurdle, as is cutting between 5G enabled and other contributed feeds in synch.
“One discussion with the standards bodies is getting to the ability to time a signal to within 1 millisecond, so over a mobile network you will be able to get a PTP clock,” said Ian Wagdin, BBC senior technology transfer manager. “That means I can genlock cameras over a mobile network. That breakthrough will be of massive importance.”
BT Sport and the BBC, in the guise of Stagg and Wagdin, represent the UK’s creative industries on 5G to mobile standards body 3GPP.
“Among other things we are discussing is how do you tell a story when handing off a 5G video feed? We need to have an integrated workflow,” Wagdin explained.
There were calls for bonded-device vendors to start working with 5G chipset manufacturers to bring 5G link gear to market.
There is also the contentious issue of spectrum use for media specific applications like remote production.
“Traditional broadcasters are under an immense amount of pressure,” said Pescatore. “Valuable spectrum has been, and will continue to be, taken away and reused for rolling out mobile networks.
“Freeing up more spectrum for super fast 5G networks will mean better TV experiences for everybody, as well as helping fulfil obligations for rural coverage and meeting mobile demand. In essence, it solves a lot of problems.”
Asked whether 5G would be the de facto standard for remote production by 2030, BT M&B lead propositions manager Alison Hutchins said, “No”; Matt Stagg said it would depend on the sport and the tier of sport; and Wagdin said, “Yes. There will be no other way.”

Monday 23 September 2019

From stadium to sofa: Augmenting the @venue fan experience

SVG Europe

Having neglected fans’ enjoyment of the full digital experience at live sports events, broadcasters and stadium owners have begun addressing the gap with new technology that improves connectivity at venues.
https://www.svgeurope.org/blog/headlines/from-stadium-to-sofa-augmenting-the-venue-fan-experience/

Not only will this enable fans to interact during games, but it will also entice them from their living rooms by offering improved ticket flexibility, immersive VR experiences and multi-screen content.
The symbiotic relationship between the in-venue experience and that of those sitting at home was stressed at a session on the topic at IBC.
“Broadcasters have underserved fans in the stadia,” said Matt Stagg, director of mobile strategy at BT Sport. “For too long we have excluded them. But broadcasters are in prime position to change this. We have the content and the technology that can enable fans to participate even more deeply in the experience. The question now is how we use the tech to enhance their experience.”
He pointed to BT Sport’s recent launch of a UHD HDR service, which has the added benefit of Dolby Atmos surround sound. “This [innovation] is truly bringing the atmosphere of those in the stadium to people who are not.”
Yiannis Exarchos, Olympic Broadcasting Services CEO & executive director, concurred: “In Roman times they built huge arenas to host a whole city since there was no means of communication. Now we can deliver the event all over the globe but too often the in-venue experience has not kept up.”
He continued, “You can have all the best athletes in the world on stage but if they play in front of an empty venue the value of their achievement somehow diminishes. Conversely, if we can give fans at the event a once in a lifetime experience then it will translate into an atmosphere which can be felt by everyone.”
There are still challenges for 4G in ultra-dense environments like stadiums, especially when the demand for data keeps increasing, with fans doing everything from updating social media to streaming replays.
As a baseline, the venue should be outfitted with Wi-Fi – notably incoming standard Wi-Fi 6 which enables greater numbers of devices to be connected simultaneously – and/or a 5G cell on site.
“First and foremost, people want to tweet their friends,” said Stagg. “Even that has been problematic in heavily congested stadia. It’s a human right.”
Fabian Birgfield, founder and director at digital design agency W12 Studios pointed out that eSports is redefining not only notions of a sport’s format but also what constitutes a stadium experience.
“eSports is digital yet eSports tournaments in front of thousands of people are broadcast live to millions of enthusiasts on networks like Twitch. The engagement with the sport in the stadia is palpable. There are lessons we can take from there about extending the live event not only in time but also in space to reach wider audiences.”
Understandably John Rhodes, design principal & director of sports, recreation & entertainment at architects HOK was on the side of the stadia not the sofa.
“In greenfield stadia designs now we are seeing the convergence of digital and physical spaces. There are a lot of opportunities for digital environments to merge with the physical environment from 360-degree, halo video screens to universal high bandwidth venue connectivity.”
Rhodes said he preferred to think of his role as one of experiential designer. “We design spaces where people go to experience something unique but in the company of thousands of other people. Is there a way to help those spectators become more participatory in the shared endeavour with the other people around them? Can we use technology to augment the theatre of an event?”
The Mercedes-Benz Stadium in Atlanta, for example, features a one-of-a-kind retractable roof composed of eight, 1,600-ton panels that each cantilever to resemble a camera shutter opening and closing.
“Stadium design was historically about showcasing the event,” Rhodes observed. “When broadcast came in, everything was increasingly angled toward delivering the best TV viewing experience. Now, we are evolving stadium design to harness the unique energy felt by 60,000 people and translating that to the 99% of the audience which is watching elsewhere.”
Daniel Marion, chief of information and communication technology – UEFA, said the organisation wanted fans to be excited about the venue experience since that was also good “for sponsors and money generators in general.”
He said, “We have to get back to basics. When we talk to national leagues, clubs or owners when building stadia, one of the things they say is ‘there is no business model for toilets in a stadium, so why do I need a business model for connectivity?’ But there are a wealth of opportunities to target different groups in your arena with digital services.”
He said, “We are having conversations with our members about fan relation management. They have data on ticket holders and fans tend to have a very strong relationship with a club. There have to be ways clubs can use their stadia to better serve their fans.”
Delivering on the promises of 5G
This can range from paperless ticketing experiences and a cashless stadium to the way you buy food and beverages at the ground. As we move closer to a world where wearable technology is the norm, the opportunities for augmented reality open up.
“From the moment you arrive at the stadium, your seat will be located by a pop-up map showing the most direct route, and the team line-up will be shown as soon as you sit down,” Stagg said. “During the game, you will be able to view multiple camera angles and every seat will be the best seat in the house. No longer will you struggle to see the penalty at the other end of the pitch – with the touch of a button, you will see it from the keeper’s view.
“If you want to use the term it’s called monetising loyalty,” said Stagg. “I am more likely to trust my favourite club, Liverpool FC, than my bank. I want the club to look after my entire journey from buying the ticket to long after I am home, perhaps serving me with match stats on the day and exclusive video clips of the day a week afterwards.”
From day one of having 5G in a stadium, fans will be able to reap the benefits of increased speed and lower latency – but we shouldn’t get carried away with the idea that it is some kind of panacea.
“5G will deliver on its promises, but it will also be a staged rollout,” said Stagg. “At the moment the economics of mobile mean that the first layer is enhanced mobile broadband. It’s 5G working alongside 4G. Then, a little further along, we will get network slicing and the ability to service verticals, including different groups in a stadium such as broadcast, spectators, operations, IT and press. What I don’t want is a camera operator competing for the same bandwidth with someone uploading a selfie.”
Birgfield had the most succinct conceptualisation. He suggested that we look at stadiums as a platform – just like a smartphone.
“The stadium is at the point where the mobile phone was ten years ago. Then, smartphones started to enable many different kinds of experiences. Some stadia may not have all the technology installed yet but tech is not the key challenge. It’s about being innovative with the application of it which is where opportunities lie.”

Thursday 19 September 2019

IBC '19: Deploy IP or Be Left Behind, Says Cisco

Streaming Media

The transition to IP is happening now, but a skills gap within broadcasters continues to impede it.
That was the message according to Cisco and a panel of executives with experience at the coalface of the shift from SDI to IP.
"With adoption of SMPTE 2110, IP has critical mass," said Sunil Gudurvalmiki, senior product manager, data center networking at Cisco. "There are still some moving pieces, but you have to deploy now otherwise you will be left behind."
The fundamental requirement is "reliability and predictability" whether in editing, storage, or distribution. "PTP (precision time protocol) needs to be extremely accurate and scalable," he said.
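For context, PTP synchronisation rests on a four-timestamp exchange between master and slave clocks. Here is a minimal sketch of the arithmetic, with illustrative values (real ST 2110 plants layer the SMPTE ST 2059 profile on top of this):

```python
# IEEE 1588 basics: t1 = master send, t2 = slave receive,
# t3 = slave send, t4 = master receive. Assumes a symmetric path.
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock error vs master
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay estimate
    return offset, delay

# Example: slave 25 microseconds ahead over a ~125 microsecond path.
print(ptp_offset_and_delay(0.000000, 0.000150, 0.001000, 0.001100))
```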
Red Bee Media, a services provider with its feet firmly in linear playout, seconded this.
"The only place money is going is content, and we found ourselves in the middle getting squeezed," said David Travis, chief product and technology officer. "We were spending nearly half of our time and effort on deployment—that's people building and integrating all of this technology. It was imperative to turn the tanker around and get service deployment down to minutes."
The BBC embarked on a transition to IP across all its regional UK facilities in 2016—before even ST 2110 was ratified.
"It was a brave decision which some saw as crazy," said BBC lead architect for major project infrastructure, Mark Patrick. 
Cardiff, the first BBC facility to be made all-IP, goes fully live using Cisco switches in Q1 2020. "We've not managed to reduce the cost of deployment as much as we'd hoped, but this was a learning process," said Patrick. "We've built a cookie cutter template for the next set of BBC deployments built on standard toolsets."
The common thread is the human factor of the transition in which IP software engineers who have never worked in a broadcast environment and broadcast engineers for whom ST 2110 is unfamiliar need to collaborate.
"The biggest hurdle is fear of the unknown," said Gudurvalmiki. "Broadcasters are not familiar with IP. My advice for those ready to embark is to talk to partners, vendors, and colleagues who have gone through this transition."
The BBC's advice was to "find your network wizard and keep them. Nurture the staff you need to see the project through and beyond, and give them time to fail. The standards are largely supportive but there are different profiles and pitfalls. ST 2110 is a big standard and vendors can diverge in their interpretation of it. They aren't breaking any rules, but it can lead to angst." 
Travis reserved his ire for Microsoft. "My ultimate frustration is that a lot of the vendors here at IBC are so reliant on Microsoft. There's a reason why it's the number one IT vendor in the world—because they know how to charge you for it. You will find that any efficiency gains will be quickly eroded by cost."
His advice; "Make sure your software model is based on truly flexibly commodity IT rather than monolithic stacks called Microsoft."
Among Cisco's news at IBC is its ongoing partnership with broadcast-centric vendors Grass Valley, Lawo, Sony, Nevion, and Imagine to integrate its IP Fabric for Media for management and switching of uncompressed sources.

IBC '19: CMAF Ready for Wider Stage

Streaming Media

The concept of a single set of files deliverable to all relevant end points has been the holy grail since the dawn of adaptive bitrate (ABR) streaming. The goal of the Common Media Application Format (CMAF) is to enable such a solution and slash encoding, storage, and bandwidth costs for companies who deploy it.
The good news is that the MPEG standard now carries momentum and the solid backing of Microsoft and Apple, both of whom were represented at an industry developers' forum in Amsterdam adjacent to the IBC Show. 
The forum was convened by the Web Application Video Ecosystem (WAVE), a Consumer Technology Association initiative begun about the same time as CMAF to develop a global internet video ecosystem.
"CMAF can serve as the basis for DASH/HLS interoperability," said Krasimir Kolarov, acting chair of WAVE CMAF-IF who is also director, embedded media at Apple. "There is convergence."
The goal is to create a single format (fmp4 based) for multimedia content that can be delivered by various manifests (HLS or DASH) to a variety of client devices.
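As an illustration of what that means in practice (file names and durations invented), a single set of fMP4 segments – init.mp4 plus seg_1.m4s, seg_2.m4s and so on – can be referenced by an HLS media playlist:

```
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:4
#EXT-X-MAP:URI="init.mp4"
#EXTINF:4.0,
seg_1.m4s
#EXTINF:4.0,
seg_2.m4s
```

and, without re-encoding or re-packaging, by a DASH manifest pointing at the very same files:

```xml
<AdaptationSet mimeType="video/mp4">
  <SegmentTemplate initialization="init.mp4" media="seg_$Number$.m4s"
                   duration="4" timescale="1" startNumber="1"/>
  <Representation id="v1" codecs="avc1.64001f" bandwidth="3000000"/>
</AdaptationSet>
```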
The potential for a lack of interop between DASH and HLS has concerned the industry for some time, but common sense has prevailed, with the forum working to ensure that the two protocols don't diverge.
"This codification and standardization can significantly improve cross-platform interoperability, reducing encoding, storage, distribution and continuing engineering costs – accelerating global web media growth," said Will Law, Chairman of WAVE Technical Working Group and chief architect, media cloud engineering at Akamai. "We're trying to get away from the competition like DVB versus ATSC. We want one OTT solution that works around the world."
The WAVE CMAF-Industry Forum outlined next steps for the fledgling format, one of which was wider industry education about what it is actually intended to achieve.
"There is market confusion that CMAF is a third delivery format, which it definitely is not," said Kolarov. "CMAF defines the standardized transport container for streaming VOD and linear media using DASH or HLS protocols. It is much more relevant to packaging and encoding. It is not another presentation format."
Another misconception is that CMAF solely targets live low latency. A low latency mode within CMAF, which splits each segment into smaller units, or "chunks," of 500 milliseconds or lower, has been seized on by an industry desperate for OTT delivery to match broadcast-level delays.
CMAF can certainly do this, but it is not the IF's main focus.
"When people hear CMAF they just think low latency," noted Thomas Stockhammer, director technical standards at Qualcomm and chair of CTA WAVE Device Playback Task Force (DPCTF). "We need to make clear that this is one application of CMAF but not its main feature."
Another key aim is to facilitate wide interoperability in the growing market of CMAF-based solutions by using a common set of content and device playback specifications, as well as test conditions and material.
"From an encoding point of view, regardless of whether you have HLS low latency low or the DASH version of ULL [ultra-low latency], the encode is still the same," Stockhammer said. "The same on playback … whether your content is DASH or HLS, both should work if you follow the CMAF implementation and device specs."
At its Worldwide Developer Conference earlier this year, Apple had put a potential fly in the ointment by announcing specs for a Low-Latency HLS. While reducing latency for live streaming is a common goal, this news interrupted the industry-wide effort to do so via the chunked transfer encoding of CMAF.
"Apple's low latency spec is still in draft stage," said Kolarov. "We're collecting feedback and will accommodate this feedback into the protocol. This is just how HLS was introduced in the beginning."
He confirmed, "Low latency CMAF support is on our roadmap. Since we [Kolarov's team] are dedicated to CMAF, we've been encouraging others at Apple to participate."
New technologies for possible inclusion in CMAF include HDR and audio profiles; sequencing and splicing of presentations; supplemental data brands; timed metadata; and random access to tracks.
WAVE CMAF-IF has also agreed in principle with the 3GPP for the use of CMAF as a media streaming format over 5G.
Also under consideration is the creation of a CMAF-IF website with free access to CTA WAVE content and device playback specs and testing.
"It can be the one place companies can go and have pointers for all the different specs being developed," Law said.
More meetings like the one in Amsterdam will promote CMAF activity at CES, NAB, and at events in Asia.
Other points on the agenda include efforts to develop a "heat map" of relevant standards bodies, industry fora and target members; to encourage existing CTA WAVE members to promote CTA WAVE within their own companies; and to reach out to the CTA WAVE membership for participation in outward-focused messaging for WAVE and CMAF-IF.
There is also potential for rebranding the WAVE specs to "CMAF-IF Content Spec" and "CMAF-IF Device Playback Spec."
To reiterate, the main goals of CMAF are: to reduce overhead and delivery costs through standardised encryption methods; to simplify complexities associated with streaming workflows and integrations (such as DRM, closed captioning, caching); and to support a single format that can be used to stream across any online streaming device.