Sunday, 13 August 2023

Paul Dugdale: Shooting Cinematic Live Performances

NAB

It’s not surprising to learn that successful filmmakers, and editors especially, have an ear for music. It all helps with tempo and rhythm, and, of course, the audience’s appreciation of any video rests significantly on the audio. For those working in music videos, a musical background is perhaps even more important.

article here

Jeff Lee, director of education and product specialization at AbelCine, talks with multi-award-winning director Paul Dugdale, who is known for creating outstanding music documentaries, concert films, and global live events with the likes of The Rolling Stones, Adele, Ariana Grande, Harry Styles, Coldplay, Taylor Swift, Paul McCartney, and Elton John.

Not only is Dugdale musical but so is his close circle of collaborators, including a TD who is also a drummer and an editor who is also “an amazing dancer.”

Clearly it helps everyone involved that they appreciate music. Obsessed by it, even.

Filming Elton John at Dodger Stadium, the production team had little room for maneuver, since the music promo team had to plug into a juggernaut of a global tour.

With Coldplay, on the other hand, Dugdale was involved even while the band was recording its album. “They had a concept for the record, I would speak to them and their manager, and they would just sort of talk me through the ambition for the sort of visual aesthetic that they wanted the whole project to embody and we created that project literally from scratch.”

A lot of Dugdale’s process boils down to trying to find ways to capture the energy that an artist has live and transfer that to a short video.

“There’s no real staple way that it happens but the intention is kind of the same, which is to embody that stage production, and try to maximize, emphasize all of the best parts of it and try and translate that to screen,” he explains.

“And also to try to capture the relationship between artists and audience in the room. A lot of bands will say that the band sort of exists in front of an audience and without anyone listening to the music, the band doesn’t exist.

“You have to have that bounce of energy. That’s where the magic happens… to just show what it feels like to be in front of that artist and listening to that music.”

Getting close to performers, even virtually, has become more significant since COVID, when fans were shut off from attending live performances.

Dugdale used to work with English electronic dance music band The Prodigy, and every show he filmed with them was intense. “Super high energy, really loud music, everyone squished together, sweating. And, you know, it was an incredible environment.

“I just got to try and create something that at least lifts your heartbeat and makes you want to go out dancing. That’s the most important thing to me: to try and just make you feel something [when you are watching] at home.”

If the job is to capture the live show, then Dugdale prefers to prepare by watching the show and deciding on a camera plan. He trusts his DP to translate this vision into camera type and lenses.

Again, the closer he can get to the performers the more the live experience will shine through.

Dugdale goes into some detail about how he achieved a certain top-down shot of Elton John and a tracking shot of the Griffith Park Observatory.

“It’s tough filming Elton because he’s not the guy running up and down the stage. He’s sat behind a six-foot plank of wood [aka a grand piano],” he said.

“We had a bunch of different close-ups and mid shots on him but each of them had to be so super precise to work. When Elton’s playing, if he’s glancing down at the piano keys, you see the piano keys reflected in his glasses. Or you see his face perfectly reflected in the piano. All of those things have kind of been done before, but they’ve got to be right.”

Part of his job is to imagine ways of filming artists — in this case Elton John — in a way that hadn’t been done before.

“One of them was having a tracking camera that is really close to him, as though you were doing a music video or you’re shooting something in the studio, where there is no boundary of where you can go.

“So on this one night we shot with just two cameras and it tracked along the nose of the piano and gave us these beautiful tracking shots of Elton, with beautiful reflection in the piano as well.”

 


Friday, 11 August 2023

AI Imaging Tools to Refine and Define (But Not Replace) Your Work

NAB

We all strive to get our photos perfect in camera, but things don’t always go as planned. Did you or your subject move slightly? Did you have to crank up the ISO or pull out shadow detail in post? Did you have the wrong lens and have to crop dramatically to frame your subject?

article here

There’s an AI for that. Fine art photographer Angela Andrieux explains the benefits of using AI tools to finesse the perfect picture.

In Andrieux’s case the AI is invariably from Topaz Labs, a developer she also advocates for on her website.

In mid-2022, Topaz Labs shifted from offering a suite of individual photo editing applications to merging their most popular problem-solving tools into one app — Topaz Photo AI.

“I’ve been a big fan of Topaz Labs‘ software for more years than I can count,” she writes. “And while their apps have changed significantly over the years, they continue to be an integral part of my workflow.”

In the tutorial, Andrieux demonstrates how sharpening, noise reduction, and photo enlargement tools from the Topaz app can be used to process new photos and even those taken a few years back when the resolution in your camera would not have been so great.

“You’re not going to be able to fix something that’s completely blurred,” she warns. “But we can do a lot to improve it. Sometimes it’s not perfect, but it can get you close enough to have a usable image.”

Image stabilization used to be a huge issue in photography, particularly in low-light conditions where the aperture needed to be opened wider, for longer, to let enough light through. Absent a tripod, that was a problem, leaving motion blur on the image.

But sometimes in travel or street photography, for example, you don’t have time to find the correct setting let alone set up a tripod.

“It’s better to have an imperfect photo than miss the shot,” says Andrieux. “And luckily we’re at a point with technology, where we’re able to minimize most of those flaws that you might have when shooting conditions aren’t ideal. Because there’s a lot of times where we’re shooting in low light, we don’t have the ability to add in artificial light, there’s just a lot of things that can make a shot not technically perfect.”

Andrieux typically shoots with Canon cameras, converting the Raw file into TIFF format as a precursor to creative editing. Most of her images go through Adobe Camera Raw or Adobe Lightroom before she opens up an AI tool to post-process aspects of the image.

“The cool thing about AI is it does a much better job than a non-AI tool of preserving detail. In the past, and with lower quality noise reduction tools, you’d end up in a situation where you could clear out grain but you also lost a tremendous amount of detail. The AI tools that we have available to us now preserve that fine detail and get rid of the grain.”

Most of the AI processing will take some time to render. So be selective, she advises.

No AI is going to deliver a perfect photo if your basic shot isn’t most of the way there. It isn’t, for example, going to rearrange your composition — unless you dive into an AI image generator like Midjourney.

“It’s always better to have a clean file to start with as much as possible. Try not to use these programs as a crutch to just get lazy with your settings. AI can dramatically improve things but there are always trade-offs. Some of those trade-offs with noise reduction, for instance, mean sometimes you can lose critical detail.

“This is going to be most evident on things where you want to see fine detail in hair or fur on animals or if you’re taking a picture of an object and you want to have that texture. You can lose some of those fine details using noise reduction.”

The AI palette can do things that might have been the preserve of high end grading systems. Andrieux describes how you can add diffusion or automatically enhance faces or eyes or whiten teeth or remove the shine from skin.

“Every AI program will do the job a little bit differently. Some work better for different camera manufacturers. So just try them out and see which ones get the results that you like best.”


 


Wednesday, 9 August 2023

Inspiring filmmakers of the future

British Cinematographer

MetFilm School is a leading film and television media school with an approach to learning embedded in the industry.  

article here

When writer-director Nikyatu Jusu’s psychological horror Nanny won the Grand Jury Prize at Sundance last year, it not only acted as an international calling card for its cinematographer Rina Yang BSC but also as a showcase for the work of the MetFilm School where Yang studied. 

Another alumnus is globally successful producer Leopold Hughes, who has co-produced Star Wars: Episode VIII – The Last Jedi (2017), Knives Out (2019) and last year’s sequel Glass Onion: A Knives Out Mystery for Netflix. Last year, former student Tabish Habib associate produced Joyland, which won the Jury Prize at Un Certain Regard in Cannes and featured in the last issue of British Cinematographer; and filmmaker and actor Adjani Salmon was named a talent to watch by the RTS, BAFTA and trade title Screen.

With campuses in London within Ealing Studios and a second at Garden Studios since 2021, in Berlin at BUFA Studios since 2013, and in Leeds at Prime Studios, the School provides practical part-time and full-time filmmaking courses from entry level upwards, including an intensive BA and an MA programme. 

The Leeds site has barely been open six months and has 100 students on a campus set to grow dramatically in the next few years, according to MetFilm director Jonny Persey. 

“What is really fascinating is that we attract students from all over the country to both London and Leeds, both places at the heart of the film and TV industry. Similarly, Berlin is a vibrant film destination. In all three campuses the industry demand for our graduates is very strong.”  

MetFilm School was launched in 2003 by Thomas Høegh, owner of Picturehouse Cinemas in Clapham where its first short courses were held. Persey joined two years later, coinciding with the school’s move to Ealing and the launch of its first one-year course. 

“Twenty years ago, film courses were very expensive by necessity because of the cost of kit and lack of funding,” he says. 

The School’s story since then has been one of growth through accessibility. Its footprint widened geographically, and introducing undergraduate programmes enabled it to access government funds. 

“It allowed us to broaden our intake, including international students, and brought our costs in line with that of a typical university level programme. We introduced bursaries and a scholarship programme and then MA programmes. 

“One constant remained which is to inspire students that working at the very top of the industry is not something to just aspire to but something that is within reach.” 

Persey is also an independent film producer and leads MetFilm with the philosophy that the best way to learn is to actually make films with the support and guidance of industry practitioners. 

He says, “You can’t make films without having a professional environment which is why, when you walk into MetFilm, it is very deliberately an environment that looks like the film and TV industry and not a university. At the same time we work hard to bring academia and real-world experience closer together.” 

Students are able to use the virtual production stage at Garden Studio’s Park Royal studio complex. The facility’s existing 12m x 4m VP stage is about to get a bigger sister stage featuring a 26m x 4.5m ROE semi-circular volume. Rental company Procam Take 2 has a hire shop onsite. MetFilm’s Berlin facilities include a 1,500sq ft studio, two standing sets, a 5:1 surround sound screening room, and post-production suites. 

Aside from the hardware, MetFilm School operates alongside MetFilm Production, an award-winning feature film production company; and MetFilm Sales, a boutique international sales agency. 

“Production, sales and distribution all exist independently and synergistically with MetFilm,” Persey says. 

MetFilm Production has made 25 features including the 2016 documentary How to Change the World, which won a special jury award at Sundance and a prestigious Grierson Award. For the last 10 years each project has involved a significant number of undergrads. 

The Reason I Jump, produced with 19 MetFilm students and graduates on the crew, won the 2020 Audience Award at Sundance and secured distribution on Netflix in the US and Disney+ in the rest of the world.  

MetFilm is currently working on its biggest project yet, a four-part TV series on which 25 students are involved.  

Yang is perhaps the most notable alumna of the school’s cinematography course. She was recently accredited as a member of the BSC, the first Asian woman to break that ceiling. 

Of the school’s 1,000 full time students, less than half the current annual intake are female, although this ratio is improving with initiatives such as a Women in Screen Industries scholarship. 

“Achieving diversity takes time. You are seeing more and more women succeeding in demonstrating prowess in all aspects of filming including in technical and traditional male dominated roles.” 

Addressing diversity in race, Persey says, “It’s important to recognise the progress that has been made but also not to celebrate it at the expense of being complacent.” 

Later this year, a new series of undergraduate programmes will launch around technology-based production. MetFilm is working with Garden Studios to develop the courses which will complement existing screen acting courses using performance capture. 

“The technology and the methodology of virtual production is changing so fast which is both incredibly exciting in terms of its potential for storytelling but means everyone is learning on the job.” 

It’s working though: former student Marta Baidek was VP manager on Ant-Man and the Wasp: Quantumania. 

Data produced by the government shows MetFilm to be among the top higher education institutions for postgraduate earnings and the extent to which alumni are in employment. 

“Our strategy is to give everybody the skills to get a job. We also know from former students that they are succeeding and that the work we are doing is making a difference.” 

Tuesday, 8 August 2023

Philip Grossman: Production in Impossible Places

NAB

Flashlights and wet wipes are just a few of the essential items in the gear bag of award-winning photographer and cinematographer Philip Grossman, whose specialty is shooting in extreme and off-limits locations.

article here


Grossman has been engaged in a long-term project in Chernobyl that led to his involvement in the award-winning HBO series Chernobyl. He also produced and hosted a one-hour episode of the Discovery Science Channel’s Mysteries of the Abandoned entitled “Chernobyl’s Deadly Secrets,” and is working on a documentary about the Soviet Space Shuttle program in Baikonur, Kazakhstan.


“One of the things you need to do if you’re going to get into adventure cinematography, and filming in unique places, is get accustomed to things not being like they are at home and just accept them,” he says. “It’s the journey. And so that’s what makes this all fun.” 

Grossman is both a still photographer and documentary camera operator, usually employing a Canon for the former and RED cameras for video, but occasionally he will use the lightweight RED Komodo and combine the two disciplines.

“I found that by having a different body for doing stills versus motion, it causes your brain to switch gears faster. But at the end of the day you’ve got to figure out what you can carry and fit in your bag to achieve the goal of what you’re trying to capture. I always have my iPhone with me and a GoPro because they’re tiny.”

Grossman has the distinction of being the first person to fly a multi-rotor drone in the Chernobyl Zone of Exclusion as part of his work in documenting the aftermath of the nuclear catastrophe: “The smaller drones the better. Drone footage for me it’s like Tabasco, you just want to splash a little bit into your food. If you just coat your food and everything with it, it loses its pop.”

Managing power in remote locations in the field is a particular skill. One tip (possibly against airline rules) involves swapping the stickers on camera batteries so that they look like lower watt-hour units in order to take several as carry-on on an airplane.

Another tip is to turn the camera off rather than have it in “sleep” mode for power management. He also carries a USB-C charger for charging using the USB port in cars traveling between destinations.

“You just have to realize that eventually you’re going to run out of power,” he explains. “I have some friends who will go longer than a week and take solar with them but sometimes solar isn’t an option.”

There’s no substitute for preparation, however, when filming in places like off-the-radar NATO bases.

“Google Maps is your friend. Do not be afraid to call or contact the [US] embassy or the foreign embassy. It is government bureaucracy and will cause you to pull your hair out sometimes but they’re a great resource, and they’re really there to help you.

(Grossman has found in foreign countries that most of them are excited to have Westerners visit, especially in the former Soviet Socialist Republics.)

Do a lot of research, read lots of books, and find obscure books.

“If you’re not familiar with Wikimapia, become familiar with it. Again, great resource. It’s like Wikipedia, but it’s crowd-sourced map information,” he advises. “So if you actually go to Chernobyl, to the city of Pripyat, just about every building has been documented there. Just like Wikipedia, Wikimapia is not 100% accurate, but does give you a good sense of what’s there.”

He also pays for real-time satellite images: “It’s definitely a value.”

Often in tight or blackout situations, Grossman has learned the importance of taking multiple flashlights, which he also supplements with inch-and-a-half glow sticks, “party sticks for raves and necklaces… I got a pack of 100 for about $9 on Amazon. I always keep 10 of those in my bag at all times now because you can crack those, throw them on the ground and they become breadcrumb trails for you.”

He even has thermal scopes for some work and a satellite phone, you know, just in case.

He’ll take tripods provided they are compact carbon fiber models with quick-release heads that fit into a small bag when collapsed.

Insurance is a must, and he always carries cameras with him rather than checking them into the baggage hold.

“A couple of things that I’ve found that helps. One, believe it or not, is I have a Department of Defense sticker that is on my Pelican case and ever since I put that on they have never opened it. I don’t use the TSA locks anymore. I literally just put a zip tie on it,” he says.

“But by all means check with your insurance company. There are different policies that will cover different things. Always have serial numbers for electronic items. If you have the original receipt, that’s great.”

When working with locals, or perhaps as a “gift” to officials, he recommends liquor: “Small little airplane bottles of bourbon is the greatest thing in the world.

“The other fun thing is breaking bread with the locals. We always make a point when we go. Fortunately for me, my filming partner is Polish and so his Russian is far better than mine but we always try to find locals prior to going or when we get there who can help us.”

The key here is respect, even when you have permissions to film: “Always be polite.”

 

 

Scott Robert Lim: How to Be a Hybrid Shooter

NAB

Every pro and semi-pro content creator should learn how to shoot video and extract still photos if they want to make money in future, according to expert Scott Robert Lim.

article here

“Everybody should get into video, everybody should start to become a hybrid shooter,” he says. “There are a billion video views per day on Facebook and 86% of businesses use video as a marketing tool. So if you’re not learning how to do it, you’re losing out on money.”

Lim, who is a certified educator and winner of over 70 international awards including Top Ten Most Influential Photographers, details how stills shooters can shift into video in a masterclass presented by Sony and B&H.

This intensive session provides an overview of how stills photographers can shift their skills into video and extract still frames so that they are performing one workflow but expanding the range of platforms and publications for their work.

“I wanted to show this concept of pulling frames because I believe that this is literally the future,” Lim explains. “I know it’s a little bit early for some people but in five years, this is going to be common practice, especially as technology gets better and better.”

Every image in Lim’s hour-long presentation is a still pulled from video. He says that he first started experimenting with the concept during COVID, and because he realized that video now comprises nearly 100% of all media online.

“It was totally amazing to me that I was able to get a frame and then in Photoshop do my magic [and create] a totally usable image from something that I just thought was a castaway video.”

He shows how he pulled a still from a video shot at 8K and increased the resolution in Photoshop (using its AI tools) to a 32-megapixel file that any leading magazine would be comfortable using in print.
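The pixel arithmetic behind that claim is easy to check (my own back-of-the-envelope calculation, not from the masterclass): an 8K UHD frame already contains roughly 33 million pixels before any AI upscaling, which is consistent with the print-ready file described above.

```python
# Megapixel arithmetic for common video frame sizes. The resolutions are the
# standard UHD/HD dimensions; the mapping to Lim's workflow is an illustration.
FRAME_SIZES = {
    "Full HD (1080p)": (1920, 1080),
    "4K UHD": (3840, 2160),
    "8K UHD": (7680, 4320),
}

def megapixels(width: int, height: int) -> float:
    """Return the frame's pixel count in megapixels, rounded to one decimal."""
    return round(width * height / 1_000_000, 1)

for name, (w, h) in FRAME_SIZES.items():
    print(f"{name}: {megapixels(w, h)} MP")
# An 8K UHD frame works out to about 33.2 MP, so a ~32-megapixel
# deliverable is within reach of a single pulled frame.
```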

Of course, it’s a little more complicated than that. While with stills you might worry about resolution and format (along with composition and lighting, of course), if you’re pulling frames from video there are multiple parameters to juggle.

Lim dives deep into frame rate, shutter speed, bit rate and compression. He also discusses color bit depth and color sample rates.
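One of those parameters matters especially for frame-pulling: shutter speed. The conventional 180-degree shutter rule (a standard cinematography guideline, not a setting attributed to Lim) puts the shutter at 1/(2 × frame rate), which gives natural-looking motion blur in video but softer pulled stills; a faster shutter freezes individual frames at the cost of choppier motion. A minimal sketch:

```python
def shutter_for_180_degree(fps: float) -> float:
    """Shutter speed in seconds under the 180-degree shutter rule:
    the shutter is open for half of each frame's duration."""
    return 1.0 / (2.0 * fps)

# At 24 fps the rule gives 1/48 s; at 60 fps, 1/120 s.
for fps in (24, 30, 60):
    print(f"{fps} fps -> 1/{int(round(1 / shutter_for_180_degree(fps)))} s")
```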

“That’s the reason why video is difficult, because the quality changes according to all these little things,” he says. “Your workflow [will mean] you create the video and then extract stills from it, so the workflow is much more like killing two birds with one stone.”

For instance, for social media the standard requirement is HD (2K), not higher resolution. So for social media you will only have to shoot in full HD. Or you can shoot in 4K and crop the heck out of it, he says.

“The pixel dimensions of an Instagram reel are full HD. So if I were shooting a 4K video and let’s say I didn’t want to turn it portrait, I just wanted to keep it landscape, there’s so much resolution there I could just crop the middle and I would have plenty of resolution for the reel. Or I could flip it and shoot it in portrait and then I could crop more than half of it and still have enough resolution for an Instagram reel.”
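The crop math checks out. An Instagram reel is 1080 x 1920 (portrait full HD) and a 4K UHD frame is 3840 x 2160, so a native-resolution portrait window fits inside a landscape 4K frame, with most of the width to spare. A quick check (the resolutions are standard; the helper function is illustrative):

```python
def crop_fits(src_w: int, src_h: int, crop_w: int, crop_h: int) -> bool:
    """True if a crop_w x crop_h window can be cut from a src_w x src_h
    frame at native resolution, i.e. without upscaling."""
    return src_w >= crop_w and src_h >= crop_h

# Portrait full-HD reel (1080 x 1920) from a landscape 4K UHD frame:
# 2160 px of height covers the 1920 needed, and width is ample.
print(crop_fits(3840, 2160, 1080, 1920))  # True
# The same crop from a landscape full-HD frame fails: only 1080 px of height.
print(crop_fits(1920, 1080, 1080, 1920))  # False
```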

The information can be overwhelming, but Lim recommends just hitting video record and seeing what comes out of it.

“Some of your greatest captures you’re going to find are not the ones that are the sharpest, or the ones with the highest pixel count or megapixel count. It’s the ones that you just feel when you look at them. So go out there, shoot, have fun with it.”

Watch the video at the top of the page, where Lim reveals what he calls his “magic hybrid settings” for doing both video and stills.


Monday, 7 August 2023

Jon Radoff: Why Creativity Will Be Critical in an AI-Driven Society

NAB

We’re on the cusp of an era of fundamental change to civilization itself, according to game pioneer and entrepreneur Jon Radoff.

article here

It will be a battle for digital identity and self-expression.

“The next battleground on the internet will be between centralized AI services, and decentralized AI you could run on your own device,” he said during a presentation to the MIT Media Lab.

Radoff explained that to date our digital identities have been expressed through online game personas, social media and avatars. This has evolved toward expressing ourselves creatively such as through virtual worlds and platforms like Minecraft and Roblox.

The third era — which we are at the very beginning of, he thinks — is about empowerment through artificial intelligence, where autonomous AI agents carry out our will. But there are obvious dangers over loss of control that lie ahead.

Radoff began his presentation by inviting us to view games as proto-metaverses.

“Games are abstractions of reality,” he says. “They have elements of storytelling, there’s some kind of shared imaginary space that has to take place to play a game.”

While most games are constrained by rules, some are not, and this potential is what excites Radoff. He identifies the role-playing game Dungeons & Dragons as the first to combine rules with the freedom of its players to create their own stories and expand the scope of the game.

“It is a shared imaginative space, shared place for creativity, a place you get to take on different roles. And there’s enormous emergent play. It’s emergent because you can’t fully predict the outcome.”

The metaverse takes this concept of emergence into digital form and allows participants “to cross time and spatial barriers,” Radoff says. “The metaverse gives us a place where we can go through similar imaginative experiences without having to necessarily meet in person.”

Online multi-player games like World of Warcraft or world-building platforms like Roblox and Minecraft are inherently social, Radoff argues. It is the social connections that players are forming here that give us clues as to how the metaverse will grow.

Just as the number of possible social interactions is essentially infinite, so the emergent nature of these virtual worlds is infinite and infinitely unpredictable by design. It is something not possible with the closed HTML-based websites of the 2D internet, but the seeds of how we will navigate its 3D successor are already here.

And this has individual digital identities at its core.

“Most people are participating in online games, or in social media, maybe you’re in esports, maybe you’ve done online dating, maybe you’ve participated as a viewer, or even [participated in] live streaming. Maybe you’ve done some cryptocurrency. Maybe like me, you’re capturing your biometrics 24 hours a day and uploading it to the internet, where AI figures out how to tell me how to live a better life,” Radoff explains.

“The key idea is that our identities are now very much comprised not only of who we are physically, but who we are digitally. And that’s changing a lot of the trajectory of human civilization.”

If the current phase of our online ID is a presence on Twitter or Facebook, then the next step is our expression. It’s about what we put out online as digital beings.

Again, the first steps of this journey are already being taken in the form of digital twins. Radoff notes that just as we’re projecting our personas into digital space, we’re starting to take physical things with us into the digital space too.

“We’re going to have more and more digital twins of objects in the real world that you can scale up [online]. If you can do it in a factory, you could do an entire city. Why stop at a Smart City, when you can do a twin Earth?”

As we digitize more of the physical world, so the virtual world will impact the real world in what Radoff imagines will be a virtuous feedback loop.

“The idea of shaping worlds and exposing ourselves to them and allowing us to shape experiences that then affect us as well, such as creating an avatar online and wearing it in real life. We’re starting to blur the distinction of who we are with our digital personas online.”

But there’s yet another phase and it involves AI. Generative AI will amplify the whole virtual/real crossover and multiply the speed at which it is built. The question then becomes can we as individual humans retain agency over our own identities? Radoff thinks we can.

His solution lies in providing a framework for humans to retain agency — the power — to change and control AI. In this respect Radoff seems to agree with Web3 advocates who envision the next generation of the internet as the last chance for society to build a more equitable distribution of labor and reward.

“When I talk about projecting our will onto the online world through intelligent agents, I’m also thinking about our own agency about interpreting the online world. Right now, in the centralized version of the world [aka Web2], it’s really governed by algorithms whose objective function is revenue and EBITDA for an organization. It’s totally fine. I’m a capitalist, so I get it. But I personally want to live in a world where it’s my online experience that is optimized around the objective function that I set,” he says.

“For instance, if my intelligent agent wants to let me know that it discovered a product or service that I ought to consider paying for, because it’s looked at all the information available and pattern-matched that, based on my criteria, to what I want.”

While not necessarily convinced of current iterations of blockchain or Decentralized Autonomous Organizations (DAOs), he does think these are examples of ways to create new governance systems that work for everybody.

“We could debate the pros and cons of whether that makes sense in all or in particular cases but nevertheless, DAOs are a social system, an emergent social structure, which I think is interesting to look at. So it’s interesting to think about what happens with the agents that represent us online? And then how do we form governments around that?”

Such theorizing becomes urgent when you consider the amount of deepfake content circulating online with few guardrails in place to help people detect what is fake.

An article in Wired by Thor Benson titled “This Disinformation Is Just for You” worries that generative AI won’t just flood the internet with more lies — it may also create convincing disinformation that’s targeted at groups or even individuals.

Hany Farid, a professor of computer science at the University of California, Berkeley, tells Benson this kind of customized disinformation is going to be “everywhere.” Though bad actors will probably target people by groups when waging a large-scale disinformation campaign, they could also use generative AI to target individuals.

“You could say something like, ‘Here’s a bunch of tweets from this user. Please write me something that will be engaging to them.’ That’ll get automated. I think that’s probably coming,” Farid says.

In the lead-up to the 2024 US election, Facebook’s algorithm — itself a form of AI — will likely be recommending some AI-generated posts instead of only pushing content created entirely by human actors. We’ve reached the point where AI will be used to create disinformation that another AI then recommends to you.

“We’ve been pretty well tricked by very low-quality content,” Kate Starbird, associate professor in the Department of Human Centered Design & Engineering at the University of Washington tells Benson. “We are entering a period where we’re going to get higher-quality disinformation and propaganda. It’s going to be much easier to produce content that’s tailored for specific audiences than it ever was before. I think we’re just going to have to be aware that that’s here now.”

While recognizing the dangers inherent in unrestrained AI, Radoff is concerned that hasty regulation might stifle innovation.

“There are, of course, real safety issues that are of concern but I also don’t want to throw out the potential for all of society, including civilizational improvement effects, that will be a huge net benefit to us.”

He continues, “We already have a deepfake problem… We’ll first have to have defensive technologies. Knowing the authenticity of content, and who it came from, is going to be very important.

“The fear [about AI] is real and palpable, and it will drive politicians towards reacting to that potentially in a way that isn’t productive for innovation, and may even be a net worsening of safety. The rush to regulation worries me a lot.”

 

Behind the Scenes: The Hundred

IBC

The Hundred, the ECB’s 100-ball-a-side competition, aims to carry over momentum from The Ashes as it commences this week. Adrian Pennington takes to the crease to uncover the inside story.

article here

All 68 matches of The Hundred (34 men’s and 34 women’s) in the month-long tournament are being screened on Sky Sports, with 16 of them also broadcast live on BBC iPlayer and BBC TV (which also has ball-by-ball radio commentary on every game).

The Hundred features eight teams across seven cities competing in men’s and women’s cricket. Every day will be a double-header with a women’s match in the afternoon and a men’s game following in the evening, each innings lasting approx. 65 mins.

Sky Sports produces the host feed from which the BBC produces its own version on-site at each of the venues.

IBC365 spoke with BBC Operations Executive Andy Underhill about the production of this year’s event.

“It’s a complex process,” he explained. “BBC Sport takes the baked cake that Sky has produced and strips it back to its raw ingredients before putting it back together to create a BBC-branded version.”

Sky’s feed is produced in UHD HDR remotely with all cameras switched back in Osterley. The BBC takes a signal from Sky down-converted to 1080i and produces its presentation as an OB.

This introduces a delay of up to 15 frames, or 600 milliseconds, which needs to be matched against the BBC’s own camera feeds, which are ingested directly into the OB van.

Underhill explained: “Everything we get from Osterley we have to put through vision delay so when we cut them with our cameras we don’t, for instance, see a bowler perform the same action twice. The first thing we have to do each day is to make sure our replay wipe is in sync with the picture timing. This is done by matching the GPI trigger in the vision mixer at Sky with the trigger in the vision mixer in our truck.”
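The frame-to-millisecond arithmetic behind this delay matching is simple but unforgiving on air. A minimal sketch, assuming the UK broadcast rate of 25 fps (1080i/25) implied by the article’s figures — the function names here are illustrative, not from any real broadcast control API:

```python
# Sketch: computing the vision delay to apply to local camera feeds so cuts
# between the remote-produced host feed and local ISO cameras stay in sync.
# Assumes 25 fps (1080i UK broadcast); not tied to any real vendor API.

FPS = 25  # frames per second for 1080i/25


def frames_to_ms(frames: int, fps: int = FPS) -> float:
    """Convert a delay measured in frames to milliseconds."""
    return frames * 1000.0 / fps


def local_delay_to_apply(host_feed_delay_frames: int) -> float:
    """Delay (ms) to add to locally ingested camera feeds so they line up
    with the down-converted host feed arriving that many frames late."""
    return frames_to_ms(host_feed_delay_frames)


if __name__ == "__main__":
    # The article cites up to 15 frames of delay on the remote feed:
    print(local_delay_to_apply(15))  # 600.0 ms, matching the figure quoted
```

At 25 fps each frame lasts 40 ms, so the 15-frame delay quoted above works out to exactly the 600 ms the article mentions.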

BBC Sport augments the host broadcast with ten cameras at each ground, including two box-lens cameras for ISO coverage such as close-ups on players, an RF camera for a reporter roving among the fans, and PTZs in the comms boxes.

“We tend to try to find the highest building on site or adjacent to each ground to allow us to see the whole field at any time, to explain fielding positions, for instance,” he said.

“The nature of this format of the game is very fast paced. In conventional cricket you get a pause to analyse every six balls, whereas here it is every ten balls. There is less time to fill so you’ve got to be quick to keep up with on-field action.”

A feature of The Hundred coverage is the ability for commentators to talk live to players while they are on the pitch. This hasn’t yet been tried with the batting team, where concentration is perhaps a higher requirement than for a fielder.

“Also, we know that a fielder is probably going to be on the field for much more time than a batter,” said Underhill.

Sky and the BBC can each select one member of the fielding team to mic up. The commentary team, which includes former and current players (Isa Guha, James Anderson, Heather Knight, Moeen Ali and Alex Hartley among them), can converse with the fielder between overs about tactics, the weather, the pitch or just general banter.

While the lavalier RF packs stay the same from last year, what has changed is a lighter and more comfortable harness for the players to wear while carrying the gear.

Another defining element of the entertainment format is the inclusion of live music. For certain games, the BBC is producing and broadcasting select tracks live from the ground via a dedicated four-camera OB unit. That cut is sent to the main BBC truck, where it is switched and sent on to the network.

Artists performing this year include DYLAN kicking things off at Trent Bridge, Rudimental headlining the final at Lord’s on 27 August, and various BBC Music Introducing artists in between.

The BBC takes a double-expander OB unit and a non-expander VT unit to each match, both supplied and crewed by Cloudbass in the second year of its contract with the BBC for The Hundred (Arena, the now-defunct OB outfit, handled the first event in 2021). AE Live provides the graphics on site.

Connectivity is managed by NEP Connect using Net Insight Nimbra units. This includes transporting signals from Sky’s trucks (supplied by EMG) back to Sky and the output from Sky back to the BBC OB. The BBC also has a satellite backup for its own transmission, also managed by NEP Connect.

Umpire adjudication is conducted just as it is for a Test match, with Hawkeye ball and player tracking, although, unlike football, this is done locally rather than remotely.

A ‘dirty feed’ of all matches is sent to the BBC in Salford where a team can cut highlights packages for insertion into the programme package.

Trent Rockets beat Manchester Originals in the 2022 men’s final with captain Lewis Gregory’s unbeaten 17 from six balls steering his side to their target of 121 with two balls to spare.

In the women’s event, Oval Invincibles defeated Southern Brave in the final for the second year in a row, topping Brave’s total of 101 with six balls in reserve. Teenager Alice Capsey took two wickets with her off-spin and then hit a quick-fire 25 in Invincibles’ successful chase.

The BBC is the ECB’s free-to-air broadcast partner in a deal that runs until next year. Last year the ECB signed a new four-year deal (2025-2028) with Sky worth over £220m a year, meaning The Hundred will air on Sky until at least 2028.