Monday, 7 November 2022

Get You Up to Speed: Where We Are With Free Ad-Supported Streaming TV

NAB

Free ad-supported streaming TV (FAST) is all the rage. It’s the new linear distribution model that delivers pre-programmed content to a mass audience via connected devices. According to Julia Stoll, a research expert for Statista, more than half of TV viewers in the US (58%) are watching FAST services, up from 40% in 2020 and 48% in 2021.

article here

nScreenMedia’s Colin Dixon shares from a new report that the FAST market in the US is projected to be around $4 billion in 2023. Pluto TV, Tubi, and The Roku Channel are among the best known, but even Google has jumped on the FAST wagon by adding Pluto TV channels to the Google/Android live guide and introducing YouTube’s own FAST offering.

FAST has become a major force in the APAC region where solutions developer Amagi charts a phenomenal YoY growth of 320% in total hours of viewing and a staggering 891% YoY growth in ad impressions across FAST channels.

“What the data tells us is loud and clear,” Amagi writes. “Now is the time to build your presence across FAST platforms with broadcast grade linear channels. Now is the time to reach the growing global audience base for FAST. Now is the time to tap into the increasing ad revenues generated across this space. And if you already have a presence here, now is when you must shift gears to strengthen your brand.”

Its white paper, “FAST 101: A complete guide to thriving in the Free Ad-supported Streaming TV world,” is a guide to building and monetizing the right FAST channel.

What makes the FAST model different from AVOD (Advertising-supported Video on Demand) is the linear content distribution. In simpler terms, a FAST channel is like a traditional TV channel that has fixed programming, schedules, and advertising. On the other hand, the AVOD model, apart from being ad-based, lets viewers choose what they want to watch in an on-demand manner.

The reasons for its popularity? Aside from being free, Amagi contends that FAST channels offer better content discoverability.

“When you opt for a video/music streaming service, all you are looking for is entertainment. While AVOD and SVOD services offer a lot of options to choose from, we no longer seem to want just that,” the report says. “All we want is a fuss-free viewing experience. Since FAST channels offer content in a pre-programmed, linear fashion, all we have to do is watch. FAST platforms, therefore, offer much better content discoverability compared to its AVOD & SVOD competitors.”

Another reason is the growing base of Connected TVs (CTV) in the home which is indirectly helping pave the way for FAST as well. Smart TV makers, like Samsung and LG, are increasingly offering inbuilt FAST channels, thus helping the free ad-supported model gain more popularity.

If you look at this trend in terms of numbers, the global CTV market is estimated to be around $107.82 million and is expected to hit $115.8 million by 2028. “Proof that some big milestones are in store for the CTV and FAST businesses,” Amagi comments.

FAST channels (and to be clear, you won’t find consumers who actually know what FAST means) are perceived to be synonymous with content choices across a wide spectrum of genres.

Apart from licensing movies and series from media houses and companies, many FAST players are also actively engaging in creating some original content. According to The Roku Channel, the top 10 most watched programs on their channel from May 20 to June 3 in 2022 were all originals.

Moreover, “FAST content typically encompasses pieces that you won’t find on a traditional broadcasting channel,” Amagi explains. “Old content archives that no longer look like a source of revenue can be repurposed effectively using FAST channels. This content can be utilized in different formats, thus opening up new revenue streams.”

Plus, as the name says, there’s advertising. The greater the audience, the greater the monetization potential.

The report poses the questions: “If you go by the traditional approach, you can monetize your content by syndicating it via a TV broadcaster. But is that enough? Does this do justice to your rich content catalog? The answer is no.

“While traditional broadcasting limits your horizons, FAST platforms provide you the opportunity to monetize your content in new and innovative ways. You can repurpose your old content in different formats that is otherwise yielding very little monetarily.”

FAST enables a content provider to serve the best live, linear and VOD content and to hook up with an advertising platform to monetize those channels using the power of Server-Side Ad Insertion (SSAI) technology.
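SSAI is worth unpacking for a moment. Unlike client-side ad insertion, where the player fetches ads separately, SSAI stitches ad segments into the content’s segment list on the server, so the viewer receives one continuous stream. Here is a minimal sketch of that stitching idea — illustrative only, with made-up segment names, not any vendor’s actual API:

```python
# Illustrative sketch of the core SSAI idea (not a real vendor API):
# the server splices ad segments into the content's segment list
# before it reaches the player, so the stream arrives as one
# continuous piece of video.

def stitch_ads(content_segments, ad_segments, break_points):
    """Return a single playlist with the ad pod spliced in before
    each break point (indices into the content segment list)."""
    playlist = []
    for i, segment in enumerate(content_segments):
        if i in break_points:
            playlist.extend(ad_segments)  # insert the ad pod here
        playlist.append(segment)
    return playlist

content = ["c0.ts", "c1.ts", "c2.ts", "c3.ts"]
ads = ["ad0.ts", "ad1.ts"]

# One mid-roll break before content segment 2:
stitched = stitch_ads(content, ads, break_points={2})
print(stitched)
# ['c0.ts', 'c1.ts', 'ad0.ts', 'ad1.ts', 'c2.ts', 'c3.ts']
```

Real SSAI systems operate on streaming manifests (HLS/DASH), personalize the ad pod per viewer via an ad decision server, and handle timing and tracking beacons, but the splice-at-break-points principle is the same.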

Amagi suggests: “To further get better monetization results, you can experiment with some innovative ad formats” such as contextual video ads, graphic overlay ads, and dynamic brand insertion. “They are non-intrusive, more relevant to the content being presented and account for an overall enhanced and seamless experience for the end viewers.”

Advertising on CTV is highly driven by numbers and metrics. A large pool of data that is dissected and thoroughly analyzed — to understand what is working and what is not — works in favor of content owners, platforms, and advertisers.

 


Thursday, 3 November 2022

Your Daily Life in the Metaverse

NAB

The metaverse, like the internet, will be ubiquitous and omnipresent. But just as we’re often unaware of being online, the metaverse will sit in the background of most day-to-day experiences.

article here

So says technology commentator Matthew Ball in an interview with New York Magazine’s Benjamin Hart. Ball predicts that the metaverse will not be a substitute for the internet, nor does it have to involve the total immersion of a VR headset as the likes of Meta’s Mark Zuckerberg would have you believe.

In fact, there are practical examples all around us currently that show how the metaverse will assist our daily lives, even if we’re not conscious of it.

For example, few people can imagine driving a car without a GPS navigation system. It makes the whole experience of getting from A to B just a little easier.

“We don’t drive GPS instead of a car — we drive a car with GPS,” Ball explains to Hart. “We evaluate it based on whether we get there better, faster, easier, cheaper. And so for some people, [the metaverse] will be like that, a technology they use when [performing other activities]. For other people, it will surround them.”

Ball neglects to mention that GPS also gamifies the driving experience — a characteristic that the metaverse will surely amplify.

He does, however, note that Johns Hopkins University is now deploying XR (extended reality) devices to perform live-patient surgery.

“The physician who performed that surgery described it as like driving a car with GPS for the first time,” Ball says. “I love that example, because he’s talking about these technologies as a complement, not a substitute, and as part of real life as opposed to purely synthetic life.”

He continues: “When you walk into a hospital or a secure facility, you’re on the internet — your badge is validated over IP. When you check out at the grocery store, you’re accessing the internet for your transaction. Even when you’re crossing the street and you’re using a crosswalk button, it’s transmitting information through the internet.

“The metaverse is likely to be the same. Some of us will use it constantly for work and for socializing, and will do so with multiple different devices. For other people, it will be more occasional.”

However, the next-gen internet will be more than just another layer of always-on digital “censorship.”

It is about making the entire world legible to software in real time, “actually replicating existence in simulation software,” Ball notes. “There’s no way to do that without extraordinary data capture. And that’s bringing about all of these intensified questions of the role and extent of computer vision and about self-custody of data, national custody of data.”

Amazon Go retail stores are another example of an everyday metaverse use case.

“These are the convenience stores that you walk into and never check out of. What happens is there’s a network of cameras in the ceiling, and they produce a virtual simulation, dimensionalizing you in a digital twin of the grocery store, with full awareness of all of the products on the shelves. One of the ways in which they ensure that you are you is through gait analysis. And so if two individuals of similar shape and size duck behind one another to pick up an item, they’re tracked afterward through analysis of motion and movement.”

Ball seems to think this is beneficial to our grocery shopping, but I for one simply detest automated check-outs, which take longer, involve huge frustration when the machine doesn’t recognize a barcode, and crucially involve zero human interaction.

Communication is another example Ball uses. In particular, Google’s telepresence — Project Starline.

“We all find Zoom fatiguing, tiresome, alienating. And in holography, we see remarkable improvements in connection — 50% increases in nonverbal forms of communication: brow movements, head nods, hand gestures. Thirty percent increases in eye contact, 20% increases in memory recall.”

Similarly, Ball says education will benefit from metaverse technologies by being able to improve the learning outcomes for students.

“3D simulation and experimentation is clearly more engaging to children than just reading in a textbook.”

Really?

Ball also comments on how buildings using “interconnected simulations” (built in the metaverse as digital twins) can be designed to improve energy efficiency.

“So [we] look at these examples and imagine what happens as their realism improves, the devices become more intuitive, and our familiarity grows as well.”

Ball seems to acknowledge on the one hand that “the concept of exiting the real world and fully entering a new one” isn’t, in fact, something that many people really want, while on the other pointing to the younger generations as being born with a digital mentality.

“Nearly everyone born today is a gamer,” he says, suggesting that ubiquitous metaverse interactions are an inevitability. To be clear, his belief is that the metaverse will involve full-scale immersion in time once problems with VR headgear are solved.

“I think that the threshold for fully replacing one’s senses, most notably sight and sound, is much higher than was often imagined. Television doesn’t exclude the environment around you. With video games, you still know where your dog and your kids are. You need a truly extraordinary experience to replace reality in its entirety.

“The technical challenge in making a lightweight, high-performance, long-lasting battery cool (in temperature) device is really hard. This doesn’t mean that they’re never going to be successful. But it does explain why we’ve had so many false starts.”

 


Diagramming the Differences Between AVOD, SVOD and TVOD

NAB

With the launch of Netflix’s new, cheaper advertising-supported service, everyone is talking about AVOD. But that doesn’t mean AVOD or free ad-supported TV (FAST) now rules the streaming roost. Far from it, as choosing the right video-on-demand monetization model could make or break your business.

article here

Select the right one, and you can scale a profitable business. Choose wrong, and you might leave money on the table.

The following article is a back-to-basics explainer, courtesy of OTT service JW Player.

AVOD

AVOD (advertising-based video-on-demand) provides free content for viewers, funded by advertising. Instead of paying with their wallet, consumers pay with their time by watching adverts before and during the video content. This business model is used on popular video platforms like YouTube and Facebook Watch.

Many premium streaming services offer an AVOD option as an entry-level tier to their SVOD subscriptions and use ad revenue to offset lower prices. For example, Hulu’s cheapest streaming option contains ads, and Netflix’s Basic With Ads plan launches in November.

SVOD

SVOD (subscription video-on-demand) is the most popular VOD model, and it’s also the most competitive. Users pay a fixed monthly subscription fee to access a library of streamable content — whether that’s television series, movies, or sports.

While SVOD provides recurring income, winning over subscribers isn’t easy. With so many different options, consumers have had to become pickier with which services they purchase. While you can earn big payouts on a single piece of TVOD content, you’ll have to regularly publish the best-of-the-best videos to keep consumers subscribed to your SVOD service.

TVOD

TVOD (transactional video-on-demand) lets consumers buy content on a pay-per-view basis rather than a subscription model. Viewers can purchase permanent access to a video product or rent it for a limited time (or limited views) for a fraction of the cost.

TVOD models use exclusivity and recent releases to maximize income. Once the initial stream of new viewers declines, TVOD services can release the content into their SVOD or AVOD library to repurpose its value. You can find examples of TVOD content in Disney+ new releases, on Prime Video, and on hotel televisions.

A TVOD model offers more options when it comes to marketing your products. Every time you have a release, you can expand your market by advertising the new content (rather than an entire library) to anyone who’d be interested — and first-time buyers have a good chance of coming back if they have a good experience.

Meet Your Viewers Where They Are

These are the three main business models used in streaming, giving publishers the flexibility to match their content with viewers’ needs. For example, your audience might not want to pay to watch your content, but they don’t mind watching a few advertisements to get access.

In this case, an AVOD model would help you monetize your videos without alienating your audience. Other viewers might not want to punch in their credit cards to watch a single video, but they have no problem doing it monthly to get ad-free access to thousands of videos.

JW Player advises streamers to give their audience options instead of an ultimatum. Allow them to watch content with ads for a discount or provide them with an opportunity to rent (and binge) a television series rather than purchase a subscription. Since 41% of consumers will pay to avoid ads, give them a premium viewing option.

If you were operating with a TVOD model, you might let viewers buy a series on a per-episode or per-season basis with different discount prices. Or you might let viewers watch the first episode of every season for free to give them a taste of the content before they make a purchase.

“You might want to eventually create an SVOD platform, but starting with a TVOD model might be better for drawing in initial customers while you build awareness and your content library,” JW Player recommends.

In sum, more options expand your target market and increase your earning potential. You might not be able to lock as many users into a subscription contract, but you’ll improve the user experience and boost viewer retention.

 


Wednesday, 2 November 2022

Keeping the animation boom BOOMING

copy written for Sohonet

article here

Deep in the heart of Glendale, California, Renegade Animation is home to some of the world’s most talented artists. The studio’s strengths lie in personality-based character animation combined with a strong use of design, achieving a traditional look with non-traditional tools. These skills have been profiled across six seasons of Tom and Jerry produced for Warner Bros. that were respectful to the classic Hanna-Barbera cartoons of the late 1940s and early ’50s. The team also parlayed this into two Tom & Jerry features for HBO Max, animated shorts for Sesame Workshop, and feature production for Sony Pictures Animation.

Balancing growth with hybrid workflows

“We’re fortunate to be living through an amazing time for animation right now, driven by streaming services,” says Michael D’Ambrosio, editorial department manager and lead editor at Renegade Animation. “We were much more fortunate than the live-action end of the industry. Live action was shut down during the early stages of the pandemic while all aspects of animation production were able to continue, but with that came challenges. Basically, it was trial by fire. We had to figure out a way to work remotely more efficiently, and that’s where tools like ClearView Flex became critical.”

Emerging from lockdowns earlier this year, Renegade is now a hybrid studio with just a handful of artists in the office at any time. To make this move permanent they switched their existing streaming review and approval to Sohonet ClearView Flex.

“We were using another product which did the job – up to a point,” D’Ambrosio explains. “It was subpar for us, with connectivity and latency issues. There was a high level of frustration, particularly with issues of sync.”

This came to a head at the end of last year when Renegade was working on a Tom and Jerry feature for HBO Max.

Switching remote streaming solutions for rock-solid performance

“The film involved musical numbers. I cut one of them and was quite happy with it. I invited the director, Darrell VanCitters, into the office for review. To my horror, when we played it back, the sequence was out of sync by 2, 3 or 4 frames. It was missing downbeats and some impacts. This is slapstick. This is Tom and Jerry – so you notice this immediately. Tom is getting walloped, and Jerry is scooting around and we’re missing all these beats timed with the music. 

“I was scratching my head – did I really cut it this way? I had to send the director away and knew it would take me some time to trim the scene the way it should be. 

“Then it dawned on me that maybe this had happened because I’d cut it remotely. That was confirmed when I checked another scene, also cut remotely, which again had frames out of sync. 

“The whole experience led us to think enough is enough. We needed to change our remote streaming service.”

Renegade researched the market, talked with peers, and selected ClearView. “We immediately saw a difference,” reports D’Ambrosio. “It performed at a much better level for our needs and essentially gave us the confidence to finish at home.”

When it comes to performance, ClearView solved one of the bugbears that the team experienced when working remotely with previous software. “In order to work without latency or freezes the other product needed to be hard wired. But that wasn’t possible for our director working at home. ClearView connects over WiFi to a laptop without any dropout. That right there is a huge advantage.”

Capturing the magic of in-person creativity 

D’Ambrosio also appreciates that ClearView Flex separates video conference calling from the AV stream. “It gives you the flexibility to use Zoom or Teams or another video communications tool while giving you a higher quality representation of what your work is looking like. It means the director feels more confident, as an editor I feel more confident, and all round the work is better.”

Another bonus was its ease of set up. “There is hardware involved but this is a positive since hardware is usually more stable than a pure software product. And it literally was plug-and-play. I was up and running in ten minutes, tops. It couldn’t have been easier.”

He adds, “Those are the advantages that convinced us to switch to ClearView and why we’re going to stick with it.”

Renegade has one ClearView Flex module but says it will buy another as projects scale up for additional editors and assists.

“Of course, nothing beats having social interaction with others in the same room. That is still missing in any form of video calling. But since we have now become a hybrid home/office studio, in order to maintain this workflow without experiencing connectivity or latency issues, ClearView does a much, much better job.”

 


Envisioning Web3 as “The Internet of You”

NAB

Picture an internet that puts you at the center, learning and adaptive, representing the multifaceted nature of your life, interests and activities. One where you hold the power and the right to use your data as you wish. Wouldn’t that be amazing? Web3 holds that promise.

article here

In a post on Medium, Tom Vandendooren asks us to visualize the Internet of You. It isn’t hard to do.

“Imagine a world where you don’t have to adapt to the environment and its settings, but instead, the environment adapts to you. Whether that world is real or virtual, interaction will be on your terms and technology will be at your service. That’s the true promise of the Internet of You (IoU) — to have everything we’re surrounded by or interact with, adapt to us, learn from us, blend into our lives seamlessly, and remove all friction between us and our immediate environment.”

The Belgian web architect goes on to explain that the IoU is a “superset” of the Internet of Things (IoT) and the Internet of Everything (IoE), where smart tech and objects will be connected into an intelligent network providing “hyper-personalized” services and assistance.

The IoU will be a smart network powered by AI and autonomous systems that will reconfigure the world around you, “whether IRL or URL” (in real life or online), “to match your identity, preferences and affinities.”

Digitization of Everything

It’s right to assume, as Vandendooren does, that consumers increasingly expect smart services to adapt to their personal preferences, and for experiences to be tuned to their real-time or predicted needs. And while personal data is required to feed these intelligent and adaptive systems, people are no longer willing to trade off privacy without keeping control over data access and sharing permissions, and without seeing a return of tangible value.

Hence the need for an evolution of the internet — one that empowers rather than disempowers the individual. This is Web3 — “an internet that is not value extracting, but rewarding, fueling a positive-sum game where you get to participate and partake in the economic value created.”

In contrast to today’s Web2, the next version is planned to be decentralized. That lack of centralization means control shifts back to the individual, along with the data and privacy that go with it.

“Thanks to a native ownership and authentication layer, Web3 will enable and accelerate the digitization of everything,” says Vandendooren. “Smart contracts and tokens act as a digital representation of ownership, enabling verifiable ownership within and across networks. Not only can you prove that your identity, assets, and objects are yours, you can take your identity, reputation and assets from one location to another, wherever they are supported.”

Ownership and Identity

Essentially, rather than centralized and walled garden data structures, the distributed blockchain puts the onus of true ownership on the users. Users, via tokens and wallets, will manage their identities, data and assets, and ultimately decide which services and applications get access.

“As a result, Web3 allows us to expose ourselves to platforms and smart services without fear of scrutiny or intrusion of privacy,” argues Vandendooren. “By aggregating our online data and sharing the parts we find relevant to receive better recommendations and find better opportunities, Web3 enables us to carry our identity — in the form of an address with on-chain data — across apps, platforms and communities, while having control to share those parts of ourselves that we choose.”
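The control inversion Vandendooren describes — the user’s wallet, not the platform, decides which identity attributes each app may read — is easier to see with a toy model. This is a conceptual sketch only, with invented class and field names and no actual blockchain or wallet involved:

```python
# Conceptual sketch only (invented names, no real chain or wallet):
# the user holds all identity data and grants each app access to
# only the fields they choose to share.

class UserWallet:
    def __init__(self, identity):
        self.identity = identity  # every attribute the user owns
        self.grants = {}          # app name -> set of permitted fields

    def grant(self, app, fields):
        """Record which identity fields this app is allowed to read."""
        self.grants.setdefault(app, set()).update(fields)

    def share_with(self, app):
        """Return only the attributes the user has granted to this app."""
        allowed = self.grants.get(app, set())
        return {k: v for k, v in self.identity.items() if k in allowed}

wallet = UserWallet({"handle": "alice", "email": "a@example.com",
                     "purchase_history": ["book", "film"]})
wallet.grant("recommender-app", {"handle", "purchase_history"})

print(wallet.share_with("recommender-app"))
# {'handle': 'alice', 'purchase_history': ['book', 'film']}
print(wallet.share_with("unknown-app"))
# {} -- nothing granted, so nothing shared
```

In a real Web3 stack the grants and identity proofs would live in signed on-chain records rather than a Python dict, but the principle — disclosure is opt-in per app, with everything else withheld by default — is the one the article is making.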

When combined with smart contracts, and connected to smart devices and services, “a person’s digital twin will reflect and update relevant status and context in real-time, triggering … experiences and outcomes which are optimized to a person’s in-the-moment needs and desires.”

He is not the only one eulogizing such a theory. In a blog post titled “People are the New Platforms,” Web3 investor David Phelps says, “Web3 can let us transcend the limited language of existing platforms by creating more advanced living, breathing identities on-chain. … The point ultimately is that it will showcase who we are not simply as checkboxes of consumer taste, but active creators, contributors, and collaborators — as humans.”

Web3 is being billed as a resetting of the internet — one that is “By the people, for the people.” It offers us a future where we are in charge of our own identity, data and reputations, not beholden to the whims of big data hoarding companies.

“Web3 promises to shift value, control and bargaining power back to us, the users,” advocates Vandendooren, and to transfer the identity of our “sovereign self” onto the blockchain. In doing so, Web3 will move us a little closer to the promise of the Internet of You.


Tuesday, 1 November 2022

The Ways AI Is Going to Revolutionize Filmmaking

NAB

Only a few months ago the art world was agog at breakthroughs in text-to-image synthesis, but already new models have arrived capable of text-to-video. Advances in the field have been so swift that Meta’s Make-A-Video, announced just three weeks ago, looks basic.

article here

Another, called Phenaki, can generate video from a still image and a prompt rather than a text prompt alone. It can also make far longer clips: users can create videos multiple minutes long based on several different prompts that form the script for the video. The example given by MIT’s Technology Review is of ‘A photorealistic teddy bear is swimming in the ocean at San Francisco. The teddy bear goes underwater. The teddy bear keeps swimming under the water with colorful fishes. A panda bear is swimming underwater.’

“A technology like this could revolutionize filmmaking and animation,” writes Melissa Heikkilä. “It’s frankly amazing how quickly this happened. DALL-E was launched just last year. It’s both extremely exciting and slightly horrifying to think where we’ll be this time next year.”

In its paper, the Phenaki team explains that generating videos from text is particularly challenging due to the computational cost, the limited quantity of high-quality text-video data, and the variable length of videos. To address these issues, Phenaki compresses the video to a small representation of “discrete tokens. This tokenizer uses causal attention in time, which allows it to work with variable-length videos.”

It goes on to explain how it achieves a compressed representation of video. “Previous work on text to video either use per-frame image encoders or fixed length video encoders. The former allows for generating videos of arbitrary length, however in practice, the videos have to be short because the encoder does not compress the videos in time and the tokens are highly redundant in consecutive frames. The latter is more efficient in the number of tokens but it does not allow to generate variable length videos. In Phenaki, our goal is to generate videos of variable length while keeping the number of video tokens to a minimum so they can be modeled … within current computational limitations.”
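A back-of-envelope illustration shows why compressing along the time axis matters (toy numbers, not Phenaki’s actual figures): a per-frame encoder pays the full spatial token budget for every frame, even though consecutive frames are highly redundant, while a tokenizer that also compresses in time needs far fewer tokens for the same clip.

```python
# Toy arithmetic (made-up numbers, not Phenaki's real token counts):
# comparing the token budget of a per-frame image encoder against a
# tokenizer that also compresses along the time axis.

def per_frame_tokens(frames, tokens_per_frame):
    """A per-frame encoder tokenizes every frame independently, so
    the cost grows linearly with clip length."""
    return frames * tokens_per_frame

def temporally_compressed_tokens(frames, tokens_per_frame, temporal_factor):
    """Compressing in time by `temporal_factor` needs roughly
    1/temporal_factor as many token 'slices' (ceiling division)."""
    return -(-frames // temporal_factor) * tokens_per_frame

frames = 240            # a 10-second clip at 24 fps
tokens_per_frame = 256  # toy spatial token budget per frame

print(per_frame_tokens(frames, tokens_per_frame))                 # 61440
print(temporally_compressed_tokens(frames, tokens_per_frame, 4))  # 15360
```

And because the real tokenizer’s temporal attention is causal — each step attends only to earlier frames — it can keep consuming frames indefinitely, which is what lets it handle variable-length videos.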

Google also has a text-to-3D AI model called DreamFusion. This generates 3D images that can be viewed from any angle, whose lighting can be changed, and which can be placed into any 3D environment – handy for metaverse building, you would imagine.

In their paper, the DreamFusion researchers explain that existing generative AI models have been “driven by diffusion models trained on billions of image-text pairs,” but adapting this approach to 3D synthesis “would require large-scale datasets of labeled 3D assets and efficient architectures for denoising 3D data, neither of which currently exist.”

Instead, Google circumvents these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis.

After a bit more jiggery-pokery “the resulting 3D model of the given text can be viewed from any angle, relit by arbitrary illumination, or composited into any 3D environment. Our approach requires no 3D training data and no modifications to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors.”

Wonderful, you think – but such advances raise ethical questions, not least given the inherent bias of the datasets on which previous AI text-to-image engines have been built.

“As the technology develops, there are fears it could be harnessed as a powerful tool to create and disseminate misinformation,” warns MIT. “It’s only going to become harder and harder to know what’s real online, and video AI opens up a slew of unique dangers that audio and images don’t, such as the prospect of turbo-charged deepfakes.”

AI-generated video could be a powerful tool for misinformation, because people have a greater tendency to believe and share fake videos than fake audio and text versions of the same content, according to researchers at Penn State University. 

The creators of Phenaki write in their paper that while the videos their model produces are not yet indistinguishable in quality from real ones, it “is within the realm of possibility, even today.” The model’s creators say that before releasing their model, they want to get a better understanding of data, prompts, and filtering outputs and measure biases in order to mitigate harms.

The European Union is trying to do something about it. The AI Liability Directive is a new bill, part of a push from Europe to force AI developers not to release dangerous systems.

According to MIT, the bill will add teeth to the EU’s AI Act, which is set to become law around a similar time. The AI Act would require extra checks for “high risk” uses of AI that have the most potential to harm people. This could include AI systems used for policing, recruitment, or health care. 

“It would give people and companies the right to sue for damages when they have been harmed by an AI system—for example, if they can prove that discriminatory AI has been used to disadvantage them as part of a hiring process.

But there’s a catch: Consumers will have to prove that the company's AI harmed them, which could be a huge undertaking.”

 

“The Banshees of Inisherin:” Martin McDonagh Tells a Wonderful/Terrible Story

NAB

Much of the pleasure of watching The Banshees of Inisherin comes from the chemistry of its lead actors but critics wouldn’t be going nuts for this film if it were just a trivial confection of pub banter.

article here

Writer-director Martin McDonagh has fused his trademark dark humor with something altogether more profound about the nature of friendship, creativity and mortality.

It follows a soured friendship between the cheerful but dim Pádraic (played by Colin Farrell) and the more tortured, artistic Colm (Brendan Gleeson), who summarily tells Pádraic one morning that he no longer wants to be pals. Over the course of the film, Pádraic’s initial bafflement curdles into resentment, and Colm’s attempts to stay away from him in their tiny community fail repeatedly.

On the face of it, a relationship breakup is a thin plot on which to hang a film, but this was McDonagh’s starting point.

“I just wanted to tell a very simple break up story,” he told Deadline. “And to see how far a simple comedic and dark plot could go.”

For all its comedy, the drama is best described as a melancholic ballad. McDonagh, who won best screenplay at the Venice Film Festival, tried to imbue the friends’ breakup “with all of the sadness of the breakup of a love relationship… because I think we’ve all been both parties in that equation,” he told The Guardian. “And there’s something horrible about both sides. Like knowing you have to break up with someone is a horrible, horrible thing as well. I’m not sure which is the best place to be in.”

Depicting that sadness accurately was his intent, he explained to AV Club: “It was about painting a truthful picture of a breakup, really. A sad breakup, a platonic breakup, which can be as heavy and sad and destructive as a divorce, as a sexual or loving relationship coming to an end.”

There’s more to the film than this. Setting the story in Ireland in 1923, with the Irish Civil War playing out in the background, works as a metaphor that spins the tale into a wider web.

“You don’t need any knowledge of Irish history,” McDonagh told The Atlantic’s David Sims. “All you need to know, really, is that [the civil war] was over a hairline difference of beliefs which had been shared up until the year before. And it led to horrific violence. The main story of Banshees is that, too: negligible differences that end up, well, spoiler alert, not in a good place.”

The one-time friends’ divide spirals into violence so quickly that the original relatively mild cause for dispute is forgotten. “I think that’s what was interesting about this story, that things unravel and get worse and worse, sometimes without, oftentimes without intending to,” McDonagh told Uproxx, “And then become unforgivable and irreparable. And I guess that’s true of wars as much as is true of this little story about the two guys.”

There are other layers too. Not least of which is what IndieWire fingers as McDonagh’s “deep questions about national identity,” including his own. Despite writing Irish characters (in this film and his debut In Bruges) and setting previous theater plays in the country, McDonagh hails from London, although his parents are indeed from Ireland’s west coast.

McDonagh’s last movie set in the country was the 2004 short Six Shooter, which won an Academy Award. McDonagh’s first trilogy of plays, starting with The Beauty Queen of Leenane in 1996, took place in Galway and his second trilogy — which was unfinished — took place on the Aran Islands; Banshees was shot on Inishmore and Achill, two islands off Ireland’s west coast.

Inisherin itself is fictional, partly to put the real events of the civil war at one remove from the events on screen, and also because he and cinematographer Ben Davis use the landscapes of two islands to convey the dueling personalities of his two main characters: They shot Colm’s home on Achill Island, where the rugged terrain matched his mood; the less sophisticated Pádraic had his scenes shot on Inishmore, which is comparatively flatter.

“All in all, it certainly seems like McDonagh wants to grapple with the history and personality of the country after setting it aside for almost two decades,” notes IndieWire.

At the same time, his depiction of Ireland risks backlash. “There’s a certain degree of unease in Ireland about McDonagh’s post-modern, heightened versions of Irishness,” Irish film critic Donald Clarke told IndieWire. “The films and plays do well here. But there is a tension in Ireland about his treatment of the country.”

Critics also point to supposed southern stereotypes in his Oscar-nominated Three Billboards Outside Ebbing, Missouri. IndieWire points out that McDonagh was often lambasted on the promotional tour of that movie for depicting a racist police officer (Sam Rockwell) with some measure of empathy.

“His characters are exaggerated to an almost allegorical degree in order to comment on the society around them, which has led some American audiences to see his view of the country as naïve,” writes Eric Kohn. Banshees burrows into the stereotype of Irish people at pubs, guzzling pints to the tune of ebullient folk music, and molds it into an emotionally resonant character study.

That character study is also linked to a meditation on death – and how an artist should make best use of their time. In the film, Colm is a musician who wants to use the rest of his days creatively, rather than sitting in the pub with Pádraic talking nonsense. Which raises several questions, including: Do you have to be selfish and cruel in order to create? Can an artist be nice?

That is accompanied with a threat: If Pádraic doesn’t leave him alone, then Colm will start lopping off his own fingers.

“I thought it was interesting that an artist would threaten the thing that allows him to make art,” McDonagh said. “Does that thing make him the artist?”

It’s clearly something that preys on McDonagh’s mind. “I’m 52. You start thinking, Am I wasting time? Should I be devoting all my time, however much is left, to the artistic?” he told Sims. “That’s something that’s always going on in my head—the waste of time, the duty to art, all that. So you start off being on [Pádraic’s] side and understanding the hurt, but you have to be completely truthful to the other side … You should feel conflicted.”

McDonagh says he has decided to spend what creative time he has left – he reckons “around 25 years” – making films rather than plays. His reasoning? Films are quicker.

“I always used to think they took longer than plays, but with this one we were filming it a year ago, and now it’s out,” he says in the Guardian interview. “But if you’re lucky enough to have successful plays, to get that right with each move, to cast it and take care of it, go to rehearsals, that’s five years of your life.”

It was also clearly nagging at him to unleash the genie of Gleeson and Farrell’s chalk-and-cheese interplay that audiences lapped up in cult hit In Bruges (2008).

“It feels like it was two days ago that we made In Bruges together but time passes so quickly,” he said in response to The Playlist wondering if there’ll be a third collaboration. “None of us are getting any younger. I don’t have an idea now, but just that little ticking bomb is somewhere in me. So, I do want to get them back together.”

In Banshees, McDonagh “reunites the pair only to break them up in the first scene, a delectable bit of cruelty for the audience,” observes Sims.

Although he made In Bruges to his satisfaction, the director apparently faced pressure from execs at Focus Features at every turn. He now insists on final cut, and got it on Banshees, a movie produced by the Disney-owned Searchlight. IndieWire points out that his four movies have all been made for around $15 million, a manageable scale by studio standards that lets McDonagh get away with creative freedom.

“That is the reason why the films are singular,” he said. “It is all me. It hasn’t been watered down, for good or bad.”