HPA
“I am passionate about creating technology that enables us to translate the creator’s true intent,” technologist and neuroscientist Poppy Crum told the TR-X audience at the 2020 HPA Tech Retreat. “For me, there is a truth that the content creator wants me to feel – whether that’s fear, joy, disgust…. But my environment – and technology that assumes we all look and react the same – doesn’t always allow that. That there are better ways of ensuring that the intent reaches every viewer or listener in the richest way possible is, I believe, a goal worth striving for.”
Crum is deep into the future of what happens when technology knows more about us than we do, and she believes that although it’s easy to jump to the unsettling, cautionary or dystopian tales of Mission Impossible or 1984, this kind of technology, implemented ethically, is not a bad thing. In fact, she feels, “it’s probably the most empowering opportunity we have to enrich and elevate experiences of storytelling and technology capacity for individuals of all demographics and biological composition.”
Today, she says, AI algorithms can detect our slightest facial microexpressions, differentiating between a real smile and a fake one, predicting or diagnosing our mental or physical health from the patterns of our speech, or even knowing whether we may have early signs of illness or are feeling emotions such as joy and suspense solely from the chemical composition of our breath. Whether we like it or not, we were sharing a lot about our internal states long before we made it commonplace to wrap ourselves and our spaces in digital devices that track our every move, exhale, or beat. But now we do, and there is a lot it can do for each of us.
Formerly Research Faculty in the Department of Biomedical Engineering at Johns Hopkins School of Medicine and now Chief Scientist at Dolby Laboratories and Adjunct Professor at Stanford University, Crum has spent a lot of time studying the circuits of the human brain that create the unique perceptual realities that we all possess.
“Each of us hears and sees the world differently as a result of the interaction between our individual biological capacity and the environment around us,” she said. “The distributions of colors, contours, sounds we surround ourselves with in our urban or rural environments are vastly different. These shape our unique perceptions, and through neuroplasticity impact the way our brain allocates its resources. For example, if I have spent a lot of my life in a rural desert environment without a lot of difference in hue, fewer sharp edges, and more need for me to identify subtle shifts in shading and contour, my brain will allocate more resources to decoding the more limited set of hues and shifts in contours in order to be effective in that environment. In contrast, years spent in noisy cities will shape how we are each able to react and effectively attend to resolution across a wider color gamut, sharper edges, and the intensity and cacophony of sounds that surround us.”
Our relationship with technology and, by extension, the context we experience it in also shapes us. For example, someone who has just played their first forty hours of Call of Duty may be forever changed. They can be expected to have heightened visual acuity and faster, more effective probabilistic inference, which is critical to strategic planning.
“Any time you build a new way of interacting with content or technology you are affecting human capability,” Crum said. “The point is that we can do this by design.”
This is already happening. Netflix, for example, uses data about its users to tailor film and TV recommendations to individual profiles. Its algorithms even adapt the color of artwork and font size. Audio playback systems can position the sound of an object in space in accord with the creative intent. Emerging technologies like object-based broadcasting allow the content of programs to change according to the requirements of each individual audience member. Silicon Valley and Hollywood studios make use of electroencephalograms (EEGs) to understand how our moods affect the content we watch.
But all of this merely scratches the surface of the possible.
At present, Crum contends, most of our technology is built as ‘one size fits all,’ geared towards pretty much one demographic – typically a white male. It is not personalizing the way content is perceived at the individual user level in a way that delivers the true intent of the artist or the technology.
“I’m not talking about avatars replicating emotions. I am saying that technology is not responsive to my internal state. Even the most intelligent thermostat on the market does not know whether I’m hot or cold, or what I’m trying to do at that moment. If it takes even a small amount of information learned through signatures of combined sensors from the environment or wearable technologies into account, then suddenly it is remarkably more effective at facilitating the goal of translating the technology or creator’s intent and improving the user’s experience.”
This goal is tantalizingly in reach.
“We already record detail about the creative intent in metadata. You can imagine extending that to capture more information about the emotions and feelings intended by a creative scene or effort. In addition to which the ubiquity of sensors and the ability to amalgamate our personal and biometric signals offers a way of closing the loop.”
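To make the idea concrete, here is a minimal sketch of what such extended per-scene metadata might look like. The field names, values, and structure are hypothetical illustrations, not an existing metadata standard or anything Crum described in detail.

```python
# Hypothetical example of per-scene creative-intent metadata extended with
# emotional intent. Field names and value ranges are illustrative only;
# no existing metadata standard is implied.
scene_metadata = {
    "scene_id": "reel3_scene12",
    "color_grade": "warm_low_contrast",        # conventional technical intent
    "audio_objects": ["wind_bed", "heartbeat_fx"],
    "emotional_intent": {                      # hypothetical extension
        "target_emotion": "suspense",
        "intensity": 0.8,                      # 0.0 (none) to 1.0 (maximal)
        "build": "gradual",                    # how the feeling should develop
    },
}

# A playback system could, in principle, compare this target against live
# audience signals -- the "closing the loop" Crum describes -- and adapt.
print(scene_metadata["emotional_intent"]["target_emotion"])
```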
She has been able to show how changes in the concentration of CO2 in a space – such as a movie theater – can correspond with changes in the emotion and stress of individuals in the room.
One demonstration involved a screening of National Geographic’s Oscar-winning rock-climbing documentary Free Solo. From special tubes installed throughout the theater, scientists in Crum’s team were able to measure, in real time and with high precision, the continuous differential concentration of carbon dioxide. But what the trace presented to the HPA audience really showed to Crum was “the entire room and audience in the theater going on the creator’s journey.”
“It’s our collective suspense driving a change in CO2,” Crum explained. “You can see where Alex [Honnold] summits and where he abandons the climb, you can trace the character’s love story. The audience is broadcasting a chemical signature of their emotions. It is the end of the poker face.”
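As a rough illustration of how such a trace might be read, the sketch below smooths a CO2 time series and flags the moments where the room’s collective response rises above its baseline. The data, window length, and threshold are invented for demonstration and are not Crum’s actual analysis.

```python
import numpy as np

# Synthetic stand-in for a theater CO2 trace sampled once per second (ppm).
# Real measurements would come from the gas-sampling tubes Crum describes.
rng = np.random.default_rng(0)
seconds = 100 * 60
co2 = 600 + 30 * np.sin(np.linspace(0, 6 * np.pi, seconds)) + rng.normal(0, 5, seconds)

# Smooth with a two-minute moving average to suppress sensor noise.
window = 120
smoothed = np.convolve(co2, np.ones(window) / window, mode="same")

# Flag samples more than two standard deviations above the mean level:
# candidate moments of collective suspense in this toy model.
peaks = np.where(smoothed > smoothed.mean() + 2 * smoothed.std())[0]
print(f"Possible high-suspense seconds: {peaks[:10]} ...")
```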
Combine this with input from other sensors, such as heart-rate monitors and thermal cameras, pair it with machine learning and AI assessment, and it becomes possible to show that changes in the thermal signature correspond to shifts in an individual’s (or a group’s) engagement and attention.
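A hedged sketch of what that kind of sensor fusion could look like in code: synthetic heart-rate and thermal features feed a simple classifier that labels viewing windows as engaged or not. The features, labels, and model choice are illustrative assumptions, not Crum’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic per-minute features for a viewing session: mean heart rate (bpm)
# and mean facial temperature from a thermal camera (deg C). Values are invented.
rng = np.random.default_rng(1)
heart_rate = rng.normal(75, 8, 200)
face_temp = rng.normal(34.5, 0.4, 200)
X = np.column_stack([heart_rate, face_temp])

# Toy ground-truth labels: call a window "engaged" when both signals are
# elevated. In practice, labels would come from annotated viewing sessions.
y = ((heart_rate > 78) & (face_temp > 34.6)).astype(int)

# A simple fused classifier standing in for the machine-learning assessment
# described above.
model = LogisticRegression().fit(X, y)
print("Estimated engagement for a new window:", model.predict([[85.0, 35.0]])[0])
```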
In a recent talk for TED, Crum calls it the era of the empath. “If we recognize the power of becoming technological empaths, we get this opportunity where technology can help us bridge the emotional and cognitive divide. When technology is empathetic, it modifies its state by the response of our internal experiences. And in that way, we get to change how we tell our stories.”
Crum presents exciting food for thought. Are we capturing the right signals to preserve and transmit the intent of the creator? How can the knowledge that our spaces and technologies sense what we are feeling feed into the ways we create, deliver, and consume content, so that we might better experience the intent of the artist?
“We get a chance to connect to the experience and sentiments that are fundamental to us as humans in our senses, emotionally and socially. But regardless of whether it’s art or human connection, today’s technologies will know and can know what we’re experiencing on the other side, and this means we can all be closer and more authentic.”