What Happens When You Give A Computer Emotions?

Decoding Human Emotion Through Affective Computing

Depending on the day, your tools, and current events, you might run the whole gamut of emotions. That's a valid response; we're human. But what about our technology? What happens when technology is capable of emotional intelligence?

Just one month into 2023, we've seen a massive increase in the adoption of AI among the public. OpenAI's ChatGPT is estimated to have crossed 100 million monthly active users in January. While it's great at reading prompts and packaging back an answer that can feel conversational, it's still a long way from achieving artificial general intelligence, and a key to getting there lies in AI's ability to understand the human condition.

Enter Affective Computing. In episode 87 of the Feedback Loop, we sit down with Rosalind Picard, MIT professor, co-founder of Empatica, and the woman credited as the founder of the field. Together, we explore the impact of Affective Computing on society and our relationship with emotive technology: social robots, improving our health, surveillance, manipulating our emotions, and much more.

Listen to the episode now, or keep reading as we dive into the key takeaways from Rosalind. 

Why create emotionally intelligent AI?  

Right now, AI is really good at giving you what you tell it you want. However, as anyone who's ever interacted with another human knows, we are not always very good at articulating what we want, and sometimes we wish our technology could simply understand us better. Affective computing, or Emotional AI as it's often called, is a subfield of AI focused on developing technology that can recognize, interpret, and respond to human emotions, using machine learning, natural language processing, and other AI techniques. In short, it's the branch of artificial intelligence that deals with the recognition, interpretation, and manipulation of human emotions.

We live our lives every day without realizing what technology is doing behind the scenes. We often come face to face with Affective Computing without noticing, be it through voice assistants like Amazon's Alexa or Apple's Siri, social media, marketing (analyzing customer reactions to improve campaigns), or call centers using vocal affect analysis. There are also increasing deployments of facial expression analysis in attempts to tell whether people are being authentic in their emotional states. The technology has integrated so seamlessly into our daily routines that we've come to take its benefits for granted.

For a moment, imagine a future where Alexa can identify that you're stressed or becoming agitated before you can, and can understand that it's 3 pm and, according to your wearable data, you haven't had anything to eat today beyond the cup of coffee Alexa can see on your desk. After listening to ensure you're not in a meeting, she asks if you'd like to order your favorite meal from DoorDash; it'll be there in 15 minutes, and all you have to do is say yes. While the data, computing, research, and technology required to do what we've described are still under development, it's not impossible to see how we could get to a future where our technology can identify not just how we feel but why we feel that way.

Where do we get the data? 

Turning emotions into data isn't new. In 2000, the MIT Affective Computing Group, led by Picard and Jennifer Healey, released the first data set from their research into whether physiological signals have repeatable, identifiable patterns for different emotions.

"In particular, we wanted to know if patterns could be found day-in day-out, for a single individual, that could distinguish a set of eight affective states (in contrast with prior emotion-physiology research, which focused on averaging results of lots of people over a single session of less than an hour.) We wanted to know if it might be possible to build a wearable computer system that could learn how to discriminate an individual's affective patterns, based on skin-surface sensing. We did build such a system, which attained 81% classification accuracy among the eight states studied for this data set." 
Jennifer Healey and Rosalind W. Picard (2002), Eight-emotion Sentics Data, MIT Affective Computing Group
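
To make that approach concrete, here is a minimal sketch of per-person emotion classification in the spirit of the study: summarize windows of skin-surface signals into simple features and train a standard classifier to separate the eight Sentic states. This is not the original MIT pipeline; the synthetic data, feature set, and classifier choice here are illustrative assumptions.

```python
# A minimal sketch of per-individual emotion classification from
# skin-surface signals. NOT the original MIT pipeline: the data is
# synthetic, and the features and classifier are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

EMOTIONS = ["no-emotion", "anger", "hate", "grief",
            "platonic-love", "romantic-love", "joy", "reverence"]

def extract_features(window):
    """Summarize one sensing window (rows = physiological channels)."""
    return np.concatenate([window.mean(axis=1),   # per-channel level
                           window.std(axis=1)])   # per-channel variability

rng = np.random.default_rng(0)
# Stand-in for many days of recordings from ONE person:
# 20 sessions per emotion, 4 channels x 2000 samples per session.
X = np.array([extract_features(rng.normal(size=(4, 2000)) + label)
              for label in range(len(EMOTIONS)) for _ in range(20)])
y = np.repeat(np.arange(len(EMOTIONS)), 20)

# Day-in, day-out discrimination for a single individual.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy across folds: {scores.mean():.0%}")
```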

Things have moved quite fast in recent decades. Picard points to computer vision, and deep learning in particular, as a major driver of this accelerated change, contributing to better facial expression recognition, more gestural and body-movement analysis, and, of course, textual analysis and sentiment in speech for interpreting how something is said. Mining data online to improve tools like ChatGPT and GPT-3, along with advances in large language modeling, gives our computing a bit more sophistication in those dialogues. However, machines still don't have the capacity for human feelings and emotions. They don't think, they don't know, and they don't feel; according to Picard, the language we use to describe emotions in our technology is wrong if we expect it to reach this level of intelligence.

A good chunk of the data needed to advance the Affective Computing field lies in our wearable technology. The price-performance curve for sensors creates a unique opportunity to capture a wealth of health information you might not even know your device is tracking. 

However, even with data from wearable technology, researchers and engineers need to look carefully at where the data sets come from and contextualize them properly. For example, looking at them in the context of time means noticing a person's typical rhythms and distinguishing signal from noise, which can vary depending on what you're measuring. And if you are conducting a rigorous scientific analysis, gathering raw data is recommended. Picard highlights consistency issues with a study conducted at Harvard University that used heart rate variability data downloaded from a well-known consumer wearable. The study began with previously downloaded data; after collecting new data, the researchers re-downloaded the older data and found that the heart rate variability values had changed significantly, which completely changed the result of the study.
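
To see why raw data matters, here is a minimal sketch of computing one common heart rate variability metric, RMSSD, directly from raw inter-beat intervals. The interval values below are made up for illustration; the point is that a metric you compute yourself from raw measurements can't silently change when a vendor reprocesses its exports.

```python
# Computing heart rate variability (RMSSD) from raw inter-beat
# intervals (IBIs). The IBI values here are made-up examples.
import numpy as np

def rmssd(ibi_ms):
    """Root mean square of successive differences, in milliseconds."""
    diffs = np.diff(np.asarray(ibi_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Raw inter-beat intervals (ms) from a hypothetical wearable export.
ibi = [812, 845, 790, 860, 805, 838, 795, 851]
print(f"RMSSD: {rmssd(ibi):.1f} ms")

# If you only stored a vendor's derived HRV summary instead of raw
# IBIs, a silent change to its algorithm would change your dataset
# out from under you -- exactly the inconsistency Picard describes.
```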

When it comes to health data of any kind, researchers and users deserve the raw data, or at least to be told when they're not getting it. Raw data will be vital to the success of future long-term studies, where understanding cycles, rhythms, and patterns is essential to getting the full context of what's going on in a person's health and life. As we learn about the connections between our affective system and every organ in the body, we recognize that modeling and understanding how our affective state changes can, in turn, help us better manage disease and perhaps even prevent some illnesses.

So once we understand those intimate and vulnerable parts of the human condition, how do we ensure that our technology or Big Data does not have the power to control and manipulate us? 

Who will control who? 

While Affective Computing is intended to help improve our lives, it's impossible to ignore the potential this technology has for everything from hidden manipulation to straight-up control. If our technology continues to make life easy for us, can it impact our natural fight-or-flight responses or capacity for thought? Is it possible for this technology to Pavlov us into a stimulus-response pattern of whatever the AI feels we should be doing, and what if we prefer it?

Picard detailed how something along those lines happens daily when we interact with marketing. A marketer's job is to engage and maintain your attention and to have you associate things with a brand, and in some ways they have mastered techniques for accomplishing this every time we are on the internet, watching TV, or even driving in our cars. Picard also highlights that teachers do the same thing, except they want to associate your mind with their learning goals. So, influencing a human's mental state seems easy to achieve if done correctly, right?

Yes and no. Our current relationship with technology is complicated, largely due to a lack of compliance and regulation in building the framework for the digital world, where it can feel as if there is a societal obligation to participate or risk being left behind. Skipping this pre-work has resulted in negatively biased systems in a digital world we spend ever more time in and place ever more trust in, deserved or not. But pause and take a moment to think about the subtle ways technology nudges you to take an action. Right now, that action is on behalf of another human or business, but what if we do achieve AGI and these digital assistants become more inclined to push us in a particular direction because they believe it's what's best based on our data?

The more data we collect, the more we learn, and the more good we can do. Unfortunately, there will always be the opportunity to do harm. According to Picard, in moments like these, we need to engage in conversations, engage with society, and make sure we bring people's knowledge, hearts, and humanity along with the technology. Those creating the technology have a responsibility to make it incredibly easy to do good things and close to impossible to do bad ones, while also trying to anticipate misuses and prevent them.

Knowing just how comprehensive the data can be, and sharing that information with people who want to accomplish good with it, can be beneficial for progress. However, as with any form of technology and data, the problems start when the data gets into the wrong hands: "bad actors" who can use our personal data to judge or discriminate against us. Picard shares an anecdote about the early days of Fitbit, when users thought the only data measured was simple movement via steps, relatively secure and anonymous. We now know, however, that accelerometer data can capture signals like heart rate and respiration and turn them into a unique signature that can be used to identify you.
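
As a hedged illustration of that privacy risk (not any actual device's algorithm), the sketch below reduces a motion stream to a coarse spectral "signature" keyed to a person's characteristic movement rhythm, then matches a new recording to the nearest known profile. The same idea extends to subtler signals like heart rate and respiration recovered from accelerometer data.

```python
# How motion data alone can become an identifying "signature":
# synthetic data, illustrative only -- not any real device's method.
import numpy as np

def spectral_signature(accel, n_bins=16):
    """Coarse, normalized power spectrum of an acceleration stream."""
    power = np.abs(np.fft.rfft(accel - accel.mean())) ** 2
    sig = np.array([b.sum() for b in np.array_split(power, n_bins)])
    return sig / sig.sum()  # normalize so amplitude doesn't matter

rng = np.random.default_rng(1)

def person_stream(dominant_hz, n=4096, fs=32):
    """Synthetic wrist-accelerometer magnitude with a personal rhythm."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * dominant_hz * t) + 0.3 * rng.normal(size=n)

# Known profiles, each with a hypothetical characteristic gait rate.
profiles = {name: spectral_signature(person_stream(hz))
            for name, hz in [("alice", 1.7), ("bob", 2.1), ("carol", 1.4)]}

unknown = spectral_signature(person_stream(2.1))  # a fresh recording
match = min(profiles, key=lambda n: np.linalg.norm(profiles[n] - unknown))
print(f"closest profile: {match}")  # likely 'bob'
```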

With the risks and rewards of building Affective Computing systems, are we even qualified for the task when much of the human experience is a black box to us? Is it the blind leading the binary?

"We want to see what we can do. We are makers; we are creators; we are risk takers. [Affective computing is] our version of climbing Everest,...Can we do it? We're attracted to what we know and how we work, right? We understand how we work by building it."

Listen to the full episode of the Feedback Loop for answers to the following questions: 

  • Can this technology help more humans live with dignity? 
  • How can we leverage existing systems, like GPT, to help us build out Affective Computing? 
  • Will we be able to build relationships with these technologies as they advance?
  • Are our increasingly digital experiences causing us to lack emotional maturity? 
  • What do these futures look like on the far horizons? 

Valeria Graziani

Valeria Graziani is an accomplished marketer and copywriter. She lives in Arizona.
