Poking the empathy part of the brain

Hello and Happy New Year! Hopefully by now you’ve settled into 2018 and dug out of the frozen tundra (if you’re in the U.S.). Here in Brooklyn we have reached the “grey slush and surprise ice patches” phase of winter, and it’s as gross as ever. I could really write a whole post about the disgusting things that happen to the sidewalks here when the snow melts, but I’ll spare you… Instead, let’s talk about something else that seems kind of obvious but is actually super interesting: empathy’s role in moral dilemmas like, you know, whether or not to kill someone.

In a recent UCLA study, researchers found that they could guess whether a person would harm or kill another person based on how their brains reacted when they saw someone else in pain.

The researchers showed 19 volunteers two videos in which a hand was either touched by a cotton swab or stabbed with a needle. This is a pretty common way to measure empathy, and that’s what these scientists did: they used an fMRI machine to measure brain activity in the regions where this specific type of empathy – acknowledging someone else’s pain and feeling for them – is thought to reside. They also apparently analyzed the volunteers’ “mirror neurons,” which are neurons said to fire both when a person feels something and when they see someone else appearing to experience that same feeling. (There’s some controversy about mirror neurons.)

In addition to showing them the videos, the researchers asked the participants some common questions aimed at determining whether a person would hurt or kill someone else: there’s the “crying baby during wartime” question, for example, in which you’re meant to say whether you would suffocate a baby whose cries might give away your location to the enemy. They also asked whether the volunteers would torture someone in order to prevent a bomb from killing some people, or harm research animals in order to cure AIDS.

The researchers guessed that those with more empathy-related activity in their brains during the needle video would be less likely to say they’d hurt the crying baby, and they turned out to be correct, at least with this small sample size. They didn’t find a correlation between brain activity and willingness to hurt someone else to help a larger number of people, however. The reason, they argued, may be that those decisions are a lot more complex.

“It would be fascinating to see if we can use brain stimulation to change complex moral decisions through impacting the amount of concern people experience for others’ pain,” said Marco Iacoboni, director of the Neuromodulation Lab at UCLA and one of the leading mirror neuron experts, in a statement. “It could provide a new method for increasing concern for others’ well-being.”

I highlighted just a few of the reasons this research does not suggest causation (and, as you hopefully know, research rarely does). But I’m actually more interested in Iacoboni’s quote. I’ve been researching and writing a lot over the past year about the different ways people are trying to increase empathy in the human brain. Most of the time, these stories involve tech like virtual reality or Fitbit-like gadgets. But Iacoboni’s suggestion of using brain stimulation to potentially make people more empathetic decision-makers doesn’t seem that far-fetched…though it does seem kind of like taking the easy way out. I’m sure he means this mostly (if not completely) for academic purposes, but I wouldn’t put it past tech companies to find a quick way to capitalize on “instant empathy.” We already have brain stimulation gadgets that are meant to help with stress and anxiety and a host of other things.

There are a couple of concerns here. The first is regulation: keeping people from accidentally harming themselves, and keeping tech companies from doing nefarious things with the new level of personal brain data they might collect. The second is kind of the opposite: do we want the government to potentially have the ability to stimulate our brains to change how we make complex moral decisions? I don’t mean to sound like a conspiracy theorist! But when so much sci-fi stuff is becoming real life, it seems worth asking these questions.

Something to keep an eye on, for sure.


On empathy, endorsement, and what happens next

I walk a little over a mile to and from work each day, and I usually spend it listening to podcasts, or to books on Audible. After more than a year of this, I really look forward to certain days when I know certain podcasts will have a new episode out. Note to Self is one of them. I love Manoush Zomorodi’s style of reporting on technology and how it affects our lives, and I love how she’s styled herself as a guide to “our accelerating world.” Because wow, yes, is it ever accelerating.

Note to Self is often about technology in a technical sense, but the show also takes occasional detours into the psychology of how we interact with tech. This, of course, is my favorite thing to write and read about. So I was really excited when Zomorodi recently interviewed Dylan Marron. He’s a progressive YouTuber and writer who also has a new podcast, called Conversations With People Who Hate Me. It’s pretty much exactly what it sounds like. Remember when Lindy West called her troll and it became a viral This American Life episode? Marron does a similar thing on each episode of Conversations. He talks to the people who profess to think he’s the scum of the earth, and tries to find out why.

This is something I’m going to write more about soon – podcasts and radical empathy – but for the purposes of this blog post I really want to focus on one thing Marron said during this Note to Self episode: empathy doesn’t mean endorsement. This is a fact that’s become so obvious to me, I think I forget to enunciate it to others when I talk about empathy. I’ve never found such a succinct way of saying it, either. But it’s absolutely the truth: sitting down and listening to someone does not necessarily mean validating them, and it definitely doesn’t mean agreeing with them. It’s just…acknowledging them. Taking their perspective.

That can feel a little scary. I know that I have had experiences in which I read something written by someone with vastly different views from my own and as I prepare to put myself in their shoes I think, what if I can’t get back out? What if they convince me? But things don’t really happen that abruptly, most of the time. We make our decisions and create our ideologies based on a mix of experiences and information, and it all sort of flows together and tries to balance itself out, rarely truly solidifying into one thing. What I mean is, we’re always learning, always changing our minds a little bit, even if we don’t always notice it, or want to.

I thought about this concept a lot as I watched the recent Alabama election unfold. Everyone around me kept asking, “How could these people vote for a pedophile?” I can’t say the answer is the same for everyone who voted for Roy Moore, but I can say pretty confidently that many of them did so because they didn’t believe what they heard about him. Or, they only believed parts of what they heard about him.

Brian Resnick has a great piece about this up at Vox. He interviewed a lot of Moore supporters in Alabama before the election, and reading this piece, I feel like I can really empathize with these people. Trying to put myself briefly in their shoes, I feel afraid, I feel disappointed, I feel betrayed. This is something I tried to do when reading story after story about Trump supporters last year as well. And I think it’s a worthwhile practice. But the part that nobody really seems to talk about is… then what?

What do I do with this information? What do I do with the fact that people of all parties and ideologies cling to confirmation bias and “motivated reasoning”? Well, it’s made me feel a little bit less hopeless about change, for one thing. That might seem counterintuitive, but knowing that we’re all susceptible to this, and witnessing people have conversations about it that don’t end in name-calling or fistfights, is encouraging. It also helps me feel less angry, which is no small thing. Over the past couple of years I’ve found anger to be less and less useful for me, at least on a personal level. Being mad at friends or family or strangers who did something I see as wrong doesn’t actually accomplish anything for me, except raising my blood pressure. When I understand their points of view a bit better, I can take some of the emotion out of my reaction to them. And if we’re both on the same page about that, we can have a conversation, and figure out where we agree. And sometimes… sometimes, one or both of us can bend a little. Without the pressure to immediately admit or agree to anything, this can feel a lot easier.

There’s one major caveat to all of this. And it’s never far from my mind when reading and listening to these conversations. This isn’t just about liberals learning to empathize with conservatives. There’s a lot going on in the other direction as well. And, especially after the election of Doug Jones over Roy Moore in Alabama, it’s way past time to start asking people to empathize with another group who doesn’t get nearly enough attention despite their huge impact and disproportionate burden: black voters. Especially black women. It’s good that we’re talking about empathy so much, but we also need to be real with ourselves about who we reserve it for.

Coding Empathy

I am always looking for cool ways that educators are using technology to teach empathy. This usually involves some kind of perspective-taking exercise, whether it be a simple real-life simulation or a virtual reality experience. Honestly, sometimes even kids’ TV shows are super explicit about empathy, and it seems like that can have just as much of an impact as the flashier stuff, if the kids are comfortable and attentive enough.

Yesterday, though, I came across a cool tech approach to building empathy, used by a teacher in Australia, that takes things a step further, into the world of STEM. Getting kids interested in science and math at a young age is probably even more buzzworthy than teaching them empathy at this point, so a lesson that puts those two things together? I’m intrigued.

Journalist Sarah Duggan writes for Australian Teacher Magazine about a teacher in South Australia who has the teenagers in her digital technology class use a feature of the game Minecraft, called Code Builder, to build a base camp for refugees from an imaginary war ravaging their own country. She encourages them to use resources like the United Nations High Commissioner for Refugees website to make sure the camp has the right supplies and structure, and in the process, they get a small taste of what it’s like to be on the refugee side of the equation.

“One of the first things that struck the students was the number of people who had to share amenities,” the teacher, Renee Chen, tells Duggan. “For example, in times of crisis and emergency up to 20 people have to share one toilet … Very quickly, students learn that refugees have very little control over their circumstances.”
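Just to make that arithmetic concrete, here’s a very rough sketch of the kind of supply calculation a student might code up while planning their camp. The 20-people-per-toilet figure comes from Chen’s quote above; everything else here (the function name, the other ratios) is a placeholder I made up, not something from her lesson or from actual UNHCR guidelines.

```python
# Hypothetical sketch of the kind of supply math a student might do while
# planning a camp. The 20-people-per-toilet figure comes from the quote above;
# the other ratios are made-up placeholders, not real UNHCR standards.
import math

def camp_requirements(population: int) -> dict:
    """Estimate basic amenities for a camp of a given size."""
    return {
        "toilets": math.ceil(population / 20),        # up to 20 people per toilet in a crisis
        "water_points": math.ceil(population / 250),  # placeholder ratio
        "shelters": math.ceil(population / 5),        # placeholder: ~5 people per shelter
    }

if __name__ == "__main__":
    for people in (100, 1000, 5000):
        print(people, "people:", camp_requirements(people))
```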

The assignment also helped students better understand some of their classmates who were refugees themselves. “The task sparked conversations between these two groups of students,” Chen said. “These discussions led to the sharing of stories, formation of personal connections, and the development of empathy.” She said it also made students feel less helpless about awful things happening around the world.

These anecdotal results are really similar to what students report after playing games or having VR experiences that put them in others’ shoes. In this case, coding was basically a way to create a virtual world in which to do that. But it got me thinking about how empathy might be useful in coding more broadly, and not just among students, but with professionals as well.

This led me to the blog Coding With Empathy, which is run by a developer named Pavneet Singh Saund, whose mission is to get his fellow developers to hone “soft” skills like empathy and collaboration in much the same way they make sure to keep learning their “hard” craft. One of the main selling points for this kind of thinking is that developers aren’t creating software just for the sake of it – they’re creating it for users. And there’s a growing movement focused on teaching developers to put themselves in those users’ shoes in order to create the best product for them.

Saund is far from the only one talking about this. The folks at Simple Programmer call empathy “a developer’s superpower,” and Creative Bloq says implementing more empathy can even make developing more efficient.

I’m not a software developer or engineer of any kind, so I can’t tell you from personal experience whether any of this is true. But I can tell you as a journalist who pays attention to these things that it’s notable how many new ways educators, developers, and professionals of all kinds are finding to work empathy into their missions.

Empathy for the holidays

This post is probably coming too late for many of you who just finished celebrating American Thanksgiving, but luckily there are several more holidays to come this year. Depending on how you celebrate, that may also mean several more opportunities to exercise empathy with family, friends, and even coworkers!

A tangled ball of Christmas lights

In the lead-up to Thanksgiving this year I came across several pieces of advice that really helped me put my own anxieties about the holiday into perspective. In general I feel like I’ve learned a lot over the past year about how to truly empathize with others, both as a result of the research I’ve been doing for my book, and because of the particularly heated moment we’re currently living through in American culture. But I still learn things that surprise me.

While watching Instagram Stories the day before Thanksgiving, for example, I came across a list of tips for talking with families, from the organization Showing Up For Racial Justice. I thought I knew what to expect, but I was actually a little surprised by the first item on the list:

Listen mindfully before formulating a thoughtful response.

We talk a lot about listening to others’ opinions, especially when they differ widely from our own, but what does it mean to listen mindfully? I actually came to this post because it was shared by a celebrity, who added this question: Are you listening to answer, or to understand? Are you taking in everything the other person says just to help you formulate a response, or are you actually considering this person’s thoughts for their own sake?

It sounds so simple, but it’s actually not easy to do in the moment. I had to sit with myself and think about what I’m really doing when I listen to people with whom I disagree. I thought about one family member who likes to debate politics on Facebook and via email. I realized that when I’m reading her messages, I am often just sitting there planning how I’m going to respond. I wondered, if I read them more mindfully, could our conversations go more smoothly? Maybe, maybe not. But it seemed worth a try.

The rest of this list included things you might find obvious – asking questions, respectfully affirming differences, breathing. But there was another one that struck me:

Notice what is possible for you at this time – stretching into discomfort while also caring for yourself.

This notion, that you can sometimes opt out of hard conversations if you don’t feel up to them, has been a tough lesson for me to learn. But ultimately it’s better for everyone. It’s hard to really show empathy to others in conversation, to listen mindfully, when you feel like you don’t have it all together yourself. Yes, it is important to “stretch into discomfort” in order to learn new things and understand others better, but there’s no rule that you have to do it at the expense of everything else.

Hopefully you find some benefit in these tips. I did, even though I didn’t end up needing to use them at Thanksgiving. But they feel like good pieces of advice even for everyday conversations, whether they are about race, politics, work, relationships, or even sports. It can sometimes be hard to understand how to work empathy into our everyday communications, but thinking about it as “mindful listening” might help!

Empathy, virtual reality, and anniversary anxiety

I’ve been working on a lot of things lately, and I’m sorry to say that this blog has not been one of them… but it will be again soon, worry not! In the meantime, here’s a look at two stories I recently published:

Can Virtual Reality Change Minds on Social Issues? at Narratively, about how nonprofits and other organizations are using virtual reality to trigger empathy and, ideally, action. There’s still some debate about whether this actually works at scale, but it can’t be denied that people are making some amazing, moving things with VR. Give the story a read, and check out the awesome gif at the top of the page!

A couple of days before the anniversary of the presidential election, I got the opportunity to write about why anniversaries like this are hard for people, psychologically. It turned into a really interesting piece that I think is relevant to the kind of behavioral science stuff I’m thinking about all the time: Why The Election Anniversary Is Hitting You So Hard, at Lifehacker.

More to come soon!

When the robots do it better…

US soldiers and veterans revealed significantly more post-traumatic stress symptoms to a virtual interviewer than through a standard or anonymous Post-Deployment Health Assessment survey. CREDIT: USC Institute for Creative Technologies

It’s clear that PTSD is a major problem among American war veterans. According to the U.S. Department of Veterans Affairs, symptoms of PTSD affect almost 31 percent of Vietnam veterans, up to 10 percent of Gulf War veterans, and 11 percent of veterans who served in Afghanistan. But, as with many mental health issues, those numbers might be off because there is still a stigma attached. Veterans Affairs can’t count — or help — the soldiers who don’t feel comfortable coming forward. But what if instead of talking to people who might affect their careers, they could talk to robots?

Not, like, Bender robots, but artificial intelligence presented as kind strangers on a computer screen. In a recent study that used this technology, the AI made a big difference. Researchers at the University of Southern California found that service members who volunteered to try this out were more open about their symptoms with the “virtual human” they spoke to than they were when filling out the military’s required survey. Gale Lucas, who led the research, thinks this is likely because when PTSD symptoms are conveyed via the military survey (or directly to a military psychiatrist) they must be reported, which can affect service members’ career prospects. Speaking to the AI, known as “Ellie,” felt more anonymous.

“These kinds of technologies could provide soldiers a safe way to get feedback about their risks for post-traumatic stress disorder,” Lucas said in a statement. “By receiving anonymous feedback from a virtual human interviewer that they are at risk for PTSD, they could be encouraged to seek help without having their symptoms flagged on their military record.”

So, can AI provide potential life-saving empathy that real humans can’t?

Well, there’s (at least one) catch. Ellie makes soldiers feel comfortable, safe, and understood, but she is currently operated by researchers. If and when she becomes integrated into the military health system, she might lose her real magic: anonymity.

Joseph Hayes, a psychiatrist at University College London, told Newsweek:

“For an intervention to be possible ultimately, the disclosure would have to be shared with the same commanding officers who have traditionally received the results of the service members PDHA, and entered into their military health record. Once this is made explicit, would disclosure reduce to levels seen previously? If so, it is a culture change (reducing public stigma–within the military and more broadly) which is truly going to impact on disclosure and provision of appropriate treatment.”

Lucas thinks her team can get around this by only requiring Ellie to alert humans if a service member threatens to hurt him- or herself or someone else, and by leaving it up to the individual whether to follow up with a real doctor after their session with the AI.
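To make that policy a little more concrete, here’s a minimal sketch of the kind of escalation rule Lucas describes, assuming a simple session-result object. The names and structure are mine, purely for illustration; this is not how the USC/Ellie system is actually built.

```python
# A minimal sketch of the disclosure policy described above: the session stays
# private unless the person indicates intent to harm themselves or others, and
# any follow-up with a human clinician is opt-in. Names and structure are
# hypothetical, not taken from the USC/Ellie system.
from dataclasses import dataclass

@dataclass
class SessionResult:
    ptsd_risk_score: float        # e.g. derived from the interview's symptom questions
    intends_self_harm: bool
    intends_harm_to_others: bool
    wants_human_followup: bool    # the service member's own choice

def handle_session(result: SessionResult) -> list[str]:
    """Decide what happens after a virtual-interview session."""
    actions = []
    # The only mandatory escalation: imminent risk of harm.
    if result.intends_self_harm or result.intends_harm_to_others:
        actions.append("alert_human_clinician")
    # Everything else stays with the service member: private feedback,
    # plus an optional appointment with a real doctor.
    actions.append("give_private_feedback")
    if result.wants_human_followup:
        actions.append("schedule_voluntary_appointment")
    return actions
```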

The jury’s out on the ethics and implementation, but this is one more step toward empathetic AI, which is… well, both exciting and terrifying!

To learn more about this technology, check out the USC Institute for Creative Technologies website.


Driverless empathy

Algorithms and big data affect our lives in so many ways we don’t even see. These things that we tend to believe are there to make our lives easier and more fair also do a lot of damage, from weeding out job applicants based on unfair parameters that ignore context to targeting advertisements based on racial stereotypes. A couple of weeks ago I got to see Cathy O’Neil speak on a panel about her book Weapons of Math Destruction, which is all about this phenomenon. Reading her book, I kept thinking about whether a more explicit focus on empathy on the part of the engineers behind these algorithms might make a difference.

The futurist and game creator Jane McGonigal suggested something similar to me when I spoke to her for this story earlier this year. We talked about Twitter, and how some future-thinking and future-empathizing might have helped avoid some of the nasty problems the platform is facing (and facilitating) right now. But pretty soon Twitter may be the least of our worries. Automation is, by many accounts, the next big, disruptive force, and our problems with algorithms and big data are only going to get bigger as this force expands. One of the most urgent areas of automation that could use an empathy injection? Self-driving cars.


I’ll be honest – until very recently I didn’t give too much thought to self-driving cars as part of this empathy and tech revolution that’s always on my mind. I thought of them as a gadget that may or may not actually be available at scale over the next decade, and that I may or may not ever come in contact with (especially while I live in New York City and don’t drive). But when I listened to the recent Radiolab episode “Driverless Dilemma,” I realized I’d been forgetting that even though humans might not be driving these cars, humans are deeply involved in the creation and maintenance of the tech that controls them. And the decisions those humans make could have life and death consequences.

The “Driverless Dilemma” conversation is sandwiched around an old Radiolab episode about the “Trolley Problem,” which asks people to consider whether they’d kill one person to save five in several different scenarios. You can probably imagine some version of this while driving: suddenly there are a bunch of pedestrians in front of you that you’re going to hit unless you swerve, but if you swerve you’ll hit one pedestrian, or possibly kill yourself. As driverless technology becomes more common, cars will be making these split-second decisions. Except it’s not really the cars making the decisions, it’s people making them, probably ahead of time, based on a whole bunch of factors that we can only begin to guess at right now. The Radiolab episode is really thought-provoking and I highly recommend listening to it. But one word that didn’t come up that I think could play a major role in answering these questions going forward is, of course, empathy.

When I talked with Jane McGonigal about Twitter, we discussed what the engineers could have done to put themselves in the shoes of people who might either use their platform for harassment or be harassed by trolls. Perhaps they would then have taken measures to prevent some of the abuse that happens there. One reason that may not have happened is that those engineers didn’t fit into either of those categories, so it didn’t occur to them to imagine those scenarios. Some intentional empathy, like what design firms have been doing for decades (“imagine yourself as the user of this product”), could have gone a long way.

This may also be the key when it comes to driverless cars. Except the engineers behind cars’ algorithms will have to consider what it’s like to be the “driver” as well as other actual drivers on the road, cyclists, pedestrians, and any number of others. And they’ll have to imagine thousands of different scenarios. An algorithm that tells the car to swerve and kill the driver to avoid killing five pedestrians won’t cut it. What if there’s also a dog somewhere in the equation? What if it’s raining? What if the pedestrians aren’t in a crosswalk? What if all of the pedestrians are children? What if the “driver” is pregnant? Car manufacturers say these are all bits of data that their driverless cars will eventually be able to gather. But what will they do with them? Can you teach a car context? Can you inject its algorithm with empathy?
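To make that point concrete (the “split-second decision” is really a policy a human wrote down ahead of time), here’s a deliberately crude sketch. Every factor, weight, and name in it is something I invented for illustration; no manufacturer has published an algorithm like this, and a real system would be vastly more complicated.

```python
# A deliberately crude illustration of the point above: the car's "decision"
# is a policy someone encoded in advance. Every factor and weight here is
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class Scenario:
    pedestrians_ahead: int        # people hit if the car stays its course
    pedestrians_if_swerve: int    # people hit if the car swerves
    occupants: int                # people inside the car
    raining: bool
    pedestrians_in_crosswalk: bool

def should_swerve(s: Scenario) -> bool:
    """Return True if this pre-written (made-up) policy says to swerve."""
    harm_if_straight = float(s.pedestrians_ahead)
    harm_if_swerve = float(s.pedestrians_if_swerve + s.occupants)
    # Arbitrary tweaks to show how "context" sneaks into the policy:
    if s.pedestrians_in_crosswalk:
        harm_if_straight *= 1.5   # weigh people in a crosswalk more heavily
    if s.raining:
        harm_if_swerve *= 1.2     # be warier of swerving on a wet road
    return harm_if_swerve < harm_if_straight
```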