One of the most confounding things about researching empathy is that it’s often talked about as if it’s an obvious Good Thing. I constantly read and hear that empathy will help fix this problem or improve that relationship or better that industry. But the reality I am coming to understand is that empathy is a mechanism, a tool of our brains and hearts, with no inherent value of its own. It comes up most often in a positive context because we see understanding others and putting ourselves in their shoes as a good thing, which it often is. But this kind of perspective-taking is also used for negative, manipulative reasons all the time. We don’t usually call that empathy, but in many cases the mechanism seems the same.
I’ve been thinking about this a lot since my June trip back to New York for the Games for Change summit. I eagerly consumed as many virtual reality experiences as I could in my one afternoon there, surrendering myself as much as possible to the narratives they immersed me in. I was a fly on the wall in the bedroom of a white supremacist teenager; I followed along with nonprofit workers as they helped to free a family in India from servitude; I embodied the virtual experience of a young black man. All of these were trying to teach me about a human experience I was unfamiliar with, and they all had positive intentions. But if the makers behind these VR projects could bring me close to tears for a former white nationalist and a young black man dealing with daily microaggressions in the same afternoon, what else could they do?
The major caveat to this line of thinking is, in my view, the fact that how we come to VR experiences – our own personal backgrounds, our expectations, our hopes for what we will see and feel – plays a big role in determining how they affect us. I don’t think many people are putting on VR headsets totally unencumbered by expectations or opinions about the content of what they’re about to see. As this technology becomes more ubiquitous, though, it might become easier to sneak these experiences on people; pulling the same levers that trigger empathy can trigger other things as well: hatred, anger, fear. We already do it with less fanciful technology, don’t we? Digital advertisers work to understand how we view the world so that they can put the best ads in front of us; communications teams for politicians write certain words and phrases into speeches that they guess, by putting themselves briefly in the shoes of their base, will generate a reaction. Perspective-taking in service of manipulation isn’t new, but as technology evolves, the capacity for doing this on a larger and larger scale grows.
This concern really started bubbling up for me a couple of months ago, after I experienced a couple of rough days of trolling on Twitter. I realized that while I had always seen internet trolls as lacking empathy, what they were doing actually required a certain level of it. They just wielded it in a different way than we’re used to thinking about. So of course, I tweeted some thoughts:
This gets at a lot of what’s been swirling around in my head as I write this book about the future of empathy and technology. The people I interview often remind me that technology is a tool, but I’m realizing that empathy is one too.
As I wrote in my last post, it feels like a weird time to be writing a book about empathy. But conversations like the one described above make me feel like it’s worth it, no matter how it turns out. Stay tuned for more.