Woulda, shoulda, coulda

Twitter co-founder Ev Williams posted a thread yesterday. Not super surprising, since he’s one of the fathers of Twitter, but as he explained in said thread, he doesn’t post his thoughts there much. He sticks to links, because he “[doesn’t] enjoy debating with strangers in a public setting” and he “always preferred to think of [Twitter] as an information network, rather than a social network.”

That definitely elicited some eye-rolls, but this was the tweet – in a long thread about how he wants reporters to stop asking him how to fix Twitter’s abuse problems – that really caught my eye…

That is… exactly the problem! It’s both reassuring to see this apparent self-awareness and frustrating how late it’s come – and how defensive he still is…

Maybe he feels like he can’t say for sure whether being more aware of how people “not like him” were being treated, or having a more diverse leadership team or board, would have led the company to tackle abuse sooner… but those of us who are “not like him” are pretty confident it would have. Or at least it could have. It should have.

This is what I mean when I talk about a lack of empathy in tech. I don’t know Ev Williams or any of his co-founders; I don’t know many people who have founded anything at all. And I understand that founders and developers are people deserving of empathy too. As I read Williams’s thread, I tried to put myself in his shoes, even as I resisted accepting much of what he was saying. I get that “trying to make the damn thing work” must have been a monumental task. But as I talk about here a lot – there’s empathy, and then there’s sympathy. And as Dylan Marron likes to say, empathy is not endorsement. I can imagine it, but I don’t get it. And it’s little solace to the hundreds of people who are harassed and abused via Twitter every day to hear it confirmed that their safety wasn’t a priority, whatever the reason.

They know this – we know this. The question is, what now? Williams, for his part, brushes off this question. It’s not his problem anymore, he seems to say, and he doesn’t know how to fix it, but if you have any “constructive ideas,” you should let Twitter know (or write about them on Medium, Williams’s other tech baby…)

The toxicity that Williams says he’s trying to avoid – that he says his famous friend is very upset by, that he seems almost ready to acknowledge is doing real damage to many, many other people who use Twitter – was part of what inspired me to write The Future of Feeling. I wanted to know: if it’s this bad right now, how much worse could it get? Is anyone trying to stop this train?

I talked to a lot of people in my reporting for the book, and over and over again I heard the same idea echoed: empathy has to be part of the fabric of any new technology. It has to be present in the foundation. It has to be a core piece of the mission. Creating a thing for the sake of creating the thing isn’t good enough anymore. (Frankly, it never was.) The thing you create is very likely to take on a life of its own. You need to give it some soul, too.

Williams ended his thread with a tweet that actually resonated with me. It’s something I’ve found to be absolutely true:

People made this mess. People will have to clean it up. If Williams doesn’t want to, or know how to, I know a lot of other folks who are getting their hands dirty giving it a try.

existentialist friday epilogue

I wrote my last post in a fog – a mixture of anxiety, sadness, nihilism and hope. Super dramatic for a Friday night, I know! And reading it today, I’m a little surprised by how intense those feelings were, and how clearly that intensity comes through.

Maybe I should be embarrassed – it was a very vulnerable piece of writing that might be better suited to a private journal. But even after reading it today, and considering that, I decided to hit publish because I do not believe I’m alone in those feelings or thought processes, and I think there are few things more important in this world right now than community with others in our feelings and thought processes.

Not necessarily validation, or reassurance, but community.

That’s what those people in those Christchurch mosques were engaging in last week when they were murdered. It’s what I did at my own church yesterday, feeling sad and uncertain and comforted by the knowledge that I was sitting among a lot of other people feeling the same things. We sang and meditated together, called out the elephants in the room (racism, hatred, violence, intolerance, ambiguity) and continued our ongoing conversation about how to live with and wrangle them. Lately I’ve come to view this as the most beautiful and important thing about being human – existing in community with one another. It sounds pretty and easy but it is one of the most complicated and difficult things I’ve ever done. I am grateful that I woke up today and get to keep doing it.

It’s also amazing to me how clear these ideas are after a couple of days of letting them simmer inside me. I avoided social media as much as possible this weekend. I exercised while listening to an audiobook, watched people of all ages fly kites in perfect weather, watched my husband make sourdough bread for the first time and beam with pride, ate delicious crab cakes and pizza, toasted to friends’ birthdays, read, sat in community with my friends at the Unitarian Universalist fellowship, drank a lot of water, took a bath, and let my brain breathe a little.

On the other side of all of that, I feel like things might be OK. I wonder what I can do to bring this feeling with me into every day, not just Mondays after a social media detox, while also respecting and cultivating the community that exists right there on social media too. They are different kinds of communities, but they overlap in so many ways. This is more true for me now that I live outside the New York City bubble than ever before, so maybe that’s why it might seem like I’m grasping for something others have known all along. But again, something tells me these things I’m wrestling with are more common than we like to admit.

Do you have your tech accountability buddy yet? Maybe you can admit it to each other?

just a little tech existentialism on a friday night

Note: I wrote this on Friday night (3/15) but didn’t want to post right away, to avoid seeming to make the Christchurch tragedy about me. That is not my intention at all. Rather, my intent is to share some of what was going through my mind that day (and frankly, many days) in hopes that it resonates with others and contributes to a broader conversation.

Who/what do you turn to when you feel overwhelmed or exhausted or afraid? When you feel overrun by information and opinions, how do you protect yourself?

I realized today that I don’t really have an answer to those questions.

It’s been a really long work week, and I’ve been channeling my stress into two things that I’ve noticed have become crutches for me when I don’t want to sit with my feelings: Instagram and podcasts.

This morning, by the time I got to work at 8:30 I had already watched about half an hour of Instagram stories, which is how I found out about the Christchurch shootings. I had heard a bit more about the horror on the short morning news podcast Up First, which I usually listen to while I get ready for work. I had also scrolled through Twitter for a few minutes, taking in but not quite digesting takes from dozens of people about what had happened, takes that made me feel, for a few seconds each: sad, sick, disgusted, embarrassed, guilty, defensive, angry, and heartbroken.

In the car, I put on Pod Save America and absorbed about 15 minutes of dudes yelling about politics and reminding me how untenable our current political situation is.

By the time I got to work I was feeling pretty anxious, but that’s nothing new for me so I just accepted it. I read some news, looked at Twitter some more, watched some more Instagram stories. Then I put Pod Save America back on so I could listen while I did some editing. It’s like muscle memory: do some work while listening to a podcast, check email, get stressed about something, reach for phone and flip over to Instagram, feel guilty for doing that, get back to work and podcast, remember the world is burning, head over to Twitter, see something horrific, go back to Instagram for comfort, fill head with more and more and more of other people’s stories, ideas, and priorities.

I started reading You Are Not A Gadget by Jaron Lanier earlier this week and I’m only on page 16, so I don’t 100% know where the book is going, but the tone is already, “this is not what we meant for you when we made the social web.” And I know that’s true, to an extent. I don’t think anyone imagined this in the beginning, though I’m certain some people predicted it 10 or so years ago and helped usher it in because it makes lots of money. But it also makes people crazy.

I feel crazy, and when I say that I don’t mean it in the mentally ill sense (although we already know I am that, in some ways) but I mean frazzled, unmoored, grasping. I feel tethered to something for comfort but that thing is what makes me need comfort in the first place. I’ve seen several others compare their relationships with their phones and social media to abusive partner relationships, and I don’t think that’s far off.

Today, when I was overwhelmed by the bloodshed and hatred and extremity of the world all around me, I “retreated” via social media and podcasts into even more of the same. At 9:34am I sent my husband this message:

“I feel so overwhelmed today. I just want to crawl under my desk and cry.”

“I’m so sorry you’re feeling that way,” he messaged back.

But I feel that way almost every day around that time, because I set myself up for it. I know this, and yet I keep doing it, because it feels mandatory for being an active citizen of this world.

I know I’m not the only one in this cycle, and I really don’t think it has to be this way. But one of the things we’re going to have to do to change it is to gather the courage to break out.

On the first page of You Are Not A Gadget, Lanier writes:

“I want to say: You have to be somebody before you can share yourself.”

Right now I get the sense that many of us feel that sharing ourselves is part of what makes us somebody. I’m reminded of this recent piece in The Atlantic about young kids coming to terms with their own online-ness. One 13-year-old said, of trying to find information about herself with a group of friends in fifth grade: “We thought it was so cool that we had pics of ourselves online…We would brag like, ‘I have this many pics of myself on the internet.’ You look yourself up, and it’s like, ‘Whoa, it’s you!’ We were all shocked when we realized we were out there. We were like, ‘Whoa, we’re real people.’”

I’m somewhat ashamed to admit that last part really resonated with me. I grew up online, and have been sharing things about myself there since high school, maybe earlier. Having an online presence, an online self, has felt natural to me for half my life. I’m also a writer, so it might feel more natural to me than most to share my thoughts with the world. But something has shifted over the past few years, and the way the internet – and especially social media – is tied to my identity scares me a little. I find myself wondering if I’m doing certain things because I want to do them, or because I want to share them. When something big happens, I sometimes find myself imagining how I’ll describe it on social media before I even realize what I’m doing. Like I said, tethered. 

Online is where the validation is, I guess, even when we have partners and spouses and families and friends. The silent, pretty, no-strings-attached validation so many of us millennials simultaneously crave (because it’s a normal thing for a human to crave) and cynically joke about not caring about, or not being able to attain. But a lot of us seem to be grabbing for that validation in place of actually dealing with things. And I get it – there is too much to deal with. Mass shootings, climate change, racism, income inequality, mental and physical health problems – it’s all too much. But now that we have been performing for each other online for 30ish years, I’m worried we’re starting to forget not just how to be around each other, but how to feel. As a kid, my identity was so wrapped up in feeling – I cried all the time, was so emotional it scared some of my teachers, and later on definitely scared off a few boyfriends. I don’t cry as much anymore, which is probably healthy, but I also don’t really feel anything stronger than hunger or anxiety for more than a minute at a time. As soon as it pops up – sadness, anger, hurt, shame, worry – there I go, reaching for my phone.

I think there are a lot of remedies to this. One would of course be to just go cold turkey, cut ourselves off from all social media and not look back, but that kills all the good along with the bad. And there is so much good.

Another idea: the people who make this stuff, these products designed to pull us back for more and more, triggering dopamine receptors like slot machines, could…you know…stop. They could pull back and be more mindful – more empathetic – about how their users experience their products. I’m far from the first to suggest this, but given the slowly growing exodus from platforms like Facebook (by both users and employees), it might be about time for them to listen.

Or maybe something more communal is more realistic. Maybe we can get the human connection and validation we crave by helping each other be kinder to our brains and gentler toward our emotions, while also keeping up with all the memes and Trump tweets. What if you had a tech accountability buddy who texted you once a day to ask about your internet activity and how it was making you feel – not to shame you, but to empathize, acknowledge, validate, and encourage you? There are apps that do this, and chat bots, but as much faith as I want to have in empathetic technology, I know they don’t really care. Maybe a friend does, or wants to. Maybe we can get to a healthier place – a place where we can demand better from those who design the tools we use, and figure out how to use them without becoming dependent on them, and get back to feeling the difficult feelings – together.

By the way, you can support Christchurch victims and families here.

Is AOC right about AI?

Conservative Twitter is up in arms today over Rep. Alexandria Ocasio-Cortez saying at an MLK Day event that algorithms are biased. (Of course “bias” has been translated into “racism.”) The general response from the right has been, “What a dumb socialist! Algorithms are run by math. Math can’t be racist!” And from the tech experts on Twitter: “Well, actually….”

I have to put myself in the latter camp. Though I’m not exactly a tech expert, I’ve been researching the impact of technology like AI and algorithms on human well-being for a couple of years now, and the evidence is pretty clear: people have bias, people make algorithms, so algorithms have bias.

When I was a kid, my dad had this new-fangled job as a “computer programmer”. The most vivid and lasting evidence of this vocation was huge stacks of perforated printer paper and dozens upon dozens of floppy disks. But I also remember him saying this phrase enough times to get it stuck in my head: “garbage in, garbage out.” This phrase became popular in the early computer days because it was an easy way to explain what happened when flawed data was put into a machine – the machine spit flawed data out. This was true when my dad was doing…whatever he was doing… and when I was trying to change the look of my MySpace page with rudimentary HTML code. And it’s true with AI, too. (Which is a big reason we need the tech world to focus more on empathy. But I won’t go on that tangent today.)
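If you want to see “garbage in, garbage out” in miniature, here it is in a few lines of Python (made-up numbers, purely illustrative):

```python
# "Garbage in, garbage out," in miniature: the program is correct,
# but one garbage reading poisons the result anyway. (Made-up numbers.)
readings_celsius = [21.0, 20.5, 22.1, 185.0, 21.3]  # one sensor glitched

def average(values):
    return sum(values) / len(values)

# ~54 degrees C: the math did its job faithfully -- the input was the problem.
print(average(readings_celsius))
```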

When I was just starting work on my book, I read Cathy O’Neil’s Weapons of Math Destruction (read it.), which convinced me beyond any remaining doubt that we had a problem. Relying on algorithms that have little to no oversight, and that are entirely susceptible to contamination by human bias – conscious or not – to make decisions for us is not a liberal anxiety dream. It’s our current reality. It’s just that a lot of us – and I’ll be clear that here I mean a lot of us white and otherwise nonmarginalized people – don’t really notice.

Maybe you still think this is BS. Numbers are numbers, you say, regardless of the intent/mistake/feeling/belief of the person entering them into a computer. This is often hard to get your head around when you see all bias as intentional. I get that; I’ve been there. So let me give you some examples:

There are several studies showing that people with names that don’t “sound white” are often passed over for jobs in favor of more “white-sounding” names. It reportedly happens to women, too. A couple of years ago, Amazon noticed that the algorithm it had created to sift through resumes was biased against women. It had somehow “taught itself that male candidates were preferable.” Amazon tweaked the algorithm, but eventually gave up on it, suspecting it might find other ways to skirt neutrality.

The algorithm wasn’t doing that with a mind of its own, of course. Machine-learning algorithms, well, learn, but they have to have teachers, whether those teachers are people or gobs of data arranged by people (or by other bots that were programmed by people…). There’s always a person involved, is my point, and people are fallible. And biased – even unconsciously. Even IBM admits it. This is a really difficult problem that even the biggest tech companies haven’t yet figured out how to fix.

This isn’t about saying “developers are racist/sexist/evil.” It’s about accounting for the fact that all people have biases, and even if we try to set them aside, they can show up in our work – especially when those of us doing that work happen to be a pretty homogeneous group. One argument for more diversity in tech is that if the humans making the bots are more diverse, the bots will know how to recognize and value more than one kind of person. (Hey, maybe instead of trying to kill us, the bots that take over the world will be super woke!)
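How does an algorithm “teach itself” a preference like that? Here’s a minimal sketch in Python – toy data and a toy model, emphatically not Amazon’s actual system – of a model trained on biased historical decisions:

```python
# A hypothetical sketch: train a model on past hiring decisions that
# were biased, and watch it learn the bias. (Toy data, not Amazon's.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

experience = rng.normal(5, 2, n)       # years of experience
gender_proxy = rng.integers(0, 2, n)   # e.g. a gendered word on the resume

# Historical "hired" labels: mostly driven by experience, but past human
# reviewers also penalized candidates flagged by the proxy. Garbage in.
logits = 0.8 * (experience - 5) - 1.5 * gender_proxy
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([experience, gender_proxy])
model = LogisticRegression().fit(X, hired)

# The learned weight on the proxy comes out strongly negative: the model
# has "taught itself that male candidates were preferable." Garbage out.
print("weight on experience:   %+.2f" % model.coef_[0][0])
print("weight on gender proxy: %+.2f" % model.coef_[0][1])
```

Nobody typed “prefer men” anywhere in that code. The bias rode in on the labels, and the model faithfully dug it back out.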

Another example: In 2015, Google came under fire after its photo app labeled several black people as gorillas. There’s no nice way to say that. That’s what happened. The company apologized and tried to fix it, but the best it could do at the time was to remove “gorilla” as an option for the AI.

So what happened? Google hasn’t been totally clear on the answer to this, but image-recognition AI works by learning to categorize lots and lots of photos. Technically someone could have trained it to label black people as gorillas, but perhaps more likely is that the folks training the AI in this case simply didn’t consider this potential unintended consequence of letting an imperfect image-recognition bot out into the world. (And, advocates argue, maybe more black folks on the developer team could have prevented this. Maybe.) Last year a spokesperson told Wired: “Image labeling technology is still early and unfortunately it’s nowhere near perfect.” At least Google Photos lets users report mistakes, but for those who are still skeptical, note: that means even Google acknowledges mistakes are being – and will continue to be – made in this arena.
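That blunt fix is simple enough to sketch. A classifier can only output labels from whatever list it’s given, so deleting a label prevents that one mistake without touching the model underneath – a patch, not a cure. (A hypothetical Python sketch; this is not Google’s code.)

```python
# A hypothetical sketch of the blunt mitigation reported at the time:
# remove the label from the allowed set, so the classifier can never
# output it. This hides the mistake; it doesn't fix the model making it.
ALLOWED_LABELS = {"person", "dog", "cat", "car"}  # "gorilla" deliberately absent

def safe_label(scores: dict[str, float]) -> str:
    """Return the highest-scoring label that is still allowed."""
    allowed = {label: s for label, s in scores.items() if label in ALLOWED_LABELS}
    return max(allowed, key=allowed.get)

# Even if the (imperfect) model scores the banned label highest,
# it can never surface in the product.
print(safe_label({"person": 0.48, "gorilla": 0.52, "dog": 0.0}))  # -> person
```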

One last example, because it’s perhaps the most obvious and also maybe the most ridiculous: Microsoft’s Twitter bot, Tay. In 2016, this AI chatbot was unleashed on Twitter, ready to learn how to talk like a millennial and show off Microsoft’s algorithmic skills. But almost as soon as Tay encountered the actual people of Twitter – all of them, not just cutesy millennials speaking in Internet code but also unrepentant trolls and malignant racists – her limitations were put into stark relief. In less than a day, she became a caricature of a violent, anti-Semitic racist. Some of the tweets seemed to come out of nowhere, but some were thanks to a nifty feature in which people could say “repeat after me” to Tay and she would do just that. (Who ever would have thought that could backfire on Twitter?) Microsoft deleted Tay’s most offensive tweets and eventually made her account private. It was a wild day on the Internet, even for 2016, but it was quickly forgotten. The story bears repeating today, though, because clearly we are still working out the whole bot-human interaction thing.
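The “repeat after me” mechanism is simple enough to sketch. A hypothetical snippet – not Microsoft’s actual code – might look like this, republishing whatever follows a magic phrase with nothing standing between the troll and the timeline:

```python
# A hypothetical sketch of a "repeat after me" feature: the bot reposts
# arbitrary user input verbatim. Whatever trolls type, the bot now says.
def handle_mention(text: str) -> str | None:
    prefix = "repeat after me "
    if text.lower().startswith(prefix):
        return text[len(prefix):]  # echoed with no moderation step in between
    return None  # otherwise fall through to the normal chat model

print(handle_mention("repeat after me anything a troll wants said"))
```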

To close, I’ll just leave you with AOC’s words at the MLK event. See if they still seem dramatic to you.

“Look at – IBM was creating facial recognition technology to target, to do crime profiling. We see over and over again, whether it’s FaceTime, they always have these racial inequities that get translated because algorithms are still made by human beings, and those algorithms are still pegged to those, to basic human assumptions. They’re just automated, and automated assumptions, it’s like if you don’t fix the bias then you’re automating the bias. And that gets even more dangerous.”

(This is the “crime profiling” thing she references, by the way. I’m not sure where the FaceTime thing comes from but I will update this post if/when I get some context on that.)

Update: Thanks to the PLUG newsletter (which I highly recommend) I just came across this fantastic video that does a wonderful job of explaining the issue of AI bias and diversity. It includes a pretty wild example, too. Check it out.

The Future of Feeling

Hi all! It’s been kind of quiet here recently because I’ve been working on a pretty big project that I can now finally announce: I’m writing a book!

It’s about empathy, of course. The future of empathy and technology, to be more precise. I get to interview lots of people who are creating technology aimed at building and/or preserving empathy in our tech-obsessed world, and it’s honestly a dream come true.

I’ll still be blogging here a bit. Even 60,000 words isn’t enough to cover everything empathy ;) And I want to thank all of you for reading – you helped me get here!

Stay tuned for updates, and more nerdy posts in the coming months.

We like to move it

Virtual reality is often referred to as an “empathy machine,” a term coined in 2015 by tech entrepreneur and artist Chris Milk in a TED Talk. The idea is that while reading about something, or even watching a documentary, can be moving, there’s something uniquely intimate about virtual reality. It puts you “in” a situation in a way that other media doesn’t.

I’ve written before about how this idea has taken hold in service of social causes, and how “future tech” that’s really right around the corner could take empathy to a whole new level. Research is ongoing into what really happens when people put on VR headsets. Do they really feel more empathy for the characters they’re “watching,” or for people who experience the things they “experience” in VR? Some evidence shows that the answer is yes, but feedback about overwhelm and empathy fatigue after VR experiences is also common.

A couple of weeks ago Jeremy Bailenson, one of the foremost experts on VR, wrote in WIRED about some new evidence that the most effective way to create empathy through a VR experience is to make the user move around.

Bailenson, a professor of communication at Stanford, conducted a study in 2013 in which participants simulated being colorblind. Half used their imagination, while the other half used VR. They were then asked to lift up and sort objects in VR. The results showed that those who had experienced colorblindness in VR had a much harder time completing the task, and after the study, they spent a lot more time helping others when asked to do so.

The next study Bailenson plans to release will show a correlation between moving around a virtual coral reef and subjects’ desire to know more about ocean conservation.

He goes into a lot more detail in the piece, which you should read! This strategy of making people move around while having a VR experience might be the answer to a lot of criticisms of empathy-focused VR. It makes sense to me just from a muscle-memory standpoint, but it will be interesting to see what the data shows about how VR, movement, and empathy are actually connected in our brains.

Empathy, virtual reality, and anniversary anxiety

I’ve been working on a lot of things lately, and I’m sorry to say that this blog has not been one of them… but it will be again soon, worry not! In the meantime, here’s a look at two stories I recently published:

Can Virtual Reality Change Minds on Social Issues? at Narratively, about how nonprofits and other organizations are using virtual reality to trigger empathy and, ideally, action. There’s still some debate about whether this actually works at scale, but it can’t be denied that people are making some amazing, moving things with VR. Give the story a read, and check out the awesome gif at the top of the page!

A couple of days before the anniversary of the presidential election, I got the opportunity to write about why anniversaries like this are hard for people, psychologically. It turned into a really interesting piece that I think is relevant to the kind of behavioral science stuff I’m thinking about all the time: Why The Election Anniversary Is Hitting You So Hard, at Lifehacker.

More to come soon!