Woulda, shoulda, coulda

Twitter co-founder Ev Williams posted a thread yesterday. Not super surprising for one of the fathers of Twitter, except that, as he explained in said thread, he doesn’t post his thoughts there much. He sticks to links, because he “[doesn’t] enjoy debating with strangers in a public setting” and he “always preferred to think of [Twitter] as an information network, rather than a social network.”

That definitely elicited some eye-rolls, but this was the tweet – in a long thread about how he wants reporters to stop asking him how to fix Twitter’s abuse problems – that really caught my eye…

That is… exactly the problem! It’s reassuring to see this apparent self-awareness, and frustrating how late it’s come and how defensive he still is…

Maybe he feels like he can’t say for sure whether being more aware of how people “not like him” were being treated, or having a more diverse leadership team or board, would have led the company to tackle abuse sooner… but those of us who are “not like him” are pretty confident it would have. Or at least it could have. It should have.

This is what I mean when I talk about a lack of empathy in tech. I don’t know Ev Williams or any of his co-founders; I don’t know many people who have founded anything at all. And I understand that founders and developers are people deserving of empathy too. As I read Williams’s thread, I tried to put myself in his shoes, even as I resisted accepting much of what he was saying. I get that “trying to make the damn thing work” must have been a monumental task. But as I talk about here a lot – there’s empathy, and then there’s sympathy. And as Dylan Marron likes to say, empathy is not endorsement. I can imagine it, but I don’t get it. And it’s little solace to the hundreds of people who are harassed and abused via Twitter every day to hear it confirmed that their safety wasn’t a priority, whatever the reason.

They know this – we know this. The question is, what now? Williams, for his part, brushes off this question. It’s not his problem anymore, he seems to say, and he doesn’t know how to fix it, but if you have any “constructive ideas,” you should let Twitter know (or write about them on Medium, Williams’s other tech baby…).

The toxicity that Williams says he’s trying to avoid – that he says his famous friend is very upset by, that he seems almost ready to acknowledge is doing real damage to many, many other people who use Twitter – was part of what inspired me to write The Future of Feeling. I wanted to know: if it’s this bad right now, how much worse could it get? Is anyone trying to stop this train?

I talked to a lot of people in my reporting for the book, and over and over again I heard the same idea echoed: empathy has to be part of the fabric of any new technology. It has to be present in the foundation. It has to be a core piece of the mission. Creating a thing for the sake of creating the thing isn’t good enough anymore. (Frankly, it never was.) The thing you create is very likely to take on a life of its own. You need to give it some soul, too.

Williams ended his thread with a tweet that actually resonated with me. It’s something I’ve found to be absolutely true:

People made this mess. People will have to clean it up. If Williams doesn’t want to, or know how to, I know a lot of other folks who are getting their hands dirty giving it a try.

Another thing the “Google Bro” got wrong…

If you’ve known me for any length of time, you can probably guess my reaction when I heard about the “Google Bro” memo. A few years ago it would have resulted in a long Facebook rant, but slightly more grown-up, less knee-jerk Kaitlin just rolled her eyes and started reading the thinkpieces. Really, I don’t have much to add on the sexism side of this. The man (James Damore) clearly didn’t know what he was talking about regarding women and what we can and/or want to do with our lives and careers. On the free speech front…I’m not convinced there’s a legal issue with Google firing him, but that will be an interesting one to watch the courts decide (and I’m hoping my favorite legal podcaster, Dahlia Lithwick, will talk about it on a show soon…). But what really stuck out to me was Google Bro’s suggestion to upper management at the tech giant that they “de-emphasize empathy.”

I’ve written on this blog before about how empathy has become something of a buzzword that is dangerously close to losing its meaning…and I’m afraid that for Damore, that’s already happened. His position isn’t new, though. I’ve talked to lots of people who think empathy basically = sensitivity, or in other words, weakness. It means feelings, emotions, “political correctness,” according to a certain perspective. And, they argue, encouraging people to spend their time wondering how others in their workplace or industry are feeling is not only a waste of time; it saps vital mental energy and improperly diverts resources and attention that could be used to create the next important technological innovation.

The thing is…those two things – technological innovation and empathy – are not actually mutually exclusive. It would be convenient for a lot of people if they were. For those who are used to Silicon Valley being pretty white and male, it would be super convenient if they didn’t have to consider how non-white and non-male people think about their company culture or the products they create. And it would be really convenient if they didn’t have to put themselves in their end users’ shoes either, and could just go on assuming the end user was probably just like them. But it’s just not true, and it hasn’t been for a while. Empathy isn’t a PC weapon; it’s a really useful and productive tool that’s vital to design and innovation…and also to human compassion, if you’re interested in that kind of thing too.

Let’s look at what the memo actually said:

“I’ve heard several calls for increased empathy on diversity issues. While I strongly support trying to understand how and why people think the way they do, relying on affective empathy—feeling another’s pain—causes us to focus on anecdotes, favor individuals similar to us, and harbor other irrational and dangerous biases. Being emotionally unengaged helps us better reason about the facts.”

On Friday, in Forbes, Mark Murphy called this paragraph “dangerously wrong.” For one thing, he notes, “empathy is not the same as having a ‘case of the feels.’ Being empathic doesn’t mean we’re walking around weeping because of another’s pain.” All it means, really, is having a sense of another person’s perspective, maybe imagining yourself in their shoes. That’s it. If you’re getting super upset and bogged down, you’re probably experiencing something else known as emotional contagion, and yeah, that’s not always good.

The mistake most relevant to the tech world, though, is Damore’s belief that if Google stops obsessing over empathy, outcomes will improve. In Murphy’s words, “this is utter nonsense.”

There’s a reason everyone in industries from tech to medicine to education is talking about empathy so much lately, and it’s not just political correctness. People – employees, customers, users, patients, students – consistently say it’s what they want and what’s missing from their experiences. There’s even a Global Empathy Index to measure this, and it shows that in business, more empathy leads to better performance, not worse. Amusingly, guess which company was No. 2 on the list of the most empathetic for 2016?

Ash Huang at Fast Company brought up another good point: “Ironically enough, this man has written 10 pages against empathy, and yet this is exactly what he seeks from his coworkers,” Huang writes. “He implores them to acknowledge his frustration and respect his point of view in the same language used by diversity advocates whose tactics he objects to, and whose foundational assumptions…he rejects.” It seems we don’t need less empathy in tech – we need even more, or we need more of the people who think they’re exempt from it to really start practicing it.

This has consequences not just for hostile memo-writers and their colleagues, but also for everyone who uses the products they create. The example that always comes to my mind is Twitter. Its creators could probably have been more empathetic in designing and creating the “microblogging” platform, as it was once called. They probably didn’t predict that it would become such a source of harassment for so many people. But if empathy had been a more explicit part of the process…maybe they would have been able to predict that. If empathy and diversity had been higher priorities, maybe more people involved in the process would have been able to share experiences that broadened everyone’s scope of understanding, and ultimately created a product that more people would feel comfortable using. Maybe that’s more touchy-feely than you’re used to being in a work environment, especially in tech. Maybe you don’t feel you have a personal stake in whether your company is diverse. But it’s worth noting that calls for empathy don’t just come from your “PC” colleagues. Users are paying attention and demanding more of it, too.

Accidental immersion reporting: my heart monitor experience

I don’t normally have a lot of personal experience with the things I write about for work. I don’t have nearly enough money to invest with a hedge fund, I’m probably never going to have a pension, and I may one day be wealthy enough to need a registered investment adviser, but not quite yet. I love the stories that allow me to write about interesting financial concepts that also have some relevance to my life, and the lives of my peers. That was one of the reasons I loved reporting and writing my recent health care feature. It was about how asset managers are investing in the future of health care, but it also included a lot of science about how our bodies work and discussion of new technologies that anyone can use. I assumed that would be the extent of my personal connection to the story, and I was OK with that.

Then, after the piece was finished and published, I had a follow-up appointment with my cardiologist. I’ve been seeing her for about a year and a half, since I started having heart palpitations while training for the New York City Marathon (which I did not end up running, for obvious reasons!). My heart is fine structurally, but when I told my doctor I was still bothered by the palpitations, she suggested I wear a heart monitor for two weeks so she could get a better sense of what (if anything) was going on. I had worn a monitor once before, for 24 hours, and it was not a pleasant experience. I was left with extremely irritated skin where the monitor had stuck to my chest, and the wires had gotten tangled up in my clothes. I was excited, then, when I realized that my doctor had quickly upgraded to the newest technology: the Body Guardian Remote Monitoring System from Preventice.

How it worked: I stuck an adhesive strip with sensors on my chest over my heart, snapped on a small square monitor, and pushed a button that let the monitor communicate with a smartphone made just for this purpose. The monitor tracks the wearer’s heart rate constantly, sending a full report at the end of the designated period. But if wearers feel something irregular, they can push the button, select any symptoms they’re feeling on the smartphone screen, and a report is sent directly to the doctor. If anything truly dangerous happens, the monitor is supposed to pick it up and send an emergency alert.
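For the curious, here’s that logic as I understood it, sketched out in Python. To be clear, this is just my own illustration of the general idea, not Preventice’s actual software; every name, threshold, and detail here is invented.

```python
import time

# Made-up emergency thresholds; a real monitor analyzes rhythm, not just rate.
DANGER_BPM_LOW = 30
DANGER_BPM_HIGH = 180

class WearableHeartMonitor:
    """Hypothetical sketch of a monitor like the one I wore. Not real code."""

    def __init__(self):
        self.log = []  # continuous readings, sent as a full report at the end

    def record(self, bpm, connected=True):
        # Every reading is logged; the full log becomes the end-of-period report.
        self.log.append({"time": time.time(), "bpm": bpm, "connected": connected})
        # The automatic safety net: anything truly dangerous triggers an alert.
        if connected and not (DANGER_BPM_LOW < bpm < DANGER_BPM_HIGH):
            self.send_emergency_alert(bpm)

    def patient_event(self, symptoms, connected=True):
        # Called when the wearer pushes the button and picks symptoms on the phone.
        if not connected:
            # If the sensor strip has peeled loose, there is nothing to send;
            # the doctor just sees a flat line for this stretch.
            raise ConnectionError("monitor disconnected; symptoms not sent")
        self.send_report({"symptoms": symptoms, "recent_readings": self.log[-10:]})

    def send_report(self, payload):
        print("Report sent to doctor:", payload)

    def send_emergency_alert(self, bpm):
        print(f"EMERGENCY ALERT: heart rate {bpm} bpm")
```

The point of the sketch is the three paths: the quiet continuous log, the button press, and the automatic alert. As I was about to learn, all three depend on the sensors actually staying connected.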

I think my experience with this thing gave me a much more practical idea of the true impact of the health care revolution. Because once the awe wears off (my doctor could essentially watch my heart beating from her office if she wanted to!), the reality settles in: technology, and the people who have to operate it, are flawed.

I have really sensitive skin, so I was told to change the sensor strip as infrequently as possible. The problem with that? After a day or two, its stickiness started to wear off, and if I moved around too much the monitor disconnected. This was mostly just annoying, until the end of the first week when I was standing in my kitchen doing dishes and had an intense run of (what I think were) premature ventricular contractions. I’m pretty used to them, but when I haven’t felt any for a while and am not feeling particularly anxious, they can be scary when they decide to pop up, especially when there seem to be several in a row. I immediately reached for the smartphone to log my symptoms, but I was met with an error message about connectivity. I sent my doctor a non-urgent message through her hospital’s web portal – another much-lauded technological innovation – to see if she could check the log for a reading. There was nothing. When she showed me the printouts at my follow-up appointment today, I saw where the disconnection happened – a flat line. “You weren’t dead,” she said, “so we know it disconnected.” I was frustrated and disappointed. How useful is an exciting new piece of technology if all it can tell you is that you’re not dead?

To be fair, the monitor worked properly for most of the two weeks, and it showed me and my doctor that my heart works normally most of the time, too. But in the one five-second period when I really needed to see what was going on, the technology let me down. Or did I let down the technology? Maybe I should have known to change the sensor strip earlier. Maybe the person who taught me how to use it didn’t emphasize that enough. Maybe the disconnection was caused by something else entirely.

Whatever the reason for my frustrating experience, it’s a reminder that however exciting new technology-based health innovations seem, however effective they would be for patient outcomes if they worked perfectly, they often don’t. Humans still have to operate the technology, for the most part, and that introduces room for error. Maybe that margin will grow smaller and smaller as investment and research into new health technology continues. For now, I’m dialing back my enthusiasm just a little bit, though I won’t hesitate to try something like this again. And perhaps more importantly, I’ll be adding a little more healthy skepticism to my reporting on health care technology.