just a little tech existentialism on a friday night

Note: I wrote this on Friday night (3/15) but didn’t want to post right away, to avoid seeming to make the Christchurch tragedy about me. That is not my intention at all. Rather, my intent is to share some of what was going through my mind that day (and frankly, many days) in hopes that it resonates with others and contributes to a broader conversation.


Who/what do you turn to when you feel overwhelmed or exhausted or afraid? When you feel overrun by information and opinions, how do you protect yourself?

I realized today that I don’t really have an answer to those questions.

It’s been a really long work week, and I’ve been channeling my stress into two things that I’ve noticed have become crutches for me when I don’t want to sit with my feelings: Instagram and podcasts.

This morning, by the time I got to work at 8:30 I had already watched about half an hour of Instagram stories, which is how I found out about the Christchurch shootings. I had heard a bit more about the horror on the short morning news podcast Up First, which I usually listen to while I get ready for work. I had also scrolled through Twitter for a few minutes, taking in but not quite digesting takes from dozens of people about what had happened, takes that made me feel, for a few seconds each: sad, sick, disgusted, embarrassed, guilty, defensive, angry, and heartbroken.

In the car, I put on Pod Save America and absorbed about 15 minutes of dudes yelling about politics and reminding me how untenable our current political situation is.

By the time I got to work I was feeling pretty anxious, but that’s nothing new for me so I just accepted it. I read some news, looked at Twitter some more, watched some more Instagram stories. Then I put PSA back on so I could listen while I did some editing. It’s like muscle memory. Do some work while listening to a podcast, check email, get stressed about something, reach for phone and flip over to Instagram, feel guilty for doing that, get back to work and podcast, remember the world is burning, head over to Twitter, see something horrific, go back to Instagram for comfort, fill head with more and more and more of other people’s stories, ideas, and priorities.

I started reading You Are Not A Gadget by Jaron Lanier earlier this week and I’m only on page 16, so I don’t 100% know where the book is going, but the tone is already, “this is not what we meant for you when we made the social web.” And I know that’s true, to an extent. I don’t think anyone imagined this in the beginning, though I’m certain some people predicted it 10 or so years ago and helped usher it in because it makes lots of money. But it also makes people crazy.

I feel crazy, and when I say that I don’t mean it in the mentally ill sense (although we already know I am that, in some ways) but I mean frazzled, unmoored, grasping. I feel tethered to something for comfort, but that thing is what makes me need comfort in the first place. I’ve seen several others compare their relationships with their phones and social media to relationships with abusive partners, and I don’t think that’s far off.

Today, when I was overwhelmed by the bloodshed and hatred and extremity of the world all around me, I “retreated” via social media and podcasts into even more of the same. At 9:34am I sent my husband this message:

“I feel so overwhelmed today. I just want to crawl under my desk and cry.”

“I’m so sorry you’re feeling that way,” he messaged back.

But I feel that way almost every day around that time, because I set myself up for it. I know this, and yet I keep doing it, because it feels mandatory for being an active citizen of this world.

I know I’m not the only one in this cycle, and I really don’t think it has to be this way. But one of the things we’re going to have to do to change it is to gather the courage to break out.

On the first page of You Are Not A Gadget, Lanier writes:

“I want to say: You have to be somebody before you can share yourself.”

Right now I get the sense that many of us feel that sharing ourselves is part of what makes us somebody. I’m reminded of this recent piece in The Atlantic about young kids coming to terms with their own online-ness. One 13-year-old said, of trying to find information about herself with a group of friends in fifth grade: “We thought it was so cool that we had pics of ourselves online…We would brag like, ‘I have this many pics of myself on the internet.’ You look yourself up, and it’s like, ‘Whoa, it’s you!’ We were all shocked when we realized we were out there. We were like, ‘Whoa, we’re real people.’”

I’m somewhat ashamed to admit that last part really resonated with me. I grew up online, and have been sharing things about myself there since high school, maybe earlier. Having an online presence, an online self, has felt natural to me for half my life. I’m also a writer, so it might feel more natural to me than most to share my thoughts with the world. But something has shifted over the past few years, and the way the internet – and especially social media – is tied to my identity scares me a little. I find myself wondering if I’m doing certain things because I want to do them, or because I want to share them. When something big happens, I sometimes find myself imagining how I’ll describe it on social media before I even realize what I’m doing. Like I said, tethered. 

Online is where the validation is, I guess, even when we have partners and spouses and families and friends. The silent, pretty, no-strings-attached validation so many of us millennials simultaneously crave (because it’s a normal thing for a human to crave) and cynically joke about not caring about, or not being able to attain. But a lot of us seem to be grabbing for that validation in place of actually dealing with things. And I get it – there is too much to deal with. Mass shootings, climate change, racism, income inequality, mental and physical health problems – it’s all too much. But now that we have been performing for each other online for 30ish years, I’m worried we’re starting to forget not just how to be around each other, but how to feel. As a kid, my identity was so wrapped up in feeling – I cried all the time, was so emotional it scared some of my teachers, and later on definitely scared off a few boyfriends. I don’t cry as much anymore, which is probably healthy, but I also don’t really feel anything stronger than hunger or anxiety for more than a minute at a time. As soon as it pops up – sadness, anger, hurt, shame, worry – there I go, reaching for my phone.

I think there are a lot of remedies to this. One would of course be to just go cold turkey, cut ourselves off from all social media and not look back, but that kills all the good along with the bad. And there is so much good.

Another idea: the people who make this stuff, these products designed to pull us back for more and more, triggering dopamine receptors like slot machines, could…you know…stop. They could pull back and be more mindful – more empathetic – about how their users experience their products. I’m far from the first to suggest this, but given the slowly growing exodus from platforms like Facebook (by both users and employees), it might be about time for them to listen.

Or maybe something more communal is more realistic. Maybe we can get the human connection and validation we crave by helping each other be kinder to our brains and gentler toward our emotions, while also keeping up with all the memes and Trump tweets. What if you had a tech accountability buddy who texted you once a day to ask about your internet activity and how it was making you feel – not to shame you, but to empathize, acknowledge, validate, and encourage you? There are apps that do this, and chat bots, but as much faith as I want to have in empathetic technology, I know they don’t really care. Maybe a friend does, or wants to. Maybe we can get to a healthier place – a place where we can demand better from those who design the tools we use, and figure out how to use them without becoming dependent on them, and get back to feeling the difficult feelings – together.

By the way, you can support Christchurch victims and families here.


Is AOC right about AI?

Conservative Twitter is up in arms today over Rep. Alexandria Ocasio-Cortez saying at an MLK Day event that algorithms are biased. (Of course “bias” has been translated into “racism.”) The general response from the right has been, “What a dumb socialist! Algorithms are run by math. Math can’t be racist!” And from the tech experts on Twitter: “Well, actually….”

I have to put myself in the latter camp. Though I’m not exactly a tech expert, I’ve been researching the impact of technology like AI and algorithms on human well-being for a couple of years now, and the evidence is pretty clear: people have bias, people make algorithms, so algorithms have bias.

When I was a kid, my dad had this new-fangled job as a “computer programmer”. The most vivid and lasting evidence of this vocation was huge stacks of perforated printer paper and dozens upon dozens of floppy disks. But I also remember him saying this phrase enough times to get it stuck in my head: “garbage in, garbage out.” This phrase became popular in the early computer days because it was an easy way to explain what happened when flawed data was put into a machine – the machine spit flawed data out. This was true when my dad was doing…whatever he was doing… and when I was trying to change the look of my MySpace page with rudimentary HTML code. And it’s true with AI, too. (Which is a big reason we need the tech world to focus more on empathy. But I won’t go on that tangent today.)
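If it helps to see “garbage in, garbage out” in actual code, here’s a deliberately tiny sketch in Python – made-up data, nobody’s real system – of a “model” that does nothing but learn from past decisions. If those decisions were biased, the bias comes out the other side, fully automated:

```python
# A toy illustration of "garbage in, garbage out": a naive "model" that
# does nothing but learn approval rates from historical decisions. If the
# history is biased, the learned rule is biased -- no malice required.
from collections import defaultdict

# Made-up historical hiring data: (school, was_hired). The labels encode
# a past preference for School A, not anything about candidate quality.
history = [
    ("School A", True), ("School A", True), ("School A", True),
    ("School A", False),
    ("School B", True),
    ("School B", False), ("School B", False), ("School B", False),
]

def learn_rates(records):
    """Learn each group's historical hire rate."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {group: hires[group] / totals[group] for group in totals}

def recommend(model, group, threshold=0.5):
    """'Recommend' a candidate if their group's historical rate clears the bar."""
    return model.get(group, 0.0) >= threshold

model = learn_rates(history)
print(model)                         # {'School A': 0.75, 'School B': 0.25}
print(recommend(model, "School A"))  # True -- yesterday's preference, automated
print(recommend(model, "School B"))  # False
```

Nobody ships something this crude, of course, but swap in a million resumes and a fancier learner and the failure mode is the same shape.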

When I was just starting work on my book, I read Cathy O’Neil’s Weapons of Math Destruction (read it.), which convinced me beyond any remaining doubt that we had a problem. Relying on algorithms that have little to no oversight and are entirely susceptible to contamination by human bias – conscious or not – to make decisions for us is not a liberal anxiety dream. It’s our current reality. It’s just that a lot of us – and I’ll be clear that here I mean a lot of us white and otherwise nonmarginalized people – don’t really notice.

Maybe you still think this is BS. Numbers are numbers, regardless of the intent/mistake/feeling/belief of the person entering them into a computer, you say. This is often hard to get your head around when you see all bias as intentional – I get that; I’ve been there. So let me give you some examples:

There are several studies showing that people with names that don’t “sound white” are often passed over for jobs in favor of more “white-sounding” names. It reportedly happens to women, too. A couple of years ago, Amazon noticed that the algorithm it had created to sift through resumes was biased against women. It had somehow “taught itself that male candidates were preferable.” Amazon tweaked the algorithm, but eventually gave up on it, claiming it might find other ways to skirt neutrality. The algorithm wasn’t doing that with a mind of its own, of course. Machine-learning algorithms, well, learn, but they have to have teachers, whether those teachers are people or gobs of data arranged by people (or by other bots that were programmed by people…). There’s always a person involved, is my point, and people are fallible. And biased. Even unconsciously. Even IBM admits it.

This is a really difficult problem that even the biggest tech companies haven’t yet figured out how to fix. It isn’t about saying “developers are racist/sexist/evil”; it’s about accounting for the fact that all people have biases, and even if we try to set them aside, they can show up in our work. Especially when those of us doing that work happen to be a pretty homogeneous group. One argument for more diversity in tech is that if the humans making the bots are more diverse, the bots will know how to recognize and value more than one kind of person. (Hey, maybe instead of trying to kill us, the bots that take over the world will be super woke!)
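Reports on the Amazon tool said it penalized resumes containing the word “women’s.” I’ve never seen that code, so here’s a purely hypothetical toy version of the general failure mode: gender is never an input, but a correlated word in biased training data becomes a stand-in for it anyway:

```python
# Hypothetical sketch of proxy bias: gender is never an input feature, but a
# word correlated with it in biased training labels becomes a stand-in for it.
from collections import defaultdict

# Toy training set: (resume text, past reviewer's verdict). The verdicts
# are biased; the model has no way to know that.
training = [
    ("captain chess club", True),
    ("lead developer chess club", True),
    ("captain women's chess club", False),
    ("lead developer women's chess club", False),
]

def learn_word_scores(examples):
    """Score each word by how often it co-occurs with a 'hire' verdict."""
    scores = defaultdict(float)
    for text, hired in examples:
        for word in text.split():
            scores[word] += 1.0 if hired else -1.0
    return scores

def rank(scores, text):
    """Sum the learned word scores for a new resume."""
    return sum(scores[word] for word in text.split() if word in scores)

scores = learn_word_scores(training)
# Two resumes, identical except for one proxy word:
print(rank(scores, "captain chess club"))          # 0.0
print(rank(scores, "captain women's chess club"))  # -2.0 -- the word took the blame
```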

Another example: In 2015, Google came under fire after a facial recognition app identified several black people as gorillas. There’s no nice way to say that. That’s what happened. The company apologized and tried to fix it, but the best it could do at the time was to remove “gorilla” as an option for the AI. So what happened? Google hasn’t been totally clear on the answer to this, but facial recognition AI works by learning to categorize lots and lots of photos. Technically someone could have trained it to label black people as gorillas, but perhaps more likely is that the folks training the AI in this case simply didn’t consider this potential unintended consequence of letting an imperfect facial recognition bot out into the world. (And, advocates argue, maybe more black folks on the developer team could have prevented this. Maybe.) Last year a spokesperson told Wired: “Image labeling technology is still early and unfortunately it’s nowhere near perfect.” At least Google Photos lets users report mistakes, but for those who are still skeptical, note: that means even Google acknowledges mistakes are being – and will continue to be – made in this arena.
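Google hasn’t said exactly what went wrong, so this is speculation on my part, but one well-documented mechanism is representation bias: when one group barely appears in the training data, the model is simply worse at recognizing it. Here’s an abstract toy version, with made-up one-dimensional “images” instead of real ones:

```python
# Hypothetical sketch of representation bias: a classifier trained on
# imbalanced data is measurably worse for the underrepresented group.
import random

random.seed(0)

def make_samples(center, label, n):
    """Generate n noisy one-dimensional 'images' for a group."""
    return [(random.gauss(center, 1.0), label) for _ in range(n)]

# Group A dominates the training set; Group B is barely in it.
train_set = make_samples(0.0, "A", 500) + make_samples(3.0, "B", 5)
test_set = make_samples(0.0, "A", 200) + make_samples(3.0, "B", 200)

def predict(x):
    """1-nearest-neighbor: label a point by its closest training example."""
    return min(train_set, key=lambda s: abs(s[0] - x))[1]

for group in ("A", "B"):
    points = [(x, y) for x, y in test_set if y == group]
    accuracy = sum(predict(x) == y for x, y in points) / len(points)
    print(group, round(accuracy, 2))  # B's accuracy lags: it's barely in the data
```

No one trains a real vision system this way, but the pattern holds at scale: the group the data underrepresents is the group the model fails.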

One last example, because it’s perhaps the most obvious and also maybe the most ridiculous: Microsoft’s Twitter bot, Tay. In 2016, this AI chatbot was unleashed on Twitter, ready to learn how to talk like a millennial and show off Microsoft’s algorithmic skills. But almost as soon as Tay encountered the actual people of Twitter – all of them, not just cutesy millennials speaking in Internet code but also unrepentant trolls and malignant racists – her limitations were put into stark relief. In less than a day, she became a caricature of a violent, anti-Semitic racist. Some of the tweets seemed to come out of nowhere, but some were thanks to a nifty feature in which people could say “repeat after me” to Tay and she would do just that. (Who ever would have thought that could backfire on Twitter?) Microsoft deleted Tay’s most offensive tweets and eventually made her account private. It was a wild day on the Internet, even for 2016, but it was quickly forgotten. The story bears repeating today, though, because clearly we are still working out the whole bot-human interaction thing.

To close, I’ll just leave you with AOC’s words at the MLK event. See if they still seem dramatic to you.

“Look at – IBM was creating facial recognition technology to target, to do crime profiling. We see over and over again, whether it’s FaceTime, they always have these racial inequities that get translated because algorithms are still made by human beings, and those algorithms are still pegged to those, to basic human assumptions. They’re just automated, and automated assumptions, it’s like if you don’t fix the bias then you’re automating the bias. And that gets even more dangerous.”

(This is the “crime profiling” thing she references, by the way. I’m not sure where the FaceTime thing comes from but I will update this post if/when I get some context on that.)

Update: Thanks to the PLUG newsletter (which I highly recommend) I just came across this fantastic video that does a wonderful job of explaining the issue of AI bias and diversity. It includes a pretty wild example, too. Check it out.

Another thing the “Google Bro” got wrong…

If you’ve known me for any length of time, then you probably know my reaction when I heard about the “Google Bro” memo. A few years ago it would have resulted in a long Facebook rant, but slightly more grown-up, less knee-jerk Kaitlin just rolled her eyes and started reading the thinkpieces. Really, I don’t have much to add on the sexism side of this. The man (James Damore) clearly didn’t know what he was talking about regarding women and what we can and/or want to do with our lives and careers. On the free speech front…I’m not convinced there’s a legal issue with Google firing him, but that will be an interesting one to watch the courts decide (and I’m hoping my favorite legal podcaster Dahlia Lithwick will talk about it on a show soon…?). But what really stuck out to me was Google Bro’s suggestion to upper management at the tech giant that they “de-emphasize empathy.”

I’ve written on this blog before about how empathy has become something of a buzzword that is dangerously close to losing its meaning…and I’m afraid that for Damore that’s already happened. His position isn’t new, though. I’ve talked to lots of people who think empathy basically = sensitivity, or in other words weakness. It means feelings, emotions, “political correctness,” according to a certain perspective. And, they argue, encouraging people to spend their time wondering how others in their workplace or industry are feeling is not only a waste of time, it saps vital mental energy and improperly diverts resources and attention that could be used to create the next important technological innovation.

The thing is…those things – technological innovation and empathy – are not actually mutually exclusive. It would be convenient for a lot of people if they were. For those who are used to Silicon Valley being pretty white and male, it would be super convenient if they didn’t have to consider how non-white and non-male people thought about their company culture or the products they create. And it would be really convenient if they didn’t have to put themselves in their end users’ shoes either, and could just go on assuming the end user was probably just like them. But it’s just not true. And it hasn’t been for a while. Empathy isn’t a PC weapon, it’s actually a really useful and productive tool that’s vital to design and innovation…and also human compassion, if you’re interested in that kind of thing too.

Let’s look at what the memo actually said:

“I’ve heard several calls for increased empathy on diversity issues. While I strongly support trying to understand how and why people think the way they do, relying on affective empathy—feeling another’s pain—causes us to focus on anecdotes, favor individuals similar to us, and harbor other irrational and dangerous biases. Being emotionally unengaged helps us better reason about the facts.”

On Friday, in Forbes, Mark Murphy called this paragraph “dangerously wrong.” For one thing, he notes, “empathy is not the same as having a ‘case of the feels.’ Being empathic doesn’t mean we’re walking around weeping because of another’s pain.” All it means, really, is having a sense of another person’s perspective, maybe imagining yourself in their shoes. That’s it. If you’re getting super upset and bogged down, you’re probably experiencing something else known as emotional contagion, and yeah, that’s not always good.

The mistake most relevant to the tech world, though, is Damore’s belief that if Google stops obsessing over empathy, outcomes will improve. In Murphy’s words, “this is utter nonsense.”

There’s a reason everyone in industries from tech to medicine to education is talking about empathy so much lately, and it’s not just political correctness. People – employees, customers, users, patients, students – consistently say it’s what they want, it’s what’s missing from their experiences. There’s even a Global Empathy Index to measure this, and it shows that in business, more empathy leads to better performance, not less. Amusingly, guess which company was No. 2 on the list of the most empathetic for 2016?

Ash Huang at Fast Company brought up another good point: “Ironically enough, this man has written 10 pages against empathy, and yet this is exactly what he seems [to want] from his coworkers,” Huang writes. “He implores them to acknowledge his frustration and respect his point of view in the same language used by diversity advocates whose tactics he objects to, and whose foundation[al] assumptions…he rejects.” It seems we don’t need less empathy in tech – we need even more, or we need more of the people who think they’re exempt from it to really start practicing it.

This has consequences not just for hostile memo-writers and their colleagues, but also for everyone who uses the products they create. The example that always comes to my mind is Twitter. Its creators could probably have been more empathetic in designing and creating the “micro blogging” platform, as it was once called. They probably didn’t predict that it would become such a source of harassment for so many people. But if empathy had been more of an explicit part of the process…maybe they would have been able to predict that. If empathy and diversity had been higher priorities, maybe more people involved in the process would have been able to share experiences that broadened everyone’s scope of understanding, and ultimately created a product that more people would feel comfortable using. Maybe that’s more touchy-feely than you’re used to being in a work environment, especially in tech. Maybe you don’t feel you have a personal stake in whether your company is diverse. But it’s worth noting that calls for empathy don’t just come from your “PC” colleagues. Users are paying attention and demanding more of it, too.


Retirement, tech, and toilets

I had a pretty prolific week! Thought I’d gather my clips together in one quick post. The stories get more “fun” as they go on, I promise ;)

First – Obama has recently come out in favor of a stronger fiduciary standard for retirement advisors. This is a big deal for 401(k) plan participants and pension reformers. The Obama administration has been super vocal about this issue – more so than any other presidential administration.

Google already knows pretty much everything about you. I recently had a flight scheduled and when I went to Google something that day… there was my flight info at the top of the search page! If that creeps you out at all, you may be interested to know that the company (along with many of its rivals) is planning to branch out into the home security space. It’s already working on home automation with its purchase of Nest, but it also recently bought home security camera company Dropcam and may be in talks with ADT to connect Nest to the security company’s automated product ADT Pulse. Questions about data and privacy abound…

And last but certainly not least, this week I got to write a story that combined a few things I never thought I’d be allowed to type in an article: Muppets and poop. It’s about how Sesame Workshop and the Bill & Melinda Gates Foundation, with help from the World Bank, are using a new Muppet named Raya to teach kids and their caregivers in developing countries about sanitation.

Thanks for reading! And I’ll be back soon with a new original post.

Apple & Facebook’s “game changer”

Facebook and Apple have apparently decided to cover egg freezing for female employees. I have some thoughts about this…but first, a small note about my recent absence: I’m currently on vacation back home in North Carolina after finishing up my last couple of weeks of work at Law360. Next Monday, I’ll be starting a new job! It’s an exciting change, and the transition process has had me pretty busy lately. Thankfully I have a week to relax in between, and I’m trying to really do just that, but I couldn’t stay away from this space for long!

OK, down to business. I usually save topics like this for “Feminist Friday,” but every day this week is basically a Friday for me, so when I came across this story I thought, why not? From NBC:

Facebook recently began covering egg freezing, and Apple will start in January, spokespeople for the companies told NBC News. The firms appear to be the first major employers to offer this coverage for non-medical reasons.

“Having a high-powered career and children is still a very hard thing to do,” said Brigitte Adams, an egg-freezing advocate and founder of the patient forum Eggsurance.com. By offering this benefit, companies are investing in women, she said, and supporting them in carving out the lives they want.

In a vacuum, this policy seems like it could only be a good thing. If women want or need to freeze their eggs so that they can get pregnant at a later date, it’s great that huge companies like Facebook and Apple want to cover those procedures.

But is this really a “game-changing” perk, as NBC says? And if it is, what does that say about the state of things for women in the corporate and tech world? What does it mean that Facebook and Apple will spend hundreds of thousands of dollars to help women freeze their eggs so that they can put off pregnancy in favor of their careers?

With notoriously male-dominated Silicon Valley firms competing to attract top female talent, the coverage may give Apple and Facebook a leg up among the many women who devote key childbearing years to building careers. Covering egg freezing can be viewed as a type of “payback” for women’s commitment, said Philip Chenette, a fertility specialist in San Francisco.

This is probably great news for some women, but is painting it as the way to “attract top female talent” really the statement tech wants to be making? Doesn’t it suggest that career and child-rearing are mutually exclusive, and that the reason women don’t enter the field in the first place, or leave, is because they want to have children? Studies have shown that’s just not true in many cases. More women seem to leave because of the hostile culture of the corporate world, and when they do cite children as the reason, it’s often because of the stubborn patriarchal ideal that the mother should take on the majority of the childcare responsibilities.

Offering to cover the cost of freezing eggs is great, and I’m definitely not suggesting Facebook and Apple reverse course on this. But making such a commitment to what is a relatively uncommon and invasive procedure and suggesting that it’s some kind of solution or salve for the huge “woman problem” in the industry just feels wrong.

What might be better? I have a few ideas:

  • Better maternity and paternity leave policies and flexible work schedules
  • A campaign to combat the idea that pregnancy and motherhood somehow render women less capable of doing their jobs
  • A dedicated effort to address the sexism and harassment that are far too common in the tech industry
  • An honest, empathetic statement of acknowledgment of the other reasons women may leave the industry and a concerted effort aimed at fixing those problems

I’m happy for the women in tech who really want to freeze their eggs and now will have the support of their employers. But is this a “game-changer” for anyone else? I’d argue no.