Is AOC right about AI?

Conservative Twitter is up in arms today over Rep. Alexandria Ocasio-Cortez saying at an MLK Day event that algorithms are biased. (Of course “bias” has been translated into “racism.”) The general response from the right has been, “What a dumb socialist! Algorithms are run by math. Math can’t be racist!” And from the tech experts on Twitter: “Well, actually….”

I have to put myself in the latter camp. Though I’m not exactly a tech expert, I’ve been researching the impact of technology like AI and algorithms on human well-being for a couple of years now, and the evidence is pretty clear: people have bias, people make algorithms, so algorithms have bias.

When I was a kid, my dad had this new-fangled job as a “computer programmer”. The most vivid and lasting evidence of this vocation was huge stacks of perforated printer paper and dozens upon dozens of floppy disks. But I also remember him saying this phrase enough times to get it stuck in my head: “garbage in, garbage out.” This phrase became popular in the early computer days because it was an easy way to explain what happened when flawed data was put into a machine – the machine spit flawed data out. This was true when my dad was doing…whatever he was doing… and when I was trying to change the look of my MySpace page with rudimentary HTML code. And it’s true with AI, too. (Which is a big reason we need the tech world to focus more on empathy. But I won’t go on that tangent today.)

When I was just starting work on my book, I read Cathy O’Neil’s Weapons of Math Destruction (read it.), which convinced me beyond any remaining doubt that we had a problem. Relying on algorithms that have little to no oversight, and that are entirely susceptible to contamination by human bias – conscious or not – to make decisions for us is not a liberal anxiety dream. It’s our current reality. It’s just that a lot of us – and I’ll be clear that here I mean a lot of us white and otherwise nonmarginalized people – don’t really notice.

Maybe you still think this is BS. Numbers are numbers, regardless of the intent/mistake/feeling/belief of the person entering them into a computer, you say. This is often hard to get your head around when you see all bias as intentional. I get that; I’ve been there. So let me give you some examples:

There are several studies showing that people with names that don’t “sound white” are often passed over for jobs in favor of more “white-sounding” names. It reportedly happens to women, too. A couple of years ago, Amazon noticed that the algorithm it had created to sift through resumes was biased against women. It had somehow “taught itself that male candidates were preferable.” Amazon tweaked the algorithm, but eventually gave up on it, claiming it might find other ways to skirt neutrality. The algorithm wasn’t doing that with a mind of its own, of course. Machine-learning algorithms, well, learn, but they have to have teachers, whether those teachers are people or gobs of data arranged by people (or by other bots that were programmed by people…). There’s always a person involved, is my point, and people are fallible. And biased. Even unconsciously. Even IBM admits it. This is a really difficult problem that even the biggest tech companies haven’t yet figured out how to fix. This isn’t about saying “developers are racist/sexist/evil”; it’s about accounting for the fact that all people have biases, and even if we try to set them aside, they can show up in our work. Especially when those of us doing that work happen to be a pretty homogeneous group. One argument for more diversity in tech is that if the humans making the bots are more diverse, the bots will know how to recognize and value more than one kind of person. (Hey, maybe instead of trying to kill us the bots that take over the world will be super woke!)
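To make that mechanic concrete, here’s a deliberately tiny sketch in Python. Everything in it is invented – the resumes, the outcomes, the scoring rule – and it is nothing like Amazon’s actual system. The point is only to show how a model trained on a historically skewed hiring record learns to penalize a word like “women’s” without anyone telling it to.

```python
# Toy illustration of "garbage in, garbage out" in a hiring model.
# The training data below is invented; the point is the mechanic, not the numbers.
from collections import defaultdict

# Historical (resume text, was hired?) pairs from a male-skewed hiring record.
history = [
    ("software engineer chess club captain", True),
    ("software engineer rugby team", True),
    ("software engineer women's chess club captain", False),
    ("software engineer women's rugby team", False),
]

# "Training": score each word by how often it shows up on hired vs. rejected resumes.
scores = defaultdict(float)
for text, hired in history:
    for word in text.split():
        scores[word] += 1.0 if hired else -1.0

def rank(resume):
    """Score a new resume using the learned word weights."""
    return sum(scores[word] for word in resume.split())

# Two resumes, identical except for one gendered word:
print(rank("software engineer chess club captain"))          # 0.0
print(rank("software engineer women's chess club captain"))  # -2.0 -- penalized
```

Nobody typed “prefer men” anywhere in that code. The bias rode in on the historical data.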

Another example: In 2015, Google came under fire after its Photos app identified several black people as gorillas. There’s no nice way to say that. That’s what happened. The company apologized and tried to fix it, but the best it could do at the time was to remove “gorilla” as an option for the AI. So what happened? Google hasn’t been totally clear on the answer, but image recognition AI works by learning to categorize lots and lots of photos. Technically someone could have trained it to label black people as gorillas, but perhaps more likely is that the folks training the AI in this case simply didn’t consider this potential unintended consequence of letting an imperfect image recognition bot out into the world. (And, advocates argue, maybe more black folks on the developer team could have prevented this. Maybe.) Last year a spokesperson told Wired: “Image labeling technology is still early and unfortunately it’s nowhere near perfect.” At least Google Photos lets users report mistakes, but for those who are still skeptical, note: that means even Google acknowledges mistakes are being – and will continue to be – made in this arena.
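Google never fully explained the failure, so here’s only a toy sketch of the general mechanism researchers point to: underrepresentation in training data. The “photos” below are fake two-number feature vectors I made up, and the classifier is a bare-bones nearest-neighbor lookup, not anything Google ships.

```python
# Toy sketch of underrepresentation (invented data, not Google's system).
# "Photos" are fake 2-number feature vectors; the classifier copies the label
# of the nearest training example.

training = [
    # lots of examples of faces that all look alike to the model...
    ((0.90, 0.85), "face"),
    ((0.88, 0.80), "face"),
    ((0.92, 0.83), "face"),
    # ...and plenty of non-face photos
    ((0.20, 0.25), "not a face"),
    ((0.25, 0.15), "not a face"),
    ((0.30, 0.35), "not a face"),
]

def classify(photo):
    """Return the label of the training example closest to this photo."""
    dist = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(training, key=lambda example: dist(example[0], photo))[1]

# A real face whose (made-up) features fall outside the narrow region the
# training set covered gets the wrong label -- not malice, missing data.
print(classify((0.45, 0.50)))  # "not a face"
```

Scale that up to millions of real photos and labels, and you get a system that is confident about the kinds of people it has seen plenty of and badly wrong about everyone else.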

One last example, because it’s perhaps the most obvious and also maybe the most ridiculous: Microsoft’s Twitter bot, Tay. In 2016, this AI chatbot was unleashed on Twitter, ready to learn how to talk like a millennial and show off Microsoft’s algorithmic skills. But almost as soon as Tay encountered the actual people of Twitter – all of them, not just cutesy millennials speaking in Internet code but also unrepentant trolls and malignant racists – her limitations were put into stark relief. In less than a day, she became a caricature of a violent, anti-Semitic racist. Some of the tweets seemed to come out of nowhere, but some were thanks to a nifty feature in which people could say “repeat after me” to Tay and she would do just that. (Who ever would have thought that could backfire on Twitter?) Microsoft deleted Tay’s most offensive tweets and eventually made her account private. It was a wild day on the Internet, even for 2016, but it was quickly forgotten. The story bears repeating today, though, because clearly we are still working out the whole bot-human interaction thing.
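If you’re wondering how a chatbot “learns” to say something awful, here’s a crude, invented sketch of the mechanic (again, not Microsoft’s code): a bot that adds whatever users say to its vocabulary, with no filter between the internet and its mouth.

```python
# Toy sketch (invented, not Microsoft's code): a chatbot that learns by storing
# everything it hears and reusing it verbatim, with no moderation step.
import random

class EchoLearner:
    def __init__(self):
        self.vocabulary = ["hello!", "tell me more"]

    def hear(self, message):
        # The "repeat after me" problem: every incoming phrase is kept for reuse.
        self.vocabulary.append(message)

    def speak(self):
        return random.choice(self.vocabulary)

bot = EchoLearner()
bot.hear("puppies are great")               # harmless input...
bot.hear("[something vile a troll typed]")  # ...and, to the bot, so is this
print(bot.speak())  # sooner or later, the troll's words come back out
```

Tay’s actual model was far more sophisticated than this, but the lesson is the same: a bot that learns from people inherits the worst of them unless someone builds in a filter.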

To close, I’ll just leave you with AOC’s words at the MLK event. See if they still seem dramatic to you.

“Look at – IBM was creating facial recognition technology to target, to do crime profiling. We see over and over again, whether it’s FaceTime, they always have these racial inequities that get translated because algorithms are still made by human beings, and those algorithms are still pegged to those, to basic human assumptions. They’re just automated, and automated assumptions, it’s like if you don’t fix the bias then you’re automating the bias. And that gets even more dangerous.”

(This is the “crime profiling” thing she references, by the way. I’m not sure where the FaceTime thing comes from but I will update this post if/when I get some context on that.)

Update: Thanks to the PLUG newsletter (which I highly recommend) I just came across this fantastic video that does a wonderful job of explaining the issue of AI bias and diversity. It includes a pretty wild example, too. Check it out.


When the robots do it better…

[Image: Ellie, the virtual interviewer. US soldiers and veterans revealed significantly more post-traumatic stress symptoms to a virtual interviewer than through a standard or anonymous Post-Deployment Health Assessment survey. Credit: USC Institute for Creative Technologies]

It’s clear that PTSD is a major problem among American war veterans. According to the U.S. Department of Veterans Affairs, symptoms of PTSD affect almost 31 percent of Vietnam veterans, up to 10 percent of Gulf War veterans, and 11 percent of veterans who served in Afghanistan. But, as with many mental health issues, those numbers are probably an undercount, because there is still a stigma attached to reporting symptoms. Veterans Affairs can’t count — or help — the soldiers who don’t feel comfortable coming forward. But what if instead of talking to people who might affect their careers, they could talk to robots?

Not, like, Bender robots, but artificial intelligence presented as kind strangers on a computer screen. In a recent study that used this technology, the AI made a big difference. Researchers at the University of Southern California found that service members who volunteered to try it out were more open about their symptoms with the “virtual human” they spoke to than they were when filling out the required military survey. Gale Lucas, who led the research, thinks this is likely because when PTSD symptoms are conveyed via the military survey (or directly to a military psychiatrist), they must be reported, which can affect service members’ career prospects. Speaking to the AI, known as “Ellie,” felt more anonymous.

“These kinds of technologies could provide soldiers a safe way to get feedback about their risks for post-traumatic stress disorder,” Lucas said in a statement. “By receiving anonymous feedback from a virtual human interviewer that they are at risk for PTSD, they could be encouraged to seek help without having their symptoms flagged on their military record.”

So, can AI provide potential life-saving empathy that real humans can’t?

Well, there’s (at least one) catch. Ellie makes soldiers feel comfortable, safe, and understood, but she is currently operated by researchers. If and when she becomes integrated into the military health system, she might lose her real magic: anonymity.

Joseph Hayes, a psychiatrist at University College London, told Newsweek:

“For an intervention to be possible ultimately, the disclosure would have to be shared with the same commanding officers who have traditionally received the results of the service members PDHA, and entered into their military health record. Once this is made explicit, would disclosure reduce to levels seen previously? If so, it is a culture change (reducing public stigma–within the military and more broadly) which is truly going to impact on disclosure and provision of appropriate treatment.”

Lucas thinks her team can get around this by requiring Ellie to alert humans only if a service member threatens to hurt themselves or someone else, and by leaving it up to the individual whether to follow their session with the AI with a visit to a real doctor.

The jury’s out on the ethics and implementation, but this is one more step toward empathetic AI, which is… well, both exciting and terrifying!

To learn more about this technology, check out the USC Institute for Creative Technologies website.