Droning on

Hello! Good morning. Let’s talk about drones.

Earlier this year, not long after Christmas, my husband and I went with one of our best friends to a historic village in North Carolina. We hadn’t been there since we were kids and wanted to experience it as adults. (See: walking into a building labeled “tavern” and walking right back out, dejected that there were no actual beers to be had.)

About halfway through the day, we exited an old building into a side yard just in time to see a drone taking off. The guy manning it was just a few feet away. He launched it off the ground and into the air, and I had two simultaneous thoughts:

“Wow, he’s gonna get some awesome photos of this place” and

“Wow, that sound is really, REALLY annoying, especially here!”

Such is the conundrum of life in 2019. There are so many tech things that make our lives cooler, easier, or safer while also being annoying, intrusive, or otherwise harmful. In the past I don’t think the developers of these technologies have done a great job anticipating future issues or needs. I do think that’s changing. But in the meantime, these are the kinds of things we have to deal with (and frankly, we probably will always have some degree of this issue).

I was recently reporting a piece about medical drones (coming soon) and came across this study that determined drones to be the most annoying of all vehicles. And that’s saying a lot, considering we also have motorcycles and 18-wheelers below them and airplanes above.

From a great New Scientist piece on the study:

“We didn’t go into this test thinking there would be this significant difference,” says study coauthor Andrew Christian of NASA’s Langley Research Center, Virginia. It is almost unfortunate the research has turned up this difference in annoyance levels, he adds, as its purpose was merely to prove that Langley’s acoustics research facilities could contribute to NASA’s wider efforts to study drones.

It’s a bummer all around, really. The study found that people (only 38 people, but still) rated a drone’s buzz as roughly as annoying as the sound of a car twice as close. These people didn’t even know what they were listening to, by the way, so we can’t just assume they’re anti-drone.
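For a sense of scale, here’s a rough back-of-envelope; the free-field, inverse-square spreading assumption is mine, not anything the study spells out. Halving your distance to a simple point source raises its sound pressure level by about 6 dB, so “a car twice as close” is roughly a 6 dB louder car.

```python
import math

# Back-of-envelope only: assumes an idealized point source in free field,
# which real traffic and real drones are not. Sound pressure level changes
# by 20 * log10(r_far / r_near) when you move closer to the source.
def level_change_db(distance_ratio: float) -> float:
    """dB increase when the distance to the source shrinks by this ratio."""
    return 20 * math.log10(distance_ratio)

print(f"A car twice as close is about +{level_change_db(2):.1f} dB")  # ~6.0 dB
```

In other words, under that simplified assumption, listeners treated the drone as though it carried roughly an extra 6 dB of annoyance compared with a car at the same distance.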

The piece I’ve been reporting is about the use of drones to save time and money moving blood samples and medical supplies. I wonder if people might find drones less annoying if they knew they were up there to help people? I hope that research is being done somewhere (I would not be surprised, as NASA and the FAA are doing a lot of work to study drone impact right now).

But even if we can get used to the sound of drones, or assuage ourselves with the thought that some of them are saving lives, we still have to look at them. It bugged me to see a black plastic mini-spaceship buzzing around a historic village, but it didn’t scare me or make me feel unsafe. Driving down the road and suddenly seeing a flock of them overhead, and not necessarily knowing their purpose… would be a different story.

Power drills vs. dental drills

At the beginning of this year I went to the dentist for the first time in… a while, and learned I had five cavities. Five! I brush my teeth – I even floss! – but somehow three of my old fillings had failed me and two new ones were needed. This wouldn’t have been that big of a deal except… and now you’re really going to judge me… I am afraid of Novocaine.

Now, let me say as clearly as I can: this is a 95% irrational fear. Novocaine is extremely safe and I trust my dentist to use it properly, and I am even fairly certain if I used it nothing bad would happen. But because I have an anxiety brain, this was my thought process upon learning I needed five fillings:

Shit, that’s going to be expensive and take a while. Also, crap, they’ll give me Novocaine, and that has the potential to cause heart palpitations, and I’ll probably already be having them because I’ll be nervous, and that could create a dangerous situation, oh shit shit how do I get around this?

Again, Novocaine is extremely safe. Irregular heartbeat is a very rare potential side effect associated with many medications – it’s part of the generic list of allergic reactions a step above itchiness and swelling. But since I’ve dealt (rather poorly, I’ll admit) with heart palpitations caused by stress and anxiety for years, I am hyper-vigilant about avoiding situations that might cause them. So, how did I get around it? I opted out. I said no to the Novocaine and sucked it up. And yeah, it hurt. I spaced the work out over three visits to spread out both the cost and the pain. In the end, each procedure took less time than it would have with numbing, and I was able to eat and drink right afterward. Most of all, I survived (which of course I would have regardless). The dentists and hygienists kept calling me a badass and saying how well I handled the pain, but I wasn’t proud; I was honestly a little embarrassed, and exhausted, and sore.

As I waited in the chair for each procedure to start, I stared at a flat screen monitor. The first time it scrolled through pictures of cute kids and puppies (including a truly awesome slideshow of dogs that look like other things); on my second visit it was a silent presentation about my dentist’s trip to Haiti, complete with facts about the country; and on the third and final visit I was treated to calming videos of waves crashing on sand.

During each procedure, there was a moment or two when I thought I couldn’t handle any more – when the drill would hit a specific spot on the tooth that was just too close to a nerve. During those times, I had the old calming television standby to distract me from another monitor on the ceiling: HGTV. (I have seen this in at least one other dental office and several specialists’ offices – there’s just something about Chip and Joanna…) And I have to tell you, these things worked. In the moments I would have gritted my teeth at the pain (which was obviously impossible) I instead focused all of my energy and attention on the wall demo or sconce selection happening on the ceiling screen. And it worked, in the sense that avoiding a full-on panic attack or biting off my dentist’s fingers = “working.” Which… I’ll take it!

It’s not shiplap that helps with pain and anxiety in the dental chair – it’s that shift in energy and attention. And it still works on me even though I know this. And I actually found myself thinking, as I left the dental office for the last time (for a while, at least… I hope…), that I really wish more medical offices had this kind of programming. Not just HGTV, but slideshows and silent videos made with the explicit goal of helping patients calm down. Not just cheesy quotes about serenity, but soothing images that are scientifically correlated with lower blood pressure and cortisol levels. Imagine if more clinicians acknowledged that we might be anxious, and rather than ignoring that or explaining it away, just empathized with it and tried to set a calmer tone. This sort of thing is relatively common in dentistry and in pediatrics; imagine if our anxiety and potential medical trauma were taken more seriously even in cardiology, physical therapy, dermatology, and other offices! I think it’s something to work toward.


We like to move it

Virtual reality is often referred to as an “empathy machine,” a term coined in 2015 by tech entrepreneur and artist Chris Milk in a TED Talk. The idea is that while reading about something, or even watching a documentary, can be moving, there’s something uniquely intimate about virtual reality. It puts you “in” a situation in a way that other media doesn’t.

I’ve written before about how this idea has taken hold in service of social causes, and how “future tech” that’s really right around the corner could take empathy to a whole new level. Research is ongoing into what really happens when people put on VR headsets. Do they really feel more empathy for the characters they’re “watching,” or for people who experience the things they “experience” in VR? Some evidence shows that the answer is yes, but feedback about overwhelm and empathy fatigue after VR experiences is also common.

A couple of weeks ago Jeremy Bailenson, one of the foremost experts on VR, wrote in WIRED about some new evidence that the most effective way to create empathy through a VR experience is to make the user move around.

Bailenson, a professor of communication at Stanford, conducted a study in 2013 in which participants simulated being colorblind. Half used their imagination, while the other half used VR. They were then asked to lift up and sort objects in VR. The results showed that those who had experienced colorblindness in VR had a much harder time completing the task, and after the study, they spent a lot more time helping others when asked to do so.

The next study Bailenson plans to release will show a correlation between moving around a virtual coral reef and subjects’ desire to know more about ocean conservation.

He goes into a lot more detail in the piece, which you should read! This strategy of making people move around while having a VR experience might be the answer to a lot of criticisms of empathy-focused VR. It makes sense to me just from a muscle-memory standpoint, but it will be interesting to see what the data shows about how VR, movement, and empathy are actually connected in our brains.

Empathy is both given & made

When the subject of empathy comes up, there’s often a debate about whether we’re born with it, or whether it’s something we learn. As with most things, the answer is probably not at either end of the spectrum – it’s most likely in the middle.

In the past few months, I’ve been researching and writing about both ends.

For Woolly, I wrote about the empathy movement in podcasting, where a growing collection of shows aims to get people to listen to (and have) tough conversations. I wrote about my personal retreat into podcasts (and away from cable news and social media) after the 2016 presidential election, and how some of them – especially With Friends Like These – helped me find empathy where I didn’t expect it.

Then I wrote for Vitals, Lifehacker’s health vertical, about the newest development in the search for an empathy gene. Researchers have figured out that at least some of individuals’ differences in empathy can be explained by DNA, so we might inherit our empathy levels, and disorders characterized by low empathy, like schizophrenia, might have a genetic cause. But they’re still trying to find out how. This latest study didn’t come up with any major revelations, but it’s a step forward, and it also validated a lot of previous findings.

That’s all for now. Apologies for being so absent these past few months. I moved from New York back to North Carolina and have been settling in. Now that things are starting to feel normal, I’ll be back to blogging more regularly!

Poking the empathy part of the brain

Hello and Happy New Year! Hopefully by now you’ve settled into 2018 and dug out of the frozen tundra (if you’re in the U.S.). Here in Brooklyn we have reached the “grey slush and surprise ice patches” phase of winter and it’s as gross as ever. I could really write a whole post about the disgusting things that happen to the sidewalks here when the snow melts, but I’ll spare you… Instead, let’s talk about something else that seems kind of obvious but is actually super interesting: empathy’s role in moral dilemmas like, you know, whether or not to kill someone.

In a recent UCLA study, researchers found that they could guess whether a person would harm or kill another person based on how their brains reacted when they saw someone else in pain.

The researchers showed 19 volunteers two videos in which a hand was either touched by a cotton swab or stabbed with a needle. This is a pretty common way to measure empathy, and that’s what these scientists did: they used an fMRI machine to measure brain activity in the places where this specific type of empathy – acknowledging someone else’s pain and feeling for them – is thought to reside. They also apparently analyzed the volunteers’ “mirror neurons,” which are neurons that are said to fire both when a person feels something and when they see someone else appearing to experience that same feeling. (There’s some controversy about mirror neurons.)

In addition to showing them the videos, the researchers asked the participants some common questions aimed at determining whether a person would hurt or kill someone else: there’s the “crying baby during wartime” question, for example, when you’re meant to say whether you would suffocate a baby whose cries might give away your location to the enemy. They also asked whether the volunteers would torture someone in order to prevent a bomb from killing some people, or harm research animals in order to cure AIDS.

The researchers guessed that those with more empathy-related activity happening in their brains during the needle video would be less likely to hurt the crying baby, and they turned out to be correct, at least with this small sample size. They didn’t find a correlation between brain activity and willingness to hurt someone else to help a larger number of people, however. The reason, they argued, may be that those decisions are a lot more complex.

“It would be fascinating to see if we can use brain stimulation to change complex moral decisions through impacting the amount of concern people experience for others’ pain,” said Marco Iacoboni, director of the Neuromodulation Lab at UCLA and one of the leading mirror neuron experts, in a statement. “It could provide a new method for increasing concern for others’ well-being.”

I highlighted just a few of the reasons this research does not suggest causation (and, as you hopefully know, research rarely does). But I’m actually more interested in Iacoboni’s quote. I’ve been researching and writing a lot over the past year about the different ways people are trying to increase empathy in the human brain. Most of the time, these stories involve tech like virtual reality or Fitbit-like gadgets. But Iacoboni’s suggestion of using brain stimulation to potentially make people more empathetic decision-makers doesn’t seem that far-fetched…though it does seem kind of like taking the easy way out. I’m sure he means this mostly (if not completely) for academic purposes, but I wouldn’t put it past tech companies to find a quick way to capitalize on “instant empathy.” We already have brain stimulation gadgets that are meant to help with stress and anxiety and a host of other things.

There are a couple of concerns here. First is regulation to keep people from accidentally harming themselves or tech companies from doing nefarious things with the new level of personal brain data they might collect. Second is kind of the opposite: Do we want the government to potentially have the ability to stimulate our brains to change how we make complex moral decisions? I don’t mean to sound like a conspiracy theorist! But when so much sci-fi stuff is becoming real life, it seems worth asking these questions.

Something to keep an eye on, for sure.

When the robots do it better…

[Image: Ellie, the virtual interviewer. US soldiers and veterans revealed significantly more post-traumatic stress symptoms to a virtual interviewer than through a standard or anonymous Post-Deployment Health Assessment survey. Credit: USC Institute for Creative Technologies]

It’s clear that PTSD is a major problem among American war veterans. According to the U.S. Department of Veterans Affairs, symptoms of PTSD affect almost 31 percent of Vietnam veterans, up to 10 percent of Gulf War veterans, and 11 percent of veterans who served in Afghanistan. But, as with many mental health issues, those numbers might be off because there is still a stigma attached. Veterans Affairs can’t count — or help — the soldiers who don’t feel comfortable coming forward. But what if instead of talking to people who might affect their careers, they could talk to robots?

Not, like, Bender robots, but artificial intelligence presented as kind strangers on a computer screen. In a recent study that used this technology, the AI made a big difference. Researchers at the University of Southern California found that service members who volunteered to try this out were more open about their symptoms with the “virtual human” they spoke to than they were when filling out a military-required survey. Gale Lucas, who led the research, thinks this is likely because when PTSD symptoms are conveyed via the military survey (or directly to a military psychiatrist) they must be reported, which can affect service members’ career prospects. Speaking to the AI, known as “Ellie,” felt more anonymous.

“These kinds of technologies could provide soldiers a safe way to get feedback about their risks for post-traumatic stress disorder,” Lucas said in a statement. “By receiving anonymous feedback from a virtual human interviewer that they are at risk for PTSD, they could be encouraged to seek help without having their symptoms flagged on their military record.”

So, can AI provide potential life-saving empathy that real humans can’t?

Well, there’s (at least one) catch. Ellie makes soldiers feel comfortable, safe, and understood, but she is currently operated by researchers. If and when she becomes integrated into the military health system, she might lose her real magic: anonymity.

Joseph Hayes, a psychiatrist at University College London, told Newsweek:

“For an intervention to be possible ultimately, the disclosure would have to be shared with the same commanding officers who have traditionally received the results of the service members PDHA, and entered into their military health record. Once this is made explicit, would disclosure reduce to levels seen previously? If so, it is a culture change (reducing public stigma–within the military and more broadly) which is truly going to impact on disclosure and provision of appropriate treatment.”

Lucas thinks her team can get around this by requiring Ellie to alert humans only if a service member threatens to hurt him- or herself or someone else, and leaving it up to the individual whether to follow their session with the AI with a visit to a real doctor.

The jury’s out on the ethics and implementation, but this is one more step toward empathetic AI, which is… well, both exciting and terrifying!

To learn more about this technology, check out the USC Institute for Creative Technologies website.


Driverless empathy

Algorithms and big data affect our lives in so many ways we don’t even see. These things that we tend to believe are there to make our lives easier and more fair also do a lot of damage, from weeding out job applicants based on unfair parameters that ignore context to targeting advertisements based on racial stereotypes. A couple of weeks ago I got to see Cathy O’Neil speak on a panel about her book Weapons of Math Destruction, which is all about this phenomenon. Reading her book, I kept thinking about whether a more explicit focus on empathy on the part of the engineers behind these algorithms might make a difference.

The futurist and game creator Jane McGonigal suggested something similar to me when I spoke to her for this story earlier this year. We talked about Twitter, and how some future-thinking and future-empathizing might have helped avoid some of the nasty problems the platform is facing (and facilitating) right now. But pretty soon Twitter may be the least of our worries. Automation is, by many accounts, the next big, disruptive force, and our problems with algorithms and big data are only going to get bigger as this force expands. One of the most urgent areas of automation that could use an empathy injection? Self-driving cars.

I’ll be honest – until very recently I didn’t give too much thought to self-driving cars as part of this empathy and tech revolution that’s always on my mind. I thought of them as a gadget that may or may not actually be available at scale over the next decade, and that I may or may not ever come in contact with (especially while I live in New York City and don’t drive). But when I listened to the recent Radiolab episode “Driverless Dilemma,” I realized I’d been forgetting that even though humans might not be driving these cars, humans are deeply involved in the creation and maintenance of the tech that controls them. And the decisions those humans make could have life and death consequences.

The “Driverless Dilemma” conversation is sandwiched around an old Radiolab episode about the “Trolley Problem,” which asks people to consider whether they’d kill one person to save five in several different scenarios. You can probably imagine some version of this while driving: suddenly there are a bunch of pedestrians in front of you that you’re going to hit unless you swerve, but if you swerve you’ll hit one pedestrian, or possibly kill yourself. As driverless technology becomes more common, cars will be making these split-second decisions. Except it’s not really the cars making the decisions, it’s people making them, probably ahead of time, based on a whole bunch of factors that we can only begin to guess at right now. The Radiolab episode is really thought-provoking and I highly recommend listening to it. But one word that didn’t come up that I think could play a major role in answering these questions going forward is, of course, empathy.

When I talked with Jane McGonigal about Twitter, we discussed what the engineers could have done to put themselves in the shoes of people who might either use their platform for harassment or be harassed by trolls. Perhaps they would then have taken measures to prevent some of the abuse that happens there. One reason that may not have happened is that those engineers didn’t fit into either of those categories, so it didn’t occur to them to imagine those scenarios. Some intentional empathy, like what design firms have been doing for decades (“imagine yourself as the user of this product”), could have gone a long way. This may also be the key when it comes to driverless cars. Except the engineers behind cars’ algorithms will have to consider what it’s like to be the “driver” as well as other actual drivers on the road, cyclists, pedestrians, and any number of others. And they’ll have to imagine thousands of different scenarios. An algorithm that tells the car to swerve and kill the driver to avoid killing five pedestrians won’t cut it. What if there’s also a dog somewhere in the equation? What if it’s raining? What if the pedestrians aren’t in a crosswalk? What if all of the pedestrians are children? What if the “driver” is pregnant? Car manufacturers say these are all bits of data that their driverless cars will eventually be able to gather. But what will they do with them? Can you teach a car context? Can you inject its algorithm with empathy?
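Just to make the “won’t cut it” point concrete, here’s a deliberately naive sketch; the scenario fields and the rules are hypothetical, my own invention, not anything from Radiolab or from any carmaker. Every new piece of context (a dog, rain, a crosswalk, children, a pregnant “driver”) means another hand-written branch, and another value judgment somebody has to make ahead of time.

```python
from dataclasses import dataclass

# A deliberately naive, hypothetical sketch. This is NOT how any real
# autonomous vehicle decides; it only shows how fast hard-coded rules
# multiply once context enters the picture.
@dataclass
class Scenario:
    pedestrians_ahead: int
    people_in_car: int
    in_crosswalk: bool
    children_present: bool
    raining: bool

def should_swerve(s: Scenario) -> bool:
    # Rule 1: the plain utilitarian head count.
    if s.pedestrians_ahead > s.people_in_car:
        # Rule 2: but a wet road makes swerving riskier for bystanders...
        if s.raining:
            return False  # ...or should it still swerve? Who decides?
        return True
    # Rule 3: special-case crosswalks and children, and so on, forever.
    return s.children_present and s.in_crosswalk

print(should_swerve(Scenario(5, 1, True, False, False)))  # True
print(should_swerve(Scenario(5, 1, True, False, True)))   # False, because rain?
```

Even this toy version is already making moral calls (rain trumps the head count?) that nobody would accept as final, which is exactly the argument for getting some intentional, design-style empathy into the rooms where the real rules get written.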