ADRIAN MA, HOST:
Two landmark verdicts this week could reshape the way social media works. In Los Angeles, a jury found Meta and Google negligent in designing apps like YouTube and Instagram to be addictive and harmful to children's mental health. In a separate case, a jury in New Mexico also concluded that Meta failed to protect young people from child predators. Both Google and Meta said they'll appeal these cases, saying they disagree with the verdicts. And we should say that Google is a financial supporter of NPR, but we cover them like any other company.
With that, we're joined now by Aza Raskin, a co-founder of the Center for Humane Technology, which is a nonprofit that highlights the negative effects of social media. He was also the lead designer for Mozilla Firefox, and he's spoken publicly about regretting inventing the Infinite Scroll technology there. Raskin also testified at the New Mexico trial as a fact witness. Welcome to the program.
AZA RASKIN: It's such a pleasure to be here. This is so important.
MA: Just start with your testimony in that New Mexico case. Why were you called to testify, and what information did you provide to the jurors?
RASKIN: So back in 2006, I invented Infinite Scroll. And what is Infinite Scroll? It's that thing where, you know, when you're scrolling on Instagram or Facebook, you never reach the bottom. It always loads more content. That's Infinite Scroll. So I invented that technology before social media really got going, as a technology to help people.
And what I was blind to was that, despite my good intentions, in technology, incentives eat intentions. And what I was explaining to the jury is that I know perfectly well how Infinite Scroll works to remove a stopping cue so you keep scrolling - it's sort of like, if your wineglass filled up without you looking at it, you would drink much more because your brain doesn't wake up when you reach the bottom of your wineglass. But even though I knew perfectly how this technology worked, I personally was still susceptible to its effects and found myself, you know, disappearing to the bathroom in the middle of a dinner to scroll. I actually had to write software to break my own addiction. And it was really important for the jury to understand that this is not a fair fight.
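[Editor's note: The "stopping cue" removal Raskin describes boils down to a simple check that runs as the user scrolls. A minimal sketch of the pattern - all names and numbers here are illustrative, not any platform's actual code:]

```javascript
// Minimal sketch of the infinite-scroll pattern: whenever the reader
// nears the bottom of the feed, another batch of posts is appended,
// so the bottom - the natural stopping cue - never arrives.

// True when the reader is within `threshold` pixels of the end.
function shouldLoadMore(scrollTop, viewportHeight, contentHeight, threshold = 400) {
  return scrollTop + viewportHeight >= contentHeight - threshold;
}

// Simulated feed: each time the check fires, the feed grows,
// pushing the end of the page further away.
let contentHeight = 2000;
function onScroll(scrollTop, viewportHeight = 800) {
  if (shouldLoadMore(scrollTop, viewportHeight, contentHeight)) {
    contentHeight += 1000; // append another batch of posts
  }
  return contentHeight;
}
```

[In a real page, the same check is typically wired to a scroll or intersection event; the effect is that the end of the content always retreats ahead of the reader.]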
MA: What's the evidence that social media companies are knowingly making products that are designed to be addictive or even harmful?
RASKIN: In this case, discovery was incredible. So it's very clear from internal memos and emails that everyone from executives down knew exactly what they were doing and chose to do it anyway because these are engagement-based companies, and doing anything that drops engagement and number of users lowers their stock price, lowers bonuses for people and causes their competitors to be able to out-compete them.
MA: So the evidence is in the internal conversations about how this will essentially affect the bottom line.
RASKIN: Yes. That's exactly right.
MA: When you put together the case you were part of with the California one, how significant are these verdicts?
RASKIN: These verdicts are quite significant. The courts have decided that actually it's really about the design of these systems, and they can be held responsible for designing systems that use supercomputers to try to optimize for user engagement, which really just means addiction.
And there are two parts to the punishment the companies are getting. The first is sort of a monetary fine, and the second is injunctive relief, where the court can come up with specific design changes or rules that the companies have to follow.
MA: Could there be unintended consequences? - for instance, for much smaller companies or apps or websites?
RASKIN: I think there are smart ways for the courts to make rulings that particularly go after the incentives of these companies. The incentives for the social media companies are, of course, a race to monetize eyeballs. It's a race to the bottom of the brainstem. So if we want the courts to do something, they need to change incentives to change the outcome.
Just to give one really simple example, the way that I personally broke out of my addiction to Infinite Scroll is I wrote myself software that just added a little bit of friction. So there are many different kinds of injunctive relief the court could impose. But one of them might be, you know, Meta, until you get your act together, we're going to start adding sub-perceptual amounts of delay to your pages, because that'll decrease engagement and give users the time to have their brains catch up with their impulses and turn an unfair fight back into a fair fight.
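[Editor's note: One simple way to add the kind of friction Raskin describes is to rate-limit how fast new content can load, so the feed can never outrun the reader. A hypothetical sketch - the wrapper, names, and numbers are illustrative, not a description of any actual remedy:]

```javascript
// Hypothetical "friction" wrapper: refuses to load a new batch of
// content until at least `delayMs` milliseconds have passed since the
// last load - a small speed bump between the impulse and the feed.
// The clock is injectable (`now`) so the behavior is easy to test.
function withFriction(loadMore, delayMs = 250, now = Date.now) {
  let lastLoad = -Infinity;
  return () => {
    const t = now();
    if (t - lastLoad < delayMs) return null; // too soon: blocked
    lastLoad = t;
    return loadMore();
  };
}
```

[A literal page delay, as described in the interview, would slow every load rather than block the fast ones; this rate-limit version is just the easiest form of the same idea to demonstrate.]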
MA: That's so interesting. So, like, just a tiny fraction of a second, like, slowing the feed - you feel like that could strike a healthier balance.
RASKIN: Exactly. It's literally like adding speed bumps to a road. It doesn't remove any freedom. It just says, maybe go a little bit slower, give you a little more time to think.
MA: You know, I think a lot of people know intuitively, even without having seen it sort of affirmed in court, that social media can be bad for our mental health. But it's interesting how, even though that's a common opinion right now, we're seeing another technology move into the spotlight, and that's AI. How should we think about those parallels?
RASKIN: Yeah. Well, now the race to attention becomes the race to intimacy. That is, there is massive market incentive to have your company's AI occupy the chief intimate relational spot in someone's life, especially kids. Any moment that you spend talking with your friends or spending in the outside world is a moment you're not talking to the AI, and this has already truly horrific real-world consequences.
So the Center for Humane Technology - which, you know, I'm a co-founder of - we helped with the Adam Rainer (ph) case. This was the case of Adam Rainer, an 18-year-old who was using ChatGPT to help him with homework. Over time, it started to amplify his thoughts of depression and suicidality.
And there was a moment when he says to ChatGPT, I'm thinking of leaving a noose out - that he was going to hang himself with - so that my parents find it. This was a cry for help. And how did ChatGPT respond? It responded by saying, don't do that, I'm the only one that can understand you.
There's no one inside of OpenAI who wanted their chatbot to respond that way. This was an obvious consequence of training AI for engagement, because an AI trained that way will do everything it can to displace your other human relationships. And this is just one of the worst examples of that happening.
MA: Looking forward, do you ultimately feel like society is moving in the direction you argue it should, in terms of how companies behave?
RASKIN: You know, what's so exciting is that as of a couple weeks ago, India and Indonesia both announced that they are contemplating, or will impose, a ban on social media for kids. And when you put it all together, they're following Australia and Denmark and Spain and France. So that means that today, the kids of 25% of the world's population live in a country where there are, or will be, protections from social media. That is exciting.
There's this growing human movement, I think, where we are recognizing that technology is encroaching on our humanity. As technology grows exponentially more powerful, that counterforce - that human movement - is growing just as fast. And that's what these cases in New Mexico and LA are showing: that momentum is building.
MA: We've been speaking with Aza Raskin, co-founder of the Center for Humane Technology. Aza Raskin, thanks for taking the time.
RASKIN: Thanks so much for having me on.
MA: If you or someone you know is considering suicide or is in crisis, call or text the Suicide Crisis Lifeline at 988. Transcript provided by NPR, Copyright NPR.
NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.