Francesca Mani will never forget the day last fall when she was called into the principal’s office at her high school in New Jersey.
And I’m not going to lie. After I left the office, I was crying.
The principal told her some appalling news. A male student had used AI to generate an explicit image of her, putting her face on a nude body, and he’d circulated it online. Francesca wasn’t the only one.
I was walking the hallways. I saw a group of boys laughing at a group of girls who were crying. And that’s when I realized I shouldn’t be sad, but I should be mad.
Francesca and allegedly more than 30 of her classmates were the victims of a type of online harassment that’s becoming more and more common. It’s a new kind of deepfake pornography that’s made possible by AI, and it’s sweeping the Internet.
So, I came home, and I told my mom, and I said, we need to do something about this because this is not fair to the girls, and this is not okay.
I’m Clare Duffy. I’m a tech reporter for CNN. And this is Terms of Service. I cover artificial intelligence and other new technologies for a living. And even I sometimes get overwhelmed trying to keep up with it all and stay safe in the process. I hope this show will help us all better understand how these tools work and how we can experiment with them without getting played by them. For this first episode, I wanted to know: how can we protect ourselves against AI-generated deepfake pornography? Because even though tech companies and legislators are trying to play catch-up with the dangerous ways people are using these tools, the truth is, they’re still really easy to access. One of the first tips, experts say, is to see if you can find a lawyer in your area who’s familiar with the issue. That’s why I invited Carrie Goldberg into our studio.
We can never be fully safe in a digital society, but it’s kind of up to one another to not be total a-holes.
Carrie runs a law firm out of New York City that takes on cases dealing with things like revenge porn and stalking. She’s worked with high-profile clients, including multiple Harvey Weinstein accusers. She’s also represented everyday people dealing with online harassment. And she wants to hold tech companies accountable for online harms. Carrie is part of a growing cohort of people working to get the law to catch up with all the tech advancements that have made deepfake pornography so common. For her, this work is personal. Well, Carrie Goldberg, thank you so much for doing this with us.
So, in your Instagram bio, you say you take on assholes, pervs, trolls and toxic tech. How did you get into this specific kind of work?
Well, so I went to law school, and I was really interested in personal injury and the idea that somebody could be injured and that the only way to really compensate them in our justice system is financially. And I continued to work with victims, and, ultimately, I had this terrible experience with a short-term relationship where I ended the relationship and my ex basically made it his mission to destroy my life. And one of the things that he did was use nude images like pictures and videos of me that he had, and then he would email them to me and then tell me that judges and clients and things were being blind copied, and I didn’t know what to do or how to help myself. And this was back in 2013, before I’d even heard of the word revenge porn. There were not criminal laws to prevent it, and so I was just scared out of my mind. And once I got through that miserable experience, I basically just started a law firm thinking that I could maybe help other people who had also been the targets of stalking and image-based abuse.
Obviously, even though, as you said, nobody really called it that back in 2013, revenge porn, nonconsensual sexual images, have been around basically as long as the Internet, unfortunately. How are you seeing AI change the online threat landscape for women and for men? Although we know that most of the victims of this kind of harassment are women.
So, the main concern for a long time was with people consensually sharing a nude image of themselves and then it being non-consensually published online. And with AI, we don’t even have to be creating the image or sharing it consensually. All we have to have is just a human form in order to become the victim. And so the whole idea that we used to have to battle with of the shaming of victims: Why did you take that picture in the first place? Didn’t you see the red flags with that person you shared it with? That’s totally removed because anybody can be the victim of AI deepfakes.
Yeah. When you hear from clients that this has happened, that somebody has created a nude image of them or looks like it’s them that they did not take or consent to, what does that mean for people’s lives?
They’re usually pretty freaked out, especially if they’re young and they don’t know how to cope. And the Internet is this big, huge, nebulous place. How do you get an image down? How do you find out where all the images have been published? How do you find out who’s doing it, what technology they’re using? It feels overwhelming and scary.
Well, yeah, it’s crazy to think that, like, as you said, anybody who has their photo, their face online, which is almost all of us these days, this is the kind of thing that could happen.
Exactly. And you don’t even have to have an online presence. I mean, you just have to have a physical embodiment.
Yeah. How are these AI-generated explicit images made? Like on a technical level, how is this happening?
Well, now you can just go to, you know, Google Play and the App Store and just download really basic software that then just scans images and lets you splice them into pictures and videos. The bar to creating them is really, really low.
Yeah, this maybe is a dumb question, but the companies whose technology is being used to make these images, these apps that you can download from the App Store, are they aware that this is happening? They know that their technology is being used this way?
I mean, one of the main ones is called Nudify.
So I think they know that the main purpose is to injure, yes.
We should say that, as of right now, the Nudify app isn’t available in the Apple App Store or on Google Play. Some similar apps were also removed from the app stores earlier this year. I reached out to Apple and Google for comment, and they pointed me to their terms of service. Google does not allow apps in the Play Store that contain content or services intended to be, quote, “sexually gratifying.” And it says that generative AI apps must prohibit the creation of certain restricted content like content that could facilitate the exploitation of children. On Apple’s App Store, apps must comply with certain guidelines, including a prohibition against objectionable content like, quote, “overtly sexual or pornographic material” or content that is, quote, “just plain creepy.” Regardless, tools that can generate deepfake pornography do still exist, and they’re easy enough to access that high school students are finding them. These images are only going to get more convincing as AI technology develops. But Carrie says that how realistic they are is kind of beside the point.
They look real. But I think, like, if we look closely enough at an image, we can see that, you know, there is a sixth finger on a hand or something like that. So there’s ways to tell. But, you know, it’s almost like you can’t unsee them. So if you’re a middle school kid, I mean — it’s illegal, by the way, it falls under our child sexual abuse material laws to create them of kids. But let’s say that you’re a kid and your deepfake has gone viral around your school or beyond. The harm has already happened. You know, like, it doesn’t really matter if people are aware that it’s fake. They’ve already seen you in this, like, sexual naked form. And so, you know, you’re dealing with the humiliation of that, so it doesn’t really matter.
Yeah, that makes a lot of sense. Whether there’s a sixth finger or not, people have this idea of you that was created by this technology, and you can’t undo it.
What are the big-picture implications of this technology? I wonder what it means for all of us and our experience online if we can’t trust that we have control over our own images.
Yeah, I mean, it’s all related to the proliferation of misinformation and disinformation, not ever feeling totally safe that we can control who we are online or what our reputations are. I mean, Google has taken away our ability to control. They are the ultimate arbiter of what people see about us online when they Google us, and we don’t control that. But this is just a new additional layer about our loss of control. And I think it’s really especially scary for the younger generation who are going to become our elected officials and our leaders of industry. And no one’s really going to be getting into adulthood totally unscathed without having the potential or vulnerability of becoming a victim. And that’s really scary.
On an individual level when you think about how this kind of harassment can impact somebody’s life, I mean, you made the point about it almost doesn’t matter if you can tell that it’s AI because the harm is done. Does this impact people’s ability, as you said, to run for office, to get jobs? Like, how could it harm somebody on that individual level beyond just sort of the personal trauma that somebody would undergo?
I mean, if you’re scared and humiliated and having to spend all your time, you know, filling out DMCA takedown notices and stuff, then you don’t have the ability to be thinking bigger. You are just dealing with a crisis. And so it can really derail a person’s career path, or even how they’re thinking about what they want to do and who they want to be, if in your childhood you went through something where you were sexually exploited in a public way.
So anyone can become a target of these deepfakes, even if they’ve never taken or shared an explicit image of themselves. And while most of the major social media platforms say they ban these kinds of images, they don’t always do the best job of removing them. Knowing this, it’s easy to feel powerless. But Carrie says there are steps we can take to protect ourselves and our loved ones. And she has some thoughts on how to hold tech companies more accountable. That’s after the break.
There are so many different paths of accountability for this issue. But are governments specifically behind on addressing it? Are they catching up at all?
So, they’re catching up. We just got the DEFIANCE Act passed in the Senate. It actually creates a private cause of action for victims to go after their offenders, and they can seek between $150,000 and $250,000 in damages. So that’s really progressive.
And what about the companies whose technology is being used to make this stuff? Is there any acknowledgment? I mean, I know with most tech companies, there’s a difference between saying we have a policy against you doing this bad thing and actually enforcing against that bad thing. But are the companies making any effort to address the non-consensual creation of these images?
I mean, the companies that are creating the technology and putting it into the hands of the public, they know what they’re doing and they’re doing it anyway. We can’t assume that they’re good actors, even if, you know, they might say that this is for other things besides nudes. So I don’t expect them to self-correct. I think that it’s going to take product liability cases and seller negligence cases against the distributors. Any platform that publishes them, like X, or the websites that are devoted entirely to deepfakes, they’re going to say that they’re immune from liability under Section 230.
Okay. So, for the uninitiated, will you give us the brief explainer of section 230?
Sure. So Section 230 is a federal law that went into effect in 1996, which basically says that platforms cannot be held liable for the content that users post. But what’s happened over the intervening years is that the law has become more and more bloated, to the point where it’s almost impossible to sue platforms for anything, you know, even for really bad conduct that they knew they were involved with. And there’s a lot of momentum right now to rein it back in. And that’s beginning to take shape both with a lot of different legislative ideas and with courts beginning to say, okay, these companies actually are not just publishing platforms; they’re really complex things with geolocation and DMing and all these different complex features. You know, we can’t just act like they’re dumb publishers.
Yeah, they’re not just a little bulletin board anymore. These are multibillion-dollar companies.
Exactly. So we really have to be looking at this from a product liability standpoint and say that, you know, if we’re fortunate enough to kind of be able to link an actual product to a specific person’s injury, then we can say, okay, this is an unreasonably dangerous product. So it’s exactly the same kind of theory that we would be using against, you know, a car manufacturer if the seatbelts didn’t work or the brakes went out. We have to be holding these companies liable under the same, like, product principles.
Is there momentum in that direction? Like are those cases happening?
Not so far against deepfakes. So, I’ve been doing product liability cases against tech companies since 2016. And in the beginning, I got laughed out of court because platforms would say, we’re not a product or a service, and you can’t use these kinds of, you know, liability claims on us. We have Section 230 immunity. Like, it was really an unpopular theory. But, over the last few years, the theory has caught on. We had a big case against Omegle.
We should say, for people who aren’t familiar with Omegle, if they’re listening, this is the website where you could log on and chat with strangers. And I mean, gosh, I remember Omegle in middle school being a very scary place because it was a place where, like, as a middle school girl, you’re like, hmm what is this? And then all of a sudden you’re chatting with somebody who’s flashing you, and it’s an adult man.
Right. So, Omegle’s whole thing is chat with a stranger. And so it matches children and adults, and there’s no age gating, and everyone’s anonymous. And so we sued them under a product liability theory. Our 11-year-old client had been horribly victimized by somebody that she met and matched with on that platform. And it was a three-year victimization. So we used the same theory of product liability against them. And, ultimately, the judge said, yes, absolutely. You know, like, the matching is a defect. You know, if your product matches kids and adults, you know, for private sexual chatting like livestreaming, then it’s defective, and it’s also trafficking. And so as that case developed, we ultimately got Omegle to settle, and the settlement involved them shutting down forever. So, the theory of holding the companies liable for having malicious products like deepfakes is something that I think has teeth.
Okay, so this is all very scary, and I can imagine it’s especially scary if you’re a parent listening to this and thinking about the possibility of this happening to your child. So I want to talk about some potential solutions. If somebody woke up tomorrow and discovered that someone had created explicit AI-generated images of them without their consent, or if a parent discovers that this has happened to their child, how should they respond?
So my first advice for parents, actually, should predate discovering that this happened to your child. The first thing that a parent should do is to say, listen, I want you to know that if anything messed up happens to you online, just tell me, okay? Don’t worry about me getting mad at you. Don’t worry about me taking your phone away. Like, that’s all secondary. I just want you to tell me as soon as possible. Because what happens is that kids are afraid to tell their parents and then think that they have to deal with it on their own, because they’re afraid of the punishment or getting their phone taken away. And so they’ll just be dealing with a crisis in solitude, and it becomes overwhelming, and kids can actually become suicidal or self-harming because of it. So I want parents to know that they should just tell their kids to tell them ASAP, and then we can figure out where the images are being published. Most social media platforms have, you know, forms that you can fill out for content removal. So just take screenshots of everything and then go through it, you know, just using brute force and getting the content removed. It’s important to try to figure out, is this part of a course of other kinds of stalking? Because if it is, if there’s somebody who just has it out for you, an ex or a coworker who is jealous or something, then we need to figure out who that person is and get an order of protection or report them criminally and kind of stop it at that level. There’s so many different ways to stop it. And in New York, it’s criminally illegal to create and publish digital forgeries. We have a deepfake law. Other states are creating those as well. So, you know, Google, find out who the advocates in your community are. There are different lawyers and social workers and stuff who can help.
So I think we should stick on the screenshot thing for a minute, because that’s probably really important. As much as you probably just want to delete this stuff as fast as possible and not look at it, taking screenshots creates a log that you could potentially use as evidence to take some of this action, right? Legal action.
Exactly. So, the knee-jerk reaction is to get this off the Internet as soon as possible. But if you want to be able to have the option of reporting it criminally, you need the evidence.
So, you’re screenshotting so you have that evidence, and then you’re going to the companies and filling out these forms, asking them to take it down. Should everybody be going to the police when this happens to them or not necessarily?
It’s a personal decision. You know, it’s not illegal everywhere. And so going to the police when there’s no criminal law to be enforcing can be demoralizing. It can be demoralizing even when there is a criminal law. I’ve had a lot of people report this, and the police say this isn’t even criminal when it actually is. So I just want people to go into the criminal reporting process fully aware of that. If you’re a college student, you should totally report it to the Title IX office, especially if you believe that it could be somebody that you’re going to college with, because that’s, you know, a disciplinary violation.
And when you talk about looking for a lawyer or a social worker who might be able to support you, is that useful even if you’re in a place where there aren’t criminal protections against this kind of activity? Are there other things you could get help or support with?
Absolutely. So, first of all, a lawyer can tell you if you do live in a place where it’s illegal. Also, hopefully we’ll soon have a federal law that gives victims the right to take legal action if they want to or even to get an injunction against the offender, which would be a court order saying that they have to stop. It could be grounds for an order of protection that forces that person to leave you alone, and if they don’t, then they can be arrested. Plus, you know, there might be a way to go after the platform. You know, if we can figure out what technology was used to create it, what vendor, you know, whether it was the App Store or Google Play or somewhere else was distributing that product, like there might be a bigger way to stop the problem.
Yeah. Beyond sort of the parents having conversations with their kids, are there other things that people can do proactively to prevent this from happening to them?
My proactive advice is really to the would-be offenders, which is: just don’t be a total scum of the earth and try to, you know, steal a person’s image and use it for humiliation. There’s not as much that victims can do to prevent this. As we like to say in our office, everybody is a moment away from crossing paths with somebody who’s just hell-bent on their destruction. So we can never be fully safe in a digital society, but it’s kind of up to one another to not be total a-holes.
Well, Carrie, thank you so much for doing this.
It’s my pleasure. Thank you for asking such great questions, Clare.
So, to recap: Here are three tips that can help you and your loved ones navigate the growing risk of harmful deepfakes. First, if you have kids, try to have an open channel of communication with them so they’re not afraid to tell you if they’re being harassed online. That way you can help them deal with the issue as quickly as possible. Next, if you find an AI-generated nude image of yourself or a loved one on social media, look for the forms where you can request to have it removed. There are non-profits that can help with that, too. We’ll share links to them in our show notes. And before you get the images taken down, take screenshots as documentation. Finally, if you know who the perpetrator is, you may be able to file for an order of protection against them. Look up lawyers or social workers in your area who can help. Thanks so much for listening to this first episode of Terms of Service. And one last thing. If you’ve had someone create deepfakes of you, or if you’re a parent and this happened to your child, we’d appreciate hearing from you about how you handled it and whether you have advice for others in the same situation. That’s it for today. I’m Clare Duffy. Talk to you next week.
Terms of Service is a CNN Audio and Goat Rodeo production. This show is produced and hosted by me, Clare Duffy. At Goat Rodeo, the lead producer is Rebecca Seidel, and the executive producers are Megan Nadolski and Ian Enright. At CNN, Haley Thomas is our Senior Producer, and Dan Dzula is our Technical Director. Steve Lickteig is the Executive Producer of CNN Audio. With support from Tayler Phillips, David Rind, Dan Bloom, Robert Mathers, Jamus Andrest, Nicole Pesaru, Alex Manasseri, Leni Steinhardt, Jon Dianora, and Lisa Namerow. Special thanks to Katie Hinman and Wendy Brundige. Thank you for listening.