Can we even trust videos anymore?
Editor’s Note: On Point reached out to OpenAI and asked for an interview. They declined. An OpenAI spokesperson told us in a statement:
“Sora includes visible, dynamic watermarks and C2PA provenance metadata on downloaded videos, and we maintain internal systems designed to determine whether a video originated with us. Our usage policies prohibit deceptive or misleading use, and we take action when we detect violations. We’ll continue strengthening our safeguards and contributing to efforts that make provenance signals more consistent and effective across the industry.”
OpenAI’s Sora 2 app lets anyone with a smartphone create AI-generated deepfake videos, from phony footage of a corgi rock climbing to fake videos of kids carrying guns in school. Is it time to stop believing our eyes?
____
The version of our broadcast available at the top of this page and via podcast apps is a condensed version of the full show. You can listen to the full, unedited broadcast here:
____
Guests
Hany Farid, professor at the University of California — Berkeley’s schools of information and electrical engineering and computer sciences. Co-founder and chief science officer at GetReal Labs, which develops techniques to detect manipulated media.
Transcript
Part I
MEGHNA CHAKRABARTI: Back in September, OpenAI launched Sora 2, its newest artificial intelligence video generation model. The app lets anyone with a smartphone create hyper-realistic, deepfake videos with a simple text prompt. Social media was quickly flooded with these deepfakes, with things like a fake video of a figure skater performing a triple axel with a cat on her head.
(VIDEO PLAYS)
CHAKRABARTI: And phony footage of a video game character, the one and only Mario driving his signature red go-kart in a real-world police chase.
(VIDEO PLAYS)
CHAKRABARTI: There were also darker and more sinister videos created with Sora 2, like a fake news interview with someone about SNAP benefits, also known as food stamps.
(VIDEO PLAYS)
CHAKRABARTI: Some of the SNAP-related deepfakes even made it to Fox News, which initially covered the AI-generated videos as if they were real. And there were so many Sora 2 fake videos generated of Martin Luther King Jr. that, just two weeks after the app launched, MLK’s family and OpenAI announced the company had banned the use of King’s likeness in the app.
Now, since its release, OpenAI says it has cracked down on what’s allowed in Sora 2. The company’s website lists, quote: “Graphic violence or content promoting violence, extremist propaganda and hateful content” as things that could be removed.
It also includes: “Content that recreates the likeness of living public figures without their consent or deceased public figures in context where their likeness is not permitted for use.” End quote.
CHAKRABARTI: But note, this is content that could be removed, meaning it’s possible it has already been made and already had the chance to spread, even worldwide.
And that’s different than totally preventing the AI from creating the content in the first place. So Hany Farid joins us again today. He’s a professor at the University of California — Berkeley Schools of Information and Electrical Engineering and Computer Sciences. He’s also co-founder and Chief Science Officer at GetReal Labs, which develops techniques to detect manipulated media.
Professor Farid, welcome back to On Point.
HANY FARID: Good to be back with you, Meghna.
CHAKRABARTI: Are we at a place where when we turn on a screen, we will not be sure if we can believe our eyes?
FARID: Yeah. I feel like you’ve asked me this question a few times and I’ve answered it a few times.
CHAKRABARTI: (LAUGHS)
FARID: The answer is increasingly yes.
So here’s what I can tell you. If we can level set for a second. Still images. We do perceptual studies where we show people images. They’re through the uncanny valley. People can’t tell anymore, not reliably. Voices, people talking like you and I right now, without video, people can’t tell reliably if what they’re listening to is a real voice or not, and video is a fast follow.
Veo 3 earlier this year from Google, Sora 2 from OpenAI, as you were just describing, have really pushed the boundary. I swear I’m not trying to get back on your show a year from now, but in a year from now, I think I’m going to be able to tell you it’s also through the uncanny valley.
It’s all becoming so incredibly realistic. But there’s also something else that’s important here, which is that it’s playing into narratives that people want to believe. You mentioned the SNAP benefits and the Fox News.
When you create these videos that fill a void or are consistent with a narrative that people want, you don’t even have to have the videos be that good. People wanna get there. And of course they are, as you said, now spreading on social media, and we shouldn’t forget that part. This is not fundamentally only an AI question. This is an AI meets social media, meets political polarization, and that is the perfect storm that we’re experiencing right now.
CHAKRABARTI: A little later in the show, Hany, we’ll go through each part of this perfect storm in detail. Because as you said, we could really be at a pivotal moment here. And yes, you have said, I have asked you that question before, and for folks who don’t know what we’re referring to, it was at the beginning of this year, in January, that we last had Hany Farid on. Because then it was about people using the Pixel phone. Google Pixel phone. How easily they could make fake still images. And quite frankly, Hany, Claire, my producer, and I, we were thinking, oh, how quaint.
FARID: Yeah. Exactly. Yeah. Wasn’t that cute back in the old days when we used to do that nine months ago. But here’s the thing too. First of all, I thought it was more than a year ago, which just tells you how crazy the last year has been, but we used to measure technological progress in 12 to 18 to 24 months, we now measure it in 12 to 18 weeks.
No kidding. These things are happening so quickly that we can’t even keep up anymore. We just wake up every day and be like, all right, what does the world have to offer us now, in terms of AI?
CHAKRABARTI: Okay. And also, I’ll try to remind myself and everyone that even though we’re focusing on the challenges, to put it lightly, of this reality-warping power of AI, in all of our coverage we also look, fairly I think, at the ways that AI can make the world actually a better place. But the fulcrum in all of this is always how people are going to use it. So let’s go into the details, if we could, for a second.
Yeah. Professor Farid, how does Sora 2 and similar apps work now to create this hyper-realistic video? How does it do it?
FARID: Yeah. So almost all of the generative AI and let’s stay away from text for a minute, the ChatGPT of the world. Let’s talk about images, audio, and video. Most of them work in a very similar way.
They have gone out, both legally and illegally, and scraped the internet for every single piece of content that they can get their hands on. So images and video are of course coming from the YouTubes and the TikToks of the world and many other sites. And the machines, the AI models, are being trained on actual human content to generate things that are semantically consistent with a caption.
So you can say, give me a video of a unicorn walking down Times Square. And of course, it’s unlikely that somewhere in the training dataset was exactly that, but all the pieces are there. What the AI learns is: what does Times Square look like? What does a unicorn look like?
And the real breakthrough here, with both Veo from Google and Sora from OpenAI, is now the ability to create an entire, what we call, space-time volume. So instead of just a single image or a video without sound, which was the state of the art about a year ago, it can now generate entire videos. That space and time is of course frame after frame, and then adding in the audio stream as well.
Under the hood, lots of complicated AI technology, but at its core, it’s simply training on the internet. It has seen hundreds of millions, even billions, of hours of video. It is simply learning the patterns that are consistent with real video. And of course, only companies of the size of OpenAI and Google can do this, because of the sheer volume of data and compute that is required to train these models.
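To make that idea concrete, here is a toy numerical sketch, not Sora’s or Veo’s actual architecture, of what “generating a space-time volume from a caption” means: start from noise shaped like frames-by-height-by-width-by-color, and iteratively refine it conditioned on the caption. Every function name here (embed_caption, toy_denoise, generate_video) is hypothetical, and the “denoiser” is a stand-in so the loop runs end to end.

```python
import numpy as np

FRAMES, HEIGHT, WIDTH, CHANNELS = 48, 64, 64, 3   # a tiny ~2-second clip

def embed_caption(caption: str) -> np.ndarray:
    # Real systems use a learned text encoder; here we just hash the caption
    # into a fixed-length vector so the "model" has something to condition on.
    rng = np.random.default_rng(abs(hash(caption)) % (2**32))
    return rng.standard_normal(128)

def toy_denoise(volume: np.ndarray, text_vec: np.ndarray) -> np.ndarray:
    # Stand-in for a giant trained network: nudge the whole volume toward a
    # pattern derived from the caption embedding. A real model learns this
    # update from enormous amounts of training video.
    target = np.tanh(text_vec[:CHANNELS])      # fake per-channel "content"
    return volume + 0.1 * (target - volume)    # one small refinement step

def generate_video(caption: str, steps: int = 30) -> np.ndarray:
    text_vec = embed_caption(caption)
    # The "space-time volume": frames x height x width x color, starting as noise.
    volume = np.random.standard_normal((FRAMES, HEIGHT, WIDTH, CHANNELS))
    for _ in range(steps):                     # iterative refinement loop
        volume = toy_denoise(volume, text_vec)
    return volume                              # frames you would then encode, plus audio

video = generate_video("a unicorn walking down Times Square")
print(video.shape)                             # (48, 64, 64, 3)
```

The real systems Farid describes replace the toy denoiser with networks trained on those hundreds of millions of hours of video, and add a learned audio track, but the shape of the process, caption in, refined space-time volume out, is the same.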
CHAKRABARTI: And how detailed does the prompt have to be?
FARID: Yeah. Yeah, that’s the right question to ask. So there is still some prompt engineering at play here. I’ve certainly seen, in the hands of an experienced user, if you can call anyone experienced with Sora released only three or four months ago, you can really do some pretty impressive things.
So with really simple prompts, you’ll create a cool video. But as you start to learn how to prompt, getting more detailed and knowing where the limits, the strengths and the weaknesses of the models are, you can do more. And there are plenty of websites out there that can help. I will tell you what I do, and it actually works really well.
This is a very weird thing, but I ask ChatGPT: tell me how to prompt your video model to get a really good video. And it’s really good at it. So you actually can just learn it from the AI itself.
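For readers who want to try the trick Farid describes, here is a minimal sketch of asking a chat model to draft a detailed video prompt, using the OpenAI Python SDK’s chat completions API. The model name, the wording of the request, and the example idea are assumptions, and an OPENAI_API_KEY environment variable is required; treat it as an illustration, not a recipe from the show.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rough_idea = "a figure skater doing a triple axel with a cat on her head"

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whatever chat model you have access to
    messages=[
        {
            "role": "user",
            "content": (
                "Rewrite this rough idea as one detailed prompt for a "
                "text-to-video model, specifying shot type, lighting, and "
                f"camera motion: {rough_idea}"
            ),
        }
    ],
)

print(response.choices[0].message.content)  # paste the result into the video app
```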
CHAKRABARTI: I want to actually just play something really quickly for you, Professor Farid. It’s not OpenAI, it’s not Sora 2, but it just shows how much and how quickly things like hyper-realistic video are changing, not just in the world of social media, but in other forms of day-to-day, now very common, video communications. Because this has to do with something that you and Claire Donnelly, the producer who put together this hour, actually did together.
Because you demonstrated some deep fake technology during your Zoom call with Claire. And we have a clip here of what happened in that Zoom call because halfway through it, you completely changed what your face looked like on the screen.
HANY FARID: Let me switch over here. So now if you look at my face, Claire, I’m a completely different person, right?
CLAIRE DONNELLY: That’s because of Sora.
FARID: It is not Sora. It’s a different technology. Sam Altman.
DONNELLY: Are you Sam Altman?
FARID: Yeah. Sam Altman.
DONNELLY: Oh my God.
FARID: I’m a pretty good Sam Altman, except that I have darker hair, but I think the shapes of our heads are pretty similar.
DONNELLY: Interesting.
CHAKRABARTI: Professor Farid, Claire really wants you to know that her reaction, because she’s a professional while she was talking to you, did not actually truly reflect her terror when she saw your face, like almost instantly changed to a perfect likeness of Sam Altman suddenly giving her the talk on Zoom.
FARID: Yeah. First of all, Claire and I are going to have to have a chat about that pre-interview as I understood it. But nevertheless, okay, so this is a different technology, and let’s talk about this. Because what we have been talking about up until now is: you can go to Sora or Veo, you can type a prompt, get a video, post it online.
Now what we also can do, in real time, as a Zoom or a Teams call is unfolding, is change my identity. This is a different technology. It’s all AI under the hood of course, but it’s a different core technology, called a face-swap deepfake or an avatar deepfake. There are two different types. What I did in that bit you just saw with Claire is I just changed my face, eyebrow to chin, cheek to cheek, to be a completely different person. So what this means now, Meghna, is you go online and it’s getting harder and harder to know what’s real and what’s not.
But now it’s going to be you get on a call. It’s going to get harder and harder to know. And at this point, you got to be asking yourself, are you 100% sure you’re talking to Professor Hany Farid from UC Berkeley, because it’s seven in the morning and maybe he’s too lazy to get up in the morning.
And he sent his grad student with a voice modulator, which you can also do. And so suddenly everything we do online, every interaction we have, personal, professional, with our banks, with our doctors: how do we trust what we see anymore? If you can change people’s identities, their voice and their face, in real time, that’s getting even weirder. Okay, the internet has been weird for a long time, but surely we can trust a Zoom or a Teams call, and the answer is: not so much.
Part II
CHAKRABARTI: We spoke with someone who had an interesting experience with Sora 2. Because when it came out back in September, Katie Notopoulos downloaded it right away. She’s a senior correspondent at Business Insider, where she covers tech and business and culture.
KATIE NOTOPOULOS: Probably for two days I was all day long making videos. It drained my phone battery so fast, having to keep charging it. A bunch of my friends, other colleagues had also joined, and you can make videos with yourself and a friend. Only people who are also on the app and agree to it. So I would make a video of like myself and my colleague singing in a band together or something like, doing something embarrassing. Or I really liked making videos of my friends falling over on roller skates.
It was 10 seconds. It was incredibly funny to me and to my friend, and literally nobody else.
CHAKRABARTI: Now Sora 2 lets you choose who can have access to your likeness. You can keep it private to just you, or let your friends use it or leave it open for anyone on the app to use to make their own deepfake videos with your face.
As an experiment, Katie chose to leave hers completely open.
NOTOPOULOS: I wanted to do that for reporting purposes. Like I wanted to understand what that experience was like, if you left your thing open, which I would not recommend anybody does. Because what ended up happening a few weeks after it initially launched, I started noticing that there were a lot of like fetish content stuff, so it does not allow sex, like explicitly sexual content.
No one could make a video of you nude or in your underwear. But what they can do is these really niche non-nude fetishes. So like people who are really into feet or pregnancy, or women eating a ton of food and their bellies getting really big.
CHAKRABARTI: Katie says on Sora, there are now around a hundred videos that strangers made using her likeness.
She says she’s not surprised, but it freaked her out.
NOTOPOULOS: I felt deeply grossed out by, Oh, people are like making this weird fetish content using my face like that. That’s never happened on any other social platform, right? There’s just not really a mechanism for people to be doing that so fast and in such great quantity with your likeness.
CHAKRABARTI: Katie says she could change her privacy setting, so people can’t use her in their videos anymore, but she wants to let the experiment run its course. And while she had fun at first using Sora, Katie does worry about what it means for the trustworthiness of internet video overall.
NOTOPOULOS: This is a super easy, free, which is also key here, right, way to make an amazingly realistic deepfake, either of a person or a situation. Oh, here’s the Ring camera footage of a bear in my neighborhood, right? It’s really good at making fake video. There’s just a lot of very obvious concerns about what the world looks like when we’re flooded with fake video.
CHAKRABARTI: Katie thinks it might be helpful for everyone to take time to familiarize themselves with deepfake apps like Sora, so they know what to look for.
NOTOPOULOS: If you are concerned about giving OpenAI your face, likeness, data, that’s a reasonable concern. You don’t have to do that, but you can try making videos on your own of whatever you want.
That doesn’t involve your own face. I think that gives you a good sense of like how easy it is and how good the results look. If you know a little bit how they work, you can identify the stuff a little better.
CHAKRABARTI: So that’s Katie Notopoulos, a senior correspondent at Business Insider. Professor Farid, do you think Katie’s advice to use the app in order to learn how to detect what’s real and what’s not, is that good advice?
FARID: Yeah. I think it’s good advice. I think it is important to understand what these technologies are, how they work, how you’re being deceived online. I also agree with her advice: don’t give people the ability to use your likeness. I’ll add a footnote to that. In some ways it doesn’t really matter, because the fact is, if there’s a single image of you out there, or just a few seconds of your voice, anybody can grab your likeness.
Maybe not with OpenAI’s Sora, but there are plenty of other apps out there that are much sketchier and don’t have certain guardrails. So I do think it’s important to understand these technologies and understand that when you go onto social media today and certainly tomorrow, a lot of what you’re seeing is just AI slop.
You are just being lied to. And if you want to use that as entertainment, I think it’s strange, but okay. But understand there’s not somebody with SNAP benefits who is selling it. The line between entertainment and outrage on important issues of the day is very fine. And we just need to be careful here.
Of course.
CHAKRABARTI: And that’s the point, right? Because as you’re saying, if I got Sora 2 today and decided to make a video of myself riding a unicorn with a lightsaber in my hand, destroying a demon in front of me, fun, right? No harm there because obviously, I would never be riding a unicorn, let alone unicorns existing.
But the reason why people are rightly concerned is, as you said, first of all, it doesn’t even have to be perfect deepfake, right? It’s just filling in a little bit of a cognitive gap where people already want to be. And we had that example at the beginning of the fake SNAP selling video that actually made it onto Fox News.
So tell me what you know, or what you’re hearing. Are there any of these kinds of concerns within OpenAI, within Google, within the companies that are actually at breakneck speed putting these new AI technologies out?
FARID: Yeah. So I don’t have a lot of insight into these companies other than being an advisor on a few things around online safety.
But here’s what I can tell you, that the companies are doing what the companies did for the last 20 years, which is move fast and break things. It’s a competitive landscape. They all want to get out there. They want to be first and once Google released Veo 3, Sora was going to come out months later. We know this, and you said this at the top of the hour too, which is, what did OpenAI do?
They released this thing. They immediately see the damage and they start trying to backfill in the safety guardrails. Have we learned nothing from the last 20 years of the internet? Have we learned that this is not going to end well for us? And what’s frustrating to people like me is that we are making exactly the same mistakes, but accelerated that we made in the first 20 years of the internet.
This is not a case where you develop a technology and there are unintended consequences, where people use it in ways we could not have foreseen. That’s not this. People are doing exactly what we knew they were going to do, and yet here we are, releasing these technologies, letting people use them in awful ways, and then trying to backfill in the safety, and it’s not going to end well.
CHAKRABARTI: To your point, here’s OpenAI CEO Sam Altman talking about Sora 2 on the a16z podcast in October.
SAM ALTMAN: It is important to give society a taste of what’s coming on this co-evolution point. So like very soon the world is going to have to contend with incredible video models that can deepfake anyone or kind of show anything you want.
And that will mostly be great. There will be some adjustment that society has to go through. I think it’s very important that the world understands where video is going very quickly. Because video has much more, like, emotional resonance than text. And very soon we’re going to be in a world where this is going to be everywhere.
CHAKRABARTI: Professor Farid, I want you and all listeners to know that we reached out to OpenAI multiple times asking if anybody from the company could join us for an interview. They declined. I’ll read their statement a little bit later, but it is very challenging from a civil society point of view, or even just a democracy preserving or sanity preserving point of view.
To hear Sam Altman just throw in, Oh there will be some kind of adjustment that society has to go through.
FARID: Yeah. And good luck everybody. I hear this quite a bit and I find it a really odd argument. People like Sam Altman say look, somebody’s going to do this, so let’s just get out ahead there and make a lot of money doing it.
That’s essentially the argument, right? Is if we don’t do it, somebody else will. So we’re going to go ahead and do it. It’s a really weird mindset to be in. And you said this well, and we should repeat this. Are there positive aspects to this technology? Yes, but there’s the second part of that question. Which is are they outweighed by the negative aspects?
I don’t know, but I feel like we’re getting awfully close to the negatives outweighing the positives. Because what are the positives? The creative side of things. A young, talented person can start to create really beautiful short films, for example. That’s really cool.
But do you have to unleash this product without any guardrails on the public globally and then think okay, some good things will come, some bad things will come and we’re sorry. We’re just going to have to adjust to it while we become a trillion dollar company. It’s a strange argument and I don’t buy it.
CHAKRABARTI: In fact, you said earlier that money has a lot to do with this and NPR had a really good report on Sora 2, and they spoke with three former OpenAI employees who all said that they were not surprised the company launched this app showcasing the video technology because yeah, in their words, investor pressure is building up on OpenAI to quote “dazzle the world” as it did just three years ago after the release of ChatGPT.
FARID: Yeah. Yeah. And I’ll point out by the way that the podcast that you just heard Sam Altman on was from an investor, a very famous one, in fact. So yes, this is investor pressure.
And we tend to criticize, rightfully, the CEOs of tech companies, and we should, for the damage that they have done to society. But behind them is the 800-pound gorilla: the venture capitalists and the investors who are putting massive pressure on these companies to become profitable, and to burn the place down to the ground in order to do that.
And they should be held accountable for that as well.
CHAKRABARTI: And the fact that NPR got these recently former OpenAI employees to talk is interesting. Because I presume everybody has to sign an NDA when working for any of these companies. Because another one said that, quote, releasing Sora tells the world where the party’s going.
AI video may be the last unconquered social media frontier, and OpenAI wants to own it. But as Silicon Valley competes over it, companies will bend the rules to stay competitive, and that could be bad for society.
We should disabuse ourselves of any presumption that people in these companies don’t know what might happen with the technologies they’re developing.
But on the other hand, as I said, we did ask OpenAI for an interview. They declined. But here is what a spokesperson told us in a statement. I’m just going to read it in full here.
Sora includes visible, dynamic watermarks and C2PA provenance metadata on downloaded videos, and we maintain internal systems designed to determine whether a video originated with us.
CHAKRABARTI: What is C2PA, professor?
FARID: So let’s talk about all three of those. So the visible watermark, you’ve seen this, the Sora logo bouncing around. There’s a problem with that answer, which is that if you use the free version of the app, yes, there is a watermark, but if you use the paid version through their API, there is no watermark.
So that’s only half the story. That is, if you’re willing to pay OpenAI to create these videos, they will remove that watermark. That’s only a part of the answer. C2PA is the Coalition for Content Provenance and Authenticity. That’s what’s called metadata. You and I can’t see it when we look at the video.
It’s stored in the file, but when I, for example, load that video into a platform that we’ve created over at GetReal to analyze it for authenticity, I can read that credential. But the problem is the bad guy can remove the credential very easily. So the credential is extremely malleable. It can be ripped out very easily.
So look, did OpenAI do absolutely nothing to guard against harm? No. They did a few things. They added C2PA credentials, they added a watermark, and they added a few semantic guardrails. Did they do enough? No. And here’s the thing. Even if they did do enough, there’s another company, right? This technology is out there, and once everybody starts breaking through that barrier, everybody is rushing through it.
So I don’t think OpenAI is the worst in the space, and I also don’t think that they’re the best in the space.
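To illustrate Farid’s point about how malleable provenance credentials are, here is a minimal sketch, assuming ffmpeg is installed and using hypothetical file names, of how simply re-encoding a downloaded clip while dropping its container metadata leaves a video that carries no C2PA credential. It is shown only to explain why the absence of a credential proves nothing about a video’s origin.

```python
import subprocess

SRC = "downloaded_sora_clip.mp4"   # hypothetical file downloaded from the app
DST = "reposted_clip.mp4"

# Re-encode the video and discard container-level metadata. The pixels and
# audio survive; a provenance manifest riding along in the file generally
# does not survive this kind of rewrite.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", SRC,
        "-map_metadata", "-1",     # drop global metadata
        "-c:v", "libx264",         # full re-encode instead of a stream copy
        "-c:a", "aac",
        DST,
    ],
    check=True,
)
print(f"Wrote {DST}; a C2PA check on it will likely find no credential.")
```

That asymmetry is why detection tools of the kind Farid describes can treat a present, valid credential as a useful signal, but cannot treat a missing one as evidence either way.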
CHAKRABARTI: Okay. Hany, let’s go to the second half of OpenAI statement. So they go on to say:
Our usage policies prohibit deceptive or misleading use, and we take action when we detect violations. We’ll continue strengthening our safeguards and contributing to efforts that make provenance signals more consistent and effective across the industry.
CHAKRABARTI: Now this sounds, you just said it, this sounds exactly like the social media companies: ‘Oh, we’re taking steps. We’re going to keep stuff off our platforms. When they get reported, we will remove it.’ As you said earlier, have we learned nothing?
FARID: Yeah. Yeah. I think ChatGPT wrote that response, by the way, because honestly, that’s the same response I’ve been hearing for the last 20 years for everything coming out of Silicon Valley.
So it’s a press release and we should take it as what it is. But the fact is that whoever you spoke with, OpenAI, said it exactly right. The companies know what they’re doing. They absolutely know what they’re doing. But they’re under pressure. And it’s a competitive landscape and nobody wants to unilaterally disarm.
And so here we are. At the end of the day, this is what capitalism does without guardrails from the federal government, from international governments. The companies are going to do what the companies do. And I don’t see that changing right now. There’s no real leadership in the U.S. around these issues.
And I think this is, we are on a road where we know what the end game looks like, because we’re more than three quarters of the way down that road because of the last 20 years of social media and the internet and technology. And I don’t think we are being thoughtful enough about these technologies.
And I think that I’m glad you and I are having these conversations, but these conversations have to happen at the highest levels of the U.S. government, and they are not right now, and that is a problem. And then there’s like a secondary problem where these things never stay within the world of social media.
CHAKRABARTI: And we’re already seeing that. The SNAP video is one of them, but we’ve already seen fake videos of, say, riots happening outside ICE facilities. So the rest of us, even if we’re not making or using these videos, are also, in a sense, part of the problem. Because we’re ready to believe what our eyes are seeing.
FARID: Absolutely. And at a time when people still are spending way too much time on social media and gathering way too much of their news and information about the world from social media, it is absolutely rotting our brains. We have no idea what’s going on out there.
We don’t have a shared sense of reality. And that is nothing, by the way, compared to what the courts are having to deal with right now, introducing evidence into a court of law: images, audio recordings, video recordings, police body cams, dash cams, CCTV footage. How are the courts going to deal with really serious issues?
Not that the ones we’ve already enumerated aren’t serious, but it really is like there’s no sense of shared reality. And I think that’s a real threat to our society.
Part III
CHAKRABARTI: Let’s listen to OpenAI CEO Sam Altman one more time.
He was interviewed about Sora 2 in October by Rowan Cheung, who hosts a show on the state of AI, and Cheung told Altman he was worried about Sora 2 letting people make deepfakes of each other doing things they never did. And here is how Sam Altman responded.
ALTMAN: We know that in some number of months or years, it’s going to be widely available, and anybody will be able to make a video of you doing whatever they want with video that’s publicly available of you on the internet.
One of the ways we have found to help society with these transitions is to release it early with guardrails. So that society and the technology have time to co-evolve. I think we’ll learn to adapt and we’ll learn very quickly that there will be a lot of fake video of you on the internet generated with no watermark by some open source model and not that easy to trace.
And so getting society inoculated to that probably has some value.
CHAKRABARTI: Hany, let me just fact check something here. Altman there is either saying directly or at least intimating that they’re releasing models with guardrails, but from our conversation, that’s not what OpenAI is doing.
FARID: I would say there’s some guardrails. But they’re pretty weak. And they’re very easily broken. Even the semantic guardrails of sexual content and violent content, there’s plenty of webpages out there that will show you how to get around those guardrails. So I would say, did they try something? Are they sufficient?
Absolutely not.
CHAKRABARTI: Hearing Sam Altman say getting society inoculated to that probably has some value, it’s jarring.
FARID: Jarring is a good word. This is a very typical Silicon Valley thing. These guys live in a bubble. They’re not experiencing the real world. They haven’t talked to women who’ve been victims of non-consensual intimate imagery, or kids who have been extorted with images of themselves.
They’re not talking to the now hundreds of thousands, millions of people who have been scammed and defrauded to the tune of tens of billions of dollars. And the answer is, look, this is coming, so let’s just get out there. I wonder if somebody who had created a deadly biological weapon would be like, look, this is coming, guys, so let’s go ahead and just get it out there and get it over with.
It’s a really strange mindset, and really only one that I think can come from Silicon Valley. Because what you didn’t hear him say is: what is the upside? What’s the upside of this technology besides you becoming profitable? Talk to me about that upside, because I see a few things, but I also can go down a very long list of really negative things.
Sam Altman, I remember, maybe a few years ago, talked about how AI was gonna cure cancer and combat climate change, and what is he doing? AI slop and porn. This is where we are. Which of course is a repeat of the internet from the last 20 years: the lofty missions of AI have made way for, oh, we’ve got to be profitable, so let’s do what we know is profitable, which is social media and explicit material.
CHAKRABARTI: You said something really interesting about bioweapons, for example. Do we just let people develop them and say science is going to science? And I say this as a major fan of scientific endeavors. Science is going to science, someone else is going to do it, so we better do it, and off you go.
Because obviously there are people out there who are trying to develop these weapons, but the major difference is that there is vigorous global regulation, global agreements on this. In the United States, it would be an absolute crime.
So the difference being, to state, again, the obvious is that we as a society have put major guardrails on the development of that kind of technology. Why do you think we’re playing super-duper catch up with everything coming out of Silicon Valley?
FARID: Yeah, so first of all, you’re absolutely right.
We treat the physical world and the digital world very differently. You cannot release a product into the market without it being tested to oblivion. And why? Because there’s liability. You create a product that you knew or should have known is going to create damage, and it does. We’re going to sue you back to the dark ages.
We have product liability in this country and that’s what has kept us relatively safe when it comes to products and foods and medicines. But we treated the internet in the early days very differently. And you could make an argument in the early days of the internet, that okay, let’s get out of the way and see what this is. Because it is exciting and there is real opportunity there, but we quickly saw it go off the rails, but we never caught up from a regulatory perspective or a liability perspective.
These companies, up until fairly recently, have not been held accountable for the harms that have come from their products. There’s no product liability, so they don’t have to internalize it. So what do they do? They move fast and break things. They put things out there. They become highly profitable.
They become massive, they become powerful, and then they have PR firms to clean up the mess afterwards. We treat the digital and the physical world differently. But here’s the thing, there’s only one world. There is no more online world and offline world. This is all the same thing, and we simply have not caught up in the last 20 years.
And here’s my thing. We need to catch up because this is not going to end well for us.
CHAKRABARTI: So what’s the first step in trying to catch up?
FARID: Okay, so first, I don’t think this is going to come from a regulatory landscape. Certainly not here in the U.S. For the next three years this administration has made clear that they have no interest in regulating AI.
In fact, quite to the contrary, they are now even pushing back on states using their power to regulate AI technology. So I don’t see this coming from the U.S. government.
CHAKRABARTI: Can I just jump in there? Because you raised a really good point. The counter argument that I hear from AI supporters and also from the administration is what we’re really trying to do is prevent 50 different sets of regulations for Silicon Valley businesses to have to comply with.
FARID: Yeah. Yeah. That’s not a bad argument, except it’s not true, because the U.S. government is not proposing its own set of regulations. In fact, it has clearly said, we want no regulation of AI for the next 10 years.
And here’s the thing. We tried this 20 years ago. We tried, Hey, let’s not regulate this landscape. Let’s let technology be technology, and here we are. And now we’re using exactly the same policy, which by the way, is being whispered into the ears of this administration by the multi-billionaires that now want to become multi trillionaires.
Don’t buy the story. Regulation is not bad for innovation. It’s good. It is good to have guardrails on technology, because it makes it safer, makes it more ubiquitous. So this is the problem with the regulatory landscape. Now, the European Union, the UK, the Australians, the Canadians, other parts of the world are starting to think more seriously about this.
But let’s be honest, if the U.S. doesn’t, we’re lost in space. Now there’s a liability side. So we can deal with this in the courts. And we’re starting to see that, in fact, there’s a case right now, very public, making its way through the courts where a young man took his own life because of interactions with a chatbot. And you don’t get to hide from that. And so I think this will be really interesting to see what happens when the courts step in and say, look, you created a product that you knew or should have known was going to be dangerous. It led to death, harm, financial loss and we are now going to hold you accountable.
And when that happens, the companies will start internalizing that liability and will start to slow down: innovate, create safety by design, and then release. And I think that will be better for everybody. It will in fact be better for us as consumers. It’ll be better for societies and democracies, and it will be better for the companies.
This is the thing they haven’t got through their thick skulls yet, which is everybody is exhausted and fed up with Silicon Valley and with these tech CEOs and that is not good for them, and it’s not good for their industry.
CHAKRABARTI: As a now firmly middle-aged person, with all of these technologies coming so fast, cognitively I just have a difficult time processing all of it. And I literally find myself using my phone way less; that’s how I’m coping with it. But for the billions of kids out there, people much younger than me, this is the world that they were born into, right?
So are there any things that parents can do? Or, you know, people who even want to somehow find technological assistance to help them figure out what’s real and what’s not.
FARID: Yeah, it’s a great question. The short answer is not really. The short answer is these devices are in our hands; there’s no taking them away.
I’ll put a caveat to that in a second. The reality is that if you are spending a lot of time on social media, it is going to be bad for you. It’s bad for your mental health, it’s bad for your physical health. And I swear to God, it drops your IQ by 30 points because you’re just being fed nonsense all day long.
In Australia, as of last week, kids under the age of 16 are no longer allowed on social media. I think that age should be 30, by the way, but I’ll concede the age 16 for now. I think this is a really smart move. I think we have to start taking very seriously what we are doing to these young people, putting these devices in front of their faces and letting them have unfettered access to all kinds of nonsense.
So I really am a strong supporter of this Australian regulation. I think there’s going to be a fast follow in the EU, in Canada, in the UK, and in other places. And then we in the U.S. have to decide. And what’s really nice about the law, by the way, if I may, is that it gives parents cover.
The reality is here in this country, if every single kid in your class has a phone, is on social media, it is excruciatingly difficult to say no to your kids. But if the federal government says you are not allowed, the parents have cover. And that’s what this law is giving.
CHAKRABARTI: I wonder if we need to also, in this conversation, I know you’ve thought about this, but we need to talk more broadly about the damaging ripple effects.
And again, I’m thinking of the people who like don’t even use social media, may not even have a smartphone that can put video in front of them. You had said that for example, scams on the elderly have truly skyrocketed.
FARID: Yeah. Yeah. So let me give you a shocking number, speaking of moving fast and breaking things and the moral corruption of Silicon Valley. At Facebook, at Meta, 10% of their profits, so it’s about $10 billion a year, is from scams and frauds. 10%, forever. That’s insane. And they know this. They know that people around the world are being scammed and defrauded to the tune of billions of dollars, and they’re profiting from it.
That’s insane. So here’s the thing with all these products: you know that cyber criminals are using them for all forms of fraud. Making phone calls to your grandparents and your parents saying, oh, I’m hurt, send me money. Large-scale fraud. We see it in the enterprise, Fortune 500 companies losing tens of millions of dollars, and the companies are profiting and laughing all the way to the bank.
How we put up with this as society is an enduring mystery to me, honestly.
CHAKRABARTI: And just a side note for On Point listeners, we actually did a whole show about scammy ads on social media platforms, and it’s a mind-blowing hour. So go check that out in our podcast feed.
Hany Farid, I want to just make something clear in the way that folks like Sam Altman talk about these new products, these new powers of AI. As you pointed out, they’re not talking about making their products better or safer, every time they discuss it, it’s no, actually what we want to do is desensitize the public.
Is that right?
FARID: Yeah, and in some ways, I appreciate the honesty. Because in the early days of the internet with social media, everybody’s, Oh, we’re gonna connect the world. This is gonna be better. And they knew it was a lie. At least Sam Altman is being honest about it.
I actually respect that in some ways. He’s saying, this is pretty weird and it’s gonna be bad, but we’re gonna get pretty rich along the way. That’s the end of that sentence. So at least we’re being a little bit more honest about it. And look, it’s super easy to pick on Sam Altman and OpenAI, but he is not alone.
Every major tech company in this space is doing exactly the same thing. So we should look more broadly across the industry. But yes, this is going to be weird and bad, and everybody buckle up, is essentially what he’s saying.
CHAKRABARTI: Because I was thinking that, alright, in this effort to desensitize the world to fakery, what they’re also doing is desensitizing us or making us skeptical of reality.
FARID: Everything. Everything. Yeah. So this is the thing everybody has to understand: the ability to create a fake video with your likeness, or my likeness, or the likeness of anybody listening, can be bad and can be harmful.
But then what’s going to happen when real images, real videos, come out in a court of law, in a political environment, after a natural disaster, after a horrific mass shooting? Everybody’s gonna deny it, and you’re already seeing it. Over the weekend in Australia, people are saying, oh, that video is fake, that image is fake.
That’s fake. Why? Because this story doesn’t fit my narrative. And so suddenly everything’s in question. Things that I don’t like that are real, fake. The things that I like that are fake, real. And now it’s a mess. And of course, everything is being super amplified on social media and as you mentioned earlier on mainstream media outlets too.
And so the poisoning of the information ecosystem means we don’t live in a shared reality. And I don’t know how you have a stable democracy when you don’t have a shared reality, right? We disagree on lots of things. It’s okay, but we have to agree that 2 + 2 is 4. We have to start there.
CHAKRABARTI: This is a thing, right?
We’ve been living in not-shared realities, or different realities, for some time because of social media, but there was still that narrow window where we could say, alright, there was a picture, or there was a video. I’m thinking of a benign example, like President Trump saying he had the largest inauguration crowd in history, and then media companies or journalism organizations put out actual video showing, no, that inauguration crowd was not that large.
But now, as Aaron Rodericks, Bluesky’s head of trust and safety, has been widely quoted as saying, the public is just not ready for a collapse, that’s the word he used, a collapse between reality and fakery. That’s drastic.
We only have about a minute left. Hany, at the tail of this conversation, I know that a lot of people, myself included, are feeling like, okay, but like right now, what can I do to protect myself from this stuff? From like even having to absorb it.
FARID: Yeah, so I get this question, by the way, a lot and a lot more recently from my students. I was teaching this last semester, and a lot of students have been coming to me saying, what do I do about this AI slop on my feed? And my answer is pretty similar.
Start deleting the apps. I know it’s hard to delete social media because people feel connected, but at least delete it off your phone.
Get it off your phone. And I was just talking to a student the other day who followed my advice, and he said, this is great. He said, I use it so much less. When I need to use it, I use it, but I don’t mindlessly use these things. So just take ’em off your phone, and if you need to, leave them on your computer.
You will notice right away how much less you use them. But we also have to return to trusted sources. We have to understand that we are being manipulated to make a relatively small number of people even wealthier than they already are, and it is not good for our mental health.
It’s not good for our physical health. And like anything else in our life that is negative and addictive in nature, we need to be very conscious of how to separate from it. And there are really simple, easy steps: put rate limits on the apps on your phone, delete the apps, and start to move toward a healthier online ecosystem.
Our sense of reality now depends on it.
This article was originally published on WBUR.org.