Democrat Senate Intel Chair: Disinformation Isn’t Protected Speech, I Want More Government Oversight On Social Media


Senator Mark Warner discussed concerns about misinformation and election interference, emphasizing the potential threats from Russia and Iran.

He highlighted the use of deepfakes and AI tools for disinformation and urged the Biden administration to take a more proactive stance in regulating social media companies.

Warner argued that misinformation and deepfakes should not be protected by the First Amendment and should be subject to regulation, similar to rules already applied in other areas such as securities trading.

“I wish I could say I was confident, but we may be headed for the perfect storm in terms of election interference, and let me give you, quickly, a couple of reasons: One, Russia, with its war in Ukraine, Iran with its challenges in the Middle East, they have a higher interest than ever in interfering in American elections,” Warner said.

“Two, we know in America you’ve got folks who are election deniers still trying to relitigate 2020. Third thing is, there is a court case that came out of Texas that basically is constraining the government from even having voluntary conversations with the social media firms.”

He expressed worries about the potential impact on the 2024 elections and the need for increased public focus on election security.

“Which of the threats that you mentioned is the one that concerns you the most?” A Martínez asked.

“Well, you take an aggressive Russia, sometimes not just spreading disinformation, but amplifying, many times, American disinformation. You think about that being used sometimes with AI tools that can do this disinformation at scale and speed that’s unprecedented. And we’re seeing this with public figures already. There [have] been some deepfakes using President Trump’s voice and image. There [have] been some deepfakes recently — inappropriate photos that were not real of Taylor Swift. These tools are out there,” Warner said.

“You mentioned how you want the Biden administration to be more aggressive. What does that look like?” Martínez asked.

“Well, I think it looks like taking advantage of the exemption that was in that case to say, no, we can have regular communications with the social media companies, because my fear is, if they wait until the Supreme Court, a lot of mischief could be done. I also think we got through 2020 because we were relatively well-protected,” Warner said.

“I just worry the public focus isn’t as high right now, and the fact that we got through 2022. But 2024, with these new AI tools and the fact that the war in Ukraine elevates Russia’s interest in determining or trying to drive the outcome of the elections in the United States, this is a recipe potentially for a real problem.”

“But having the FBI or other government agencies be in contact with social media companies, aren’t we treading close to a First Amendment violation there?” Martínez pressed.

“I think when you’re talking about true misinformation or disinformation, or when you’re talking about utilization of deepfakes where an image of A Martínez or Mark Warner is put up and it’s not us, but it looks like us and sounds like us, I don’t think those are First Amendment protections,” Warner said.

“I think those are, frankly, just malicious — the kind of manipulation that we’ve already banned from things like public trading in the stock market. The same rules ought to be applicable.”
