The CEO of OpenAI, Sam Altman, has warned that AI systems could possess “superhuman persuasion” abilities before achieving superhuman general intelligence.
He believes this could lead to “strange outcomes.”
“I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence,” Altman, whose company is behind the popular ChatGPT platform, said on social media earlier this month.
He then warned that these capabilities may “lead to some very strange outcomes.”
While some experts question the legitimacy of these fears, others argue that AI is already being used to identify persuasive content in digital advertising.
“There is a threat for persuasive AI, but not how people think. AI will not uncover some subliminal coded message to turn people into mindless zombies,” said Christopher Alexander, chief analytics officer of Pioneer Development Group.
“Machine learning and pattern recognition will mean that an AI will get very good at identifying what persuasive content works, in what frequency and at what time. This is already happening with digital advertising. Newer, more sophisticated AI will get better at it.”
He also noted that social media already has the power to influence people.

“Social media already does that and is difficult to outperform,” Alexander said.
“It would be no different than a human who is rhetorically gifted, with the exception that some people may find the implicit nature of technology more trustworthy,” stated Aiden Buzzetti, president of the Bull Moose Project.
“With that said, there’s nothing to it right now, and any fears over this are misplaced. The real question would be, when will AI match or surpass human intelligence accurately? There’s nothing superhuman about it.”
Current platforms like ChatGPT still have limitations in providing accurate information.
However, there are pressing concerns that AI may be able to persuade people and perpetrate fraud as it learns to simulate human behavior.
Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), believes that with “some AI technology,” “we are already at that point” of persuasion.
“If a bad actor coded an AI algorithm to misuse data or make incorrect conclusions, I think it could persuade that it was correct,” Siegel said. “But the solution is the same as how to treat experts — respect their knowledge but just don’t take it as a given.”
Siegel pointed out that the same could be true of humans who are skilled enough to “convince people of things that later turn out to be untrue.”
“It is literally the same problem,” Siegel said. “It requires the same solution, which is to question and don’t accept answers as a given from human or machine experts without pressure testing them.”
“It stands to reason that as AI learns how to simulate human behavior, it also learns how to dupe susceptible people and perpetrate fraud,” said Jon Schweppe, policy director of the American Principles Project. “Give it a few years, and we might have AI androids running for Congress. They’ll fit in perfectly in Washington.”
