Microsoft detected state-backed adversaries from Iran, North Korea, Russia, and China attempting to use generative AI models from Microsoft and OpenAI for offensive cyber and influence operations. The techniques observed were early-stage but represent an emerging threat.
“Of course bad actors are using large-language models — that decision was made when Pandora’s Box was opened,” Tenable CEO Amit Yoran said.
Examples included North Korean and Iranian groups using models to research targets and generate phishing emails, while Russian and Chinese actors explored satellite technologies and geopolitical topics.
OpenAI said the capabilities were limited, but experts warned that large language models would become a powerful cyber weapon.
“Why not create more secure black-box LLM foundation models instead of selling defensive tools for a problem they are helping to create?” Berryville Institute of Machine Learning co-founder Gary McGraw said.
Critics argued that Microsoft was selling defensive tools for a problem it had helped create by widely releasing models without sufficient security, and that the focus should instead be on building more secure foundation models from the start.