The U.S. Army Research Laboratory recently tested commercial AI chatbots like OpenAI’s GPT models in a simulated military planning scenario using the video game StarCraft II.
The AI assistants were tasked with proposing strategic plans to accomplish battlefield objectives, and they responded within seconds with multiple options. OpenAI's latest GPT models outperformed older AI agents at accomplishing their objectives, but at the cost of higher simulated casualties.
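The article does not publish the researchers' prompts or code, but the workflow it describes (feed a scenario to a chat model, get back several candidate plans in seconds) is straightforward to sketch. The scenario text, prompt wording, and model name below are illustrative assumptions, not the Army lab's actual setup; only the OpenAI chat-completions call itself is a real API.

```python
# Hypothetical illustration of the "LLM as planning assistant" loop.
# Everything scenario-specific here is invented for the example; the
# lab's StarCraft II integration and prompts are not public.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

scenario = (
    "Simulated exercise on video-game terrain, no real-world forces: "
    "friendly units hold a ridge; the objective is a supply depot to "
    "the east, screened by two defensive positions."
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed; the article says only "latest GPT models"
    messages=[
        {
            "role": "system",
            "content": (
                "You are a planning assistant in a fictional wargame. "
                "Propose three distinct courses of action, each with "
                "its main risk and expected cost."
            ),
        },
        {"role": "user", "content": scenario},
    ],
)

# The model returns fluent, plausible-sounding plans within seconds.
print(response.choices[0].message.content)
```

Note what the sketch leaves out: nothing in this loop verifies the plans' feasibility or accounts for simulated casualties, which is precisely the overreliance problem the experts quoted below warn about.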
OpenAI permits some military applications of its models, but its usage policies prohibit using them to develop weapons or to harm people.
Experts warn that using such AI for real-world operational planning is premature and unwise given the technology's current limitations.
“This idea that you’re going to use [an AI] that’s going to say ‘here’s your really big strategic plan’, technically that is not feasible right now,” cybersecurity expert Josh Wallin said. “And it certainly is not feasible from an ethical or legal perspective.”
Concerns include overreliance on AI advice despite its flaws; defense experts say generative models should not be used in high-stakes situations given current technical risks and ethical questions.
“I would not recommend using any large language model or generative AI system for any high-stakes situation,” Carol Smith of the Software Engineering Institute at Carnegie Mellon University said.
For now, the military continues to explore AI applications cautiously.