OpenAI recently revised its usage policy, removing the prohibition on using its technology for “military and warfare.”
The updated policy now restricts only uses that would “bring harm to others.”
The change opens the door for OpenAI to collaborate with the military, a prospect that has caused internal division.
While some emphasize the potential cost savings and improved effectiveness for the military, others express concerns about the dangers of AI and the need for safeguards.
Debate has also arisen over the need for transparency in AI development for defense purposes.
The move has drawn skepticism about OpenAI’s ethics and renewed calls for explainable AI models in military applications.
“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property,” an OpenAI spokesperson stated.
“There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.”
“The losing faction is concerned about AI becoming too powerful or uncontrollable and probably misunderstands how OpenAI might support the military,” Pioneer Development Group chief analytics officer Christopher Alexander said.
“The most likely use of OpenAI is for routine administrative and logistics work, which represents a massive cost savings to the taxpayer. I am glad to see OpenAI’s current leadership understands that improvements to DOD capabilities lead to enhanced effectiveness, which translates to fewer lives lost on the battlefield.”
“This is probably a confluence of events. First, the disempowerment of the nonprofit board probably tipped the balance toward abandoning this policy. Second, the military will have applications that save lives as well as might take lives, and not allowing those uses is hard to justify. And lastly, given the advances in AI with our enemies, I’m sure the U.S. government has asked the model providers to change those policies. We can’t have our enemies using the technology and the U.S. not,” Phil Siegel said.
“OpenAI was likely always going to collaborate with the military. AI is the new frontier and is simply too important of a technological development to not use in defense,” Samuel Mangold-Lenett stated. “The federal government has made clear its intention to use it for this purpose. CEO Sam Altman has expressed concern over the threats AI poses to humanity; our adversaries, namely China, fully intend to use AI in future military endeavors that will likely involve the U.S.”
“We not only have to worry about adversaries’ AI capabilities, but we also have to worry about the runaway AI problem,” American Principles Project Director Jon Schweppe said. “We should be concerned that as AI learns to become a killing machine and more advanced in strategic warfare, that we have safeguards in place to prevent it from being used against domestic assets; or even in the nightmare runaway AI scenario, turning against its operator and engaging the operator as an adversary.”
“Companies like OpenAI are not moral guardians, and their pretty packaging of ethics is but a facade to appease critics,” Heritage Foundation’s Tech Policy Center Research Associate Jake Denton said. “While adopting advanced AI systems and tools in our military is a natural evolution, OpenAI’s opaque black-box models should give pause. While the company may be eager to profit from future defense contracts, until their models are explainable, their inscrutable design should be disqualifying.”
“As our government explores AI applications for defense, we must demand transparency,” Denton said. “Opaque, unexplainable systems have no place in matters of national security.”