OpenAI has recently embarked on a series of collaborations with the Pentagon, marking a significant shift from its previous policy of not engaging with military entities.
Among these projects is the development of cybersecurity solutions, a venture that involves working closely with DARPA on its AI Cyber Challenge, which was unveiled last year.
In an interview at Bloomberg House during the World Economic Forum in Davos, Anna Makanju, OpenAI’s Vice President of Global Affairs, disclosed that the company, known for creating ChatGPT, is also in preliminary discussions with the US government. These talks focus on exploring how OpenAI’s technology could potentially aid in addressing the issue of veteran suicide.
In a notable policy shift, the company has revised its terms of service, eliminating the clause that previously prohibited the use of its AI technology in “military and warfare” applications. According to Makanju, this change reflects an effort to adapt to the evolving applications of ChatGPT and other tools developed by the company.
She said: “Because we previously had what was essentially a blanket prohibition on the military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world.” However, OpenAI maintains its ban on using the company’s technology to create weapons, destroy property, or harm people.
As OpenAI’s primary investor, Microsoft has a history of supplying various software solutions to the US armed forces and other governmental sectors. In a collaborative effort, OpenAI, along with Anthropic, Google, and Microsoft, is contributing expertise to the US Defense Advanced Research Projects Agency’s AI Cyber Challenge. This initiative focuses on identifying software capable of autonomously repairing security flaws and safeguarding infrastructure against cyber threats.
OpenAI has also announced that it is stepping up its work on election security, dedicating more resources to ensuring that its generative AI technologies are not exploited to spread political misinformation.
The company’s CEO Sam Altman said: “Elections are a huge deal. I think it’s good that we have a lot of anxiety.”