While it is no news that the US military uses artificial intelligence (AI) to upgrade its arsenal, a new report sheds light on how it is using OpenAI’s flagship GPT-4 model to run war simulations through video games.

However, it is nothing like science fiction movies where AI-powered machines take over warfare. All of this is reportedly happening within the safe confines of a military video game. The game in question is none other than StarCraft II, the widely known real-time strategy title from Activision Blizzard.

Researchers at the US Army Research Laboratory are using StarCraft II together with OpenAI’s powerful GPT-4 AI model to see whether AI can improve battle planning.

The way the two are being used is quite simple. The experiment involved a small number of military units with full visibility into the battlefield dynamics. The researchers deployed custom GPT-4-powered chatbots onto the battlefield to serve as key advisers to a digital military commander. These chatbots were based on GPT-4 Turbo and GPT-4 Vision.

The assistants swiftly presented a variety of strategies. For instance, they gave the player-commander the option of capturing a certain bridge.

Nonetheless, the chatbots were not without their shortcomings. They suffered more casualties than other AI participants, though they were still able to fulfill their mission objectives.

In conclusion, GPT-4 may be able to save time on some low-priority tasks, but it is far from being able to advise people on how to win a video game, let alone on real-life battlefields.

This isn’t the first time the US military has worked with OpenAI, either. Last month, the prominent AI startup partnered with the US Army to help develop cybersecurity tools. The venture involves working closely with the Defense Advanced Research Projects Agency (DARPA) on its AI Cyber Challenge, which was unveiled last year.

Beyond cybersecurity tools, the US government is also in talks with OpenAI about how its AI technology could be used to address the issue of veteran suicide. This marked a significant shift for OpenAI, as it required reversing its earlier policy of not engaging with military entities.