Elon Musk has been sharply critical of OpenAI in recent months, airing his criticism publicly on X and in various interviews. The co-founder and former board member of the AI lab, who doesn’t shy away from taking credit for starting OpenAI, has expressed frustration primarily about three things: the company’s shift from non-profit and open source to for-profit and closed source, concerns over AI safety, and the partnership with Microsoft.

Addressing the transition from non-profit to for-profit, he likened it to founding an organization to save the Amazon rainforest, only for it to become a lumber company that exploits the forest for profit. He questioned the legality of such a shift, suggesting that if it’s permissible to switch from non-profit to for-profit while benefiting financially, the practice could become widespread.

Additionally, he expressed concerns about control over potential digital superintelligence, particularly in relation to OpenAI’s association with Microsoft. He worried that Microsoft’s investment might grant them greater influence, as they hold rights to software, model weights, and more.

Sam Altman, the CEO of OpenAI, apparently doesn’t like to talk about it but has nonetheless addressed some of Musk’s criticism in public appearances. In a recent interview with Bloomberg, he said that Musk’s criticism stems from a genuine concern for AI safety: “I think he really cares about AI safety a lot, and I think that is where it is coming from. A good place. We just have a difference of opinion on some parts but we both care about that. He wants to make sure that we the world have maximal chance.”

He also touched on Musk’s apprehensions about OpenAI’s relationship with Microsoft. He explained that when Musk speaks of Microsoft having more control, he is likely referring to the contractual ability to restrict OpenAI’s access to the data center, rather than financial control. He further clarified that the issue concerns data center operation, not financial resources, as OpenAI maintains its own funding.

The OpenAI CEO emphasized that, ultimately, questions about Musk’s perspective should be directed to Musk himself, as he would be able to give a more complete answer.

This is not the first time Altman has addressed the criticism. Responding to Musk’s warnings about how dangerous AI could be for the world, he said in a previous public interaction, “I think he’s totally wrong about this stuff. He can sort of say whatever he wants, but I’m like proud of what we’re doing, and I think we’re going to make a positive contribution to the world, and I try to stay above all that.”

In June of this year, the Tesla and SpaceX CEO launched a new AI startup, xAI, to “build a good AGI with the overarching purpose of just trying to understand the universe.” Sharing his vision for the company in a public conversation, he stated, “I think the safest way to build an AI is to make one that is curious and truth-speaking. My theory behind a maximally curious, maximally truthful AI as being the safest approach is, I think to a superintelligence, humanity is much more interesting than not humanity.”