Paris-based AI startup Mistral, known for its open-source AI models, has confirmed it is working on a new model that will rival, or may even exceed, OpenAI’s flagship GPT-4.

The news comes shortly after the model in question was leaked on HuggingFace, the biggest open-source AI model and code-sharing platform. Mistral’s co-founder and CEO Arthur Mensch took to X (formerly Twitter) soon afterward to confirm that the leak was real and pointed to an actual Mistral model.

He clarified that the model was leaked by an “over-enthusiastic” employee and was, in fact, a quantized and watermarked version of an older model. That older model had been retrained from Llama 2, with its pretraining finishing the day Mistral 7B was released.

Last but not least, he added that the model in question has made good progress since then, telling followers to “stay tuned” — meaning more news can be expected soon.

Could Beat GPT-4

The original leak, as mentioned earlier, was shared on HuggingFace last Sunday, January 28. It mentioned a seemingly new open-source large language model (LLM) labeled “miqu-1-70b.” The HuggingFace entry is still live to this day, and it noted that the leaked model’s prompt format was the same as that of Mistral, widely regarded as the leading open-source AI model maker.

Users shared their findings on X, highlighting the impressive capabilities of the model. Its performance on widely recognized LLM benchmarks, notably EQ-Bench, was reported to rival that of OpenAI’s GPT-4, the current frontrunner.

Machine learning (ML) researchers took notice as well, with some calling it one of the best, if not the best, open-source AI models available at the moment.

Maxime Labonne, an ML scientist at JPMorgan Chase, one of the world’s largest banking and financial companies, said: “Does ‘miqu’ stand for MIstral QUantized? We don’t know for sure, but this quickly became one of, if not the best open-source LLM. Thanks to @152334H, we also now have a good unquantized version of miqu. The investigation continues. Meanwhile, we might see fine-tuned versions of miqu outperforming GPT-4 pretty soon.”

Via: VentureBeat