Mistral, which has styled itself as the “OpenAI of Europe,” is seeking $300 million in a new funding round from investors, reports The Information. The AI startup was founded by researchers who previously worked at Meta Platforms and Alphabet, the parent companies of Facebook and Google.

This news comes only four months after Mistral raised $113 million in a seed round courtesy of Lightspeed Venture Partners, according to two insiders familiar with the matter.

Another insider reports that the new round would value the Paris-based startup at over $1 billion before the investment. For now, it remains unclear which VC firms Mistral is approaching for the funding.

Back in June, Mistral AI picked up a $113 million seed round despite being barely a month old. That funding was only the first step toward competing with OpenAI in building, training, and applying large language models and generative AI.

The financing round was spearheaded by Lightspeed Venture Partners, with participation from several notable investors across regions: Redpoint and Index Ventures; Xavier Niel, JCDecaux Holding, Rodolphe Saadé, and Motier Ventures in France; La Famiglia and Headline in Germany; Exor Ventures in Italy; Sofina in Belgium; and First Minute Capital and LocalGlobe in the UK.

Fast forward to September 2023, when Mistral made its first major announcement: releasing its large language model, Mistral 7B, free for everyone. The model is completely open source, made available under the permissive Apache 2.0 license, which allows free use with essentially no restrictions. In practice, that means anyone can deploy it, from an amateur enthusiast to a massive multinational corporation or even a government entity like the Pentagon.
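To make the “anyone can use it” point concrete, here is a minimal sketch of loading the open-weight model with the Hugging Face transformers library. The repository id "mistralai/Mistral-7B-v0.1" and the use of transformers are assumptions for illustration; Mistral also distributes the raw weights directly from its own site.

```python
# Minimal sketch: loading the open-weight Mistral 7B model via Hugging Face
# transformers. The repo id below is an assumed identifier for illustration;
# the Apache 2.0 license is what permits this kind of unrestricted local use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generate a short completion to confirm the model runs locally.
inputs = tokenizer("Mistral AI is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```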

During the model’s release, the Mistral team wrote in a blog post: “Our ambition is to become the leading supporter of the open generative AI community and bring open models to state-of-the-art performance. Mistral 7B’s performance demonstrates what small models can do with enough conviction. This is the result of three months of intense work, in which we assembled the Mistral AI team, rebuilt a top-performance MLops stack, and designed a most sophisticated data processing pipeline, from scratch.”