Mistral’s Large 2 is its answer to Meta and OpenAI’s latest models

For frontier AI models, when it rains, it pours. Mistral released a fresh new flagship model on Wednesday, Large 2, which it claims is on par with the latest cutting-edge models from OpenAI and Meta in code generation, mathematics, and reasoning.

The release of Mistral Large 2 comes just one day after Meta dropped its latest and greatest open source model, Llama 3.1 405B. Mistral says Large 2 raises the bar for performance relative to cost among open models, and backs that up with a handful of benchmarks.

Large 2 appears to outpace Llama 3.1 405B on code generation and math performance, and does so with less than a third as many parameters: 123 billion, to be precise.

In a press release, Mistral says one of its key focus areas during training was to minimize the model’s hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible.

The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation. Though Mistral is one of the newer entrants in the artificial intelligence space, it’s quickly shipping AI models on or near the cutting edge.

However, it’s important to note that Mistral’s models, like most others, are not open source in the traditional sense: any commercial application of the model requires a paid license. And while it’s more open than, say, GPT-4o, few organizations have the expertise and infrastructure to run such a large model. (That goes double for Llama 3.1’s 405 billion parameters, of course.)

One thing missing from Mistral Large 2, and also absent from Meta’s Llama 3.1 release a day earlier, is multimodal capability. OpenAI remains far ahead of the competition on multimodal AI systems, which can process images and text simultaneously, a feature a growing number of startups are looking to build on.

The model has a 128,000-token context window, which means Large 2 can take in a lot of data in a single prompt (128,000 tokens is roughly the length of a 300-page book). Mistral’s new model also includes improved multilingual support: Large 2 understands English, French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, along with 80 coding languages. Notably, Mistral claims Large 2 also produces more concise responses than leading AI models, which have a tendency to blabber on.

Mistral Large 2 is available to use on Google Vertex AI, Amazon Bedrock, Azure AI Studio, and IBM watsonx.ai. You can also use the new model on Mistral’s la Plateforme under the name “mistral-large-2407”, and test it out for free on the startup’s ChatGPT competitor, le Chat.
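
For developers who would rather call the model directly, a request along the following lines is a reasonable starting point. This is a minimal sketch, assuming Mistral's OpenAI-style chat completions endpoint at api.mistral.ai and an API key supplied via a MISTRAL_API_KEY environment variable; the endpoint URL and response shape should be checked against Mistral's current documentation.

    # Minimal sketch: querying mistral-large-2407 over Mistral's REST API.
    # The endpoint and payload shape assume Mistral's OpenAI-style chat
    # completions API; verify against the current docs before relying on it.
    import os
    import requests

    API_URL = "https://api.mistral.ai/v1/chat/completions"  # assumed endpoint
    API_KEY = os.environ["MISTRAL_API_KEY"]  # placeholder for your own key

    payload = {
        "model": "mistral-large-2407",
        "messages": [
            {"role": "user",
             "content": "Summarize the Mistral Large 2 release in two sentences."}
        ],
    }

    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=60,
    )
    resp.raise_for_status()
    # Assumes an OpenAI-style response body with a "choices" list.
    print(resp.json()["choices"][0]["message"]["content"])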


