Mistral AI Launches Beta Access to API Endpoints, Revolutionizing AI Technology
Introduction:
Paris-based startup unveils beta platform for API endpoints, offering three chat endpoints tailored to developer needs.
Mistral AI, a Paris-based startup valued at $2 billion, has launched its beta platform for API endpoints, marking a significant milestone in the company’s journey. With its commitment to open-source AI, Mistral AI now offers early access to its first platform services, including three chat endpoints — mistral-tiny, mistral-small, and mistral-medium — each designed to balance performance and cost.
Mixtral 8x7B, the model behind the mistral-small endpoint, has the following capabilities:
- It gracefully handles a context of 32k tokens.
- It handles English, French, Italian, German and Spanish.
- It shows strong performance in code generation.
- It can be fine-tuned into an instruction-following model that achieves a score of 8.3 on MT-Bench.
Hallucination and biases: to identify possible flaws to be corrected by fine-tuning and preference modelling, Mistral measured the base model’s performance on TruthfulQA, BBQ, and BOLD.
Compared to Llama 2, Mixtral is more truthful (73.9% vs 50.2% on the TruthfulQA benchmark) and presents less bias on the BBQ benchmark. Overall, Mixtral displays more positive sentiments than Llama 2 on BOLD, with similar variances within each dimension.
Chat Endpoints Explained:
The beta offers three chat endpoints, each with unique features and benefits:
- Mistral-tiny: The most cost-effective option, utilizing the Mistral 7B Instruct v0.2 model. It scores 7.6 on MT-Bench and is proficient in English.
- Mistral-small: Operating with the Mixtral 8x7B model, it supports multiple languages and coding, achieving an 8.3 on MT-Bench.
- Mistral-medium: The highest-quality endpoint, using a prototype model with top-tier performance, mastering multiple languages and scoring 8.6 on MT-Bench.
Ease of Use:
The platform is designed for ease of use, following industry-standard chat interface specifications. Python and JavaScript client libraries are available, enabling developers to integrate these powerful AI tools into their applications seamlessly. Additionally, the APIs allow for a system prompt feature, ensuring a higher level of moderation for outputs, which is crucial in sensitive applications.
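Because the platform follows the industry-standard chat interface described above, a request can be assembled with nothing but the standard library. The sketch below builds a chat request body with a moderating system prompt; the endpoint URL and exact field names are assumptions based on the common chat-completions format, so check Mistral’s API reference before relying on them.

```python
import json
import os
import urllib.request

# Assumed beta chat endpoint, following the standard chat-completions layout.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(user_prompt: str, model: str = "mistral-tiny",
                       system_prompt: str = "You are a helpful assistant.") -> dict:
    """Assemble a chat request body; the system prompt moderates outputs."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

def send_chat_request(body: dict) -> dict:
    """POST the request; requires MISTRAL_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (but do not send) a request for the cheapest endpoint.
request_body = build_chat_request("Summarize the Mixtral 8x7B release in one sentence.")
```

In practice the official Python or JavaScript client libraries mentioned above would wrap this plumbing for you; the point here is only the shape of the payload.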
Registration and Pricing:
The platform is now open for registration, with Mistral inviting users to experience its beta version as the company ramps up its capabilities. Users can expect continuous improvements as it moves towards a fully self-served platform. The pricing structure is as follows:
- mistral-tiny: 0.14€ per 1 million input tokens, 0.42€ per 1 million output tokens.
- mistral-small: 0.6€ per 1 million input tokens, 1.8€ per 1 million output tokens.
- mistral-medium: 2.5€ per 1 million input tokens, 7.5€ per 1 million output tokens.
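The per-million-token rates above translate directly into a cost estimate for any workload. A minimal sketch (the price table simply restates the published figures; the helper function is illustrative):

```python
# Published beta prices in EUR per 1M tokens: (input, output).
PRICES_EUR_PER_1M = {
    "mistral-tiny": (0.14, 0.42),
    "mistral-small": (0.6, 1.8),
    "mistral-medium": (2.5, 7.5),
}

def estimate_cost_eur(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost in euros of processing the given token counts."""
    in_price, out_price = PRICES_EUR_PER_1M[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a 10,000-token prompt producing 2,000 tokens on mistral-small
# costs just under one euro cent.
cost = estimate_cost_eur("mistral-small", 10_000, 2_000)
```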
Comparison with GPT-3.5:
By comparison, the GPT-3.5-turbo-1106 model is priced at $1.00 per 1 million input tokens and $2.00 per 1 million output tokens. Mistral-medium’s higher price can be justified by its benchmark results together with its 32,000-token context window.
Join the Beta and Scale Up Your AI:
Mistral welcomes you to register for the beta and experience the power of its platform firsthand. Their dedicated team will help you assess your needs and facilitate access. As they progress towards a self-served platform, expect continuous improvements!
Conclusion:
Mistral AI’s beta launch of its API endpoints marks a significant milestone in the company’s journey. With its commitment to open-source AI and tailored chat endpoints, Mistral AI is revolutionizing the AI technology landscape. As the platform continues to evolve, developers can expect seamless integration and top-tier performance from Mistral’s powerful AI tools.
- By Deepak Chawla, Head of Data Science, CoffeeBeans.