Mistral AI launches Medium 3.5 with 128B parameters, 256k context, and integrated agentic capabilities
Tags: AI · Models · Open source

Mistral AI has released Medium 3.5, a 128-billion-parameter dense model with a 256k-token context window, available as open weights under a modified MIT license. The model scores 77.6% on SWE-Bench Verified and 91.4% on the τ³-Telecom benchmark, and it can be self-hosted on as few as four GPUs, significantly lowering the hardware barrier for agentic AI.

The release merges Mistral's previously separate model lines (Medium 3.1 for chat, Magistral for reasoning, and Devstral 2 for coding) into a single unified model.

Alongside the model, Mistral introduced Vibe remote agents, which enable asynchronous cloud-based coding sessions launched from the CLI or Le Chat, and a new Work mode in Le Chat for agentic multi-step task execution across email, calendar, and external tools.

API pricing is set at $1.50 per million input tokens and $7.50 per million output tokens. Mistral reports the model is more cost-efficient than GPT-5.4 mini on 5 of 7 benchmarks.
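To put the quoted rates in perspective, here is a minimal sketch of a per-request cost estimate at those prices. The function name and the example token counts are illustrative assumptions, not part of any Mistral SDK; only the two rates come from the announcement.

```python
# Rates quoted in the announcement: $1.5 per million input tokens,
# $7.5 per million output tokens.
INPUT_RATE = 1.5 / 1_000_000   # USD per input token
OUTPUT_RATE = 7.5 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a single API request (hypothetical helper)."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a long agentic session consuming 200k input and 20k output tokens.
print(round(estimate_cost(200_000, 20_000), 2))  # 0.45
```

At these rates a request that fills most of the 256k context still costs well under a dollar, which is where the cost-efficiency claim against GPT-5.4 mini would be measured.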