Microsoft AI has unveiled two purpose-built, internally developed AI models, underscoring its stated commitment to empowering every person and organization through applied, trustworthy AI.
The first model, MAI-Voice-1, is a highly expressive speech generation model that now powers the Copilot Daily and Podcasts features and is available for experimentation via Copilot Labs. It produces vivid, natural-sounding audio in both single- and multi-speaker scenarios, and Microsoft says it can generate a full minute of audio in under a second on a single GPU, making it one of the most efficient speech systems available today.
The second model, MAI-1-preview, is Microsoft’s first end-to-end trained foundation model and is currently undergoing public testing on the AI benchmarking platform LMArena. An in-house mixture-of-experts model, it was pre-trained and post-trained on approximately 15,000 NVIDIA H100 GPUs. Microsoft says the model is designed to specialize in following instructions and providing helpful responses to consumers’ everyday queries.
Microsoft AI also announced plans to roll out MAI-1-preview for certain text-based use cases within Copilot over the coming weeks, to gather user feedback and continuously improve model performance.
“With this announcement, we are taking the first steps toward making our vision a reality,” Microsoft AI said. “We have big ambitions for where we go next. Not only will we pursue further advances here, but we believe that orchestrating a range of specialized models serving different user intents and use cases will unlock immense value. There will be a lot more to come from this team on both fronts in the near future. We’re excited by the work ahead as we aim to deliver leading models and put them into the hands of people globally.”