
Tabnine Adds Support for Cohere Command R Model to Accelerate and Optimize Software Development; Provides Access to More Models


Tabnine, the originator of the AI code assistant category, announced that Cohere’s Command R model is now available as one of the Large Language Models (LLMs) integrated into Tabnine’s AI-enabled software development tools. The model is available via an API directly from Oracle Cloud Infrastructure (OCI) Generative AI.

The integration between Tabnine and Cohere, the enterprise-focused AI platform, will boost engineering velocity, code quality, and developer happiness as enterprises increasingly adopt generative AI within their software engineering processes.

Tabnine offers a leading AI platform that delivers code generation, code refactoring, automated creation of documentation and tests, autonomous bug fixes, and much more. With broad support for both Tabnine-developed and third-party models, Tabnine enables engineering teams to tap into the latest innovations in AI-enabled software development. As the originator of the AI code assistant category, Tabnine brings deep expertise in leveraging and optimizing LLMs for software development tasks; combined with the Command R model, this provides a secure, private generative AI solution custom-tailored to enterprise IT teams.

“This integration with Cohere LLMs running on OCI Generative AI delivers another private, secure option for generative AI capabilities and underscores Tabnine’s commitment to integrating state-of-the-art models with Tabnine so that our customers get the best tools to deliver better code, faster,” said Peter Guagenti, Tabnine President. “As a trusted resource for millions of developers and thousands of organizations, we’re committed to maintaining the highest standards of privacy, security, and performance, and the Cohere Command R model on the OCI Generative AI is a welcomed addition.”


Tabnine’s switchable models capability allows users to switch the model that underpins Tabnine Chat in real time. Tabnine has engineered AI agents that use a combination of layered, specific prompts and rich context about each engineering team and project to ensure high-quality code and the most relevant answers to AI-enabled software use cases. This combination of Tabnine’s AI engineering expertise and the performance of the underlying model maximizes the value users get from LLMs like Cohere’s Command R.
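Tabnine has not published how its model switching is implemented; purely as an illustration of the general pattern, a chat layer that routes prompts to whichever registered backend is currently active might look like the following minimal Python sketch (all class, model, and function names here are hypothetical stand-ins, not Tabnine APIs):

```python
from typing import Callable, Dict, Optional

# A backend is just a function from prompt text to response text.
ModelFn = Callable[[str], str]

class SwitchableChat:
    """Routes prompts to whichever registered model is currently active."""

    def __init__(self) -> None:
        self._models: Dict[str, ModelFn] = {}
        self._active: Optional[str] = None

    def register(self, name: str, fn: ModelFn) -> None:
        self._models[name] = fn
        if self._active is None:
            self._active = name  # first registered model becomes the default

    def switch(self, name: str) -> None:
        if name not in self._models:
            raise KeyError(f"unknown model: {name}")
        self._active = name  # takes effect immediately, no restart needed

    def ask(self, prompt: str) -> str:
        return self._models[self._active](prompt)

# Illustrative usage with stubbed-out backends:
chat = SwitchableChat()
chat.register("command-r", lambda p: f"[command-r] {p}")
chat.register("other-model", lambda p: f"[other-model] {p}")
chat.switch("command-r")
print(chat.ask("refactor this function"))
```

The point of the pattern is that the routing layer, not the caller, owns the model choice, so the active backend can change between two consecutive prompts in the same session.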

Command R is a state-of-the-art model that is ideal for large-scale production workloads, balancing high efficiency with strong accuracy. The model offers low latency and high throughput, and excels at enterprise use cases.

“We’re proud that Cohere’s highly secure enterprise-grade Command R model is helping power Tabnine’s AI code assistant to streamline software development,” said Rodrigue Hajjar, VP of Engineering at Cohere. “Command R is fast, efficient, and optimized for long-context tasks, offering best-in-class integration for retrieval-augmented generation (RAG) applications. Our technology will enable Tabnine’s developers to access a contextually relevant AI assistant, enhancing productivity and efficiency.”

Since the Command R model is hosted on OCI Generative AI, Tabnine sends data to the service for computing responses to user prompts. In addition, OCI helps ensure that the data transferred between integrated development environments (IDEs) and the API endpoint is encrypted in transit to prevent eavesdropping or person-in-the-middle attacks. Additionally, data at rest is encrypted.
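The in-transit protection described above is typically provided by TLS. As a general illustration only (not Tabnine's or OCI's actual client code), Python's standard library shows the client-side posture that defends against person-in-the-middle attacks: certificate verification and hostname checking enabled, with legacy protocol versions rejected:

```python
import ssl

# Build a TLS client context with secure defaults. Certificate
# verification (CERT_REQUIRED) and hostname checking are both on,
# which is what prevents an attacker from impersonating the endpoint.
context = ssl.create_default_context()

# Reject legacy protocol versions; TLS 1.2 is the usual modern floor.
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.verify_mode)      # ssl.CERT_REQUIRED
print(context.check_hostname)   # True
```

Any HTTPS client built on such a context encrypts data between the IDE and the API endpoint in transit; encryption of data at rest is handled server-side by the hosting platform.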

“OCI helps ensure that enterprise applications—including AI and data-driven, resource-intensive apps—run in a way that makes business sense, at scale,” said Vinod Mamtani, vice president, Generative AI Services, Oracle Cloud Infrastructure. “We’re excited to see Tabnine Chat in action with OCI Generative AI, providing private, protected, and fast time to value for any organization looking to innovate.”

Source: GlobeNewsWire
