Friday, November 22, 2024

Amazon and Anthropic Announce Strategic Collaboration to Advance Generative AI

Anthropic selects AWS as its primary cloud provider and will train and deploy its future foundation models on AWS Trainium and Inferentia chips, taking advantage of AWS’s high-performance, low-cost machine learning accelerators

Anthropic deepens commitment to AWS, making its future foundation models accessible to millions of developers and giving AWS customers early access to unique model customization and fine-tuning capabilities that use their own proprietary data, all through Amazon Bedrock

Amazon and Anthropic announced a strategic collaboration that will bring together their respective industry-leading technology and expertise in safer generative artificial intelligence (AI) to accelerate the development of Anthropic’s future foundation models and make them widely accessible to AWS customers. As part of the expanded collaboration:

  • Anthropic will use AWS Trainium and Inferentia chips to build, train, and deploy its future foundation models, benefitting from the price, performance, scale, and security of AWS. The two companies will also collaborate in the development of future Trainium and Inferentia technology.
  • AWS will become Anthropic’s primary cloud provider for mission-critical workloads, including safety research and future foundation model development. Anthropic plans to run the majority of its workloads on AWS, giving it further access to the advanced technology of the world’s leading cloud provider.
  • Anthropic is making a long-term commitment to provide AWS customers around the world with access to future generations of its foundation models via Amazon Bedrock, AWS’s fully managed service that provides secure access to the industry’s top foundation models. In addition, Anthropic will give AWS customers early access to unique model customization and fine-tuning capabilities.
  • Amazon will invest up to $4 billion in Anthropic and have a minority ownership position in the company.
  • Amazon developers and engineers will be able to build with Anthropic models via Amazon Bedrock so they can incorporate generative AI capabilities into their work, enhance existing applications, and create net-new customer experiences across Amazon’s businesses (see the invocation sketch after this list).
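For orientation, the snippet below is a minimal sketch of what invoking a Claude model through Amazon Bedrock looks like with the AWS SDK for Python (boto3). The region, the anthropic.claude-v2 model identifier, the prompt format, and the inference parameters are assumptions for illustration only and are not details taken from the announcement.

```python
import json

import boto3

# Bedrock's runtime client handles model invocation; model availability and
# identifiers vary by account and region (assumed here).
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude's text-completion interface expects a Human/Assistant style prompt.
request_body = json.dumps({
    "prompt": "\n\nHuman: Summarize the benefits of running foundation models on AWS Trainium.\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.5,
})

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-v2",      # assumed identifier for Claude 2 on Bedrock
    contentType="application/json",
    accept="application/json",
    body=request_body,
)

# The response body is a JSON stream; Claude returns its text under "completion".
print(json.loads(response["body"].read())["completion"])
```

Because Bedrock is fully managed, the same invoke_model pattern applies to other hosted models; only the model identifier and the request-body schema change.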

“We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences, short and long-term, through our deeper collaboration,” said Andy Jassy, Amazon CEO. “Customers are quite excited about Amazon Bedrock, AWS’s new managed service that enables companies to use various foundation models to build generative AI applications on top of, as well as AWS Trainium, AWS’s AI training chip, and our collaboration with Anthropic should help customers get even more value from these two capabilities.”

“We are excited to use AWS’s Trainium chips to develop future foundation models,” said Dario Amodei, co-founder and CEO of Anthropic. “Since announcing our support of Amazon Bedrock in April, Claude has seen significant organic adoption from AWS customers. By significantly expanding our partnership, we can unlock new possibilities for organizations of all sizes, as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology.”

An AWS customer since 2021, Anthropic has grown quickly into one of the world’s leading foundation model providers and a leading advocate for the responsible deployment of generative AI. Its foundation model, Claude, excels at a wide range of tasks, from sophisticated dialogue and creative content generation to complex reasoning and detailed instruction following, while maintaining a high degree of reliability and predictability. Its industry-leading 100,000-token context window can securely process extensive amounts of information across all industries, from manufacturing and aerospace to agriculture and consumer goods, as well as technical, domain-specific documents for industries such as finance, legal, and healthcare. Customers report that Claude is much less likely to produce harmful outputs, easier to converse with, and more steerable than other foundation models, so developers can get their desired output with less effort. Anthropic’s state-of-the-art model, Claude 2, scores above the 90th percentile on the GRE reading and writing exams, and similarly on quantitative reasoning.
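The 100,000-token context window is the figure most directly relevant to developers handling long documents. The sketch below illustrates one rough way to decide whether a document can be sent to Claude in a single request or should be split first; the characters-per-token ratio and the reserved overhead are assumptions, and a real tokenizer would give exact counts.

```python
# Rough heuristic for working within Claude's 100,000-token context window.
# The 4-characters-per-token estimate and the reserved overhead are assumptions.
CONTEXT_WINDOW_TOKENS = 100_000
CHARS_PER_TOKEN = 4              # crude estimate for English prose
RESERVED_TOKENS = 2_000          # room kept for instructions and the model's reply


def split_for_claude(text: str) -> list[str]:
    """Return [text] if it plausibly fits the window, else paragraph-aligned chunks."""
    budget_chars = (CONTEXT_WINDOW_TOKENS - RESERVED_TOKENS) * CHARS_PER_TOKEN
    if len(text) <= budget_chars:
        return [text]

    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for paragraph in text.split("\n\n"):
        # Flush the current chunk before it would exceed the character budget.
        if current and size + len(paragraph) > budget_chars:
            chunks.append("\n\n".join(current))
            current, size = [], 0
        current.append(paragraph)
        size += len(paragraph) + 2
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

In practice most technical documents fit well inside the window, which is what makes single-call analysis of filings, manuals, or contracts feasible.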

Today’s news is the latest AWS generative AI announcement as the company continues to expand its unique offering at all three layers of the generative AI stack. At the bottom layer, AWS continues to offer compute instances powered by NVIDIA GPUs as well as AWS’s own custom silicon chips, AWS Trainium for AI training and AWS Inferentia for AI inference. At the middle layer, AWS is focused on providing customers with the broadest selection of foundation models from multiple leading providers; customers can customize those models, keep their own data private and secure, and seamlessly integrate with the rest of their AWS workloads, all through AWS’s new service, Amazon Bedrock. With today’s announcement, customers will have early access to features for customizing Anthropic models with their own proprietary data to create private models, and will be able to use fine-tuning capabilities via a self-service feature within Amazon Bedrock. At the top layer, AWS offers generative AI applications and services, such as Amazon CodeWhisperer, an AI-powered coding companion that recommends code snippets directly in the code editor, accelerating developer productivity.
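As a rough illustration of the self-service customization path described above, the sketch below uses boto3’s Bedrock control-plane client to start a model customization job. The job name, IAM role, S3 locations, base-model identifier, and hyperparameters are placeholders; whether and when a given Anthropic model supports this flow depends on the early-access rollout the announcement describes.

```python
import boto3

# The control-plane client ("bedrock", not "bedrock-runtime") manages customization jobs.
bedrock = boto3.client("bedrock", region_name="us-east-1")

job = bedrock.create_model_customization_job(
    jobName="claude-customization-demo",               # hypothetical name
    customModelName="claude-support-assistant",        # hypothetical name
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",  # placeholder role
    baseModelIdentifier="anthropic.claude-v2",          # assumed base-model ID
    trainingDataConfig={"s3Uri": "s3://example-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
    hyperParameters={"epochCount": "2"},                # illustrative values
)

print(job["jobArn"])  # track progress with get_model_customization_job
```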

As part of this deeper collaboration, AWS and Anthropic are committing meaningful resources to help customers get started with Claude and Claude 2 on Amazon Bedrock, including through the AWS Generative AI Innovation Center, where teams of AI experts help customers of all sizes develop new generative AI-powered applications to transform their organizations.

Customers accessing Anthropic’s current models via Amazon Bedrock are building generative AI-powered applications that help automate tasks such as producing market forecasts, developing research reports, enabling new drug discovery for healthcare, and personalizing education programs. Enterprises already taking advantage of this advanced technology include Lonely Planet, a premier travel media company celebrated for its decades of travel content; Bridgewater Associates, a premier asset management firm for global institutional investors; and LexisNexis Legal & Professional, a top-tier global provider of information and analytics serving customers in more than 150 countries.

SOURCE: BusinessWire
