Enables Fast and Secure Deployment of Fine-Tuned Business Applications Including Summarization, Coding, Query and Image Generation Based on the Latest Open-Source Generative AI Models
Esperanto’s new appliance is ideal for organizations that want to leverage the benefits of Generative AI technology to create custom applications, initially around information summarization, organizational data/knowledge query, computer code generation and translation, and image generation. Esperanto’s Data Science and Software teams designed it to support a variety of application UIs and to output text, computer programs and images, and the company is continually expanding the available LLMs and Diffusion models as new ones are made public. Examples of industries that can benefit from Esperanto’s new solution include the healthcare and legal professions, which require quick and accurate summaries of complex documents while maintaining data privacy, and the financial industry, which can translate its legacy code base to more modern and maintainable programming languages.
“Generative AI is revolutionizing the way we create and summarize content, generate and translate computer code, and produce visual and video content. However, creating and deploying LLM-based applications typically requires large teams of data scientists, long development times and expensive, hard-to-obtain GPU-based platforms. This can make Generative AI strategies impractical for most organizations today,” said Art Swift, president and CEO at Esperanto Technologies. “Esperanto recognizes these challenges and has developed its new Generative AI Appliance based on its advanced RISC-V hardware, using pretrained LLMs that are highly accurate while offering much faster development and strong data privacy.”
Esperanto’s Generative AI Appliance is currently running the latest LLMs and image generation models such as LLaMA 2, Vicuna, StarCoder, OpenJourney and Stable Diffusion, and the company’s strategy is to continuously update the system with the latest versions of popular open-source models as soon as they are released.
“We are in the early stages of a multi-year super cycle for merchant ASICs, driven by the adoption of Generative AI, an increase in AI training, significant growth of AI inferencing, and HPC workflows,” said Ben Bajarin, CEO and principal analyst at Creative Strategies, Inc. “We are forecasting an Enterprise Edge infrastructure refresh as companies look to run more AI and HPC workloads on-prem for cost, privacy, and data sovereignty reasons. In addition, energy efficiency is a growing priority, so offerings like Esperanto’s that have a strong dollar-per-watt value are well positioned.”
“The market is trending toward smaller LLM and diffusion models – 30 billion parameters and below – driven by reducing the high cost of inference on very large models,” said Karl Freund, founder and principal analyst at Cambrian-AI Research. “These models are trained to be highly accurate with much lower training and inference costs. There is a lot of money to be made in this space, and inference solutions like Esperanto’s Generative AI Appliance should save customers significant costs versus GPU-based systems.”
SOURCE: BusinessWire