Thursday, December 26, 2024

Fastino Unveils 1000x Faster LLMs, No GPUs Required


Fastino, a new foundation AI model provider, launched to provide a family of task-optimized language models that are more accurate, faster, and safer than traditional LLMs. The company also announced its $7 million pre-seed funding round led by global software investor Insight Partners and M12, Microsoft’s Venture Fund, with participation from NEA, CRV, Valor, GitHub CEO Thomas Dohmke, and others.

While Generative AI deployments have steadily increased year over year, even early adopters continue to face significant challenges when implementing the new technology. A 2024 McKinsey study shows that 63 percent of enterprises implementing Generative AI struggle to achieve demonstrable ROI due to model inaccuracy. Conventional LLMs offer significant innovation potential, but technological and operational complexities hinder companies from fully realizing this value. Fastino introduces a differentiated approach to help enterprises of all sizes accelerate the adoption and deployment of generative AI technology tailored to solve their business challenges.

“Fastino aims to bring the world more performant AI with task-specific capabilities,” said Ash Lewis, CEO and co-founder of Fastino. “Whereas traditional LLMs often require thousands of GPUs, making them costly and resource-intensive, our unique architecture requires only CPUs or NPUs. This approach enhances accuracy and speed while lowering energy consumption compared to other LLMs.”


Key features of Fastino include:

  • Fit-for-purpose architecture for consistent, accurate outputs: Fastino delivers task-optimized models for critical enterprise use cases such as structuring textual data, RAG systems, text summarization, task planning, and more.
  • CPU-level inferencing for swifter results: Fastino’s novel architecture operates up to 1000x faster than traditional large language models. Its optimized computation enables flexible deployment on CPUs or NPUs, minimizing reliance on high-end GPUs (an illustrative sketch of CPU-only inference follows this list).
  • Task-optimized models for safer AI systems: Fastino’s family of models enables new, distributed AI systems that are less vulnerable to adversarial attacks, hallucinations, and privacy risks.
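The announcement does not include Fastino’s models or API, so the following is only a minimal sketch of the general pattern the second bullet describes: running a small, task-specific model entirely on CPU. It uses the open-source Hugging Face transformers library and a compact public summarization model (sshleifer/distilbart-cnn-12-6) purely as stand-ins; none of these names come from Fastino.

```python
# Illustrative only: Fastino's models and API are not public in this article,
# so this sketch substitutes a small open-source summarization model to show
# CPU-only, task-specific inference in general terms.
from transformers import pipeline

# device=-1 pins inference to the CPU; no GPU is required.
summarizer = pipeline(
    "summarization",
    model="sshleifer/distilbart-cnn-12-6",  # compact, task-specific stand-in model
    device=-1,
)

text = (
    "Fastino launched a family of task-optimized language models and announced "
    "a $7 million pre-seed round led by Insight Partners and M12."
)

# Task-scoped models return short, structured outputs for a single job
# (here, summarization) rather than open-ended generation.
result = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

The design point of the bullet is that a model scoped to one task can be small enough to serve acceptably on commodity CPUs or NPUs, avoiding the GPU fleets that general-purpose LLMs typically require.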

“We’re proud to announce our initial funding round, led by Insight Partners and M12, Microsoft’s Venture Fund. This pre-seed funding allows us to continue pioneering LLM architecture, developing accurate, secure solutions that bring AI to the enterprise,” said George Hurn-Maloney, COO and co-founder of Fastino. “Global enterprises are facing increasing difficulty in accessing computing power while achieving the precision and speed necessary to integrate AI effectively. Fastino aims to fix this with scalable, high-performance language models, optimized for enterprise tasks.”

George Mathew, Managing Director at Insight Partners, said: “Fastino’s approach to solving contemporary AI challenges presents one of the most exciting developments in the trillion-dollar enterprise AI opportunity. We see a bright future in tunable, high-performance, low-latency foundation models that empower firms to use the most accurate generative AI available while reducing their risk exposure to data leakage and inaccurate outputs.”

Michael Stewart, Managing Partner at M12, Microsoft’s Venture Fund, said: “Fastino’s innovative architecture enables high performance while addressing critical challenges like safety, data leakage, accuracy, and efficiency. Our investment will accelerate Fastino’s development of secure and performant Foundation AI, tunable to address enterprise challenges, from the banking to the consumer electronics sectors.”

Thomas Dohmke, CEO of GitHub, said: “I’m excited to be an early investor in Fastino, a company on a mission to bring the world accurate, fast, and safe task-specific LLMs that solve organizations’ most pressing challenges. Their novel approach involves a new architecture that runs on CPUs, making AI more accessible for a future with 1B developers.”

Source: Businesswire
