Persistent Systems, a global leader in digital engineering and business modernization, announced the launch of GenAI Hub, an innovative platform designed to accelerate the creation and deployment of generative AI (GenAI) applications within enterprises. The platform integrates with an organization's existing infrastructure, applications, and data, enabling the rapid development of tailored, industry-specific generative AI solutions. The GenAI Hub supports the adoption of generative AI across different large language models (LLMs) and clouds, without vendor lock-in.
To harness the potential of generative AI and translate insights into tangible business outcomes, businesses need to integrate it seamlessly into their existing systems. With AI models ranging from broad to specialized, customers need a robust platform like the GenAI Hub. The platform simplifies the development and management of multiple generative AI models, accelerating time to market with pre-built software components while respecting the principles of responsible AI.
The GenAI Hub consists of five main elements:
- Playground is a no-code tool that allows domain experts to explore and apply generative AI with LLMs on enterprise data without the need for programming skills. It provides a single, uniform interface for LLMs from proprietary providers such as Azure OpenAI, AWS Bedrock, and Google Gemini, and for open models from Hugging Face such as LLaMA2 and Mistral.
- The agent framework provides a versatile architecture for developing generative AI applications, leveraging libraries such as LangChain and LlamaIndex for solutions including retrieval-augmented generation (RAG); a library-agnostic sketch of this pattern appears after this list.
- The assessment framework uses an “AI to validate AI” approach and can automatically generate ground truth questions for verification by a human in the loop. It uses metrics to track application performance and to measure drift and bias so they can be corrected.
- The gateway serves as a router between LLMs, enabling application compatibility and improving management of service priorities and load balancing. It also offers detailed information on token consumption and associated costs; an illustrative routing sketch also follows this list.
- Custom model pipelines facilitate the creation and integration of LLMs and small language models (SLMs) into the generative AI ecosystem, supporting a streamlined process for data preparation and model refinement, tailored to cloud and on-premises deployments.
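To make the RAG pattern mentioned in the agent framework item concrete, the following is a minimal, library-agnostic sketch: retrieve the most relevant documents for a query, then ground the prompt in that context before sending it to an LLM. All names, documents, and the toy similarity measure are illustrative assumptions, not part of the GenAI Hub API or of LangChain/LlamaIndex.

```python
# Illustrative only: a library-agnostic sketch of retrieval-augmented generation (RAG).
# Document contents, function names, and the similarity measure are hypothetical.
from collections import Counter
from math import sqrt

DOCUMENTS = [
    "The gateway routes requests across multiple LLM providers.",
    "Custom model pipelines prepare enterprise data for fine-tuning.",
    "The assessment framework generates ground-truth questions for review.",
]

def bag_of_words(text: str) -> Counter:
    """Tokenize naively into lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bag_of_words(query)
    return sorted(DOCUMENTS, key=lambda d: cosine(q, bag_of_words(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the LLM prompt in the retrieved context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    # In a real application this prompt would be sent to an LLM endpoint.
    print(build_prompt("How are multiple providers handled?"))
```

Likewise, the gateway's role as a router can be pictured with a toy example: requests are spread across LLM backends and token usage is tallied per backend. The backend names, cost figures, token estimate, and round-robin policy are assumptions for illustration, not the GenAI Hub's actual routing logic.

```python
# Illustrative only: a toy LLM gateway that load-balances requests across
# backends and tracks token consumption and estimated cost per backend.
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Backend:
    name: str
    cost_per_1k_tokens: float
    tokens_used: int = 0

class Gateway:
    """Round-robin router that records token consumption and cost."""

    def __init__(self, backends: list[Backend]):
        self.backends = backends
        self._rotation = cycle(backends)

    def route(self, prompt: str) -> Backend:
        """Pick the next backend (simple load balancing) and log usage."""
        backend = next(self._rotation)
        backend.tokens_used += max(1, len(prompt) // 4)  # rough token estimate
        return backend

    def report(self) -> dict[str, float]:
        """Estimated spend per backend, in the cost_per_1k_tokens currency."""
        return {b.name: b.tokens_used / 1000 * b.cost_per_1k_tokens for b in self.backends}

if __name__ == "__main__":
    gw = Gateway([Backend("provider-a", 0.5), Backend("provider-b", 0.3)])
    for prompt in ["summarize this contract", "draft a customer email", "classify this ticket"]:
        print(prompt, "->", gw.route(prompt).name)
    print(gw.report())
```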
The GenAI Hub streamlines use case development for businesses, providing step-by-step guidance and seamless data integration into LLMs, enabling the rapid creation of tailored generative AI applications.
SOURCE: PRNewswire