Lumino, an AI infrastructure startup, announced that it has launched its large language model (LLM) fine-tuning SDK and web console. Training and fine-tuning LLMs require large amounts of computing power, which has led to a price surge for GPUs.
Additionally, fine-tuning models requires building and configuring machine learning infrastructure, which significantly lengthens the time needed to build and deploy LLMs. Finally, building that infrastructure demands software engineering and machine learning expertise that is difficult to hire for, further delaying LLM deployments.
Lumino’s SDK and web app make it easy for anyone to fine-tune LLMs. Developers can fine-tune LLMs with a few lines of code by integrating the SDK into their codebase, while non-developers can fine-tune LLMs directly in Lumino’s web console with a few clicks. Finally, compute on Lumino’s platform costs up to 80% less than comparable cloud service providers, enabling large savings.
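To illustrate what a "few lines of code" fine-tuning workflow generally looks like, here is a minimal sketch. The class, method, and parameter names below are purely hypothetical stand-ins, not Lumino's actual API; the stub client is defined inline so the example is self-contained.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a fine-tuning SDK client.
# All names here are illustrative assumptions, NOT Lumino's real API.
@dataclass
class FineTuningClient:
    api_key: str
    jobs: list = field(default_factory=list)

    def create_job(self, base_model: str, dataset_path: str, epochs: int = 3) -> dict:
        """Register a fine-tuning job and return its descriptor."""
        job = {
            "id": len(self.jobs) + 1,
            "base_model": base_model,
            "dataset": dataset_path,
            "epochs": epochs,
            "status": "queued",
        }
        self.jobs.append(job)
        return job

# Typical usage: point the client at a base model and a training dataset.
client = FineTuningClient(api_key="demo-key")
job = client.create_job(base_model="llama-3-8b", dataset_path="data/train.jsonl")
print(job["status"])  # queued
```

The appeal of this pattern is that the provider handles GPU provisioning and training orchestration behind the API call, which is the infrastructure burden described above.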
SOURCE: PRNewswire