Landing AI Launches Docker Deployment for LandingLens Models

Landing AI

Landing AI, the leading computer vision cloud platform, announced the launch of a new way to deploy LandingLens models using Docker, enabling DevOps users to integrate the model inference programmatically and scale deployments quickly both on-premise and in the cloud.

With Docker, users can now deploy anywhere, unlocking even greater value for Landing AI’s growing customer base across multiple industries. After users deploy their models to an endpoint using Landing AI’s Docker approach, they can use the Landing AI Python SDK to run inference on the model.
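As a rough illustration of that workflow, the sketch below shows how one might address a LandingLens model running in a local Docker container before handing the endpoint to the Landing AI Python SDK. The host, port, and URL layout are illustrative assumptions, not the documented API; consult the SDK documentation for the real client interface.

```python
"""Hedged sketch: locating a LandingLens model deployed in a local Docker
container. The URL layout, port, and endpoint ID below are assumptions for
illustration only -- they are not Landing AI's documented interface."""


def build_inference_url(host: str, port: int, endpoint_id: str) -> str:
    # Assumed URL layout for a model container published on a local port,
    # e.g. via `docker run -p 8000:8000 ...`.
    return f"http://{host}:{port}/inference/{endpoint_id}"


if __name__ == "__main__":
    # Hypothetical endpoint ID; in practice this comes from LandingLens.
    url = build_inference_url("localhost", 8000, "my-defect-detector")
    print(url)
    # From here, one would point the Landing AI Python SDK (or a plain
    # HTTP client) at this local endpoint to run inference on an image.
```

The point of the sketch is simply that a Docker deployment turns the model into a network-addressable service, so the same SDK-based inference code can target an on-premise container or a cloud endpoint by swapping the host.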

“Landing AI makes it easy to train vision models and now we’ve added another way to deploy them, too,” said Andrew Ng, Landing AI CEO. “This is another big step on our quest to democratize access to AI by making the technology easier to use—and at scale.”

Deploying LandingLens models with Docker is intended for developers already using container technology to deploy applications across their organizations. However, users can also deploy LandingLens models via:

The new deployment option follows other recent initiatives to democratize access, including Landing AI’s new “App Space,” a repository of applications and use cases that helps developers more quickly create customized Landing AI-based computer vision solutions, as well as an SDK, with application code samples, for integrating applications with LandingLens’ computer vision capabilities.

Landing AI has also added a course that guides you through refining your computer vision use case and goals, selecting a project type, and analyzing and improving model performance, and that includes a comprehensive walkthrough of LandingLens. Enroll in the LandingLens Computer Vision Fundamentals course.

SOURCE: PRNewswire