Getting started
The LLM Module is a package built to solve the biggest challenges in deploying LLMs and other Gen AI models, building applications with them, and managing those applications over time. With the LLM Module, you will be able to...
Easily deploy models locally, with a choice of three serving backends optimized for LLMs and other foundation models. We also offer integration with third-party services (OpenAI, to start) as a 'hosted' alternative.
Build applications with those deployments. We offer an out-of-the-box memory component to store chat history within an application, support for prompt templates and templating tools, and support for custom components, all plug-and-play within Core 2 pipelines.
Leverage the rest of Seldon's feature set for model management, logging, monitoring, access management and more!
For a demo, further questions, or to get access to the LLM Module, contact Seldon.
The Seldon LLM Module provides four components to support your LLM deployment, application-building, and monitoring needs. Each component supports a different part of the LLM application landscape, from deploying the model itself to implementing common design patterns around that deployment, such as retrieval-augmented generation and memory. Since the components are implemented as MLServer runtimes, each uses its own model-settings.json configuration file and defines its own inference request and response format (see the request sketch after the list below). The components offered are:
Access hosted models, like OpenAI and Gemini
Store and retrieve conversation history as part of an LLM application
Retrieve relevant context from a vector database given an embedding vector
Deploy LLM and GenAI models locally, leveraging serving backends optimized for LLM and GenAI models
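To make the request-response point concrete, here is a minimal sketch that sends a chat-style request to a deployed component over the Open Inference (V2) REST protocol exposed by MLServer. The host address, the model name `openai-chat`, and the input tensor name `content` are assumptions for illustration only; each runtime defines its own expected tensor names and parameters in its model-settings.json and component documentation.

```python
import json

import requests

# Assumed values for illustration only: adjust the host, model name, and
# tensor names to match the runtime's own model-settings.json and docs.
HOST = "http://localhost:8080"
MODEL = "openai-chat"

# Open Inference (V2) protocol request: a single BYTES input carrying the
# user message. The tensor name "content" is a hypothetical choice here.
payload = {
    "inputs": [
        {
            "name": "content",
            "shape": [1],
            "datatype": "BYTES",
            "data": ["What is retrieval augmented generation?"],
        }
    ]
}

response = requests.post(
    f"{HOST}/v2/models/{MODEL}/infer",
    headers={"Content-Type": "application/json"},
    data=json.dumps(payload),
)
response.raise_for_status()

# The response follows the same protocol: a list of output tensors whose
# names and contents depend on the runtime that served the request.
for output in response.json()["outputs"]:
    print(output["name"], output["data"])
```

The same request shape applies to the other components; only the tensor names, parameters, and returned outputs differ, as documented on each component's page.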