Getting started

The LLM Module is a package built to solve the biggest challenges in deploying LLMs and other GenAI models, building applications with them, and managing those applications over time. With the LLM Module, you will be able to...

  • Easily deploy models locally, with a choice of three serving backends optimized for LLMs and other foundation models. We also offer integration with third-party services (OpenAI, to start) as a 'hosted' alternative.

  • Build applications with those deployments. We offer an out-of-the-box memory component to store chat history within an application, support for prompt templates and templating tools, and support for custom components, all plug-and-play within Core 2 pipelines (a pipeline sketch follows this list).

  • Leverage the rest of Seldon's feature set for model management, logging, monitoring, access management and more!
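
For a sense of how these pieces fit together, below is a minimal sketch of a Seldon Core 2 Pipeline that chains a prompt-templating deployment into an LLM deployment. The resource and step names ("chat-app", "prompt", "llm") are hypothetical placeholders: they refer to Model resources you would deploy separately with the relevant runtimes described later on this page.

```yaml
# Hypothetical sketch: wiring two LLM Module deployments into a Core 2 Pipeline.
# "prompt" and "llm" are placeholder names for Model resources deployed with the
# prompt and LLM runtimes respectively.
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
metadata:
  name: chat-app
spec:
  steps:
    - name: prompt          # renders the prompt template from the request inputs
    - name: llm             # generates a completion from the rendered prompt
      inputs:
        - prompt.outputs
  output:
    steps:
      - llm
```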

Figure: LLM Module components (llm-components.png)

Runtimes

The Seldon LLM Module provides five components to support your LLM deployment, application building, and monitoring needs. Each component supports a different part of the LLM application-building landscape, from deploying the model itself to implementing common AI application design patterns around that deployment, such as retrieval-augmented generation and memory. Since the components are implemented as MLServer runtimes, each uses its own model-settings.json configuration file and defines its own inference request and response format. The components offered are:

  • Access hosted models, such as OpenAI and Gemini.

  • Deploy LLM and GenAI models locally, leveraging performance optimizations built for these models.

  • Set up modular, LLM-agnostic prompts that allow an LLM to be reused across different use cases.

  • Store and retrieve conversation history as part of an LLM application.

  • Retrieve relevant context from a vector database given an embedding vector.
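
Each runtime is configured through a model-settings.json file. The implementation class and the settings under parameters.extra differ per runtime and are documented on the corresponding runtime pages; the sketch below only illustrates the general shape of such a file, and the placeholder values and field names shown under extra (provider_id, config, model_id) are illustrative assumptions rather than exhaustive or exact keys.

```json
{
  "name": "my-llm",
  "implementation": "<implementation class of the chosen runtime>",
  "parameters": {
    "extra": {
      "provider_id": "<backend or hosted provider to use>",
      "config": {
        "model_id": "<model to serve or call>"
      }
    }
  }
}
```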
