LLM Module
Getting started

The LLM Module is a package built to solve the biggest challenges in deploying LLMs and other GenAI models, building applications with them, and managing those applications over time. With the LLM Module, you will be able to...

  • Easily deploy models locally, with a choice of three serving backends optimized for LLMs and other foundation models. We also offer integration with third-party services (OpenAI, to start) as a 'hosted' alternative.

  • Build applications with those deployments. We offer an out-of-the-box memory component for storing chat history within an application, support for prompt templates and templating tools, and support for custom components, all plug-and-play within Core 2 pipelines.

  • Leverage the rest of Seldon's feature set for model management, logging, monitoring, access management and more!

For a demo, further questions, or to get access to the LLM Module:

Contact Us

Runtimes

The Seldon LLM Module provides four components to support your LLM deployment, application-building, and monitoring needs. Each component addresses a different part of the LLM application landscape, from deploying the model itself to implementing common AI application design patterns around that deployment, such as retrieval augmented generation and conversational memory. Because the components are implemented as MLServer runtimes, each one is configured through its own model-settings.json file and defines its own inference request and response format; an illustrative sketch of both appears after the table below. The components offered are:

Runtime               | Description
API                   | Access hosted models, like OpenAI and Gemini
Conversational Memory | Store and retrieve conversation history as part of an LLM application
Retrieval             | Retrieve relevant context from a vector database given an embedding vector
Local                 | Deploy LLM and GenAI models locally, leveraging performance optimizations for LLM and GenAI models
(Figure: llm-components.png)
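
As a rough illustration of the points above, the sketch below writes a minimal model-settings.json for a hosted (API) deployment and then queries it over MLServer's standard Open Inference Protocol (V2) REST endpoint. The implementation path, provider fields, and input name used here are assumptions made for illustration, not the exact schema; refer to the API and Local reference pages for the authoritative formats.

```python
# Illustrative sketch only: the implementation path, provider fields and
# input/output names below are assumptions, not the documented schema.
import json
import requests

# 1. A minimal model-settings.json for a hosted (API runtime) deployment,
#    pointing MLServer at an OpenAI chat model.
model_settings = {
    "name": "chat-model",
    "implementation": "mlserver_llm_api.LLMRuntime",  # assumed class path
    "parameters": {
        "extra": {
            "provider_id": "openai",                   # hosted provider to call
            "config": {
                "model_id": "gpt-3.5-turbo",
                "model_type": "chat.completions",
            },
        }
    },
}
with open("model-settings.json", "w") as f:
    json.dump(model_settings, f, indent=2)

# 2. Once MLServer is serving the model, requests follow the standard
#    Open Inference Protocol (V2). The input name "prompt" is an assumption;
#    each runtime defines its own request and response format.
payload = {
    "inputs": [
        {
            "name": "prompt",
            "shape": [1],
            "datatype": "BYTES",
            "data": ["What is retrieval augmented generation?"],
        }
    ]
}
response = requests.post(
    "http://localhost:8080/v2/models/chat-model/infer",
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Generated text comes back in the V2 "outputs" list.
for output in response.json()["outputs"]:
    print(output["name"], output["data"])
```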