LLM Module

Learn more about how LLM Module integrates with Seldon Core 2.



The LLM Module in Seldon Core 2 is designed to simplify the deployment, application development, and lifecycle management of LLMs and other generative AI models. Some advantages of the LLM Module are:

  • Flexible deployment options

    • Serve models locally with optimized backends for LLMs and GenAI models.

    • Integrate with hosted services like OpenAI as an alternative.

  • Build complex applications


    • Use out-of-the-box components like conversational memory to store chat history.

    • Leverage prompt templates and templating tools.

    • Integrate custom components easily within Core 2 pipelines.
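As an illustration of the pipeline integration above, a Seldon Core 2 `Pipeline` resource can chain deployed models into a dataflow, for example feeding a prompt-templating step into an LLM step. This is a minimal sketch: the model names `prompt-template` and `llm` are hypothetical placeholders, and the models themselves would need to be deployed separately with whichever LLM Module backend you use.

```yaml
# Hypothetical Core 2 Pipeline chaining two previously deployed Models.
# The output of the prompt-template step is wired as input to the llm step.
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
metadata:
  name: llm-chat
spec:
  steps:
    - name: prompt-template
    - name: llm
      inputs:
        - prompt-template
  output:
    steps:
      - llm
```

Custom components fit into the same pattern: any model deployed in Core 2 can appear as a step, so application logic such as memory lookups or templating can be composed declaratively rather than hard-coded into a single service.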

The module also connects seamlessly with other Seldon features for model management, logging, monitoring, and access control.

To get started with the LLM Module or explore the full documentation, reach out to the Seldon Support team.