LLM Module
Learn more about how LLM Module integrates with Seldon Core 2.
The LLM Module in Seldon Core 2 simplifies the deployment, application development, and lifecycle management of LLMs and other generative AI models. Its main advantages are:
Flexible deployment options
Serve models locally with optimized backends for LLMs and GenAI models.
Integrate with hosted services like OpenAI as an alternative.
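The two deployment paths above can be sketched as a single interface that a pipeline talks to, whether the model is served locally or by a hosted provider. This is an illustrative sketch only: the class and method names below are assumptions for the example, not the LLM Module's actual API.

```python
from abc import ABC, abstractmethod


class TextBackend(ABC):
    """Common interface: calling code is unaware of where the model runs."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class LocalBackend(TextBackend):
    """Stand-in for a locally served, optimized LLM; here it just echoes."""

    def generate(self, prompt: str) -> str:
        return f"[local] {prompt}"


class HostedBackend(TextBackend):
    """Stand-in for a hosted service such as OpenAI; a real version would
    call the provider's client library with this API key."""

    def __init__(self, api_key: str):
        self.api_key = api_key

    def generate(self, prompt: str) -> str:
        return f"[hosted] {prompt}"


def answer(backend: TextBackend, question: str) -> str:
    # The application logic stays identical across deployment options.
    return backend.generate(question)
```

Swapping `LocalBackend()` for `HostedBackend(api_key=...)` changes where inference happens without touching the application code.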
Build complex applications
Use out-of-the-box components like conversational memory to store chat history.
Leverage prompt templates and templating tools.
Integrate custom components easily within Core 2 pipelines.
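To make the components above concrete, here is a minimal sketch of conversational memory feeding a prompt template. All names (`ConversationalMemory`, `build_prompt`, the template format) are hypothetical illustrations of the pattern, not the module's real components.

```python
from string import Template


class ConversationalMemory:
    """Minimal chat-history store keeping the last `max_turns` exchanges."""

    def __init__(self, max_turns: int = 10):
        self.max_turns = max_turns
        self.turns: list[tuple[str, str]] = []

    def add(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))
        # Drop the oldest turns once the window is exceeded.
        self.turns = self.turns[-self.max_turns:]

    def render(self) -> str:
        return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)


# A prompt template combining stored history with the new question.
PROMPT = Template("$history\nUser: $question\nAssistant:")


def build_prompt(memory: ConversationalMemory, question: str) -> str:
    return PROMPT.substitute(history=memory.render(), question=question)
```

In a pipeline, a custom component like this would sit between the request and the model step, injecting chat history into each prompt before inference.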
The module also connects seamlessly with the rest of Seldon's feature set for model management, logging, monitoring, and access control.
To get started with the LLM Module or explore the full documentation, reach out to the .