Gemini

The following demonstrates how to run an API Runtime instance locally for inference with Gemini models, and illustrates the different ways it can be used.

This example showcases the API Runtime only as a stand-alone component; for a more integrated example, check out the chatbot demo.

To get up and running, we first need to pull the runtime Docker image. Pulling the image requires authentication; check our installation tutorial to see how to authenticate with the Docker CLI.

docker pull \
    europe-west2-docker.pkg.dev/seldon-registry/llm/mlserver-llm-api:0.7.0

Before we can start the runtime, we need to create the model-settings.json file, which tells it which model to run:

!cat models/gemini-chat-completions/model-settings.json
{
  "name": "gemini-chat-completions",
  "implementation": "mlserver_llm_api.LLMRuntime",
  "parameters": {
    "extra": {
      "provider_id": "gemini",
      "config": {
        "model_id": "gemini-1.5-flash",
        "model_type": "chat.completions",
        "llm_parameters": {
          "generation_config": {
            "temperature": 0.7
          },
          "safety_settings": {
            "HARASSMENT": "BLOCK_LOW_AND_ABOVE"
          },
          "system_instruction": "You are a cat, your name is Neko"
        }
      }
    }
  }
}

In the above settings, the runtime config is specified in the parameters JSON field:

  1. the "provider_id" selects the provider - currently, the supported providers are "openai" and "gemini".

  2. we've chosen the gemini-1.5-flash model (via "model_id") and the chat.completions API (via "model_type").

  3. the "llm_parameters" field passes provider-specific settings straight to the Gemini API - here a generation_config with the sampling temperature, a safety_settings block, and a system_instruction.

Starting the Runtime

Finally, to start the server, run:
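
A minimal sketch of the command, assuming the model-settings.json above lives under ./models/gemini-chat-completions and that the Gemini API key is passed in via the GEMINI_API_KEY environment variable (the variable name and the mlserver entrypoint here are assumptions; see the installation tutorial for the authoritative setup):

docker run -it --rm \
    -p 8080:8080 -p 8081:8081 \
    -e GEMINI_API_KEY=${GEMINI_API_KEY} \
    -v ${PWD}/models:/mnt/models \
    europe-west2-docker.pkg.dev/seldon-registry/llm/mlserver-llm-api:0.7.0 \
    mlserver start /mnt/models/gemini-chat-completions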

Sending Requests

To send our first request to the chat.completions endpoint that we are now serving via MLServer, we use the following:
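
A sketch of such a request over REST using the Open Inference (V2) protocol, assuming the runtime is listening locally on port 8080; the tensor names follow the description below, while the shapes and exact layout are assumptions:

import requests

inference_request = {
    "inputs": [
        {"name": "role", "shape": [2], "datatype": "BYTES",
         "data": ["system", "user"]},
        {"name": "content", "shape": [2], "datatype": "BYTES",
         "data": ["You are a helpful assistant", "Hello from MLServer"]},
        {"name": "type", "shape": [2], "datatype": "BYTES",
         "data": ["text", "text"]},
    ],
}

# POST to the V2 inference endpoint served by MLServer
endpoint = "http://localhost:8080/v2/models/gemini-chat-completions/infer"
response = requests.post(endpoint, json=inference_request)
print(response.json())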

Note that we've sent three tensors: a "role", a "content", and a "type" tensor. The "role" tensor tells the model who is speaking; in this case it includes a "system" role and a "user" role. The "system" role is used to set the context of the interaction, while the "user" role indicates that the matching content was sent by a user. In the above, the system content is "You are a helpful assistant" and the user content is "Hello from MLServer". The "type" tensor indicates that the content we sent is text.

The endpoint responds with its own "role", "content" and "type" tensors. Its "role" is given as "assistant" and the "content" it returns is "Hello! How can I assist you today?". In addition, the server returns the full response received from Gemini via the "output_all" tensor.

Requests with Parameters

We can also add parameters to a given request to specify sampling arguments. For a list of all available parameters, see the Gemini API documentation. The following sets the top_k tokens to sample from and the maximum number of tokens to generate:
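
As a sketch, again assuming the local V2 REST endpoint; the placement and naming of the request-level parameters here are assumptions modelled on the llm_parameters block in the model settings:

import requests

inference_request = {
    "inputs": [
        {"name": "role", "shape": [1], "datatype": "BYTES", "data": ["user"]},
        {"name": "content", "shape": [1], "datatype": "BYTES",
         "data": ["Tell me a short story about a cat named Neko."]},
        {"name": "type", "shape": [1], "datatype": "BYTES", "data": ["text"]},
    ],
    # Request-level sampling arguments, forwarded to the Gemini generation_config
    "parameters": {
        "llm_parameters": {
            "generation_config": {
                "top_k": 10,
                "max_output_tokens": 256,
            }
        }
    },
}

endpoint = "http://localhost:8080/v2/models/gemini-chat-completions/infer"
response = requests.post(endpoint, json=inference_request)
print(response.json())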

Note that if you are sending a single message, you must not encode the content and type as JSON.

Adding prompts

Prompting is a pivotal part of using large language models. It allows developers to embed messages sent by a user within a wider textual context that provides more information about the task the model is expected to perform. For instance, we use the following prompt to give the model more context about the kind of question it is expected to answer:
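
The wording of the template is up to you; a purely illustrative example, using the {question} variable referenced below, might be:

You are an expert assistant. Answer the question below accurately and concisely,
and say so explicitly if you do not know the answer.

Question: {question}

Answer: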

In the above, the content sent by the user is inserted at the {question} variable. A developer specifies which content to insert by giving that tensor the name "question". To start with, we need to create a new model-settings.json file. This will be the same as the previous one, except that it additionally specifies the prompt template to be used through the prompt_utils settings.
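
A sketch of what the additional settings might look like; the model name is our choice and the nested field names under prompt_utils are assumptions, so consult the prompt templating documentation for the exact schema:

{
  "name": "gemini-prompted-chat",
  "implementation": "mlserver_llm_api.LLMRuntime",
  "parameters": {
    "extra": {
      "provider_id": "gemini",
      "config": {
        "model_id": "gemini-1.5-flash",
        "model_type": "chat.completions"
      },
      "prompt_utils": {
        "prompt_options": {
          "uri": "prompt-template.txt"
        }
      }
    }
  }
}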

We can test this using the following (note that we send a single tensor named "question"; this is the content that will be inserted):
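
For instance, assuming the same local V2 REST endpoint as before and the hypothetical model name used in the settings above:

import requests

inference_request = {
    "inputs": [
        # A single "question" tensor; its content is substituted into the prompt template
        {"name": "question", "shape": [1], "datatype": "BYTES",
         "data": ["What is the tallest mountain on Earth?"]},
    ],
}

endpoint = "http://localhost:8080/v2/models/gemini-prompted-chat/infer"
response = requests.post(endpoint, json=inference_request)
print(response.json())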

Multi-modal inputs

The Gemini API accepts specific MIME types; the runtime converts either base64 string-encoded media (in the case of REST) or raw bytes (in the case of gRPC) into the expected message format. The supported MIME types are described in the Gemini API docs (vision and audio):

Images

The following code defines two utility functions used to serialize the image for REST and gRPC.
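
A sketch of what such helpers might look like: one returning a base64 string for REST, one returning raw bytes for gRPC.

import base64
from pathlib import Path


def serialize_image_rest(path: str) -> str:
    """Read an image file and return its base64 string encoding (for REST requests)."""
    return base64.b64encode(Path(path).read_bytes()).decode("utf-8")


def serialize_image_grpc(path: str) -> bytes:
    """Read an image file and return its raw bytes (for gRPC requests)."""
    return Path(path).read_bytes()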

Use any jpg/jpeg image of your choice and replace "assets/grand-canyon.jpg" with the path of your image. For this tutorial we will use the following image (source here).
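A sketch of a REST request that includes the image; JSON-encoding the content and type lists for a multi-part message is an assumption based on the single-message note above, and the MIME type must match your image:

import json
import requests

image_b64 = serialize_image_rest("assets/grand-canyon.jpg")

inference_request = {
    "inputs": [
        {"name": "role", "shape": [1], "datatype": "BYTES", "data": ["user"]},
        # Multi-part message: a text part followed by the base64-encoded image
        {"name": "content", "shape": [1], "datatype": "BYTES",
         "data": [json.dumps(["Describe this image.", image_b64])]},
        {"name": "type", "shape": [1], "datatype": "BYTES",
         "data": [json.dumps(["text", "image/jpeg"])]},
    ],
}

endpoint = "http://localhost:8080/v2/models/gemini-chat-completions/infer"
response = requests.post(endpoint, json=inference_request)
print(response.json())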

Audio

We download an mp3 audio sample to be used in this example.

The following code defines utility functions used to serialize the audio file for REST and gRPC.
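
As with the image, a sketch of what this could look like:

import base64
from pathlib import Path


def serialize_audio_rest(path: str) -> str:
    """Read an audio file and return its base64 string encoding (for REST requests)."""
    return base64.b64encode(Path(path).read_bytes()).decode("utf-8")


def serialize_audio_grpc(path: str) -> bytes:
    """Read an audio file and return its raw bytes (for gRPC requests)."""
    return Path(path).read_bytes()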

Deploying on Seldon Core 2

We will now demonstrate how to deploy the chat completions model on Seldon Core 2. All the other models can be deployed following the same steps.

While the runtime image can be used as a stand-alone server, in most cases you'll want to deploy it as part of a Kubernetes cluster. This section assumes you have a Kubernetes cluster running with Seldon Core 2 installed in the seldon namespace. In order to start serving Gemini models, you first have to create a secret for the Gemini API key and deploy the API Runtime server. Please check our installation tutorial to see how to do so.

To deploy the chat completions model, we will need to create the associated manifest file.
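
A sketch of such a manifest, assuming the model-settings.json above has been uploaded to a storage bucket readable by the cluster; the storageUri and the requirements entry below are placeholders to adjust to your installation:

apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: gemini-chat-completions
  namespace: seldon
spec:
  storageUri: gs://my-bucket/models/gemini-chat-completions
  requirements:
    - llm-api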

To load the model in Seldon Core 2, run:
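
Assuming the manifest above was saved as gemini-chat-completions.yaml:

kubectl apply -f gemini-chat-completions.yaml -n seldon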

Before sending the actual request, we need to get the mesh IP. The following utility function will help you retrieve the correct IP:
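
A sketch of such a helper, assuming the Seldon Core 2 mesh is exposed through a LoadBalancer service named seldon-mesh in the seldon namespace (adjust the service name and namespace to your installation):

import subprocess


def get_mesh_ip(namespace: str = "seldon", service: str = "seldon-mesh") -> str:
    """Return the external IP of the Seldon mesh LoadBalancer service."""
    return subprocess.check_output(
        [
            "kubectl", "get", "svc", service,
            "-n", namespace,
            "-o", "jsonpath={.status.loadBalancer.ingress[0].ip}",
        ],
        text=True,
    ).strip()


MESH_IP = get_mesh_ip()
print(MESH_IP)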

As before, we can now send a request to the model:
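
For example; the Seldon-Model header used to route the request through the mesh, and the default HTTP port, are assumptions to adjust to your ingress setup:

import requests

inference_request = {
    "inputs": [
        {"name": "role", "shape": [2], "datatype": "BYTES",
         "data": ["system", "user"]},
        {"name": "content", "shape": [2], "datatype": "BYTES",
         "data": ["You are a helpful assistant", "Hello from Seldon Core 2"]},
        {"name": "type", "shape": [2], "datatype": "BYTES",
         "data": ["text", "text"]},
    ],
}

endpoint = f"http://{MESH_IP}/v2/models/gemini-chat-completions/infer"
headers = {"Seldon-Model": "gemini-chat-completions"}
response = requests.post(endpoint, json=inference_request, headers=headers)
print(response.json())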

You now have a deployed model in Seldon Core 2, ready and available for requests! To unload the model, run the following command:
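
Again assuming the manifest file name used above:

kubectl delete -f gemini-chat-completions.yaml -n seldon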
