
HuggingFace

This package provides an MLServer runtime compatible with HuggingFace Transformers.

Usage

You can install the runtime, alongside mlserver, as:

pip install mlserver mlserver-huggingface

For further information on how to use MLServer with HuggingFace, you can check out this worked-out example.

Content Types

The HuggingFace runtime will always decode the input request using its own built-in codec. Therefore, content type annotations at the request level will be ignored. Note that this doesn't apply to input-level content type annotations, which will be respected as usual.
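
For instance, a request payload can still annotate each individual input with a content type. Below is a minimal sketch of a V2 inference request for a question-answering model; the input names question and context and the example strings are illustrative, following the fields a question-answering pipeline expects:

{
  "inputs": [
    {
      "name": "question",
      "shape": [1],
      "datatype": "BYTES",
      "parameters": {"content_type": "str"},
      "data": ["What is MLServer?"]
    },
    {
      "name": "context",
      "shape": [1],
      "datatype": "BYTES",
      "parameters": {"content_type": "str"},
      "data": ["MLServer is an open-source inference server."]
    }
  ]
}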

Settings

The HuggingFace runtime exposes a couple of extra parameters which can be used to customise how the runtime behaves. These settings can be added under the parameters.extra section of your model-settings.json file, e.g.

{
  "name": "qa",
  "implementation": "mlserver_huggingface.HuggingFaceRuntime",
  "parameters": {
    "extra": {
      "task": "question-answering",
      "optimum_model": true
    }
  }
}
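
In the example above, setting optimum_model to true tells the runtime to load the model through HuggingFace's Optimum library for optimised inference; leaving it unset loads a standard Transformers pipeline instead.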

Loading models

Local models

It is possible to load a local model into a HuggingFace pipeline by specifying the model artefact folder path in parameters.uri in model-settings.json.
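
As a sketch, a model-settings.json along these lines should work, where /models/qa-local is a placeholder for your own artefact folder:

{
  "name": "qa-local",
  "implementation": "mlserver_huggingface.HuggingFaceRuntime",
  "parameters": {
    "uri": "/models/qa-local",
    "extra": {
      "task": "question-answering"
    }
  }
}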

HuggingFace models

Models in the HuggingFace hub can be loaded by specifying their name in parameters.extra.pretrained_model in model-settings.json.
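
For example, the following sketch loads a model straight from the hub; distilbert-base-cased-distilled-squad is used purely as an illustrative model name:

{
  "name": "qa-hub",
  "implementation": "mlserver_huggingface.HuggingFaceRuntime",
  "parameters": {
    "extra": {
      "task": "question-answering",
      "pretrained_model": "distilbert-base-cased-distilled-squad"
    }
  }
}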

Reference

You can find the full reference of the accepted extra settings for the HuggingFace runtime below:
