RunnerBase

RunnerBase()

Base class for model registration.

This base class introduces five static methods:

  • predict: Make a prediction with the given data and model. This method must be overridden. The data it receives is the result of preprocess_data, and its return value is passed to postprocess_data before being served.
  • save_model: Save the model to a file. The return value of this method is given to the load_model method on model loading. If this method is overridden, load_model must be overridden as well.
  • load_model: Load the model from a file.
  • preprocess_data: Preprocess the data before prediction, converting the API input data into the model input data.
  • postprocess_data: Postprocess the data after prediction, converting the model output data into the API output data.

Check each method’s docstring for more information.
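To serve a custom model, subclass RunnerBase and override the hooks you need; only predict is mandatory. A bare skeleton is shown below (the class name is illustrative, and RunnerBase is assumed to be exposed at the package root); concrete sketches for each method follow in the sections below.

import vessl

class MyRunner(vessl.RunnerBase):
    @staticmethod
    def predict(model, data):
        # Mandatory. data is the return value of preprocess_data, and the
        # value returned here is passed to postprocess_data before serving.
        ...

    # Optionally override save_model, load_model, preprocess_data, and
    # postprocess_data as well.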

Methods:

.load_model

vessl.load_model(
   props: Union[Dict[str, str], None], artifacts: Dict[str, str]
)

Load the model instance from a file.

props is the return value of save_model, and artifacts is the dictionary given to the register_model function.

If save_model is not overridden, props will be None.

Args

  • props (dict | None) : Data that was returned by save_model. If save_model is not overridden, this will be None.
  • artifacts (dict) : Data that is given by the register_model function.

Returns Model instance.
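Continuing the MyRunner sketch, a load_model override for a PyTorch model might look as follows. The "weights" key and the Linear architecture are assumptions of this sketch, paired with the save_model sketch further below; it also assumes the saved file is available at the same relative path when load_model runs.

import torch
import vessl

class MyRunner(vessl.RunnerBase):
    @staticmethod
    def load_model(props, artifacts):
        # props is the dict returned by save_model (None if save_model was
        # not overridden); artifacts is the dict given to register_model.
        model = torch.nn.Linear(4, 2)  # illustrative architecture
        model.load_state_dict(torch.load(props["weights"]))
        model.eval()
        return model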

.preprocess_data

vessl.preprocess_data(
   data: InputDataType
)

Preprocess the given data.

The data processed by this method will be given to the model.

Args

  • data : Data to be preprocessed.

Returns Preprocessed data that will be given to the model.
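Continuing the sketch, and assuming the API receives a JSON body with an "inputs" field holding a nested list of floats:

import torch
import vessl

class MyRunner(vessl.RunnerBase):
    @staticmethod
    def preprocess_data(data):
        # Convert the API payload into the tensor the model expects.
        return torch.tensor(data["inputs"], dtype=torch.float32)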

.predict

vessl.predict(
   model: ModelType, data: ModelInputDataType
)

Make a prediction with the given data and model.

Args

  • model (model_instance) : Model instance.
  • data : Data to be predicted.

Returns Prediction result.
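Continuing the sketch for a PyTorch model, where data is the tensor produced by preprocess_data:

import torch
import vessl

class MyRunner(vessl.RunnerBase):
    @staticmethod
    def predict(model, data):
        # Run inference without tracking gradients.
        with torch.no_grad():
            return model(data)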

.postprocess_data

vessl.postprocess_data(
   data: ModelOutputDataType
)

Postprocess the given data.

The data processed by this method will be given to the user.

Args

  • data : Data to be postprocessed.

Returns Postprocessed data that will be given to the user.
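Continuing the sketch, converting a PyTorch output tensor into a JSON-serializable response (the response shape is an assumption of this sketch):

import vessl

class MyRunner(vessl.RunnerBase):
    @staticmethod
    def postprocess_data(data):
        # Turn the model output tensor into plain Python types for the API.
        return {"predictions": data.argmax(dim=-1).tolist()}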

.save_model

vessl.save_model(
   model: ModelType
)

Save the given model instance to a file.

The return value of this method will be passed as the first argument (props) to load_model on model loading.

Args

  • model (model_instance) : Model instance to save.

Returns (dict) Data that will be passed to load_model on model loading. Must be a dictionary whose keys and values are both strings.
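Continuing the sketch, a save_model override that persists PyTorch weights and hands the file name back through props. The file name and the "weights" key are illustrative, and the sketch assumes the written file is available at the same relative path when load_model runs.

import torch
import vessl

class MyRunner(vessl.RunnerBase):
    @staticmethod
    def save_model(model):
        # The returned dict (string keys and values only) is passed to
        # load_model as props on model loading.
        torch.save(model.state_dict(), "model.pt")
        return {"weights": "model.pt"}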


register_model

vessl.register_model(
   repository_name: str, model_number: Union[int, None], runner_cls: RunnerBase,
   model_instance: Union[ModelType, None] = None, requirements: List[str] = None,
   artifacts: Dict[str, str] = None, **kwargs
)

Register the given model for serving. To override the default organization, pass organization_name as a keyword argument.

Args

  • repository_name (str) : Model repository name.
  • model_number (int | None) : Model number. If None, a new model will be created; in that case, model_instance must be given.
  • runner_cls (RunnerBase) : Runner class that includes code for service.
  • model_instance (ModelType | None) : Model instance. If None, runner_cls must override the load_model method. Defaults to None.
  • requirements (List[str]) : Python requirements for the model. Defaults to [].
  • artifacts (Dict[str, str]) : Artifacts to be uploaded. Each key is the path to an artifact in the local filesystem, and its value is the path in the model volume. Only a trailing asterisk (*) is allowed as a glob pattern. Defaults to {}.

Example

register_model(
    repository_name="my-model",
    model_number=1,
    runner_cls=MyRunner,
    model_instance=model_instance,
    requirements=["torch", "torchvision"],
    artifacts={"model.pt": "model.pt", "checkpoints/": "checkpoints/"},
)

register_torch_model

vessl.register_torch_model(
   repository_name: str, model_number: Union[int, None], model_instance: ModelType,
   preprocess_data = None, postprocess_data = None, requirements: List[str] = None,
   **kwargs
)

Register the given PyTorch model instance for serving. To override the default organization, pass organization_name as a keyword argument.

Args

  • repository_name (str) : Model repository name.
  • model_number (int | None) : Model number. If None, a new model will be created.
  • model_instance (model_instance) : PyTorch model instance.
  • preprocess_data (callable) : Function that preprocesses the data. Defaults to the identity function.
  • postprocess_data (callable) : Function that postprocesses the data. Defaults to the identity function.
  • requirements (list) : List of requirements. Defaults to [].

Example

vessl.register_torch_model(
    repository_name="my-model",
    model_number=1,
    model_instance=model_instance,
)
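
The optional hooks can be passed as plain callables. A hedged sketch follows; the lambda bodies and the requirements list are illustrative, and model_instance is assumed to be an existing torch.nn.Module.

import torch
import vessl

vessl.register_torch_model(
    repository_name="my-model",
    model_number=None,  # None registers a new model
    model_instance=model_instance,
    preprocess_data=lambda data: torch.tensor(data, dtype=torch.float32),
    postprocess_data=lambda output: output.tolist(),
    requirements=["torch"],
)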