`state_dict`, or the model's parameters. (If you saved the model's layers as well, you do not have to redefine the layers.) Then, we define a `MyRunner` class, which inherits from `vessl.RunnerBase` and provides instructions for how to serve our model. You can read more about each method here.
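Below is a minimal sketch of such a runner for a PyTorch model. The static-method interface and the `load_model` and `predict` hooks are assumptions based on the surrounding text (only `preprocess_data` and `postprocess_data` are named in this guide); `MyModel` and `model.pt` are illustrative placeholders.

```python
import torch
import torch.nn as nn
import vessl


class MyModel(nn.Module):
    """Hypothetical architecture standing in for your own model definition."""

    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(3, 1)

    def forward(self, x):
        return self.linear(x)


class MyRunner(vessl.RunnerBase):
    @staticmethod
    def load_model(props, artifacts):
        # Redefine the layers, then load the saved state_dict (the parameters).
        model = MyModel()
        model.load_state_dict(torch.load("model.pt"))  # placeholder checkpoint path
        model.eval()
        return model

    @staticmethod
    def preprocess_data(data):
        # Turn the raw request payload into model input.
        return torch.tensor(data["input"], dtype=torch.float32)

    @staticmethod
    def predict(model, data):
        with torch.no_grad():
            return model(data)

    @staticmethod
    def postprocess_data(data):
        # Turn the prediction back into a JSON-serializable response.
        return {"output": data.tolist()}
```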
Finally, we register the model using `vessl.register_model`. We specify the repository name and model number, pass `MyRunner` as the runner class we will use for serving, and list any requirements to install. After executing the script, you should see that two files have been generated: `vessl.manifest.yaml`, which stores metadata, and `vessl.runner.pkl`, which stores the runner binary. Your model has been registered and is ready for service.
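As a sketch, the registration call might look like the following; the keyword names (`repository_name`, `model_number`, `runner_cls`, `requirements`) are assumptions inferred from the description above, so check them against the SDK reference.

```python
import vessl

# Keyword names below are assumptions based on the prose above.
vessl.register_model(
    repository_name="my-model-repository",  # placeholder repository name
    model_number=1,                         # placeholder model number
    runner_cls=MyRunner,                    # the runner class defined earlier
    requirements=["torch"],                 # pip packages to install for serving
)
```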
You can also pass the trained model object itself to `vessl.register_model` to register a new model as well. This time, three files are generated: `vessl.manifest.yaml`, which stores metadata, `vessl.runner.pkl`, which stores the runner binary, and `vessl.model.pkl`, which stores the trained model. Your model has been registered and is ready for service. In this case, you only need to define `preprocess_data` and `postprocess_data`; the other methods are autogenerated.
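The sketch below shows this variant under the same naming assumptions as before; in particular, the `model_instance` keyword and the `trained_model` stand-in are hypothetical, so substitute whatever your training script actually produced.

```python
import torch
import vessl


class MySimpleRunner(vessl.RunnerBase):
    # Only the two data hooks are defined here; the remaining runner
    # methods are autogenerated because the model object itself is stored.
    @staticmethod
    def preprocess_data(data):
        return torch.tensor(data["input"], dtype=torch.float32)

    @staticmethod
    def postprocess_data(data):
        return {"output": data.tolist()}


trained_model = torch.nn.Linear(3, 1)  # stand-in for your trained model object

vessl.register_model(
    repository_name="my-model-repository",  # placeholder repository name
    model_number=2,                         # placeholder model number
    runner_cls=MySimpleRunner,
    model_instance=trained_model,           # hypothetical keyword: the model to store
    requirements=["torch"],
)
```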
To get a prediction from the served model, send a request using the POST method and pass your authentication token as a header. Pass your input data in the format you've specified in your runner when you registered the model. You should receive a response with the prediction.
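For example, using Python's `requests` library; the endpoint URL, header name, and payload shape below are placeholders rather than VESSL's actual values, so replace them with the ones shown for your service.

```python
import requests

# Endpoint URL and header name are placeholders; substitute your service's values.
response = requests.post(
    "https://<your-service-endpoint>/predict",
    headers={"X-AUTH-TOKEN": "<your-authentication-token>"},  # hypothetical header name
    json={"input": [[0.1, 0.2, 0.3]]},  # the format defined in preprocess_data
)
print(response.json())  # e.g. {"output": [...]}
```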