Launch a barebone GPU-accelerated workload
This example launches the most barebone GPU-accelerated workload, `nvidia-smi`. It illustrates the basic components of a single run and how you can deploy one.

If you encounter `ModuleNotFoundError: No module named 'packaging'`, please run the command `pip install packaging`.
A run is defined by three key sections: `resources`, `image`, and `run`. In this example, we will create `quickstart.yaml` and define the key-value pairs one by one.
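As a rough sketch, the file will have three top-level keys, each filled in over the following steps:

```yaml
# quickstart.yaml -- skeleton, filled in step by step below
resources:   # hardware specs for the run
image:       # container image for the runtime environment
run:         # the command(s) to execute
```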
Spin up a compute instance
`resources` defines the hardware specs you will use for your run. Here's an example that uses our managed cloud to launch an A10 instance. You can see the full list of compute options and their string values for `preset` under Resources. Later, you will be able to add and launch workloads on your private cloud or on-premises clusters simply by changing the value for `cluster`.
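The snippet below sketches that block; the `cluster` and `preset` strings here are illustrative placeholders, so check the Resources page for the exact values available to you:

```yaml
resources:
  cluster: vessl-managed     # placeholder: VESSL's managed cloud cluster
  preset: gpu-a10-small      # placeholder: an A10 GPU preset from the Resources page
```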
Configure a runtime environment
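The runtime environment is chosen with the top-level `image` key, which names the container image the run executes in. A minimal sketch, assuming a public CUDA base image (substitute an image that carries your own dependencies):

```yaml
image: nvidia/cuda:12.2.0-base-ubuntu22.04   # assumed example image
```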
Write a run command
Finally, let's write the command we want to execute, `nvidia-smi`. We can do this by defining a pair of `workdir` and `command` under `run`.
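A sketch of that block, assuming a list-style `run` entry and `/root` as the working directory (any path works for `nvidia-smi`):

```yaml
run:
  - workdir: /root        # assumed working directory
    command: nvidia-smi   # the command this quickstart executes
```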
Add metadata

Optionally, you can add metadata to the run, such as a `name` and a short `description`.
With that, `quickstart.yaml` is complete.
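Assembled, the file might look like the sketch below; the `cluster`, `preset`, and `image` values are illustrative placeholders:

```yaml
# quickstart.yaml
name: quickstart
description: Print GPU information with nvidia-smi
resources:
  cluster: vessl-managed      # placeholder: managed cloud cluster
  preset: gpu-a10-small       # placeholder: an A10 preset from the Resources page
image: nvidia/cuda:12.2.0-base-ubuntu22.04   # placeholder runtime image
run:
  - workdir: /root
    command: nvidia-smi
```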
Now, let's create and launch the run with the `vessl run` CLI command:
```sh
vessl run create -f quickstart.yaml
```
As you launch the workload with `vessl run`, VESSL performs the following as defined in `quickstart.yaml`:

- Spin up a compute instance with the hardware specs under `resources`.
- Set up the runtime environment from the container image under `image`.
- Run `nvidia-smi` and print the result.