- **Name** — Set a name for the preset. Use names that represent the preset well, like `a100-2.mem-16.cpu-6`.
- **Processor type** — Define the preset by processor type, either CPU or GPU.
- **CPU limit** — Enter the number of CPUs. For `a100-2.mem-16.cpu-6`, enter `6`.
- **Memory limit** — Enter the amount of memory in GB. For `a100-2.mem-16.cpu-6`, the number would be `16`.
- **Priority** — Assigning different priority values disables the First In, First Out (FIFO) scheduler and executes workloads based on their priority, with lower priority values being processed first. In the example preset above, workloads running on `cpu-medium` are always prioritized over workloads on other presets. To view the priority assigned to each node, click the **Edit** button under **Resource Specs**.
- **GPU type** — Specify the GPU model you are using by running the `nvidia-smi` command on your server. In the example below, the GPU type is `a100-sxm-80gb`.
- **GPU limit** — Enter the number of GPUs. For `gpu2.mem16.cpu6`, enter `2`. You can also enter decimal values if you are using Multi-Instance GPU (MIG).
- **Available workloads** — Select the types of workloads that can use the preset. With this, you can guide users toward Experiments by preventing them from running a Workspace with 4 or 8 GPUs.
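The CPU, memory, and GPU limits above map naturally onto a Kubernetes-style container resource specification. The sketch below illustrates that mapping in Python; the exact schema VESSL generates from a preset is an assumption, and `preset_to_limits` is a hypothetical helper, not part of any VESSL API.

```python
# Hypothetical sketch: how a preset such as a100-2.mem-16.cpu-6 could map
# onto Kubernetes-style resource limits. The exact schema is an assumption.
def preset_to_limits(cpu: int, memory_gb: int, gpus: float) -> dict:
    limits = {
        "cpu": str(cpu),             # CPU limit, e.g. 6
        "memory": f"{memory_gb}Gi",  # memory limit, e.g. 16Gi
    }
    if gpus:
        # Fractional values are possible with Multi-Instance GPU (MIG).
        limits["nvidia.com/gpu"] = str(gpus)
    return limits

print(preset_to_limits(6, 16, 2))
# {'cpu': '6', 'memory': '16Gi', 'nvidia.com/gpu': '2'}
```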
- **Tolerations** — Add tolerations so workloads using the preset can be scheduled onto tainted nodes. A toleration matches a node's taint in one of two ways, depending on its operator:
  - **Equal** — The `Key` and `Value` match the node's taint exactly. Example: if a node has a taint `key=value`, the toleration must also specify `key=value` to allow scheduling.
  - **Exists** — The `Key` exists, regardless of the `Value`. Example: if a node has a taint whose key is `key` (with any value), the toleration only needs to specify `key` to allow scheduling.

  For example, if a node is tainted with `key=gpu, value=true, effect=NoSchedule`, you can schedule onto it with a toleration of `key=gpu, value=true, operator=Equal`.
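The two operator behaviors can be illustrated with a small matching function. This is a sketch of the standard Kubernetes toleration-matching rules, not VESSL's actual scheduler code:

```python
# Sketch of Kubernetes toleration matching for the Equal and Exists operators.
# Illustrates when a toleration allows scheduling onto a tainted node.
def tolerates(taint: dict, toleration: dict) -> bool:
    if toleration["operator"] == "Exists":
        # The Key only has to exist on the taint; the Value is ignored.
        return toleration["key"] == taint["key"]
    # Equal: both Key and Value must match the taint exactly.
    return (toleration["key"] == taint["key"]
            and toleration.get("value") == taint.get("value"))

taint = {"key": "gpu", "value": "true", "effect": "NoSchedule"}
print(tolerates(taint, {"key": "gpu", "value": "true", "operator": "Equal"}))   # True
print(tolerates(taint, {"key": "gpu", "operator": "Exists"}))                   # True
print(tolerates(taint, {"key": "gpu", "value": "false", "operator": "Equal"}))  # False
```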
- **Node selectors** — Assign workloads to specific nodes using a label key (`vessl.ai/role`) and value (`gpu-worker`) pair (`Key=Value`). Example: to target nodes labeled `vessl.ai/role=gpu-worker`, set the node selector key to `vessl.ai/role` and the value to `gpu-worker`.
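Node selector matching follows the standard Kubernetes rule: a workload is eligible for a node only when every selector pair appears in the node's labels. A minimal sketch of that rule (the label values here are illustrative):

```python
# Sketch: a node selector schedules only onto nodes whose labels contain
# every selector Key=Value pair.
def selector_matches(node_labels: dict, selector: dict) -> bool:
    return all(node_labels.get(k) == v for k, v in selector.items())

labels = {"vessl.ai/role": "gpu-worker", "kubernetes.io/os": "linux"}
print(selector_matches(labels, {"vessl.ai/role": "gpu-worker"}))  # True
print(selector_matches(labels, {"vessl.ai/role": "cpu-worker"}))  # False
```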