Creating a Sweep
To create a Sweep you need to specify a few options including objective, parameters, algorithm, and runtime.
You should log the objective metric with the VESSL Python SDK in your code.
Specify the objective metric that you want to optimize. You can choose whether to maximize or minimize the target value.
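A minimal sketch of logging the objective metric from a training loop. The metric key `val_accuracy` is a hypothetical name; it must match the objective metric configured for the sweep. The real call would go through the SDK (e.g. `vessl.log`), shown here only in a comment so the sketch runs standalone.

```python
def log_metric(step, payload):
    # In a real experiment this would call the VESSL SDK, e.g.:
    #   vessl.log(step=step, payload=payload)
    # A print stand-in is used so this sketch runs without the SDK.
    print(f"step {step}: {payload}")

def train():
    val_accuracy = 0.0
    for step in range(1, 4):
        # ... run one training step, then evaluate ...
        val_accuracy = 0.5 + 0.1 * step  # placeholder metric
        # The key must match the sweep's objective metric name.
        log_metric(step, {"val_accuracy": val_accuracy})
    return val_accuracy

train()
```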
Specify the following parameters with a positive integer:
- Max experiment count: The maximum number of experiments to run. A Sweep keeps spawning new experiments until the total number of experiments reaches this count.
- Parallel experiment count: The number of experiments to run concurrently. The Sweep runs experiments in parallel up to this count.
- Max failed experiment count: The number of allowed failed experiments. If the number of failed experiments exceeds this count, the Sweep will no longer spawn new experiments.
Note that both the Parallel experiment count and the Max failed experiment count should be less than or equal to the Max experiment count.
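The interaction of the three counts can be illustrated with a toy scheduler. This is a hypothetical sketch of the stopping semantics described above, not VESSL's actual scheduler; experiments are spawned in batches of the parallel count, and the failure limit is checked between batches.

```python
def run_sweep(max_experiments, parallel, max_failed, outcomes):
    """Toy model of sweep stopping rules.

    outcomes[i] is True if experiment i succeeds. Returns the number
    of experiments spawned and the number that failed.
    """
    spawned = failed = 0
    # Keep spawning while under the experiment cap and the failure
    # count has not exceeded the allowed maximum.
    while spawned < max_experiments and failed <= max_failed:
        batch = min(parallel, max_experiments - spawned)
        for _ in range(batch):
            if not outcomes[spawned]:
                failed += 1
            spawned += 1
    return spawned, failed
```

For example, with `max_experiments=10`, `parallel=2`, and `max_failed=3`, a run whose first six outcomes include four failures stops after six experiments; with all successes, the sweep stops only when it reaches the max experiment count.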
- Grid search: An exhaustive search over all combinations in a specified search space. Every parameter's search space must be discrete and bounded. If each of two parameters has three possible values, the total number of possible experiments is 3 × 3 = 9.
- Bayesian optimization: A global optimization method for noisy black-box functions. Bayesian optimization selects the next parameter values using a probabilistic model of the function mapping hyperparameter values to the objective.
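Grid search's combination count can be verified with a short sketch. The parameter names and values below are hypothetical examples of two discrete, bounded search spaces.

```python
from itertools import product

# Grid search enumerates every combination of the discrete search
# spaces: two parameters with three values each give 3 * 3 = 9
# experiments.
learning_rates = [0.1, 0.01, 0.001]  # hypothetical parameter 1
batch_sizes = [32, 64, 128]          # hypothetical parameter 2

grid = list(product(learning_rates, batch_sizes))
print(len(grid))  # 9
```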
- Name: The name of the parameter that is applied to the experiment as an environment variable at runtime.
- Type: Choose among the categorical, int, and double parameter types.
- Range: Choose between the search space and list options. For a categorical type, only the list option is available.
- Value: The input form of the value is determined by the range type. For the search space option, a continuous space is defined with min, max, and step; for the list option, a search space is defined with discrete values.
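Since each parameter is applied to the experiment as an environment variable named after it, the experiment code reads and casts it at runtime. The parameter name `learning_rate` below is a hypothetical example; the first line simulates the value the sweep would inject.

```python
import os

# Simulate the environment variable a sweep would inject; in a real
# run the variable is already set, named after the sweep parameter.
os.environ["learning_rate"] = "0.01"

# Environment variables are strings, so cast according to the
# parameter's declared type (double here).
learning_rate = float(os.environ["learning_rate"])
```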
You can set early stopping to prevent overfitting on the training dataset. It supports the median algorithm, which takes two input values, including start_step. VESSL examines the metric value at each step after start_step and compares it to the median value of the completed experiments to decide whether to trigger early stopping.
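The median rule described above can be sketched as follows. This is an illustrative model, not VESSL's implementation, and it assumes a maximize objective: a run whose metric falls below the median of completed experiments at the same step is stopped early.

```python
from statistics import median

def should_stop(step, current_metric, completed_histories, start_step):
    """Median stopping rule sketch (assumes a maximize objective).

    completed_histories is a list of {step: metric} dicts from
    completed experiments.
    """
    # No early stopping before start_step.
    if step < start_step:
        return False
    # Metric values of completed experiments at this step.
    peers = [h[step] for h in completed_histories if step in h]
    if not peers:
        return False
    # Stop if the current run is below the median of its peers.
    return current_metric < median(peers)
```

For instance, if completed experiments reached 0.6, 0.7, and 0.8 at step 5, a running experiment at 0.55 would be stopped, while one at 0.75 would continue.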
Configuring the runtime option is similar to creating an experiment:
You can retrieve the configuration of prior experiments by clicking Configure from Prior Experiments.