YAML configs

Running Sweeps Using YAML

You can run a hyperparameter sweep from the Grid YAML file by passing the hyper_params key. There are two sub-keys you can configure:
    settings: used to choose the search strategy and, for random search, the number of trials
    params: used to configure which command-line flags are passed to your script and which values they receive
You can pass any of the supported Python or NumPy expressions to each params key. For example:
```yaml
hyper_params:

  settings:
    strategy: random_search # either random_search or grid_search
    trials: 2 # only used in random_search

  params:
    learning_rate: uniform(0.001, 0.008, 20)
    gamma: 0.234
```
That will generate 20 values for learning_rate.
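The docs don't show how uniform() is implemented, so as a rough mental model only: a minimal Python sketch, assuming uniform(low, high, n) draws n values uniformly at random from the interval [low, high] (the function below is a hypothetical stand-in, not Grid's actual code):

```python
import random

def uniform(low, high, n):
    """Hypothetical stand-in for Grid's uniform() sweep expression:
    draws n values uniformly at random from [low, high]."""
    return [random.uniform(low, high) for _ in range(n)]

# uniform(0.001, 0.008, 20) yields 20 candidate learning rates.
values = uniform(0.001, 0.008, 20)
print(len(values))                                # 20
print(all(0.001 <= v <= 0.008 for v in values))   # True
```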

Using Environment Variables

You can pass environment variables to your experiment by using the environment key. Any values you set there will be available in your experiment context. For example:
```yaml
compute:
  train:
    environment:
      MY_ENVIRONMENT_VARIABLE: "example"
```
The environment variable MY_ENVIRONMENT_VARIABLE will be injected into your experiment runtime.
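Since the variable is injected into the runtime, your training script can read it with the standard library. A small sketch (the helper name and fallback behavior are illustrative, not part of Grid):

```python
import os

def read_config(name: str, default: str) -> str:
    """Read an injected environment variable inside the training script,
    falling back to a default when running outside Grid (e.g. locally)."""
    return os.environ.get(name, default)

# my_var = read_config("MY_ENVIRONMENT_VARIABLE", "local-default")
```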

Specifying Requirement Files

Grid automatically installs dependencies into your project using either pip or conda. It does so by looking for a file named requirements.txt or environment.yml in your project's root.
If your dependencies live elsewhere, you can specify their location using the dependency_file_info attribute in the Grid YAML config.
```yaml
compute:
  train:
    dependency_file_info:
      package_manager: pip
      path: ./requirements/requirements.txt # can have any name
```

Full Example

In this example, we run a hyperparameter sweep that creates 2 experiments. That's because we are using random_search with the trials parameter set to 2, which randomly samples 2 hyperparameter combinations from the 20 generated by learning_rate and gamma (gamma takes a single fixed value, so learning_rate alone determines the 20 combinations).
```yaml
# Main compute configuration.
compute:

  # Add cloud configuration here.
  provider:

    credentials: XXXXXX # Cloud key ID
    region: us-east-1 # Cloud region
    vendor: aws # Vendor, only aws

  # Training configuration.
  train:

    cpus: 1 # Number of CPUs
    gpus: 0 # Number of GPUs
    instance: t2.xlarge # AWS instance type
    memory: null # RAM memory
    nodes: 0 # Nodes to start with
    scale_down_seconds: 1800 # Seconds between every scaling-down evaluation

    # Your environment variables
    environment:
      MY_ENVIRONMENT_VARIABLE: "example"

    # Dependency file specification
    dependency_file_info:
      package_manager: pip
      path: ./requirements/requirements.txt # can have any name

hyper_params:
  settings:
    strategy: random_search
    trials: 2
  params:
    learning_rate: uniform(0.001, 0.008, 20)
    gamma: 0.234
```
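To make the arithmetic concrete: the params block expands into a grid of combinations (20 learning_rate values × 1 gamma value = 20), and random_search then samples trials = 2 of them. A minimal sketch of that counting logic, assuming sampling without replacement; the evenly spaced learning rates below are only placeholders for whatever uniform() actually generates, and none of this is Grid's real implementation:

```python
import itertools
import random

# Placeholder for the 20 values produced by uniform(0.001, 0.008, 20).
learning_rate = [0.001 + i * (0.008 - 0.001) / 19 for i in range(20)]
gamma = [0.234]  # a single fixed value

# Full grid of hyperparameter combinations: 20 x 1 = 20.
combinations = list(itertools.product(learning_rate, gamma))
print(len(combinations))  # 20

# random_search with trials: 2 picks 2 of those combinations.
trials = 2
experiments = random.sample(combinations, trials)
print(len(experiments))   # 2
```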