---
id: submitit_launcher
title: Submitit Launcher plugin
sidebar_label: Submitit Launcher plugin
---


The Submitit Launcher plugin provides a SLURM Launcher based on Submitit.

## Installation

This plugin requires Hydra 1.0 (release candidate):

```shell
$ pip install hydra-submitit-launcher --pre
```

## Usage

Once installed, add `hydra/launcher=submitit` to your command line. Alternatively, override `hydra/launcher` in your config:

```yaml
defaults:
  - hydra/launcher: submitit
```

Note that this plugin expects a valid environment on the target host; usually this means a shared file system between the launching host and the target host.
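Individual launcher parameters can be overridden from your config in the same way. A minimal sketch, assuming a SLURM cluster; the `dev` partition name and the values shown are placeholders for whatever your cluster provides, and the paths follow the `params` structure of the launcher config:

```yaml
hydra:
  launcher:
    params:
      queue: slurm
      queue_parameters:
        slurm:
          partition: dev
          time: 120
```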

Submitit supports three queue types: `auto`, `local`, and `slurm`. Its config looks like this:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class QueueType(Enum):
    auto = "auto"
    local = "local"
    slurm = "slurm"


@dataclass
class SlurmQueueConf:
    # Params are used to configure sbatch, for more info check:
    # https://github.com/facebookincubator/submitit/blob/master/submitit/slurm/slurm.py

    # maximum time for the job in minutes
    time: int = 60
    # number of cpus to use for each task
    cpus_per_task: int = 10
    # number of gpus to use on each node
    gpus_per_node: int = 1
    # number of tasks to spawn on each node
    ntasks_per_node: int = 1
    # number of nodes to use for the job
    nodes: int = 1
    # memory to reserve for the job on each node, in GB
    mem: str = "${hydra.launcher.mem_limit}GB"
    # slurm partition to use on the cluster
    partition: Optional[str] = None
    # USR1 signal delay before timeout
    signal_delay_s: int = 120
    # name of the job
    job_name: str = "${hydra.job.name}"
    # Maximum number of retries on job timeout.
    # Change this only after you confirmed your code can handle re-submission
    # by properly resuming from the latest stored checkpoint.
    # check the following for more info on slurm_max_num_timeout
    # https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md
    max_num_timeout: int = 0


@dataclass
class LocalQueueConf:
    # local executor mocks the behavior of slurm locally

    # maximum time for the job in minutes
    timeout_min: int = 60
    # number of gpus to use on each node
    gpus_per_node: int = 1
    # number of tasks to spawn on each node (only one node available in local executor)
    tasks_per_node: int = 1


@dataclass
class AutoQueueConf:
    # auto executor automatically identifies and uses available cluster.
    # Currently this is only slurm, but the local executor can be manually
    # forced instead.
    # Most parameters are shared between clusters; some are cluster specific.

    # cluster to use (currently either "slurm" or "local" are supported,
    # None defaults to an available cluster)
    cluster: Optional[str] = None

    # maximum time for the job in minutes
    timeout_min: int = 60
    # number of cpus to use for each task
    cpus_per_task: int = 1
    # number of gpus to use on each node
    gpus_per_node: int = 0
    # number of tasks to spawn on each node
    tasks_per_node: int = 1
    # memory to reserve for the job on each node (in GB)
    mem_gb: int = 4
    # number of nodes to use for the job
    nodes: int = 1
    # name of the job
    name: str = "${hydra.job.name}"

    # following parameters are SLURM specific

    # Maximum number of retries on job timeout.
    # Change this only after you confirmed your code can handle re-submission
    # by properly resuming from the latest stored checkpoint.
    # check the following for more info on slurm_max_num_timeout
    # https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md
    slurm_max_num_timeout: int = 0
    # USR1 signal delay before timeout for the slurm queue
    slurm_signal_delay_s: int = 30
    # slurm partition to use on the cluster
    slurm_partition: Optional[str] = None


@dataclass
class QueueParams:
    slurm: SlurmQueueConf = field(default_factory=SlurmQueueConf)
    local: LocalQueueConf = field(default_factory=LocalQueueConf)
    auto: AutoQueueConf = field(default_factory=AutoQueueConf)


@dataclass
class SubmititConf:
    queue: QueueType = QueueType.local

    folder: str = "${hydra.sweep.dir}/.${hydra.launcher.params.queue}"

    queue_parameters: QueueParams = field(default_factory=QueueParams)
```
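Conceptually, the launcher picks the per-queue parameter block whose name matches the configured `QueueType`. A minimal sketch of that selection, using simplified stand-ins for the config classes above (the field subsets and the `pick_queue_conf` helper are illustrative, not the plugin's actual code):

```python
from dataclasses import dataclass, field
from enum import Enum


class QueueType(Enum):
    auto = "auto"
    local = "local"
    slurm = "slurm"


@dataclass
class SlurmQueueConf:
    # trimmed to two fields for brevity
    time: int = 60
    cpus_per_task: int = 10


@dataclass
class LocalQueueConf:
    timeout_min: int = 60
    tasks_per_node: int = 1


@dataclass
class QueueParams:
    slurm: SlurmQueueConf = field(default_factory=SlurmQueueConf)
    local: LocalQueueConf = field(default_factory=LocalQueueConf)


@dataclass
class SubmititConf:
    queue: QueueType = QueueType.local
    queue_parameters: QueueParams = field(default_factory=QueueParams)


def pick_queue_conf(conf: SubmititConf):
    """Return the parameter block matching conf.queue (illustrative helper)."""
    return getattr(conf.queue_parameters, conf.queue.value)


conf = SubmititConf(queue=QueueType.slurm)
print(pick_queue_conf(conf))  # SlurmQueueConf(time=60, cpus_per_task=10)
```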

See the Submitit documentation for full details on the parameters above.

An example application using this launcher is provided in the plugin repository.

Starting the app with `python my_app.py task=1,2,3 -m` will launch three executions:

```text
$ python my_app.py task=1,2,3 -m
[HYDRA] Sweep output dir : multirun/2020-05-28/15-05-22
[HYDRA]        #0 : task=1
[HYDRA]        #1 : task=2
[HYDRA]        #2 : task=3
```

You will be able to see the output of the app in the output dir:

```text
$ tree
.
β”œβ”€β”€ 0
β”‚   └── my_app.log
β”œβ”€β”€ 1
β”‚   └── my_app.log
β”œβ”€β”€ 2
β”‚   └── my_app.log
└── multirun.yaml
```


```text
$ cat 0/my_app.log
[2020-05-28 15:05:23,511][__main__][INFO] - Process ID 15887 executing task 1 ...
[2020-05-28 15:05:24,514][submitit][INFO] - Job completed successfully
```