| .TH "sbatch" "1" "SLURM 2.0" "May 2009" "SLURM Commands" |
| |
| .SH "NAME" |
| sbatch \- Submit a batch script to SLURM. |
| |
| .SH "SYNOPSIS" |
| sbatch [\fIoptions\fP] \fIscript\fP [\fIargs\fP...] |
| |
| .SH "DESCRIPTION" |
| sbatch submits a batch script to SLURM. The batch script may be given to |
| sbatch through a file name on the command line, or if no file name is specified, |
| sbatch will read in a script from standard input. The batch script may contain |
| options preceded with "#SBATCH" before any executable commands in the script. |
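| |
| For example, a minimal script of this form (the option values and |
| program name are illustrative only) might be: |
| .nf |
| #!/bin/sh |
| #SBATCH \-\-ntasks=8 |
| #SBATCH \-\-time=10 |
| srun ./my_program |
| .fi |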
| |
| sbatch exits immediately after the script is successfully transferred to the |
| SLURM controller and assigned a SLURM job ID. The batch script is not |
| necessarily granted resources immediately, it may sit in the queue of pending |
| jobs for some time before its required resources become available. |
| |
| When the job allocation is finally granted for the batch script, SLURM |
| runs a single copy of the batch script on the first node in the set of |
| allocated nodes. |
| .SH "OPTIONS" |
| .LP |
| |
| .TP |
| \fB\-\-acctg\-freq\fR=<\fIseconds\fR> |
| Define the job accounting sampling interval. |
| This can be used to override the \fIJobAcctGatherFrequency\fR parameter in SLURM's |
| configuration file, \fIslurm.conf\fR. |
| A value of zero disables the periodic job sampling and provides accounting |
| information only on job termination (reducing SLURM interference with the job). |
| |
| .TP |
| \fB\-B\fR, \fB\-\-extra\-node\-info\fR=<\fIsockets\fR[:\fIcores\fR[:\fIthreads\fR]]> |
| Request a specific allocation of resources with details as to the |
| number and type of computational resources within a cluster: |
| number of sockets (or physical processors) per node, |
| cores per socket, and threads per core. |
| The total amount of resources being requested is the product of all of |
| the terms. |
| As with \-\-nodes, each value can be a single number or a range (e.g. min\-max). |
| An asterisk (*) can be used as a placeholder indicating that all available |
| resources of that type are to be utilized. |
| As with nodes, the individual levels can also be specified in separate |
| options if desired: |
| .nf |
| \fB\-\-sockets\-per\-node\fR=<\fIsockets\fR> |
| \fB\-\-cores\-per\-socket\fR=<\fIcores\fR> |
| \fB\-\-threads\-per\-core\fR=<\fIthreads\fR> |
| .fi |
| When the task/affinity plugin is enabled, |
| specifying an allocation in this manner also instructs SLURM to use |
| a CPU affinity mask to guarantee the request is filled as specified. |
| NOTE: Support for these options is configuration dependent. |
| The task/affinity plugin must be configured. |
| In addition, either the select/linear or select/cons_res plugin must be |
| configured. |
| If select/cons_res is configured, it must have a parameter of CR_Core, |
| CR_Core_Memory, CR_Socket, or CR_Socket_Memory. |
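| |
| For example, the following two requests (the script name is illustrative) |
| are equivalent ways to ask for two sockets per node, four cores per |
| socket, and one thread per core: |
| .nf |
| sbatch \-B 2:4:1 my.script |
| sbatch \-\-sockets\-per\-node=2 \-\-cores\-per\-socket=4 \e |
|        \-\-threads\-per\-core=1 my.script |
| .fi |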
| |
| .TP |
| \fB\-\-begin\fR=<\fItime\fR> |
| Submit the batch script to the SLURM controller immediately, like normal, but |
| tell the controller to defer the allocation of the job until the specified time. |
| |
| Time may be of the form \fIHH:MM:SS\fR to run a job at |
| a specific time of day (seconds are optional). |
| (If that time is already past, the next day is assumed.) |
| You may also specify \fImidnight\fR, \fInoon\fR, or |
| \fIteatime\fR (4pm) and you can have a time\-of\-day suffixed |
| with \fIAM\fR or \fIPM\fR for running in the morning or the evening. |
| You can also say what day the job will be run, by specifying |
| a date of the form \fIMMDDYY\fR or \fIMM/DD/YY\fR or |
| \fIYYYY\-MM\-DD\fR. Combine date and time using the following |
| format \fIYYYY\-MM\-DD[THH:MM[:SS]]\fR. You can also |
| give times like \fInow + count time\-units\fR, where the time\-units |
| can be \fIseconds\fR (default), \fIminutes\fR, \fIhours\fR, |
| \fIdays\fR, or \fIweeks\fR and you can tell SLURM to run |
| the job today with the keyword \fItoday\fR and to run the |
| job tomorrow with the keyword \fItomorrow\fR. |
| The value may be changed after job submission using the |
| \fBscontrol\fR command. |
| For example: |
| .nf |
| \-\-begin=16:00 |
| \-\-begin=now+1hour |
| \-\-begin=now+60 (seconds by default) |
| \-\-begin=2010\-01\-20T12:34:00 |
| .fi |
| |
| .RS |
| .PP |
| Notes on date/time specifications: |
| \- Although the 'seconds' field of the HH:MM:SS time specification is |
| allowed by the code, note that the poll time of the SLURM scheduler |
| is not precise enough to guarantee dispatch of the job on the exact |
| second. The job will be eligible to start on the next poll |
| following the specified time. The exact poll interval depends on the |
| SLURM scheduler (e.g., 60 seconds with the default sched/builtin). |
| \- If no time (HH:MM:SS) is specified, the default is (00:00:00). |
| \- If a date is specified without a year (e.g., MM/DD) then the current |
| year is assumed, unless the combination of MM/DD and HH:MM:SS has |
| already passed for that year, in which case the next year is used. |
| .RE |
| |
| .TP |
| \fB\-\-checkpoint\fR=<\fItime\fR> |
| Specifies the interval between creating checkpoints of the job step. |
| By default, no checkpoints will be created for the job step. |
| Acceptable time formats include "minutes", "minutes:seconds", |
| "hours:minutes:seconds", "days\-hours", "days\-hours:minutes" and |
| "days\-hours:minutes:seconds". |
| |
| .TP |
| \fB\-\-checkpoint\-dir\fR=<\fIdirectory\fR> |
| Specifies the directory into which the job or job step's checkpoint should |
| be written (used by the checkpoint/blcr and checkpoint/xlch plugins only). |
| The default value is the current working directory. |
| Checkpoint files will be of the form "<job_id>.ckpt" for jobs |
| and "<job_id>.<step_id>.ckpt" for job steps. |
| |
| .TP |
| \fB\-\-comment\fR=<\fIstring\fR> |
| An arbitrary comment. |
| |
| .TP |
| \fB\-C\fR, \fB\-\-constraint\fR=<\fIlist\fR> |
| Specify a list of constraints. |
| The constraints are features that have been assigned to the nodes by |
| the slurm administrator. |
| The \fIlist\fR of constraints may include multiple features separated |
| by ampersand (AND) and/or vertical bar (OR) operators. |
| For example: \fB\-\-constraint="opteron&video"\fR or |
| \fB\-\-constraint="fast|faster"\fR. |
| In the first example, only nodes having both the feature "opteron" AND |
| the feature "video" will be used. |
| There is no mechanism to specify that you want one node with feature |
| "opteron" and another node with feature "video" in that case that no |
| node has both features. |
| If only one of a set of possible options should be used for all allocated |
| nodes, then use the OR operator and enclose the options within square brackets. |
| For example: "\fB\-\-constraint=[rack1|rack2|rack3|rack4]"\fR might |
| be used to specify that all nodes must be allocated on a single rack of |
| the cluster, but any of those four racks can be used. |
| A request can also specify the number of nodes needed with some feature |
| by appending an asterisk and count after the feature name. |
| For example "\fBsbatch \-\-nodes=16 \-\-constraint=graphics*4 ..."\fR |
| indicates that the job requires 16 nodes and that at least four of those |
| nodes must have the feature "graphics." |
| Constraints with node counts may only be combined with AND operators. |
| If no nodes have the requested features, then the job will be rejected |
| by the slurm job manager. |
| |
| .TP |
| \fB\-\-contiguous\fR |
| If set, then the allocated nodes must form a contiguous set. |
| Not honored with the \fBtopology/tree\fR or \fBtopology/3d_torus\fR |
| plugins, both of which can modify the node ordering. |
| |
| .TP |
| \fB\-\-cpu_bind\fR=[{\fIquiet,verbose\fR},]\fItype\fR |
| Bind tasks to CPUs. Used only when the task/affinity plugin is enabled. |
| The configuration parameter \fBTaskPluginParam\fR may override these options. |
| For example, if \fBTaskPluginParam\fR is configured to bind to cores, |
| your job will not be able to bind tasks to sockets. |
| NOTE: To have SLURM always report on the selected CPU binding for all |
| commands executed in a shell, you can enable verbose mode by setting |
| the SLURM_CPU_BIND environment variable value to "verbose". |
| |
| The following informational environment variables are set when \fB\-\-cpu_bind\fR |
| is in use: |
| .nf |
| SLURM_CPU_BIND_VERBOSE |
| SLURM_CPU_BIND_TYPE |
| SLURM_CPU_BIND_LIST |
| .fi |
| |
| See the \fBENVIRONMENT VARIABLES\fR section for a more detailed description |
| of the individual SLURM_CPU_BIND* variables. |
| |
| When using \fB\-\-cpus\-per\-task\fR to run multithreaded tasks, be aware that |
| CPU binding is inherited from the parent of the process. This means that |
| the multithreaded task should either specify or clear the CPU binding |
| itself to avoid having all threads of the multithreaded task use the same |
| mask/CPU as the parent. Alternatively, fat masks (masks which specify more |
| than one allowed CPU) could be used for the tasks in order to provide |
| multiple CPUs for the multithreaded tasks. |
| |
| By default, a job step has access to every CPU allocated to the job. |
| To ensure that distinct CPUs are allocated to each job step, use the |
| \fB\-\-exclusive\fR option. |
| |
| If the job step allocation includes an allocation with a number of |
| sockets, cores, or threads equal to the number of tasks to be started |
| then the tasks will by default be bound to the appropriate resources. |
| Disable this mode of operation by explicitly setting "\-\-cpu_bind=none". |
| |
| Note that a job step can be allocated different numbers of CPUs on each node |
| or be allocated CPUs not starting at location zero. Therefore one of the |
| options which automatically generate the task binding is recommended. |
| Explicitly specified masks or bindings are only honored when the job step |
| has been allocated every available CPU on the node. |
| |
| Binding a task to a NUMA locality domain means to bind the task to the set of |
| CPUs that belong to the NUMA locality domain or "NUMA node". |
| If NUMA locality domain options are used on systems with no NUMA support, then |
| each socket is considered a locality domain. |
| |
| Supported options include: |
| .PD 1 |
| .RS |
| .TP |
| .B q[uiet] |
| Quietly bind before task runs (default) |
| .TP |
| .B v[erbose] |
| Verbosely report binding before task runs |
| .TP |
| .B no[ne] |
| Do not bind tasks to CPUs (default) |
| .TP |
| .B rank |
| Automatically bind by task rank. |
| Task zero is bound to socket (or core or thread) zero, etc. |
| Not supported unless the entire node is allocated to the job. |
| .TP |
| .B map_cpu:<list> |
| Bind by mapping CPU IDs to tasks as specified |
| where <list> is <cpuid1>,<cpuid2>,...<cpuidN>. |
| CPU IDs are interpreted as decimal values unless they are preceded |
| with '0x' in which case they are interpreted as hexadecimal values. |
| Not supported unless the entire node is allocated to the job. |
| .TP |
| .B mask_cpu:<list> |
| Bind by setting CPU masks on tasks as specified |
| where <list> is <mask1>,<mask2>,...<maskN>. |
| CPU masks are \fBalways\fR interpreted as hexadecimal values but can be |
| preceded with an optional '0x'. |
| .TP |
| .B sockets |
| Automatically generate masks binding tasks to sockets. |
| If the number of tasks differs from the number of allocated sockets |
| this can result in sub\-optimal binding. |
| .TP |
| .B cores |
| Automatically generate masks binding tasks to cores. |
| If the number of tasks differs from the number of allocated cores |
| this can result in sub\-optimal binding. |
| .TP |
| .B threads |
| Automatically generate masks binding tasks to threads. |
| If the number of tasks differs from the number of allocated threads |
| this can result in sub\-optimal binding. |
| .TP |
| .B ldoms |
| Automatically generate masks binding tasks to NUMA locality domains. |
| If the number of tasks differs from the number of allocated locality domains |
| this can result in sub\-optimal binding. |
| .TP |
| .B help |
| Show this help message |
| .RE |
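| |
| As an illustrative sketch (the script name and CPU IDs are hypothetical), |
| the first request below lets SLURM generate the binding masks, while the |
| second binds tasks to explicit CPUs, which requires that the entire node |
| be allocated to the job: |
| .nf |
| sbatch \-\-ntasks=4 \-\-cpu_bind=verbose,cores my.script |
| sbatch \-\-exclusive \-\-ntasks=4 \-\-cpu_bind=verbose,map_cpu:0,1,4,5 my.script |
| .fi |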
| |
| .TP |
| \fB\-c\fR, \fB\-\-cpus\-per\-task\fR=<\fIncpus\fR> |
| Advise the SLURM controller that ensuing job steps will require \fIncpus\fR |
| number of processors per task. Without this option, the controller will |
| just try to allocate one processor per task. |
| |
| For instance, |
| consider an application that has 4 tasks, each requiring 3 processors. If our |
| cluster is comprised of quad\-processor nodes and we simply ask for |
| 12 processors, the controller might give us only 3 nodes. However, by using |
| the \-\-cpus\-per\-task=3 option, the controller knows that each task requires |
| 3 processors on the same node, and the controller will grant an allocation |
| of 4 nodes, one for each of the 4 tasks. |
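| |
| A submission matching that scenario might look like the following |
| (the script name is illustrative): |
| .nf |
| sbatch \-\-ntasks=4 \-\-cpus\-per\-task=3 my.script |
| .fi |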
| |
| .TP |
| \fB\-D\fR, \fB\-\-workdir\fR=<\fIdirectory\fR> |
| Set the working directory of the batch script to \fIdirectory\fR before |
| it is executed. |
| |
| .TP |
| \fB\-e\fR, \fB\-\-error\fR=<\fIfilename pattern\fR> |
| Instruct SLURM to connect the batch script's standard error directly to the |
| file name specified in the "\fIfilename pattern\fR". |
| See the \fB\-\-input\fR option for filename specification options. |
| |
| .TP |
| \fB\-\-exclusive\fR |
| The job allocation cannot share nodes with other running jobs. This is |
| the opposite of \-\-share; whichever option is seen last on the command line |
| will win. (The default shared/exclusive behaviour depends on system |
| configuration.) |
| |
| .TP |
| \fB\-F\fR, \fB\-\-nodefile\fR=<\fInode file\fR> |
| Much like \-\-nodelist, but the list is contained in a file of name |
| \fInode file\fR. The node names of the list may also span multiple lines |
| in the file. Duplicate node names in the file will be ignored. |
| The order of the node names in the list is not important; the node names |
| will be sorted by SLURM. |
| |
| .TP |
| \fB\-\-get\-user\-env\fR[=\fItimeout\fR][\fImode\fR] |
| This option will tell sbatch to retrieve the |
| login environment variables for the user specified in the \fB\-\-uid\fR option. |
| The environment variables are retrieved by running something of this sort |
| "su \- <username> \-c /usr/bin/env" and parsing the output. |
| Be aware that any environment variables already set in sbatch's environment |
| will take precedence over any environment variables in the user's |
| login environment. Clear any environment variables before calling sbatch |
| that you do not want propagated to the spawned program. |
| The optional \fItimeout\fR value is in seconds. Default value is 8 seconds. |
| The optional \fImode\fR value controls the "su" options. |
| With a \fImode\fR value of "S", "su" is executed without the "\-" option. |
| With a \fImode\fR value of "L", "su" is executed with the "\-" option, |
| replicating the login environment. |
| If \fImode\fR is not specified, the mode established at SLURM build time |
| is used. |
| Examples of use include "\-\-get\-user\-env", "\-\-get\-user\-env=10", |
| "\-\-get\-user\-env=10L", and "\-\-get\-user\-env=S". |
| NOTE: This option only works if the caller has an |
| effective uid of "root". |
| This option was originally created for use by Moab. |
| |
| .TP |
| \fB\-\-gid\fR=<\fIgroup\fR> |
| If \fBsbatch\fR is run as root, and the \fB\-\-gid\fR option is used, |
| submit the job with \fIgroup\fR's group access permissions. \fIgroup\fR |
| may be the group name or the numerical group ID. |
| |
| .TP |
| \fB\-h\fR, \fB\-\-help\fR |
| Display help information and exit. |
| |
| .TP |
| \fB\-\-hint\fR=<\fItype\fR> |
| Bind tasks according to application hints |
| .RS |
| .TP |
| .B compute_bound |
| Select settings for compute bound applications: |
| use all cores in each physical CPU |
| .TP |
| .B memory_bound |
| Select settings for memory bound applications: |
| use only one core in each physical CPU |
| .TP |
| .B [no]multithread |
| [don't] use extra threads with in-core multi-threading |
| which can benefit communication intensive applications |
| .TP |
| .B help |
| show this help message |
| .RE |
| |
| .TP |
| \fB\-I\fR, \fB\-\-immediate\fR |
| The batch script will only be submitted to the controller if the resources |
| necessary to grant its job allocation are immediately available. If the |
| job allocation will have to wait in a queue of pending jobs, the batch script |
| will not be submitted. |
| |
| .TP |
| \fB\-i\fR, \fB\-\-input\fR=<\fIfilename pattern\fR> |
| Instruct SLURM to connect the batch script's standard input |
| directly to the file name specified in the "\fIfilename pattern\fR". |
| |
| By default, "/dev/null" is open on the batch script's standard input and both |
| standard output and standard error are directed to a file of the name |
| "slurm\-%j.out", where the "%j" is replaced with the job allocation number, as |
| described below. |
| |
| The filename pattern may contain one or more replacement symbols, which are |
| a percent sign "%" followed by a letter (e.g. %j). |
| |
| Supported replacement symbols are: |
| .PD 0 |
| .RS 10 |
| .TP |
| \fB%j\fR |
| Job allocation number. |
| .PD 0 |
| .TP |
| \fB%N\fR |
| Node name. Only one file is created, so %N will be replaced by the name of the |
| first node in the job, which is the one that runs the script. |
| .RE |
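| |
| For example (file and script names are illustrative), the following |
| writes standard output to a file whose name embeds the job ID and the |
| name of the first node: |
| .nf |
| sbatch \-\-output="result\-%j\-%N.out" my.script |
| .fi |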
| |
| .TP |
| \fB\-J\fR, \fB\-\-job\-name\fR=<\fIjobname\fR> |
| Specify a name for the job allocation. The specified name will appear along with |
| the job id number when querying running jobs on the system. The default |
| is the name of the batch script, or just "sbatch" if the script is |
| read on sbatch's standard input. |
| |
| .TP |
| \fB\-\-jobid\fR=<\fIjobid\fR> |
| Allocate resources as the specified job id. |
| NOTE: Only valid for user root. |
| |
| .TP |
| \fB\-k\fR, \fB\-\-no\-kill\fR |
| Do not automatically terminate a job if one of the nodes it has been |
| allocated fails. The user will assume the responsibilities for fault\-tolerance |
| should a node fail. When there is a node failure, any active job steps (usually |
| MPI jobs) on that node will almost certainly suffer a fatal error, but with |
| \-\-no\-kill, the job allocation will not be revoked so the user may launch |
| new job steps on the remaining nodes in their allocation. |
| |
| By default SLURM terminates the entire job allocation if any node fails in its |
| range of allocated nodes. |
| |
| .TP |
| \fB\-L\fR, \fB\-\-licenses\fR=<\fIlicense\fR> |
| Specification of licenses (or other resources available on all |
| nodes of the cluster) which must be allocated to this job. |
| License names can be followed by an asterisk and count |
| (the default count is one). |
| Multiple license names should be comma separated (e.g. |
| "\-\-licenses=foo*4,bar"). |
| |
| .TP |
| \fB\-m\fR, \fB\-\-distribution\fR= |
| <\fIblock\fR|\fIcyclic\fR|\fIarbitrary\fR|\fIplane=<options>\fR> |
| Specify an alternate distribution method for remote processes. In |
| sbatch this only sets environment variables that will be used by |
| subsequent srun requests. |
| .RS |
| .TP |
| .B block |
| The block method of distribution will allocate processes in\-order to |
| the cpus on a node. If the number of processes exceeds the number of |
| cpus on all of the nodes in the allocation then all nodes will be |
| utilized. For example, consider an allocation of three nodes each with |
| two cpus. A four\-process block distribution request will distribute |
| those processes to the nodes with processes one and two on the first |
| node, process three on the second node, and process four on the third node. |
| Block distribution is the default behavior if the number of tasks |
| exceeds the number of nodes requested. |
| .TP |
| .B cyclic |
| The cyclic method distributes processes in a round\-robin fashion across |
| the allocated nodes. That is, process one will be allocated to the first |
| node, process two to the second, and so on. This is the default behavior |
| if the number of tasks is no larger than the number of nodes requested. |
| .TP |
| .B plane |
| The tasks are distributed in blocks of a specified size. |
| The options include a number representing the size of the task block. |
| This is followed by an optional specification of the task distribution |
| scheme within a block of tasks and between the blocks of tasks. |
| For more details (including examples and diagrams), please see |
| .br |
| https://computing.llnl.gov/linux/slurm/mc_support.html |
| .br |
| and |
| .br |
| https://computing.llnl.gov/linux/slurm/dist_plane.html. |
| .TP |
| .B arbitrary |
| The arbitrary method of distribution will allocate processes in\-order as |
| listed in the file designated by the environment variable SLURM_HOSTFILE. If |
| this variable is set it will override any other method specified. |
| If it is not set the method will default to block. The hostfile must |
| contain at minimum the number of hosts requested. If requesting tasks |
| (\-n), your tasks will be laid out on the nodes in the order of the file. |
| .RE |
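| |
| As a sketch of the arbitrary method (host and script names are |
| illustrative, and the setting is assumed to be inherited by the srun |
| commands in the script): |
| .nf |
| $ cat hostfile |
| host1 |
| host2 |
| host1 |
| $ SLURM_HOSTFILE=hostfile sbatch \-n3 \-\-distribution=arbitrary my.script |
| .fi |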
| |
| .TP |
| \fB\-\-mail\-type\fR=<\fItype\fR> |
| Notify user by email when certain event types occur. |
| Valid \fItype\fR values are BEGIN, END, FAIL, ALL (any state change). |
| The user to be notified is indicated with \fB\-\-mail\-user\fR. |
| |
| .TP |
| \fB\-\-mail\-user\fR=<\fIuser\fR> |
| User to receive email notification of state changes as defined by |
| \fB\-\-mail\-type\fR. |
| The default value is the submitting user. |
| |
| .TP |
| \fB\-\-mem\fR=<\fIMB\fR> |
| Specify the real memory required per node in MegaBytes. |
| Default value is \fBDefMemPerNode\fR and the maximum value is |
| \fBMaxMemPerNode\fR. If configured, both parameters can be |
| seen using the \fBscontrol show config\fR command. |
| This parameter would generally be used if whole nodes |
| are allocated to jobs (\fBSelectType=select/linear\fR). |
| Also see \fB\-\-mem\-per\-cpu\fR. |
| \fB\-\-mem\fR and \fB\-\-mem\-per\-cpu\fR are mutually exclusive. |
| |
| .TP |
| \fB\-\-mem\-per\-cpu\fR=<\fIMB\fR> |
| Minimum memory required per allocated CPU in MegaBytes. |
| Default value is \fBDefMemPerCPU\fR and the maximum value is |
| \fBMaxMemPerCPU\fR. If configured, both parameters can be |
| seen using the \fBscontrol show config\fR command. |
| This parameter would generally be used if individual processors |
| are allocated to jobs (\fBSelectType=select/cons_res\fR). |
| Also see \fB\-\-mem\fR. |
| \fB\-\-mem\fR and \fB\-\-mem\-per\-cpu\fR are mutually exclusive. |
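| |
| For example (values and script name are illustrative), a whole\-node |
| allocation might request memory per node, while a per\-processor |
| allocation would request memory per CPU: |
| .nf |
| sbatch \-\-mem=4096 my.script |
| sbatch \-\-ntasks=16 \-\-mem\-per\-cpu=1024 my.script |
| .fi |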
| |
| .TP |
| \fB\-\-mem_bind\fR=[{\fIquiet,verbose\fR},]\fItype\fR |
| Bind tasks to memory. Used only when the task/affinity plugin is enabled |
| and the NUMA memory functions are available. |
| \fBNote that the resolution of CPU and memory binding |
| may differ on some architectures.\fR For example, CPU binding may be performed |
| at the level of the cores within a processor while memory binding will |
| be performed at the level of nodes, where the definition of "nodes" |
| may differ from system to system. \fBThe use of any type other than |
| "none" or "local" is not recommended.\fR |
| If you want greater control, try running a simple test code with the |
| options "\-\-cpu_bind=verbose,none \-\-mem_bind=verbose,none" to determine |
| the specific configuration. |
| |
| NOTE: To have SLURM always report on the selected memory binding for |
| all commands executed in a shell, you can enable verbose mode by |
| setting the SLURM_MEM_BIND environment variable value to "verbose". |
| |
| The following informational environment variables are set when \fB\-\-mem_bind\fR |
| is in use: |
| |
| .nf |
| SLURM_MEM_BIND_VERBOSE |
| SLURM_MEM_BIND_TYPE |
| SLURM_MEM_BIND_LIST |
| .fi |
| |
| See the \fBENVIRONMENT VARIABLES\fR section for a more detailed description |
| of the individual SLURM_MEM_BIND* variables. |
| |
| Supported options include: |
| .RS |
| .TP |
| .B q[uiet] |
| quietly bind before task runs (default) |
| .TP |
| .B v[erbose] |
| verbosely report binding before task runs |
| .TP |
| .B no[ne] |
| don't bind tasks to memory (default) |
| .TP |
| .B rank |
| bind by task rank (not recommended) |
| .TP |
| .B local |
| Use memory local to the processor in use |
| .TP |
| .B map_mem:<list> |
| bind by mapping a node's memory to tasks as specified |
| where <list> is <cpuid1>,<cpuid2>,...<cpuidN>. |
| CPU IDs are interpreted as decimal values unless they are preceded |
| with '0x' in which case they are interpreted as hexadecimal values |
| (not recommended) |
| .TP |
| .B mask_mem:<list> |
| bind by setting memory masks on tasks as specified |
| where <list> is <mask1>,<mask2>,...<maskN>. |
| memory masks are \fBalways\fR interpreted as hexadecimal values. |
| Note that masks must be preceded with a '0x' if they don't begin |
| with [0-9] so they are seen as numerical values by srun. |
| .TP |
| .B help |
| show this help message |
| .RE |
| |
| .TP |
| \fB\-\-mincores\fR=<\fIn\fR> |
| Specify a minimum number of cores per socket. |
| |
| .TP |
| \fB\-\-mincpus\fR=<\fIn\fR> |
| Specify a minimum number of logical cpus/processors per node. |
| |
| .TP |
| \fB\-\-minsockets\fR=<\fIn\fR> |
| Specify a minimum number of sockets (physical processors) per node. |
| |
| .TP |
| \fB\-\-minthreads\fR=<\fIn\fR> |
| Specify a minimum number of threads per core. |
| |
| .TP |
| \fB\-N\fR, \fB\-\-nodes\fR=<\fIminnodes\fR[\-\fImaxnodes\fR]> |
| Request that a minimum of \fIminnodes\fR nodes be allocated to this job. |
| The scheduler may decide to launch the job on more than \fIminnodes\fR nodes. |
| A limit on the maximum node count may be specified with \fImaxnodes\fR |
| (e.g. "\-\-nodes=2\-4"). The minimum and maximum node count may be the |
| same to specify a specific number of nodes (e.g. "\-\-nodes=2\-2" will ask |
| for two and ONLY two nodes). |
| The partition's node limits supersede those of the job. |
| If a job's node limits are outside of the range permitted for its |
| associated partition, the job will be left in a PENDING state. |
| This permits possible execution at a later time, when the partition |
| limit is changed. |
| If a job node limit exceeds the number of nodes configured in the |
| partition, the job will be rejected. |
| Note that the environment |
| variable \fBSLURM_NNODES\fR will be set to the count of nodes actually |
| allocated to the job. See the \fBENVIRONMENT VARIABLES \fR section |
| for more information. If \fB\-N\fR is not specified, the default |
| behavior is to allocate enough nodes to satisfy the requirements of |
| the \fB\-n\fR and \fB\-c\fR options. |
| The job will be allocated as many nodes as possible within the range specified |
| and without delaying the initiation of the job. |
| |
| .TP |
| \fB\-n\fR, \fB\-\-ntasks\fR=<\fInumber\fR> |
| sbatch does not launch tasks, it requests an allocation of resources and |
| submits a batch script. This option advises the SLURM controller that job |
| steps run within this allocation will launch a maximum of \fInumber\fR |
| tasks and sufficient resources are allocated to accomplish this. |
| The default is one task per socket or core (depending upon the value |
| of the \fISelectTypeParameters\fR parameter in slurm.conf), but note |
| that the \fB\-\-cpus\-per\-task\fR option will change this default. |
| |
| .TP |
| \fB\-\-network\fR=<\fItype\fR> |
| Specify the communication protocol to be used. |
| This option is supported on AIX systems. |
| Since POE is used to launch tasks, this option is not normally used or |
| is specified using the \fBSLURM_NETWORK\fR environment variable. |
| The interpretation of \fItype\fR is system dependent. |
| For systems with an IBM Federation switch, the following |
| comma\-separated and case insensitive types are recognized: |
| \fBIP\fR (the default is user\-space), \fBSN_ALL\fR, \fBSN_SINGLE\fR, |
| \fBBULK_XFER\fR and adapter names (e.g. \fBSNI0\fR and \fBSNI1\fR). |
| For more information, on IBM systems see \fIpoe\fR documentation on |
| the environment variables \fBMP_EUIDEVICE\fR and \fBMP_USE_BULK_XFER\fR. |
| Note that only four job steps may be active at once on a node with the |
| \fBBULK_XFER\fR option due to limitations in the Federation switch driver. |
| |
| .TP |
| \fB\-\-nice\fR[=\fIadjustment\fR] |
| Run the job with an adjusted scheduling priority within SLURM. |
| With no adjustment value the scheduling priority is decreased |
| by 100. The adjustment range is from \-10000 (highest priority) |
| to 10000 (lowest priority). Only privileged users can specify |
| a negative adjustment. NOTE: This option is presently |
| ignored if \fISchedulerType=sched/wiki\fR or |
| \fISchedulerType=sched/wiki2\fR. |
| |
| .TP |
| \fB\-\-no\-requeue\fR |
| Specifies that the batch job should not be requeued after node failure. |
| Setting this option will prevent system administrators from being able |
| to restart the job (for example, after a scheduled downtime). |
| When a job is requeued, the batch script is initiated from its beginning. |
| Also see the \fB\-\-requeue\fR option. |
| The \fIJobRequeue\fR configuration parameter controls the default |
| behavior on the cluster. |
| |
| .TP |
| \fB\-\-ntasks\-per\-core\fR=<\fIntasks\fR> |
| Request that no more than \fIntasks\fR be invoked on each core. |
| Similar to \fB\-\-ntasks\-per\-node\fR except at the core level |
| instead of the node level. Masks will automatically be generated |
| to bind the tasks to specific core unless \fB\-\-cpu_bind=none\fR |
| is specified. |
| NOTE: This option is not supported unless |
| \fISelectTypeParameters=CR_Core\fR or |
| \fISelectTypeParameters=CR_Core_Memory\fR is configured. |
| |
| .TP |
| \fB\-\-ntasks\-per\-socket\fR=<\fIntasks\fR> |
| Request that no more than \fIntasks\fR be invoked on each socket. |
| Similar to \fB\-\-ntasks\-per\-node\fR except at the socket level |
| instead of the node level. Masks will automatically be generated |
| to bind the tasks to specific sockets unless \fB\-\-cpu_bind=none\fR |
| is specified. |
| NOTE: This option is not supported unless |
| \fISelectTypeParameters=CR_Socket\fR or |
| \fISelectTypeParameters=CR_Socket_Memory\fR is configured. |
| |
| .TP |
| \fB\-\-ntasks\-per\-node\fR=<\fIntasks\fR> |
| Request that no more than \fIntasks\fR be invoked on each node. |
| This is similar to using \fB\-\-cpus\-per\-task\fR=\fIncpus\fR |
| but does not require knowledge of the actual number of cpus on |
| each node. In some cases, it is more convenient to be able to |
| request that no more than a specific number of ntasks be invoked |
| on each node. Examples of this include submitting |
| a hybrid MPI/OpenMP app where only one MPI "task/rank" should be |
| assigned to each node while allowing the OpenMP portion to utilize |
| all of the parallelism present in the node, or submitting a single |
| setup/cleanup/monitoring job to each node of a pre\-existing |
| allocation as one step in a larger job script. |
| |
| .TP |
| \fB\-O\fR, \fB\-\-overcommit\fR |
| Overcommit resources. Normally, \fBsbatch\fR will allocate one task |
| per processor. By specifying \fB\-\-overcommit\fR you are explicitly |
| allowing more than one task per processor. However no more than |
| \fBMAX_TASKS_PER_NODE\fR tasks are permitted to execute per node. |
| |
| .TP |
| \fB\-o\fR, \fB\-\-output\fR=<\fIfilename pattern\fR> |
| Instruct SLURM to connect the batch script's standard output directly to the |
| file name specified in the "\fIfilename pattern\fR". |
| See the \fB\-\-input\fR option for filename specification options. |
| |
| .TP |
| \fB\-\-open\-mode\fR=append|truncate |
| Open the output and error files using append or truncate mode as specified. |
| The default value is specified by the system configuration parameter |
| \fIJobFileAppend\fR. |
| |
| .TP |
| \fB\-P\fR, \fB\-\-dependency\fR=<\fIdependency_list\fR> |
| Defer the start of this job until the specified dependencies have been |
| satisfied. |
| <\fIdependency_list\fR> is of the form |
| <\fItype:job_id[:job_id][,type:job_id[:job_id]]\fR>. |
| Many jobs can share the same dependency and these jobs may even belong to |
| different users. The value may be changed after job submission using the |
| scontrol command. |
| .PD |
| .RS |
| .TP |
| \fBafter:job_id[:jobid...]\fR |
| This job can begin execution after the specified jobs have begun |
| execution. |
| .TP |
| \fBafterany:job_id[:jobid...]\fR |
| This job can begin execution after the specified jobs have terminated. |
| .TP |
| \fBafternotok:job_id[:jobid...]\fR |
| This job can begin execution after the specified jobs have terminated |
| in some failed state (non-zero exit code, node failure, timed out, etc). |
| .TP |
| \fBafterok:job_id[:jobid...]\fR |
| This job can begin execution after the specified jobs have successfully |
| executed (ran to completion with an exit code of zero). |
| .TP |
| \fBsingleton\fR |
| This job can begin execution after any previously launched jobs sharing the same |
| job name and user have terminated. |
| .RE |
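| |
| A typical dependency chain might be built as follows (the script names |
| and job IDs are illustrative): |
| .nf |
| $ sbatch setup.sh |
| sbatch: Submitted batch job 100 |
| $ sbatch \-\-dependency=afterok:100 compute.sh |
| sbatch: Submitted batch job 101 |
| $ sbatch \-\-dependency=afterany:101 cleanup.sh |
| .fi |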
| |
| .TP |
| \fB\-p\fR, \fB\-\-partition\fR=<\fIpartition name\fR> |
| Request a specific partition for the resource allocation. If not specified, |
| the default behaviour is to allow the slurm controller to select the default |
| partition as designated by the system administrator. |
| |
| .TP |
| \fB\-\-propagate\fR[=\fIrlimits\fR] |
| Allows users to specify which of the modifiable (soft) resource limits |
| to propagate to the compute nodes and apply to their jobs. If |
| \fIrlimits\fR is not specified, then all resource limits will be |
| propagated. |
| The following rlimit names are supported by SLURM (although some |
| options may not be supported on some systems): |
| .RS |
| .TP 10 |
| \fBALL\fR |
| All limits listed below |
| .TP |
| \fBAS\fR |
| The maximum address space for a process |
| .TP |
| \fBCORE\fR |
| The maximum size of core file |
| .TP |
| \fBCPU\fR |
| The maximum amount of CPU time |
| .TP |
| \fBDATA\fR |
| The maximum size of a process's data segment |
| .TP |
| \fBFSIZE\fR |
| The maximum size of files created |
| .TP |
| \fBMEMLOCK\fR |
| The maximum size that may be locked into memory |
| .TP |
| \fBNOFILE\fR |
| The maximum number of open files |
| .TP |
| \fBNPROC\fR |
| The maximum number of processes available |
| .TP |
| \fBRSS\fR |
| The maximum resident set size |
| .TP |
| \fBSTACK\fR |
| The maximum stack size |
| .RE |
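| |
| For example (the limit choice and script name are illustrative), the |
| following propagates only the locked\-memory limit to the job: |
| .nf |
| sbatch \-\-propagate=MEMLOCK my.script |
| .fi |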
| |
| .TP |
| \fB\-Q\fR, \fB\-\-quiet\fR |
| Suppress informational messages from sbatch. Errors will still be displayed. |
| |
| .TP |
| \fB\-\-requeue\fR |
| Specifies that the batch job should be requeued after node failure. |
| When a job is requeued, the batch script is initiated from its beginning. |
| Also see the \fB\-\-no\-requeue\fR option. |
| The \fIJobRequeue\fR configuration parameter controls the default |
| behavior on the cluster. |
| |
| .TP |
| \fB\-\-reservation\fR=<\fIname\fR> |
| Allocate resources for the job from the named reservation. |
| |
| .TP |
| \fB\-s\fR, \fB\-\-share\fR |
| The job allocation can share nodes with other running jobs. (The default |
| shared/exclusive behaviour depends on system configuration.) |
| This may result in the allocation being granted sooner than if the \-\-share |
| option was not set and allow higher system utilization, but application |
| performance will likely suffer due to competition for resources within a node. |
| |
| .TP |
| \fB\-t\fR, \fB\-\-time\fR=<\fItime\fR> |
| Set a limit on the total run time of the job allocation. If the |
| requested time limit exceeds the partition's time limit, the job will |
| be left in a PENDING state (possibly indefinitely). The default time |
| limit is the partition's time limit. When the time limit is reached, |
| each task in each job step is sent SIGTERM followed by SIGKILL. The |
| interval between signals is specified by the SLURM configuration |
| parameter \fBKillWait\fR. A time limit of zero requests that no time |
| limit be imposed. Acceptable time formats include "minutes", |
| "minutes:seconds", "hours:minutes:seconds", "days\-hours", |
| "days\-hours:minutes" and "days\-hours:minutes:seconds". |
| |
| .TP |
| \fB\-\-tasks\-per\-node\fR=<\fIn\fR> |
| Specify the number of tasks to be launched per node. |
| Equivalent to \fB\-\-ntasks\-per\-node\fR. |
| |
| .TP |
| \fB\-\-tmp\fR=<\fIMB\fR> |
| Specify a minimum amount of temporary disk space. |
| |
| .TP |
| \fB\-U\fR, \fB\-\-account\fR=<\fIaccount\fR> |
| Charge resources used by this job to the specified account. |
| The \fIaccount\fR is an arbitrary string. The account name may |
| be changed after job submission using the \fBscontrol\fR |
| command. |
| |
| .TP |
| \fB\-u\fR, \fB\-\-usage\fR |
| Display brief help message and exit. |
| |
| .TP |
| \fB\-\-uid\fR=<\fIuser\fR> |
| Attempt to submit and/or run a job as \fIuser\fR instead of the |
| invoking user id. The invoking user's credentials will be used |
| to check access permissions for the target partition. User root |
| may use this option to run jobs as a normal user in a RootOnly |
| partition for example. If run as root, \fBsbatch\fR will drop |
| its permissions to the uid specified after node allocation is |
| successful. \fIuser\fR may be the user name or numerical user ID. |
| |
| .TP |
| \fB\-V\fR, \fB\-\-version\fR |
| Display version information and exit. |
| |
| .TP |
| \fB\-v\fR, \fB\-\-verbose\fR |
| Increase the verbosity of sbatch's informational messages. Multiple |
| \fB\-v\fR's will further increase sbatch's verbosity. By default only |
| errors will be displayed. |
| |
| .TP |
| \fB\-w\fR, \fB\-\-nodelist\fR=<\fInode name list\fR> |
| Request a specific list of node names. The list may be specified as a |
| comma\-separated list of node names, or a range of node names |
| (e.g. mynode[1\-5,7,...]). Duplicate node names in the list will be ignored. |
| The order of the node names in the list is not important; the node names |
| will be sorted by SLURM. |
| |
| .TP |
| \fB\-\-wckey\fR=<\fIwckey\fR> |
| Specify wckey to be used with job. If TrackWCKey=no (default) in the |
| slurm.conf this value is ignored. |
| |
| .TP |
| \fB\-\-wrap\fR=<\fIcommand string\fR> |
| Sbatch will wrap the specified command string in a simple "sh" shell script, |
| and submit that script to the slurm controller. When \-\-wrap is used, |
| a script name and arguments may not be specified on the command line; instead |
| the sbatch-generated wrapper script is used. |
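| |
| For example, the following submits a one\-line job without writing a |
| separate script file: |
| .nf |
| sbatch \-N2 \-\-wrap="srun hostname | sort" |
| .fi |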
| |
| .TP |
| \fB\-x\fR, \fB\-\-exclude\fR=<\fInode name list\fR> |
| Explicitly exclude certain nodes from the resources granted to the job. |
| |
| .PP |
| The following options support Blue Gene systems, but may be |
| applicable to other systems as well. |
| |
| .TP |
| \fB\-\-blrts\-image\fR=<\fIpath\fR> |
| Path to blrts image for bluegene block. BGL only. |
| Default from \fIbluegene.conf\fR if not set. |
| |
| .TP |
| \fB\-\-cnload\-image\fR=<\fIpath\fR> |
| Path to compute node image for bluegene block. BGP only. |
| Default from \fIbluegene.conf\fR if not set. |
| |
| .TP |
| \fB\-\-conn\-type\fR=<\fItype\fR> |
| Require the partition connection type to be of a certain type. |
| On Blue Gene the acceptable values of \fItype\fR are MESH, TORUS and NAV. |
| If NAV, or if not set, then SLURM will try to fit a TORUS, else a MESH. |
| You should not normally set this option. |
| SLURM will normally allocate a TORUS if possible for a given geometry. |
| If running on a BGP system and wanting to run in HTC mode (only for 1 |
| midplane and below), you can use HTC_S for SMP, HTC_D for Dual, HTC_V |
| for virtual node mode, and HTC_L for Linux mode. |
| |
| .TP |
| \fB\-g\fR, \fB\-\-geometry\fR=<\fIXxYxZ\fR> |
| Specify the geometry requirements for the job. The three numbers |
| represent the required geometry giving dimensions in the X, Y and |
| Z directions. For example "\-\-geometry=2x3x4", specifies a block |
| of nodes having 2 x 3 x 4 = 24 nodes (actually base partitions on |
| Blue Gene). |
| |
| .TP |
| \fB\-\-ioload\-image\fR=<\fIpath\fR> |
| Path to io image for bluegene block. BGP only. |
| Default from \fIbluegene.conf\fR if not set. |
| |
| .TP |
| \fB\-\-linux\-image\fR=<\fIpath\fR> |
| Path to linux image for bluegene block. BGL only. |
| Default from \fIbluegene.conf\fR if not set. |
| |
| .TP |
| \fB\-\-mloader\-image\fR=<\fIpath\fR> |
| Path to mloader image for bluegene block. |
| Default from \fIbluegene.conf\fR if not set. |
| |
| .TP |
| \fB\-R\fR, \fB\-\-no\-rotate\fR |
| Disables rotation of the job's requested geometry in order to fit an |
| appropriate partition. |
| By default the specified geometry can rotate in three dimensions. |
| |
| .TP |
| \fB\-\-ramdisk\-image\fR=<\fIpath\fR> |
| Path to ramdisk image for bluegene block. BGL only. |
| Default from \fIbluegene.conf\fR if not set. |
| |
| .TP |
| \fB\-\-reboot\fR |
| Force the allocated nodes to reboot before starting the job. |
| |
| .SH "INPUT ENVIRONMENT VARIABLES" |
| .PP |
| Upon startup, sbatch will read and handle the options set in the following |
| environment variables. Note that environment variables will override any |
| options set in a batch script, and command line options will override any |
| environment variables. |
| |
| .TP 22 |
| \fBSBATCH_ACCOUNT\fR |
| Same as \fB\-U, \-\-account\fR |
| .TP |
| \fBSBATCH_ACCTG_FREQ\fR |
| Same as \fB\-\-acctg\-freq\fR |
| .TP |
| \fBSLURM_CHECKPOINT\fR |
| Same as \fB\-\-checkpoint\fR |
| .TP |
| \fBSLURM_CHECKPOINT_DIR\fR |
| Same as \fB\-\-checkpoint\-dir\fR |
| .TP |
| \fBSBATCH_CONN_TYPE\fR |
| Same as \fB\-\-conn\-type\fR |
| .TP |
| \fBSBATCH_CPU_BIND\fR |
| Same as \fB\-\-cpu_bind\fR |
| .TP |
| \fBSBATCH_DEBUG\fR |
| Same as \fB\-v, \-\-verbose\fR |
| .TP |
| \fBSBATCH_DISTRIBUTION\fR |
| Same as \fB\-m, \-\-distribution\fR |
| .TP |
| \fBSBATCH_EXCLUSIVE\fR |
| Same as \fB\-\-exclusive\fR |
| .TP |
| \fBSBATCH_GEOMETRY\fR |
| Same as \fB\-g, \-\-geometry\fR |
| .TP |
| \fBSBATCH_IMMEDIATE\fR |
| Same as \fB\-I, \-\-immediate\fR |
| .TP |
| \fBSBATCH_JOBID\fR |
| Same as \fB\-\-jobid\fR |
| .TP |
| \fBSBATCH_JOB_NAME\fR |
| Same as \fB\-J, \-\-job\-name\fR |
| .TP |
| \fBSBATCH_MEM_BIND\fR |
| Same as \fB\-\-mem_bind\fR |
| .TP |
| \fBSBATCH_NETWORK\fR |
| Same as \fB\-\-network\fR |
| .TP |
| \fBSBATCH_NO_REQUEUE\fR |
| Same as \fB\-\-no\-requeue\fR |
| .TP |
| \fBSBATCH_NO_ROTATE\fR |
| Same as \fB\-R, \-\-no\-rotate\fR |
| .TP |
| \fBSBATCH_OPEN_MODE\fR |
| Same as \fB\-\-open\-mode\fR |
| .TP |
| \fBSBATCH_OVERCOMMIT\fR |
| Same as \fB\-O, \-\-overcommit\fR |
| .TP |
| \fBSBATCH_PARTITION\fR |
| Same as \fB\-p, \-\-partition\fR |
| .TP |
| \fBSBATCH_TIMELIMIT\fR |
| Same as \fB\-t, \-\-time\fR |
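| |
| .PP |
| For example (the partition name and time limit are illustrative), |
| options may be supplied through the environment rather than on the |
| command line: |
| .nf |
| $ export SBATCH_PARTITION=debug |
| $ export SBATCH_TIMELIMIT=30 |
| $ sbatch my.script |
| .fi |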
| |
| .SH "OUTPUT ENVIRONMENT VARIABLES" |
| .PP |
| The SLURM controller will set the following variables in the environment of |
| the batch script. |
| .TP |
| \fBBASIL_RESERVATION_ID\fR |
| The reservation ID on Cray systems running ALPS/BASIL only. |
| .TP |
| \fBSLURM_CPU_BIND\fR |
| Set to the value of the \fB\-\-cpu_bind\fR option. |
| .TP |
| \fBSLURM_JOB_ID\fR (and \fBSLURM_JOBID\fR for backwards compatibility) |
| The ID of the job allocation. |
| .TP |
| \fBSLURM_JOB_CPUS_PER_NODE\fR |
| Count of processors available to the job on this node. |
| Note the select/linear plugin allocates entire nodes to |
| jobs, so the value indicates the total count of CPUs on the node. |
| The select/cons_res plugin allocates individual processors |
| to jobs, so this number indicates the number of processors |
| on this node allocated to the job. |
| .TP |
| \fBSLURM_JOB_DEPENDENCY\fR |
| Set to the value of the \fB\-\-dependency\fR option. |
| .TP |
| \fBSLURM_JOB_NAME\fR |
| Name of the job. |
| .TP |
| \fBSLURM_JOB_NODELIST\fR (and \fBSLURM_NODELIST\fR for backwards compatibility) |
| List of nodes allocated to the job. |
| .TP |
| \fBSLURM_JOB_NUM_NODES\fR (and \fBSLURM_NNODES\fR for backwards compatibility) |
| Total number of nodes in the job's resource allocation. |
| .TP |
| \fBSLURM_MEM_BIND\fR |
| Set to the value of the \fB\-\-mem_bind\fR option. |
| .TP |
| \fBSLURM_TASKS_PER_NODE\fR |
| Number of tasks to be initiated on each node. Values are |
| comma separated and in the same order as SLURM_NODELIST. |
| If two or more consecutive nodes are to have the same task |
| count, that count is followed by "(x#)" where "#" is the |
| repetition count. For example, "SLURM_TASKS_PER_NODE=2(x3),1" |
| indicates that the first three nodes will each execute two |
| tasks and the fourth node will execute one task. |
| .TP |
| \fBMPIRUN_NOALLOCATE\fR |
| Do not allocate a block on Blue Gene systems only. |
| .TP |
| \fBMPIRUN_NOFREE\fR |
| Do not free a block on Blue Gene systems only. |
| .TP |
| \fBSLURM_NTASKS_PER_CORE\fR |
| Number of tasks requested per core. |
| Only set if the \fB\-\-ntasks\-per\-core\fR option is specified. |
| .TP |
| \fBSLURM_NTASKS_PER_NODE\fR |
| Number of tasks requested per node. |
| Only set if the \fB\-\-ntasks\-per\-node\fR option is specified. |
| .TP |
| \fBSLURM_NTASKS_PER_SOCKET\fR |
| Number of tasks requested per socket. |
| Only set if the \fB\-\-ntasks\-per\-socket\fR option is specified. |
| .TP |
| \fBSLURM_RESTART_COUNT\fR |
| If the job has been restarted due to system failure or has been |
| explicitly requeued, this will be set to the number of times |
| the job has been restarted. |
| .TP |
| \fBSLURM_SUBMIT_DIR\fR |
| The directory from which \fBsbatch\fR was invoked. |
| .TP |
| \fBMPIRUN_PARTITION\fR |
| The block name on Blue Gene systems only. |
| |
| .SH "EXAMPLES" |
| .LP |
| Specify a batch script by filename on the command line. |
| The batch script specifies a 1 minute time limit for the job. |
| .IP |
| $ cat myscript |
| .br |
| #!/bin/sh |
| .br |
| #SBATCH \-\-time=1 |
| .br |
| srun hostname |sort |
| .br |
| |
| .br |
| $ sbatch \-N4 myscript |
| .br |
| sbatch: Submitted batch job 65537 |
| .br |
| |
| .br |
| $ cat slurm\-65537.out |
| .br |
| host1 |
| .br |
| host2 |
| .br |
| host3 |
| .br |
| host4 |
| |
| .LP |
| Pass a batch script to sbatch on standard input: |
| .IP |
| $ sbatch \-N4 <<EOF |
| .br |
| > #!/bin/sh |
| .br |
| > srun hostname |sort |
| .br |
| > EOF |
| .br |
| sbatch: Submitted batch job 65541 |
| .br |
| |
| .br |
| $ cat slurm\-65541.out |
| .br |
| host1 |
| .br |
| host2 |
| .br |
| host3 |
| .br |
| host4 |
| |
| .SH "COPYING" |
| Copyright (C) 2006\-2007 The Regents of the University of California. |
| Copyright (C) 2008\-2009 Lawrence Livermore National Security. |
| Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER). |
| CODE\-OCEC\-09\-009. All rights reserved. |
| .LP |
| This file is part of SLURM, a resource management program. |
| For details, see <https://computing.llnl.gov/linux/slurm/>. |
| .LP |
| SLURM is free software; you can redistribute it and/or modify it under |
| the terms of the GNU General Public License as published by the Free |
| Software Foundation; either version 2 of the License, or (at your option) |
| any later version. |
| .LP |
| SLURM is distributed in the hope that it will be useful, but WITHOUT ANY |
| WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS |
| FOR A PARTICULAR PURPOSE. See the GNU General Public License for more |
| details. |
| |
| .SH "SEE ALSO" |
| .LP |
| sinfo(1), sattach(1), salloc(1), squeue(1), scancel(1), scontrol(1), slurm.conf(5), sched_setaffinity(2), numa(3) |