...
-
-
The dispatcher normally runs on the same host/VM as the API server.
-h4. Perl SDK dependencies
+h2. Perl SDK dependencies
Install the Perl SDK on the controller.
* See "Perl SDK":{{site.baseurl}}/sdk/perl/index.html page for details.
-h4. Python SDK dependencies
+h2. Python SDK dependencies
Install the Python SDK and CLI tools on the controller and all compute nodes.
* See the "Python SDK":{{site.baseurl}}/sdk/python/sdk-python.html page for details.
-h4. Slurm
+h2(#slurm). Set up SLURM
-On the API server, install slurm and munge, and generate a munge key:
+On the API server, install SLURM and munge, and generate a munge key.
+
+On Debian-based systems:
<notextile>
<pre><code>~$ <span class="userinput">sudo /usr/bin/apt-get install slurm-llnl munge</span>
</code></pre>
</notextile>
-Now we need to give slurm a configuration file in @/etc/slurm-llnl/slurm.conf@. Here's an example:
+On Red Hat-based systems, "install SLURM and munge from source following their installation guide":https://computing.llnl.gov/linux/slurm/quickstart_admin.html.
+
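+The munge key at @/etc/munge/munge.key@ must be identical on the controller and all compute nodes. If your munge package did not generate a key at install time, you can create one with the @create-munge-key@ helper that ships with munge (a sketch; on Debian it is installed in @/usr/sbin@):
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo /usr/sbin/create-munge-key</span>
+</code></pre>
+</notextile>
+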
+Now we need to give SLURM a configuration file in @/etc/slurm-llnl/slurm.conf@. Here's an example:
<notextile>
<pre>
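+# ControlMachine: hostname of the SLURM controller (the dispatch/API
+# server). It must resolve correctly on every node; see the notes below.
+ControlMachine=uuid_prefix.your.domain
+#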
# COMPUTE NODES
NodeName=DEFAULT
PartitionName=DEFAULT MaxTime=INFINITE State=UP
-PartitionName=compute Default=YES Shared=yes
NodeName=compute[0-255]
-
-PartitionName=compute Nodes=compute[0-255]
+PartitionName=compute Nodes=compute[0-255] Default=YES Shared=YES
</pre>
</notextile>
-Please make sure to update the value of the @ControlMachine@ parameter to the hostname of your dispatcher (api server).
+h3. SLURM configuration essentials
+
+Whenever you change this file, you will need to update the copy _on every compute node_ as well as the controller node, and then run @sudo scontrol reconfigure@.
+
+*@ControlMachine@* should be a DNS name that resolves to the SLURM controller (dispatch/API server). This must resolve correctly on all SLURM worker nodes as well as the controller itself. In general SLURM is very sensitive about all of the nodes being able to communicate with the controller _and one another_, all using the same DNS names.
+
+*@NodeName=compute[0-255]@* establishes that the hostnames of the worker nodes will be compute0, compute1, etc. through compute255.
+* There are several ways to compress sequences of names, like @compute[0-9,80,100-110]@. See the "hostlist" discussion in the @slurm.conf(5)@ and @scontrol(1)@ man pages for more information; an example of expanding such an expression appears below this list.
+* It is not necessary for all of the nodes listed here to be alive in order for SLURM to work, although you should make sure the DNS entries exist. It is easiest to define lots of hostnames up front, assigning them to real nodes and updating your DNS records as the nodes appear. This minimizes the frequency of @slurm.conf@ updates and use of @scontrol reconfigure@.
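+
+For example, @scontrol@ can expand a hostlist expression to show exactly which node names it covers:
+
+<notextile>
+<pre><code>~$ <span class="userinput">scontrol show hostnames 'compute[0-3,80]'</span>
+compute0
+compute1
+compute2
+compute3
+compute80
+</code></pre>
+</notextile>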
+
+Each hostname in @slurm.conf@ must also resolve correctly on all SLURM worker nodes as well as the controller itself. Furthermore, the hostnames used in the configuration file must match the hostnames reported by @hostname@ or @hostname -s@ on the nodes themselves. This applies to @ControlMachine@ as well as the worker nodes.
+
+For example:
+* In @/etc/slurm-llnl/slurm.conf@ on control and worker nodes: @ControlMachine=uuid_prefix.your.domain@
+* In @/etc/slurm-llnl/slurm.conf@ on control and worker nodes: @NodeName=compute[0-255]@
+* In @/etc/resolv.conf@ on control and worker nodes: @search uuid_prefix.your.domain@
+* On the control node: @hostname@ reports @uuid_prefix.your.domain@
+* On worker node 123: @hostname@ reports @compute123.uuid_prefix.your.domain@
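+
+To spot-check this (an informal sanity test, not an official install step), compare @hostname@ output with what DNS returns on each node:
+
+<notextile>
+<pre><code>~$ <span class="userinput">hostname</span>
+compute123.uuid_prefix.your.domain
+~$ <span class="userinput">getent hosts uuid_prefix.your.domain compute123</span>
+</code></pre>
+</notextile>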
+
+h3. Automatic hostname assignment
+
+If your worker node bootstrapping script (see "Installing a compute node":install-compute-node.html) does not send the worker's current hostname, the API server will choose an unused hostname from the set given in @application.yml@, which defaults to @compute[0-255]@.
+
+If it is not feasible to give your compute nodes hostnames like compute0, compute1, etc., you can accommodate other naming schemes with a bit of extra configuration.
+
+If you want Arvados to assign names to your nodes with a different consecutive numeric series like @{worker1-0000, worker1-0001, worker1-0002}@, add an entry to @application.yml@; see @/var/www/arvados-api/current/config/application.default.yml@ for details. Example:
+* In @application.yml@: <code>assign_node_hostname: worker1-%<slot_number>04d</code>
+* In @slurm.conf@: <code>NodeName=worker1-[0000-0255]</code>
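+
+Here @%<slot_number>04d@ is Ruby @format@ syntax: the node assigned slot number 3, for example, gets the hostname @worker1-0003@.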
-h4. Crunch user account
+If your worker hostnames are already assigned by other means, and the full set of names is known in advance, have your worker node bootstrapping script (see "Installing a compute node":install-compute-node.html) send its current hostname, rather than expect Arvados to assign one.
+* In @application.yml@: <code>assign_node_hostname: false</code>
+* In @slurm.conf@: <code>NodeName=alice,bob,clay,darlene</code>
-* @adduser crunch@
+If your worker hostnames are already assigned by other means, but the full set of names is _not_ known in advance, you can use the @slurm.conf@ and @application.yml@ settings in the previous example, but you must also update @slurm.conf@ (both on the controller and on all worker nodes) and run @sudo scontrol reconfigure@ whenever a new node comes online.
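+
+For example, if a (hypothetical) new node @erin@ comes online, extend the @NodeName@ line on the controller and every worker (@NodeName=alice,bob,clay,darlene,erin@), then run @sudo scontrol reconfigure@ on the controller.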
-The crunch user should have the same UID, GID, and home directory on all compute nodes and on the dispatcher (api server).
+h2. Enable SLURM job dispatch
-h4. Repositories
+In your API server's @application.yml@ configuration file, add the line @crunch_job_wrapper: :slurm_immediate@ under the appropriate section. (The second colon is not a typo. It denotes a Ruby symbol.)
+
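+For example, assuming the API server runs in the @production@ environment (as the dispatcher script below does), the relevant section would look like this:
+
+<notextile>
+<pre><code>production:
+  crunch_job_wrapper: :slurm_immediate
+</code></pre>
+</notextile>
+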
+h2. Crunch user account
+
+Run @sudo adduser crunch@. The crunch user should have the same UID, GID, and home directory on all compute nodes and on the dispatcher (API server).
+
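+One way to check (informal; the numeric IDs shown are only an example) is to compare @id crunch@ output across all of your nodes:
+
+<notextile>
+<pre><code>~$ <span class="userinput">id crunch</span>
+uid=1001(crunch) gid=1001(crunch) groups=1001(crunch)
+</code></pre>
+</notextile>
+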
+h2. Git repositories
Crunch scripts must be in Git repositories in the directory configured as @git_repositories_dir@/*.git (see the "API server installation":install-api-server.html#git_repositories_dir).
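+
+For example, to create a new, empty repository there (a sketch; substitute your configured @git_repositories_dir@ and your own repository name):
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo git init --bare /path/to/git_repositories_dir/myrepo.git</span>
+</code></pre>
+</notextile>
+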
+If a job specifies a @script_version@ that cannot be found in these repositories, the job will fail with an error like:
+
+<pre>
ArgumentError: Specified script_version does not resolve to a commit
</pre>
-h4. Running jobs
+h2. Run the Crunch dispatcher service
-* @services/api/script/crunch-dispatch.rb@ must be running.
-* @crunch-dispatch.rb@ needs @services/crunch/crunch-job@ in its @PATH@.
-* @crunch-job@ needs @sdk/perl/lib@ and @warehouse-apps/libwarehouse-perl/lib@ in its @PERLLIB@
-* @crunch-job@ needs @ARVADOS_API_HOST@ (and, if necessary in a development environment, @ARVADOS_API_HOST_INSECURE@)
+To dispatch Arvados jobs:
-Example @/var/service/arvados_crunch_dispatch/run@ script:
+* The API server script @crunch-dispatch.rb@ must be running.
+* @crunch-job@ needs the installation path of the Perl SDK in its @PERLLIB@.
+* @crunch-job@ needs the @ARVADOS_API_HOST@ (and, if necessary, @ARVADOS_API_HOST_INSECURE@) environment variable set.
-<pre>
-#!/bin/sh
+We recommend you run @crunch-dispatch.rb@ under "runit":http://smarden.org/runit/ or a similar supervisor. Here's an example runit service script:
+
+<notextile>
+<pre><code>#!/bin/sh
set -e
rvmexec=""
-## uncomment this line if you use rvm:
-#rvmexec="/usr/local/rvm/bin/rvm-exec 2.1.1"
+## Uncomment this line if you use RVM:
+#rvmexec="/usr/local/rvm/bin/rvm-exec default"
-export PATH="$PATH":/path/to/arvados/services/crunch
-export ARVADOS_API_HOST={{ site.arvados_api_host }}
+export ARVADOS_API_HOST=<span class="userinput">uuid_prefix.your.domain</span>
export CRUNCH_DISPATCH_LOCKFILE=/var/lock/crunch-dispatch
+export RAILS_ENV=production
-fuser -TERM -k $CRUNCH_DISPATCH_LOCKFILE || true
+## Uncomment this line if your cluster uses self-signed SSL certificates:
+#export ARVADOS_API_HOST_INSECURE=yes
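+
+## Uncomment and edit this line if the Arvados Perl SDK is not on perl's
+## default search path (the path shown is only an example):
+#export PERLLIB=/path/to/arvados/sdk/perl/lib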
-## Only if your SSL cert is unverifiable:
-# export ARVADOS_API_HOST_INSECURE=yes
+# This is the path to docker on your compute nodes. You might need to
+# change it to "docker", "/opt/bin/docker", etc.
+export CRUNCH_JOB_DOCKER_BIN=<span class="userinput">docker.io</span>
-cd /path/to/arvados/services/api
-export RAILS_ENV=production
+fuser -TERM -k $CRUNCH_DISPATCH_LOCKFILE || true
+cd /var/www/arvados-api/current
exec $rvmexec bundle exec ./script/crunch-dispatch.rb 2>&1
-</pre>
+</code></pre>
+</notextile>
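+
+Save the script as @/var/service/arvados_crunch_dispatch/run@ (or wherever your runit installation looks for service directories) and make it executable. runit will then start the dispatcher and restart it if it stops. For example, assuming that service directory:
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo chmod +x /var/service/arvados_crunch_dispatch/run</span>
+~$ <span class="userinput">sudo sv status arvados_crunch_dispatch</span>
+run: arvados_crunch_dispatch: (pid 12345) 42s
+</code></pre>
+</notextile>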