Merge branch '3219-further-docker-improvements'
authorWard Vandewege <ward@curoverse.com>
Tue, 29 Jul 2014 20:34:14 +0000 (22:34 +0200)
committerWard Vandewege <ward@curoverse.com>
Tue, 29 Jul 2014 20:34:14 +0000 (22:34 +0200)
refs #3219

40 files changed:
doc/_includes/_alert_docker.liquid [deleted file]
doc/install/index.html.textile.liquid
doc/install/install-docker.html.textile.liquid
doc/user/getting_started/check-environment.html.textile.liquid
docker/api/Dockerfile
docker/api/application.yml.in
docker/api/apt.arvados.org.list [new file with mode: 0644]
docker/api/arvados-clients.yml.in [new file with mode: 0644]
docker/api/crunch-dispatch-run.sh [new file with mode: 0755]
docker/api/keep_server_0.json [new file with mode: 0644]
docker/api/keep_server_1.json [new file with mode: 0644]
docker/api/munge.key [new file with mode: 0644]
docker/api/setup-gitolite.sh.in [new file with mode: 0755]
docker/api/setup.sh.in [new file with mode: 0755]
docker/api/slurm.conf.in [new file with mode: 0644]
docker/api/supervisor.conf
docker/api/update-gitolite.rb [new file with mode: 0755]
docker/arvdock
docker/base/Dockerfile
docker/base/apt.arvados.org.list [new file with mode: 0644]
docker/build.sh
docker/build_tools/Makefile
docker/build_tools/build.rb
docker/compute/Dockerfile [new file with mode: 0644]
docker/compute/fuse.conf [new file with mode: 0644]
docker/compute/setup.sh.in [new file with mode: 0755]
docker/compute/ssh.sh [new file with mode: 0755]
docker/compute/supervisor.conf [new file with mode: 0644]
docker/compute/wrapdocker [new file with mode: 0755]
docker/config.yml.example
docker/shell/Dockerfile [new file with mode: 0644]
docker/shell/fuse.conf [new file with mode: 0644]
docker/shell/setup.sh.in [new file with mode: 0755]
docker/shell/superuser_token.in [new file with mode: 0644]
docker/shell/supervisor.conf [new file with mode: 0644]
docker/slurm/Dockerfile [new file with mode: 0644]
docker/slurm/munge.key [new file with mode: 0644]
docker/slurm/slurm.conf.in [new file with mode: 0644]
docker/slurm/supervisor.conf [new file with mode: 0644]
docker/workbench/Dockerfile

diff --git a/doc/_includes/_alert_docker.liquid b/doc/_includes/_alert_docker.liquid
deleted file mode 100644 (file)
index be4a8e0..0000000
+++ /dev/null
@@ -1,4 +0,0 @@
-<div class="alert alert-block alert-info">
-  <button type="button" class="close" data-dismiss="alert">&times;</button>
-  <p>The Docker installation is not feature complete. We do not have a Docker container yet for crunch-dispatch and the arvados compute nodes. This means that running pipelines from a Docker-based Arvados install is currently not supported without additional manual configuration. Without that manual configuration, it is possible to use arv-crunch-job to run a 'local' job against your Docker-based Arvados installation. To do this, please refer to the "Debugging a Crunch script":{{site.baseurl}}/user/topics/tutorial-job-debug.html page.</p>
-</div>
index 3e9cc2d1d1e9cd96d1a3a7e6ffabbea771938a06..7cb0fea15a56d959a0f783a146929040e84778cf 100644 (file)
@@ -12,8 +12,6 @@ For larger scale installations, a manual installation is more appropriate.
 
 h2. Docker
 
-{% include 'alert_docker' %}
-
 "Installing with Docker":install-docker.html
 
 h2. Manual installation
index ef2b70932146521771dab2391ab3ae8249b5fe9e..4c70f7da4aafa51fa5ada186199bbc92d6df6f36 100644 (file)
@@ -4,12 +4,15 @@ navsection: installguide
 title: Installing with Docker
 ...
 
-{% include 'alert_docker' %}
+h2. Purpose
 
-h2. Prerequisites:
+This installation method is appropriate for local testing, evaluation, and
+development. It is not recommended for production use.
+
+h2. Prerequisites
 
 # A GNU/Linux (virtual) machine
-# A working Docker installation
+# A working Docker installation (see "Installing Docker":https://docs.docker.com/installation/)
 
 h2. Download the source tree
 
@@ -34,6 +37,7 @@ parameters:
 
 <pre>
 PUBLIC_KEY_PATH
+ARVADOS_USER_NAME
 API_HOSTNAME
 API_AUTO_ADMIN_USER
 </pre>
@@ -44,54 +48,58 @@ Then build the docker containers (this will take a while):
 <pre><code>
 ~$ <span class="userinput">./build.sh</span>
 ...
- ---> 05f0ae429530
-Step 9 : ADD apache2_foreground.sh /etc/apache2/foreground.sh
- ---> 7292b241305a
-Step 10 : CMD ["/etc/apache2/foreground.sh"]
- ---> Running in 82d59061ead8
- ---> 72cee36a9281
-Successfully built 72cee36a9281
-Removing intermediate container 2bc8c98c83c7
-Removing intermediate container 9457483a59cf
-Removing intermediate container 7cc5723df67c
-Removing intermediate container 5cb2cede73de
-Removing intermediate container 0acc147a7f6d
-Removing intermediate container 82d59061ead8
-Removing intermediate container 9c022a467396
-Removing intermediate container 16044441463f
-Removing intermediate container cffbbddd82d1
-date >sso-image
+Step 7 : ADD generated/setup.sh /usr/local/bin/setup.sh
+ ---> d7c0e7fdf7ab
+Removing intermediate container f3d81180795d
+Step 8 : CMD ["/usr/bin/supervisord", "-n"]
+ ---> Running in 84c64cb9f0d5
+ ---> d6cbb5002604
+Removing intermediate container 84c64cb9f0d5
+Successfully built d6cbb5002604
+date >shell-image
 </code></pre></notextile>
 
 If all goes well, you should now have a number of docker images built:
 
 <notextile>
 <pre><code>~$ <span class="userinput">docker.io images</span>
-REPOSITORY          TAG                 IMAGE ID            CREATED              VIRTUAL SIZE
-arvados/sso         latest              72cee36a9281        11 seconds ago       1.727 GB
-arvados/keep        latest              c3842f856bcb        56 seconds ago       210.6 MB
-arvados/workbench   latest              b91aa980597c        About a minute ago   2.07 GB
-arvados/doc         latest              050e9e6b8213        About a minute ago   1.442 GB
-arvados/api         latest              79843d0a8997        About a minute ago   2.112 GB
-arvados/passenger   latest              2342a550da7f        2 minutes ago        1.658 GB
-arvados/base        latest              68caefd8ea5b        5 minutes ago        1.383 GB
-arvados/debian      7.5                 6e32119ffcd0        8 minutes ago        116.8 MB
-arvados/debian      latest              6e32119ffcd0        8 minutes ago        116.8 MB
-arvados/debian      wheezy              6e32119ffcd0        8 minutes ago        116.8 MB
+REPOSITORY              TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
+arvados/shell           latest              d6cbb5002604        10 minutes ago      1.613 GB
+arvados/sso             latest              377f1fa0108e        11 minutes ago      1.807 GB
+arvados/keep            latest              ade0e0d2dd00        12 minutes ago      210.8 MB
+arvados/workbench       latest              b0e4fb6da385        12 minutes ago      2.218 GB
+arvados/doc             latest              4b64daec9454        12 minutes ago      1.524 GB
+arvados/compute         latest              7f1f5f7faf54        13 minutes ago      1.862 GB
+arvados/slurm           latest              f5bfd1008e6b        17 minutes ago      1.573 GB
+arvados/api             latest              6b93c5f5fc42        17 minutes ago      2.274 GB
+arvados/passenger       latest              add2d11fdf24        18 minutes ago      1.738 GB
+arvados/base            latest              81eaadd0c6f5        22 minutes ago      1.463 GB
+arvados/debian          7.6                 f339ce275c01        6 days ago          116.8 MB
+arvados/debian          latest              f339ce275c01        6 days ago          116.8 MB
+arvados/debian          wheezy              f339ce275c01        6 days ago          116.8 MB
+crosbymichael/skydock   latest              e985023521f6        3 months ago        510.7 MB
+crosbymichael/skydns    next                79c99a4608ed        3 months ago        525 MB
+crosbymichael/skydns    latest              1923ce648d4c        5 months ago        137.5 MB
 </code></pre></notextile>
 
 h2. Updating the Arvados Docker containers
 
-If there has been an update to the Arvados Docker building code, it is safest to rebuild the Arvados Docker images from scratch. All build information can be cleared with the '--clean' option to build.sh:
+If there has been an update to the Arvados Docker building code, it is safest to rebuild the Arvados Docker images from scratch. All build information can be cleared with the 'clean' option to build.sh:
+
+<notextile>
+<pre><code>~$ <span class="userinput">./build.sh clean</span></code></pre>
+</notextile>
+
+You can also use 'realclean', which does what 'clean' does and in addition removes all Arvados Docker containers and images from your system, with the exception of the arvados/debian image.
 
 <notextile>
-<pre><code>~$ <span class="userinput">./build.sh --clean</span></code></pre>
+<pre><code>~$ <span class="userinput">./build.sh realclean</span></code></pre>
 </notextile>
 
-You can also use '--realclean', which does what '--clean' does and in addition removes all Arvados Docker containers and images from your system.
+Finally, the 'deepclean' option does what 'realclean' does, and also removes the arvados/debian, crosbymichael/skydns and crosbymichael/skydock images.
 
 <notextile>
-<pre><code>~$ <span class="userinput">./build.sh --realclean</span></code></pre>
+<pre><code>~$ <span class="userinput">./build.sh deepclean</span></code></pre>
 </notextile>
 
 h2. Running the Arvados Docker containers
@@ -105,13 +113,16 @@ The @arvdock@ command can be used to start and stop the docker containers. It ha
 usage: ./arvdock (start|stop|restart|test) [options]
 
 ./arvdock start/stop/restart options:
-  -d [port], --doc[=port]        Documentation server (default port 9898)
-  -w [port], --workbench[=port]  Workbench server (default port 9899)
-  -s [port], --sso[=port]        SSO server (default port 9901)
-  -a [port], --api[=port]        API server (default port 9900)
-  -k, --keep                     Keep servers
-  --ssh                          Enable SSH access to server containers
-  -h, --help                     Display this help and exit
+  -d[port], --doc[=port]        Documentation server (default port 9898)
+  -w[port], --workbench[=port]  Workbench server (default port 9899)
+  -s[port], --sso[=port]        SSO server (default port 9901)
+  -a[port], --api[=port]        API server (default port 9900)
+  -c, --compute                 Compute nodes (starts 2)
+  -v, --vm                      Shell server
+  -n, --nameserver              Nameserver
+  -k, --keep                    Keep servers
+  --ssh                         Enable SSH access to server containers
+  -h, --help                    Display this help and exit
 
   If no options are given, the action is applied to all servers.
 
index c9d4778c7edc455055a03d96653f2084e2a3ec2d..287d7087d7c78be4c1b39ec413058247874bb2a0 100644 (file)
@@ -37,4 +37,4 @@ However, if you receive the following message:
 
 bc. ARVADOS_API_HOST and ARVADOS_API_TOKEN need to be defined as environment variables
 
-Then follow the instructions for "getting an API token,":{{site.baseurl}}/user/reference/api-tokens.html and try @arv user current@ again.
+follow the instructions for "getting an API token,":{{site.baseurl}}/user/reference/api-tokens.html and try @arv user current@ again.
index 99a0b4c527a9363abd645f33b4e0448f51ecff82..6a70fc30eb3672e5069d4c4a7771cfda5fcab17e 100644 (file)
@@ -4,14 +4,17 @@ FROM arvados/passenger
 MAINTAINER Tim Pierce <twp@curoverse.com>
 
 # Install postgres and apache.
-# Clone a git repository of Arvados source -- not used to build, but
-# will be used by the Commit model and anything else that needs to
-# check a git repo for crunch scripts.
-#
 RUN apt-get update && \
-    apt-get -q -y install procps postgresql postgresql-server-dev-9.1 apache2 \
-                          supervisor && \
-    git clone --bare git://github.com/curoverse/arvados.git /var/cache/git/arvados.git
+    apt-get -q -y install procps postgresql postgresql-server-dev-9.1 apache2 slurm-llnl munge \
+                          supervisor sudo libwww-perl libio-socket-ssl-perl libcrypt-ssleay-perl \
+                          libjson-perl cron
+
+ADD munge.key /etc/munge/
+RUN chown munge:munge /etc/munge/munge.key && chmod 600 /etc/munge/munge.key
+ADD generated/slurm.conf /etc/slurm-llnl/
+
+RUN /usr/local/rvm/bin/rvm-exec default gem install arvados-cli arvados
+# for crunch-dispatch
 
 RUN /bin/mkdir -p /usr/src/arvados/services
 ADD generated/api.tar.gz /usr/src/arvados/services/
@@ -47,9 +50,29 @@ RUN a2dissite default && \
     a2enmod ssl && \
     /bin/mkdir /var/run/apache2
 
+# Install a token for root
+RUN mkdir -p /root/.config/arvados; echo "ARVADOS_API_HOST=api" >> /root/.config/arvados/settings.conf && echo "ARVADOS_API_HOST_INSECURE=yes" >> /root/.config/arvados/settings.conf && echo "ARVADOS_API_TOKEN=$(cat /tmp/superuser_token)" >> /root/.config/arvados/settings.conf && chmod 600 /root/.config/arvados/settings.conf
+
+# Set up directory for job commit repo
+RUN mkdir -p /var/lib/arvados
+# Add crunch user
+RUN addgroup --gid 4005 crunch && mkdir /home/crunch && useradd --uid 4005 --gid 4005 crunch && chown crunch:crunch /home/crunch
+
+# Create keep and compute node objects
+ADD keep_server_0.json /root/
+ADD keep_server_1.json /root/
+
+# Set up update-gitolite.rb
+RUN mkdir /usr/local/arvados/config -p
+ADD generated/arvados-clients.yml /usr/local/arvados/config/
+ADD update-gitolite.rb /usr/local/arvados/
+
 # Supervisor.
 ADD supervisor.conf /etc/supervisor/conf.d/arvados.conf
 ADD ssh.sh /usr/local/bin/ssh.sh
+ADD generated/setup.sh /usr/local/bin/setup.sh
+ADD generated/setup-gitolite.sh /usr/local/bin/setup-gitolite.sh
+ADD crunch-dispatch-run.sh /usr/local/bin/crunch-dispatch-run.sh
 ADD apache2_foreground.sh /etc/apache2/foreground.sh
 
 # Start the supervisor.
index 056d4b9263d19c6b2683ee45c80268020179af6b..355c4e5f94be8aa4ceda83272daffd00433e311b 100644 (file)
@@ -50,6 +50,11 @@ production:
   #     Net::HTTP.get(URI("http://169.254.169.254/latest/meta-data/#{iface}-ipv4")).match(/^[\d\.]+$/)[0]
   #   end << '172.16.0.23'
   # %>
+  permit_create_collection_with_unsigned_manifest: true
+  git_repositories_dir: /home/git/repositories
+  crunch_job_wrapper: :slurm_immediate
+  action_mailer.raise_delivery_errors: false
+  action_mailer.perform_deliveries: false
 
 test:
   uuid_prefix: zzzzz
diff --git a/docker/api/apt.arvados.org.list b/docker/api/apt.arvados.org.list
new file mode 100644 (file)
index 0000000..7eb8716
--- /dev/null
@@ -0,0 +1,2 @@
+# apt.arvados.org
+deb http://apt.arvados.org/ wheezy main
diff --git a/docker/api/arvados-clients.yml.in b/docker/api/arvados-clients.yml.in
new file mode 100644 (file)
index 0000000..babfc4e
--- /dev/null
@@ -0,0 +1,5 @@
+production:
+  gitolite_url: 'git@api:gitolite-admin.git'
+  gitolite_tmp: 'gitolite-tmp'
+  arvados_api_host: 'api'
+  arvados_api_token: '@@API_SUPERUSER_SECRET@@'
diff --git a/docker/api/crunch-dispatch-run.sh b/docker/api/crunch-dispatch-run.sh
new file mode 100755 (executable)
index 0000000..5103b1d
--- /dev/null
@@ -0,0 +1,24 @@
+#!/bin/bash
+set -e
+export PATH="$PATH":/usr/src/arvados/services/crunch
+export PERLLIB=/usr/src/arvados/sdk/perl/lib
+export ARVADOS_API_HOST=api
+export ARVADOS_API_HOST_INSECURE=yes
+export CRUNCH_DISPATCH_LOCKFILE=/var/lock/crunch-dispatch
+
+if [[ ! -e $CRUNCH_DISPATCH_LOCKFILE ]]; then
+  touch $CRUNCH_DISPATCH_LOCKFILE
+fi
+
+export CRUNCH_JOB_BIN=/usr/src/arvados/services/crunch/crunch-job
+export HOME=`pwd`
+fuser -TERM -k $CRUNCH_DISPATCH_LOCKFILE || true
+
+# Give the compute nodes some time to start up
+sleep 5
+
+cd /usr/src/arvados/services/api
+export RAILS_ENV=production
+/usr/local/rvm/bin/rvm-exec default bundle install
+exec /usr/local/rvm/bin/rvm-exec default bundle exec ./script/crunch-dispatch.rb 2>&1
+
diff --git a/docker/api/keep_server_0.json b/docker/api/keep_server_0.json
new file mode 100644 (file)
index 0000000..ce02f50
--- /dev/null
@@ -0,0 +1,6 @@
+{
+  "service_host": "keep_server_0.keep.dev.arvados",
+  "service_port": 25107,
+  "service_ssl_flag": "false",
+  "service_type": "disk"
+}
diff --git a/docker/api/keep_server_1.json b/docker/api/keep_server_1.json
new file mode 100644 (file)
index 0000000..dbbdd1c
--- /dev/null
@@ -0,0 +1,7 @@
+{
+  "service_host": "keep_server_1.keep.dev.arvados",
+  "service_port": 25107,
+  "service_ssl_flag": "false",
+  "service_type": "disk"
+}
+
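The keep_server JSON payloads above are passed verbatim to @arv keep_service create@ by setup.sh. A hedged sketch of sanity-checking such a payload before registration (the file name here is hypothetical; assumes python3 is available):

```shell
# Write a payload matching keep_server_0.json above, then verify it parses
# and extract the service port with python3's json module.
cat > keep_server_check.json <<'EOF'
{
  "service_host": "keep_server_0.keep.dev.arvados",
  "service_port": 25107,
  "service_ssl_flag": "false",
  "service_type": "disk"
}
EOF
port=$(python3 -c 'import json; print(json.load(open("keep_server_check.json"))["service_port"])')
echo "$port"
```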
diff --git a/docker/api/munge.key b/docker/api/munge.key
new file mode 100644 (file)
index 0000000..34036a0
Binary files /dev/null and b/docker/api/munge.key differ
diff --git a/docker/api/setup-gitolite.sh.in b/docker/api/setup-gitolite.sh.in
new file mode 100755 (executable)
index 0000000..92014f9
--- /dev/null
@@ -0,0 +1,77 @@
+#!/bin/bash
+
+ssh-keygen -q -N '' -t rsa -f /root/.ssh/id_rsa
+
+useradd git
+mkdir /home/git
+
+# Set up gitolite repository
+cp ~root/.ssh/id_rsa.pub ~git/root-authorized_keys.pub
+chown git:git /home/git -R
+su - git -c "mkdir -p ~/bin"
+
+su - git -c "git clone git://github.com/sitaramc/gitolite"
+su - git -c "gitolite/install -ln ~/bin"
+su - git -c "PATH=/home/git/bin:$PATH gitolite setup -pk ~git/root-authorized_keys.pub"
+
+# Make sure the repositories are created in such a way that they are readable
+# by the api server
+sed -i 's/0077/0022/g' /home/git/.gitolite.rc
+
+# And make sure that the existing repos are equally readable, or the API server commit model will freak out...
+chmod 755 /home/git/repositories
+chmod +rx /home/git/repositories/*git -R
+
+# Now set up the gitolite repo(s) we use
+mkdir -p /usr/local/arvados/gitolite-tmp/
+# Make ssh store the host key
+ssh -o "StrictHostKeyChecking no" git@api info
+# Now check out the tree
+git clone git@api:gitolite-admin.git /usr/local/arvados/gitolite-tmp/gitolite-admin/
+cd /usr/local/arvados/gitolite-tmp/gitolite-admin
+mkdir keydir/arvados
+mkdir conf/admin
+mkdir conf/auto
+echo "
+
+@arvados_git_user = arvados_git_user
+
+repo @all
+     RW+                 = @arvados_git_user
+
+" > conf/admin/arvados.conf
+echo '
+include "auto/*.conf"
+include "admin/*.conf"
+' >> conf/gitolite.conf
+
+#su - git -c "ssh-keygen -t rsa"
+cp /root/.ssh/id_rsa.pub keydir/arvados/arvados_git_user.pub
+# Replace the 'root' key with the user key, just in case
+cp /root/.ssh/authorized_keys keydir/root-authorized_keys.pub
+# But also make sure we have the root key installed so it can access all keys
+git add keydir/root-authorized_keys.pub
+git add keydir/arvados/arvados_git_user.pub
+git add conf/admin/arvados.conf
+git add keydir/arvados/
+git add conf/gitolite.conf
+git commit -a -m 'git server setup'
+git push
+
+# Prepopulate the arvados.git repo with our source. Silly, but until we can check out from remote trees,
+# we need this to make the tutorials work.
+su - git -c "git clone --bare git://github.com/curoverse/arvados.git /home/git/repositories/arvados.git"
+
+echo "ARVADOS_API_HOST_INSECURE=yes" > /etc/cron.d/gitolite-update
+echo "*/2 * * * * root /bin/bash -c 'source /etc/profile.d/rvm.sh && /usr/local/arvados/update-gitolite.rb production'" >> /etc/cron.d/gitolite-update
+
+# Create/update the repos now
+. /etc/profile.d/rvm.sh
+export ARVADOS_API_HOST=api
+export ARVADOS_API_HOST_INSECURE=yes
+export ARVADOS_API_TOKEN=@@API_SUPERUSER_SECRET@@
+/usr/local/arvados/update-gitolite.rb production
+
+echo "PATH=/usr/bin:/bin:/sbin" > /etc/cron.d/arvados-repo-update
+echo "*/5 * * * * git cd ~git/repositories/arvados.git; git fetch https://github.com/curoverse/arvados.git master:master" >> /etc/cron.d/arvados-repo-update
+
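The @sed -i 's/0077/0022/g'@ step in setup-gitolite.sh.in relaxes gitolite's default UMASK so the API server can read the repositories it creates. A standalone illustration of that substitution (the .gitolite.rc line below is a stand-in, not the file's full contents):

```shell
rc=$(mktemp)
printf 'UMASK => 0077,\n' > "$rc"   # stand-in for the line in ~git/.gitolite.rc
sed -i 's/0077/0022/g' "$rc"        # same substitution setup-gitolite.sh.in applies
cat "$rc"                            # now reads: UMASK => 0022,
```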
diff --git a/docker/api/setup.sh.in b/docker/api/setup.sh.in
new file mode 100755 (executable)
index 0000000..cba4759
--- /dev/null
@@ -0,0 +1,62 @@
+#!/bin/bash
+
+set -x
+
+. /etc/profile.d/rvm.sh
+
+export ARVADOS_API_HOST=api
+export ARVADOS_API_HOST_INSECURE=yes
+export ARVADOS_API_TOKEN=@@API_SUPERUSER_SECRET@@
+
+# All users group
+prefix=`arv --format=uuid user current | cut -d- -f1`
+read -rd $'\000' newgroup <<EOF; arv group create --group "$newgroup"
+{
+ "uuid":"$prefix-j7d0g-fffffffffffffff",
+ "name":"All users"
+}
+EOF
+
+# Arvados repository object
+all_users_group_uuid="$prefix-j7d0g-fffffffffffffff"
+repo_uuid=`arv --format=uuid repository create --repository '{"name":"arvados","fetch_url":"git@api:arvados.git","push_url":"git@api:arvados.git"}'`
+echo "Arvados repository uuid is $repo_uuid"
+
+read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"
+{
+ "tail_uuid":"$all_users_group_uuid",
+ "head_uuid":"$repo_uuid",
+ "link_class":"permission",
+ "name":"can_read"
+}
+EOF
+
+# Make sure the necessary keep_service objects exist
+arv keep_service list > /tmp/keep_service.list
+
+grep -q keep_server_0 /tmp/keep_service.list
+if [[ "$?" != "0" ]]; then
+  arv keep_service create --keep-service "$(cat /root/keep_server_0.json)"
+fi
+
+grep -q keep_server_1 /tmp/keep_service.list
+if [[ "$?" != "0" ]]; then
+  arv keep_service create --keep-service "$(cat /root/keep_server_1.json)"
+fi
+
+# User repository object
+user_uuid=`arv --format=uuid user current`
+repo_uuid=`arv --format=uuid repository create --repository '{"name":"@@ARVADOS_USER_NAME@@","fetch_url":"git@api:@@ARVADOS_USER_NAME@@.git","push_url":"git@api:@@ARVADOS_USER_NAME@@.git"}'`
+echo "User repository uuid is $repo_uuid"
+
+read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"
+{
+ "tail_uuid":"$user_uuid",
+ "head_uuid":"$repo_uuid",
+ "link_class":"permission",
+ "name":"can_write"
+}
+EOF
+
+# Shell machine object
+arv virtual_machine create --virtual-machine '{"hostname":"shell"}'
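setup.sh.in builds each API payload with the @read -rd $'\000' var <<EOF@ idiom, which reads an entire heredoc, newlines included, into a shell variable before the @arv ... create@ call on the same line consumes it. A minimal illustration of the idiom on its own:

```shell
# Read a multi-line JSON document into a variable. read returns non-zero at
# end-of-input, so '|| true' keeps scripts running under 'set -e'.
read -rd $'\000' newgroup <<EOF || true
{
 "name":"All users"
}
EOF
echo "$newgroup"
```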
diff --git a/docker/api/slurm.conf.in b/docker/api/slurm.conf.in
new file mode 100644 (file)
index 0000000..7312a0e
--- /dev/null
@@ -0,0 +1,60 @@
+
+ControlMachine=api
+#SlurmUser=slurmd
+SlurmctldPort=6817
+SlurmdPort=6818
+AuthType=auth/munge
+#JobCredentialPrivateKey=/etc/slurm-llnl/slurm-key.pem
+#JobCredentialPublicCertificate=/etc/slurm-llnl/slurm-cert.pem
+StateSaveLocation=/tmp
+SlurmdSpoolDir=/tmp/slurmd
+SwitchType=switch/none
+MpiDefault=none
+SlurmctldPidFile=/var/run/slurmctld.pid
+SlurmdPidFile=/var/run/slurmd.pid
+ProctrackType=proctrack/pgid
+CacheGroups=0
+ReturnToService=2
+TaskPlugin=task/affinity
+#
+# TIMERS
+SlurmctldTimeout=300
+SlurmdTimeout=300
+InactiveLimit=0
+MinJobAge=300
+KillWait=30
+Waittime=0
+#
+# SCHEDULING
+SchedulerType=sched/backfill
+#SchedulerType=sched/builtin
+SchedulerPort=7321
+#SchedulerRootFilter=
+#SelectType=select/linear
+SelectType=select/cons_res
+SelectTypeParameters=CR_CPU_Memory
+FastSchedule=1
+#
+# LOGGING
+SlurmctldDebug=3
+#SlurmctldLogFile=
+SlurmdDebug=3
+#SlurmdLogFile=
+JobCompType=jobcomp/none
+#JobCompLoc=
+JobAcctGatherType=jobacct_gather/none
+#JobAcctLogfile=
+#JobAcctFrequency=
+#
+# COMPUTE NODES
+NodeName=DEFAULT
+# CPUs=8 State=UNKNOWN RealMemory=6967 Weight=6967
+PartitionName=DEFAULT MaxTime=INFINITE State=UP
+PartitionName=compute Default=YES Shared=yes
+#PartitionName=sysadmin Hidden=YES Shared=yes
+
+NodeName=compute[0-1]
+#NodeName=compute0 RealMemory=6967 Weight=6967
+
+PartitionName=compute Nodes=compute[0-1]
+PartitionName=crypto Nodes=compute[0-1]
index a4f91296156590de77cb72a09f4b3c4de043bbd9..e85bb72658ee48dbb464a5aa088e0403c7ca1054 100644 (file)
@@ -10,3 +10,32 @@ command=/usr/lib/postgresql/9.1/bin/postgres -D /var/lib/postgresql/9.1/main -c
 [program:apache2]
 command=/etc/apache2/foreground.sh
 stopsignal=6
+
+[program:munge]
+user=root
+command=/etc/init.d/munge start
+startsecs=0
+
+[program:slurm]
+user=root
+command=/etc/init.d/slurm-llnl start
+startsecs=0
+
+[program:cron]
+user=root
+command=/etc/init.d/cron start
+startsecs=0
+
+[program:setup]
+user=root
+command=/usr/local/bin/setup.sh
+startsecs=0
+
+[program:setup-gitolite]
+user=root
+command=/usr/local/bin/setup-gitolite.sh
+startsecs=0
+
+[program:crunch-dispatch]
+user=root
+command=/usr/local/bin/crunch-dispatch-run.sh
diff --git a/docker/api/update-gitolite.rb b/docker/api/update-gitolite.rb
new file mode 100755 (executable)
index 0000000..779099a
--- /dev/null
@@ -0,0 +1,162 @@
+#!/usr/bin/env ruby
+
+require 'rubygems'
+require 'pp'
+require 'arvados'
+require 'active_support/all'
+
+# This script does the actual gitolite config management on disk.
+#
+# Ward Vandewege <ward@curoverse.com>
+
+# Default is development
+production = ARGV[0] == "production"
+
+ENV["RAILS_ENV"] = "development"
+ENV["RAILS_ENV"] = "production" if production
+
+DEBUG = 1
+
+# load and merge in the environment-specific application config info
+# if present, overriding base config parameters as specified
+path = File.dirname(__FILE__) + '/config/arvados-clients.yml'
+if File.exists?(path) then
+  cp_config = YAML.load_file(path)[ENV['RAILS_ENV']]
+else
+  puts "Please create a\n " + File.dirname(__FILE__) + "/config/arvados-clients.yml\n file"
+  exit 1
+end
+
+gitolite_url = cp_config['gitolite_url']
+gitolite_tmp = cp_config['gitolite_tmp']
+
+gitolite_admin = File.join(File.expand_path(File.dirname(__FILE__)) + '/' + gitolite_tmp + '/gitolite-admin')
+
+ENV['ARVADOS_API_HOST'] = cp_config['arvados_api_host']
+ENV['ARVADOS_API_TOKEN'] = cp_config['arvados_api_token']
+
+keys = ''
+
+seen = Hash.new
+
+def ensure_repo(name,permissions,user_keys,gitolite_admin)
+  tmp = ''
+  # Just in case...
+  name.gsub!(/[^a-z0-9]/i,'')
+
+  keys = Hash.new()
+
+  user_keys.each do |uuid,p|
+    p.each do |k|
+      next if k[:public_key].nil?
+      keys[uuid] = Array.new() if not keys.key?(uuid)
+
+      key = k[:public_key]
+      # Handle putty-style ssh public keys
+      key.sub!(/^(Comment: "r[^\n]*\n)(.*)$/m,'ssh-rsa \2 \1')
+      key.sub!(/^(Comment: "d[^\n]*\n)(.*)$/m,'ssh-dss \2 \1')
+      key.gsub!(/\n/,'')
+      key.strip!
+
+      keys[uuid].push(key)
+    end
+  end
+
+  cf = gitolite_admin + '/conf/auto/' + name + '.conf'
+
+  conf = "\nrepo #{name}\n"
+
+  commit = false
+
+  seen = {}
+  permissions.sort.each do |uuid,v|
+    conf += "\t#{v[:gitolite_permissions]}\t= #{uuid.to_s}\n"
+
+    count = 0
+    keys.include?(uuid) and keys[uuid].each do |v|
+      kf = gitolite_admin + '/keydir/arvados/' + uuid.to_s + "@#{count}.pub"
+      seen[kf] = true
+      if !File.exists?(kf) or IO::read(kf) != v then
+        commit = true
+        f = File.new(kf + ".tmp",'w')
+        f.write(v)
+        f.close()
+        # File.rename will overwrite the destination file if it exists
+        File.rename(kf + ".tmp",kf);
+      end
+      count += 1
+    end
+  end
+
+  if !File.exists?(cf) or IO::read(cf) != conf then
+    commit = true
+    f = File.new(cf + ".tmp",'w')
+    f.write(conf)
+    f.close()
+    # this is about as atomic as we can make the replacement of the file...
+    File.unlink(cf) if File.exists?(cf)
+    File.rename(cf + ".tmp",cf);
+  end
+
+  return commit,seen
+end
+
+begin
+
+  pwd = Dir.pwd
+  # Get our local gitolite-admin repo up to snuff
+  if not File.exists?(File.dirname(__FILE__) + '/' + gitolite_tmp) then
+    Dir.mkdir(File.join(File.dirname(__FILE__) + '/' + gitolite_tmp), 0700)
+  end
+  if not File.exists?(gitolite_admin) then
+    Dir.chdir(File.join(File.dirname(__FILE__) + '/' + gitolite_tmp))
+    `git clone #{gitolite_url}`
+  else
+    Dir.chdir(gitolite_admin)
+    `git pull`
+  end
+  Dir.chdir(pwd)
+
+  arv = Arvados.new( { :suppress_ssl_warnings => false } )
+
+  permissions = arv.repository.get_all_permissions
+
+  repos = permissions[:repositories]
+  user_keys = permissions[:user_keys]
+
+  @commit = false
+
+  @seen = {}
+
+  repos.each do |r|
+    next if r[:name].nil?
+    (@c,@s) = ensure_repo(r[:name],r[:user_permissions],user_keys,gitolite_admin)
+    @seen.merge!(@s)
+    @commit = true if @c
+  end
+
+  # Clean up public key files that should not be present
+  Dir.glob(gitolite_admin + '/keydir/arvados/*.pub') do |key_file|
+    next if key_file =~ /arvados_git_user.pub$/
+    next if @seen.has_key?(key_file)
+    puts "Extra file #{key_file}"
+    @commit = true
+    Dir.chdir(gitolite_admin)
+    key_file.gsub!(/^#{gitolite_admin}\//,'')
+    `git rm #{key_file}`
+  end
+
+  if @commit then
+    message = "#{Time.now().to_s}: update from API"
+    Dir.chdir(gitolite_admin)
+    `git add --all`
+    `git commit -m '#{message}'`
+    `git push`
+  end
+
+rescue Exception => bang
+  puts "Error: " + bang.to_s
+  puts bang.backtrace.join("\n")
+  exit 1
+end
+
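update-gitolite.rb flattens each uploaded public key to a single line (the @key.gsub!(/\n/,'')@ step) before writing it into keydir/. The same newline stripping can be sketched in shell with @tr@:

```shell
# Collapse a public key that arrived with embedded newlines into one line,
# mirroring the gsub step in update-gitolite.rb above.
key=$(printf 'ssh-rsa AAAAB3Nza\nC1yc2EAAAA\n' | tr -d '\n')
echo "$key"
```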
index f2edc19c737107857e6216b41de52f2e04308d09..8615545a941018296663efb088e350e750302898 100755 (executable)
@@ -7,18 +7,23 @@ if [[ "$DOCKER" == "" ]]; then
     DOCKER=`which docker`
 fi
 
+COMPUTE_COUNTER=0
+
 function usage {
     echo >&2
     echo >&2 "usage: $0 (start|stop|restart|test) [options]"
     echo >&2
     echo >&2 "$0 start/stop/restart options:"
-    echo >&2 "  -d [port], --doc[=port]        Documentation server (default port 9898)"
-    echo >&2 "  -w [port], --workbench[=port]  Workbench server (default port 9899)"
-    echo >&2 "  -s [port], --sso[=port]        SSO server (default port 9901)"
-    echo >&2 "  -a [port], --api[=port]        API server (default port 9900)"
-    echo >&2 "  -k, --keep                     Keep servers"
-    echo >&2 "  --ssh                          Enable SSH access to server containers"
-    echo >&2 "  -h, --help                     Display this help and exit"
+    echo >&2 "  -d[port], --doc[=port]        Documentation server (default port 9898)"
+    echo >&2 "  -w[port], --workbench[=port]  Workbench server (default port 9899)"
+    echo >&2 "  -s[port], --sso[=port]        SSO server (default port 9901)"
+    echo >&2 "  -a[port], --api[=port]        API server (default port 9900)"
+    echo >&2 "  -c, --compute                 Compute nodes (starts 2)"
+    echo >&2 "  -v, --vm                      Shell server"
+    echo >&2 "  -n, --nameserver              Nameserver"
+    echo >&2 "  -k, --keep                    Keep servers"
+    echo >&2 "  --ssh                         Enable SSH access to server containers"
+    echo >&2 "  -h, --help                    Display this help and exit"
     echo >&2
     echo >&2 "  If no options are given, the action is applied to all servers."
     echo >&2
@@ -39,7 +44,16 @@ function start_container {
     fi
     if [[ "$2" != '' ]]; then
       local name="$2"
-      args="$args --name $name"
+      if [[ "$name" == "api_server" ]]; then
+        args="$args --dns=172.17.42.1 --dns-search=compute.dev.arvados --hostname api -P --name $name"
+      elif [[ "$name" == "compute" ]]; then
+        name=$name$COMPUTE_COUNTER
+        # We need --privileged because we run docker-inside-docker on the compute nodes
+        args="$args --dns=172.17.42.1 --dns-search=compute.dev.arvados --hostname compute$COMPUTE_COUNTER -P --privileged --name $name"
+        let COMPUTE_COUNTER=$(($COMPUTE_COUNTER + 1))
+      else
+        args="$args --dns=172.17.42.1 --dns-search=dev.arvados --hostname ${name#_server} --name $name"
+      fi
     fi
     if [[ "$3" != '' ]]; then
       local volume="$3"
@@ -66,13 +80,14 @@ function start_container {
     $DOCKER rm "$name" 2>/dev/null
 
     echo "Starting container:"
+    #echo "  $DOCKER run --dns=127.0.0.1 $args $image"
     echo "  $DOCKER run $args $image"
     container=`$DOCKER run $args $image`
     if [[ "$?" != "0" ]]; then
       echo "Unable to start container"
       exit 1
     fi
-    if $ENABLE_SSH
+    if [[ "$name" == "compute" || $ENABLE_SSH ]];
     then
       ip=$(ip_address $container )
       echo
@@ -130,12 +145,15 @@ function do_start {
     local start_doc=false
     local start_sso=false
     local start_api=false
+    local start_compute=false
     local start_workbench=false
+    local start_vm=false
+    local start_nameserver=false
     local start_keep=false
 
     # NOTE: This requires GNU getopt (part of the util-linux package on Debian-based distros).
-    local TEMP=`getopt -o d::s::a::w::kh \
-                  --long doc::,sso::,api::,workbench::,keep,help,ssh \
+    local TEMP=`getopt -o d::s::a::cw::nkvh \
+                  --long doc::,sso::,api::,compute,workbench::,nameserver,keep,vm,help,ssh \
                   -n "$0" -- "$@"`
 
     if [ $? != 0 ] ; then echo "Use -h for help"; exit 1 ; fi
@@ -164,12 +182,24 @@ function do_start {
                     *)  start_api=$2; shift 2 ;;
                 esac
                 ;;
+            -c | --compute)
+                start_compute=2
+                shift
+                ;;
             -w | --workbench)
                 case "$2" in
                     "") start_workbench=9899; shift 2 ;;
                     *)  start_workbench=$2; shift 2 ;;
                 esac
                 ;;
+            -v | --vm)
+                start_vm=true
+                shift
+                ;;
+            -n | --nameserver)
+                start_nameserver=true
+                shift
+                ;;
             -k | --keep)
                 start_keep=true
                 shift
@@ -194,13 +224,19 @@ function do_start {
     if [[ $start_doc == false &&
           $start_sso == false &&
           $start_api == false &&
+          $start_compute == false &&
           $start_workbench == false &&
+          $start_vm == false &&
+          $start_nameserver == false &&
           $start_keep == false ]]
     then
         start_doc=9898
         start_sso=9901
         start_api=9900
+        start_compute=2
         start_workbench=9899
+        start_vm=true
+        start_nameserver=true
         start_keep=true
     fi
 
@@ -214,6 +250,31 @@ function do_start {
         start_container "$start_api:443" "api_server" '' "sso_server:sso" "arvados/api"
     fi
 
+    if [[ $start_nameserver != false ]]
+    then
+      # We rely on skydock and skydns for dns discovery between the slurm controller and compute nodes,
+      # so make sure they are running
+      $DOCKER ps | grep skydns >/dev/null
+      if [[ "$?" != "0" ]]; then
+        echo "Starting crosbymichael/skydns container..."
+        $DOCKER rm "skydns" 2>/dev/null
+        $DOCKER run -d -p 172.17.42.1:53:53/udp --name skydns crosbymichael/skydns -nameserver 8.8.8.8:53 -domain arvados
+      fi
+      $DOCKER ps | grep skydock >/dev/null
+      if [[ "$?" != "0" ]]; then
+        echo "Starting crosbymichael/skydock container..."
+        $DOCKER rm "skydock" 2>/dev/null
+        $DOCKER run -d -v /var/run/docker.sock:/docker.sock --name skydock crosbymichael/skydock -ttl 30 -environment dev -s /docker.sock -domain arvados -name skydns
+      fi
+    fi
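The skydns/skydock startup above uses a check-then-start pattern: grep `docker ps` output for the container and only start it when absent. A docker-free sketch of the same idempotency idea (the `ensure_started` helper is illustrative, not part of arvdock):

```shell
#!/bin/bash
# Check-then-start idiom: only start a service when it is absent
# from the list of running services.
ensure_started() {
  local running="$1" name="$2"
  if ! echo "$running" | grep -q "$name"; then
    echo "starting $name"
  fi
}
ensure_started "skydns" "skydock"          # skydock missing -> "starting skydock"
ensure_started "skydns skydock" "skydns"   # already running -> no output
```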
+
+    if [[ $start_compute != false ]]
+    then
+        for i in `seq 0 $(($start_compute - 1))`; do
+          start_container "" "compute" '' "api_server:api" "arvados/compute"
+        done
+    fi
+
     if [[ $start_keep != false ]]
     then
         # create `keep_volumes' array with a list of keep mount points
@@ -238,6 +299,11 @@ function do_start {
         start_container "$start_doc:80" "doc_server" '' '' "arvados/doc"
     fi
 
+    if [[ $start_vm != false ]]
+    then
+        start_container "" "shell" '' "api_server:api" "arvados/shell"
+    fi
+
     if [[ $start_workbench != false ]]
     then
         start_container "$start_workbench:80" "workbench_server" '' "api_server:api" "arvados/workbench"
@@ -258,12 +324,15 @@ function do_stop {
     local stop_doc=""
     local stop_sso=""
     local stop_api=""
+    local stop_compute=""
     local stop_workbench=""
+    local stop_nameserver=""
+    local stop_vm=""
     local stop_keep=""
 
     # NOTE: This requires GNU getopt (part of the util-linux package on Debian-based distros).
-    local TEMP=`getopt -o d::s::a::w::kh \
-                  --long doc::,sso::,api::,workbench::,keep,help,ssh \
+    local TEMP=`getopt -o dsacwnkvh \
+                  --long doc,sso,api,compute,workbench,nameserver,keep,vm,help \
                   -n "$0" -- "$@"`
 
     if [ $? != 0 ] ; then echo "Use -h for help"; exit 1 ; fi
@@ -275,18 +344,21 @@ function do_stop {
     do
         case $1 in
             -d | --doc)
-                stop_doc=doc_server ; shift ;;
+                stop_doc=doc_server ; shift ;;
             -s | --sso)
-                stop_sso=sso_server ; shift ;;
+                stop_sso=sso_server ; shift ;;
             -a | --api)
-                stop_api=api_server ; shift 2 ;;
+                stop_api=api_server ; shift ;;
+            -c | --compute)
+                stop_compute=`$DOCKER ps |grep -P "compute\d+" |grep -v api_server |cut -f1 -d ' '` ; shift ;;
             -w | --workbench)
-                stop_workbench=workbench_server ; shift 2 ;;
+                stop_workbench=workbench_server ; shift ;;
+            -n | --nameserver )
+                stop_nameserver="skydock skydns" ; shift ;;
+            -v | --vm )
+                stop_vm="shell" ; shift ;;
             -k | --keep )
                 stop_keep="keep_server_0 keep_server_1" ; shift ;;
-            --ssh)
-                shift
-                ;;
             --)
                 shift
                 break
@@ -302,17 +374,23 @@ function do_stop {
     if [[ $stop_doc == "" &&
           $stop_sso == "" &&
           $stop_api == "" &&
+          $stop_compute == "" &&
           $stop_workbench == "" &&
+          $stop_vm == "" &&
+          $stop_nameserver == "" &&
           $stop_keep == "" ]]
     then
         stop_doc=doc_server
         stop_sso=sso_server
         stop_api=api_server
+        stop_compute=`$DOCKER ps |grep -P "compute\d+" |grep -v api_server |cut -f1 -d ' '`
         stop_workbench=workbench_server
+        stop_vm=shell
+        stop_nameserver="skydock skydns"
         stop_keep="keep_server_0 keep_server_1"
     fi
 
-    $DOCKER stop $stop_doc $stop_sso $stop_api $stop_workbench $stop_keep \
+    $DOCKER stop $stop_doc $stop_sso $stop_api $stop_compute $stop_workbench $stop_nameserver $stop_keep $stop_vm \
         2>/dev/null
 }
 
index f1defceb7d2f2a1e85eaadb3ad7d81e25ec70263..79cb42444ec33ab1a7a1ec822f8e9edd56534af7 100644 (file)
@@ -22,6 +22,10 @@ RUN apt-get update && \
     /usr/local/rvm/bin/rvm alias create default ruby && \
     /bin/mkdir -p /usr/src/arvados
 
+ADD apt.arvados.org.list /etc/apt/sources.list.d/
+RUN apt-key adv --keyserver pgp.mit.edu --recv 1078ECD7
+RUN apt-get update && apt-get -qqy install python-arvados-python-client
+
 ADD generated/arvados.tar.gz /usr/src/arvados/
 
 # Update gem. This (hopefully) fixes
diff --git a/docker/base/apt.arvados.org.list b/docker/base/apt.arvados.org.list
new file mode 100644 (file)
index 0000000..7eb8716
--- /dev/null
@@ -0,0 +1,2 @@
+# apt.arvados.org
+deb http://apt.arvados.org/ wheezy main
index cbcc840667c4001365d5644be6fc4e1b16817d90..c9fbf05ff64570c2d7be6e51399f952a242469ec 100755 (executable)
@@ -11,4 +11,34 @@ then
     sudo apt-get -y install ruby1.9.3
 fi
 
-build_tools/build.rb $*
+function usage {
+    echo >&2
+    echo >&2 "usage: $0 [options]"
+    echo >&2
+    echo >&2 "Calling $0 without arguments will build all Arvados docker images"
+    echo >&2
+    echo >&2 "$0 options:"
+    echo >&2 "  -h, --help   Print this help text"
+    echo >&2 "  clean        Clear all build information"
+    echo >&2 "  realclean    clean and remove all Arvados Docker images except arvados/debian"
+    echo >&2 "  deepclean    realclean and remove arvados/debian, crosbymichael/skydns and"
+    echo >&2 "               crosbymichael/skydock Docker images"
+    echo >&2
+}
+
+if [ "$1" = '-h' ] || [ "$1" = '--help' ]; then
+  usage
+  exit 1
+fi
+
+build_tools/build.rb
+
+if [[ "$?" == "0" ]]; then
+    DOCKER=`which docker.io`
+
+    if [[ "$DOCKER" == "" ]]; then
+      DOCKER=`which docker`
+    fi
+
+    DOCKER=$DOCKER /usr/bin/make -f build_tools/Makefile $*
+fi
index 267e24403650202580ef0e64816cf7c892e9e548..74a04dff5e5d7be7b0369f0d79e1cda9f0d069cd 100644 (file)
@@ -1,18 +1,50 @@
-all: api-image doc-image workbench-image keep-image sso-image
+# This is the 'shell hack'. Call make with DUMP=1 to see the effect.
+ifdef DUMP
+OLD_SHELL := $(SHELL)
+SHELL = $(warning [$@])$(OLD_SHELL) -x
+endif
+
+all: skydns-image skydock-image api-image compute-image doc-image workbench-image keep-image sso-image shell-image
+
+IMAGE_FILES := $(shell ls *-image 2>/dev/null |grep -v debian-arvados-image)
+GENERATED_FILES := $(shell ls */generated/* 2>/dev/null)
+GENERATED_DIRS := $(shell ls */generated 2>/dev/null)
 
 # `make clean' removes the files generated in the build directory
 # but does not remove any docker images generated in previous builds
 clean:
-       -rm -rf build
-       -rm *-image */generated/*
-       -@rmdir */generated
-
-# `make realclean' will also remove the docker images and force
-# subsequent makes to build the entire chain from the ground up
+       @echo "make clean"
+       -@rm -rf build
+       +@[ "$(IMAGE_FILES)$(GENERATED_FILES)" = "" ] || rm $(IMAGE_FILES) $(GENERATED_FILES) 2>/dev/null
+       +@[ "$(GENERATED_DIRS)" = "" ] || rmdir */generated 2>/dev/null
+
+DEBIAN_IMAGE := $(shell $(DOCKER) images -q arvados/debian |head -n1)
+
+REALCLEAN_CONTAINERS := $(shell $(DOCKER) ps -a |grep -e arvados -e api_server -e keep_server -e doc_server -e workbench_server |cut -f 1 -d' ')
+REALCLEAN_IMAGES := $(shell $(DOCKER) images -q arvados/* |grep -v $(DEBIAN_IMAGE) 2>/dev/null)
+DEEPCLEAN_IMAGES := $(shell $(DOCKER) images -q arvados/*)
+SKYDNS_CONTAINERS := $(shell $(DOCKER) ps -a |grep -e crosbymichael/skydns -e crosbymichael/skydock |cut -f 1 -d' ')
+SKYDNS_IMAGES := $(shell $(DOCKER) images -q crosbymichael/skyd*)
+
+# `make realclean' will also remove the Arvados docker images (but not the
+# arvados/debian image) and force subsequent makes to build the entire chain
+# from the ground up
 realclean: clean
-       -[ -n "`$(DOCKER) ps -q`" ] && $(DOCKER) stop `$(DOCKER) ps -q`
-       -$(DOCKER) rm `$(DOCKER) ps -a |grep -e arvados -e api_server -e keep_server -e doc_server -e workbench_server |cut -f 1 -d' '`
-       -$(DOCKER) rmi `$(DOCKER) images -q arvados/*`
+       @echo "make realclean"
+       +@[ "`$(DOCKER) ps -q`" = '' ] || $(DOCKER) stop `$(DOCKER) ps -q`
+       +@[ "$(REALCLEAN_CONTAINERS)" = '' ] || $(DOCKER) rm $(REALCLEAN_CONTAINERS)
+       +@[ "$(REALCLEAN_IMAGES)" = '' ] || $(DOCKER) rmi $(REALCLEAN_IMAGES)
+
+# `make deepclean' will remove all Arvados docker images and the skydns/skydock
+# images and force subsequent makes to build the entire chain from the ground up
+deepclean: clean
+       @echo "make deepclean"
+       -@rm -f debian-arvados-image 2>/dev/null
+       +@[ "`$(DOCKER) ps -q`" = '' ] || $(DOCKER) stop `$(DOCKER) ps -q`
+       +@[ "$(REALCLEAN_CONTAINERS)" = '' ] || $(DOCKER) rm $(REALCLEAN_CONTAINERS)
+       +@[ "$(DEEPCLEAN_IMAGES)" = '' ] || $(DOCKER) rmi $(DEEPCLEAN_IMAGES)
+       +@[ "$(SKYDNS_CONTAINERS)" = '' ] || $(DOCKER) rm $(SKYDNS_CONTAINERS)
+       +@[ "$(SKYDNS_IMAGES)" = '' ] || $(DOCKER) rmi $(SKYDNS_IMAGES)
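The `[ "$(VAR)" = '' ] || cmd` guards in the clean targets above keep `make` from invoking `docker rm`/`docker rmi` with an empty argument list, which would fail. The same guard pattern in plain shell (names are illustrative):

```shell
#!/bin/bash
# Guard pattern: skip the command entirely when the list is empty,
# instead of running e.g. `docker rmi` with no arguments (an error).
remove_images() {
  local images="$1"
  [ "$images" = "" ] || echo "docker rmi $images"
}
remove_images ""            # empty list -> no output, nothing to remove
remove_images "img1 img2"   # prints the command that would run
```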
 
 # ============================================================
 # Dependencies for */generated files which are prerequisites
@@ -24,11 +56,17 @@ BUILD = build/.buildstamp
 
 BASE_DEPS = base/Dockerfile $(BASE_GENERATED)
 
+SLURM_DEPS = slurm/Dockerfile $(SLURM_GENERATED)
+
 JOBS_DEPS = jobs/Dockerfile
 
 JAVA_BWA_SAMTOOLS_DEPS = java-bwa-samtools/Dockerfile
 
-API_DEPS = api/Dockerfile $(API_GENERATED)
+API_DEPS = api/* $(API_GENERATED)
+
+SHELL_DEPS = shell/* $(SHELL_GENERATED)
+
+COMPUTE_DEPS = compute/* $(COMPUTE_GENERATED)
 
 DOC_DEPS = doc/Dockerfile doc/apache2_vhost
 
@@ -43,22 +81,50 @@ BCBIO_NEXTGEN_DEPS = bcbio-nextgen/Dockerfile
 
 BASE_GENERATED = base/generated/arvados.tar.gz
 
+COMPUTE_GENERATED = compute/generated/setup.sh
+
+COMPUTE_GENERATED_IN = compute/setup.sh.in
+
 API_GENERATED = \
+        api/generated/arvados-clients.yml \
         api/generated/apache2_vhost \
         api/generated/config_databases.sh \
         api/generated/database.yml \
         api/generated/omniauth.rb \
         api/generated/application.yml \
+        api/generated/setup.sh \
+        api/generated/setup-gitolite.sh \
+        api/generated/slurm.conf \
         api/generated/superuser_token
 
 API_GENERATED_IN = \
+        api/arvados-clients.yml.in \
         api/apache2_vhost.in \
         api/config_databases.sh.in \
         api/database.yml.in \
         api/omniauth.rb.in \
         api/application.yml.in \
+        api/setup.sh.in \
+        api/setup-gitolite.sh.in \
+        api/slurm.conf.in \
         api/superuser_token.in
 
+SHELL_GENERATED = \
+        shell/generated/setup.sh \
+        shell/generated/superuser_token
+
+SHELL_GENERATED_IN = \
+        shell/setup.sh.in \
+        shell/superuser_token.in
+
+SLURM_GENERATED = \
+        slurm/generated/slurm.conf
+
+SLURM_GENERATED_IN = \
+        slurm/slurm.conf.in
+
 WORKBENCH_GENERATED = \
         workbench/generated/apache2_vhost \
         workbench/generated/application.yml
@@ -88,6 +154,10 @@ $(BUILD):
        cd build/sdk/ruby && gem build arvados.gemspec
        touch build/.buildstamp
 
+$(SLURM_GENERATED): config.yml $(BUILD)
+       $(CONFIG_RB)
+       mkdir -p slurm/generated
+
 $(BASE_GENERATED): config.yml $(BUILD)
        $(CONFIG_RB)
        mkdir -p base/generated
@@ -96,9 +166,14 @@ $(BASE_GENERATED): config.yml $(BUILD)
 $(API_GENERATED): config.yml $(API_GENERATED_IN)
        $(CONFIG_RB)
 
+$(SHELL_GENERATED): config.yml $(SHELL_GENERATED_IN)
+       $(CONFIG_RB)
+
 $(WORKBENCH_GENERATED): config.yml $(WORKBENCH_GENERATED_IN)
        $(CONFIG_RB)
 
+$(COMPUTE_GENERATED): config.yml $(COMPUTE_GENERATED_IN)
+	$(CONFIG_RB)
+
 $(WAREHOUSE_GENERATED): config.yml $(WAREHOUSE_GENERATED_IN)
        $(CONFIG_RB)
 
@@ -114,18 +189,36 @@ DOCKER_BUILD = $(DOCKER) build -q --rm=true
 # The main Arvados servers: api, doc, workbench, warehouse
 
 api-image: passenger-image $(BUILD) $(API_DEPS)
+       @echo "Building api-image"
        mkdir -p api/generated
        tar -czf api/generated/api.tar.gz -C build/services api
+       chmod 755 api/generated/setup.sh
+       chmod 755 api/generated/setup-gitolite.sh
        $(DOCKER_BUILD) -t arvados/api api
        date >api-image
 
+shell-image: base-image $(BUILD) $(SHELL_DEPS)
+       @echo "Building shell-image"
+       mkdir -p shell/generated
+       chmod 755 shell/generated/setup.sh
+       $(DOCKER_BUILD) -t arvados/shell shell
+       date >shell-image
+
+compute-image: slurm-image $(BUILD) $(COMPUTE_DEPS)
+       @echo "Building compute-image"
+       chmod 755 compute/generated/setup.sh
+       $(DOCKER_BUILD) -t arvados/compute compute
+       date >compute-image
+
 doc-image: base-image $(BUILD) $(DOC_DEPS)
+       @echo "Building doc-image"
        mkdir -p doc/generated
        tar -czf doc/generated/doc.tar.gz -C build doc
        $(DOCKER_BUILD) -t arvados/doc doc
        date >doc-image
 
-keep-image: debian-image $(BUILD) $(KEEP_DEPS)
+keep-image: debian-arvados-image $(BUILD) $(KEEP_DEPS)
+       @echo "Building keep-image"
        $(DOCKER_BUILD) -t arvados/keep keep
        date >keep-image
 
@@ -144,6 +237,7 @@ bcbio-nextgen-image: $(BUILD) $(BASE_GENERATED) $(BCBIO_NEXTGEN_DEPS)
        date >bcbio-nextgen-image
 
 workbench-image: passenger-image $(BUILD) $(WORKBENCH_DEPS)
+       @echo "Building workbench-image"
        mkdir -p workbench/generated
        tar -czf workbench/generated/workbench.tar.gz -C build/apps workbench
        $(DOCKER_BUILD) -t arvados/workbench workbench
@@ -154,6 +248,7 @@ warehouse-image: base-image $(WAREHOUSE_DEPS)
        date >warehouse-image
 
 sso-image: passenger-image $(SSO_DEPS)
+       @echo "Building sso-image"
        $(DOCKER_BUILD) -t arvados/sso sso
        date >sso-image
 
@@ -162,13 +257,31 @@ sso-image: passenger-image $(SSO_DEPS)
 # that are dependencies for every Arvados service.
 
 passenger-image: base-image
+       @echo "Building passenger-image"
        $(DOCKER_BUILD) -t arvados/passenger passenger
        date >passenger-image
 
-base-image: debian-image $(BASE_DEPS)
+slurm-image: base-image $(SLURM_DEPS)
+       @echo "Building slurm-image"
+       $(DOCKER_BUILD) -t arvados/slurm slurm
+       date >slurm-image
+
+base-image: debian-arvados-image $(BASE_DEPS)
+       @echo "Building base-image"
        $(DOCKER_BUILD) -t arvados/base base
        date >base-image
 
-debian-image:
+debian-arvados-image:
+       @echo "Building debian-arvados-image"
        ./mkimage-debootstrap.sh arvados/debian wheezy ftp://ftp.us.debian.org/debian/
-       date >debian-image
+       date >debian-arvados-image
+
+skydns-image:
+       @echo "Downloading skydns-image"
+       $(DOCKER) pull crosbymichael/skydns
+       date >skydns-image
+
+skydock-image:
+       @echo "Downloading skydock-image"
+       $(DOCKER) pull crosbymichael/skydock
+       date >skydock-image
index 5e3b1ed68dea63d20143d8419a738660fef26b3b..df76d52cd4bd5c95a35f5aad398dc5ad17fa55bd 100755 (executable)
@@ -64,9 +64,22 @@ def main options
       end
     end
 
+    print "Arvados needs to know the shell login name for the administrative user.\n"
+    print "This will also be used as the name for your git repository.\n"
+    print "\n"
+    user_name = ""
+    until is_valid_user_name? user_name
+      print "Enter a shell login name here: "
+      user_name = gets.strip
+      if not is_valid_user_name? user_name
+        print "That doesn't look like a valid shell login name. Please try again.\n"
+      end
+    end
+
     File.open 'config.yml', 'w' do |config_out|
       config = YAML.load_file 'config.yml.example'
       config['API_AUTO_ADMIN_USER'] = admin_email_address
+      config['ARVADOS_USER_NAME'] = user_name
       config['API_HOSTNAME'] = generate_api_hostname
       config['PUBLIC_KEY_PATH'] = find_or_create_ssh_key(config['API_HOSTNAME'])
       config.each_key do |var|
@@ -83,8 +96,9 @@ def main options
       docker_ok? docker_path and
       debootstrap_ok? and
       File.exists? 'config.yml'
-    warn "Building Arvados."
-    system({"DOCKER" => docker_path}, '/usr/bin/make', '-f', options[:makefile], *ARGV)
+    exit 0
+  else
+    exit 6
   end
 end
 
@@ -114,6 +128,15 @@ def is_valid_email? str
   str.match /^\S+@\S+\.\S+$/
 end
 
+# is_valid_user_name?
+#   Returns true if its arg looks like a valid unix username.
+#   This is a very very loose sanity check.
+#
+def is_valid_user_name? str
+  # borrowed from Debian's adduser (version 3.110)
+  str.match /^[_.A-Za-z0-9][-\@_.A-Za-z0-9]*\$?$/
+end
+
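The username check above borrows its pattern from Debian's adduser. Since most of this tooling is shell, here is the same sanity check expressed as a bash regex (pattern copied from the Ruby method above; usernames are examples):

```shell
#!/bin/bash
# Same loose username sanity check as build.rb's is_valid_user_name?,
# with the regex borrowed from Debian adduser (version 3.110).
re='^[_.A-Za-z0-9][-@_.A-Za-z0-9]*\$?$'
is_valid_user_name() { [[ "$1" =~ $re ]]; }

is_valid_user_name "crunch" && echo "crunch: ok"
is_valid_user_name "-bad"   || echo "-bad: rejected"   # leading '-' disallowed
is_valid_user_name 'svc$'   && echo 'svc$: ok'         # trailing $ allowed
```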
 # generate_api_hostname
 #   Generates a 5-character randomly chosen API hostname.
 #
@@ -221,6 +244,5 @@ if __FILE__ == $PROGRAM_NAME
       options[:makefile] = mk
     end
   end
-
   main options
 end
diff --git a/docker/compute/Dockerfile b/docker/compute/Dockerfile
new file mode 100644 (file)
index 0000000..929c136
--- /dev/null
@@ -0,0 +1,28 @@
+# Arvados compute node Docker container.
+
+FROM arvados/slurm
+MAINTAINER Ward Vandewege <ward@curoverse.com>
+
+RUN apt-get update && apt-get -qqy install supervisor python-pip python-pyvcf python-gflags python-google-api-python-client python-virtualenv libattr1-dev libfuse-dev python-dev python-llfuse fuse crunchstat python-arvados-fuse cron
+
+ADD fuse.conf /etc/fuse.conf
+
+RUN /usr/local/rvm/bin/rvm-exec default gem install arvados-cli arvados
+
+# Install Docker from the Docker Inc. repository
+RUN apt-get update -qq && apt-get install -qqy iptables ca-certificates lxc apt-transport-https
+RUN echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list
+RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
+RUN apt-get update -qq && apt-get install -qqy lxc-docker
+
+RUN addgroup --gid 4005 crunch && mkdir /home/crunch && useradd --uid 4005 --gid 4005 crunch && usermod crunch -G fuse,docker && chown crunch:crunch /home/crunch
+
+# Supervisor.
+ADD supervisor.conf /etc/supervisor/conf.d/arvados.conf
+ADD ssh.sh /usr/local/bin/ssh.sh
+ADD generated/setup.sh /usr/local/bin/setup.sh
+ADD wrapdocker /usr/local/bin/wrapdocker.sh
+
+VOLUME /var/lib/docker
+# Start the supervisor.
+CMD ["/usr/bin/supervisord", "-n"]
diff --git a/docker/compute/fuse.conf b/docker/compute/fuse.conf
new file mode 100644 (file)
index 0000000..4ed21ba
--- /dev/null
@@ -0,0 +1,10 @@
+# Set the maximum number of FUSE mounts allowed to non-root users.
+# The default is 1000.
+#
+#mount_max = 1000
+
+# Allow non-root users to specify the 'allow_other' or 'allow_root'
+# mount options.
+#
+user_allow_other
+
diff --git a/docker/compute/setup.sh.in b/docker/compute/setup.sh.in
new file mode 100755 (executable)
index 0000000..e107d80
--- /dev/null
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+. /etc/profile.d/rvm.sh
+
+export ARVADOS_API_HOST=api
+export ARVADOS_API_HOST_INSECURE=yes
+export ARVADOS_API_TOKEN=@@API_SUPERUSER_SECRET@@
+
+arv node create --node {} > /tmp/node.json
+
+UUID=`grep \"uuid\" /tmp/node.json |cut -f4 -d\"`
+PING_SECRET=`grep \"ping_secret\" /tmp/node.json |cut -f4 -d\"`
+
+echo "*/5 * * * * root /usr/bin/curl -k -d ping_secret=$PING_SECRET https://api/arvados/v1/nodes/$UUID/ping" > /etc/cron.d/node_ping
+
+# Send a ping now
+/usr/bin/curl -k -d ping_secret=$PING_SECRET https://api/arvados/v1/nodes/$UUID/ping?ping_secret=$PING_SECRET
+
+# Just make sure /dev/fuse permissions are correct (the device appears after fuse is loaded)
+chmod 1660 /dev/fuse && chgrp fuse /dev/fuse
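The setup script above extracts fields from the `arv node create` response with grep/cut. A standalone sketch of that extraction on sample JSON (the uuid and ping_secret values below are made up; `grep -o` is used here so the sketch works on a single-line document, whereas the script greps a pretty-printed file):

```shell
#!/bin/bash
# Pull "uuid" and "ping_secret" out of a node record the same way
# setup.sh does: split on double quotes and take the fourth field.
node_json='{"uuid":"zzzzz-7ekkf-0123456789abcde","ping_secret":"s3cret"}'
UUID=$(echo "$node_json" | grep -o '"uuid":"[^"]*"' | cut -f4 -d\")
PING_SECRET=$(echo "$node_json" | grep -o '"ping_secret":"[^"]*"' | cut -f4 -d\")
echo "$UUID"         # zzzzz-7ekkf-0123456789abcde
echo "$PING_SECRET"  # s3cret
```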
diff --git a/docker/compute/ssh.sh b/docker/compute/ssh.sh
new file mode 100755 (executable)
index 0000000..664414b
--- /dev/null
@@ -0,0 +1,10 @@
+#!/bin/bash
+
+# Start the ssh daemon if requested via the ENABLE_SSH env variable
+if [[ ! "$ENABLE_SSH" =~ (0|false|no|f|^$) ]]; then
+  echo "Starting ssh daemon"
+  /etc/init.d/ssh start
+fi
+
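The `ENABLE_SSH` gate above treats anything other than 0/false/no/f/empty as truthy. A standalone check of that regex; note the alternation is unanchored, so any value merely containing one of those tokens (e.g. "off", which contains "f") also reads as disabled:

```shell
#!/bin/bash
# The ENABLE_SSH truthiness test from ssh.sh, as a reusable function.
# The alternation is unanchored: values containing "0", "f", "no", or
# "false" count as disabled, as does the empty string (^$).
ssh_enabled() { [[ ! "$1" =~ (0|false|no|f|^$) ]]; }

ssh_enabled "true"  && echo "true: enabled"
ssh_enabled "false" || echo "false: disabled"
ssh_enabled ""      || echo "empty: disabled"
```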
diff --git a/docker/compute/supervisor.conf b/docker/compute/supervisor.conf
new file mode 100644 (file)
index 0000000..f2cce3f
--- /dev/null
@@ -0,0 +1,29 @@
+[program:ssh]
+user=root
+command=/usr/local/bin/ssh.sh
+startsecs=0
+
+[program:munge]
+user=root
+command=/etc/init.d/munge start
+startsecs=0
+
+[program:slurm]
+user=root
+command=/etc/init.d/slurm-llnl start
+startsecs=0
+
+[program:cron]
+user=root
+command=/etc/init.d/cron start
+startsecs=0
+
+[program:setup]
+user=root
+command=/usr/local/bin/setup.sh
+startsecs=0
+
+[program:docker]
+user=root
+command=/usr/local/bin/wrapdocker.sh
+
diff --git a/docker/compute/wrapdocker b/docker/compute/wrapdocker
new file mode 100755 (executable)
index 0000000..cee1302
--- /dev/null
@@ -0,0 +1,90 @@
+#!/bin/bash
+
+# Borrowed from https://github.com/jpetazzo/dind under Apache2
+# and slightly modified.
+
+# First, make sure that cgroups are mounted correctly.
+CGROUP=/sys/fs/cgroup
+: ${LOG:=stdio}
+
+[ -d $CGROUP ] ||
+       mkdir $CGROUP
+
+mountpoint -q $CGROUP ||
+       mount -n -t tmpfs -o uid=0,gid=0,mode=0755 cgroup $CGROUP || {
+               echo "Could not make a tmpfs mount. Did you use -privileged?"
+               exit 1
+       }
+
+if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security
+then
+    mount -t securityfs none /sys/kernel/security || {
+        echo "Could not mount /sys/kernel/security."
+        echo "AppArmor detection and -privileged mode might break."
+    }
+fi
+
+# Mount the cgroup hierarchies exactly as they are in the parent system.
+for SUBSYS in $(cut -d: -f2 /proc/1/cgroup)
+do
+        [ -d $CGROUP/$SUBSYS ] || mkdir $CGROUP/$SUBSYS
+        mountpoint -q $CGROUP/$SUBSYS ||
+                mount -n -t cgroup -o $SUBSYS cgroup $CGROUP/$SUBSYS
+
+        # The two following sections address a bug which manifests itself
+        # by a cryptic "lxc-start: no ns_cgroup option specified" when
+        # trying to start containers within a container.
+        # The bug seems to appear when the cgroup hierarchies are not
+        # mounted on the exact same directories in the host, and in the
+        # container.
+
+        # Named, control-less cgroups are mounted with "-o name=foo"
+        # (and appear as such under /proc/<pid>/cgroup) but are usually
+        # mounted on a directory named "foo" (without the "name=" prefix).
+        # Systemd and OpenRC (and possibly others) both create such a
+        # cgroup. To avoid the aforementioned bug, we symlink "foo" to
+        # "name=foo". This shouldn't have any adverse effect.
+        echo $SUBSYS | grep -q ^name= && {
+                NAME=$(echo $SUBSYS | sed s/^name=//)
+                ln -s $SUBSYS $CGROUP/$NAME
+        }
+
+        # Likewise, on at least one system, it has been reported that
+        # systemd would mount the CPU and CPU accounting controllers
+        # (respectively "cpu" and "cpuacct") with "-o cpuacct,cpu"
+        # but on a directory called "cpu,cpuacct" (note the inversion
+        # in the order of the groups). This tries to work around it.
+        [ $SUBSYS = cpuacct,cpu ] && ln -s $SUBSYS $CGROUP/cpu,cpuacct
+done
+
+# Note: as I write those lines, the LXC userland tools cannot setup
+# a "sub-container" properly if the "devices" cgroup is not in its
+# own hierarchy. Let's detect this and issue a warning.
+grep -q :devices: /proc/1/cgroup ||
+       echo "WARNING: the 'devices' cgroup should be in its own hierarchy."
+grep -qw devices /proc/1/cgroup ||
+       echo "WARNING: it looks like the 'devices' cgroup is not mounted."
+
+# Now, close extraneous file descriptors.
+pushd /proc/self/fd >/dev/null
+for FD in *
+do
+       case "$FD" in
+       # Keep stdin/stdout/stderr
+       [012])
+               ;;
+       # Nuke everything else
+       *)
+               eval exec "$FD>&-"
+               ;;
+       esac
+done
+popd >/dev/null
+
+
+# If a pidfile is still around (for example after a container restart),
+# delete it so that docker can start.
+rm -f /var/run/docker.pid
+
+exec docker -d
+
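The descriptor-closing loop above uses `eval exec "$FD>&-"` because the fd number is only known at runtime (a plain `exec $FD>&-` would not expand the variable on the left side of the redirection). A minimal standalone demonstration of closing one spare descriptor the same way:

```shell
#!/bin/bash
# Open a spare file descriptor, then close it the way wrapdocker's
# cleanup loop does (eval is needed because the fd number is a variable).
exec 7>/dev/null
FD=7
eval exec "$FD>&-"
# Writing to the closed descriptor now fails:
{ echo x >&7; } 2>/dev/null && echo "fd $FD still open" || echo "fd $FD closed"
```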
index 515fcbefd5121bbddb567c54aaa768b1008cc538..30fc1d46ff54040296409722c9072b82ed0c259d 100644 (file)
@@ -7,6 +7,10 @@
 # true when starting the container.
 PUBLIC_KEY_PATH:
 
+# Username for your Arvados user. This will be used as your shell login name
+# as well as the name for your git repository.
+ARVADOS_USER_NAME:
+
 # ARVADOS_DOMAIN: the Internet domain of this installation.
 # ARVADOS_DNS_SERVER: the authoritative nameserver for ARVADOS_DOMAIN.
 ARVADOS_DOMAIN:         # e.g. arvados.internal
diff --git a/docker/shell/Dockerfile b/docker/shell/Dockerfile
new file mode 100644 (file)
index 0000000..1e1c883
--- /dev/null
@@ -0,0 +1,19 @@
+# Arvados shell node Docker container.
+
+FROM arvados/base
+MAINTAINER Ward Vandewege <ward@curoverse.com>
+
+RUN apt-get update && apt-get -qqy install supervisor python-pip python-pyvcf python-gflags python-google-api-python-client python-virtualenv libattr1-dev libfuse-dev python-dev python-llfuse fuse crunchstat python-arvados-fuse cron vim
+
+ADD fuse.conf /etc/fuse.conf
+
+ADD generated/superuser_token /tmp/superuser_token
+
+RUN /usr/local/rvm/bin/rvm-exec default gem install arvados-cli arvados
+
+# Supervisor.
+ADD supervisor.conf /etc/supervisor/conf.d/arvados.conf
+ADD generated/setup.sh /usr/local/bin/setup.sh
+
+# Start the supervisor.
+CMD ["/usr/bin/supervisord", "-n"]
diff --git a/docker/shell/fuse.conf b/docker/shell/fuse.conf
new file mode 100644 (file)
index 0000000..4ed21ba
--- /dev/null
@@ -0,0 +1,10 @@
+# Set the maximum number of FUSE mounts allowed to non-root users.
+# The default is 1000.
+#
+#mount_max = 1000
+
+# Allow non-root users to specify the 'allow_other' or 'allow_root'
+# mount options.
+#
+user_allow_other
+
diff --git a/docker/shell/setup.sh.in b/docker/shell/setup.sh.in
new file mode 100755 (executable)
index 0000000..2815201
--- /dev/null
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+USER_NAME="@@ARVADOS_USER_NAME@@"
+
+useradd $USER_NAME -s /bin/bash
+mkdir /home/$USER_NAME/.ssh -p
+
+cp ~root/.ssh/authorized_keys /home/$USER_NAME/.ssh/authorized_keys
+
+# Install our token
+mkdir -p /home/$USER_NAME/.config/arvados;
+echo "ARVADOS_API_HOST=api" >> /home/$USER_NAME/.config/arvados/settings.conf
+echo "ARVADOS_API_HOST_INSECURE=yes" >> /home/$USER_NAME/.config/arvados/settings.conf
+echo "ARVADOS_API_TOKEN=$(cat /tmp/superuser_token)" >> /home/$USER_NAME/.config/arvados/settings.conf
+chmod 600 /home/$USER_NAME/.config/arvados/settings.conf
+
+chown $USER_NAME:$USER_NAME /home/$USER_NAME -R
+
+rm -f /tmp/superuser_token
+
+
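setup.sh.in builds settings.conf with three separate `echo >>` appends. An equivalent self-contained sketch using a single heredoc; the directory and token below are placeholders written to a temp dir rather than a real home:

```shell
#!/bin/bash
# Write the same three-line settings.conf in one heredoc instead of
# three appends. HOME_DIR and the token are illustrative placeholders.
HOME_DIR=$(mktemp -d)
token="xyzzy-example-token"
mkdir -p "$HOME_DIR/.config/arvados"
cat > "$HOME_DIR/.config/arvados/settings.conf" <<EOF
ARVADOS_API_HOST=api
ARVADOS_API_HOST_INSECURE=yes
ARVADOS_API_TOKEN=$token
EOF
chmod 600 "$HOME_DIR/.config/arvados/settings.conf"
cat "$HOME_DIR/.config/arvados/settings.conf"
```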
diff --git a/docker/shell/superuser_token.in b/docker/shell/superuser_token.in
new file mode 100644 (file)
index 0000000..49bb34e
--- /dev/null
@@ -0,0 +1 @@
+@@API_SUPERUSER_SECRET@@
diff --git a/docker/shell/supervisor.conf b/docker/shell/supervisor.conf
new file mode 100644 (file)
index 0000000..97ad540
--- /dev/null
@@ -0,0 +1,15 @@
+[program:ssh]
+user=root
+command=/etc/init.d/ssh start
+startsecs=0
+
+[program:cron]
+user=root
+command=/etc/init.d/cron start
+startsecs=0
+
+[program:setup]
+user=root
+command=/usr/local/bin/setup.sh
+startsecs=0
+
diff --git a/docker/slurm/Dockerfile b/docker/slurm/Dockerfile
new file mode 100644 (file)
index 0000000..7a60bf6
--- /dev/null
@@ -0,0 +1,11 @@
+# Slurm node Docker container.
+
+FROM arvados/base
+MAINTAINER Ward Vandewege <ward@curoverse.com>
+
+RUN apt-get update && apt-get -q -y install slurm-llnl munge
+
+ADD munge.key /etc/munge/
+RUN chown munge:munge /etc/munge/munge.key && chmod 600 /etc/munge/munge.key
+ADD generated/slurm.conf /etc/slurm-llnl/
+
diff --git a/docker/slurm/munge.key b/docker/slurm/munge.key
new file mode 100644 (file)
index 0000000..34036a0
Binary files /dev/null and b/docker/slurm/munge.key differ
diff --git a/docker/slurm/slurm.conf.in b/docker/slurm/slurm.conf.in
new file mode 100644 (file)
index 0000000..7312a0e
--- /dev/null
@@ -0,0 +1,60 @@
+
+ControlMachine=api
+#SlurmUser=slurmd
+SlurmctldPort=6817
+SlurmdPort=6818
+AuthType=auth/munge
+#JobCredentialPrivateKey=/etc/slurm-llnl/slurm-key.pem
+#JobCredentialPublicCertificate=/etc/slurm-llnl/slurm-cert.pem
+StateSaveLocation=/tmp
+SlurmdSpoolDir=/tmp/slurmd
+SwitchType=switch/none
+MpiDefault=none
+SlurmctldPidFile=/var/run/slurmctld.pid
+SlurmdPidFile=/var/run/slurmd.pid
+ProctrackType=proctrack/pgid
+CacheGroups=0
+ReturnToService=2
+TaskPlugin=task/affinity
+#
+# TIMERS
+SlurmctldTimeout=300
+SlurmdTimeout=300
+InactiveLimit=0
+MinJobAge=300
+KillWait=30
+Waittime=0
+#
+# SCHEDULING
+SchedulerType=sched/backfill
+#SchedulerType=sched/builtin
+SchedulerPort=7321
+#SchedulerRootFilter=
+#SelectType=select/linear
+SelectType=select/cons_res
+SelectTypeParameters=CR_CPU_Memory
+FastSchedule=1
+#
+# LOGGING
+SlurmctldDebug=3
+#SlurmctldLogFile=
+SlurmdDebug=3
+#SlurmdLogFile=
+JobCompType=jobcomp/none
+#JobCompLoc=
+JobAcctGatherType=jobacct_gather/none
+#JobAcctLogfile=
+#JobAcctFrequency=
+#
+# COMPUTE NODES
+NodeName=DEFAULT
+# CPUs=8 State=UNKNOWN RealMemory=6967 Weight=6967
+PartitionName=DEFAULT MaxTime=INFINITE State=UP
+PartitionName=compute Default=YES Shared=yes
+#PartitionName=sysadmin Hidden=YES Shared=yes
+
+NodeName=compute[0-1]
+#NodeName=compute0 RealMemory=6967 Weight=6967
+
+PartitionName=compute Nodes=compute[0-1]
+PartitionName=crypto Nodes=compute[0-1]
diff --git a/docker/slurm/supervisor.conf b/docker/slurm/supervisor.conf
new file mode 100644 (file)
index 0000000..6563b54
--- /dev/null
@@ -0,0 +1,14 @@
+[program:ssh]
+user=root
+command=/usr/local/bin/ssh.sh
+startsecs=0
+
+[program:munge]
+user=root
+command=/etc/init.d/munge start
+
+[program:slurm]
+user=root
+command=/etc/init.d/slurm-llnl start
+
+
index d9dfeff23463f24fa2cf197b7cb6f3c3a66d9338..97ed0135d6c9901eee56fefe3a149bbbd0e2e41d 100644 (file)
@@ -3,6 +3,9 @@
 FROM arvados/passenger
 MAINTAINER Ward Vandewege <ward@curoverse.com>
 
+# We need graphviz for the provenance graphs
+RUN apt-get update && apt-get -qqy install graphviz
+
 # Update Arvados source
 RUN /bin/mkdir -p /usr/src/arvados/apps
 ADD generated/workbench.tar.gz /usr/src/arvados/apps/