+++ /dev/null
-<div class="alert alert-block alert-info">
- <button type="button" class="close" data-dismiss="alert">×</button>
- <p>The Docker installation is not feature complete. We do not have a Docker container yet for crunch-dispatch and the arvados compute nodes. This means that running pipelines from a Docker-based Arvados install is currently not supported without additional manual configuration. Without that manual configuration, it is possible to use arv-crunch-job to run a 'local' job against your Docker-based Arvados installation. To do this, please refer to the "Debugging a Crunch script":{{site.baseurl}}/user/topics/tutorial-job-debug.html page.</p>
-</div>
h2. Docker
-{% include 'alert_docker' %}
-
"Installing with Docker":install-docker.html
h2. Manual installation
title: Installing with Docker
...
-{% include 'alert_docker' %}
+h2. Purpose
-h2. Prerequisites:
+This installation method is appropriate for local testing, evaluation, and
+development. It is not recommended for production use.
+
+h2. Prerequisites
# A GNU/Linux (virtual) machine
-# A working Docker installation
+# A working Docker installation (see "Installing Docker":https://docs.docker.com/installation/)
h2. Download the source tree
<pre>
PUBLIC_KEY_PATH
+ARVADOS_USER_NAME
API_HOSTNAME
API_AUTO_ADMIN_USER
</pre>
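For illustration only — the variable names are the ones listed above, but the values and the @config.yml@ file name are assumptions about a typical local setup:

```
# Hypothetical example values -- adjust for your own machine and user
PUBLIC_KEY_PATH: /home/you/.ssh/id_rsa.pub
ARVADOS_USER_NAME: you
API_HOSTNAME: dev.arvados
API_AUTO_ADMIN_USER: you@example.com
```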
<pre><code>
~$ <span class="userinput">./build.sh</span>
...
- ---> 05f0ae429530
-Step 9 : ADD apache2_foreground.sh /etc/apache2/foreground.sh
- ---> 7292b241305a
-Step 10 : CMD ["/etc/apache2/foreground.sh"]
- ---> Running in 82d59061ead8
- ---> 72cee36a9281
-Successfully built 72cee36a9281
-Removing intermediate container 2bc8c98c83c7
-Removing intermediate container 9457483a59cf
-Removing intermediate container 7cc5723df67c
-Removing intermediate container 5cb2cede73de
-Removing intermediate container 0acc147a7f6d
-Removing intermediate container 82d59061ead8
-Removing intermediate container 9c022a467396
-Removing intermediate container 16044441463f
-Removing intermediate container cffbbddd82d1
-date >sso-image
+Step 7 : ADD generated/setup.sh /usr/local/bin/setup.sh
+ ---> d7c0e7fdf7ab
+Removing intermediate container f3d81180795d
+Step 8 : CMD ["/usr/bin/supervisord", "-n"]
+ ---> Running in 84c64cb9f0d5
+ ---> d6cbb5002604
+Removing intermediate container 84c64cb9f0d5
+Successfully built d6cbb5002604
+date >shell-image
</code></pre></notextile>
If all goes well, you should now have a number of Docker images built:
<notextile>
<pre><code>~$ <span class="userinput">docker.io images</span>
-REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
-arvados/sso latest 72cee36a9281 11 seconds ago 1.727 GB
-arvados/keep latest c3842f856bcb 56 seconds ago 210.6 MB
-arvados/workbench latest b91aa980597c About a minute ago 2.07 GB
-arvados/doc latest 050e9e6b8213 About a minute ago 1.442 GB
-arvados/api latest 79843d0a8997 About a minute ago 2.112 GB
-arvados/passenger latest 2342a550da7f 2 minutes ago 1.658 GB
-arvados/base latest 68caefd8ea5b 5 minutes ago 1.383 GB
-arvados/debian 7.5 6e32119ffcd0 8 minutes ago 116.8 MB
-arvados/debian latest 6e32119ffcd0 8 minutes ago 116.8 MB
-arvados/debian wheezy 6e32119ffcd0 8 minutes ago 116.8 MB
+REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
+arvados/shell latest d6cbb5002604 10 minutes ago 1.613 GB
+arvados/sso latest 377f1fa0108e 11 minutes ago 1.807 GB
+arvados/keep latest ade0e0d2dd00 12 minutes ago 210.8 MB
+arvados/workbench latest b0e4fb6da385 12 minutes ago 2.218 GB
+arvados/doc latest 4b64daec9454 12 minutes ago 1.524 GB
+arvados/compute latest 7f1f5f7faf54 13 minutes ago 1.862 GB
+arvados/slurm latest f5bfd1008e6b 17 minutes ago 1.573 GB
+arvados/api latest 6b93c5f5fc42 17 minutes ago 2.274 GB
+arvados/passenger latest add2d11fdf24 18 minutes ago 1.738 GB
+arvados/base latest 81eaadd0c6f5 22 minutes ago 1.463 GB
+arvados/debian 7.6 f339ce275c01 6 days ago 116.8 MB
+arvados/debian latest f339ce275c01 6 days ago 116.8 MB
+arvados/debian wheezy f339ce275c01 6 days ago 116.8 MB
+crosbymichael/skydock latest e985023521f6 3 months ago 510.7 MB
+crosbymichael/skydns next 79c99a4608ed 3 months ago 525 MB
+crosbymichael/skydns latest 1923ce648d4c 5 months ago 137.5 MB
</code></pre></notextile>
h2. Updating the Arvados Docker containers
-If there has been an update to the Arvados Docker building code, it is safest to rebuild the Arvados Docker images from scratch. All build information can be cleared with the '--clean' option to build.sh:
+If there has been an update to the Arvados Docker building code, it is safest to rebuild the Arvados Docker images from scratch. All build information can be cleared with the 'clean' option to build.sh:
+
+<notextile>
+<pre><code>~$ <span class="userinput">./build.sh clean</span></code></pre>
+</notextile>
+
+You can also use 'realclean', which does what 'clean' does and in addition removes all Arvados Docker containers and images from your system, with the exception of the arvados/debian image.
<notextile>
-<pre><code>~$ <span class="userinput">./build.sh --clean</span></code></pre>
+<pre><code>~$ <span class="userinput">./build.sh realclean</span></code></pre>
</notextile>
-You can also use '--realclean', which does what '--clean' does and in addition removes all Arvados Docker containers and images from your system.
+Finally, the 'deepclean' option does what 'realclean' does, and also removes the arvados/debian, crosbymichael/skydns and crosbymichael/skydock images.
<notextile>
-<pre><code>~$ <span class="userinput">./build.sh --realclean</span></code></pre>
+<pre><code>~$ <span class="userinput">./build.sh deepclean</span></code></pre>
</notextile>
h2. Running the Arvados Docker containers
usage: ./arvdock (start|stop|restart|test) [options]
./arvdock start/stop/restart options:
- -d [port], --doc[=port] Documentation server (default port 9898)
- -w [port], --workbench[=port] Workbench server (default port 9899)
- -s [port], --sso[=port] SSO server (default port 9901)
- -a [port], --api[=port] API server (default port 9900)
- -k, --keep Keep servers
- --ssh Enable SSH access to server containers
- -h, --help Display this help and exit
+ -d[port], --doc[=port] Documentation server (default port 9898)
+ -w[port], --workbench[=port] Workbench server (default port 9899)
+ -s[port], --sso[=port] SSO server (default port 9901)
+ -a[port], --api[=port] API server (default port 9900)
+ -c, --compute Compute nodes (starts 2)
+ -v, --vm Shell server
+ -n, --nameserver Nameserver
+ -k, --keep Keep servers
+ --ssh Enable SSH access to server containers
+ -h, --help Display this help and exit
If no options are given, the action is applied to all servers.
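The option list above is parsed with GNU getopt inside @arvdock@. A minimal, self-contained sketch of the same parsing pattern, using a subset of the options from the usage text (the function body is illustrative, not the actual @arvdock@ code):

```shell
#!/bin/bash
# Sketch of arvdock-style option parsing with GNU getopt (util-linux).
# -d/--doc takes an optional port argument; -c/--compute is a flag.
parse_args() {
    local TEMP
    TEMP=$(getopt -o d::c --long doc::,compute -n arvdock -- "$@") || return 1
    eval set -- "$TEMP"
    local start_doc=false start_compute=false
    while true; do
        case "$1" in
            -d | --doc)
                # getopt emits an empty argument when the optional port is omitted
                start_doc=${2:-9898}; shift 2 ;;
            -c | --compute)
                start_compute=2; shift ;;
            --) shift; break ;;
        esac
    done
    echo "doc=$start_doc compute=$start_compute"
}

parse_args -c --doc=8080
```

Note that optional arguments must be attached to the option (@--doc=8080@, not @--doc 8080@); this is a GNU getopt constraint, which is why the usage text writes the port forms as @-d[port]@ and @--doc[=port]@.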
bc. ARVADOS_API_HOST and ARVADOS_API_TOKEN need to be defined as environment variables
-Then follow the instructions for "getting an API token,":{{site.baseurl}}/user/reference/api-tokens.html and try @arv user current@ again.
+follow the instructions for "getting an API token,":{{site.baseurl}}/user/reference/api-tokens.html and try @arv user current@ again.
MAINTAINER Tim Pierce <twp@curoverse.com>
# Install postgres and apache.
-# Clone a git repository of Arvados source -- not used to build, but
-# will be used by the Commit model and anything else that needs to
-# check a git repo for crunch scripts.
-#
RUN apt-get update && \
- apt-get -q -y install procps postgresql postgresql-server-dev-9.1 apache2 \
- supervisor && \
- git clone --bare git://github.com/curoverse/arvados.git /var/cache/git/arvados.git
+ apt-get -q -y install procps postgresql postgresql-server-dev-9.1 apache2 slurm-llnl munge \
+ supervisor sudo libwww-perl libio-socket-ssl-perl libcrypt-ssleay-perl \
+ libjson-perl cron
+
+ADD munge.key /etc/munge/
+RUN chown munge:munge /etc/munge/munge.key && chmod 600 /etc/munge/munge.key
+ADD generated/slurm.conf /etc/slurm-llnl/
+
+RUN /usr/local/rvm/bin/rvm-exec default gem install arvados-cli arvados
+# /for crunch-dispatch
RUN /bin/mkdir -p /usr/src/arvados/services
ADD generated/api.tar.gz /usr/src/arvados/services/
a2enmod ssl && \
/bin/mkdir /var/run/apache2
+# Install a token for root
+RUN mkdir -p /root/.config/arvados; echo "ARVADOS_API_HOST=api" >> /root/.config/arvados/settings.conf && echo "ARVADOS_API_HOST_INSECURE=yes" >> /root/.config/arvados/settings.conf && echo "ARVADOS_API_TOKEN=$(cat /tmp/superuser_token)" >> /root/.config/arvados/settings.conf && chmod 600 /root/.config/arvados/settings.conf
+
+# Set up directory for job commit repo
+RUN mkdir -p /var/lib/arvados
+# Add crunch user
+RUN addgroup --gid 4005 crunch && mkdir /home/crunch && useradd --uid 4005 --gid 4005 crunch && chown crunch:crunch /home/crunch
+
+# Create keep and compute node objects
+ADD keep_server_0.json /root/
+ADD keep_server_1.json /root/
+
+# Set up update-gitolite.rb
+RUN mkdir /usr/local/arvados/config -p
+ADD generated/arvados-clients.yml /usr/local/arvados/config/
+ADD update-gitolite.rb /usr/local/arvados/
+
# Supervisor.
ADD supervisor.conf /etc/supervisor/conf.d/arvados.conf
ADD ssh.sh /usr/local/bin/ssh.sh
+ADD generated/setup.sh /usr/local/bin/setup.sh
+ADD generated/setup-gitolite.sh /usr/local/bin/setup-gitolite.sh
+ADD crunch-dispatch-run.sh /usr/local/bin/crunch-dispatch-run.sh
ADD apache2_foreground.sh /etc/apache2/foreground.sh
# Start the supervisor.
# Net::HTTP.get(URI("http://169.254.169.254/latest/meta-data/#{iface}-ipv4")).match(/^[\d\.]+$/)[0]
# end << '172.16.0.23'
# %>
+ permit_create_collection_with_unsigned_manifest: true
+ git_repositories_dir: /home/git/repositories
+ crunch_job_wrapper: :slurm_immediate
+ action_mailer.raise_delivery_errors: false
+ action_mailer.perform_deliveries: false
test:
uuid_prefix: zzzzz
--- /dev/null
+# apt.arvados.org
+deb http://apt.arvados.org/ wheezy main
--- /dev/null
+production:
+ gitolite_url: 'git@api:gitolite-admin.git'
+ gitolite_tmp: 'gitolite-tmp'
+ arvados_api_host: 'api'
+ arvados_api_token: '@@API_SUPERUSER_SECRET@@'
--- /dev/null
+#!/bin/bash
+set -e
+export PATH="$PATH":/usr/src/arvados/services/crunch
+export PERLLIB=/usr/src/arvados/sdk/perl/lib
+export ARVADOS_API_HOST=api
+export ARVADOS_API_HOST_INSECURE=yes
+export CRUNCH_DISPATCH_LOCKFILE=/var/lock/crunch-dispatch
+
+if [[ ! -e $CRUNCH_DISPATCH_LOCKFILE ]]; then
+ touch $CRUNCH_DISPATCH_LOCKFILE
+fi
+
+export CRUNCH_JOB_BIN=/usr/src/arvados/services/crunch/crunch-job
+export HOME=`pwd`
+fuser -TERM -k $CRUNCH_DISPATCH_LOCKFILE || true
+
+# Give the compute nodes some time to start up
+sleep 5
+
+cd /usr/src/arvados/services/api
+export RAILS_ENV=production
+/usr/local/rvm/bin/rvm-exec default bundle install
+exec /usr/local/rvm/bin/rvm-exec default bundle exec ./script/crunch-dispatch.rb 2>&1
+
--- /dev/null
+{
+ "service_host": "keep_server_0.keep.dev.arvados",
+ "service_port": 25107,
+ "service_ssl_flag": "false",
+ "service_type": "disk"
+}
--- /dev/null
+{
+ "service_host": "keep_server_1.keep.dev.arvados",
+ "service_port": 25107,
+ "service_ssl_flag": "false",
+ "service_type": "disk"
+}
+
--- /dev/null
+#!/bin/bash
+
+ssh-keygen -q -N '' -t rsa -f /root/.ssh/id_rsa
+
+useradd git
+mkdir /home/git
+
+# Set up gitolite repository
+cp ~root/.ssh/id_rsa.pub ~git/root-authorized_keys.pub
+chown git:git /home/git -R
+su - git -c "mkdir -p ~/bin"
+
+su - git -c "git clone git://github.com/sitaramc/gitolite"
+su - git -c "gitolite/install -ln ~/bin"
+su - git -c "PATH=/home/git/bin:$PATH gitolite setup -pk ~git/root-authorized_keys.pub"
+
+# Make sure the repositories are created in such a way that they are readable
+# by the api server
+sed -i 's/0077/0022/g' /home/git/.gitolite.rc
+
+# And make sure that the existing repos are equally readable, or the API server commit model will freak out...
+chmod 755 /home/git/repositories
+chmod +rx /home/git/repositories/*git -R
+
+# Now set up the gitolite repo(s) we use
+mkdir -p /usr/local/arvados/gitolite-tmp/
+# Make ssh store the host key
+ssh -o "StrictHostKeyChecking no" git@api info
+# Now check out the tree
+git clone git@api:gitolite-admin.git /usr/local/arvados/gitolite-tmp/gitolite-admin/
+cd /usr/local/arvados/gitolite-tmp/gitolite-admin
+mkdir keydir/arvados
+mkdir conf/admin
+mkdir conf/auto
+echo "
+
+@arvados_git_user = arvados_git_user
+
+repo @all
+ RW+ = @arvados_git_user
+
+" > conf/admin/arvados.conf
+echo '
+include "auto/*.conf"
+include "admin/*.conf"
+' >> conf/gitolite.conf
+
+#su - git -c "ssh-keygen -t rsa"
+cp /root/.ssh/id_rsa.pub keydir/arvados/arvados_git_user.pub
+# Replace the 'root' key with the user key, just in case
+cp /root/.ssh/authorized_keys keydir/root-authorized_keys.pub
+# But also make sure we have the root key installed so it can access all keys
+git add keydir/root-authorized_keys.pub
+git add keydir/arvados/arvados_git_user.pub
+git add conf/admin/arvados.conf
+git add keydir/arvados/
+git add conf/gitolite.conf
+git commit -a -m 'git server setup'
+git push
+
+# Prepopulate the arvados.git repo with our source. Silly, but until we can check out from remote trees,
+# we need this to make the tutorials work.
+su - git -c "git clone --bare git://github.com/curoverse/arvados.git /home/git/repositories/arvados.git"
+
+echo "ARVADOS_API_HOST_INSECURE=yes" > /etc/cron.d/gitolite-update
+echo "*/2 * * * * root /bin/bash -c 'source /etc/profile.d/rvm.sh && /usr/local/arvados/update-gitolite.rb production'" >> /etc/cron.d/gitolite-update
+
+# Create/update the repos now
+. /etc/profile.d/rvm.sh
+export ARVADOS_API_HOST=api
+export ARVADOS_API_HOST_INSECURE=yes
+export ARVADOS_API_TOKEN=@@API_SUPERUSER_SECRET@@
+/usr/local/arvados/update-gitolite.rb production
+
+echo "PATH=/usr/bin:/bin:/sbin" > /etc/cron.d/arvados-repo-update
+echo "*/5 * * * * git cd ~git/repositories/arvados.git && git fetch https://github.com/curoverse/arvados.git master:master" >> /etc/cron.d/arvados-repo-update
+
--- /dev/null
+#!/bin/bash
+
+set -x
+
+. /etc/profile.d/rvm.sh
+
+export ARVADOS_API_HOST=api
+export ARVADOS_API_HOST_INSECURE=yes
+export ARVADOS_API_TOKEN=@@API_SUPERUSER_SECRET@@
+
+# All users group
+prefix=`arv --format=uuid user current | cut -d- -f1`
+read -rd $'\000' newgroup <<EOF; arv group create --group "$newgroup"
+{
+ "uuid":"$prefix-j7d0g-fffffffffffffff",
+ "name":"All users"
+}
+EOF
+
+# Arvados repository object
+all_users_group_uuid="$prefix-j7d0g-fffffffffffffff"
+repo_uuid=`arv --format=uuid repository create --repository '{"name":"arvados","fetch_url":"git@api:arvados.git","push_url":"git@api:arvados.git"}'`
+echo "Arvados repository uuid is $repo_uuid"
+
+read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"
+{
+ "tail_uuid":"$all_users_group_uuid",
+ "head_uuid":"$repo_uuid",
+ "link_class":"permission",
+ "name":"can_read"
+}
+EOF
+
+# Make sure the necessary keep_service objects exist
+arv keep_service list > /tmp/keep_service.list
+
+grep -q keep_server_0 /tmp/keep_service.list
+if [[ "$?" != "0" ]]; then
+ arv keep_service create --keep-service "$(cat /root/keep_server_0.json)"
+fi
+
+grep -q keep_server_1 /tmp/keep_service.list
+if [[ "$?" != "0" ]]; then
+ arv keep_service create --keep-service "$(cat /root/keep_server_1.json)"
+fi
+
+# User repository object
+user_uuid=`arv --format=uuid user current`
+repo_uuid=`arv --format=uuid repository create --repository '{"name":"@@ARVADOS_USER_NAME@@","fetch_url":"git@api:@@ARVADOS_USER_NAME@@.git","push_url":"git@api:@@ARVADOS_USER_NAME@@.git"}'`
+echo "User repository uuid is $repo_uuid"
+
+read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"
+{
+ "tail_uuid":"$user_uuid",
+ "head_uuid":"$repo_uuid",
+ "link_class":"permission",
+ "name":"can_write"
+}
+EOF
+
+# Shell machine object
+arv virtual_machine create --virtual-machine '{"hostname":"shell"}'
--- /dev/null
+
+ControlMachine=api
+#SlurmUser=slurmd
+SlurmctldPort=6817
+SlurmdPort=6818
+AuthType=auth/munge
+#JobCredentialPrivateKey=/etc/slurm-llnl/slurm-key.pem
+#JobCredentialPublicCertificate=/etc/slurm-llnl/slurm-cert.pem
+StateSaveLocation=/tmp
+SlurmdSpoolDir=/tmp/slurmd
+SwitchType=switch/none
+MpiDefault=none
+SlurmctldPidFile=/var/run/slurmctld.pid
+SlurmdPidFile=/var/run/slurmd.pid
+ProctrackType=proctrack/pgid
+CacheGroups=0
+ReturnToService=2
+TaskPlugin=task/affinity
+#
+# TIMERS
+SlurmctldTimeout=300
+SlurmdTimeout=300
+InactiveLimit=0
+MinJobAge=300
+KillWait=30
+Waittime=0
+#
+# SCHEDULING
+SchedulerType=sched/backfill
+#SchedulerType=sched/builtin
+SchedulerPort=7321
+#SchedulerRootFilter=
+#SelectType=select/linear
+SelectType=select/cons_res
+SelectTypeParameters=CR_CPU_Memory
+FastSchedule=1
+#
+# LOGGING
+SlurmctldDebug=3
+#SlurmctldLogFile=
+SlurmdDebug=3
+#SlurmdLogFile=
+JobCompType=jobcomp/none
+#JobCompLoc=
+JobAcctGatherType=jobacct_gather/none
+#JobAcctLogfile=
+#JobAcctFrequency=
+#
+# COMPUTE NODES
+NodeName=DEFAULT
+# CPUs=8 State=UNKNOWN RealMemory=6967 Weight=6967
+PartitionName=DEFAULT MaxTime=INFINITE State=UP
+PartitionName=compute Default=YES Shared=yes
+#PartitionName=sysadmin Hidden=YES Shared=yes
+
+NodeName=compute[0-1]
+#NodeName=compute0 RealMemory=6967 Weight=6967
+
+PartitionName=compute Nodes=compute[0-1]
+PartitionName=crypto Nodes=compute[0-1]
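The @NodeName=compute[0-1]@ and @Nodes=compute[0-1]@ ranges above match the two compute containers that @arvdock@ starts by default with the @-c@ option. If you run a different number of compute containers, the ranges need to be widened to match — for example (hypothetical, for four nodes):

```
NodeName=compute[0-3]
PartitionName=compute Nodes=compute[0-3]
```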
[program:apache2]
command=/etc/apache2/foreground.sh
stopsignal=6
+
+[program:munge]
+user=root
+command=/etc/init.d/munge start
+startsecs=0
+
+[program:slurm]
+user=root
+command=/etc/init.d/slurm-llnl start
+startsecs=0
+
+[program:cron]
+user=root
+command=/etc/init.d/cron start
+startsecs=0
+
+[program:setup]
+user=root
+command=/usr/local/bin/setup.sh
+startsecs=0
+
+[program:setup-gitolite]
+user=root
+command=/usr/local/bin/setup-gitolite.sh
+startsecs=0
+
+[program:crunch-dispatch]
+user=root
+command=/usr/local/bin/crunch-dispatch-run.sh
--- /dev/null
+#!/usr/bin/env ruby
+
+require 'rubygems'
+require 'pp'
+require 'arvados'
+require 'active_support/all'
+
+# This script does the actual gitolite config management on disk.
+#
+# Ward Vandewege <ward@curoverse.com>
+
+# Default is development
+production = ARGV[0] == "production"
+
+ENV["RAILS_ENV"] = "development"
+ENV["RAILS_ENV"] = "production" if production
+
+DEBUG = 1
+
+# load and merge in the environment-specific application config info
+# if present, overriding base config parameters as specified
+path = File.dirname(__FILE__) + '/config/arvados-clients.yml'
+if File.exists?(path) then
+ cp_config = YAML.load_file(path)[ENV['RAILS_ENV']]
+else
+ puts "Please create a\n " + File.dirname(__FILE__) + "/config/arvados-clients.yml\n file"
+ exit 1
+end
+
+gitolite_url = cp_config['gitolite_url']
+gitolite_tmp = cp_config['gitolite_tmp']
+
+gitolite_admin = File.join(File.expand_path(File.dirname(__FILE__)) + '/' + gitolite_tmp + '/gitolite-admin')
+
+ENV['ARVADOS_API_HOST'] = cp_config['arvados_api_host']
+ENV['ARVADOS_API_TOKEN'] = cp_config['arvados_api_token']
+
+keys = ''
+
+seen = Hash.new
+
+def ensure_repo(name,permissions,user_keys,gitolite_admin)
+ tmp = ''
+ # Just in case...
+ name.gsub!(/[^a-z0-9]/i,'')
+
+ keys = Hash.new()
+
+ user_keys.each do |uuid,p|
+ p.each do |k|
+ next if k[:public_key].nil?
+ keys[uuid] = Array.new() if not keys.key?(uuid)
+
+ key = k[:public_key]
+ # Handle putty-style ssh public keys
+ key.sub!(/^(Comment: "r[^\n]*\n)(.*)$/m,'ssh-rsa \2 \1')
+ key.sub!(/^(Comment: "d[^\n]*\n)(.*)$/m,'ssh-dss \2 \1')
+ key.gsub!(/\n/,'')
+      key.strip!
+
+ keys[uuid].push(key)
+ end
+ end
+
+ cf = gitolite_admin + '/conf/auto/' + name + '.conf'
+
+ conf = "\nrepo #{name}\n"
+
+ commit = false
+
+ seen = {}
+ permissions.sort.each do |uuid,v|
+ conf += "\t#{v[:gitolite_permissions]}\t= #{uuid.to_s}\n"
+
+ count = 0
+ keys.include?(uuid) and keys[uuid].each do |v|
+ kf = gitolite_admin + '/keydir/arvados/' + uuid.to_s + "@#{count}.pub"
+ seen[kf] = true
+ if !File.exists?(kf) or IO::read(kf) != v then
+ commit = true
+ f = File.new(kf + ".tmp",'w')
+ f.write(v)
+ f.close()
+ # File.rename will overwrite the destination file if it exists
+ File.rename(kf + ".tmp",kf);
+ end
+ count += 1
+ end
+ end
+
+ if !File.exists?(cf) or IO::read(cf) != conf then
+ commit = true
+ f = File.new(cf + ".tmp",'w')
+ f.write(conf)
+ f.close()
+ # this is about as atomic as we can make the replacement of the file...
+ File.unlink(cf) if File.exists?(cf)
+ File.rename(cf + ".tmp",cf);
+ end
+
+ return commit,seen
+end
+
+begin
+
+ pwd = Dir.pwd
+ # Get our local gitolite-admin repo up to snuff
+ if not File.exists?(File.dirname(__FILE__) + '/' + gitolite_tmp) then
+ Dir.mkdir(File.join(File.dirname(__FILE__) + '/' + gitolite_tmp), 0700)
+ end
+ if not File.exists?(gitolite_admin) then
+ Dir.chdir(File.join(File.dirname(__FILE__) + '/' + gitolite_tmp))
+ `git clone #{gitolite_url}`
+ else
+ Dir.chdir(gitolite_admin)
+ `git pull`
+ end
+ Dir.chdir(pwd)
+
+ arv = Arvados.new( { :suppress_ssl_warnings => false } )
+
+ permissions = arv.repository.get_all_permissions
+
+ repos = permissions[:repositories]
+ user_keys = permissions[:user_keys]
+
+ @commit = false
+
+ @seen = {}
+
+ repos.each do |r|
+ next if r[:name].nil?
+ (@c,@s) = ensure_repo(r[:name],r[:user_permissions],user_keys,gitolite_admin)
+ @seen.merge!(@s)
+ @commit = true if @c
+ end
+
+ # Clean up public key files that should not be present
+ Dir.glob(gitolite_admin + '/keydir/arvados/*.pub') do |key_file|
+ next if key_file =~ /arvados_git_user.pub$/
+ next if @seen.has_key?(key_file)
+ puts "Extra file #{key_file}"
+ @commit = true
+ Dir.chdir(gitolite_admin)
+ key_file.gsub!(/^#{gitolite_admin}\//,'')
+ `git rm #{key_file}`
+ end
+
+ if @commit then
+ message = "#{Time.now().to_s}: update from API"
+ Dir.chdir(gitolite_admin)
+ `git add --all`
+ `git commit -m '#{message}'`
+ `git push`
+ end
+
+rescue Exception => bang
+ puts "Error: " + bang.to_s
+ puts bang.backtrace.join("\n")
+ exit 1
+end
+
DOCKER=`which docker`
fi
+COMPUTE_COUNTER=0
+
function usage {
echo >&2
echo >&2 "usage: $0 (start|stop|restart|test) [options]"
echo >&2
echo >&2 "$0 start/stop/restart options:"
- echo >&2 " -d [port], --doc[=port] Documentation server (default port 9898)"
- echo >&2 " -w [port], --workbench[=port] Workbench server (default port 9899)"
- echo >&2 " -s [port], --sso[=port] SSO server (default port 9901)"
- echo >&2 " -a [port], --api[=port] API server (default port 9900)"
- echo >&2 " -k, --keep Keep servers"
- echo >&2 " --ssh Enable SSH access to server containers"
- echo >&2 " -h, --help Display this help and exit"
+ echo >&2 " -d[port], --doc[=port] Documentation server (default port 9898)"
+ echo >&2 " -w[port], --workbench[=port] Workbench server (default port 9899)"
+ echo >&2 " -s[port], --sso[=port] SSO server (default port 9901)"
+ echo >&2 " -a[port], --api[=port] API server (default port 9900)"
+ echo >&2 " -c, --compute Compute nodes (starts 2)"
+ echo >&2 " -v, --vm Shell server"
+ echo >&2 " -n, --nameserver Nameserver"
+ echo >&2 " -k, --keep Keep servers"
+ echo >&2 " --ssh Enable SSH access to server containers"
+ echo >&2 " -h, --help Display this help and exit"
echo >&2
echo >&2 " If no options are given, the action is applied to all servers."
echo >&2
fi
if [[ "$2" != '' ]]; then
local name="$2"
- args="$args --name $name"
+ if [[ "$name" == "api_server" ]]; then
+ args="$args --dns=172.17.42.1 --dns-search=compute.dev.arvados --hostname api -P --name $name"
+ elif [[ "$name" == "compute" ]]; then
+ name=$name$COMPUTE_COUNTER
+ # We need --privileged because we run docker-inside-docker on the compute nodes
+ args="$args --dns=172.17.42.1 --dns-search=compute.dev.arvados --hostname compute$COMPUTE_COUNTER -P --privileged --name $name"
+ let COMPUTE_COUNTER=$(($COMPUTE_COUNTER + 1))
+ else
+    args="$args --dns=172.17.42.1 --dns-search=dev.arvados --hostname ${name%_server} --name $name"
+ fi
fi
if [[ "$3" != '' ]]; then
local volume="$3"
$DOCKER rm "$name" 2>/dev/null
echo "Starting container:"
+ #echo " $DOCKER run --dns=127.0.0.1 $args $image"
echo " $DOCKER run $args $image"
container=`$DOCKER run $args $image`
if [[ "$?" != "0" ]]; then
echo "Unable to start container"
exit 1
fi
- if $ENABLE_SSH
+  if [[ "$name" =~ ^compute ]] || $ENABLE_SSH;
then
ip=$(ip_address $container )
echo
local start_doc=false
local start_sso=false
local start_api=false
+ local start_compute=false
local start_workbench=false
+ local start_vm=false
+ local start_nameserver=false
local start_keep=false
# NOTE: This requires GNU getopt (part of the util-linux package on Debian-based distros).
- local TEMP=`getopt -o d::s::a::w::kh \
- --long doc::,sso::,api::,workbench::,keep,help,ssh \
+ local TEMP=`getopt -o d::s::a::cw::nkvh \
+ --long doc::,sso::,api::,compute,workbench::,nameserver,keep,vm,help,ssh \
-n "$0" -- "$@"`
if [ $? != 0 ] ; then echo "Use -h for help"; exit 1 ; fi
*) start_api=$2; shift 2 ;;
esac
;;
+ -c | --compute)
+ start_compute=2
+ shift
+ ;;
-w | --workbench)
case "$2" in
"") start_workbench=9899; shift 2 ;;
*) start_workbench=$2; shift 2 ;;
esac
;;
+ -v | --vm)
+ start_vm=true
+ shift
+ ;;
+ -n | --nameserver)
+ start_nameserver=true
+ shift
+ ;;
-k | --keep)
start_keep=true
shift
if [[ $start_doc == false &&
$start_sso == false &&
$start_api == false &&
+ $start_compute == false &&
$start_workbench == false &&
+ $start_vm == false &&
+ $start_nameserver == false &&
$start_keep == false ]]
then
start_doc=9898
start_sso=9901
start_api=9900
+ start_compute=2
start_workbench=9899
+ start_vm=true
+ start_nameserver=true
start_keep=true
fi
start_container "$start_api:443" "api_server" '' "sso_server:sso" "arvados/api"
fi
+ if [[ $start_nameserver != false ]]
+ then
+ # We rely on skydock and skydns for dns discovery between the slurm controller and compute nodes,
+ # so make sure they are running
+ $DOCKER ps | grep skydns >/dev/null
+ if [[ "$?" != "0" ]]; then
+ echo "Starting crosbymichael/skydns container..."
+ $DOCKER rm "skydns" 2>/dev/null
+ $DOCKER run -d -p 172.17.42.1:53:53/udp --name skydns crosbymichael/skydns -nameserver 8.8.8.8:53 -domain arvados
+ fi
+ $DOCKER ps | grep skydock >/dev/null
+ if [[ "$?" != "0" ]]; then
+ echo "Starting crosbymichael/skydock container..."
+ $DOCKER rm "skydock" 2>/dev/null
+ $DOCKER run -d -v /var/run/docker.sock:/docker.sock --name skydock crosbymichael/skydock -ttl 30 -environment dev -s /docker.sock -domain arvados -name skydns
+ fi
+ fi
+
+ if [[ $start_compute != false ]]
+ then
+ for i in `seq 0 $(($start_compute - 1))`; do
+ start_container "" "compute" '' "api_server:api" "arvados/compute"
+ done
+ fi
+
if [[ $start_keep != false ]]
then
# create `keep_volumes' array with a list of keep mount points
start_container "$start_doc:80" "doc_server" '' '' "arvados/doc"
fi
+ if [[ $start_vm != false ]]
+ then
+ start_container "" "shell" '' "api_server:api" "arvados/shell"
+ fi
+
if [[ $start_workbench != false ]]
then
start_container "$start_workbench:80" "workbench_server" '' "api_server:api" "arvados/workbench"
local stop_doc=""
local stop_sso=""
local stop_api=""
+ local stop_compute=""
local stop_workbench=""
+ local stop_nameserver=""
+ local stop_vm=""
local stop_keep=""
# NOTE: This requires GNU getopt (part of the util-linux package on Debian-based distros).
- local TEMP=`getopt -o d::s::a::w::kh \
- --long doc::,sso::,api::,workbench::,keep,help,ssh \
+ local TEMP=`getopt -o dsacwnkvh \
+ --long doc,sso,api,compute,workbench,nameserver,keep,vm,help \
-n "$0" -- "$@"`
if [ $? != 0 ] ; then echo "Use -h for help"; exit 1 ; fi
do
case $1 in
-d | --doc)
- stop_doc=doc_server ; shift 2 ;;
+ stop_doc=doc_server ; shift ;;
-s | --sso)
- stop_sso=sso_server ; shift 2 ;;
+ stop_sso=sso_server ; shift ;;
-a | --api)
- stop_api=api_server ; shift 2 ;;
+ stop_api=api_server ; shift ;;
+ -c | --compute)
+ stop_compute=`$DOCKER ps |grep -P "compute\d+" |grep -v api_server |cut -f1 -d ' '` ; shift ;;
-w | --workbench)
- stop_workbench=workbench_server ; shift 2 ;;
+ stop_workbench=workbench_server ; shift ;;
+ -n | --nameserver )
+ stop_nameserver="skydock skydns" ; shift ;;
+ -v | --vm )
+ stop_vm="shell" ; shift ;;
-k | --keep )
stop_keep="keep_server_0 keep_server_1" ; shift ;;
- --ssh)
- shift
- ;;
--)
shift
break
if [[ $stop_doc == "" &&
$stop_sso == "" &&
$stop_api == "" &&
+ $stop_compute == "" &&
$stop_workbench == "" &&
+ $stop_vm == "" &&
+ $stop_nameserver == "" &&
$stop_keep == "" ]]
then
stop_doc=doc_server
stop_sso=sso_server
stop_api=api_server
+ stop_compute=`$DOCKER ps |grep -P "compute\d+" |grep -v api_server |cut -f1 -d ' '`
stop_workbench=workbench_server
+ stop_vm=shell
+ stop_nameserver="skydock skydns"
stop_keep="keep_server_0 keep_server_1"
fi
- $DOCKER stop $stop_doc $stop_sso $stop_api $stop_workbench $stop_keep \
+ $DOCKER stop $stop_doc $stop_sso $stop_api $stop_compute $stop_workbench $stop_nameserver $stop_keep $stop_vm \
2>/dev/null
}
/usr/local/rvm/bin/rvm alias create default ruby && \
/bin/mkdir -p /usr/src/arvados
+ADD apt.arvados.org.list /etc/apt/sources.list.d/
+RUN apt-key adv --keyserver pgp.mit.edu --recv 1078ECD7
+RUN apt-get update && apt-get -qqy install python-arvados-python-client
+
ADD generated/arvados.tar.gz /usr/src/arvados/
# Update gem. This (hopefully) fixes
--- /dev/null
+# apt.arvados.org
+deb http://apt.arvados.org/ wheezy main
sudo apt-get -y install ruby1.9.3
fi
-build_tools/build.rb $*
+function usage {
+ echo >&2
+ echo >&2 "usage: $0 [options]"
+ echo >&2
+ echo >&2 "Calling $0 without arguments will build all Arvados docker images"
+ echo >&2
+ echo >&2 "$0 options:"
+ echo >&2 " -h, --help Print this help text"
+ echo >&2 " clean Clear all build information"
+ echo >&2 " realclean clean and remove all Arvados Docker images except arvados/debian"
+ echo >&2 " deepclean realclean and remove arvados/debian, crosbymichael/skydns and "
+  echo >&2 "                          crosbymichael/skydock Docker images"
+ echo >&2
+}
+
+if [ "$1" = '-h' ] || [ "$1" = '--help' ]; then
+ usage
+ exit 1
+fi
+
+build_tools/build.rb
+
+if [[ "$?" == "0" ]]; then
+ DOCKER=`which docker.io`
+
+ if [[ "$DOCKER" == "" ]]; then
+ DOCKER=`which docker`
+ fi
+
+ DOCKER=$DOCKER /usr/bin/make -f build_tools/Makefile $*
+fi
-all: api-image doc-image workbench-image keep-image sso-image
+# This is the 'shell hack'. Call make with DUMP=1 to see the effect.
+ifdef DUMP
+OLD_SHELL := $(SHELL)
+SHELL = $(warning [$@])$(OLD_SHELL) -x
+endif
+
+all: skydns-image skydock-image api-image compute-image doc-image workbench-image keep-image sso-image shell-image
+
+IMAGE_FILES := $(shell ls *-image 2>/dev/null |grep -v debian-arvados-image)
+GENERATED_FILES := $(shell ls */generated/* 2>/dev/null)
+GENERATED_DIRS := $(shell ls */generated 2>/dev/null)
# `make clean' removes the files generated in the build directory
# but does not remove any docker images generated in previous builds
clean:
- -rm -rf build
- -rm *-image */generated/*
- -@rmdir */generated
-
-# `make realclean' will also remove the docker images and force
-# subsequent makes to build the entire chain from the ground up
+ @echo "make clean"
+ -@rm -rf build
+ +@[ "$(IMAGE_FILES)$(GENERATED_FILES)" = "" ] || rm $(IMAGE_FILES) $(GENERATED_FILES) 2>/dev/null
+ +@[ "$(GENERATED_DIRS)" = "" ] || rmdir */generated 2>/dev/null
+
+DEBIAN_IMAGE := $(shell $(DOCKER) images -q arvados/debian |head -n1)
+
+REALCLEAN_CONTAINERS := $(shell $(DOCKER) ps -a |grep -e arvados -e api_server -e keep_server -e doc_server -e workbench_server |cut -f 1 -d' ')
+REALCLEAN_IMAGES := $(shell $(DOCKER) images -q arvados/* |grep -v $(DEBIAN_IMAGE) 2>/dev/null)
+DEEPCLEAN_IMAGES := $(shell $(DOCKER) images -q arvados/*)
+SKYDNS_CONTAINERS := $(shell $(DOCKER) ps -a |grep -e crosbymichael/skydns -e crosbymichael/skydock |cut -f 1 -d' ')
+SKYDNS_IMAGES := $(shell $(DOCKER) images -q crosbymichael/skyd*)
+
+# `make realclean' will also remove the Arvados docker images (but not the
+# arvados/debian image) and force subsequent makes to build the entire chain
+# from the ground up
realclean: clean
- -[ -n "`$(DOCKER) ps -q`" ] && $(DOCKER) stop `$(DOCKER) ps -q`
- -$(DOCKER) rm `$(DOCKER) ps -a |grep -e arvados -e api_server -e keep_server -e doc_server -e workbench_server |cut -f 1 -d' '`
- -$(DOCKER) rmi `$(DOCKER) images -q arvados/*`
+ @echo "make realclean"
+ +@[ "`$(DOCKER) ps -q`" = '' ] || $(DOCKER) stop `$(DOCKER) ps -q`
+ +@[ "$(REALCLEAN_CONTAINERS)" = '' ] || $(DOCKER) rm $(REALCLEAN_CONTAINERS)
+ +@[ "$(REALCLEAN_IMAGES)" = '' ] || $(DOCKER) rmi $(REALCLEAN_IMAGES)
+
+# `make deepclean' will remove all Arvados docker images and the skydns/skydock
+# images and force subsequent makes to build the entire chain from the ground up
+deepclean: clean
+ @echo "make deepclean"
+ -@rm -f debian-arvados-image 2>/dev/null
+ +@[ "`$(DOCKER) ps -q`" = '' ] || $(DOCKER) stop `$(DOCKER) ps -q`
+ +@[ "$(REALCLEAN_CONTAINERS)" = '' ] || $(DOCKER) rm $(REALCLEAN_CONTAINERS)
+ +@[ "$(DEEPCLEAN_IMAGES)" = '' ] || $(DOCKER) rmi $(DEEPCLEAN_IMAGES)
+ +@[ "$(SKYDNS_CONTAINERS)" = '' ] || $(DOCKER) rm $(SKYDNS_CONTAINERS)
+ +@[ "$(SKYDNS_IMAGES)" = '' ] || $(DOCKER) rmi $(SKYDNS_IMAGES)
# ============================================================
# Dependencies for */generated files which are prerequisites
BASE_DEPS = base/Dockerfile $(BASE_GENERATED)
+SLURM_DEPS = slurm/Dockerfile $(SLURM_GENERATED)
+
JOBS_DEPS = jobs/Dockerfile
JAVA_BWA_SAMTOOLS_DEPS = java-bwa-samtools/Dockerfile
-API_DEPS = api/Dockerfile $(API_GENERATED)
+API_DEPS = api/* $(API_GENERATED)
+
+SHELL_DEPS = shell/* $(SHELL_GENERATED)
+
+COMPUTE_DEPS = compute/* $(COMPUTE_GENERATED)
DOC_DEPS = doc/Dockerfile doc/apache2_vhost
BASE_GENERATED = base/generated/arvados.tar.gz
+SLURM_GENERATED = slurm/generated/*
+
+COMPUTE_GENERATED = compute/generated/setup.sh
+
+COMPUTE_GENERATED_IN = compute/setup.sh.in
+
API_GENERATED = \
+ api/generated/arvados-clients.yml \
api/generated/apache2_vhost \
api/generated/config_databases.sh \
api/generated/database.yml \
api/generated/omniauth.rb \
api/generated/application.yml \
+ api/generated/setup.sh \
+ api/generated/setup-gitolite.sh \
+ api/generated/slurm.conf \
api/generated/superuser_token
API_GENERATED_IN = \
+ api/arvados-clients.yml.in \
api/apache2_vhost.in \
api/config_databases.sh.in \
api/database.yml.in \
api/omniauth.rb.in \
api/application.yml.in \
+ api/setup.sh.in \
+ api/setup-gitolite.sh.in \
+ api/slurm.conf.in \
api/superuser_token.in
+SHELL_GENERATED = \
+ shell/generated/setup.sh \
+ shell/generated/superuser_token
+
+SHELL_GENERATED_IN = \
+ shell/setup.sh.in \
+ shell/superuser_token.in
+
+SLURM_GENERATED = \
+ slurm/generated/slurm.conf
+
+SLURM_GENERATED_IN = \
+ slurm/slurm.conf.in
+
WORKBENCH_GENERATED = \
workbench/generated/apache2_vhost \
workbench/generated/application.yml
cd build/sdk/ruby && gem build arvados.gemspec
touch build/.buildstamp
+$(SLURM_GENERATED): config.yml $(BUILD)
+ $(CONFIG_RB)
+ mkdir -p slurm/generated
+
$(BASE_GENERATED): config.yml $(BUILD)
$(CONFIG_RB)
mkdir -p base/generated
$(API_GENERATED): config.yml $(API_GENERATED_IN)
$(CONFIG_RB)
+$(SHELL_GENERATED): config.yml $(SHELL_GENERATED_IN)
+ $(CONFIG_RB)
+
$(WORKBENCH_GENERATED): config.yml $(WORKBENCH_GENERATED_IN)
$(CONFIG_RB)
+$(COMPUTE_GENERATED): config.yml $(COMPUTE_GENERATED_IN)
+	$(CONFIG_RB)
+
$(WAREHOUSE_GENERATED): config.yml $(WAREHOUSE_GENERATED_IN)
$(CONFIG_RB)
# The main Arvados servers: api, doc, workbench, warehouse
api-image: passenger-image $(BUILD) $(API_DEPS)
+ @echo "Building api-image"
mkdir -p api/generated
tar -czf api/generated/api.tar.gz -C build/services api
+ chmod 755 api/generated/setup.sh
+ chmod 755 api/generated/setup-gitolite.sh
$(DOCKER_BUILD) -t arvados/api api
date >api-image
+shell-image: base-image $(BUILD) $(SHELL_DEPS)
+ @echo "Building shell-image"
+ mkdir -p shell/generated
+ chmod 755 shell/generated/setup.sh
+ $(DOCKER_BUILD) -t arvados/shell shell
+ date >shell-image
+
+compute-image: slurm-image $(BUILD) $(COMPUTE_DEPS)
+ @echo "Building compute-image"
+ chmod 755 compute/generated/setup.sh
+ $(DOCKER_BUILD) -t arvados/compute compute
+ date >compute-image
+
doc-image: base-image $(BUILD) $(DOC_DEPS)
+ @echo "Building doc-image"
mkdir -p doc/generated
tar -czf doc/generated/doc.tar.gz -C build doc
$(DOCKER_BUILD) -t arvados/doc doc
date >doc-image
-keep-image: debian-image $(BUILD) $(KEEP_DEPS)
+keep-image: debian-arvados-image $(BUILD) $(KEEP_DEPS)
+ @echo "Building keep-image"
$(DOCKER_BUILD) -t arvados/keep keep
date >keep-image
date >bcbio-nextgen-image
workbench-image: passenger-image $(BUILD) $(WORKBENCH_DEPS)
+ @echo "Building workbench-image"
mkdir -p workbench/generated
tar -czf workbench/generated/workbench.tar.gz -C build/apps workbench
$(DOCKER_BUILD) -t arvados/workbench workbench
date >warehouse-image
sso-image: passenger-image $(SSO_DEPS)
+ @echo "Building sso-image"
$(DOCKER_BUILD) -t arvados/sso sso
date >sso-image
# that are dependencies for every Arvados service.
passenger-image: base-image
+ @echo "Building passenger-image"
$(DOCKER_BUILD) -t arvados/passenger passenger
date >passenger-image
-base-image: debian-image $(BASE_DEPS)
+slurm-image: base-image $(SLURM_DEPS)
+ @echo "Building slurm-image"
+ $(DOCKER_BUILD) -t arvados/slurm slurm
+ date >slurm-image
+
+base-image: debian-arvados-image $(BASE_DEPS)
+ @echo "Building base-image"
$(DOCKER_BUILD) -t arvados/base base
date >base-image
-debian-image:
+debian-arvados-image:
+ @echo "Building debian-arvados-image"
./mkimage-debootstrap.sh arvados/debian wheezy ftp://ftp.us.debian.org/debian/
- date >debian-image
+ date >debian-arvados-image
+
+skydns-image:
+ @echo "Downloading skydns-image"
+ $(DOCKER) pull crosbymichael/skydns
+ date >skydns-image
+
+skydock-image:
+ @echo "Downloading skydock-image"
+ $(DOCKER) pull crosbymichael/skydock
+ date >skydock-image
end
end
+ print "Arvados needs to know the shell login name for the administrative user.\n"
+ print "This will also be used as the name for your git repository.\n"
+ print "\n"
+ user_name = ""
+ until is_valid_user_name? user_name
+ print "Enter a shell login name here: "
+ user_name = gets.strip
+ if not is_valid_user_name? user_name
+ print "That doesn't look like a valid shell login name. Please try again.\n"
+ end
+ end
+
File.open 'config.yml', 'w' do |config_out|
config = YAML.load_file 'config.yml.example'
config['API_AUTO_ADMIN_USER'] = admin_email_address
+ config['ARVADOS_USER_NAME'] = user_name
config['API_HOSTNAME'] = generate_api_hostname
config['PUBLIC_KEY_PATH'] = find_or_create_ssh_key(config['API_HOSTNAME'])
config.each_key do |var|
docker_ok? docker_path and
debootstrap_ok? and
File.exists? 'config.yml'
- warn "Building Arvados."
- system({"DOCKER" => docker_path}, '/usr/bin/make', '-f', options[:makefile], *ARGV)
+ exit 0
+ else
+ exit 6
end
end
str.match /^\S+@\S+\.\S+$/
end
+# is_valid_user_name?
+# Returns a truthy value if its argument looks like a valid unix username.
+# This is a very loose sanity check.
+#
+def is_valid_user_name? str
+ # borrowed from Debian's adduser (version 3.110)
+ str.match /^[_.A-Za-z0-9][-\@_.A-Za-z0-9]*\$?$/
+end
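+# For example (illustrative names):
+#   is_valid_user_name? "crunch"    # => truthy (MatchData)
+#   is_valid_user_name? "-crunch"   # => nil (may not start with '-')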
+
# generate_api_hostname
# Generates a 5-character randomly chosen API hostname.
#
options[:makefile] = mk
end
end
-
main options
end
--- /dev/null
+# Arvados compute node Docker container.
+
+FROM arvados/slurm
+MAINTAINER Ward Vandewege <ward@curoverse.com>
+
+RUN apt-get update && apt-get -qqy install supervisor python-pip python-pyvcf python-gflags python-google-api-python-client python-virtualenv libattr1-dev libfuse-dev python-dev python-llfuse fuse crunchstat python-arvados-fuse cron
+
+ADD fuse.conf /etc/fuse.conf
+
+RUN /usr/local/rvm/bin/rvm-exec default gem install arvados-cli arvados
+
+# Install Docker from the Docker Inc. repository
+RUN apt-get update -qq && apt-get install -qqy iptables ca-certificates lxc apt-transport-https
+RUN echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list
+RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
+RUN apt-get update -qq && apt-get install -qqy lxc-docker
+
+RUN addgroup --gid 4005 crunch && mkdir /home/crunch && useradd --uid 4005 --gid 4005 crunch && usermod crunch -G fuse,docker && chown crunch:crunch /home/crunch
+
+# Supervisor.
+ADD supervisor.conf /etc/supervisor/conf.d/arvados.conf
+ADD ssh.sh /usr/local/bin/ssh.sh
+ADD generated/setup.sh /usr/local/bin/setup.sh
+ADD wrapdocker /usr/local/bin/wrapdocker.sh
+
+VOLUME /var/lib/docker
+# Start the supervisor.
+CMD ["/usr/bin/supervisord", "-n"]
--- /dev/null
+# Set the maximum number of FUSE mounts allowed to non-root users.
+# The default is 1000.
+#
+#mount_max = 1000
+
+# Allow non-root users to specify the 'allow_other' or 'allow_root'
+# mount options.
+#
+user_allow_other
+
--- /dev/null
+#!/bin/bash
+
+. /etc/profile.d/rvm.sh
+
+export ARVADOS_API_HOST=api
+export ARVADOS_API_HOST_INSECURE=yes
+export ARVADOS_API_TOKEN=@@API_SUPERUSER_SECRET@@
+
+arv node create --node {} > /tmp/node.json
+
+UUID=`grep \"uuid\" /tmp/node.json |cut -f4 -d\"`
+PING_SECRET=`grep \"ping_secret\" /tmp/node.json |cut -f4 -d\"`
+
+echo "*/5 * * * * root /usr/bin/curl -k -d ping_secret=$PING_SECRET https://api/arvados/v1/nodes/$UUID/ping" > /etc/cron.d/node_ping
+
+# Send a ping now
+/usr/bin/curl -k -d ping_secret=$PING_SECRET https://api/arvados/v1/nodes/$UUID/ping?ping_secret=$PING_SECRET
+
+# Just make sure /dev/fuse permissions are correct (the device appears after fuse is loaded)
+chmod 1660 /dev/fuse && chgrp fuse /dev/fuse
--- /dev/null
+#!/bin/bash
+
+# Start the ssh daemon if requested via the ENABLE_SSH environment variable
+# (unset, 0, false, no, or f leaves it disabled).
+if [[ ! "$ENABLE_SSH" =~ (0|false|no|f|^$) ]]; then
+  echo "Starting ssh server"
+  /etc/init.d/ssh start
+fi
+
--- /dev/null
+[program:ssh]
+user=root
+command=/usr/local/bin/ssh.sh
+startsecs=0
+
+[program:munge]
+user=root
+command=/etc/init.d/munge start
+startsecs=0
+
+[program:slurm]
+user=root
+command=/etc/init.d/slurm-llnl start
+startsecs=0
+
+[program:cron]
+user=root
+command=/etc/init.d/cron start
+startsecs=0
+
+[program:setup]
+user=root
+command=/usr/local/bin/setup.sh
+startsecs=0
+
+[program:docker]
+user=root
+command=/usr/local/bin/wrapdocker.sh
+
--- /dev/null
+#!/bin/bash
+
+# Borrowed from https://github.com/jpetazzo/dind under Apache2
+# and slightly modified.
+
+# First, make sure that cgroups are mounted correctly.
+CGROUP=/sys/fs/cgroup
+: ${LOG:=stdio}
+
+[ -d $CGROUP ] ||
+ mkdir $CGROUP
+
+mountpoint -q $CGROUP ||
+ mount -n -t tmpfs -o uid=0,gid=0,mode=0755 cgroup $CGROUP || {
+ echo "Could not make a tmpfs mount. Did you use -privileged?"
+ exit 1
+ }
+
+if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security
+then
+ mount -t securityfs none /sys/kernel/security || {
+ echo "Could not mount /sys/kernel/security."
+ echo "AppArmor detection and -privileged mode might break."
+ }
+fi
+
+# Mount the cgroup hierarchies exactly as they are in the parent system.
+for SUBSYS in $(cut -d: -f2 /proc/1/cgroup)
+do
+ [ -d $CGROUP/$SUBSYS ] || mkdir $CGROUP/$SUBSYS
+ mountpoint -q $CGROUP/$SUBSYS ||
+ mount -n -t cgroup -o $SUBSYS cgroup $CGROUP/$SUBSYS
+
+ # The two following sections address a bug which manifests itself
+ # by a cryptic "lxc-start: no ns_cgroup option specified" when
+  # trying to start containers within a container.
+ # The bug seems to appear when the cgroup hierarchies are not
+ # mounted on the exact same directories in the host, and in the
+ # container.
+
+ # Named, control-less cgroups are mounted with "-o name=foo"
+ # (and appear as such under /proc/<pid>/cgroup) but are usually
+ # mounted on a directory named "foo" (without the "name=" prefix).
+ # Systemd and OpenRC (and possibly others) both create such a
+ # cgroup. To avoid the aforementioned bug, we symlink "foo" to
+ # "name=foo". This shouldn't have any adverse effect.
+ echo $SUBSYS | grep -q ^name= && {
+ NAME=$(echo $SUBSYS | sed s/^name=//)
+ ln -s $SUBSYS $CGROUP/$NAME
+ }
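+  # (For example, a /proc/1/cgroup entry like "4:name=systemd:/" yields
+  # SUBSYS="name=systemd", so we link $CGROUP/systemd -> $CGROUP/name=systemd.)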
+
+ # Likewise, on at least one system, it has been reported that
+ # systemd would mount the CPU and CPU accounting controllers
+ # (respectively "cpu" and "cpuacct") with "-o cpuacct,cpu"
+ # but on a directory called "cpu,cpuacct" (note the inversion
+ # in the order of the groups). This tries to work around it.
+ [ $SUBSYS = cpuacct,cpu ] && ln -s $SUBSYS $CGROUP/cpu,cpuacct
+done
+
+# Note: as I write those lines, the LXC userland tools cannot setup
+# a "sub-container" properly if the "devices" cgroup is not in its
+# own hierarchy. Let's detect this and issue a warning.
+grep -q :devices: /proc/1/cgroup ||
+ echo "WARNING: the 'devices' cgroup should be in its own hierarchy."
+grep -qw devices /proc/1/cgroup ||
+ echo "WARNING: it looks like the 'devices' cgroup is not mounted."
+
+# Now, close extraneous file descriptors.
+pushd /proc/self/fd >/dev/null
+for FD in *
+do
+ case "$FD" in
+ # Keep stdin/stdout/stderr
+ [012])
+ ;;
+ # Nuke everything else
+ *)
+ eval exec "$FD>&-"
+ ;;
+ esac
+done
+popd >/dev/null
+
+
+# If a pidfile is still around (for example after a container restart),
+# delete it so that docker can start.
+rm -rf /var/run/docker.pid
+
+exec docker -d
+
# true when starting the container.
PUBLIC_KEY_PATH:
+# Username for your Arvados user. This will be used as your shell login name
+# as well as the name for your git repository.
+ARVADOS_USER_NAME:
+
# ARVADOS_DOMAIN: the Internet domain of this installation.
# ARVADOS_DNS_SERVER: the authoritative nameserver for ARVADOS_DOMAIN.
ARVADOS_DOMAIN: # e.g. arvados.internal
--- /dev/null
+# Arvados shell server Docker container.
+
+FROM arvados/base
+MAINTAINER Ward Vandewege <ward@curoverse.com>
+
+RUN apt-get update && apt-get -qqy install supervisor python-pip python-pyvcf python-gflags python-google-api-python-client python-virtualenv libattr1-dev libfuse-dev python-dev python-llfuse fuse crunchstat python-arvados-fuse cron vim
+
+ADD fuse.conf /etc/fuse.conf
+
+ADD generated/superuser_token /tmp/superuser_token
+
+RUN /usr/local/rvm/bin/rvm-exec default gem install arvados-cli arvados
+
+# Supervisor.
+ADD supervisor.conf /etc/supervisor/conf.d/arvados.conf
+ADD generated/setup.sh /usr/local/bin/setup.sh
+
+# Start the supervisor.
+CMD ["/usr/bin/supervisord", "-n"]
--- /dev/null
+# Set the maximum number of FUSE mounts allowed to non-root users.
+# The default is 1000.
+#
+#mount_max = 1000
+
+# Allow non-root users to specify the 'allow_other' or 'allow_root'
+# mount options.
+#
+user_allow_other
+
--- /dev/null
+#!/bin/bash
+
+USER_NAME="@@ARVADOS_USER_NAME@@"
+
+useradd $USER_NAME -s /bin/bash
+mkdir /home/$USER_NAME/.ssh -p
+
+cp ~root/.ssh/authorized_keys /home/$USER_NAME/.ssh/authorized_keys
+
+# Install our token
+mkdir -p /home/$USER_NAME/.config/arvados;
+echo "ARVADOS_API_HOST=api" >> /home/$USER_NAME/.config/arvados/settings.conf
+echo "ARVADOS_API_HOST_INSECURE=yes" >> /home/$USER_NAME/.config/arvados/settings.conf
+echo "ARVADOS_API_TOKEN=$(cat /tmp/superuser_token)" >> /home/$USER_NAME/.config/arvados/settings.conf
+chmod 600 /home/$USER_NAME/.config/arvados/settings.conf
+
+chown $USER_NAME:$USER_NAME /home/$USER_NAME -R
+
+rm -f /tmp/superuser_token
+
+
--- /dev/null
+@@API_SUPERUSER_SECRET@@
--- /dev/null
+[program:ssh]
+user=root
+command=/etc/init.d/ssh start
+startsecs=0
+
+[program:cron]
+user=root
+command=/etc/init.d/cron start
+startsecs=0
+
+[program:setup]
+user=root
+command=/usr/local/bin/setup.sh
+startsecs=0
+
--- /dev/null
+# Slurm node Docker container.
+
+FROM arvados/base
+MAINTAINER Ward Vandewege <ward@curoverse.com>
+
+RUN apt-get update && apt-get -q -y install slurm-llnl munge
+
+ADD munge.key /etc/munge/
+RUN chown munge:munge /etc/munge/munge.key && chmod 600 /etc/munge/munge.key
+ADD generated/slurm.conf /etc/slurm-llnl/
+
--- /dev/null
+
+ControlMachine=api
+#SlurmUser=slurmd
+SlurmctldPort=6817
+SlurmdPort=6818
+AuthType=auth/munge
+#JobCredentialPrivateKey=/etc/slurm-llnl/slurm-key.pem
+#JobCredentialPublicCertificate=/etc/slurm-llnl/slurm-cert.pem
+StateSaveLocation=/tmp
+SlurmdSpoolDir=/tmp/slurmd
+SwitchType=switch/none
+MpiDefault=none
+SlurmctldPidFile=/var/run/slurmctld.pid
+SlurmdPidFile=/var/run/slurmd.pid
+ProctrackType=proctrack/pgid
+CacheGroups=0
+ReturnToService=2
+TaskPlugin=task/affinity
+#
+# TIMERS
+SlurmctldTimeout=300
+SlurmdTimeout=300
+InactiveLimit=0
+MinJobAge=300
+KillWait=30
+Waittime=0
+#
+# SCHEDULING
+SchedulerType=sched/backfill
+#SchedulerType=sched/builtin
+SchedulerPort=7321
+#SchedulerRootFilter=
+#SelectType=select/linear
+SelectType=select/cons_res
+SelectTypeParameters=CR_CPU_Memory
+FastSchedule=1
+#
+# LOGGING
+SlurmctldDebug=3
+#SlurmctldLogFile=
+SlurmdDebug=3
+#SlurmdLogFile=
+JobCompType=jobcomp/none
+#JobCompLoc=
+JobAcctGatherType=jobacct_gather/none
+#JobAcctLogfile=
+#JobAcctFrequency=
+#
+# COMPUTE NODES
+NodeName=DEFAULT
+# CPUs=8 State=UNKNOWN RealMemory=6967 Weight=6967
+PartitionName=DEFAULT MaxTime=INFINITE State=UP
+PartitionName=compute Default=YES Shared=yes
+#PartitionName=sysadmin Hidden=YES Shared=yes
+
+NodeName=compute[0-1]
+#NodeName=compute0 RealMemory=6967 Weight=6967
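+# The hostlist expression compute[0-1] expands to the hosts compute0 and
+# compute1; widen the range to match the number of compute containers you run.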
+
+PartitionName=compute Nodes=compute[0-1]
+PartitionName=crypto Nodes=compute[0-1]
--- /dev/null
+[program:ssh]
+user=root
+command=/usr/local/bin/ssh.sh
+startsecs=0
+
+[program:munge]
+user=root
+command=/etc/init.d/munge start
+startsecs=0
+
+[program:slurm]
+user=root
+command=/etc/init.d/slurm-llnl start
+startsecs=0
+
+
FROM arvados/passenger
MAINTAINER Ward Vandewege <ward@curoverse.com>
+# We need graphviz for the provenance graphs
+RUN apt-get update && apt-get -qqy install graphviz
+
# Update Arvados source
RUN /bin/mkdir -p /usr/src/arvados/apps
ADD generated/workbench.tar.gz /usr/src/arvados/apps/