- Computation with Crunch:
- api/execution.html.textile.liquid
- architecture/dispatchcloud.html.textile.liquid
+ - architecture/singularity.html.textile.liquid
- Other:
- api/permission-model.html.textile.liquid
- architecture/federation.html.textile.liquid
- install/install-shell-server.html.textile.liquid
- install/install-webshell.html.textile.liquid
- install/install-arv-git-httpd.html.textile.liquid
- - Containers API (cloud):
+ - Containers API (all):
- install/install-jobs-image.html.textile.liquid
+ - Containers API (cloud):
- install/crunch2-cloud/install-compute-node.html.textile.liquid
- install/crunch2-cloud/install-dispatch-cloud.html.textile.liquid
- - Containers API (slurm):
+ - Compute nodes (Slurm or LSF):
+ - install/crunch2/install-compute-node-docker.html.textile.liquid
+ - install/crunch2/install-compute-node-singularity.html.textile.liquid
+ - Containers API (Slurm):
- install/crunch2-slurm/install-dispatch.html.textile.liquid
- install/crunch2-slurm/configure-slurm.html.textile.liquid
- - install/crunch2-slurm/install-compute-node.html.textile.liquid
- install/crunch2-slurm/install-test.html.textile.liquid
- - Containers API (lsf):
+ - Containers API (LSF):
- install/crunch2-lsf/install-dispatch.html.textile.liquid
- Additional configuration:
- - install/singularity.html.textile.liquid
- install/container-shell-access.html.textile.liquid
- External dependencies:
- install/install-postgresql.html.textile.liquid
h2. Scheduling parameters
-Parameters to be passed to the container scheduler (e.g., SLURM) when running a container.
+Parameters to be passed to the container scheduler (e.g., Slurm) when running a container.
table(table table-bordered table-condensed).
|_. Key|_. Type|_. Description|_. Notes|
--- /dev/null
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+If you plan to use custom certificates, please set the variable <i>USE_LETSENCRYPT=no</i> and copy your certificates to the directory specified with the variable @CUSTOM_CERTS_DIR@ (usually "./certs") inside the remote directory where you copied the @provision.sh@ script. From that directory, the provision script will install the certificates required for the role you're installing.
+
+The script expects certificate/key file pairs with these basenames (matching the role name, except for <i>keepweb</i>, which is split into <i>download</i> and <i>collections</i>):
+
+* "controller"
+* "websocket"
+* "workbench"
+* "workbench2"
+* "webshell"
+* "download" # Part of keepweb
+* "collections" # Part of keepweb
+* "keepproxy"
+
+For example, for 'keepproxy', the script will look for
+
+<notextile>
+<pre><code>${CUSTOM_CERTS_DIR}/keepproxy.crt
+${CUSTOM_CERTS_DIR}/keepproxy.key
+</code></pre>
+</notextile>
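+
+For example, assuming the default @CUSTOM_CERTS_DIR@ of "./certs", a custom keepproxy certificate could be staged like this (the source file names are hypothetical):
+
+<notextile>
+<pre><code>mkdir -p ./certs
+cp /path/to/keep.example.com.crt ./certs/keepproxy.crt
+cp /path/to/keep.example.com.key ./certs/keepproxy.key
+</code></pre>
+</notextile>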
--- /dev/null
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+h2(#singularity_mksquashfs_configuration). Singularity mksquashfs configuration
+
+{% if show_docker_warning != nil %}
+{% include 'notebox_begin_warning' %}
+This section is only relevant when using Singularity. Skip this section when using Docker.
+{% include 'notebox_end' %}
+{% endif %}
+
+Docker images are converted on the fly by @mksquashfs@, which can consume a considerable amount of RAM. The RAM usage of mksquashfs can be restricted in @/etc/singularity/singularity.conf@ with a line like @mksquashfs mem = 256M@. The amount of memory made available for mksquashfs should be configured lower than the smallest amount of memory requested by a container on the cluster to avoid the conversion being killed for using too much memory. The default memory allocation in CWL is 256M, so that is also a good choice for the @mksquashfs mem@ setting.
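+
+For example, the corresponding line in @/etc/singularity/singularity.conf@ would read:
+
+<notextile>
+<pre><code>mksquashfs mem = 256M
+</code></pre>
+</notextile>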
# To submit work, create a "container request":{{site.baseurl}}/api/methods/container_requests.html in the @Committed@ state (see the example below).
# The system will fulfill the container request by creating or reusing a "Container object":{{site.baseurl}}/api/methods/containers.html and assigning it to the @container_uuid@ field. If the same request has been submitted in the past, it may reuse an existing container. The reuse behavior can be suppressed with @use_existing: false@ in the container request.
-# The dispatcher process will notice a new container in @Queued@ state and submit a container executor to the underlying work queuing system (such as SLURM).
+# The dispatcher process will notice a new container in @Queued@ state and submit a container executor to the underlying work queuing system (such as Slurm).
# The container executes. Upon termination the container goes into the @Complete@ state. If the container execution was interrupted or lost due to system failure, it will go into the @Cancelled@ state.
# When the container associated with the container request is completed, the container request will go into the @Final@ state.
# The @output_uuid@ field of the container request contains the uuid of the output collection produced by the container request.
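+
+For example, a minimal container request can be submitted from the command line with the @arv@ CLI (all values here are illustrative):
+
+<notextile>
+<pre><code>$ <span class="userinput">arv container_request create --container-request '{
+  "name": "hello",
+  "state": "Committed",
+  "priority": 500,
+  "container_image": "arvados/jobs",
+  "command": ["echo", "hello"],
+  "output_path": "/out",
+  "mounts": {
+    "/out": {"kind": "tmp", "capacity": 10000}
+  },
+  "runtime_constraints": {"vcpus": 1, "ram": 268435456}
+}'</span>
+</code></pre>
+</notextile>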
Priority 1000 is the highest priority.
-The actual order that containers execute is determined by the underlying scheduling software (e.g. SLURM) and may be based on a combination of container priority, submission time, available resources, and other factors.
+The actual order that containers execute is determined by the underlying scheduling software (e.g. Slurm) and may be based on a combination of container priority, submission time, available resources, and other factors.
In the current implementation, the magnitude of difference in priority between two containers affects the weight of priority vs age in determining scheduling order. If two containers have only a small difference in priority (for example, 500 and 501) and the lower priority container has a longer queue time, the lower priority container may be scheduled before the higher priority container. Use a greater magnitude difference (for example, 500 and 600) to give higher weight to priority over queue time.
--- /dev/null
+---
+layout: default
+navsection: architecture
+title: Singularity
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+Arvados can be configured to use "Singularity":https://sylabs.io/singularity/ instead of Docker to execute containers on cloud nodes or a Slurm/LSF cluster. Singularity may be preferable due to its simpler installation and its lack of a long-running daemon process and special system users/groups. For on premises Slurm/LSF clusters, see the "Set up a compute node with Singularity":{{ site.baseurl }}/install/crunch2/install-compute-node-singularity.html page. For cloud compute clusters, see the "Build a cloud compute node image":{{ site.baseurl }}/install/crunch2-cloud/install-compute-node.html page.
+
+h2. Design overview
+
+When Arvados is configured to use Singularity as the runtime engine for Crunch, containers are executed by Singularity. The images specified in workflows and tool definitions must be Docker images uploaded via @arv-keepdocker@ or @arvados-cwl-runner@. When Singularity is the runtime engine, these images are converted to Singularity format (@.sif@) at runtime, as needed.
+
+To avoid repeating this conversion work unnecessarily, the @.sif@ files are cached in @Keep@. This is done on a per-user basis. If it does not exist yet, a new Arvados project named @.cache@ is automatically created in the user's home project. Similarly, a subproject named @auto-generated singularity images@ will be created in the @.cache@ project. The automatically generated @.sif@ files are stored in collections in that project, with an expiration date two weeks in the future. If the cached image exists when Crunch runs a new container, the expiration date will be pushed out, so that it is always two weeks in the future from the most recent start of a container using the image.
+
+It is safe to empty out or even remove the @.cache@ project or any of its contents; if necessary the cache projects and the @.sif@ files will automatically be regenerated.
+
+h2. Notes
+
+* Programs running in Singularity containers may behave differently than when run in Docker, due to differences between Singularity and Docker. For example, the root (image) filesystem is read-only in a Singularity container. Programs that attempt to write outside a designated output or temporary directory are likely to fail.
+
+* When using Singularity as the runtime engine, the compute node needs to have a compatible Singularity executable installed, as well as the @mksquashfs@ program used to convert Docker images to Singularity's @.sif@ format. The Arvados "compute node image build script":{{ site.baseurl }}/install/crunch2-cloud/install-compute-node.html has included these executables since Arvados 2.3.0.
+
+h2. Limitations
+
+Arvados Singularity support is a work in progress. These are the current limitations of the implementation:
+
+* Even when using the Singularity runtime, users' container images are expected to be saved in Docker format. Specifying a @.sif@ file as an image when submitting a container request is not yet supported.
+* Arvados' Singularity implementation does not yet limit the amount of memory available in a container. Each container will have access to all memory on the host where it runs, unless memory use is restricted by Slurm/LSF.
+* The Docker ENTRYPOINT instruction is ignored.
+* Arvados is tested with Singularity version 3.7.4. Other versions may not work.
This can be done via the "cloud console":https://console.cloud.google.com/kubernetes/ or via the command line:
<pre>
-$ gcloud container clusters create <CLUSTERNAME> --zone us-central1-a --machine-type n1-standard-2 --cluster-version 1.15
+$ gcloud container clusters create <CLUSTERNAME> --zone us-central1-a --machine-type n1-standard-2
</pre>
It takes a few minutes for the cluster to be initialized.
{% endcomment %}
{% include 'notebox_begin_warning' %}
-arvados-dispatch-cloud is only relevant for cloud installations. Skip this section if you are installing an on premises cluster that will spool jobs to Slurm.
+@arvados-dispatch-cloud@ is only relevant for cloud installations. Skip this section if you are installing an on premises cluster that will spool jobs to Slurm or LSF.
{% include 'notebox_end' %}
# "Introduction":#introduction
# "Create an SSH keypair":#sshkeypair
# "The build script":#building
+# "Singularity mksquashfs configuration":#singularity_mksquashfs_configuration
# "Build an AWS image":#aws
# "Build an Azure image":#azure
</code></pre>
</notextile>
+{% assign show_docker_warning = true %}
+
+{% include 'singularity_mksquashfs_configuration' %}
+
+The desired amount of memory to make available for @mksquashfs@ can be configured with an argument to the build script (see the next section). It defaults to @256M@.
+
h2(#building). The build script
The necessary files are located in the @arvados/tools/compute-images@ directory in the source tree. A build script is provided to generate the image. The @--help@ argument lists all available options:
--azure-sku (default: unset, required if building for Azure, e.g. 16.04-LTS)
Azure SKU image to use
--ssh_user (default: packer)
- The user packer will use lo log into the image
- --domain (default: arvadosapi.com)
- The domain part of the FQDN for the cluster
- --resolver (default: 8.8.8.8)
+ The user packer will use to log into the image
+ --resolver (default: host's network-provided resolver)
The DNS resolver for the machine
--reposuffix (default: unset)
Set this to "-dev" to track the unstable/dev Arvados repositories
--public-key-file (required)
Path to the public key file that a-d-c will use to log into the compute node
+ --mksquashfs-mem (default: 256M)
+ Only relevant when using Singularity. This is the amount of memory mksquashfs is allowed to use.
--debug
Output debug information (default: false)
</code></pre></notextile>
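+
+For example, a hypothetical AWS build that raises the @mksquashfs@ memory limit (the cluster ID, profile, and key path are placeholders):
+
+<notextile>
+<pre><code>$ <span class="userinput">./build.sh --json-file arvados-images-aws.json \
+    --arvados-cluster-id zzzzz \
+    --aws-profile default \
+    --public-key-file ~/.ssh/id_rsa.pub \
+    --mksquashfs-mem 512M</span>
+</code></pre>
+</notextile>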
{% endcomment %}
{% include 'notebox_begin_warning' %}
-arvados-dispatch-cloud is only relevant for cloud installations. Skip this section if you are installing an on premises cluster that will spool jobs to Slurm.
+@arvados-dispatch-cloud@ is only relevant for cloud installations. Skip this section if you are installing an on premises cluster that will spool jobs to Slurm or LSF.
{% include 'notebox_end' %}
# "Introduction":#introduction
{% endcomment %}
{% include 'notebox_begin_warning' %}
-arvados-dispatch-lsf is only relevant for on premises clusters that will spool jobs to LSF. Skip this section if you are installing a cloud cluster.
+@arvados-dispatch-lsf@ is only relevant for on premises clusters that will spool jobs to LSF. Skip this section if you use Slurm or if you are installing a cloud cluster.
{% include 'notebox_end' %}
h2(#overview). Overview
In order to run containers, you must choose a user that has permission to set up FUSE mounts and run Singularity/Docker containers on each compute node. This install guide refers to this user as the @crunch@ user. We recommend you create this user on each compute node with the same UID and GID, and add it to the @fuse@ and @docker@ system groups to grant it the necessary permissions. However, you can run the dispatcher under any account with sufficient permissions across the cluster.
-Set up all of your compute nodes "as you would for a SLURM cluster":../crunch2-slurm/install-compute-node.html.
+Set up all of your compute nodes with "Docker":../crunch2/install-compute-node-docker.html or "Singularity":../crunch2/install-compute-node-singularity.html.
*Current limitations*:
* Arvados container priority is not propagated to LSF job priority. This can cause inefficient use of compute resources, and even deadlock if there are fewer compute nodes than concurrent Arvados workflows.
@arvados-dispatch-lsf@ reads the common configuration file at @/etc/arvados/config.yml@.
+Add a DispatchLSF entry to the Services section, using the hostname where @arvados-dispatch-lsf@ will run, and an available port:
+
+<notextile>
+<pre> Services:
+ DispatchLSF:
+ InternalURLs:
+ "http://<code class="userinput">hostname.zzzzz.arvadosapi.com:9007</code>": {}</pre>
+</notextile>
+
Review the following configuration parameters and adjust as needed.
h3(#BsubSudoUser). Containers.LSF.BsubSudoUser
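+
+For example, one possible configuration, submitting LSF jobs as the @crunch@ user described above:
+
+<notextile>
+<pre><code>    Containers:
+      LSF:
+        BsubSudoUser: <span class="userinput">crunch</span>
+</code></pre>
+</notextile>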
{% endcomment %}
{% include 'notebox_begin_warning' %}
-crunch-dispatch-slurm is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you are installing a cloud cluster.
+@crunch-dispatch-slurm@ is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you use LSF or if you are installing a cloud cluster.
{% include 'notebox_end' %}
-Containers can be dispatched to a Slurm cluster. The dispatcher sends work to the cluster using Slurm's @sbatch@ command, so it works in a variety of SLURM configurations.
+Containers can be dispatched to a Slurm cluster. The dispatcher sends work to the cluster using Slurm's @sbatch@ command, so it works in a variety of Slurm configurations.
In order to run containers, you must run the dispatcher as a user that has permission to set up FUSE mounts and run Docker containers on each compute node. This install guide refers to this user as the @crunch@ user. We recommend you create this user on each compute node with the same UID and GID, and add it to the @fuse@ and @docker@ system groups to grant it the necessary permissions. However, you can run the dispatcher under any account with sufficient permissions across the cluster.
Whenever you change this file, you will need to update the copy _on every compute node_ as well as the controller node, and then run @sudo scontrol reconfigure@.
-*@ControlMachine@* should be a DNS name that resolves to the Slurm controller (dispatch/API server). This must resolve correctly on all Slurm worker nodes as well as the controller itself. In general SLURM is very sensitive about all of the nodes being able to communicate with the controller _and one another_, all using the same DNS names.
+*@ControlMachine@* should be a DNS name that resolves to the Slurm controller (dispatch/API server). This must resolve correctly on all Slurm worker nodes as well as the controller itself. In general Slurm is very sensitive about all of the nodes being able to communicate with the controller _and one another_, all using the same DNS names.
*@SelectType=select/linear@* is needed on cloud-based installations that update node sizes dynamically, but it can only schedule one container at a time on each node. On a static or homogeneous cluster, use @SelectType=select/cons_res@ with @SelectTypeParameters=CR_CPU_Memory@ instead to enable node sharing.
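+
+For example, on a static cluster the relevant @slurm.conf@ lines would be:
+
+<notextile>
+<pre><code>SelectType=select/cons_res
+SelectTypeParameters=CR_CPU_Memory
+</code></pre>
+</notextile>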
* In @application.yml@: <code>assign_node_hostname: worker1-%<slot_number>04d</code>
* In @slurm.conf@: <code>NodeName=worker1-[0000-0255]</code>
-If your worker hostnames are already assigned by other means, and the full set of names is known in advance, have your worker node bootstrapping script (see "Installing a compute node":install-compute-node.html) send its current hostname, rather than expect Arvados to assign one.
+If your worker hostnames are already assigned by other means, and the full set of names is known in advance, have your worker node bootstrapping script send its current hostname, rather than expect Arvados to assign one.
* In @application.yml@: <code>assign_node_hostname: false</code>
* In @slurm.conf@: <code>NodeName=alice,bob,clay,darlene</code>
{% endcomment %}
{% include 'notebox_begin_warning' %}
-crunch-dispatch-slurm is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you are installing a cloud cluster.
+@crunch-dispatch-slurm@ is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you use LSF or if you are installing a cloud cluster.
{% include 'notebox_end' %}
# "Introduction":#introduction
h2(#introduction). Introduction
-This assumes you already have a Slurm cluster, and have "set up all of your compute nodes":install-compute-node.html. Slurm packages are available for CentOS, Debian and Ubuntu. Please see your distribution package repositories. For information on installing Slurm from source, see "this install guide":https://slurm.schedmd.com/quickstart_admin.html
+This assumes you already have a Slurm cluster, and have set up all of your compute nodes with "Docker":../crunch2/install-compute-node-docker.html or "Singularity":../crunch2/install-compute-node-singularity.html. Slurm packages are available for CentOS, Debian and Ubuntu. Please see your distribution package repositories. For information on installing Slurm from source, see "this install guide":https://slurm.schedmd.com/quickstart_admin.html
The Arvados Slurm dispatcher can run on any node that can submit requests to both the Arvados API server and the Slurm controller (via @sbatch@). It is not resource-intensive, so you can run it on the API server node.
h3(#PrioritySpread). Containers.Slurm.PrioritySpread
crunch-dispatch-slurm adjusts the "nice" values of its Slurm jobs to ensure containers are prioritized correctly relative to one another. This option tunes the adjustment mechanism.
-* If non-Arvados jobs run on your Slurm cluster, and your Arvados containers are waiting too long in the Slurm queue because their "nice" values are too high for them to compete with other SLURM jobs, you should use a smaller PrioritySpread value.
+* If non-Arvados jobs run on your Slurm cluster, and your Arvados containers are waiting too long in the Slurm queue because their "nice" values are too high for them to compete with other Slurm jobs, you should use a smaller PrioritySpread value.
* If you have an older Slurm system that limits nice values to 10000, a smaller @PrioritySpread@ can help avoid reaching that limit.
* In other cases, a larger value is beneficial because it reduces the total number of adjustments made by executing @scontrol@.
Some versions of Docker (at least 1.9), when run under systemd, require the cgroup parent to be specified as a systemd slice. This causes an error when specifying a cgroup parent created outside systemd, such as those created by Slurm.
-You can work around this issue by disabling the Docker daemon's systemd integration. This makes it more difficult to manage Docker services with systemd, but Crunch does not require that functionality, and it will be able to use Slurm's cgroups as container parents. To do this, "configure the Docker daemon on all compute nodes":install-compute-node.html#configure_docker_daemon to run with the option @--exec-opt native.cgroupdriver=cgroupfs@.
+You can work around this issue by disabling the Docker daemon's systemd integration. This makes it more difficult to manage Docker services with systemd, but Crunch does not require that functionality, and it will be able to use Slurm's cgroups as container parents. To do this, configure the Docker daemon on all compute nodes to run with the option @--exec-opt native.cgroupdriver=cgroupfs@.
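+
+One way to do this (assuming your Docker daemon reads @/etc/docker/daemon.json@, the default on most distributions) is:
+
+<notextile>
+<pre><code>{
+    "exec-opts": ["native.cgroupdriver=cgroupfs"]
+}
+</code></pre>
+</notextile>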
{% include 'notebox_end' %}
{% endcomment %}
{% include 'notebox_begin_warning' %}
-crunch-dispatch-slurm is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you are installing a cloud cluster.
+@crunch-dispatch-slurm@ is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you use LSF or if you are installing a cloud cluster.
{% include 'notebox_end' %}
h2. Test compute node setup
h2. Test the dispatcher
+Make sure all of your compute nodes are set up with "Docker":../crunch2/install-compute-node-docker.html or "Singularity":../crunch2/install-compute-node-singularity.html.
+
On the dispatch node, start monitoring the crunch-dispatch-slurm logs:
<notextile>
---
layout: default
navsection: installguide
-title: Set up a Slurm compute node
+title: Set up a compute node with Docker
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
{% endcomment %}
{% include 'notebox_begin_warning' %}
-crunch-dispatch-slurm is only relevant for on premises clusters that will spool jobs to Slurm. Skip this section if you are installing a cloud cluster.
+This page describes the requirements for a compute node in a Slurm or LSF cluster that will run containers dispatched by @crunch-dispatch-slurm@ or @arvados-dispatch-lsf@. If you are installing a cloud cluster, refer to "Build a cloud compute node image":{{ site.baseurl }}/install/crunch2-cloud/install-compute-node.html.
+{% include 'notebox_end' %}
+
+{% include 'notebox_begin_warning' %}
+These instructions apply when @Containers.RuntimeEngine@ is set to @docker@. Refer to "Set up a compute node with Singularity":install-compute-node-singularity.html when running @singularity@.
{% include 'notebox_end' %}
# "Introduction":#introduction
# "Set up Docker":#docker
# "Update fuse.conf":#fuse
# "Update docker-cleaner.json":#docker-cleaner
-# "Configure Linux cgroups accounting":#cgroups
-# "Install Docker":#install_docker
-# "Configure the Docker daemon":#configure_docker_daemon
# "Install'python-arvados-fuse and crunch-run and arvados-docker-cleaner":#install-packages
h2(#introduction). Introduction
-This page describes how to configure a compute node so that it can be used to run containers dispatched by Arvados, with Slurm on a static cluster. These steps must be performed on every compute node.
+This page describes how to configure a compute node so that it can be used to run containers dispatched by Arvados on a static cluster. These steps must be performed on every compute node.
h2(#docker). Set up Docker
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Set up a compute node with Singularity
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+{% include 'notebox_begin_warning' %}
+This page describes the requirements for a compute node in a Slurm or LSF cluster that will run containers dispatched by @crunch-dispatch-slurm@ or @arvados-dispatch-lsf@. If you are installing a cloud cluster, refer to "Build a cloud compute node image":{{ site.baseurl }}/install/crunch2-cloud/install-compute-node.html.
+{% include 'notebox_end' %}
+
+{% include 'notebox_begin_warning' %}
+These instructions apply when @Containers.RuntimeEngine@ is set to @singularity@. Refer to "Set up a compute node with Docker":install-compute-node-docker.html when running @docker@.
+{% include 'notebox_end' %}
+
+# "Introduction":#introduction
+# "Install python-arvados-fuse and crunch-run and squashfs-tools":#install-packages
+# "Set up Singularity":#singularity
+# "Singularity mksquashfs configuration":#singularity_mksquashfs_configuration
+
+h2(#introduction). Introduction
+
+Please refer to the "Singularity":{{site.baseurl}}/architecture/singularity.html documentation in the Architecture section.
+
+This page describes how to configure a compute node so that it can be used to run containers dispatched by Arvados on a static cluster. These steps must be performed on every compute node.
+
+{% assign arvados_component = 'python-arvados-fuse crunch-run squashfs-tools' %}
+
+{% include 'install_packages' %}
+
+h2(#singularity). Set up Singularity
+
+Follow the "Singularity installation instructions":https://sylabs.io/guides/3.7/user-guide/quick_start.html. Make sure @singularity@ and @mksquashfs@ are working:
+
+<notextile>
+<pre><code>$ <span class="userinput">singularity version</span>
+3.7.4
+$ <span class="userinput">mksquashfs -version</span>
+mksquashfs version 4.3-git (2014/06/09)
+[...]
+</code></pre>
+</notextile>
+
+Then update @Containers.RuntimeEngine@ in your cluster configuration:
+
+<notextile>
+<pre><code> # Container runtime: "docker" (default) or "singularity"
+ RuntimeEngine: singularity
+</code></pre>
+</notextile>
+
+Restart your dispatcher (@crunch-dispatch-slurm@ or @arvados-dispatch-lsf@) after updating your configuration file.
+
+{% include 'singularity_mksquashfs_configuration' %}
{% endcomment %}
{% include 'notebox_begin' %}
-This section is about installing an Arvados cluster. If you are just looking to install Arvados client tools and libraries, "go to the SDK section.":{{site.baseurl}}/sdk
+This section is about installing an Arvados cluster. If you are just looking to install Arvados client tools and libraries, "go to the SDK section.":{{site.baseurl}}/sdk/
{% include 'notebox_end' %}
Arvados components run on GNU/Linux systems and support AWS, GCP and Azure cloud platforms as well as on-premises installs. Arvados supports Debian and derivatives such as Ubuntu, as well as Red Hat and derivatives such as CentOS. "Arvados is Free Software":{{site.baseurl}}/user/copying/copying.html and self-installed deployments are not limited in any way. Commercial support and development are also available from "Curii Corporation.":mailto:info@curii.com
Edit the variables in the <i>local.params</i> file. Pay attention to the <b>*_INT_IP, *_TOKEN</b> and <b>*KEY</b> variables. These variables are used in a search and replace across the <i>pillars/*</i> files, substituting any matching __VARIABLE__.
-The <i>multi_host</i> include LetsEncrypt salt code to automatically request and install the certificates for the public-facing hosts (API/controller, Workbench, Keepproxy/Keepweb) using AWS' Route53. If you will provide custom certificates, please set the variable <i>USE_LETSENCRYPT=no</i>.
+The <i>multi_host</i> example includes Let's Encrypt salt code to automatically request and install the certificates for the public-facing hosts (API/controller, Workbench, Keepproxy/Keepweb) using AWS Route 53.
+
+{% include 'install_custom_certificates' %}
h3(#further_customization). Further customization of the installation (modifying the salt pillars and states)
Edit the variables in the <i>local.params</i> file. Pay attention to the <b>*_PORT, *_TOKEN</b> and <b>*KEY</b> variables.
+The <i>single_host</i> examples use self-signed SSL certificates, which are deployed using the same mechanism used to deploy custom certificates.
+
+{% include 'install_custom_certificates' %}
+
+If you want to use valid certificates provided by Let's Encrypt, please set the variable <i>USE_LETSENCRYPT=yes</i> and make sure that all the FQDNs that you will use for the public-facing applications (API/controller, Workbench, Keepproxy/Keepweb) are reachable.
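+
+For example, in <i>local.params</i>:
+
+<notextile>
+<pre><code>USE_LETSENCRYPT=yes
+</code></pre>
+</notextile>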
+
h3(#single_host_multiple_hostnames). Single host / multiple hostnames (Alternative configuration)
<notextile>
<pre><code>cp local.params.example.single_host_multiple_hostnames local.params
+++ /dev/null
----
-layout: default
-navsection: installguide
-title: Singularity container runtime
-...
-{% comment %}
-Copyright (C) The Arvados Authors. All rights reserved.
-
-SPDX-License-Identifier: CC-BY-SA-3.0
-{% endcomment %}
-
-h2(#overview). Overview
-
-Arvados can be configured to use "Singularity":https://sylabs.io/singularity/ instead of Docker to execute containers on cloud nodes or a SLURM/LSF cluster. Singularity may be preferable due to its simpler installation and lack of long-running daemon process and special system users/groups.
-
-*Current limitations*:
-* Even when using the singularity runtime, users' container images are expected to be saved in Docker format using @arv keep docker@. Arvados converts the Docker image to Singularity format (@.sif@) at runtime as needed. Specifying a @.sif@ file as an image when submitting a container request is not yet supported.
-* Singularity does not limit the amount of memory available in a container. Each container will have access to all memory on the host where it runs, unless memory use is restricted by SLURM/LSF.
-* Programs running in containers may behave differently due to differences between Singularity and Docker.
-** The root (image) filesystem is read-only in a Singularity container. Programs that attempt to write outside a designated output or temporary directory are likely to fail.
-** The Docker ENTRYPOINT instruction is ignored.
-* Arvados is tested with Singularity version 3.7.4. Other versions may not work.
-
-*Notes*:
-
-* Docker images are converted on the fly by @mksquashfs@, which can consume a considerable amount of RAM. The RAM usage of mksquashfs can be restricted in @/etc/singularity/singularity.conf@ with a line like @mksquashfs mem = 512M@. The amount of memory made available for mksquashfs should be configured lower than the smallest amount of memory requested by a container on the cluster to avoid the conversion being killed for using too much memory.
-
-h2(#configuration). Configuration
-
-To use singularity, first make sure "Singularity is installed":https://sylabs.io/guides/3.7/user-guide/quick_start.html on your cloud worker image or SLURM/LSF compute nodes as applicable. Note @squashfs-tools@ is required.
-
-<notextile>
-<pre><code>$ <span class="userinput">singularity version</span>
-3.7.4
-$ <span class="userinput">mksquashfs -version</span>
-mksquashfs version 4.3-git (2014/06/09)
-[...]
-</code></pre>
-</notextile>
-
-Then update @Containers.RuntimeEngine@ in your cluster configuration:
-
-<notextile>
-<pre><code> # Container runtime: "docker" (default) or "singularity"
- RuntimeEngine: singularity
-</code></pre>
-</notextile>
-
-Restart your dispatcher (@crunch-dispatch-slurm@, @arvados-dispatch-cloud@, or @arvados-dispatch-lsf@) after updating your configuration file.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-This section documents language bindings for the "Arvados API":{{site.baseurl}}/api and Keep that are available for various programming languages. Not all features are available in every SDK. The most complete SDK is the Python SDK. Note that this section only gives a high level overview of each SDK. Consult the "Arvados API":{{site.baseurl}}/api section for detailed documentation about Arvados API calls available on each resource.
+This section documents language bindings for the "Arvados API":{{site.baseurl}}/api/index.html and Keep that are available for various programming languages. Not all features are available in every SDK. The most complete SDK is the Python SDK. Note that this section only gives a high level overview of each SDK. Consult the "Arvados API":{{site.baseurl}}/api/index.html section for detailed documentation about Arvados API calls available on each resource.
* "Python SDK":{{site.baseurl}}/sdk/python/sdk-python.html (also includes essential command line tools such as "arv-put" and "arv-get")
* "Command line SDK":{{site.baseurl}}/sdk/cli/install.html ("arv")
This page describes how to set up the runtime environment (e.g., the programs, libraries, and other dependencies needed to run a job) in which a workflow step will run, using "Docker":https://www.docker.com/ or "Singularity":https://sylabs.io/singularity/. Docker and Singularity are tools for building and running containers that isolate applications from other applications running on the same node. For detailed information, see the "Docker User Guide":https://docs.docker.com/userguide/ and the "Introduction to Singularity":https://sylabs.io/guides/3.5/user-guide/introduction.html.
-Note that Arvados always works with Docker images, even when it is configured to use Singularity to run containers. There are some differences between the two runtimes that can affect your containers. See the "Singularity container runtime":{{site.baseurl}}/install/singularity.html page for details.
+Note that Arvados always works with Docker images, even when it is configured to use Singularity to run containers. There are some differences between the two runtimes that can affect your containers. See the "Singularity architecture":{{site.baseurl}}/architecture/singularity.html page for details.
This page describes:
RailsAPI:
InternalURLs: {SAMPLE: {}}
- ExternalURL: "-"
+ ExternalURL: ""
Controller:
InternalURLs: {SAMPLE: {}}
ExternalURL: ""
ExternalURL: ""
Keepbalance:
InternalURLs: {SAMPLE: {}}
- ExternalURL: "-"
+ ExternalURL: ""
GitHTTP:
InternalURLs: {SAMPLE: {}}
ExternalURL: ""
ExternalURL: ""
DispatchCloud:
InternalURLs: {SAMPLE: {}}
- ExternalURL: "-"
+ ExternalURL: ""
DispatchLSF:
InternalURLs: {SAMPLE: {}}
- ExternalURL: "-"
+ ExternalURL: ""
Keepproxy:
InternalURLs: {SAMPLE: {}}
ExternalURL: ""
# the old URL (with trailing slash omitted) to preserve
# rendezvous ordering.
Rendezvous: ""
- ExternalURL: "-"
+ ExternalURL: ""
Composer:
InternalURLs: {SAMPLE: {}}
ExternalURL: ""
ExternalURL: ""
Health:
InternalURLs: {SAMPLE: {}}
- ExternalURL: "-"
+ ExternalURL: ""
PostgreSQL:
# max concurrent connections per arvados server daemon
AssignNodeHostname: "compute%<slot_number>d"
LSF:
- # Additional arguments to bsub when submitting Arvados
- # containers as LSF jobs.
+ # Arguments to bsub when submitting Arvados containers as LSF jobs.
+ #
+ # Template variables starting with % will be substituted as follows:
+ #
+ # %U uuid
+ # %C number of VCPUs
+ # %M memory in MB
+ # %T tmp in MB
+ #
+ # Use %% to express a literal %. The %%J in the default will be changed
+ # to %J, which is interpreted by bsub itself.
#
# Note that the default arguments cause LSF to write two files
# in /tmp on the compute node each time an Arvados container
# runs. Ensure you have something in place to delete old files
- # from /tmp, or adjust these arguments accordingly.
- BsubArgumentsList: ["-o", "/tmp/crunch-run.%J.out", "-e", "/tmp/crunch-run.%J.err"]
+ # from /tmp, or adjust the "-o" and "-e" arguments accordingly.
+ BsubArgumentsList: ["-o", "/tmp/crunch-run.%%J.out", "-e", "/tmp/crunch-run.%%J.err", "-J", "%U", "-n", "%C", "-D", "%MMB", "-R", "rusage[mem=%MMB:tmp=%TMB] span[hosts=1]"]
# Use sudo to switch to this user account when submitting LSF
# jobs.
RailsAPI:
InternalURLs: {SAMPLE: {}}
- ExternalURL: "-"
+ ExternalURL: ""
Controller:
InternalURLs: {SAMPLE: {}}
ExternalURL: ""
ExternalURL: ""
Keepbalance:
InternalURLs: {SAMPLE: {}}
- ExternalURL: "-"
+ ExternalURL: ""
GitHTTP:
InternalURLs: {SAMPLE: {}}
ExternalURL: ""
ExternalURL: ""
DispatchCloud:
InternalURLs: {SAMPLE: {}}
- ExternalURL: "-"
+ ExternalURL: ""
DispatchLSF:
InternalURLs: {SAMPLE: {}}
- ExternalURL: "-"
+ ExternalURL: ""
Keepproxy:
InternalURLs: {SAMPLE: {}}
ExternalURL: ""
# the old URL (with trailing slash omitted) to preserve
# rendezvous ordering.
Rendezvous: ""
- ExternalURL: "-"
+ ExternalURL: ""
Composer:
InternalURLs: {SAMPLE: {}}
ExternalURL: ""
ExternalURL: ""
Health:
InternalURLs: {SAMPLE: {}}
- ExternalURL: "-"
+ ExternalURL: ""
PostgreSQL:
# max concurrent connections per arvados server daemon
AssignNodeHostname: "compute%<slot_number>d"
LSF:
- # Additional arguments to bsub when submitting Arvados
- # containers as LSF jobs.
+ # Arguments to bsub when submitting Arvados containers as LSF jobs.
+ #
+ # Template variables starting with % will be substituted as follows:
+ #
+ # %U uuid
+ # %C number of VCPUs
+ # %M memory in MB
+ # %T tmp in MB
+ #
+ # Use %% to express a literal %. The %%J in the default will be changed
+ # to %J, which is interpreted by bsub itself.
#
# Note that the default arguments cause LSF to write two files
# in /tmp on the compute node each time an Arvados container
# runs. Ensure you have something in place to delete old files
- # from /tmp, or adjust these arguments accordingly.
- BsubArgumentsList: ["-o", "/tmp/crunch-run.%J.out", "-e", "/tmp/crunch-run.%J.err"]
+ # from /tmp, or adjust the "-o" and "-e" arguments accordingly.
+ BsubArgumentsList: ["-o", "/tmp/crunch-run.%%J.out", "-e", "/tmp/crunch-run.%%J.err", "-J", "%U", "-n", "%C", "-D", "%MMB", "-R", "rusage[mem=%MMB:tmp=%TMB] span[hosts=1]"]
# Use sudo to switch to this user account when submitting LSF
# jobs.
arvMountCmd := []string{
"arv-mount",
"--foreground",
- "--allow-other",
"--read-write",
"--storage-classes", strings.Join(runner.Container.OutputStorageClasses, ","),
fmt.Sprintf("--crunchstat-interval=%v", runner.statInterval.Seconds())}
+ if runner.executor.Runtime() == "docker" {
+ arvMountCmd = append(arvMountCmd, "--allow-other")
+ }
+
if runner.Container.RuntimeConstraints.KeepCacheRAM > 0 {
arvMountCmd = append(arvMountCmd, "--file-cache", fmt.Sprintf("%d", runner.Container.RuntimeConstraints.KeepCacheRAM))
}
cr.statInterval = 5 * time.Second
bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground", "--allow-other",
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
"--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
"--mount-by-pdh", "by_id", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
c.Check(bindmounts, DeepEquals, map[string]bindmount{"/tmp": {realTemp + "/tmp2", false}})
bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground", "--allow-other",
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
"--read-write", "--storage-classes", "foo,bar", "--crunchstat-interval=5",
"--mount-by-pdh", "by_id", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
c.Check(bindmounts, DeepEquals, map[string]bindmount{"/out": {realTemp + "/tmp2", false}, "/tmp": {realTemp + "/tmp3", false}})
bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground", "--allow-other",
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
"--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
"--mount-by-pdh", "by_id", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
c.Check(bindmounts, DeepEquals, map[string]bindmount{"/tmp": {realTemp + "/tmp2", false}, "/etc/arvados/ca-certificates.crt": {stubCertPath, true}})
bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground", "--allow-other",
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
"--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
"--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
c.Check(bindmounts, DeepEquals, map[string]bindmount{"/keeptmp": {realTemp + "/keep1/tmp0", false}})
bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground", "--allow-other",
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
"--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
"--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
c.Check(bindmounts, DeepEquals, map[string]bindmount{
bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground", "--allow-other",
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
"--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
"--file-cache", "512", "--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
c.Check(bindmounts, DeepEquals, map[string]bindmount{
bindmounts, err := cr.SetupMounts()
c.Check(err, IsNil)
- c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground", "--allow-other",
+ c.Check(am.Cmd, DeepEquals, []string{"arv-mount", "--foreground",
"--read-write", "--storage-classes", "default", "--crunchstat-interval=5",
"--file-cache", "512", "--mount-tmp", "tmp0", "--mount-by-pdh", "by_id", "--mount-by-id", "by_uuid", realTemp + "/keep1"})
c.Check(bindmounts, DeepEquals, map[string]bindmount{
func (disp *dispatcher) bsubArgs(container arvados.Container) ([]string, error) {
args := []string{"bsub"}
- args = append(args, disp.Cluster.Containers.LSF.BsubArgumentsList...)
- args = append(args, "-J", container.UUID)
- args = append(args, disp.bsubConstraintArgs(container)...)
- if u := disp.Cluster.Containers.LSF.BsubSudoUser; u != "" {
- args = append([]string{"sudo", "-E", "-u", u}, args...)
- }
- return args, nil
-}
-func (disp *dispatcher) bsubConstraintArgs(container arvados.Container) []string {
- // TODO: propagate container.SchedulingParameters.Partitions
tmp := int64(math.Ceil(float64(dispatchcloud.EstimateScratchSpace(&container)) / 1048576))
vcpus := container.RuntimeConstraints.VCPUs
mem := int64(math.Ceil(float64(container.RuntimeConstraints.RAM+
container.RuntimeConstraints.KeepCacheRAM+
int64(disp.Cluster.Containers.ReserveExtraRAM)) / 1048576))
- return []string{
- "-n", fmt.Sprintf("%d", vcpus),
- "-D", fmt.Sprintf("%dMB", mem), // ulimit -d (note this doesn't limit the total container memory usage)
- "-R", fmt.Sprintf("rusage[mem=%dMB:tmp=%dMB] span[hosts=1]", mem, tmp),
+
+ repl := map[string]string{
+ "%%": "%",
+ "%C": fmt.Sprintf("%d", vcpus),
+ "%M": fmt.Sprintf("%d", mem),
+ "%T": fmt.Sprintf("%d", tmp),
+ "%U": container.UUID,
}
+
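+	// Replacement scans left to right, so "%%" is consumed first:
+	// "%%J" in the default arguments becomes the literal "%J" rather
+	// than being rejected as an unknown %J parameter.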
+ re := regexp.MustCompile(`%.`)
+ var substitutionErrors string
+ for _, a := range disp.Cluster.Containers.LSF.BsubArgumentsList {
+ args = append(args, re.ReplaceAllStringFunc(a, func(s string) string {
+ subst := repl[s]
+ if len(subst) == 0 {
+ substitutionErrors += fmt.Sprintf("Unknown substitution parameter %s in BsubArgumentsList, ", s)
+ }
+ return subst
+ }))
+ }
+ if len(substitutionErrors) != 0 {
+ return nil, fmt.Errorf("%s", substitutionErrors[:len(substitutionErrors)-2])
+ }
+
+ if u := disp.Cluster.Containers.LSF.BsubSudoUser; u != "" {
+ args = append([]string{"sudo", "-E", "-u", u}, args...)
+ }
+ return args, nil
}
// Check the next bjobs report, and invoke TrackContainer for all the
switch prog {
case "bsub":
defaultArgs := s.disp.Cluster.Containers.LSF.BsubArgumentsList
- c.Assert(len(args) > len(defaultArgs), check.Equals, true)
- c.Check(args[:len(defaultArgs)], check.DeepEquals, defaultArgs)
- args = args[len(defaultArgs):]
-
- c.Check(args[0], check.Equals, "-J")
+ c.Assert(len(args), check.Equals, len(defaultArgs))
+ // %%J must have been rewritten to %J
+ c.Check(args[1], check.Equals, "/tmp/crunch-run.%J.out")
+ args = args[4:]
switch args[1] {
case arvadostest.LockedContainerUUID:
c.Check(args, check.DeepEquals, []string{
loadingContext = self.loadingContext.copy()
loadingContext.do_validate = False
- loadingContext.do_update = False
if submitting:
+ loadingContext.do_update = False
# Document may have been auto-updated. Reload the original
# document with updating disabled because we want to
# submit the document with its original CWL version, not
# file to determine what version of cwltool and schema-salad to
# build.
install_requires=[
- 'cwltool==3.1.20210922203925',
- 'schema-salad==8.2.20210918131710',
+ 'cwltool==3.1.20211020155521',
+ 'schema-salad==8.2.20211020114435',
'arvados-python-client{}'.format(pysdk_dep),
'setuptools',
'ciso8601 >= 2.0.0',
# skip this token
next
end
- if (auth.user.uuid =~ /-tpzed-000000000000000/).nil?
+ if (auth.user.uuid =~ /-tpzed-000000000000000/).nil? and (auth.user.uuid =~ /-tpzed-anonymouspublic/).nil?
CurrentApiClientHelper.act_as_system_user do
auth.update_attributes!(expires_at: exp_date)
end
# skip this token
next
end
- if not auth.user.nil? and (auth.user.uuid =~ /-tpzed-000000000000000/).nil?
+ if not auth.user.nil? and (auth.user.uuid =~ /-tpzed-000000000000000/).nil? and (auth.user.uuid =~ /-tpzed-anonymouspublic/).nil?
user_ids.add(auth.user_id)
token_count += 1
end
api_client_auth = ApiClientAuthorization.where(attr).first
if !api_client_auth
+ # The anonymous user token should never expire but we are not allowed to
+ # set :expires_at to nil, so we set it to 1000 years in the future.
+ attr[:expires_at] = Time.now + 1000.years
api_client_auth = ApiClientAuthorization.create!(attr)
end
api_client_auth
options = {}
OptionParser.new do |parser|
parser.on('--exclusive', 'Manage SSH keys file exclusively.')
- parser.on('--rotate-tokens', 'Always create new user tokens. Usually needed with --token-lifetime.')
+ parser.on('--rotate-tokens', 'Force a rotation of all user tokens.')
parser.on('--skip-missing-users', "Don't try to create any local accounts.")
parser.on('--token-lifetime SECONDS', 'Create user tokens that expire after SECONDS.', Integer)
+ parser.on('--debug', 'Enable debug output')
end.parse!(into: options)
exclusive_banner = "#######################################################################################
keys = ''
begin
+  debug = options[:"debug"] ? true : false
arv = Arvados.new({ :suppress_ssl_warnings => false })
logincluster_arv = Arvados.new({ :api_host => (ENV['LOGINCLUSTER_ARVADOS_API_HOST'] || ENV['ARVADOS_API_HOST']),
:api_token => (ENV['LOGINCLUSTER_ARVADOS_API_TOKEN'] || ENV['ARVADOS_API_TOKEN']),
end
else
if pwnam[l[:username]].uid < uid_min
- STDERR.puts "Account #{l[:username]} uid #{pwnam[l[:username]].uid} < uid_min #{uid_min}. Skipping"
+ STDERR.puts "Account #{l[:username]} uid #{pwnam[l[:username]].uid} < uid_min #{uid_min}. Skipping" if debug
true
end
end
# Collect all keys
logins.each do |l|
+ STDERR.puts("Considering #{l[:username]} ...") if debug
keys[l[:username]] = Array.new() if not keys.has_key?(l[:username])
key = l[:public_key]
if !key.nil?
tokenfile = File.join(configarvados, "settings.conf")
begin
- if !File.exist?(tokenfile) || options[:"rotate-tokens"]
+ STDERR.puts "Processing #{tokenfile} ..." if debug
+ newToken = false
+      if File.exist?(tokenfile) && !options[:"rotate-tokens"]
+ # check if the token is still valid
+ myToken = ENV["ARVADOS_API_TOKEN"]
+ userEnv = IO::read(tokenfile)
+        if (m = /^ARVADOS_API_TOKEN=(.*?)\n/m.match(userEnv))
+ begin
+ tmp_arv = Arvados.new({ :api_host => (ENV['LOGINCLUSTER_ARVADOS_API_HOST'] || ENV['ARVADOS_API_HOST']),
+ :api_token => (m[1]),
+ :suppress_ssl_warnings => false })
+ tmp_arv.user.current
+ rescue Arvados::TransactionFailedError => e
+ if e.to_s =~ /401 Unauthorized/
+ STDERR.puts "Account #{l[:username]} token not valid, creating new token."
+ newToken = true
+ else
+ raise
+ end
+ end
+ end
+      else
+        STDERR.puts "Account #{l[:username]} token file not found or rotation requested, creating new token."
+ newToken = true
+ end
+ if newToken
aca_params = {owner_uuid: l[:user_uuid], api_client_id: 0}
if options[:"token-lifetime"] && options[:"token-lifetime"] > 0
aca_params.merge!(expires_at: (Time.now + options[:"token-lifetime"]))
"aws_source_ami": "ami-04d70e069399af2e9",
"build_environment": "aws",
"public_key_file": "",
+ "mksquashfs_mem": "",
"reposuffix": "",
"resolver": "",
"ssh_user": "admin",
"type": "shell",
"execute_command": "sudo -S env {{ .Vars }} /bin/bash '{{ .Path }}'",
"script": "scripts/base.sh",
- "environment_vars": ["RESOLVER={{user `resolver`}}","REPOSUFFIX={{user `reposuffix`}}"]
+ "environment_vars": ["RESOLVER={{user `resolver`}}","REPOSUFFIX={{user `reposuffix`}}","MKSQUASHFS_MEM={{user `mksquashfs_mem`}}"]
}]
}
"location": "centralus",
"project_id": "",
"public_key_file": "",
+ "mksquashfs_mem": "",
"reposuffix": "",
"resolver": "",
"resource_group": null,
"type": "shell",
"execute_command": "sudo -S env {{ .Vars }} /bin/bash '{{ .Path }}'",
"script": "scripts/base.sh",
- "environment_vars": ["RESOLVER={{user `resolver`}}","REPOSUFFIX={{user `reposuffix`}}"]
+ "environment_vars": ["RESOLVER={{user `resolver`}}","REPOSUFFIX={{user `reposuffix`}}","MKSQUASHFS_MEM={{user `mksquashfs_mem`}}"]
}]
}
Set this to "-dev" to track the unstable/dev Arvados repositories
--public-key-file (required)
Path to the public key file that a-d-c will use to log into the compute node
+ --mksquashfs-mem (default: 256M)
+ Only relevant when using Singularity. This is the amount of memory mksquashfs is allowed to use.
--debug
Output debug information (default: false)
SSH_USER=
AWS_DEFAULT_REGION=us-east-1
PUBLIC_KEY_FILE=
+MKSQUASHFS_MEM=256M
PARSEDOPTS=$(getopt --name "$0" --longoptions \
- help,json-file:,arvados-cluster-id:,aws-source-ami:,aws-profile:,aws-secrets-file:,aws-region:,aws-vpc-id:,aws-subnet-id:,gcp-project-id:,gcp-account-file:,gcp-zone:,azure-secrets-file:,azure-resource-group:,azure-location:,azure-sku:,azure-cloud-environment:,ssh_user:,resolver:,reposuffix:,public-key-file:,debug \
+ help,json-file:,arvados-cluster-id:,aws-source-ami:,aws-profile:,aws-secrets-file:,aws-region:,aws-vpc-id:,aws-subnet-id:,gcp-project-id:,gcp-account-file:,gcp-zone:,azure-secrets-file:,azure-resource-group:,azure-location:,azure-sku:,azure-cloud-environment:,ssh_user:,resolver:,reposuffix:,public-key-file:,mksquashfs-mem:,debug \
-- "" "$@")
if [ $? -ne 0 ]; then
exit 1
--public-key-file)
PUBLIC_KEY_FILE="$2"; shift
;;
+ --mksquashfs-mem)
+ MKSQUASHFS_MEM="$2"; shift
+ ;;
--debug)
# If you want to debug a build issue, add the -debug flag to the build
# command in question.
if [[ "$PUBLIC_KEY_FILE" != "" ]]; then
EXTRA2+=" -var public_key_file=$PUBLIC_KEY_FILE"
fi
+if [[ "$MKSQUASHFS_MEM" != "" ]]; then
+ EXTRA2+=" -var mksquashfs_mem=$MKSQUASHFS_MEM"
+fi
+
echo
packer version
make -C ./builddir install
ln -sf /var/lib/arvados/bin/* /usr/local/bin/
+# set `mksquashfs mem` in the singularity config file if it is configured
+if [ "$MKSQUASHFS_MEM" != "" ]; then
+ echo "mksquashfs mem = ${MKSQUASHFS_MEM}" >> /var/lib/arvados/etc/singularity/singularity.conf
+fi
+
# Print singularity version installed
singularity --version
cp -vr /vagrant/tests /home/vagrant/tests;
sed 's#cluster_fixme_or_this_wont_work#harpo#g;
s#domain_fixme_or_this_wont_work#local#g;
- s/#\ BRANCH=\"main\"/\ BRANCH=\"main\"/g;
- s#CONTROLLER_EXT_SSL_PORT=443#CONTROLLER_EXT_SSL_PORT=8443#g' \
+ s#CONTROLLER_EXT_SSL_PORT=443#CONTROLLER_EXT_SSL_PORT=8443#g;
+ s#RELEASE=\"production\"#RELEASE=\"development\"#g;
+ s/# VERSION=.*$/VERSION=\"latest\"/g;
+ s/#\ BRANCH=\"main\"/\ BRANCH=\"main\"/g' \
/vagrant/local.params.example.single_host_multiple_hostnames > /tmp/local.params.single_host_multiple_hostnames"
+
arv.vm.provision "shell",
path: "provision.sh",
args: [
# "--debug",
"--config /tmp/local.params.single_host_multiple_hostnames",
+ "--development",
"--test",
"--vagrant"
].join(" ")
### LETSENCRYPT
letsencrypt:
domainsets:
- __CLUSTER__.__DOMAIN__:
+ controller.__CLUSTER__.__DOMAIN__:
- __CLUSTER__.__DOMAIN__
-
-### NGINX
-nginx:
- ### SNIPPETS
- snippets:
- __CLUSTER__.__DOMAIN___letsencrypt_cert.conf:
- - ssl_certificate: /etc/letsencrypt/live/__CLUSTER__.__DOMAIN__/fullchain.pem
- - ssl_certificate_key: /etc/letsencrypt/live/__CLUSTER__.__DOMAIN__/privkey.pem
### LETSENCRYPT
letsencrypt:
domainsets:
- keep.__CLUSTER__.__DOMAIN__:
+ keepproxy.__CLUSTER__.__DOMAIN__:
- keep.__CLUSTER__.__DOMAIN__
-
-### NGINX
-nginx:
- ### SNIPPETS
- snippets:
- keep.__CLUSTER__.__DOMAIN___letsencrypt_cert.conf:
- - ssl_certificate: /etc/letsencrypt/live/keep.__CLUSTER__.__DOMAIN__/fullchain.pem
- - ssl_certificate_key: /etc/letsencrypt/live/keep.__CLUSTER__.__DOMAIN__/privkey.pem
collections.__CLUSTER__.__DOMAIN__:
- collections.__CLUSTER__.__DOMAIN__
- '*.collections.__CLUSTER__.__DOMAIN__'
-
-### NGINX
-nginx:
- ### SNIPPETS
- snippets:
- download.__CLUSTER__.__DOMAIN___letsencrypt_cert.conf:
- - ssl_certificate: /etc/letsencrypt/live/download.__CLUSTER__.__DOMAIN__/fullchain.pem
- - ssl_certificate_key: /etc/letsencrypt/live/download.__CLUSTER__.__DOMAIN__/privkey.pem
- collections.__CLUSTER__.__DOMAIN___letsencrypt_cert.conf:
- - ssl_certificate: /etc/letsencrypt/live/collections.__CLUSTER__.__DOMAIN__/fullchain.pem
- - ssl_certificate_key: /etc/letsencrypt/live/collections.__CLUSTER__.__DOMAIN__/privkey.pem
domainsets:
webshell.__CLUSTER__.__DOMAIN__:
- webshell.__CLUSTER__.__DOMAIN__
-
-### NGINX
-nginx:
- ### SNIPPETS
- snippets:
- webshell.__CLUSTER__.__DOMAIN___letsencrypt_cert.conf:
- - ssl_certificate: /etc/letsencrypt/live/webshell.__CLUSTER__.__DOMAIN__/fullchain.pem
- - ssl_certificate_key: /etc/letsencrypt/live/webshell.__CLUSTER__.__DOMAIN__/privkey.pem
### LETSENCRYPT
letsencrypt:
domainsets:
- ws.__CLUSTER__.__DOMAIN__:
+ websocket.__CLUSTER__.__DOMAIN__:
- ws.__CLUSTER__.__DOMAIN__
-
-### NGINX
-nginx:
- ### SNIPPETS
- snippets:
- ws.__CLUSTER__.__DOMAIN___letsencrypt_cert.conf:
- - ssl_certificate: /etc/letsencrypt/live/ws.__CLUSTER__.__DOMAIN__/fullchain.pem
- - ssl_certificate_key: /etc/letsencrypt/live/ws.__CLUSTER__.__DOMAIN__/privkey.pem
domainsets:
workbench2.__CLUSTER__.__DOMAIN__:
- workbench2.__CLUSTER__.__DOMAIN__
-
-### NGINX
-nginx:
- ### SNIPPETS
- snippets:
- workbench2.__CLUSTER__.__DOMAIN___letsencrypt_cert.conf:
- - ssl_certificate: /etc/letsencrypt/live/workbench2.__CLUSTER__.__DOMAIN__/fullchain.pem
- - ssl_certificate_key: /etc/letsencrypt/live/workbench2.__CLUSTER__.__DOMAIN__/privkey.pem
domainsets:
workbench.__CLUSTER__.__DOMAIN__:
- workbench.__CLUSTER__.__DOMAIN__
-
-### NGINX
-nginx:
- ### SNIPPETS
- snippets:
- workbench.__CLUSTER__.__DOMAIN___letsencrypt_cert.conf:
- - ssl_certificate: /etc/letsencrypt/live/workbench.__CLUSTER__.__DOMAIN__/fullchain.pem
- - ssl_certificate_key: /etc/letsencrypt/live/workbench.__CLUSTER__.__DOMAIN__/privkey.pem
### SITES
servers:
managed:
- arvados_api:
+ arvados_api.conf:
enabled: true
overwrite: true
config:
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ servers:
+ managed:
+ ### DEFAULT
+ arvados_collections_default.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: '~^(.*\.)?collections\.__CLUSTER__\.__DOMAIN__'
+ - listen:
+ - 80
+ - location /:
+ - return: '301 https://$host$request_uri'
+
+ ### COLLECTIONS
+ arvados_collections_ssl.conf:
+ enabled: true
+ overwrite: true
+ requires:
+ __CERT_REQUIRES__
+ config:
+ - server:
+ - server_name: '~^(.*\.)?collections\.__CLUSTER__\.__DOMAIN__'
+ - listen:
+ - __KEEPWEB_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://collections_downloads_upstream'
+ - proxy_read_timeout: 90
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_buffering: 'off'
+ - client_max_body_size: 0
+ - proxy_http_version: '1.1'
+ - proxy_request_buffering: 'off'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
+ - access_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.error.log
servers:
managed:
### DEFAULT
- arvados_controller_default:
+ arvados_controller_default.conf:
enabled: true
overwrite: true
config:
- server_name: __CLUSTER__.__DOMAIN__
- listen:
- 80 default
+ - location /.well-known:
+ - root: /var/www
- location /:
- return: '301 https://$host$request_uri'
- arvados_controller_ssl:
+ arvados_controller_ssl.conf:
enabled: true
overwrite: true
requires:
- cmd: create-initial-cert-__CLUSTER__.__DOMAIN__-__CLUSTER__.__DOMAIN__
+ __CERT_REQUIRES__
config:
- server:
- server_name: __CLUSTER__.__DOMAIN__
- proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
- proxy_set_header: 'X-External-Client $external_client'
- include: snippets/ssl_hardening_default.conf
- - include: snippets/__CLUSTER__.__DOMAIN___letsencrypt_cert[.]conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/controller.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/controller.__CLUSTER__.__DOMAIN__.error.log
- client_max_body_size: 128m
--- /dev/null
+---
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+### NGINX
+nginx:
+ servers:
+ managed:
+ ### DEFAULT
+ arvados_download_default.conf:
+ enabled: true
+ overwrite: true
+ config:
+ - server:
+ - server_name: download.__CLUSTER__.__DOMAIN__
+ - listen:
+ - 80
+ - location /:
+ - return: '301 https://$host$request_uri'
+
+ ### DOWNLOAD
+ arvados_download_ssl.conf:
+ enabled: true
+ overwrite: true
+ requires:
+ __CERT_REQUIRES__
+ config:
+ - server:
+ - server_name: download.__CLUSTER__.__DOMAIN__
+ - listen:
+ - __KEEPWEB_EXT_SSL_PORT__ http2 ssl
+ - index: index.html index.htm
+ - location /:
+ - proxy_pass: 'http://collections_downloads_upstream'
+ - proxy_read_timeout: 90
+ - proxy_connect_timeout: 90
+ - proxy_redirect: 'off'
+ - proxy_set_header: X-Forwarded-Proto https
+ - proxy_set_header: 'Host $http_host'
+ - proxy_set_header: 'X-Real-IP $remote_addr'
+ - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
+ - proxy_buffering: 'off'
+ - client_max_body_size: 0
+ - proxy_http_version: '1.1'
+ - proxy_request_buffering: 'off'
+ - include: snippets/ssl_hardening_default.conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
+ - access_log: /var/log/nginx/download.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/download.__CLUSTER__.__DOMAIN__.error.log
servers:
managed:
### DEFAULT
- arvados_keepproxy_default:
+ arvados_keepproxy_default.conf:
enabled: true
overwrite: true
config:
- location /:
- return: '301 https://$host$request_uri'
- arvados_keepproxy_ssl:
+ arvados_keepproxy_ssl.conf:
enabled: true
overwrite: true
requires:
- cmd: create-initial-cert-keep.__CLUSTER__.__DOMAIN__-keep.__CLUSTER__.__DOMAIN__
+ __CERT_REQUIRES__
config:
- server:
- server_name: keep.__CLUSTER__.__DOMAIN__
- listen:
- - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
+ - __KEEP_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
- location /:
- proxy_pass: 'http://keepproxy_upstream'
- proxy_http_version: '1.1'
- proxy_request_buffering: 'off'
- include: snippets/ssl_hardening_default.conf
- - include: snippets/keep.__CLUSTER__.__DOMAIN___letsencrypt_cert[.]conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.error.log
#
# SPDX-License-Identifier: AGPL-3.0
+# The keepweb upstream is shared by the download and collections vhosts
### NGINX
nginx:
### SERVER
http:
upstream collections_downloads_upstream:
- server: 'localhost:9002 fail_timeout=10s'
-
- servers:
- managed:
- ### DEFAULT
- arvados_collections_download_default:
- enabled: true
- overwrite: true
- config:
- - server:
- - server_name: '~^((.*\.)?collections|download)\.__CLUSTER__\.__DOMAIN__'
- - listen:
- - 80
- - location /:
- - return: '301 https://$host$request_uri'
-
- ### COLLECTIONS
- arvados_collections_ssl:
- enabled: true
- overwrite: true
- requires:
- cmd: 'create-initial-cert-collections.__CLUSTER__.__DOMAIN__-collections.__CLUSTER__.__DOMAIN__+*.__CLUSTER__.__DOMAIN__'
- config:
- - server:
- - server_name: '*.collections.__CLUSTER__.__DOMAIN__'
- - listen:
- - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
- - index: index.html index.htm
- - location /:
- - proxy_pass: 'http://collections_downloads_upstream'
- - proxy_read_timeout: 90
- - proxy_connect_timeout: 90
- - proxy_redirect: 'off'
- - proxy_set_header: X-Forwarded-Proto https
- - proxy_set_header: 'Host $http_host'
- - proxy_set_header: 'X-Real-IP $remote_addr'
- - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
- - proxy_buffering: 'off'
- - client_max_body_size: 0
- - proxy_http_version: '1.1'
- - proxy_request_buffering: 'off'
- - include: snippets/ssl_hardening_default.conf
- - include: snippets/collections.__CLUSTER__.__DOMAIN___letsencrypt_cert[.]conf
- - access_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.error.log
-
- ### DOWNLOAD
- arvados_download_ssl:
- enabled: true
- overwrite: true
- requires:
- cmd: create-initial-cert-download.__CLUSTER__.__DOMAIN__-download.__CLUSTER__.__DOMAIN__
- config:
- - server:
- - server_name: download.__CLUSTER__.__DOMAIN__
- - listen:
- - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
- - index: index.html index.htm
- - location /:
- - proxy_pass: 'http://collections_downloads_upstream'
- - proxy_read_timeout: 90
- - proxy_connect_timeout: 90
- - proxy_redirect: 'off'
- - proxy_set_header: X-Forwarded-Proto https
- - proxy_set_header: 'Host $http_host'
- - proxy_set_header: 'X-Real-IP $remote_addr'
- - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
- - proxy_buffering: 'off'
- - client_max_body_size: 0
- - proxy_http_version: '1.1'
- - proxy_request_buffering: 'off'
- - include: snippets/ssl_hardening_default.conf
- - include: snippets/download.__CLUSTER__.__DOMAIN___letsencrypt_cert[.]conf
- - access_log: /var/log/nginx/download.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/download.__CLUSTER__.__DOMAIN__.error.log
### SITES
servers:
managed:
- arvados_webshell_default:
+ arvados_webshell_default.conf:
enabled: true
overwrite: true
config:
- location /:
- return: '301 https://$host$request_uri'
- arvados_webshell_ssl:
+ arvados_webshell_ssl.conf:
enabled: true
overwrite: true
requires:
- cmd: create-initial-cert-webshell.__CLUSTER__.__DOMAIN__-webshell.__CLUSTER__.__DOMAIN__
+ __CERT_REQUIRES__
config:
- server:
- server_name: webshell.__CLUSTER__.__DOMAIN__
- listen:
- - __CONTROLLER_EXT_SSL_PORT__ http2 ssl
+ - __WEBSHELL_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
- location /shell.__CLUSTER__.__DOMAIN__:
- proxy_pass: 'http://webshell_upstream'
- add_header: "'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'"
- include: snippets/ssl_hardening_default.conf
- - include: snippets/webshell.__CLUSTER__.__DOMAIN___letsencrypt_cert[.]conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.error.log
servers:
managed:
### DEFAULT
- arvados_websocket_default:
+ arvados_websocket_default.conf:
enabled: true
overwrite: true
config:
- location /:
- return: '301 https://$host$request_uri'
- arvados_websocket_ssl:
+ arvados_websocket_ssl.conf:
enabled: true
overwrite: true
requires:
- cmd: create-initial-cert-ws.__CLUSTER__.__DOMAIN__-ws.__CLUSTER__.__DOMAIN__
+ __CERT_REQUIRES__
config:
- server:
- server_name: ws.__CLUSTER__.__DOMAIN__
- proxy_http_version: '1.1'
- proxy_request_buffering: 'off'
- include: snippets/ssl_hardening_default.conf
- - include: snippets/ws.__CLUSTER__.__DOMAIN___letsencrypt_cert[.]conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.error.log
servers:
managed:
### DEFAULT
- arvados_workbench2_default:
+ arvados_workbench2_default.conf:
enabled: true
overwrite: true
config:
- location /:
- return: '301 https://$host$request_uri'
- arvados_workbench2_ssl:
+ arvados_workbench2_ssl.conf:
enabled: true
overwrite: true
requires:
- cmd: create-initial-cert-workbench2.__CLUSTER__.__DOMAIN__-workbench2.__CLUSTER__.__DOMAIN__
+ __CERT_REQUIRES__
config:
- server:
- server_name: workbench2.__CLUSTER__.__DOMAIN__
- location /config.json:
- return: {{ "200 '" ~ '{"API_HOST":"__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__"}' ~ "'" }}
- include: snippets/ssl_hardening_default.conf
- - include: snippets/workbench2.__CLUSTER__.__DOMAIN___letsencrypt_cert[.]conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.error.log
servers:
managed:
### DEFAULT
- arvados_workbench_default:
+ arvados_workbench_default.conf:
enabled: true
overwrite: true
config:
- location /:
- return: '301 https://$host$request_uri'
- arvados_workbench_ssl:
+ arvados_workbench_ssl.conf:
enabled: true
overwrite: true
requires:
- cmd: create-initial-cert-workbench.__CLUSTER__.__DOMAIN__-workbench.__CLUSTER__.__DOMAIN__
+ __CERT_REQUIRES__
config:
- server:
- server_name: workbench.__CLUSTER__.__DOMAIN__
- proxy_set_header: 'X-Real-IP $remote_addr'
- proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
- include: snippets/ssl_hardening_default.conf
- - include: snippets/workbench.__CLUSTER__.__DOMAIN___letsencrypt_cert[.]conf
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
- access_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.error.log
tls:
# certificate: ''
# key: ''
- # required to test with arvados-snakeoil certs
- insecure: true
+  # When using arvados-snakeoil certs, set insecure: true
+ insecure: false
resources:
virtual_machines:
enabled: true
overwrite: true
requires:
- file: nginx_snippet_arvados-snakeoil.conf
+ __CERT_REQUIRES__
config:
- server:
- server_name: __CLUSTER__.__DOMAIN__
- proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
- proxy_set_header: 'X-External-Client $external_client'
- include: snippets/ssl_hardening_default.conf
- - include: snippets/arvados-snakeoil.conf
- - access_log: /var/log/nginx/__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/__CLUSTER__.__DOMAIN__.error.log
+ - ssl_certificate: __CERT_PEM__
+ - ssl_certificate_key: __CERT_KEY__
+ - access_log: /var/log/nginx/controller.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/controller.__CLUSTER__.__DOMAIN__.error.log
- client_max_body_size: 128m
enabled: true
overwrite: true
requires:
- file: nginx_snippet_arvados-snakeoil.conf
+ file: extra_custom_certs_file_copy_arvados-keepproxy.pem
config:
- server:
- server_name: keep.__CLUSTER__.__DOMAIN__
- proxy_http_version: '1.1'
- proxy_request_buffering: 'off'
- include: snippets/ssl_hardening_default.conf
- - include: snippets/arvados-snakeoil.conf
+ - ssl_certificate: /etc/nginx/ssl/arvados-keepproxy.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-keepproxy.key
- access_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.error.log
- return: '301 https://$host$request_uri'
### COLLECTIONS / DOWNLOAD
- arvados_collections_download_ssl.conf:
+ {%- for vh in [
+ 'collections',
+ 'download'
+ ]
+ %}
+ arvados_{{ vh }}.conf:
enabled: true
overwrite: true
requires:
- file: nginx_snippet_arvados-snakeoil.conf
+ file: extra_custom_certs_file_copy_arvados-{{ vh }}.pem
config:
- server:
- - server_name: collections.__CLUSTER__.__DOMAIN__ download.__CLUSTER__.__DOMAIN__
+ - server_name: {{ vh }}.__CLUSTER__.__DOMAIN__
- listen:
- __CONTROLLER_EXT_SSL_PORT__ http2 ssl
- index: index.html index.htm
- proxy_http_version: '1.1'
- proxy_request_buffering: 'off'
- include: snippets/ssl_hardening_default.conf
- - include: snippets/arvados-snakeoil.conf
- - access_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.access.log combined
- - error_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.error.log
+ - ssl_certificate: /etc/nginx/ssl/arvados-{{ vh }}.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-{{ vh }}.key
+ - access_log: /var/log/nginx/{{ vh }}.__CLUSTER__.__DOMAIN__.access.log combined
+ - error_log: /var/log/nginx/{{ vh }}.__CLUSTER__.__DOMAIN__.error.log
+ {%- endfor %}
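+  # The loop above renders one server block per vhost, i.e.
+  # arvados_collections.conf and arvados_download.conf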
# replace with the IP address of your resolver
# - resolver: 127.0.0.1
- arvados-snakeoil.conf:
- - ssl_certificate: /etc/ssl/private/arvados-snakeoil-cert.pem
- - ssl_certificate_key: /etc/ssl/private/arvados-snakeoil-cert.key
-
### SITES
servers:
managed:
enabled: true
overwrite: true
requires:
- file: nginx_snippet_arvados-snakeoil.conf
+ file: extra_custom_certs_file_copy_arvados-webshell.pem
config:
- server:
- server_name: webshell.__CLUSTER__.__DOMAIN__
- add_header: "'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'"
{%- endfor %}
- include: snippets/ssl_hardening_default.conf
- - include: snippets/arvados-snakeoil.conf
+ - ssl_certificate: /etc/nginx/ssl/arvados-webshell.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-webshell.key
- access_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.error.log
enabled: true
overwrite: true
requires:
- file: nginx_snippet_arvados-snakeoil.conf
+ file: extra_custom_certs_file_copy_arvados-websocket.pem
config:
- server:
- server_name: ws.__CLUSTER__.__DOMAIN__
- proxy_http_version: '1.1'
- proxy_request_buffering: 'off'
- include: snippets/ssl_hardening_default.conf
- - include: snippets/arvados-snakeoil.conf
+ - ssl_certificate: /etc/nginx/ssl/arvados-websocket.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-websocket.key
- access_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.error.log
enabled: true
overwrite: true
requires:
- file: nginx_snippet_arvados-snakeoil.conf
+ file: extra_custom_certs_file_copy_arvados-workbench2.pem
config:
- server:
- server_name: workbench2.__CLUSTER__.__DOMAIN__
- location /config.json:
- return: {{ "200 '" ~ '{"API_HOST":"__CLUSTER__.__DOMAIN__:__CONTROLLER_EXT_SSL_PORT__"}' ~ "'" }}
- include: snippets/ssl_hardening_default.conf
- - include: snippets/arvados-snakeoil.conf
+ - ssl_certificate: /etc/nginx/ssl/arvados-workbench2.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-workbench2.key
- access_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.error.log
enabled: true
overwrite: true
requires:
- file: nginx_snippet_arvados-snakeoil.conf
+ file: extra_custom_certs_file_copy_arvados-workbench.pem
config:
- server:
- server_name: workbench.__CLUSTER__.__DOMAIN__
- proxy_set_header: 'X-Real-IP $remote_addr'
- proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
- include: snippets/ssl_hardening_default.conf
- - include: snippets/arvados-snakeoil.conf
+ - ssl_certificate: /etc/nginx/ssl/arvados-workbench.pem
+ - ssl_certificate_key: /etc/nginx/ssl/arvados-workbench.key
- access_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.access.log combined
- error_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.error.log
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+{%- set orig_cert_dir = salt['pillar.get']('extra_custom_certs_dir', '/srv/salt/certs') %}
+{%- set dest_cert_dir = '/etc/nginx/ssl' %}
+{%- set certs = salt['pillar.get']('extra_custom_certs', []) %}
+
+extra_custom_certs_file_directory_certs_dir:
+ file.directory:
+    - name: {{ dest_cert_dir }}
+ - require:
+ - pkg: nginx_install
+
+{%- for cert in certs %}
+ {%- set cert_file = 'arvados-' ~ cert ~ '.pem' %}
+ {#- set csr_file = 'arvados-' ~ cert ~ '.csr' #}
+ {%- set key_file = 'arvados-' ~ cert ~ '.key' %}
+ {% for c in [cert_file, key_file] %}
+extra_custom_certs_file_copy_{{ c }}:
+ file.copy:
+ - name: {{ dest_cert_dir }}/{{ c }}
+ - source: {{ orig_cert_dir }}/{{ c }}
+ - force: true
+ - user: root
+ - group: root
+ - unless: cmp {{ dest_cert_dir }}/{{ c }} {{ orig_cert_dir }}/{{ c }}
+ - require:
+ - file: extra_custom_certs_file_directory_certs_dir
+ {%- endfor %}
+{%- endfor %}
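+
+# For example, with a pillar of extra_custom_certs: ['controller'], the loop above
+# renders two file.copy states, extra_custom_certs_file_copy_arvados-controller.pem
+# and extra_custom_certs_file_copy_arvados-controller.key; provision.sh later points
+# the nginx pillars' __CERT_REQUIRES__ placeholder at the .pem state.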
#
# SPDX-License-Identifier: Apache-2.0
+# WARNING: This file is only used for testing purposes, and should not be used
+# in a production environment
+
{%- set curr_tpldir = tpldir %}
{%- set tpldir = 'arvados' %}
{%- from "arvados/map.jinja" import arvados with context %}
{%- set tpldir = curr_tpldir %}
+{%- set orig_cert_dir = salt['pillar.get']('extra_custom_certs_dir', '/srv/salt/certs') %}
+
include:
- nginx.passenger
- nginx.config
# we'll keep it simple here.
{%- set arvados_ca_cert_file = '/etc/ssl/private/arvados-snakeoil-ca.pem' %}
{%- set arvados_ca_key_file = '/etc/ssl/private/arvados-snakeoil-ca.key' %}
-{%- set arvados_cert_file = '/etc/ssl/private/arvados-snakeoil-cert.pem' %}
-{%- set arvados_csr_file = '/etc/ssl/private/arvados-snakeoil-cert.csr' %}
-{%- set arvados_key_file = '/etc/ssl/private/arvados-snakeoil-cert.key' %}
{%- if grains.get('os_family') == 'Debian' %}
{%- set arvados_ca_cert_dest = '/usr/local/share/ca-certificates/arvados-snakeoil-ca.crt' %}
{%- set update_ca_cert = '/usr/sbin/update-ca-certificates' %}
{%- set openssl_conf = '/etc/ssl/openssl.cnf' %}
+
+extra_snakeoil_certs_ssl_cert_pkg_installed:
+ pkg.installed:
+ - name: ssl-cert
+ - require_in:
+ - sls: postgres
+
{%- else %}
{%- set arvados_ca_cert_dest = '/etc/pki/ca-trust/source/anchors/arvados-snakeoil-ca.pem' %}
{%- set update_ca_cert = '/usr/bin/update-ca-trust' %}
{%- set openssl_conf = '/etc/pki/tls/openssl.cnf' %}
+
{%- endif %}
-arvados_test_salt_states_examples_single_host_snakeoil_certs_dependencies_pkg_installed:
+extra_snakeoil_certs_dependencies_pkg_installed:
pkg.installed:
- pkgs:
- openssl
# random generator, cf
# https://github.com/openssl/openssl/issues/7754
#
-arvados_test_salt_states_examples_single_host_snakeoil_certs_file_comment_etc_openssl_conf:
+extra_snakeoil_certs_file_comment_etc_openssl_conf:
file.comment:
- name: /etc/ssl/openssl.cnf
- regex: ^RANDFILE.*
- onlyif: grep -q ^RANDFILE /etc/ssl/openssl.cnf
- require_in:
- - cmd: arvados_test_salt_states_examples_single_host_snakeoil_certs_arvados_snake_oil_ca_cmd_run
+ - cmd: extra_snakeoil_certs_arvados_snakeoil_ca_cmd_run
-arvados_test_salt_states_examples_single_host_snakeoil_certs_arvados_snake_oil_ca_cmd_run:
+extra_snakeoil_certs_arvados_snakeoil_ca_cmd_run:
# Taken from https://github.com/arvados/arvados/blob/master/tools/arvbox/lib/arvbox/docker/service/certificate/run
cmd.run:
- name: |
- test -f {{ arvados_ca_cert_file }}
- openssl verify -CAfile {{ arvados_ca_cert_file }} {{ arvados_ca_cert_file }}
- require:
- - pkg: arvados_test_salt_states_examples_single_host_snakeoil_certs_dependencies_pkg_installed
+ - pkg: extra_snakeoil_certs_dependencies_pkg_installed
+
+# Create independent certs for each vhost
+{%- for vh in [
+ 'collections',
+ 'controller',
+ 'download',
+ 'keepproxy',
+ 'webshell',
+ 'workbench',
+ 'workbench2',
+ 'websocket',
+ ]
+%}
+# We create these certs in the staging directory, from where the `custom_certs`
+# state file copies them to their final destination, as if they were
+# user-provided custom certificates.
+{%- set arvados_cert_file = orig_cert_dir ~ '/arvados-' ~ vh ~ '.pem' %}
+{%- set arvados_csr_file = orig_cert_dir ~ '/arvados-' ~ vh ~ '.csr' %}
+{%- set arvados_key_file = orig_cert_dir ~ '/arvados-' ~ vh ~ '.key' %}
-arvados_test_salt_states_examples_single_host_snakeoil_certs_arvados_snake_oil_cert_cmd_run:
+extra_snakeoil_certs_arvados_snakeoil_cert_{{ vh }}_cmd_run:
cmd.run:
- name: |
- cat > /tmp/openssl.cnf <<-CNF
+ cat > /tmp/{{ vh }}.openssl.cnf <<-CNF
[req]
default_bits = 2048
prompt = no
default_md = sha256
- req_extensions = rext
distinguished_name = dn
+ req_extensions = rext
+ [rext]
+ subjectAltName = @alt_names
[dn]
C = CC
ST = Some State
L = Some Location
- O = Arvados Formula
- OU = arvados-formula
- CN = {{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ O = Arvados Provision Example Single Host / Multiple Hostnames
+ OU = arvados-provision-example-single_host_multiple_hostnames
+ CN = {{ vh }}.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
emailAddress = admin@{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
- [rext]
- subjectAltName = @alt_names
[alt_names]
{%- for entry in grains.get('ipv4') %}
IP.{{ loop.index }} = {{ entry }}
{%- endfor %}
- {%- for entry in [
- 'keep',
- 'collections',
- 'download',
- 'ws',
- 'workbench',
- 'workbench2',
+ DNS.1 = {{ vh }}.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ {%- if vh in [
+ 'controller',
+ 'keepproxy',
+ 'websocket'
]
%}
- DNS.{{ loop.index }} = {{ entry }}.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
- {%- endfor %}
- DNS.7 = {{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ {%- if vh == 'controller' %}
+ DNS.2 = {{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ {%- elif vh == 'keepproxy' %}
+ DNS.2 = keep.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ {%- elif vh == 'websocket' %}
+ DNS.2 = ws.{{ arvados.cluster.name }}.{{ arvados.cluster.domain }}
+ {%- endif %}
+ {%- endif %}
CNF
# The req
openssl req \
- -config /tmp/openssl.cnf \
+ -config /tmp/{{ vh }}.openssl.cnf \
-new \
-nodes \
-sha256 \
-out {{ arvados_csr_file }} \
- -keyout {{ arvados_key_file }} > /tmp/snake_oil_certs.output 2>&1 && \
+ -keyout {{ arvados_key_file }} > /tmp/snakeoil_certs.{{ vh }}.output 2>&1 && \
# The cert
openssl x509 \
-req \
-days 365 \
-in {{ arvados_csr_file }} \
-out {{ arvados_cert_file }} \
- -extfile /tmp/openssl.cnf \
+ -extfile /tmp/{{ vh }}.openssl.cnf \
-extensions rext \
-CA {{ arvados_ca_cert_file }} \
-CAkey {{ arvados_ca_key_file }} \
- test -f {{ arvados_key_file }}
- openssl verify -CAfile {{ arvados_ca_cert_file }} {{ arvados_cert_file }}
- require:
- - pkg: arvados_test_salt_states_examples_single_host_snakeoil_certs_dependencies_pkg_installed
- - cmd: arvados_test_salt_states_examples_single_host_snakeoil_certs_arvados_snake_oil_ca_cmd_run
- # We need this before we can add the nginx's snippet
- - require_in:
- - file: nginx_snippet_arvados-snakeoil.conf
-
-{%- if grains.get('os_family') == 'Debian' %}
-arvados_test_salt_states_examples_single_host_snakeoil_certs_ssl_cert_pkg_installed:
- pkg.installed:
- - name: ssl-cert
+ - pkg: extra_snakeoil_certs_dependencies_pkg_installed
+ - cmd: extra_snakeoil_certs_arvados_snakeoil_ca_cmd_run
- require_in:
- - sls: postgres
+ - file: extra_custom_certs_file_copy_arvados-{{ vh }}.pem
+ - file: extra_custom_certs_file_copy_arvados-{{ vh }}.key
-arvados_test_salt_states_examples_single_host_snakeoil_certs_certs_permissions_cmd_run:
+ {%- if grains.get('os_family') == 'Debian' %}
+extra_snakeoil_certs_certs_permissions_{{ vh }}_cmd_run:
file.managed:
- name: {{ arvados_key_file }}
- owner: root
- group: ssl-cert
- require:
- - cmd: arvados_test_salt_states_examples_single_host_snakeoil_certs_arvados_snake_oil_cert_cmd_run
- - pkg: arvados_test_salt_states_examples_single_host_snakeoil_certs_ssl_cert_pkg_installed
- - require_in:
- - file: nginx_snippet_arvados-snakeoil.conf
-{%- endif %}
+ - cmd: extra_snakeoil_certs_arvados_snakeoil_cert_{{ vh }}_cmd_run
+ - pkg: extra_snakeoil_certs_ssl_cert_pkg_installed
+ {%- endif %}
+{%- endfor %}
tls:
# certificate: ''
# key: ''
- # required to test with arvados-snakeoil certs
+  # When using arvados-snakeoil certs, set insecure: true
insecure: true
### TOKENS
SHELL_INT_IP=10.0.0.7
INITIAL_USER="admin"
-INITIAL_USER_PASSWORD="password"
# If not specified, the initial user email will be composed as
# INITIAL_USER@CLUSTER.DOMAIN
# salt formula (https://github.com/saltstack-formulas/letsencrypt-formula) to try to
# automatically obtain and install SSL certificates for your instances or set this
# variable to "no", provide and upload your own certificates to the instances and
-# modify the 'nginx_*' salt pillars accordingly
+# modify the 'nginx_*' salt pillars accordingly (see CUSTOM_CERTS_DIR below)
USE_LETSENCRYPT="yes"
USE_LETSENCRYPT_IAM_USER="yes"
# For collections, we need to obtain a wildcard certificate for
LE_AWS_ACCESS_KEY_ID="AKIABCDEFGHIJKLMNOPQ"
LE_AWS_SECRET_ACCESS_KEY="thisistherandomstringthatisyoursecretkey"
+# If you are going to provide your own certificates for Arvados, the provision
+# script can help you deploy them. To do that, set `USE_LETSENCRYPT=no` above
+# and copy the required certificates into the directory specified in the next
+# line; the provision script will pick them up from there.
+CUSTOM_CERTS_DIR="./certs"
+# The script expects cert/key files with these basenames (matching the role name,
+# except for keepweb, which is split into download and collections):
+# "controller"
+# "websocket"
+# "workbench"
+# "workbench2"
+# "webshell"
+# "download" # Part of keepweb
+# "collections" # Part of keepweb
+# "keep" # Keepproxy
+# I.e., for 'keep', the script will look for
+# ${CUSTOM_CERTS_DIR}/keep.crt
+# ${CUSTOM_CERTS_DIR}/keep.key
+
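+# For example, to provide your own controller certificate (hypothetical source
+# paths), you would run, before invoking the provision script:
+#   mkdir -p ./certs
+#   cp /path/to/your/controller.crt ./certs/controller.crt
+#   cp /path/to/your/controller.key ./certs/controller.key
+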
# The directory to check for the config files (pillars, states) you want to use.
# There are a few examples under 'config_examples'.
# CONFIG_DIR="local_config_dir"
# salt formula (https://github.com/saltstack-formulas/letsencrypt-formula) to try to
# automatically obtain and install SSL certificates for your instances or set this
# variable to "no", provide and upload your own certificates to the instances and
-# modify the 'nginx_*' salt pillars accordingly
+# modify the 'nginx_*' salt pillars accordingly (see CUSTOM_CERTS_DIR below)
USE_LETSENCRYPT="no"
+# If you are going to provide your own certificates for Arvados, the provision
+# script can help you deploy them. To do that, set `USE_LETSENCRYPT=no` above
+# and copy the required certificates into the directory specified in the next
+# line; the provision script will pick them up from there.
+CUSTOM_CERTS_DIR="./certs"
+# The script expects cert/key files with these basenames (matching the role name,
+# except for keepweb, which is split into download and collections):
+# "controller"
+# "websocket"
+# "workbench"
+# "workbench2"
+# "webshell"
+# "download" # Part of keepweb
+# "collections" # Part of keepweb
+# "keepproxy"
+# I.e., for 'keepproxy', the script will look for
+# ${CUSTOM_CERTS_DIR}/keepproxy.crt
+# ${CUSTOM_CERTS_DIR}/keepproxy.key
+
# The directory to check for the config files (pillars, states) you want to use.
# There are a few examples under 'config_examples'.
# CONFIG_DIR="local_config_dir"
echo >&2 " for the selected role/s"
echo >&2 " - writes the resulting files into <dest_dir>"
echo >&2 " -v, --vagrant Run in vagrant and use the /vagrant shared dir"
+ echo >&2 " --development Run in dev mode, using snakeoil certs"
echo >&2
}
fi
TEMP=$(getopt -o c:dhp:r:tv \
- --long config:,debug,dump-config:,help,roles:,test,vagrant \
+ --long config:,debug,development,dump-config:,help,roles:,test,vagrant \
-n "${0}" -- "${@}")
if [ ${?} != 0 ];
DUMP_CONFIG="yes"
shift 2
;;
+ --development)
+ DEV_MODE="yes"
+ shift 1
+ ;;
-r | --roles)
for i in ${2//,/ }
do
done
}
+DEV_MODE="no"
CONFIG_FILE="${SCRIPT_DIR}/local.params"
CONFIG_DIR="local_config_dir"
DUMP_CONFIG="no"
WORKBENCH1_EXT_SSL_PORT=443
WORKBENCH2_EXT_SSL_PORT=3001
+USE_LETSENCRYPT="no"
+CUSTOM_CERTS_DIR="./certs"
+
## These are ARVADOS-related parameters
# For a stable release, change RELEASE "production" and VERSION to the
# package version (including the iteration, e.g. X.Y.Z-1) of the
# States, extra states
if [ -d "${F_DIR}"/extra/extra ]; then
- for f in $(ls "${F_DIR}"/extra/extra/*.sls); do
+ if [ "$DEV_MODE" = "yes" ]; then
+    # In dev mode, we create snakeoil certs to use as CUSTOM_CERTS, so we
+    # must not filter out the snakeoil_certs state file
+ SKIP_SNAKE_OIL="dont_snakeoil_certs"
+ else
+ SKIP_SNAKE_OIL="snakeoil_certs"
+ fi
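+  # I.e., the extra/extra/snakeoil_certs.sls state is only included in dev mode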
+ for f in $(ls "${F_DIR}"/extra/extra/*.sls | grep -v ${SKIP_SNAKE_OIL}); do
echo " - extra.$(basename ${f} | sed 's/.sls$//g')" >> ${S_DIR}/top.sls
done
+ # Use custom certs
+ if [ "x${USE_LETSENCRYPT}" != "xyes" ]; then
+ mkdir -p "${F_DIR}"/extra/extra/files
+ fi
fi
# If we want specific roles for a node, just add the desired states
echo " - nginx.passenger" >> ${S_DIR}/top.sls
# Currently, only available on config_examples/multi_host/aws
if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
- if [ "x${USE_LETSENCRYPT_IAM_USER}" = "xyes" ]; then
- grep -q "aws_credentials" ${S_DIR}/top.sls || echo " - aws_credentials" >> ${S_DIR}/top.sls
+ if [ "x${USE_LETSENCRYPT_IAM_USER}" != "xyes" ]; then
+ grep -q "aws_credentials" ${S_DIR}/top.sls || echo " - extra.aws_credentials" >> ${S_DIR}/top.sls
fi
grep -q "letsencrypt" ${S_DIR}/top.sls || echo " - letsencrypt" >> ${S_DIR}/top.sls
+ else
+ # Use custom certs
+    # Copy certs to the Salt certs staging dir (/srv/salt/certs)
+ # In dev mode, the files will be created and put in the destination directory by the
+ # snakeoil_certs.sls state file
+ mkdir -p /srv/salt/certs
+ cp -rv ${CUSTOM_CERTS_DIR}/* /srv/salt/certs/
+ # We add the custom_certs state
+ grep -q "custom_certs" ${S_DIR}/top.sls || echo " - extra.custom_certs" >> ${S_DIR}/top.sls
fi
+
echo " - postgres" >> ${S_DIR}/top.sls
echo " - docker.software" >> ${S_DIR}/top.sls
echo " - arvados" >> ${S_DIR}/top.sls
echo " - nginx_workbench2_configuration" >> ${P_DIR}/top.sls
echo " - nginx_workbench_configuration" >> ${P_DIR}/top.sls
echo " - postgresql" >> ${P_DIR}/top.sls
+
# Currently, only available on config_examples/multi_host/aws
if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
- if [ "x${USE_LETSENCRYPT_IAM_USER}" = "xyes" ]; then
+ if [ "x${USE_LETSENCRYPT_IAM_USER}" != "xyes" ]; then
grep -q "aws_credentials" ${P_DIR}/top.sls || echo " - aws_credentials" >> ${P_DIR}/top.sls
fi
grep -q "letsencrypt" ${P_DIR}/top.sls || echo " - letsencrypt" >> ${P_DIR}/top.sls
+
+  # As the pillars differ depending on whether we use LE or custom certs, we need a final edit on them
+ for c in controller websocket workbench workbench2 webshell download collections keepproxy; do
+ sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${c}.${CLUSTER}.${DOMAIN}*/g;
+ s#__CERT_PEM__#/etc/letsencrypt/live/${c}.${CLUSTER}.${DOMAIN}/fullchain.pem#g;
+ s#__CERT_KEY__#/etc/letsencrypt/live/${c}.${CLUSTER}.${DOMAIN}/privkey.pem#g" \
+ ${P_DIR}/nginx_${c}_configuration.sls
+ done
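+  # E.g., the controller pillar ends up requiring "cmd: create-initial-cert-controller.${CLUSTER}.${DOMAIN}*"
+  # and pointing ssl_certificate at "/etc/letsencrypt/live/controller.${CLUSTER}.${DOMAIN}/fullchain.pem"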
+ else
+ # Use custom certs (either dev mode or prod)
+ grep -q "extra_custom_certs" ${P_DIR}/top.sls || echo " - extra_custom_certs" >> ${P_DIR}/top.sls
+ # And add the certs in the custom_certs pillar
+ echo "extra_custom_certs_dir: /srv/salt/certs" > ${P_DIR}/extra_custom_certs.sls
+ echo "extra_custom_certs:" >> ${P_DIR}/extra_custom_certs.sls
+
+ for c in controller websocket workbench workbench2 webshell download collections keepproxy; do
+ grep -q ${c} ${P_DIR}/extra_custom_certs.sls || echo " - ${c}" >> ${P_DIR}/extra_custom_certs.sls
+
+    # As the pillars differ depending on whether we use LE or custom certs, we need a final edit on them
+ sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${c}.pem/g;
+ s#__CERT_PEM__#/etc/nginx/ssl/arvados-${c}.pem#g;
+ s#__CERT_KEY__#/etc/nginx/ssl/arvados-${c}.key#g" \
+ ${P_DIR}/nginx_${c}_configuration.sls
+ done
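+    # The resulting extra_custom_certs.sls pillar looks like:
+    #   extra_custom_certs_dir: /srv/salt/certs
+    #   extra_custom_certs:
+    #     - controller
+    #     - websocket
+    #     ... (one entry per vhost in the loop above)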
fi
else
# If we add individual roles, make sure we add the repo first
grep -q "postgres.client" ${S_DIR}/top.sls || echo " - postgres.client" >> ${S_DIR}/top.sls
grep -q "nginx.passenger" ${S_DIR}/top.sls || echo " - nginx.passenger" >> ${S_DIR}/top.sls
### If we don't install and run LE before arvados-api-server, it fails and breaks everything
- ### after it so we add this here, as we are, after all, sharing the host for api and controller
+ ### after it. So we add this here as we are, after all, sharing the host for api and controller
# Currently, only available on config_examples/multi_host/aws
if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
- if [ "x${USE_LETSENCRYPT_IAM_USER}" = "xyes" ]; then
+ if [ "x${USE_LETSENCRYPT_IAM_USER}" != "xyes" ]; then
grep -q "aws_credentials" ${S_DIR}/top.sls || echo " - aws_credentials" >> ${S_DIR}/top.sls
fi
- grep -q "letsencrypt" ${S_DIR}/top.sls || echo " - letsencrypt" >> ${S_DIR}/top.sls
+ grep -q "letsencrypt" ${S_DIR}/top.sls || echo " - letsencrypt" >> ${S_DIR}/top.sls
+ else
+ # Use custom certs
+ cp -v ${CUSTOM_CERTS_DIR}/controller.* "${F_DIR}/extra/extra/files/"
+ # We add the custom_certs state
+ grep -q "custom_certs" ${S_DIR}/top.sls || echo " - extra.custom_certs" >> ${S_DIR}/top.sls
fi
grep -q "arvados.${R}" ${S_DIR}/top.sls || echo " - arvados.${R}" >> ${S_DIR}/top.sls
# Pillars
grep -q "nginx.passenger" ${S_DIR}/top.sls || echo " - nginx.passenger" >> ${S_DIR}/top.sls
# Currently, only available on config_examples/multi_host/aws
if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
- if [ "x${USE_LETSENCRYPT_IAM_USER}" = "xyes" ]; then
+ if [ "x${USE_LETSENCRYPT_IAM_USER}" != "xyes" ]; then
grep -q "aws_credentials" ${S_DIR}/top.sls || echo " - aws_credentials" >> ${S_DIR}/top.sls
fi
grep -q "letsencrypt" ${S_DIR}/top.sls || echo " - letsencrypt" >> ${S_DIR}/top.sls
+ else
+ # Use custom certs, special case for keepweb
+ if [ ${R} = "keepweb" ]; then
+ cp -v ${CUSTOM_CERTS_DIR}/download.* "${F_DIR}/extra/extra/files/"
+ cp -v ${CUSTOM_CERTS_DIR}/collections.* "${F_DIR}/extra/extra/files/"
+ else
+ cp -v ${CUSTOM_CERTS_DIR}/${R}.* "${F_DIR}/extra/extra/files/"
+ fi
+ # We add the custom_certs state
+ grep -q "custom_certs" ${S_DIR}/top.sls || echo " - extra.custom_certs" >> ${S_DIR}/top.sls
+
fi
# webshell role is just a nginx vhost, so it has no state
if [ "${R}" != "webshell" ]; then
- grep -q "arvados.${R}" ${S_DIR}/top.sls || echo " - arvados.${R}" >> ${S_DIR}/top.sls
+ grep -q "arvados.${R}" ${S_DIR}/top.sls || echo " - arvados.${R}" >> ${S_DIR}/top.sls
fi
# Pillars
grep -q "nginx_passenger" ${P_DIR}/top.sls || echo " - nginx_passenger" >> ${P_DIR}/top.sls
grep -q "nginx_${R}_configuration" ${P_DIR}/top.sls || echo " - nginx_${R}_configuration" >> ${P_DIR}/top.sls
+ # Special case for keepweb
+ if [ ${R} = "keepweb" ]; then
+ grep -q "nginx_download_configuration" ${P_DIR}/top.sls || echo " - nginx_download_configuration" >> ${P_DIR}/top.sls
+ grep -q "nginx_collections_configuration" ${P_DIR}/top.sls || echo " - nginx_collections_configuration" >> ${P_DIR}/top.sls
+ fi
+
# Currently, only available on config_examples/multi_host/aws
if [ "x${USE_LETSENCRYPT}" = "xyes" ]; then
- if [ "x${USE_LETSENCRYPT_IAM_USER}" = "xyes" ]; then
+ if [ "x${USE_LETSENCRYPT_IAM_USER}" != "xyes" ]; then
grep -q "aws_credentials" ${P_DIR}/top.sls || echo " - aws_credentials" >> ${P_DIR}/top.sls
fi
grep -q "letsencrypt" ${P_DIR}/top.sls || echo " - letsencrypt" >> ${P_DIR}/top.sls
grep -q "letsencrypt_${R}_configuration" ${P_DIR}/top.sls || echo " - letsencrypt_${R}_configuration" >> ${P_DIR}/top.sls
+
+    # As the pillars differ depending on whether we use LE or custom certs, we need a final edit on them
+ # Special case for keepweb
+ if [ ${R} = "keepweb" ]; then
+ for kwsub in download collections; do
+ sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${kwsub}.${CLUSTER}.${DOMAIN}*/g;
+ s#__CERT_PEM__#/etc/letsencrypt/live/${kwsub}.${CLUSTER}.${DOMAIN}/fullchain.pem#g;
+ s#__CERT_KEY__#/etc/letsencrypt/live/${kwsub}.${CLUSTER}.${DOMAIN}/privkey.pem#g" \
+ ${P_DIR}/nginx_${kwsub}_configuration.sls
+ done
+ else
+ sed -i "s/__CERT_REQUIRES__/cmd: create-initial-cert-${R}.${CLUSTER}.${DOMAIN}*/g;
+ s#__CERT_PEM__#/etc/letsencrypt/live/${R}.${CLUSTER}.${DOMAIN}/fullchain.pem#g;
+ s#__CERT_KEY__#/etc/letsencrypt/live/${R}.${CLUSTER}.${DOMAIN}/privkey.pem#g" \
+ ${P_DIR}/nginx_${R}_configuration.sls
+ fi
+ else
+    # Special case for keepweb: its certs are listed as download and collections
+    if [ ${R} = "keepweb" ]; then
+      for kwsub in download collections; do
+        grep -q ${kwsub} ${P_DIR}/extra_custom_certs.sls || echo " - ${kwsub}" >> ${P_DIR}/extra_custom_certs.sls
+      done
+    else
+      grep -q ${R} ${P_DIR}/extra_custom_certs.sls || echo " - ${R}" >> ${P_DIR}/extra_custom_certs.sls
+    fi
+
+      # As the pillars differ depending on whether we use LE or custom certs, we need a final edit on them
+ # Special case for keepweb
+ if [ ${R} = "keepweb" ]; then
+ for kwsub in download collections; do
+ sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${kwsub}.pem/g;
+ s#__CERT_PEM__#/etc/nginx/ssl/arvados-${kwsub}.pem#g;
+ s#__CERT_KEY__#/etc/nginx/ssl/arvados-${kwsub}.key#g" \
+ ${P_DIR}/nginx_${kwsub}_configuration.sls
+ done
+ else
+ sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${R}.pem/g;
+ s#__CERT_PEM__#/etc/nginx/ssl/arvados-${R}.pem#g;
+ s#__CERT_KEY__#/etc/nginx/ssl/arvados-${R}.key#g" \
+ ${P_DIR}/nginx_${R}_configuration.sls
+ fi
fi
;;
"shell")
# END FIXME! #16992 Temporary fix for psql call in arvados-api-server
# Leave a copy of the Arvados CA so the user can copy it where it's required
-echo "Copying the Arvados CA certificate to the installer dir, so you can import it"
-# If running in a vagrant VM, also add default user to docker group
-if [ "x${VAGRANT}" = "xyes" ]; then
- cp /etc/ssl/certs/arvados-snakeoil-ca.pem /vagrant/${CLUSTER}.${DOMAIN}-arvados-snakeoil-ca.pem
-
- echo "Adding the vagrant user to the docker group"
- usermod -a -G docker vagrant
-else
- cp /etc/ssl/certs/arvados-snakeoil-ca.pem ${SCRIPT_DIR}/${CLUSTER}.${DOMAIN}-arvados-snakeoil-ca.pem
+if [ "$DEV_MODE" = "yes" ]; then
+ echo "Copying the Arvados CA certificate to the installer dir, so you can import it"
+ # If running in a vagrant VM, also add default user to docker group
+ if [ "x${VAGRANT}" = "xyes" ]; then
+ cp /etc/ssl/certs/arvados-snakeoil-ca.pem /vagrant/${CLUSTER}.${DOMAIN}-arvados-snakeoil-ca.pem
+
+ echo "Adding the vagrant user to the docker group"
+ usermod -a -G docker vagrant
+ else
+ cp /etc/ssl/certs/arvados-snakeoil-ca.pem ${SCRIPT_DIR}/${CLUSTER}.${DOMAIN}-arvados-snakeoil-ca.pem
+ fi
fi
# Test that the installation finished correctly