Options:
- --json-file (required)
- Path to the packer json file
- --arvados-cluster-id (required)
- The ID of the Arvados cluster, e.g. zzzzz
- --aws-profile (default: false)
- AWS profile to use (valid profile from ~/.aws/config
- --aws-secrets-file (default: false, required if building for AWS)
- AWS secrets file which will be sourced from this script
- --aws-source-ami (default: false, required if building for AWS)
- The AMI to use as base for building the images
- --aws-region (default: us-east-1)
+ --json-file <path>
+ Path to the packer json file (required)
+ --arvados-cluster-id <xxxxx>
+ The ID of the Arvados cluster, e.g. zzzzz (required)
+ --aws-profile <profile>
+ AWS profile to use (valid profile from ~/.aws/config) (optional)
+ --aws-secrets-file <path>
+ AWS secrets file which will be sourced from this script (optional)
+ When building for AWS, either an AWS profile or an AWS secrets file
+ must be provided.
+ --aws-source-ami <ami-xxxxxxxxxxxxxxxxx>
+ The AMI to use as base for building the images (required if building for AWS)
+ --aws-region <region> (default: us-east-1)
The AWS region to use for building the images
- --aws-vpc-id (optional)
- VPC id for AWS, otherwise packer will pick the default one
- --aws-subnet-id
- Subnet id for AWS otherwise packer will pick the default one for the VPC
- --aws-ebs-autoscale (default: false)
- Install the AWS EBS autoscaler daemon.
- --gcp-project-id (default: false, required if building for GCP)
- GCP project id
- --gcp-account-file (default: false, required if building for GCP)
- GCP account file
- --gcp-zone (default: us-central1-f)
+ --aws-vpc-id <vpc-id>
+ VPC id for AWS, if not specified packer will derive it from the subnet id or pick the default one.
+ --aws-subnet-id <subnet-xxxxxxxxxxxxxxxxx>
+ Subnet id for AWS, if not specified packer will pick the default one for the VPC.
+ --aws-ebs-autoscale
+ Install the AWS EBS autoscaler daemon (default: do not install the AWS EBS autoscaler).
+ --aws-associate-public-ip <true|false>
+ Associate a public IP address with the node used for building the compute image.
+ Required when the machine running packer cannot reach the node used for building
+ the compute image via its private IP. (default: true if building for AWS)
+ Note: if the subnet has "Auto-assign public IPv4 address" enabled, disabling this
+ flag will have no effect.
+ --aws-ena-support <true|false>
+ Enable enhanced networking (default: true if building for AWS)
+ --gcp-project-id <project-id>
+ GCP project id (required if building for GCP)
+ --gcp-account-file <path>
+ GCP account file (required if building for GCP)
+ --gcp-zone <zone> (default: us-central1-f)
GCP zone
- --azure-secrets-file (default: false, required if building for Azure)
- Azure secrets file which will be sourced from this script
- --azure-resource-group (default: false, required if building for Azure)
- Azure resource group
- --azure-location (default: false, required if building for Azure)
- Azure location, e.g. centralus, eastus, westeurope
- --azure-sku (default: unset, required if building for Azure, e.g. 16.04-LTS)
+ --azure-secrets-file <path>
+ Azure secrets file which will be sourced from this script (required if building for Azure)
+ --azure-resource-group <resource-group>
+ Azure resource group (required if building for Azure)
+ --azure-location <location>
+ Azure location, e.g. centralus, eastus, westeurope (required if building for Azure)
+ --azure-sku <sku> (required if building for Azure, e.g. 16.04-LTS)
Azure SKU image to use
- --ssh_user (default: packer)
+ --ssh_user <user> (default: packer)
The user packer will use to log into the image
- --resolver (default: host's network provided)
- The dns resolver for the machine
- --reposuffix (default: unset)
+ --resolver <resolver_IP>
+ The DNS resolver for the machine (default: provided by the host's network)
+ --reposuffix <suffix>
Set this to "-dev" to track the unstable/dev Arvados repositories
- --public-key-file (required)
- Path to the public key file that a-d-c will use to log into the compute node
+ --public-key-file <path>
+ Path to the public key file that a-d-c will use to log into the compute node (required)
--mksquashfs-mem (default: 256M)
Only relevant when using Singularity. This is the amount of memory mksquashfs is allowed to use.
- --nvidia-gpu-support (default: false)
- Install all the necessary tooling for Nvidia GPU support
- --debug (default: false)
- Output debug information
+ --nvidia-gpu-support
+ Install all the necessary tooling for Nvidia GPU support (default: do not install Nvidia GPU support)
+ --debug
+ Output debug information (default: no debug output is printed)
</code></pre></notextile>
h2(#dns-resolution). DNS resolution
<notextile>
<pre><code>scp -r provision.sh local* user@host:
-# if you use custom certificates (not Let's Encrypt), make sure to copy those too:
-# scp -r certs user@host:
ssh user@host sudo ./provision.sh --roles comma,separated,list,of,roles,to,apply
</code></pre>
</notextile>
<notextile>
<pre><code>SSL_MODE="bring-your-own"
-CUSTOM_CERTS_DIR="${SCRIPT_DIR}/certs"
</code></pre>
</notextile>
<notextile>
<pre><code>scp -r provision.sh local* tests user@host:
-# if you have set SSL_MODE to "bring-your-own", make sure to also copy the certificate files:
-# scp -r certs user@host:
ssh user@host sudo ./provision.sh
</code></pre>
</notextile>
logger = logging.getLogger('arvados.cwl-runner')
metrics = logging.getLogger('arvados.cwl-runner.metrics')
+def cleanup_name_for_collection(name):
+ return name.replace("/", " ")
+
class ArvadosContainer(JobBase):
"""Submit and manage a Crunch container request for executing a CWL CommandLineTool."""
if runtimeContext.submit_runner_cluster:
extra_submit_params["cluster_id"] = runtimeContext.submit_runner_cluster
- container_request["output_name"] = "Output from step %s" % (self.name)
+ container_request["output_name"] = cleanup_name_for_collection("Output from step %s" % (self.name))
container_request["output_ttl"] = self.output_ttl
container_request["mounts"] = mounts
container_request["secret_mounts"] = secret_mounts
from arvados.errors import ApiError
import arvados_cwl.util
-from .arvcontainer import RunnerContainer
+from .arvcontainer import RunnerContainer, cleanup_name_for_collection
from .runner import Runner, upload_docker, upload_job_order, upload_workflow_deps, make_builder
from .arvtool import ArvadosCommandTool, validate_cluster_target, ArvadosExpressionTool
from .arvworkflow import ArvadosWorkflow, upload_workflow
if not self.output_name:
self.output_name = "Output from workflow %s" % runtimeContext.name
+ self.output_name = cleanup_name_for_collection(self.output_name)
+
if self.work_api == "containers":
if self.ignore_docker_for_reuse:
raise Exception("--ignore-docker-for-reuse not supported with containers API.")
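The hunks above route every collection name through the new `cleanup_name_for_collection` helper, which simply replaces slashes (presumably because "/" is problematic in collection names derived from step or workflow names) with spaces. A minimal standalone sketch of that behavior:

```python
# Standalone copy of the helper added in the diff above: step/workflow names
# may contain "/", which would be awkward in a collection name, so it is
# replaced with a space before being used as output_name.
def cleanup_name_for_collection(name):
    return name.replace("/", " ")

print(cleanup_name_for_collection("Output from step foo/bar"))
# prints "Output from step foo bar"
```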
{
"variables": {
"arvados_cluster": "",
- "associate_public_ip_address": "true",
"aws_access_key": "",
"aws_profile": "",
"aws_secret_key": "",
"aws_source_ami": "ami-031283ff8a43b021c",
"aws_ebs_autoscale": "",
+ "aws_associate_public_ip_address": "",
+ "aws_ena_support": "",
"build_environment": "aws",
"public_key_file": "",
"mksquashfs_mem": "",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "{{user `aws_default_region`}}",
- "ena_support": "true",
+ "ena_support": "{{user `aws_ena_support`}}",
"source_ami": "{{user `aws_source_ami`}}",
"instance_type": "m5.large",
"vpc_id": "{{user `vpc_id`}}",
"subnet_id": "{{user `subnet_id`}}",
- "associate_public_ip_address": "{{user `associate_public_ip_address`}}",
+ "associate_public_ip_address": "{{user `aws_associate_public_ip_address`}}",
"ssh_username": "{{user `ssh_user`}}",
"ami_name": "arvados-{{user `arvados_cluster`}}-compute-{{isotime \"20060102150405\"}}",
"launch_block_device_mappings": [{
Options:
- --json-file (required)
- Path to the packer json file
- --arvados-cluster-id (required)
- The ID of the Arvados cluster, e.g. zzzzz
- --aws-profile (default: false)
- AWS profile to use (valid profile from ~/.aws/config
- --aws-secrets-file (default: false, required if building for AWS)
- AWS secrets file which will be sourced from this script
- --aws-source-ami (default: false, required if building for AWS)
- The AMI to use as base for building the images
- --aws-region (default: us-east-1)
+ --json-file <path>
+ Path to the packer json file (required)
+ --arvados-cluster-id <xxxxx>
+ The ID of the Arvados cluster, e.g. zzzzz (required)
+ --aws-profile <profile>
+ AWS profile to use (valid profile from ~/.aws/config) (optional)
+ --aws-secrets-file <path>
+ AWS secrets file which will be sourced from this script (optional)
+ When building for AWS, either an AWS profile or an AWS secrets file
+ must be provided.
+ --aws-source-ami <ami-xxxxxxxxxxxxxxxxx>
+ The AMI to use as base for building the images (required if building for AWS)
+ --aws-region <region> (default: us-east-1)
The AWS region to use for building the images
- --aws-vpc-id (optional)
- VPC id for AWS, otherwise packer will pick the default one
- --aws-subnet-id
- Subnet id for AWS otherwise packer will pick the default one for the VPC
- --aws-ebs-autoscale (default: false)
- Install the AWS EBS autoscaler daemon.
- --gcp-project-id (default: false, required if building for GCP)
- GCP project id
- --gcp-account-file (default: false, required if building for GCP)
- GCP account file
- --gcp-zone (default: us-central1-f)
+ --aws-vpc-id <vpc-id>
+ VPC id for AWS, if not specified packer will derive it from the subnet id or pick the default one.
+ --aws-subnet-id <subnet-xxxxxxxxxxxxxxxxx>
+ Subnet id for AWS, if not specified packer will pick the default one for the VPC.
+ --aws-ebs-autoscale
+ Install the AWS EBS autoscaler daemon (default: do not install the AWS EBS autoscaler).
+ --aws-associate-public-ip <true|false>
+ Associate a public IP address with the node used for building the compute image.
+ Required when the machine running packer cannot reach the node used for building
+ the compute image via its private IP. (default: true if building for AWS)
+ Note: if the subnet has "Auto-assign public IPv4 address" enabled, disabling this
+ flag will have no effect.
+ --aws-ena-support <true|false>
+ Enable enhanced networking (default: true if building for AWS)
+ --gcp-project-id <project-id>
+ GCP project id (required if building for GCP)
+ --gcp-account-file <path>
+ GCP account file (required if building for GCP)
+ --gcp-zone <zone> (default: us-central1-f)
GCP zone
- --azure-secrets-file (default: false, required if building for Azure)
- Azure secrets file which will be sourced from this script
- --azure-resource-group (default: false, required if building for Azure)
- Azure resource group
- --azure-location (default: false, required if building for Azure)
- Azure location, e.g. centralus, eastus, westeurope
- --azure-sku (default: unset, required if building for Azure, e.g. 16.04-LTS)
+ --azure-secrets-file <path>
+ Azure secrets file which will be sourced from this script (required if building for Azure)
+ --azure-resource-group <resource-group>
+ Azure resource group (required if building for Azure)
+ --azure-location <location>
+ Azure location, e.g. centralus, eastus, westeurope (required if building for Azure)
+ --azure-sku <sku> (required if building for Azure, e.g. 16.04-LTS)
Azure SKU image to use
- --ssh_user (default: packer)
+ --ssh_user <user> (default: packer)
The user packer will use to log into the image
- --resolver (default: host's network provided)
- The dns resolver for the machine
- --reposuffix (default: unset)
+ --resolver <resolver_IP>
+ The DNS resolver for the machine (default: provided by the host's network)
+ --reposuffix <suffix>
Set this to "-dev" to track the unstable/dev Arvados repositories
- --public-key-file (required)
- Path to the public key file that a-d-c will use to log into the compute node
+ --public-key-file <path>
+ Path to the public key file that a-d-c will use to log into the compute node (required)
--mksquashfs-mem (default: 256M)
Only relevant when using Singularity. This is the amount of memory mksquashfs is allowed to use.
- --nvidia-gpu-support (default: false)
- Install all the necessary tooling for Nvidia GPU support
- --debug (default: false)
- Output debug information
+ --nvidia-gpu-support
+ Install all the necessary tooling for Nvidia GPU support (default: do not install Nvidia GPU support)
+ --debug
+ Output debug information (default: no debug output is printed)
For more information, see the Arvados documentation at https://doc.arvados.org/install/crunch2-cloud/install-compute-node.html
AWS_VPC_ID=
AWS_SUBNET_ID=
AWS_EBS_AUTOSCALE=
+AWS_ASSOCIATE_PUBLIC_IP=true
+AWS_ENA_SUPPORT=true
GCP_PROJECT_ID=
GCP_ACCOUNT_FILE=
GCP_ZONE=
NVIDIA_GPU_SUPPORT=
PARSEDOPTS=$(getopt --name "$0" --longoptions \
- help,json-file:,arvados-cluster-id:,aws-source-ami:,aws-profile:,aws-secrets-file:,aws-region:,aws-vpc-id:,aws-subnet-id:,aws-ebs-autoscale,gcp-project-id:,gcp-account-file:,gcp-zone:,azure-secrets-file:,azure-resource-group:,azure-location:,azure-sku:,azure-cloud-environment:,ssh_user:,resolver:,reposuffix:,public-key-file:,mksquashfs-mem:,nvidia-gpu-support,debug \
+ help,json-file:,arvados-cluster-id:,aws-source-ami:,aws-profile:,aws-secrets-file:,aws-region:,aws-vpc-id:,aws-subnet-id:,aws-ebs-autoscale,aws-associate-public-ip:,aws-ena-support:,gcp-project-id:,gcp-account-file:,gcp-zone:,azure-secrets-file:,azure-resource-group:,azure-location:,azure-sku:,azure-cloud-environment:,ssh_user:,resolver:,reposuffix:,public-key-file:,mksquashfs-mem:,nvidia-gpu-support,debug \
-- "" "$@")
if [ $? -ne 0 ]; then
exit 1
--aws-ebs-autoscale)
AWS_EBS_AUTOSCALE=1
;;
+ --aws-associate-public-ip)
+ AWS_ASSOCIATE_PUBLIC_IP="$2"; shift
+ ;;
+ --aws-ena-support)
+ AWS_ENA_SUPPORT="$2"; shift
+ ;;
--gcp-project-id)
GCP_PROJECT_ID="$2"; shift
;;
fi
+AWS=0
EXTRA2=""
if [[ -n "$AWS_SOURCE_AMI" ]]; then
EXTRA2+=" -var aws_source_ami=$AWS_SOURCE_AMI"
+ AWS=1
fi
if [[ -n "$AWS_PROFILE" ]]; then
EXTRA2+=" -var aws_profile=$AWS_PROFILE"
+ AWS=1
fi
if [[ -n "$AWS_VPC_ID" ]]; then
- EXTRA2+=" -var vpc_id=$AWS_VPC_ID -var associate_public_ip_address=true "
+ EXTRA2+=" -var vpc_id=$AWS_VPC_ID"
+ AWS=1
fi
if [[ -n "$AWS_SUBNET_ID" ]]; then
- EXTRA2+=" -var subnet_id=$AWS_SUBNET_ID -var associate_public_ip_address=true "
+ EXTRA2+=" -var subnet_id=$AWS_SUBNET_ID"
+ AWS=1
fi
if [[ -n "$AWS_DEFAULT_REGION" ]]; then
EXTRA2+=" -var aws_default_region=$AWS_DEFAULT_REGION"
+ AWS=1
fi
if [[ -n "$AWS_EBS_AUTOSCALE" ]]; then
EXTRA2+=" -var aws_ebs_autoscale=$AWS_EBS_AUTOSCALE"
+ AWS=1
+fi
+if [[ $AWS -eq 1 ]]; then
+ EXTRA2+=" -var aws_associate_public_ip_address=$AWS_ASSOCIATE_PUBLIC_IP"
+ EXTRA2+=" -var aws_ena_support=$AWS_ENA_SUPPORT"
fi
if [[ -n "$GCP_PROJECT_ID" ]]; then
EXTRA2+=" -var project_id=$GCP_PROJECT_ID"
# Add the arvados signing key
cat /tmp/1078ECD7.asc | $SUDO apt-key add -
-# Add the debian keys
-wait_for_apt_locks && $SUDO DEBIAN_FRONTEND=noninteractive apt-get install --yes debian-keyring debian-archive-keyring
+# Add the debian keys (but don't abort if we can't find them, e.g. on Ubuntu where we don't need them)
+wait_for_apt_locks && $SUDO DEBIAN_FRONTEND=noninteractive apt-get install --yes debian-keyring debian-archive-keyring 2>/dev/null || true
# Fix locale
$SUDO /bin/sed -ri 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
{%- set passenger_mod = '/usr/lib64/nginx/modules/ngx_http_passenger_module.so'
if grains.osfinger in ('CentOS Linux-7',) else
'/usr/lib/nginx/modules/ngx_http_passenger_module.so' %}
-{%- set passenger_ruby = '/usr/local/rvm/rubies/ruby-2.7.2/bin/ruby'
+{%- set passenger_ruby = '/usr/local/rvm/wrappers/default/ruby'
if grains.osfinger in ('CentOS Linux-7', 'Ubuntu-18.04', 'Debian-10') else
'/usr/bin/ruby' %}
{%- set passenger_mod = '/usr/lib64/nginx/modules/ngx_http_passenger_module.so'
if grains.osfinger in ('CentOS Linux-7',) else
'/usr/lib/nginx/modules/ngx_http_passenger_module.so' %}
-{%- set passenger_ruby = '/usr/local/rvm/rubies/ruby-2.7.2/bin/ruby'
+{%- set passenger_ruby = '/usr/local/rvm/wrappers/default/ruby'
if grains.osfinger in ('CentOS Linux-7', 'Ubuntu-18.04', 'Debian-10') else
'/usr/bin/ruby' %}
{%- set passenger_mod = '/usr/lib64/nginx/modules/ngx_http_passenger_module.so'
if grains.osfinger in ('CentOS Linux-7',) else
'/usr/lib/nginx/modules/ngx_http_passenger_module.so' %}
-{%- set passenger_ruby = '/usr/local/rvm/rubies/ruby-2.7.2/bin/ruby'
+{%- set passenger_ruby = '/usr/local/rvm/wrappers/default/ruby'
if grains.osfinger in ('CentOS Linux-7', 'Ubuntu-18.04', 'Debian-10') else
'/usr/bin/ruby' %}
# Please set it to the FULL PATH to the certs dir if you're going to use a different dir
# Default is "${SCRIPT_DIR}/certs", where the variable "SCRIPT_DIR" has the path to the
# directory where the "provision.sh" script was copied in the destination host.
-# CUSTOM_CERTS_DIR="${SCRIPT_DIR}/certs"
+# CUSTOM_CERTS_DIR="${SCRIPT_DIR}/local_config_dir/certs"
# The script expects cert/key files with these basenames (matching the role except for
# keepweb, which is split in both download/collections):
# "controller"
# CUSTOM_CERTS_DIR is only used when SSL_MODE is set to "bring-your-own".
# See https://doc.arvados.org/install/salt-single-host.html#bring-your-own for more information.
-# CUSTOM_CERTS_DIR="${SCRIPT_DIR}/certs"
+# CUSTOM_CERTS_DIR="${SCRIPT_DIR}/local_config_dir/certs"
# The directory to check for the config files (pillars, states) you want to use.
# There are a few examples under 'config_examples'.
# CUSTOM_CERTS_DIR is only used when SSL_MODE is set to "bring-your-own".
# See https://doc.arvados.org/install/salt-single-host.html#bring-your-own for more information.
-# CUSTOM_CERTS_DIR="${SCRIPT_DIR}/certs"
+# CUSTOM_CERTS_DIR="${SCRIPT_DIR}/local_config_dir/certs"
# The directory to check for the config files (pillars, states) you want to use.
# There are a few examples under 'config_examples'.
SSL_MODE="self-signed"
USE_LETSENCRYPT_ROUTE53="no"
-CUSTOM_CERTS_DIR="${SCRIPT_DIR}/certs"
+CUSTOM_CERTS_DIR="${SCRIPT_DIR}/local_config_dir/certs"
## These are ARVADOS-related parameters
# For a stable release, change RELEASE "production" and VERSION to the