|==--output-name OUTPUT_NAME==|Name to use for collection that stores the final output.|
|==--output-tags OUTPUT_TAGS==|Tags for the final output collection separated by commas, e.g., =='--output-tags tag0,tag1,tag2'==.|
|==--ignore-docker-for-reuse==|Ignore Docker image version when deciding whether to reuse past containers.|
-|==--submit==| Submit workflow runner to Arvados to manage the workflow (default).|
-|==--local==| Run workflow on local host (still submits containers to Arvados).|
+|==--submit==| Submit workflow to run on Arvados.|
+|==--local==| Run workflow on local host (submits containers to Arvados).|
|==--create-template==| (Deprecated) synonym for --create-workflow.|
|==--create-workflow==| Register an Arvados workflow that can be run from Workbench|
-|==--update-workflow== UUID|Update an existing Arvados workflow or pipeline template with the given UUID.|
+|==--update-workflow== UUID|Update an existing Arvados workflow with the given UUID.|
|==--wait==| After submitting workflow runner, wait for completion.|
|==--no-wait==| Submit workflow runner and exit.|
|==--log-timestamps==| Prefix logging lines with timestamp|
|==--no-log-timestamps==| No timestamp on logging lines|
|==--compute-checksum==| Compute checksum of contents while collecting outputs|
-|==--submit-runner-ram== SUBMIT_RUNNER_RAM|RAM (in MiB) required for the workflow runner (default 1024)|
-|==--submit-runner-image== SUBMIT_RUNNER_IMAGE|Docker image for workflow runner|
+|==--submit-runner-ram== SUBMIT_RUNNER_RAM|RAM (in MiB) required for the workflow runner job (default 1024)|
+|==--submit-runner-image== SUBMIT_RUNNER_IMAGE|Docker image for workflow runner job|
|==--always-submit-runner==|When invoked with --submit --wait, always submit a runner to manage the workflow, even when only running a single CommandLineTool|
|==--match-submitter-images==|Where Arvados has more than one Docker image of the same name, use image from the Docker instance on the submitting node.|
|==--submit-request-uuid== UUID|Update and commit to supplied container request instead of creating a new one.|
|==--submit-runner-cluster== CLUSTER_ID|Submit workflow runner to a remote cluster|
-|==--name NAME==|Name to use for workflow execution instance.|
+|==--collection-cache-size== COLLECTION_CACHE_SIZE|Collection cache size (in MiB, default 256).|
+|==--name== NAME|Name to use for workflow execution instance.|
|==--on-error== {stop,continue}|Desired workflow behavior when a step fails. One of 'stop' (do not submit any more steps) or 'continue' (may submit other steps that are not downstream from the error). Default is 'continue'.|
-|==--enable-dev==|Enable loading and running development versions of CWL spec.|
-|==--storage-classes== STORAGE_CLASSES|Specify comma separated list of storage classes to be used when saving the final workflow output to Keep.|
-|==--intermediate-storage-classes== STORAGE_CLASSES|Specify comma separated list of storage classes to be used when intermediate workflow output to Keep.|
+|==--enable-dev==|Enable loading and running development versions of the CWL standards.|
+|==--storage-classes== STORAGE_CLASSES|Specify comma separated list of storage classes to be used when saving final workflow output to Keep.|
+|==--intermediate-storage-classes== INTERMEDIATE_STORAGE_CLASSES|Specify comma separated list of storage classes to be used when saving intermediate workflow output to Keep.|
|==--intermediate-output-ttl== N|If N > 0, intermediate output collections will be trashed N seconds after creation. Default is 0 (don't trash).|
|==--priority== PRIORITY|Workflow priority (range 1..1000, higher has precedence over lower)|
-|==--thread-count== THREAD_COUNT|Number of threads to use for container submit and output collection.|
+|==--thread-count== THREAD_COUNT|Number of threads to use for job submit and output collection.|
|==--http-timeout== HTTP_TIMEOUT|API request timeout in seconds. Default is 300 seconds (5 minutes).|
-|==--enable-preemptible==|Use preemptible instances. Control individual steps with "arv:UsePreemptible":cwl-extensions.html#UsePreemptible hint.|
+|==--defer-downloads==|When submitting a workflow, defer downloading HTTP URLs to workflow launch instead of downloading to Keep before submit.|
+|==--varying-url-params== VARYING_URL_PARAMS|A comma separated list of URL query parameters that should be ignored when storing HTTP URLs in Keep.|
+|==--prefer-cached-downloads==|If a HTTP URL is found in Keep, skip upstream URL freshness check (will not notice if the upstream has changed, but also not error if upstream is unavailable).|
+|==--enable-preemptible==|Use preemptible instances. Control individual steps with arv:UsePreemptible hint.|
|==--disable-preemptible==|Don't use preemptible instances.|
-|==--skip-schemas==|Skip loading of extension schemas (the $schemas section).|
+|==--copy-deps==| Copy dependencies into the destination project.|
+|==--no-copy-deps==| Leave dependencies where they are.|
+|==--skip-schemas==| Skip loading of schemas|
|==--trash-intermediate==|Immediately trash intermediate outputs on workflow success.|
|==--no-trash-intermediate==|Do not trash intermediate outputs (default).|
h3. Build a reusable library of components
-Build a reusable library of components. Share tool wrappers and subworkflows between projects. Make use of and contribute to "community maintained workflows and tools":https://github.com/common-workflow-library and tool registries such as "Dockstore":http://dockstore.org .
+Share tool wrappers and subworkflows between projects. Make use of and contribute to "community maintained workflows and tools":https://github.com/common-workflow-library and tool registries such as "Dockstore":http://dockstore.org .
h3. Supply scripts as input parameters
You can get the designated temporary directory using @$(runtime.tmpdir)@ in your CWL file, or from the @$TMPDIR@ environment variable in your script.
-Similarly, you can get the designated output directory using $(runtime.outdir), or from the @HOME@ environment variable in your script.
+Similarly, you can get the designated output directory using @$(runtime.outdir)@, or from the @HOME@ environment variable in your script.
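As a minimal sketch (the input data and file names here are hypothetical), a wrapped shell script can use these directories directly: scratch work goes in @$TMPDIR@ and final results go in the output directory given by @$HOME@:

```shell
#!/bin/sh
# Hypothetical script body for a wrapped CommandLineTool step.
# The runtime provides TMPDIR (scratch space) and HOME (output directory),
# as described above; the fallbacks are only for running this sketch standalone.
set -e
scratch="${TMPDIR:-/tmp}"
outdir="${HOME:-$PWD}"

printf 'banana\napple\n' > "$scratch/input.txt"    # stand-in input data
sort "$scratch/input.txt" > "$scratch/sorted.txt"  # intermediate work in scratch space
cp "$scratch/sorted.txt" "$outdir/results.txt"     # final result in the output directory
```

Only files written to the output directory are collected as workflow output; anything left in the scratch directory is discarded when the step finishes.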
h3. Specifying @ResourceRequirement@
coresMin: 2
tmpdirMin: 90000
{% endcodeblock %}
+
+h3. Importing data into Keep
+
+You can use HTTP URLs as File input parameters and @arvados-cwl-runner@ will download them to Keep for you:
+
+{% codeblock as yaml %}
+fastq1:
+ class: File
+ location: https://example.com/genomes/sampleA_1.fastq
+fastq2:
+ class: File
+ location: https://example.com/genomes/sampleA_2.fastq
+{% endcodeblock %}
+
+Files are downloaded and stored in Keep collections, with HTTP header information stored in collection metadata. If a file was previously downloaded, @arvados-cwl-runner@ uses HTTP caching rules to decide whether the file should be re-downloaded.
+
+The default behavior is to transfer the files on the client, prior to submitting the workflow run. This guarantees the data is available when the workflow is submitted. However, if data transfer is time consuming and you are submitting multiple workflow runs in a row, or the node submitting the workflow has limited bandwidth, you can use the @--defer-downloads@ option to have the data transfer performed by the workflow runner process on a compute node, after the workflow is submitted.
+
+@arvados-cwl-runner@ provides two additional options to control caching behavior.
+
+* @--varying-url-params@ will ignore the listed URL query parameters when checking whether an HTTP URL has already been downloaded to Keep.
+* @--prefer-cached-downloads@ will search Keep for the previously downloaded URL and use that if found, without checking the upstream resource. This means changes in the upstream resource won't be detected, but it also means the workflow will not fail if the upstream resource becomes inaccessible.
+
+One use of this is to import files from "AWS S3 signed URLs":https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html .
+
+Here is an example usage. The use of @--varying-url-params=AWSAccessKeyId,Signature,Expires@ is especially relevant: it removes these parameters from the cached URL, so if a new signed URL for the same object is generated later, it can still be found in the cache.
+
+{% codeblock as sh %}
+arvados-cwl-runner --defer-download \
+ --varying-url-params=AWSAccessKeyId,Signature,Expires \
+ --prefer-cached-downloads \
+ workflow.cwl params.yml
+{% endcodeblock %}
c1.save_new()
loc = c1.manifest_locator()
c2 = Collection(loc)
- self.assertEqual(c1.manifest_text, c2.manifest_text)
+ self.assertEqual(c1.manifest_text(strip=True), c2.manifest_text(strip=True))
self.assertEqual(c1.replication_desired, c2.replication_desired)
def test_replication_desired_not_loaded_if_provided(self):
c1.save_new()
loc = c1.manifest_locator()
c2 = Collection(loc, replication_desired=2)
- self.assertEqual(c1.manifest_text, c2.manifest_text)
+ self.assertEqual(c1.manifest_text(strip=True), c2.manifest_text(strip=True))
self.assertNotEqual(c1.replication_desired, c2.replication_desired)
def test_storage_classes_desired_kept_on_load(self):
c1.save_new()
loc = c1.manifest_locator()
c2 = Collection(loc)
- self.assertEqual(c1.manifest_text, c2.manifest_text)
+ self.assertEqual(c1.manifest_text(strip=True), c2.manifest_text(strip=True))
self.assertEqual(c1.storage_classes_desired(), c2.storage_classes_desired())
def test_storage_classes_change_after_save(self):
c2.save(storage_classes=['highIO'])
self.assertEqual(['highIO'], c2.storage_classes_desired())
c3 = Collection(loc)
- self.assertEqual(c1.manifest_text, c3.manifest_text)
+ self.assertEqual(c1.manifest_text(strip=True), c3.manifest_text(strip=True))
self.assertEqual(['highIO'], c3.storage_classes_desired())
def test_storage_classes_desired_not_loaded_if_provided(self):
c1.save_new()
loc = c1.manifest_locator()
c2 = Collection(loc, storage_classes_desired=['default'])
- self.assertEqual(c1.manifest_text, c2.manifest_text)
+ self.assertEqual(c1.manifest_text(strip=True), c2.manifest_text(strip=True))
self.assertNotEqual(c1.storage_classes_desired(), c2.storage_classes_desired())
def test_init_manifest(self):
ARVADOS_ROOT="$ARVBOX_DATA/arvados"
fi
-if test -z "$COMPOSER_ROOT" ; then
- COMPOSER_ROOT="$ARVBOX_DATA/composer"
-fi
-
if test -z "$WORKBENCH2_ROOT" ; then
WORKBENCH2_ROOT="$ARVBOX_DATA/workbench2"
fi
docker_run_dev() {
docker run \
"--volume=$ARVADOS_ROOT:/usr/src/arvados:rw" \
- "--volume=$COMPOSER_ROOT:/usr/src/composer:rw" \
"--volume=$WORKBENCH2_ROOT:/usr/src/workbench2:rw" \
"--volume=$PG_DATA:/var/lib/postgresql:rw" \
"--volume=$VAR_DATA:$ARVADOS_CONTAINER_PATH:rw" \
git clone https://git.arvados.org/arvados.git "$ARVADOS_ROOT"
git -C "$ARVADOS_ROOT" checkout $ARVADOS_BRANCH
fi
- if ! test -d "$COMPOSER_ROOT" ; then
- git clone https://github.com/arvados/composer.git "$COMPOSER_ROOT"
- git -C "$COMPOSER_ROOT" checkout arvados-fork
- fi
if ! test -d "$WORKBENCH2_ROOT" ; then
git clone https://git.arvados.org/arvados-workbench2.git "$WORKBENCH2_ROOT"
        git -C "$WORKBENCH2_ROOT" checkout $WORKBENCH2_BRANCH
"$ARVBOX_BASE/$1/gopath" \
"$ARVBOX_BASE/$1/Rlibs" \
"$ARVBOX_BASE/$1/arvados" \
- "$ARVBOX_BASE/$1/composer" \
"$ARVBOX_BASE/$1/workbench2" \
"$ARVBOX_BASE/$2"
echo "Created new arvbox $2"
export GEM_HOME=$HOME/.gem
GEMLOCK=$HOME/gems.lock
+export LANG=en_US.UTF-8
+export LANGUAGE=en_US:en
+export LC_ALL=en_US.UTF-8
+
defaultdev=$(/sbin/ip route|awk '/default/ { print $5 }')
dockerip=$(/sbin/ip route | grep default | awk '{ print $3 }')
containerip=$(ip addr show $defaultdev | grep 'inet ' | sed 's/ *inet \(.*\)\/.*/\1/')
echo
echo "Your Arvados-in-a-box is ready!"
-echo "Workbench is running at https://$localip"
-echo "Workbench2 is running at https://$localip:${services[workbench2-ssl]}"
+echo "Workbench is hosted at https://$localip"
+echo "Workbench2 is hosted at https://$localip:${services[workbench2-ssl]}"
+echo "Documentation is hosted at http://$localip:${services[doc]}"
rm -r /tmp/arvbox-ready
download_cache = /var/lib/pip
EOF
+cd /usr/src/arvados/sdk/ruby
+run_bundler --binstubs=binstubs
+
cd /usr/src/arvados/sdk/cli
run_bundler --binstubs=binstubs