---
layout: default
navsection: userguide
title: "Job and Pipeline Reference"
...
h2. Submitting jobs
table(table table-bordered table-condensed).
|_. Attribute |_. Type|_. Accepted values |_. Required|_. Description|
|script |string |filename |yes |The actual script that will be run by crunch. Must be the name of an executable file in the crunch_scripts/ directory at the git revision specified by script_version.|
|script_version |string |git branch, tag, or version hash |yes |The code version to run, which must be available in the specified repository. May be a git commit hash or tag to specify an exact version, or a branch name, in which case the branch head is used.|
|repository |string |name of git repository hosted by Arvados |yes |The repository to search for script_version.|
|script_parameters |object |any JSON object |yes |The input parameters for the job, with the parameter names as keys mapping to parameter values.|
|minimum_script_version |string |git branch, tag, or version hash |no |The minimum acceptable script version when deciding whether to re-use a past job.|
|exclude_script_versions|array of strings|git branch, tag, or version hash|no |Script versions to exclude when deciding whether to re-use a past job.|
|nondeterministic |boolean | |no |If true, never re-use a past job, and flag this job so it will never be considered for re-use by future jobs.|
|no_reuse |boolean | |no |If true, do not re-use a past job for this submission. Unlike 'nondeterministic', this job may still be re-used by future submissions.|
When a job is executed, the 'script_version' field is resolved to an exact git revision, and the commit hash for that revision is recorded back in 'script_version'. If 'script_version' cannot be resolved, the job submission is rejected.
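As an illustration (a hypothetical client-side check, not part of the Arvados API), a submission tool might distinguish a 'script_version' that already names an exact revision from one the server must resolve:

```python
import re

def is_full_git_hash(script_version):
    """Return True if script_version is already an exact 40-character
    SHA-1 commit hash. Anything else (a branch or tag name) must be
    resolved by the server to a commit hash before the job runs."""
    return re.fullmatch(r"[0-9a-f]{40}", script_version) is not None
```

A branch name like "master" fails this check and will be resolved to the branch head at submission time, while a full hash is recorded as-is.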
h3. Reusing jobs
Arvados records the exact script version, input parameters, and runtime environment [1] used to run each job. If the script is deterministic, meaning that the same code version is guaranteed to produce the same outputs from the same inputs, Arvados can therefore re-use the results of past jobs and skip the computation entirely. Arvados uses the following algorithm to decide whether a past job can be re-used:
# If 'nondeterministic' or 'no_reuse' are true, always create a new job.
# Find a list of acceptable values for 'script_version'. If 'minimum_script_version' is specified, this is the set of all revisions in the git commit graph between 'minimum_script_version' and 'script_version' (inclusive) [2]. If 'minimum_script_version' is not specified, only 'script_version' is added to the list. If 'exclude_script_versions' is specified, the listed versions are excluded from the list.
# Select past jobs that have the same 'script' and 'script_parameters' attributes, and whose 'script_version' attribute is in the list of acceptable versions. Exclude jobs that failed or that set 'nondeterministic' to true.
# If there is more than one candidate job, check that all selected past jobs actually did produce the same output.
# If everything passed, re-use one of the selected past jobs (if there is more than one match, which job will be returned is undefined). Otherwise create a new job.
fn1. As of this writing, versioning the runtime environment is still under development.
fn2. This may include parallel branches if there is more than one path between 'minimum_script_version' and 'script_version' in the git commit graph. Use 'exclude_script_versions' to blacklist specific versions.
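The selection steps above can be sketched in Python. This is a simplified model for illustration, not the server's actual implementation; the shape of the request and past-job dictionaries, and the pre-computed list of acceptable versions, are assumptions:

```python
def select_job_for_reuse(request, acceptable_versions, past_jobs):
    """Hypothetical sketch of the job re-use algorithm. `request` is a
    job submission dict; `past_jobs` is a list of dicts with 'script',
    'script_parameters', 'script_version', 'success', 'nondeterministic',
    and 'output' keys. Returns a past job to re-use, or None if a new
    job must be created."""
    # Step 1: never re-use when the request opts out.
    if request.get("nondeterministic") or request.get("no_reuse"):
        return None
    # Step 2: drop explicitly excluded script versions.
    versions = set(acceptable_versions)
    versions -= set(request.get("exclude_script_versions", []))
    # Step 3: candidates must match script, parameters, and version,
    # and must have succeeded without being flagged nondeterministic.
    candidates = [
        j for j in past_jobs
        if j["script"] == request["script"]
        and j["script_parameters"] == request["script_parameters"]
        and j["script_version"] in versions
        and j.get("success")
        and not j.get("nondeterministic")
    ]
    if not candidates:
        return None
    # Step 4: all candidates must agree on the output.
    if len({j["output"] for j in candidates}) > 1:
        return None
    # Step 5: which matching job is returned is undefined; this sketch
    # simply takes the first.
    return candidates[0]
```

When the candidates disagree on output, the script was evidently not deterministic, so a new job is created instead.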
h3. Examples
Run the script "crunch_scripts/hash.py" in the repository "you" using the "master" branch head. Arvados is allowed to re-use a previous job if the script_version of the past job is the same as the "master" branch head (i.e., there have not been any subsequent commits to "master").
<pre>
{
  "script": "hash.py",
  "repository": "you",
  "script_version": "master",
  "script_parameters": {
    "input": "c1bad4b39ca5a924e481008009d94e32+210"
  }
}
</pre>
Run using exactly the version "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5". Arvados is allowed to re-use a previous job if the "script_version" of that job is also "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5".
<pre>
{
  "script": "hash.py",
  "repository": "you",
  "script_version": "d00220fb38d4b85ca8fc28a8151702a2b9d1dec5",
  "script_parameters": {
    "input": "c1bad4b39ca5a924e481008009d94e32+210"
  }
}
</pre>
Arvados is allowed to re-use a previous job if the "script_version" of the past job is between "earlier_version_tag" and the head of the "master" branch (inclusive), but not "blacklisted_version_tag". If there are no previous jobs, run the job using the head of the "master" branch as specified in "script_version".
<pre>
{
  "script": "hash.py",
  "repository": "you",
  "minimum_script_version": "earlier_version_tag",
  "script_version": "master",
  "exclude_script_versions": ["blacklisted_version_tag"],
  "script_parameters": {
    "input": "c1bad4b39ca5a924e481008009d94e32+210"
  }
}
</pre>
Run the script "crunch_scripts/monte-carlo.py" in the repository "you" using the "master" branch head. Because it is marked as "nondeterministic", never re-use previous jobs, and never re-use this job.
<pre>
{
  "script": "monte-carlo.py",
  "repository": "you",
  "script_version": "master",
  "nondeterministic": true,
  "script_parameters": {
    "input": "c1bad4b39ca5a924e481008009d94e32+210"
  }
}
</pre>
h2. Pipelines
Pipelines consist of a set of "components". Each component is an Arvados job submission, so when a component job is submitted, Arvados may re-use past jobs based on the rules described above.
table(table table-bordered table-condensed).
|_. Attribute |_. Type |_. Accepted values |_. Required|_. Description|
|name |string |any |yes |The human-readable name of the pipeline template.|
|components |object |JSON object containing job submission objects|yes |The component jobs that make up the pipeline, with the component name as the key. |
h3. Script parameters
When used in a pipeline, each parameter in the 'script_parameters' attribute of a component job can specify that the input parameter must be supplied by the user, or that the input parameter should be linked to the output of another component. To do this, the value of the parameter should be a JSON object containing one of the following attributes:
table(table table-bordered table-condensed).
|_. Attribute |_. Type |_. Accepted values |_. Description|
|default |any |any |The default value for this parameter.|
|required |boolean |true or false |Specifies whether the parameter is required to have a value or not.|
|dataclass |string |One of 'Collection', 'File' [3], 'number', or 'text' |Data type of this parameter.|
|output_of |string |the name of another component in the pipeline |Specifies that the value of this parameter should be set to the 'output' attribute of the job that corresponds to the specified component.|
The 'output_of' parameter is especially important, as this is how components are actually linked together to form a pipeline. Component jobs that depend on the output of other components do not run until the parent job completes and has produced output. If the parent job fails, the entire pipeline fails.
fn3. The 'File' type refers to a specific file within a Keep collection in the form 'collection_hash/filename', for example '887cd41e9c613463eab2f0d885c6dd96+83/bob.txt'.
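To illustrate how 'output_of' links determine execution order, here is a hypothetical Python sketch (not part of the Arvados API) that groups a pipeline's components into stages, where every component in a stage depends only on components in earlier stages:

```python
def component_run_order(components):
    """Given a pipeline 'components' object, return component names
    grouped into stages. Components in the same stage have no
    'output_of' dependencies on each other and can run in parallel."""
    # Collect each component's dependencies from its 'output_of' links.
    deps = {
        name: {
            p["output_of"]
            for p in job.get("script_parameters", {}).values()
            if isinstance(p, dict) and "output_of" in p
        }
        for name, job in components.items()
    }
    stages, done = [], set()
    while len(done) < len(components):
        # A component is ready once all of its dependencies are done.
        ready = sorted(n for n, d in deps.items()
                       if n not in done and d <= done)
        if not ready:
            raise ValueError("circular 'output_of' reference")
        stages.append(ready)
        done.update(ready)
    return stages
```

Applied to the "Wreck the house" example below, this yields one stage containing "cat_in_the_hat" followed by a stage in which "thing1" and "thing2" run in parallel.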
h3. Examples
This is a pipeline named "Filter md5 hash values" with two components, "do_hash" and "filter". The "input" script parameter of the "do_hash" component is required to be filled in by the user, and the expected data type is "Collection". This also specifies that the "input" script parameter of the "filter" component is the output of "do_hash", so "filter" will not run until "do_hash" completes successfully. When the pipeline runs, past jobs that meet the criteria described above may be substituted for either or both components to avoid redundant computation.
<pre>
{
  "name": "Filter md5 hash values",
  "components": {
    "do_hash": {
      "script": "hash.py",
      "repository": "you",
      "script_version": "master",
      "script_parameters": {
        "input": {
          "required": true,
          "dataclass": "Collection"
        }
      }
    },
    "filter": {
      "script": "0-filter.py",
      "repository": "you",
      "script_version": "master",
      "script_parameters": {
        "input": {
          "output_of": "do_hash"
        }
      }
    }
  }
}
</pre>
This pipeline consists of three components. The components "thing1" and "thing2" both depend on "cat_in_the_hat". Once the "cat_in_the_hat" job is complete, both "thing1" and "thing2" can run in parallel, because they do not depend on each other.
<pre>
{
  "name": "Wreck the house",
  "components": {
    "cat_in_the_hat": {
      "script": "cat.py",
      "repository": "you",
      "script_version": "master",
      "script_parameters": { }
    },
    "thing1": {
      "script": "thing1.py",
      "repository": "you",
      "script_version": "master",
      "script_parameters": {
        "input": {
          "output_of": "cat_in_the_hat"
        }
      }
    },
    "thing2": {
      "script": "thing2.py",
      "repository": "you",
      "script_version": "master",
      "script_parameters": {
        "input": {
          "output_of": "cat_in_the_hat"
        }
      }
    }
  }
}
</pre>
This pipeline consists of three components. The component "cleanup" depends on "thing1" and "thing2". Both "thing1" and "thing2" are started immediately and can run in parallel, because they do not depend on each other, but "cleanup" cannot begin until both "thing1" and "thing2" have completed.
<pre>
{
  "name": "Clean the house",
  "components": {
    "thing1": {
      "script": "thing1.py",
      "repository": "you",
      "script_version": "master",
      "script_parameters": { }
    },
    "thing2": {
      "script": "thing2.py",
      "repository": "you",
      "script_version": "master",
      "script_parameters": { }
    },
    "cleanup": {
      "script": "cleanup.py",
      "repository": "you",
      "script_version": "master",
      "script_parameters": {
        "mess1": {
          "output_of": "thing1"
        },
        "mess2": {
          "output_of": "thing2"
        }
      }
    }
  }
}
</pre>