title: "arv subcommands"

_In order to use the @arv@ command, make sure that you have a "working environment.":{{site.baseurl}}/user/getting_started/check-environment.html_

h3(#arv-create). arv create
@arv create@ can be used to create Arvados objects from the command line. It opens the editor of your choice (set the @EDITOR@ environment variable) and lets you type or paste a JSON or YAML description. When saved, the object is created on the API server, provided it passes validation.
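For example, a session that creates a new collection in a project might look like this (the project UUID below is a made-up placeholder):

```shell
# Open $EDITOR (nano here) on an empty object description; on save, arv
# submits it to the API server. "collection" is the object type to create.
$ EDITOR=nano arv create --project-uuid zzzzz-j7d0g-xxxxxxxxxxxxxxx collection
```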
$ <code class="userinput">arv create --help</code>
Options:
  --project-uuid, -p <s>:   Project uuid in which to create the object
  --help, -h:   Show this message
h3(#arv-edit). arv edit
@arv edit@ can be used to edit Arvados objects from the command line. It opens the editor of your choice (set the @EDITOR@ environment variable) with the JSON or YAML description of the object. Saving the file updates the Arvados object on the API server, provided it passes validation.
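For example, to pull up just the name and description fields of a collection in your editor (the UUID is a placeholder):

```shell
# Edit only the selected fields; the rest of the object is left untouched.
$ arv edit zzzzz-4zz18-xxxxxxxxxxxxxxx name description
```

Editing a subset of fields like this avoids accidental changes to the rest of the object.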
$ <code class="userinput">arv edit --help</code>
Arvados command line client
Usage: arv edit [uuid] [fields...]

Fetch the specified Arvados object, select the specified fields,
open an interactive text editor on a text representation (json or
yaml, use --format) and then update the object.  Will use 'nano'
by default, customize with the EDITOR or VISUAL environment variable.
h3(#arv-copy). arv copy
@arv copy@ can be used to copy a pipeline instance, template, or collection from one Arvados instance to another. It takes care of copying the object and all of its dependencies.
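A typical invocation might look like the following, where @osprey@ and @pelican@ name config files under @$HOME/.config/arvados/@ and both UUIDs are placeholders:

```shell
# Copy a collection from the "osprey" cluster into a project on "pelican",
# along with its dependencies (--recursive is the default).
$ arv copy --src osprey --dst pelican \
    --project-uuid pelican-j7d0g-xxxxxxxxxxxxxxx \
    osprey-4zz18-xxxxxxxxxxxxxxx
```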
$ <code class="userinput">arv copy --help</code>
usage: arv-copy [-h] [-v] [--progress] [--no-progress] [-f] --src
                SOURCE_ARVADOS --dst DESTINATION_ARVADOS [--recursive]
                [--no-recursive] [--dst-git-repo DST_GIT_REPO]
                [--project-uuid PROJECT_UUID] [--retries RETRIES]
                object_uuid

Copy a pipeline instance, template or collection from one Arvados instance to
another.

positional arguments:
  object_uuid           The UUID of the object to be copied.

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Verbose output.
  --progress            Report progress on copying collections. (default)
  --no-progress         Do not report progress on copying collections.
  -f, --force           Perform copy even if the object appears to exist at
                        the remote destination.
  --src SOURCE_ARVADOS  The name of the source Arvados instance (required).
                        May be either a pathname to a config file, or the
                        basename of a file in
                        $HOME/.config/arvados/instance_name.conf.
  --dst DESTINATION_ARVADOS
                        The name of the destination Arvados instance
                        (required). May be either a pathname to a config file,
                        or the basename of a file in
                        $HOME/.config/arvados/instance_name.conf.
  --recursive           Recursively copy any dependencies for this object.
                        (default)
  --no-recursive        Do not copy any dependencies. NOTE: if this option is
                        given, the copied object will need to be updated
                        manually in order to be functional.
  --dst-git-repo DST_GIT_REPO
                        The name of the destination git repository. Required
                        when copying a pipeline recursively.
  --project-uuid PROJECT_UUID
                        The UUID of the project at the destination to which
                        the pipeline should be copied.
  --retries RETRIES     Maximum number of times to retry server requests that
                        encounter temporary failures (e.g., server down).
h3(#arv-tag). arv tag

@arv tag@ is used to tag Arvados objects.
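For example, to apply the same tag to two collections at once (hypothetical UUIDs):

```shell
# Multiple object UUIDs may follow a single --object flag.
$ arv tag add validated --object zzzzz-4zz18-xxxxxxxxxxxxxxx zzzzz-4zz18-yyyyyyyyyyyyyyy
```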
$ <code class="userinput">arv tag --help</code>

Usage:
arv tag add tag1 [tag2 ...] --object object_uuid1 [object_uuid2...]
arv tag remove tag1 [tag2 ...] --object object_uuid1 [object_uuid2...]

Options:
  --dry-run, -n:   Don't actually do anything
  --verbose, -v:   Print some things on stderr
  --uuid, -u:   Return the UUIDs of the objects in the response, one per
                line (default)
  --json, -j:   Return the entire response received from the API server, as
                a JSON object
  --human, -h:   Return the response received from the API server, as a JSON
                 object with whitespace added for human consumption
  --pretty, -p:   Synonym of --human
  --yaml, -y:   Return the response received from the API server, in YAML
                format
  --help, -e:   Show this message
h3(#arv-ws). arv ws

@arv ws@ provides access to the websockets event stream.
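For example, to follow events for a single object (hypothetical UUID):

```shell
# Prints a stream of JSON log events filtered on the given object_uuid;
# falls back to polling if websockets are unavailable.
$ arv ws -u zzzzz-8i9sb-xxxxxxxxxxxxxxx
```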
$ <code class="userinput">arv ws --help</code>
usage: arv-ws [-h] [-u UUID] [-f FILTERS]
              [--poll-interval POLL_INTERVAL | --no-poll]
              [-p PIPELINE | -j JOB]

optional arguments:
  -h, --help            show this help message and exit
  -u UUID, --uuid UUID  Filter events on object_uuid
  -f FILTERS, --filters FILTERS
                        Arvados query filter to apply to log events (JSON
                        encoded)
  --poll-interval POLL_INTERVAL
                        If websockets is not available, specify the polling
                        interval, default is every 15 seconds
  --no-poll             Do not poll if websockets are not available, just fail
  -p PIPELINE, --pipeline PIPELINE
                        Supply pipeline uuid, print log output from pipeline
                        and its jobs
  -j JOB, --job JOB     Supply job uuid, print log output from jobs
h3(#arv-keep). arv keep
@arv keep@ provides access to the Keep storage service.
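For example, @arv keep ls@ lists the files in a collection (the portable data hash below is a placeholder):

```shell
$ arv keep ls c1bad4b39ca5a924e481008009d94e32+210
```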
$ <code class="userinput">arv keep --help</code>
Usage: arv keep [method] [--parameters]
Use 'arv keep [method] --help' to get more information about specific methods.

Available methods: ls, get, put, less, check, docker
h3(#arv-keep-put). arv keep put
$ <code class="userinput">arv keep put --help</code>
usage: arv-put [-h] [--max-manifest-depth N | --normalize]
               [--as-stream | --stream | --as-manifest | --in-manifest | --manifest | --as-raw | --raw]
               [--use-filename FILENAME] [--filename FILENAME]
               [--portable-data-hash] [--project-uuid UUID] [--name NAME]
               [--progress | --no-progress | --batch-progress]
               [--resume | --no-resume] [--retries RETRIES]
               [path [path ...]]

Copy data from the local filesystem to Keep.

positional arguments:
  path                  Local file or directory. Default: read from standard
                        input.

optional arguments:
  -h, --help            show this help message and exit
  --max-manifest-depth N
                        Maximum depth of directory tree to represent in the
                        manifest structure. A directory structure deeper than
                        this will be represented as a single stream in the
                        manifest. If N=0, the manifest will contain a single
                        stream. Default: -1 (unlimited), i.e., exactly one
                        manifest stream per filesystem directory that contains
                        files.
  --normalize           Normalize the manifest by re-ordering files and
                        streams after writing data.
  --as-stream           Synonym for --stream.
  --stream              Store the file content and display the resulting
                        manifest on stdout. Do not write the manifest to Keep
                        or save a Collection object in Arvados.
  --as-manifest         Synonym for --manifest.
  --in-manifest         Synonym for --manifest.
  --manifest            Store the file data and resulting manifest in Keep,
                        save a Collection object in Arvados, and display the
                        manifest locator (Collection uuid) on stdout. This is
                        the default behavior.
  --as-raw              Synonym for --raw.
  --raw                 Store the file content and display the data block
                        locators on stdout, separated by commas, with a
                        trailing newline. Do not store a manifest.
  --use-filename FILENAME
                        Synonym for --filename.
  --filename FILENAME   Use the given filename in the manifest, instead of the
                        name of the local file. This is useful when "-" or
                        "/dev/stdin" is given as an input file. It can be used
                        only if there is exactly one path given and it is not
                        a directory. Implies --manifest.
  --portable-data-hash  Print the portable data hash instead of the Arvados
                        UUID for the collection created by the upload.
  --project-uuid UUID   Store the collection in the specified project, instead
                        of your Home project.
  --name NAME           Save the collection with the specified name.
  --progress            Display human-readable progress on stderr (bytes and,
                        if possible, percentage of total data size). This is
                        the default behavior when stderr is a tty.
  --no-progress         Do not display human-readable progress on stderr, even
                        if stderr is a tty.
  --batch-progress      Display machine-readable progress on stderr (bytes
                        and, if known, total data size).
  --resume              Continue interrupted uploads from cached state
                        (default).
  --no-resume           Do not continue interrupted uploads from cached state.
  --retries RETRIES     Maximum number of times to retry server requests that
                        encounter temporary failures (e.g., server down).
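A note on @--portable-data-hash@: the portable data hash is the MD5 digest of the collection's manifest text, followed by @+@ and the manifest's size in bytes. A minimal sketch of that computation, using a hand-written one-file manifest:

```shell
# A one-line manifest describing a single empty file. The locator
# d41d8cd98f00b204e9800998ecf8427e+0 is a zero-length block (MD5 of "").
manifest='. d41d8cd98f00b204e9800998ecf8427e+0 0:0:empty.txt
'
# portable data hash = md5(manifest text) + "+" + manifest length in bytes
pdh="$(printf '%s' "$manifest" | md5sum | awk '{print $1}')+${#manifest}"
echo "$pdh"
```

Because the hash covers only the manifest text, the same data uploaded to two different clusters yields the same portable data hash, even though the Collection UUIDs differ.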
h3(#arv-keep-get). arv keep get
$ <code class="userinput">arv keep get --help</code>
usage: arv-get [-h] [--retries RETRIES]
               [--progress | --no-progress | --batch-progress]
               [--hash HASH | --md5sum] [-n] [-r] [-f | --skip-existing]
               locator [destination]

Copy data from Keep to a local file or pipe.

positional arguments:
  locator            Collection locator, optionally with a file path or
                     prefix.
  destination        Local file or directory where the data is to be written.
                     Default: /dev/stdout.

optional arguments:
  -h, --help         show this help message and exit
  --retries RETRIES  Maximum number of times to retry server requests that
                     encounter temporary failures (e.g., server down). Default
                     3.
  --progress         Display human-readable progress on stderr (bytes and, if
                     possible, percentage of total data size). This is the
                     default behavior when it is not expected to interfere
                     with the output: specifically, stderr is a tty _and_
                     either stdout is not a tty, or output is being written to
                     named files rather than stdout.
  --no-progress      Do not display human-readable progress on stderr.
  --batch-progress   Display machine-readable progress on stderr (bytes and,
                     if known, total data size).
  --hash HASH        Display the hash of each file as it is read from Keep,
                     using the given hash algorithm. Supported algorithms
                     include md5, sha1, sha224, sha256, sha384, and sha512.
  --md5sum           Display the MD5 hash of each file as it is read from
                     Keep.
  -n                 Do not write any data -- just read from Keep, and report
                     md5sums if requested.
  -r                 Retrieve all files in the specified collection/prefix.
                     This is the default behavior if the "locator" argument
                     ends with a forward slash.
  -f                 Overwrite existing files while writing. The default
                     behavior is to refuse to write *anything* if any of the
                     output files already exist. As a special case, -f is not
                     needed to write to /dev/stdout.
  --skip-existing    Skip files that already exist. The default behavior is to
                     refuse to write *anything* if any files exist that would
                     have to be overwritten. This option causes even devices,
                     sockets, and fifos to be skipped.
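For example, to fetch a single file from a collection into the current directory (the locator and file name are placeholders):

```shell
# Retrieve one file by collection locator + path; end the locator with a
# slash instead to fetch the entire collection (-r behavior).
$ arv keep get c1bad4b39ca5a924e481008009d94e32+210/input.tsv .
```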
h3(#arv-pipeline-run). arv pipeline run
@arv pipeline run@ can be used to start a pipeline run from the command line.

The User Guide has a page with a bit more information on "using arv pipeline run":{{site.baseurl}}/user/topics/running-pipeline-command-line.html.
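A typical run from a stored template might look like this (the template UUID, the @do_hash@ component name, and the input locator are hypothetical):

```shell
# Start a pipeline from a template, supplying one component input parameter
# as component::parameter=value.
$ arv pipeline run --template zzzzz-p5p6p-xxxxxxxxxxxxxxx \
    do_hash::input=c1bad4b39ca5a924e481008009d94e32+210
```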
$ <code class="userinput">arv pipeline run --help</code>

  --dry-run, -n:   Do not start any new jobs or wait for existing jobs to
                   finish. Just find out whether jobs are finished,
                   queued, or running for each component.
  --status-text <s>:   Store plain text status in given file. (Default:
                       /dev/stdout)
  --status-json <s>:   Store json-formatted pipeline in given file. (Default:
                       /dev/null)
  --no-wait:   Do not wait for jobs to finish. Just look up status,
               submit new jobs if needed, and exit.
  --no-reuse:   Do not reuse existing jobs to satisfy pipeline
                components. Submit a new job for every component.
  --debug, -d:   Print extra debugging information on stderr.
  --debug-level <i>:   Set debug verbosity level.
  --template <s>:   UUID of pipeline template, or path to local pipeline
                    template file.
  --instance <s>:   UUID of pipeline instance.
  --submit:   Submit the pipeline instance to the server, and exit.
              Let the Crunch dispatch service satisfy the components
              by finding/running jobs.
  --run-pipeline-here:   Manage the pipeline instance in-process. Submit jobs
                         to Crunch as needed. Do not exit until the pipeline
                         finishes.
  --run-jobs-here:   Run jobs in the local terminal session instead of
                     submitting them to Crunch. Implies
                     --run-pipeline-here. Note: this results in a
                     significantly different job execution environment, and
                     some Crunch features are not supported. It can be
                     necessary to modify a pipeline in order to make it run
                     this way.
  --run-here:   Synonym for --run-jobs-here.
  --description <s>:   Description for the pipeline instance.
  --version, -v:   Print version and exit
  --help, -h:   Show this message
h3(#arv-run). arv run
The @arv-run@ command creates Arvados pipelines at the command line that fan out to multiple concurrent tasks across Arvados compute nodes.

The User Guide has a page on "using arv-run":{{site.baseurl}}/user/topics/arv-run.html.
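The fan-out pattern described in the User Guide looks like this (the search string and input file names are placeholders):

```shell
# One task per matching *.fa input file; the escaped \< and \> are
# redirections that happen inside each task, reading from and writing
# to Keep rather than the local shell.
$ arv run grep -H -n GCTACCAAGTTT \< *.fa \> output.txt
```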
$ <code class="userinput">arv run --help</code>
usage: arv-run [-h] [--retries RETRIES] [--dry-run] [--local]
               [--docker-image DOCKER_IMAGE] [--ignore-rcode] [--no-reuse]
               [--no-wait] [--project-uuid PROJECT_UUID] [--git-dir GIT_DIR]
               [--repository REPOSITORY] [--script-version SCRIPT_VERSION]
               ...

positional arguments:
  args

optional arguments:
  -h, --help            show this help message and exit
  --retries RETRIES     Maximum number of times to retry server requests that
                        encounter temporary failures (e.g., server down).
  --dry-run             Print out the pipeline that would be submitted and
                        exit
  --local               Run locally using arv-run-pipeline-instance
  --docker-image DOCKER_IMAGE
                        Docker image to use, default arvados/jobs
  --ignore-rcode        Commands that return non-zero return codes should not
                        be considered failed.
  --no-reuse            Do not reuse past jobs.
  --no-wait             Do not wait and display logs after submitting command,
                        just exit.
  --project-uuid PROJECT_UUID
                        Parent project of the pipeline
  --git-dir GIT_DIR     Git repository passed to arv-crunch-job when using
                        --local
  --repository REPOSITORY
                        repository field of component, default 'arvados'
  --script-version SCRIPT_VERSION
                        script_version field of component, default 'master'