---
layout: default
navsection: userguide
title: "Running a Crunch job on the command line"
...

This tutorial introduces how to run individual Crunch jobs using the @arv@ command line tool.

{% include 'tutorial_expectations' %}

You will create a job to run the "hash" Crunch script. The "hash" script computes the MD5 hash of each file in a collection.

h2. Jobs

Crunch pipelines consist of one or more jobs. A "job" is a single run of a specific version of a Crunch script with a specific input. You can also run jobs individually.

A request to run a Crunch job is described using a JSON object. For example:
~$ cat >~/the_job <<EOF
{
 "script": "hash",
 "repository": "arvados",
 "script_version": "master",
 "script_parameters": {
  "input": "c1bad4b39ca5a924e481008009d94e32+210"
 },
 "no_reuse": "true"
}
EOF
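If you want to double-check that the file you just created contains valid JSON before submitting it, one quick way (assuming Python is available on your shell VM; any JSON validator will do) is:

~$ python -m json.tool ~/the_job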
* @cat@ is a standard Unix utility that writes a sequence of input to standard output.
* @>~/the_job@ redirects standard output to a file called @~/the_job@.
* @"repository"@ is the name of a Git repository to search for the script version. You can access a list of available Git repositories on the Arvados Workbench under "*Code repositories*":https://{{site.arvados_workbench_host}}/repositories.
* @"script_version"@ specifies the version of the script that you wish to run. This can be an explicit Git revision hash, a tag, or a branch. Arvados logs the script version used in each run, so you can go back and re-run any past job with the guarantee that the exact same code will be used as in the original run.
* @"script"@ specifies the name of the script to run. The script must be given relative to the @crunch_scripts/@ subdirectory of the Git repository.
* @"script_parameters"@ are provided to the script. In this case, the input is the PGP data collection that we "put in Keep earlier":{{site.baseurl}}/user/tutorials/tutorial-keep.html.
* Setting the @"no_reuse"@ flag tells Crunch not to reuse work from past jobs. This helps ensure that you can watch a new job process for the rest of this tutorial, without reusing output from a past run that you made, or that somebody else marked as public. (If you want to experiment, after the first run below finishes, feel free to edit this job to remove the @"no_reuse"@ line and resubmit it. See what happens!)

Use @arv job create@ to actually submit the job. It should print out a JSON object which describes the newly created job:
~$ arv job create --job "$(cat ~/the_job)"
{
 "href":"https://qr1hi.arvadosapi.com/arvados/v1/jobs/qr1hi-8i9sb-1pm1t02dezhupss",
 "kind":"arvados#job",
 "etag":"ax3cn7w9whq2hdh983yxvq09p",
 "uuid":"qr1hi-8i9sb-1pm1t02dezhupss",
 "owner_uuid":"qr1hi-tpzed-9zdpkpni2yddge6",
 "created_at":"2013-12-16T20:44:32Z",
 "modified_by_client_uuid":"qr1hi-ozdt8-obw7foaks3qjyej",
 "modified_by_user_uuid":"qr1hi-tpzed-9zdpkpni2yddge6",
 "modified_at":"2013-12-16T20:44:32Z",
 "updated_at":"2013-12-16T20:44:33Z",
 "submit_id":null,
 "priority":null,
 "script":"hash",
 "script_parameters":{
  "input":"c1bad4b39ca5a924e481008009d94e32+210"
 },
 "script_version":"d9cd657b733d578ac0d2167dd75967aa4f22e0ac",
 "cancelled_at":null,
 "cancelled_by_client_uuid":null,
 "cancelled_by_user_uuid":null,
 "started_at":null,
 "finished_at":null,
 "output":null,
 "success":null,
 "running":null,
 "is_locked_by_uuid":null,
 "log":null,
 "runtime_constraints":{},
 "tasks_summary":{},
 "dependencies":[
  "c1bad4b39ca5a924e481008009d94e32+210"
 ]
}
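The examples below use a placeholder UUID (@qr1hi-8i9sb-xxxxxxxxxxxxxxx@); substitute the @"uuid"@ value from your own response. If you are scripting job submission, you can also capture the UUID at create time. A minimal sketch, assuming @jq@ is installed and @arv@ prints plain JSON as shown above:

~$ JOB_UUID=$(arv job create --job "$(cat ~/the_job)" | jq -r .uuid)

Note that each invocation of @arv job create@ submits a new job, so for this tutorial you can simply copy the @"uuid"@ value from the response above.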
The job is now queued and will start running as soon as it reaches the front of the queue. Fields to pay attention to include:

* @"uuid"@ is the unique identifier for this specific job.
* @"script_version"@ is the actual revision of the script used. This is useful if the version was described using the "repository:branch" format.

h2. Monitor job progress

Go to "*Recent jobs*":https://{{site.arvados_workbench_host}}/jobs in the Workbench. Your job should be near the top of the table. This table refreshes automatically. When the job has completed successfully, it will show finished in the *Status* column.

h2. Inspect the job output

On the "Workbench dashboard":https://{{site.arvados_workbench_host}}, look for the *Output* column of the *Recent jobs* table. Click on the link under *Output* for your job to go to the files page with the job output. The files page lists all the files that were output by the job. Click on the link under the *file* column to view a file, or click on the download icon to download the output file.

On the command line, you can use @arv job get@ to access a JSON object describing the output:
~$ arv job get --uuid qr1hi-8i9sb-xxxxxxxxxxxxxxx
{
 "href":"https://qr1hi.arvadosapi.com/arvados/v1/jobs/qr1hi-8i9sb-1pm1t02dezhupss",
 "kind":"arvados#job",
 "etag":"1bk98tdj0qipjy0rvrj03ta5r",
 "uuid":"qr1hi-8i9sb-1pm1t02dezhupss",
 "owner_uuid":"qr1hi-tpzed-9zdpkpni2yddge6",
 "created_at":"2013-12-16T20:44:32Z",
 "modified_by_client_uuid":null,
 "modified_by_user_uuid":"qr1hi-tpzed-9zdpkpni2yddge6",
 "modified_at":"2013-12-16T20:44:55Z",
 "updated_at":"2013-12-16T20:44:55Z",
 "submit_id":null,
 "priority":null,
 "script":"hash",
 "script_parameters":{
  "input":"c1bad4b39ca5a924e481008009d94e32+210"
 },
 "script_version":"d9cd657b733d578ac0d2167dd75967aa4f22e0ac",
 "cancelled_at":null,
 "cancelled_by_client_uuid":null,
 "cancelled_by_user_uuid":null,
 "started_at":"2013-12-16T20:44:36Z",
 "finished_at":"2013-12-16T20:44:53Z",
 "output":"dd755dbc8d49a67f4fe7dc843e4f10a6+54",
 "success":true,
 "running":false,
 "is_locked_by_uuid":"qr1hi-tpzed-9zdpkpni2yddge6",
 "log":"2afdc6c8b67372ffd22d8ce89d35411f+91",
 "runtime_constraints":{},
 "tasks_summary":{
  "done":2,
  "running":0,
  "failed":0,
  "todo":0
 },
 "dependencies":[
  "c1bad4b39ca5a924e481008009d94e32+210"
 ]
}
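When scripting, you can pull the output collection identifier straight out of this response instead of copying it by hand (again assuming @jq@ is installed and @arv@ prints plain JSON as shown):

~$ arv job get --uuid qr1hi-8i9sb-xxxxxxxxxxxxxxx | jq -r .output
dd755dbc8d49a67f4fe7dc843e4f10a6+54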
* @"output"@ is the unique identifier for this specific job's output. This is a Keep collection.

Because the output of Arvados jobs should be deterministic, the known expected output is @dd755dbc8d49a67f4fe7dc843e4f10a6+54@.

Now you can list the files in the collection:
~$ arv keep ls dd755dbc8d49a67f4fe7dc843e4f10a6+54
./md5sum.txt
This collection consists of a single file, @md5sum.txt@. Use @arv keep get@ to show its contents:
~$ arv keep get dd755dbc8d49a67f4fe7dc843e4f10a6+54/md5sum.txt
44b8ae3fde7a8a88d2f7ebd237625b4f ./var-GS000016015-ASM.tsv.bz2
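If you still have the original file from the Keep tutorial in your current directory, you can recompute the hash locally and compare (assuming the standard @md5sum@ utility is available):

~$ md5sum var-GS000016015-ASM.tsv.bz2
44b8ae3fde7a8a88d2f7ebd237625b4f  var-GS000016015-ASM.tsv.bz2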
This MD5 hash matches the MD5 hash which we "computed earlier":{{site.baseurl}}/user/tutorials/tutorial-keep.html.

h2. The job log

When the job completes, you can access the job log. On the Workbench, visit "*Recent jobs*":https://{{site.arvados_workbench_host}}/jobs %(rarr)→% your job's UUID under the *uuid* column %(rarr)→% the collection link on the *log* row.

On the command line, the Keep identifier listed in the @"log"@ field from @arv job get@ specifies a collection. You can list the files in the collection:
~$ arv keep ls xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx+91
./qr1hi-8i9sb-xxxxxxxxxxxxxxx.log.txt
The log collection consists of one log file named with the job's UUID. You can access it using @arv keep get@:
~$ arv keep get xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx+91/qr1hi-8i9sb-xxxxxxxxxxxxxxx.log.txt
2013-12-16_20:44:35 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  check slurm allocation
2013-12-16_20:44:35 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  node compute13 - 8 slots
2013-12-16_20:44:36 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  start
2013-12-16_20:44:36 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  Install revision d9cd657b733d578ac0d2167dd75967aa4f22e0ac
2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  Clean-work-dir exited 0
2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  Install exited 0
2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  script hash
2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  script_version d9cd657b733d578ac0d2167dd75967aa4f22e0ac
2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  script_parameters {"input":"c1bad4b39ca5a924e481008009d94e32+210"}
2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  runtime_constraints {"max_tasks_per_node":0}
2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  start level 0
2013-12-16_20:44:37 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 0 done, 0 running, 1 todo
2013-12-16_20:44:38 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 0 job_task qr1hi-ot0gb-23c1k3kwrf8da62
2013-12-16_20:44:38 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 0 child 7681 started on compute13.1
2013-12-16_20:44:38 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 0 done, 1 running, 0 todo
2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 0 child 7681 on compute13.1 exit 0 signal 0 success=true
2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 0 success in 1 seconds
2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 0 output
2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  wait for last 0 children to finish
2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 1 done, 0 running, 1 todo
2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  start level 1
2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 1 done, 0 running, 1 todo
2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 1 job_task qr1hi-ot0gb-iwr0o3unqothg28
2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 1 child 7716 started on compute13.1
2013-12-16_20:44:39 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 1 done, 1 running, 0 todo
2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 1 child 7716 on compute13.1 exit 0 signal 0 success=true
2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 1 success in 13 seconds
2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575 1 output dd755dbc8d49a67f4fe7dc843e4f10a6+54
2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  wait for last 0 children to finish
2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  status: 2 done, 0 running, 0 todo
2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  release job allocation
2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  Freeze not implemented
2013-12-16_20:44:52 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  collate
2013-12-16_20:44:53 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  output dd755dbc8d49a67f4fe7dc843e4f10a6+54+K@qr1hi
2013-12-16_20:44:53 qr1hi-8i9sb-xxxxxxxxxxxxxxx 7575  finish
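For longer-running jobs the log can grow large; one convenient way to follow a job's progress is to filter the log for the scheduler's periodic status lines, for example:

~$ arv keep get xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx+91/qr1hi-8i9sb-xxxxxxxxxxxxxxx.log.txt | grep status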