---
layout: default
navsection: userguide
title: "Intro: Jobs"
navorder: 8
---

h1. Intro: Jobs

You can run MapReduce jobs by storing a job script in a git repository and creating a "job":../api/Jobs.html. Crunch jobs offer several advantages over running programs on your own local machine:

* Increase concurrency by running tasks asynchronously, using many CPUs and network interfaces at once (especially beneficial for CPU-bound and I/O-bound tasks respectively).
* Track inputs, outputs, and settings so you can verify that the inputs, settings, and sequence of programs used to arrive at an output are really what you think they were.
* Ensure that your programs and workflows remain repeatable across different versions of your code, OS updates, etc.
* Interrupt and resume long-running jobs consisting of many short tasks.
* Maintain timing statistics automatically, so they're there when you want them.

h3. Structure of a job

A job consists of a number of tasks which can be executed asynchronously. A single job program, or "crunch script", executes each task of a given job.

The logic of a typical crunch script looks like this:

* If this is the first task: examine the input, divide it into a number of asynchronous tasks, instruct Arvados to queue these tasks, output nothing, and indicate successful completion.
* Otherwise: fetch a portion of the input from the cloud storage system, do some computation, store some output in the cloud, output a fragment of the output manifest, and indicate successful completion.

When all job tasks have completed, the output fragments are assembled into a single output manifest.

If a job task fails, it is automatically re-attempted. If a task fails repeatedly and running it on a different compute node doesn't help, any tasks still running are allowed to complete, and the job is abandoned.
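
To make this concrete, here is a minimal sketch of such a crunch script, written with the Python SDK. The MD5 computation, the @input@ parameter name, and the @md5sum.txt@ output file name are illustrative choices for this sketch, not part of the job API:

<pre>
#!/usr/bin/env python
# Sketch of a typical crunch script: task 0 splits the work into one
# task per input file; each later task processes its assigned portion.

import hashlib
import arvados

# First task (sequence 0): queue one new task per file in the input
# collection, output nothing, and end this task successfully.
arvados.job_setup.one_task_per_input_file(if_sequence=0, and_end_task=True)

# Subsequent tasks: fetch this task's portion of the input from Keep...
this_task = arvados.current_task()
collection = arvados.CollectionReader(this_task['parameters']['input'])

# ...do some computation and write output back to Keep...
out = arvados.CollectionWriter()
out.set_current_file_name('md5sum.txt')
for input_file in collection.all_files():
    digestor = hashlib.md5()
    for chunk in input_file.readall():
        digestor.update(chunk)
    out.write("%s %s\n" % (digestor.hexdigest(), input_file.name()))

# ...then record this task's fragment of the output manifest.
this_task.set_output(out.finish())
</pre>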

h3. Developing and testing crunch scripts

Usually, it makes sense to test your script locally on small data sets. When you are satisfied that it works, commit it to the git repository and run it in Arvados.

Save your job script (say, @foo@) in @{git-repo}/crunch_scripts/foo@. Make sure you have @ARVADOS_API_TOKEN@ and @ARVADOS_API_HOST@ set correctly ("more info":api-tokens.html). Test your script:

<pre>
script_name=foo
repo_path=/path/to/repo

# Build a job record. Fill in script_parameters with your script's
# inputs, e.g., the locator of an input collection.
read -rd $'\000' newjob <<EOF
{"script": "$script_name",
 "script_version": "$(cd "$repo_path" && git rev-parse HEAD)",
 "script_parameters": {}
}
EOF

arv-crunch-job --job "$newjob"
</pre>

The @--job@ argument to @arv-crunch-job@ is the same as the @--job@ argument to @arv job create@, except:

* node resource constraints are ignored.

You will see the progress of the job in your terminal.  Press Control-C to create a checkpoint and stop the job.

h3. Location of temporary files

Crunch job tasks are supplied with @TASK_WORK@ and @JOB_WORK@ environment variables, to be used as scratch space. When running in local development mode, Crunch puts these in a directory called @crunch-job-{USERID}@ in @TMPDIR@ (or @/tmp@ if @TMPDIR@ is not set).

* Set @TMPDIR@ to @/scratch@ to make Crunch use a directory like @/scratch/crunch-job-{USERID}/@ for temporary space.

* Set @CRUNCH_TMP@ to @/scratch/foo@ to make Crunch use @/scratch/foo/@ for temporary space (omitting the customary @crunch-job-{USERID}@ part).
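
Inside a crunch script, reach scratch space through these variables rather than hard-coded paths. A minimal Python sketch (the file name is illustrative):

<pre>
import os

# TASK_WORK points at this task's scratch directory, wherever Crunch
# has placed it (TMPDIR, CRUNCH_TMP, or the /tmp default).
task_work = os.environ['TASK_WORK']

# Keep intermediate files inside TASK_WORK so they land in the
# configured temporary space.
scratch_path = os.path.join(task_work, 'intermediate.dat')
with open(scratch_path, 'w') as f:
    f.write('intermediate data\n')
</pre>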

h3. Testing job scripts without SDKs and Keep access

Read and write data to @/tmp/@ instead of Keep. This only works with the Python SDK:

<pre>
export KEEP_LOCAL_STORE=/tmp
</pre>
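
For example, with this variable set, a Python SDK write should never contact a Keep server (a sketch; the file name and contents are arbitrary):

<pre>
import arvados

# Because KEEP_LOCAL_STORE is set, the blocks written here are stored
# as files under /tmp instead of being sent to a Keep server.
out = arvados.CollectionWriter()
out.set_current_file_name('test.txt')
out.write('hello\n')
print(out.finish())  # prints the resulting manifest locator
</pre>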

Use the Perl SDK libraries directly from the Arvados source tree:

<pre>
export PERLLIB=/path/to/arvados/sdk/perl/lib
</pre>