X-Git-Url: https://git.arvados.org/arvados.git/blobdiff_plain/ad79a64bd1503e1e47d3849a00b894c4a6bc9810..bf243e064a7a2ee4e69a87dc3ba46e949a545150:/doc/user/topics/tutorial-parallel.html.textile.liquid
diff --git a/doc/user/topics/tutorial-parallel.html.textile.liquid b/doc/user/topics/tutorial-parallel.html.textile.liquid
index d7a093a245..9be610358b 100644
--- a/doc/user/topics/tutorial-parallel.html.textile.liquid
+++ b/doc/user/topics/tutorial-parallel.html.textile.liquid
@@ -1,45 +1,45 @@
---
layout: default
navsection: userguide
-title: "Parallel Crunch tasks"
+title: "Concurrent Crunch tasks"
...
-In the previous tutorials, we used @arvados.job_setup.one_task_per_input_file()@ to automatically parallelize our jobs by creating a separate task per file. For some types of jobs, you may need to split the work up differently, for example creating tasks to process different segments of a single large file. In this this tutorial will demonstrate how to create Crunch tasks directly.
+In the previous tutorials, we used @arvados.job_setup.one_task_per_input_file()@ to run tasks concurrently by creating a separate task per file. For some types of jobs, you may need to split the work up differently, for example creating tasks to process different segments of a single large file. This tutorial demonstrates how to create Crunch tasks directly.
Start by entering the @crunch_scripts@ directory of your Git repository:
-~$ cd you/crunch_scripts
+~$ cd $USER/crunch_scripts
-~/you/crunch_scripts$ nano parallel-hash.py
+notextile. ~/$USER/crunch_scripts$ nano concurrent-hash.py
Add the following code to compute the MD5 hash of each file in a collection:
-~/you/crunch_scripts$ chmod +x parallel-hash.py
+notextile. ~/$USER/crunch_scripts$ chmod +x concurrent-hash.py
Add the file to the Git staging area, commit, and push:
-~/you/crunch_scripts$ git add parallel-hash.py
-~/you/crunch_scripts$ git commit -m"parallel hash"
-~/you/crunch_scripts$ git push origin master
+~/$USER/crunch_scripts$ git add concurrent-hash.py
+~/$USER/crunch_scripts$ git commit -m"concurrent hash"
+~/$USER/crunch_scripts$ git push origin master
-~/you/crunch_scripts$ cat >~/the_job <<EOF
+~/$USER/crunch_scripts$ cat >~/the_job <<EOF
{
- "script": "parallel-hash.py",
+ "script": "concurrent-hash.py",
"repository": "$USER",
"script_version": "master",
"script_parameters":
@@ -48,13 +48,13 @@ You should now be able to run your new script using Crunch, with "script" referr
}
}
EOF
-~/you/crunch_scripts$ arv job create --job "$(cat ~/the_job)"
+~/$USER/crunch_scripts$ arv job create --job "$(cat ~/the_job)"
{
...
"uuid":"qr1hi-xxxxx-xxxxxxxxxxxxxxx"
...
}
-~/you/crunch_scripts$ arv job get --uuid qr1hi-xxxxx-xxxxxxxxxxxxxxx
+~/$USER/crunch_scripts$ arv job get --uuid qr1hi-xxxxx-xxxxxxxxxxxxxxx
{
...
"output":"e2ccd204bca37c77c0ba59fc470cd0f7+162",
@@ -65,16 +65,14 @@ EOF
(Your shell should automatically fill in @$USER@ with your login name. The job JSON that gets saved should have @"repository"@ pointed at your personal Git repository.)
-Because the job ran in parallel, each instance of parallel-hash creates a separate @md5sum.txt@ as output. Arvados automatically collates theses files into a single collection, which is the output of the job:
+Because the job ran concurrent tasks, each instance of concurrent-hash creates a separate @md5sum.txt@ as output. Arvados automatically collates these files into a single collection, which is the output of the job:
-~/you/crunch_scripts$ arv keep ls e2ccd204bca37c77c0ba59fc470cd0f7+162
+~/$USER/crunch_scripts$ arv keep ls e2ccd204bca37c77c0ba59fc470cd0f7+162
./md5sum.txt
-~/you/crunch_scripts$ arv keep get e2ccd204bca37c77c0ba59fc470cd0f7+162/md5sum.txt
+~/$USER/crunch_scripts$ arv keep get e2ccd204bca37c77c0ba59fc470cd0f7+162/md5sum.txt
0f1d6bcf55c34bed7f92a805d2d89bbf alice.txt
504938460ef369cd275e4ef58994cffe bob.txt
8f3b36aff310e06f3c5b9e95678ff77a carol.txt
-
-
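The diff elides the body of @concurrent-hash.py@, but the digests listed above are ordinary MD5 checksums. A minimal, self-contained sketch of the hashing step each task performs, using only Python's @hashlib@ (the input bytes and filename here are illustrative, not taken from the tutorial's collection):

```python
import hashlib

def md5_hex(data: bytes) -> str:
    # Hex MD5 digest of a byte string, formatted as md5sum prints it.
    return hashlib.md5(data).hexdigest()

# Each concurrent task would hash its own file (or file segment) and
# write a "<digest>  <filename>" line to its md5sum.txt output; Arvados
# then collates the per-task md5sum.txt files into one output collection.
line = "{}  {}".format(md5_hex(b"hello\n"), "example.txt")
print(line)  # b1946ac92492d2347c6235b4d2611184  example.txt
```

In the tutorial's concurrent setup, each such line lands in a separate per-task @md5sum.txt@, and @arv keep get@ on the job output shows the collated result.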