---
layout: default
navsection: userguide
navmenu: Tutorials
title: "Parallel Crunch tasks"
...
h1. Parallel Crunch tasks
In the tutorial "Writing a Crunch script":tutorial-firstscript.html, our script used a @for@ loop to compute the md5 hash of each file in sequence. This approach, while simple, cannot take advantage of a compute cluster with multiple nodes and cores, which can speed up computation by running tasks in parallel. This tutorial demonstrates how to create parallel Crunch tasks.
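The difference can be sketched outside of Crunch in plain Python: instead of hashing files one after another in a @for@ loop, hand each input to its own worker. This illustration uses only the standard-library @multiprocessing@ module, not the Arvados API, and the file names and contents are made up:

```python
import hashlib
from multiprocessing import Pool

def md5_line(name_and_data):
    # Each worker hashes one input independently; in Crunch the
    # analogue is scheduling one task per input file.
    name, data = name_and_data
    return "%s %s" % (hashlib.md5(data).hexdigest(), name)

if __name__ == "__main__":
    # Made-up inputs standing in for files in a collection.
    inputs = [("alice.txt", b"alice"), ("bob.txt", b"bob"), ("carol.txt", b"carol")]
    # Hash all inputs concurrently instead of in a sequential loop.
    with Pool(processes=3) as pool:
        for line in pool.map(md5_line, inputs):
            print(line)
```

The per-task work here is trivial, but the same structure pays off when each input takes minutes or hours to process.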
Start by entering the @crunch_scripts@ directory of your git repository:
<notextile>
<pre><code>~$ cd you/crunch_scripts
~/you/crunch_scripts$ nano parallel-hash.py
</code></pre>
</notextile>
Add the following code to compute the md5 hash of each file in a collection:
{% include 'parallel_hash_script_py' %}
Make the file executable:
notextile. ~/you/crunch_scripts$ chmod +x parallel-hash.py
Next, add the file to the @git@ staging area, commit, and push:
<notextile>
<pre><code>~/you/crunch_scripts$ git add parallel-hash.py
~/you/crunch_scripts$ git commit -m "parallel hash"
~/you/crunch_scripts$ git push origin master
</code></pre>
</notextile>
Next, create a job description that runs @parallel-hash.py@ on the input collection:

<notextile>
<pre><code>~/you/crunch_scripts$ cat &gt;~/the_job &lt;&lt;EOF
{
 "script": "parallel-hash.py",
 "script_version": "you:master",
 "script_parameters":
 {
  "input": "887cd41e9c613463eab2f0d885c6dd96+83"
 }
}
EOF
</code></pre>
</notextile>
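The @input@ value (@887cd41e9c613463eab2f0d885c6dd96+83@) is a Keep locator: an md5 hex digest followed by @+@ and a size in bytes. A minimal sketch of that digest-plus-size convention in plain Python (not the Arvados SDK, which handles this for you):

```python
import hashlib

def keep_style_locator(data):
    # A Keep-style locator is the md5 hex digest of the data,
    # then "+" and the data's length in bytes.
    return "%s+%d" % (hashlib.md5(data).hexdigest(), len(data))

# Example: 32 hex digits, "+", then 11 (the byte length of the input).
locator = keep_style_locator(b"hello world")
```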
Submit the job and note the @uuid@ in the response:

<notextile>
<pre><code>~/you/crunch_scripts$ arv job create --job "$(cat ~/the_job)"
{
 ...
 "uuid":"qr1hi-xxxxx-xxxxxxxxxxxxxxx"
 ...
}
</code></pre>
</notextile>
Once the job finishes, the @output@ field of the job record gives the output collection:

<notextile>
<pre><code>~/you/crunch_scripts$ arv job get --uuid qr1hi-xxxxx-xxxxxxxxxxxxxxx
{
 ...
 "output":"e2ccd204bca37c77c0ba59fc470cd0f7+162",
 ...
}
</code></pre>
</notextile>
Because the job ran as parallel tasks, the output collection contains a separate @md5sum.txt@ from each task, one per input file:

<notextile>
<pre><code>~/you/crunch_scripts$ arv keep get e2ccd204bca37c77c0ba59fc470cd0f7+162
md5sum.txt
md5sum.txt
md5sum.txt
</code></pre>
</notextile>

Fetching @md5sum.txt@ from the collection returns the combined content of the per-task files:

<notextile>
<pre><code>~/you/crunch_scripts$ arv keep get e2ccd204bca37c77c0ba59fc470cd0f7+162/md5sum.txt
0f1d6bcf55c34bed7f92a805d2d89bbf alice.txt
504938460ef369cd275e4ef58994cffe bob.txt
8f3b36aff310e06f3c5b9e95678ff77a carol.txt
</code></pre>
</notextile>
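The combined output follows the familiar two-column @md5sum@ format (digest, whitespace, filename), so it is easy to consume downstream. A small plain-Python sketch of parsing it, using the sample output above:

```python
def parse_md5sum(text):
    # Each line of md5sum-style output is "<digest> <filename>".
    digests = {}
    for line in text.strip().splitlines():
        # Split on the first run of whitespace so filenames with
        # internal spaces survive intact.
        digest, name = line.split(None, 1)
        digests[name] = digest
    return digests

# The sample output from the tutorial's job.
sample = (
    "0f1d6bcf55c34bed7f92a805d2d89bbf alice.txt\n"
    "504938460ef369cd275e4ef58994cffe bob.txt\n"
    "8f3b36aff310e06f3c5b9e95678ff77a carol.txt\n"
)
digests = parse_md5sum(sample)
```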