The <i>multi_host</i> example includes LetsEncrypt salt code to automatically request and install the certificates for the public-facing hosts (API, Workbench), so those hostnames will need to be reachable from the Internet. If that will not be the case for your cluster, please set the variable <i>USE_LETSENCRYPT=no</i>.
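For example, in your configuration file the toggle would look like this (a minimal sketch; the variable name is the one mentioned above):

```shell
# Skip the automatic LetsEncrypt certificate requests when the
# public-facing hosts are not reachable from the Internet.
USE_LETSENCRYPT=no
```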
h3(#further_customization). Further customization of the installation (modifying the salt pillars and states)
You will need to customize the installation further to suit your environment, which can be done by editing the Saltstack pillar and state files. Pay particular attention to the <i>pillars/arvados.sls</i> file, where you will need to provide some information that can be retrieved from the output of the terraform run.
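If you provisioned the hosts with terraform as described earlier, the values you need can be listed from the terraform working directory (a sketch; the exact output names depend on your terraform code):

```shell
# Show the outputs recorded in the terraform state, to be copied
# into pillars/arvados.sls as needed.
terraform output
```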
<notextile>
<pre><code>scp -r provision.sh local* user@host:
ssh user@host sudo ./provision.sh --roles comma,separated,list,of,roles,to,apply
</code></pre>
</notextile>
#. Database
<notextile>
<pre><code>scp -r provision.sh local* user@host:
ssh user@host sudo ./provision.sh --config local.params --roles database
</code></pre>
</notextile>
#. API
<notextile>
<pre><code>scp -r provision.sh local* user@host:
ssh user@host sudo ./provision.sh --config local.params --roles api,controller,websocket,dispatcher
</code></pre>
</notextile>
#. Keepstore/s
<notextile>
<pre><code>scp -r provision.sh local* user@host:
ssh user@host sudo ./provision.sh --config local.params --roles keepstore
</code></pre>
</notextile>
#. Workbench
<notextile>
<pre><code>scp -r provision.sh local* user@host:
ssh user@host sudo ./provision.sh --config local.params --roles workbench,workbench2
</code></pre>
</notextile>
#. Keepproxy / Keepweb
<notextile>
<pre><code>scp -r provision.sh local* user@host:
ssh user@host sudo ./provision.sh --config local.params --roles keepproxy,keepweb
</code></pre>
</notextile>
#. Shell (here we copy the CLI test workflow too)
<notextile>
<pre><code>scp -r provision.sh local* tests user@host:
ssh user@host sudo ./provision.sh --config local.params --roles shell
</code></pre>
</notextile>
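Since every host follows the same copy-then-provision pattern, the per-host steps above can be scripted. A minimal dry-run sketch (the @example.com@ hostnames are placeholders, substitute your own; it only prints the commands, so remove the @echo@s or pipe the output to @sh@ to execute them):

```shell
#!/bin/bash
# Print the provisioning commands for each host/role pair.
# Hostnames below are assumptions -- replace them with your cluster's hosts.
set -euo pipefail

deploy() {
  local host="$1" roles="$2"
  echo "scp -r provision.sh local* tests user@${host}:"
  echo "ssh user@${host} sudo ./provision.sh --config local.params --roles ${roles}"
}

deploy db.example.com        database
deploy api.example.com       api,controller,websocket,dispatcher
deploy keep0.example.com     keepstore
deploy workbench.example.com workbench,workbench2
deploy keep.example.com      keepproxy,keepweb
deploy shell.example.com     shell
```

Note that for simplicity this copies the @tests@ directory to every host, while strictly only the @shell@ node needs it.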
h2(#test_install). Test the installed cluster running a simple workflow
If you followed the instructions above, the @provision.sh@ script saves a simple example test workflow in the @/tmp/cluster_tests@ directory on the @shell@ node. If you want to run it, just ssh to the node, change to that directory and run:
<notextile>
<pre><code>cd /tmp/cluster_tests
sudo ./run-test.sh
</code></pre>
</notextile>