+# In @local.params@, set @DATABASE_PASSWORD@ to the correct value. "See the previous section describing correct quoting":#localparams
+# In @local_config_dir/pillars/arvados.sls@ you may need to adjust the database name and user. These can be found in the @arvados.cluster.database@ section, sketched below.
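+
+For reference, the relevant section of the pillar file looks roughly like the following sketch. The keys and placeholder values shown here are illustrative and may differ from the file generated for your Arvados version; treat the actual file in @local_config_dir@ as the authoritative template:
+
+<pre>
+arvados:
+  cluster:
+    database:
+      # Illustrative values; make sure these match local.params.
+      # Quote the password as described in the previous section.
+      name: arvados
+      host: 127.0.0.1
+      password: "your-database-password"
+      user: arvados
+</pre>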
+
+h2(#further_customization). Further customization of the installation (optional)
+
+If you are installing on AWS and have followed all of the naming conventions recommended in this guide, you probably don't need to do any further customization.
+
+If you are installing on a different cloud provider or on HPC, you may need to edit the Saltstack pillar and state files found in @local_config_dir@. In particular, @local_config_dir/pillars/arvados.sls@ contains the template (in the @arvados.cluster@ section) used to produce the Arvados configuration file that is distributed to all the nodes. Consult the "Configuration reference":config.html for a comprehensive list of configuration keys.
+
+Any extra Salt "state" files you add under @local_config_dir/states@ will be added to the Salt run and applied to the hosts.
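+
+For example, a custom state that installs extra packages on every node might look like the following minimal sketch (the file name and package list are hypothetical):
+
+<pre>
+# local_config_dir/states/extra_packages.sls
+# Hypothetical custom state: install additional utilities on all hosts.
+extra_utility_packages:
+  pkg.installed:
+    - pkgs:
+      - jq
+      - tmux
+</pre>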
+
+h2(#create_a_compute_image). Configure compute nodes
+
+{% include 'branchname' %}
+
+If you will use fixed compute nodes with an HPC scheduler such as SLURM or LSF, you will need to "Set up your compute nodes with Docker":{{site.baseurl}}/install/crunch2/install-compute-node-docker.html or "Set up your compute nodes with Singularity":{{site.baseurl}}/install/crunch2/install-compute-node-singularity.html.
+
+On cloud installations, containers are dispatched in Docker daemons running in the _compute instances_, which need some additional setup.
+
+h3. Build the compute image
+
+Follow "the instructions to build a cloud compute node image":{{site.baseurl}}/install/crunch2-cloud/install-compute-node.html using the compute image builder script found in @arvados/tools/compute-images@ in your Arvados clone from "step 3":#download.
+
+h3. Configure the compute image
+
+Once the image has been created, open @local_config_dir/pillars/arvados.sls@ and edit as follows (the settings described here are AWS-specific; other cloud providers have similar settings in their respective configuration sections):
+
+# In the @arvados.cluster.Containers.CloudVMs@ section:
+## Set @ImageID@ to the AMI produced by Packer
+## Set @DriverParameters.Region@ to the appropriate AWS region
+## Set @DriverParameters.AdminUsername@ to the admin user account on the image
+## Set the @DriverParameters.SecurityGroupIDs@ list to the VPC security group which you set up to allow SSH connections to these nodes
+## Set @DriverParameters.SubnetID@ to the subnet ID of your VPC
+# Set @arvados.cluster.Containers.DispatchPrivateKey@ to the contents of the @~/.ssh/id_dispatcher@ file you generated in an earlier step.
+# Update @arvados.cluster.InstanceTypes@ as necessary. The example instance types are for AWS; other cloud providers will have different instance types with different names and specifications (see the sketch after this list).
+(AWS specific) If m5/c5 instance types are not available, replace them with m4/c4. If you do, double-check the values of @Price@ and @IncludedScratch@/@AddedScratch@ for each type you change.
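+
+Put together, the edited parts of @local_config_dir/pillars/arvados.sls@ will look roughly like the sketch below. All IDs, key material, and prices are placeholders, and the exact layout of your generated pillar file may differ; use the file in @local_config_dir@ as the authoritative template:
+
+<pre>
+arvados:
+  cluster:
+    Containers:
+      DispatchPrivateKey: |
+        -----BEGIN OPENSSH PRIVATE KEY-----
+        (paste the contents of ~/.ssh/id_dispatcher here)
+        -----END OPENSSH PRIVATE KEY-----
+      CloudVMs:
+        ImageID: ami-0123456789abcdef0        # AMI produced by Packer
+        DriverParameters:
+          Region: us-east-1                   # your AWS region
+          AdminUsername: admin                # admin account on the image
+          SecurityGroupIDs:
+            - sg-0123456789abcdef0            # allows SSH to compute nodes
+          SubnetID: subnet-0123456789abcdef0  # subnet ID of your VPC
+    InstanceTypes:
+      t3small:                                # example AWS instance type
+        ProviderType: t3.small
+        VCPUs: 2
+        RAM: 2GiB
+        IncludedScratch: 50GB
+        AddedScratch: 0
+        Price: 0.0208
+</pre>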
+
+h2(#installation). Begin installation
+
+At this point, you are ready to run the installer in deploy mode, which will perform the complete Arvados installation.
+
+Run this in the @~/arvados-setup-xarv1@ directory:
+
+<pre>
+./installer.sh deploy
+</pre>
+
+This will install and configure Arvados on all the nodes. It will take a while and produce a lot of logging. If it runs into an error, it will stop.
+
+h2(#test-install). Confirm the cluster is working
+
+When everything has finished, you can run the diagnostics.
+
+Depending on where you are running the installer, you need to provide @-internal-client@ or @-external-client@.
+
+If you are running the diagnostics from one of the Arvados machines inside the private network, you want @-internal-client@.
+
+You are an "external client" if you are running the diagnostics from your workstation outside of the private network.
+
+<pre>
+./installer.sh diagnostics (-internal-client|-external-client)
+</pre>
+
+h3(#debugging). Debugging issues
+
+The installer records log files for each deployment.
+
+Most service logs go to @/var/log/syslog@.
+
+The logs for the Rails API server and for Workbench can be found in
+
+@/var/www/arvados-api/current/log/production.log@
+and
+@/var/www/arvados-workbench/current/log/production.log@
+
+on the appropriate instances.
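+
+For example, to watch for Arvados-related messages on a node while reproducing a problem, you could run something like:
+
+<pre>
+# Follow syslog, showing only Arvados-related lines
+tail -f /var/log/syslog | grep -i arvados
+</pre>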
+
+Workbench 2 is a client-side JavaScript application. If you are having trouble loading Workbench 2, check the browser's developer console (this can be found in "Tools → Developer Tools").
+
+h3(#iterating). Iterating on config changes
+
+You can iterate on the config and maintain the cluster by making changes to @local.params@ and @local_config_dir@ and running @installer.sh deploy@ again.
+
+If you are debugging a configuration issue on a specific node, you can speed up the cycle a bit by deploying just one node:
+
+<pre>
+./installer.sh deploy keep0.xarv1.example.com
+</pre>
+
+However, once you have a final configuration, you should run a full deploy to ensure that the configuration has been synchronized on all the nodes.
+
+h3(#common-problems). Common problems and solutions
+
+h4. PG::UndefinedTable: ERROR: relation \"api_clients\" does not exist
+
+The @arvados-api-server@ package sets up the database in a post-install script. If the database host or password wasn't set correctly (or quoted correctly) when the package was installed, the script won't be able to set up the database.
+
+This will manifest as an error like this:
+
+<pre>
+#<ActiveRecord::StatementInvalid: PG::UndefinedTable: ERROR: relation \"api_clients\" does not exist
+</pre>
+
+If this happens, you need to:
+
+1. Correct the database information
+2. Run @./installer.sh deploy xarv1.example.com@ to update the configuration on the API/controller node
+3. Log in to the API/controller server node, then run this command to re-run the post-install script, which will set up the database:
+<pre>dpkg-reconfigure arvados-api-server</pre>
+4. Run @./installer.sh deploy@ again to synchronize everything and to re-run the install steps that need to contact the API server
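+
+If you want to verify the corrected credentials before re-running the deploy, you can connect to the database from the API/controller node. This sketch assumes the default database name and user @arvados@; adjust host, user, and database name to match your configuration:
+
+<pre>
+# Hypothetical check; prompts for the database password
+psql -h localhost -U arvados arvados -c '\conninfo'
+</pre>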
+
+h4. Missing ENA support (AWS Specific)
+
+If the AMI wasn't built with ENA (Elastic Network Adapter) support and the instance type requires it, the instance will fail to start. You'll see an error in syslog on the node that runs @arvados-dispatch-cloud@. The solution is to build a new AMI with @--aws-ena-support true@.
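+
+For example, the rebuild might look like the following. The script name and elided options here are assumptions; see "the compute image build instructions":{{site.baseurl}}/install/crunch2-cloud/install-compute-node.html for the exact invocation:
+
+<pre>
+# In arvados/tools/compute-images; other required options elided
+./build.sh [your existing options] --aws-ena-support true
+</pre>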
+
+h2(#initial_user). Initial user and login
+
+At this point you should be able to log into the Arvados cluster. The initial URL will be
+
+https://workbench.@${CLUSTER}.${DOMAIN}@
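+
+For the example cluster used throughout this guide, that would be @https://workbench.xarv1.example.com@.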
+
+If you did *not* "configure a different authentication provider":#authentication you will be using the "Test" provider, and the provision script creates an initial user for testing purposes. This user is configured as the administrator of the newly created cluster. It uses the values of @INITIAL_USER@ and @INITIAL_USER_PASSWORD@ from the @local.params@ file.
+
+If you *did* configure a different authentication provider, the first user to log in will automatically be given Arvados admin privileges.