## "Iterating on config changes":#iterating
## "Common problems and solutions":#common-problems
# "Initial user and login":#initial_user
+# "Monitoring and Metrics":#monitoring
# "After the installation":#post_install
h2(#introduction). Introduction
# @workbench2.${CLUSTER}.${DOMAIN}@
# @webshell.${CLUSTER}.${DOMAIN}@
# @shell.${CLUSTER}.${DOMAIN}@
# @prometheus.${CLUSTER}.${DOMAIN}@
# @grafana.${CLUSTER}.${DOMAIN}@
For more information, see "DNS entries and TLS certificates":install-manual-prerequisites.html#dnstls.
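Once these DNS records exist, you can spot-check them with a standard lookup tool. The hostname below is illustrative; substitute your own cluster prefix and domain:

<pre><code>$ dig +short workbench2.xarv1.example.com
</code></pre>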
h4. Terraform code configuration
Each section described above contains a @terraform.tfvars@ file with configuration values that you should set before applying that configuration. Set the cluster prefix and domain name in @terraform/vpc/terraform.tfvars@:
<pre><code>region_name = "us-east-1"
# cluster_name = "xarv1"
# domain_name = "example.com"
</code></pre>
The @data-storage/terraform.tfvars@ and @services/terraform.tfvars@ files let you configure the location of your ssh public key (default @~/.ssh/id_rsa.pub@) and the instance type to use (default @m5a.large@).
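If you do not already have a key pair at that default location, you can create one with @ssh-keygen@ (or point the corresponding @terraform.tfvars@ setting at an existing public key instead):

<pre><code>$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
</code></pre>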
h4. Set credentials

You will need an AWS access key and secret key to create the infrastructure.

<pre><code>
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
</code></pre>

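If you prefer not to export the keys in your shell environment, the Terraform AWS provider can also read them from the standard shared credentials file at @~/.aws/credentials@ (the values below are placeholders):

<pre><code>[default]
aws_access_key_id = anaccesskey
aws_secret_access_key = asecretkey
</code></pre>
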
h4. Create the infrastructure
Build the infrastructure by running @./installer.sh terraform@. The last stage will output the information needed to set up the cluster's domain and continue with the installer. For example:
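(The values and resource count below are illustrative, and only a subset of the outputs is shown; the exact output depends on your configuration.)

<pre><code>Apply complete! Resources: 16 added, 0 changed, 0 destroyed.

Outputs:

vpc_cidr = "10.1.0.0/16"
</code></pre>

Use these values when filling in the configuration parameters below: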
# Set @CLUSTER@ to the 5-character cluster identifier (e.g. "xarv1")
# Set @DOMAIN@ to the base DNS domain of the environment, e.g. "example.com"
# Set the @*_INT_IP@ variables to the internal (private) IP addresses of each host. Since some services share hosts, several of these variables will have the same value. See "note about /etc/hosts":#etchosts
# Edit @CLUSTER_INT_CIDR@; this should be the CIDR of the private network that Arvados is running on, e.g. the VPC. If you used Terraform, this is emitted as @vpc_cidr@.
_CIDR stands for "Classless Inter-Domain Routing" and describes which portion of the IP address refers to the network. For example, 192.168.3.0/24 means that the first 24 bits are the network (192.168.3) and the last 8 bits are a specific host on that network._
_AWS Specific: in the AWS console, go to the VPC service; the table view of VPCs has a column (IPv4 CIDR) that gives the CIDR for each VPC._
# Set @INITIAL_USER_EMAIL@ to your email address, as you will be the first admin user of the system.
# Set each @KEY@ / @TOKEN@ / @PASSWORD@ to a random string. You can use @installer.sh generate-tokens@, or generate the strings by hand as shown below.
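If you would rather generate the random strings yourself instead of using @installer.sh generate-tokens@, any good source of randomness will do; for example:

<pre><code>$ tr -dc 'A-Za-z0-9' </dev/urandom | head -c 32; echo
</code></pre>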
If you are debugging a configuration issue on a specific node, you can speed up the cycle a bit by deploying just one node:
<pre>
./installer.sh deploy keep0.xarv1.example.com
</pre>
However, once you have a final configuration, you should run a full deploy to ensure that the configuration has been synchronized on all the nodes.
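Assuming the same @installer.sh@ invocation pattern shown above, a full deploy is the same command without a hostname argument:

<pre>
./installer.sh deploy
</pre>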
h2(#initial_user). Initial user and login

If you *did* configure a different authentication provider, the first user to log in will automatically be given Arvados admin privileges.
h2(#monitoring). Monitoring and Metrics

You can monitor the health and performance of the system using the admin dashboard:

https://grafana.@${CLUSTER}.${DOMAIN}@

To log in, use username "admin" and @${INITIAL_USER_PASSWORD}@ from @local.conf@.

Once logged in, you will want to add the dashboards to the front page.

# On the left icon bar, click on "Browse"
# You should see a folder called "Arvados Cluster"; click to open it
## If you don't see anything, make sure the check box next to "Starred" is not selected
# You should see three dashboards: "Arvados cluster overview", "Node exporter" and "Postgres exporter"
# Visit each dashboard; at the top of the page, click on the star next to the title to "Mark as favorite"
# They should now be linked on the front page.

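Raw metrics are also available from Prometheus at @prometheus.${CLUSTER}.${DOMAIN}@. As a quick sanity check you can query the standard Prometheus HTTP API for the @up@ metric; the hostname below is illustrative, and depending on your configuration the endpoint may require authentication:

<pre><code>$ curl -s "https://prometheus.xarv1.example.com/api/v1/query?query=up"
</code></pre>
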
h2(#post_install). After the installation
As part of its operation, @installer.sh@ automatically creates a @git@ repository with your configuration templates. You should retain this repository, but *be aware that it contains sensitive information* (passwords and tokens used by the Arvados services, as well as cloud credentials if you used Terraform to create the infrastructure).
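One way to retain the repository safely is to push it to a private remote that only your operations staff can access. A minimal sketch, where the remote URL is hypothetical and the branch name depends on your @git@ defaults:

<pre><code>$ git remote add origin git@git.example.com:ops/arvados-config.git
$ git push -u origin main
</code></pre>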