# "Choose the SSL configuration":#certificates
## "Using a Let's Encrypt certificates":#lets-encrypt
## "Bring your own certificates":#bring-your-own
+### "Securing your TLS certificate keys":#secure-tls-keys
# "Create a compute image":#create_a_compute_image
# "Begin installation":#installation
# "Further customization of the installation":#further_customization
# "Initial user and login":#initial_user
# "Monitoring and Metrics":#monitoring
# "Load balancing controllers":#load_balancing
-## "Rolling upgrades procedure":#rolling-upgrades
# "After the installation":#post_install
h2(#introduction). Introduction
## postgresql server
## arvados api server
## arvados controller (recommended hostname @controller.${DOMAIN}@)
-## arvados websocket (recommendend hostname @ws.${DOMAIN}@)
-## arvados cloud dispatcher
-## arvados keepbalance
# KEEPSTORE nodes (at least 1 if using S3 as a Keep backend, else 2)
## arvados keepstore (recommended hostnames @keep0.${DOMAIN}@ and @keep1.${DOMAIN}@)
-# KEEPPROXY node
+# WORKBENCH node
+## arvados legacy workbench URLs (recommended hostname @workbench.${DOMAIN}@)
+## arvados workbench2 (recommended hostname @workbench2.${DOMAIN}@)
+## arvados webshell (recommended hostname @webshell.${DOMAIN}@)
+## arvados websocket (recommended hostname @ws.${DOMAIN}@)
+## arvados cloud dispatcher
+## arvados keepbalance
## arvados keepproxy (recommended hostname @keep.${DOMAIN}@)
## arvados keepweb (recommended hostnames @download.${DOMAIN}@ and @*.collections.${DOMAIN}@)
-# WORKBENCH node
-## arvados workbench (recommendend hostname @workbench.${DOMAIN}@)
-## arvados workbench2 (recommendend hostname @workbench2.${DOMAIN}@)
-## arvados webshell (recommendend hostname @webshell.${DOMAIN}@)
# SHELL node (optional)
## arvados shell (recommended hostname @shell.${DOMAIN}@)
h3. Parameters from @local.params@:
-# Set @CLUSTER@ to the 5-character cluster identifier (e.g "xarv1")
-# Set @DOMAIN@ to the base DNS domain of the environment, e.g. "xarv1.example.com"
+# Set @CLUSTER@ to the 5-character cluster identifier (e.g. "xarv1")
+# Set @DOMAIN@ to the base DNS domain of the environment (e.g. "xarv1.example.com")
# Set the @*_INT_IP@ variables with the internal (private) IP addresses of each host. Since some services share hosts, several of these variables may have the same value. See the "note about /etc/hosts":#etchosts
# Set @CLUSTER_INT_CIDR@ to the CIDR of the private network that Arvados is running on, e.g. the VPC. If you used Terraform, this is emitted as @cluster_int_cidr@.
_CIDR stands for "Classless Inter-Domain Routing" and describes which portion of the IP address refers to the network. For example, 192.168.3.0/24 means that the first 24 bits are the network (192.168.3) and the last 8 bits identify a specific host on that network._
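+
+For illustration, filling these in for a hypothetical cluster (reusing the addresses from the Terraform example later on this page) might look like the following; the exact set of @*_INT_IP@ variables depends on your node list, and all values below are placeholders:
+
+<pre><code>CLUSTER="xarv1"
+DOMAIN="xarv1.example.com"
+CONTROLLER_INT_IP="10.1.1.11"
+DATABASE_INT_IP="10.1.2.12"
+CLUSTER_INT_CIDR="10.1.0.0/16"
+</code></pre>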
MANAGEMENT_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
SYSTEM_ROOT_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
ANONYMOUS_USER_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-WORKBENCH_SECRET_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DATABASE_PASSWORD=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
</code></pre>
# Set @DATABASE_PASSWORD@ to a random string (unless you "already have a database":#ext-database, in which case you should set it to that database's password)
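+
+One possible way to generate a suitable random string for each of these values (a sketch; any source of strong randomness works):
+
+<pre><code class="userinput">tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32; echo
+</code></pre>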
h3. Object storage in S3 (AWS Specific)
-Open @local_config_dir/pillars/arvados.sls@ and edit as follows:
-
-# In the @arvados.cluster.Volumes.DriverParameters@ section, set @Region@ to the appropriate AWS region (e.g. 'us-east-1')
+If you "followed the recommendend naming scheme":#keep-bucket for both the bucket and role (or used the provided Terraform script), you're done.
-If "followed the recommendend naming scheme":#keep-bucket for both the bucket and role (or used the provided Terraform script), you're done.
+If you did not follow the recommended naming scheme for either the bucket or role, you'll need to update these parameters in @local.params@:
-If you did not follow the recommendend naming scheme for either the bucket or role, you'll need to update these parameters as well:
+# Set @KEEP_AWS_S3_BUCKET@ to the name of the "keepstore bucket you created earlier":#keep-bucket
+# Set @KEEP_AWS_IAM_ROLE@ to the name of the "keepstore role you created earlier":#keep-bucket
-# Set @Bucket@ to the value of "keepstore bucket you created earlier":#keep-bucket
-# Set @IAMRole@ to "keepstore role you created earlier":#keep-bucket
+You can also configure a specific AWS Region for the S3 bucket by setting @KEEP_AWS_REGION@.
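+
+For example (the bucket and role names below are hypothetical; substitute the ones you actually created):
+
+<pre><code>KEEP_AWS_S3_BUCKET="xarv1-keep-volume"
+KEEP_AWS_IAM_ROLE="xarv1-keepstore-iam-role"
+KEEP_AWS_REGION="us-east-1"
+</code></pre>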
{% include 'ssl_config_multi' %}
# In @local.params@, remove 'database' from the list of roles assigned to the controller node:
<pre><code>NODES=(
- [controller.${DOMAIN}]=api,controller,websocket,dispatcher,keepbalance
+ [controller.${DOMAIN}]=controller,websocket,dispatcher,keepbalance
...
)
</code></pre>
-# In @local.params@, set @DATABASE_INT_IP@ to the database endpoint (can be a hostname, does not have to be an IP address).
-<pre><code>DATABASE_INT_IP=...
+# In @local.params@, set @DATABASE_INT_IP@ to an empty string and @DATABASE_EXTERNAL_SERVICE_HOST_OR_IP@ to the database endpoint (this can be a hostname; it does not have to be an IP address).
+<pre><code>DATABASE_INT_IP=""
+...
+DATABASE_EXTERNAL_SERVICE_HOST_OR_IP="arvados.xxxxxxx.us-east-1.rds.amazonaws.com"
</code></pre>
-# In @local.params@, set @DATABASE_PASSWORD@ to the correct value. "See the previous section describing correct quoting":#localparams
-# In @local_config_dir/pillars/arvados.sls@ you may need to adjust the database name and user. This can be found in the section @arvados.cluster.database@.
+# In @local.params.secrets@, set @DATABASE_PASSWORD@ to the correct value. "See the previous section describing correct quoting":#localparams
+# In @local.params@ you may need to adjust the database name and user.
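+
+As a sketch, assuming the database name and user are carried in variables like the following (verify the exact variable names in your copy of @local.params@):
+
+<pre><code>DATABASE_NAME="xarv1_arvados"
+DATABASE_USER="arvados"
+</code></pre>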
h2(#further_customization). Further customization of the installation (optional)
Most service logs go to @/var/log/syslog@.
-The logs for Rails API server and for Workbench can be found in
-
-@/var/www/arvados-api/current/log/production.log@
-and
-@/var/www/arvados-workbench/current/log/production.log@
-
-on the appropriate instances.
+The logs for the Rails API server can be found in @/var/www/arvados-api/current/log/production.log@ on the appropriate instance(s).
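+
+For example, to follow that log on the relevant instance:
+
+<pre><code class="userinput">tail -f /var/www/arvados-api/current/log/production.log
+</code></pre>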
Workbench 2 is a client-side Javascript application. If you are having trouble loading Workbench 2, check the browser's developer console (this can be found in "Tools → Developer Tools").
h2(#load_balancing). Load balancing controllers (optional)
-In order to handle high loads and perform rolling upgrades, the controller & api services can be scaled to a number of hosts and the installer make this implementation a fairly simple task.
+In order to handle high loads and perform rolling upgrades, the controller service can be scaled out to a number of hosts, and the installer makes this fairly simple to set up.
First, you should take care of the infrastructure deployment: if you use our Terraform code, you will need to set up the @terraform.tfvars@ in @terraform/vpc/@ so that in addition to the node named @controller@ (the load-balancer), a number of @controllerN@ nodes (backends) are defined as needed, and added to the @internal_service_hosts@ list.
-We suggest that the backend nodes just hold the controller & api services and nothing else, so they can be easily created or destroyed as needed without other service disruption. Because of this, you will need to set up a custom @dns_aliases@ variable map.
+We suggest that the backend nodes hold just the controller service and nothing else, so they can be created or destroyed as needed without disrupting other services.
-The following is an example @terraform/vpc/terraform.tfvars@ file that describes a cluster with a load-balancer, 2 backend nodes, a separate database node, a keepstore node and a workbench node that will also hold other miscelaneous services:
+The following is an example @terraform/vpc/terraform.tfvars@ file that describes a cluster with a load-balancer, 2 backend nodes, a separate database node, a shell node, a keepstore node, and a workbench node that will also host other miscellaneous services:
<pre><code>region_name = "us-east-1"
cluster_name = "xarv1"
domain_name = "xarv1.example.com"
-internal_service_hosts = [ "keep0", "database", "controller1", "controller2" ]
+# Include controller nodes in this list so instances are assigned to the
+# private subnet. Only the balancer node should be connecting to them.
+internal_service_hosts = [ "keep0", "shell", "database", "controller1", "controller2" ]
+
+# Assign private IPs to the controller nodes. These will be used to create
+# internal DNS records used by the balancer and database nodes.
private_ip = {
controller = "10.1.1.11"
workbench = "10.1.1.15"
database = "10.1.2.12"
controller1 = "10.1.2.21"
controller2 = "10.1.2.22"
+ shell = "10.1.2.17"
keep0 = "10.1.2.13"
-}
-dns_aliases = {
- workbench = [
- "ws",
- "workbench2",
- "keep",
- "download",
- "prometheus",
- "grafana",
- "*.collections"
- ]
}</code></pre>
-Once the infrastructure is deployed, you'll then need to define which node will be using the @balancer@ role in @local.params@, as it's being shown in this partial example:
+Once the infrastructure is deployed, you'll need to define in @local.params@ which node takes the @balancer@ role and which nodes are the @controller@ backends, as shown in this partial example:
-<pre><code>...
-NODES=(
+<pre><code>NODES=(
[controller.${DOMAIN}]=balancer
- [controller1.${DOMAIN}]=api,controller
- [controller2.${DOMAIN}]=api,controller
+ [controller1.${DOMAIN}]=controller
+ [controller2.${DOMAIN}]=controller
[database.${DOMAIN}]=database
- [workbench.${DOMAIN}]=monitoring,workbench,workbench2,keepproxy,keepweb,websocket,keepbalance,dispatcher
- [keep0.${DOMAIN}]=keepstore
+ ...
)
-...</code></pre>
-
-h3(#rolling-upgrades). Rolling upgrades procedure
-
-Once you have more than one controller backend node, it's easy to take one of those from the backend pool to upgrade it to a newer version of Arvados (that might involve applying database migrations) by adding its name to the @DISABLED_CONTROLLER@ variable in @local.params@. For example:
-
-<pre><code>...
-DISABLED_CONTROLLER="controller1"
-...</code></pre>
-
-Then, apply the configuration change to just the load-balancer:
-
-<pre><code class="userinput">./installer.sh deploy controller.xarv1.example.com</code></pre>
+</code></pre>
-This will allow you to do the necessary changes to the @controller1@ node without service disruption, as it will not be receiving any traffic until you remove it from the @DISABLED_CONTROLLER@ variable.
+Note that we also assign the @database@ role to its own node instead of leaving it on a shared controller node.
-You can do the same for the rest of the backend controllers one at a time to complete the upgrade.
+Each time you run @installer.sh deploy@, the installer will automatically perform a rolling upgrade: it makes changes to one controller node at a time, removing each node from the balancer first so that there is no downtime.
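+
+For example, assuming a plain run with no hostname argument deploys to all nodes, a cluster-wide deploy that rolls through the controller backends looks like this:
+
+<pre><code class="userinput">./installer.sh deploy
+</code></pre>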
h2(#post_install). After the installation