X-Git-Url: https://git.arvados.org/arvados.git/blobdiff_plain/f54cc984969657be50c093b917feb49a19d78c22..44c93373e97da98645d41ae8f09c6eef6788bb26:/doc/install/salt-multi-host.html.textile.liquid

diff --git a/doc/install/salt-multi-host.html.textile.liquid b/doc/install/salt-multi-host.html.textile.liquid
index f3afcd5031..1778338f53 100644
--- a/doc/install/salt-multi-host.html.textile.liquid
+++ b/doc/install/salt-multi-host.html.textile.liquid
@@ -11,7 +11,6 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 # "Introduction":#introduction
 # "Hosts preparation":#hosts_preparation
-## "Hosts setup using terraform (experimental)":#hosts_setup_using_terraform
 ## "Create a compute image":#create_a_compute_image
 # "Multi host install using the provision.sh script":#multi_host
 # "Choose the desired configuration":#choose_configuration
@@ -21,8 +20,7 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 # "Run the provision.sh script":#run_provision_script
 # "Initial user and login":#initial_user
 # "Test the installed cluster running a simple workflow":#test_install
-
-
+# "After the installation":#post_install
 
 h2(#introduction). Introduction
 
@@ -49,6 +47,7 @@ We suggest distributing the Arvados components in the following way, creating at
 ## arvados controller
 ## arvados websocket
 ## arvados cloud dispatcher
+## arvados keepbalance
 # WORKBENCH node:
 ## arvados workbench
 ## arvados workbench2
@@ -65,14 +64,6 @@ Note that these hosts can be virtual machines in your infrastructure and they do
 
 Again, if your infrastructure differs from the setup proposed above (ie, using RDS or an existing DB server), remember that you will need to edit the configuration files for the scripts so they work with your infrastructure.
 
-
-h3(#hosts_setup_using_terraform). Hosts setup using terraform (AWS, experimental)
-
-We added a few "terraform":https://terraform.io/ scripts (https://github.com/arvados/arvados/tree/main/tools/terraform) to let you create these instances easier in an AWS account. Check "the Arvados terraform documentation":/doc/install/terraform.html for more details.
-
-
-
 h2(#multi_host). Multi host install using the provision.sh script
 
 {% include 'branchname' %}
@@ -106,32 +97,15 @@ cp -r config_examples/multi_host/aws local_config_dir
 
 Edit the variables in the local.params file. Pay attention to the *_INT_IP, *_TOKEN and *KEY variables. Those variables will be used to do a search and replace on the pillars/* in place of any matching __VARIABLE__.
 
-The multi_host include LetsEncrypt salt code to automatically request and install the certificates for the public-facing hosts (API/controller, Workbench, Keepproxy/Keepweb) using AWS' Route53.
-
-If you plan to use custom certificates, please set the variable USE_LETSENCRYPT=no and copy your certificates to the directory specified with the variable @CUSTOM_CERTS_DIR@ (usually "./certs") in the remote directory where you copied the @provision.sh@ script. From this dir, the provision script will install the certificates required for the role you're installing.
+The multi_host example includes Let's Encrypt salt code to automatically request and install the certificates for the public-facing hosts (API/controller, Workbench, Keepproxy/Keepweb) using AWS' Route53.
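To illustrate the search-and-replace mechanism described above, a hypothetical excerpt of an edited @local.params@ might look like the following sketch; the variable names and values shown here are placeholder assumptions, and the authoritative names are the ones already defined in the @local.params@ file shipped with the example:

<notextile>
<pre><code># Hypothetical local.params excerpt; names and values are placeholders.
CONTROLLER_INT_IP=10.1.0.10    # assumed internal IP of the API/controller node
KEEP_INT_IP=10.1.0.20          # assumed internal IP of a keepstore node
# Tokens and keys must be long random strings; one way to generate them:
SYSTEM_ROOT_TOKEN=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
BLOB_SIGNING_KEY=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
# During provisioning, each value replaces the matching placeholder
# (__CONTROLLER_INT_IP__, __SYSTEM_ROOT_TOKEN__, etc.) in the pillars/* files.
</code></pre>
</notextile>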
-The script expects cert/key files with these basenames (matching the role except for keepweb, which is split in both downoad / collections):
+{% include 'multi_host_install_custom_certificates' %}
 
-* "controller"
-* "websocket"
-* "workbench"
-* "workbench2"
-* "webshell"
-* "download"         # Part of keepweb
-* "collections"      # Part of keepweb
-* "keepproxy"
-
-Ie., for 'keepproxy', the script will lookup for
-
-<notextile>
-<pre><code>${CUSTOM_CERTS_DIR}/keepproxy.crt
-${CUSTOM_CERTS_DIR}/keepproxy.key
-</code></pre>
-</notextile>
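As a hypothetical illustration of the lookup pattern quoted above, custom certificates for the @keepproxy@ role could be staged as follows before copying the @certs@ directory to the host; the source file paths are assumptions:

<notextile>
<pre><code># Sketch: stage a custom cert/key pair for the keepproxy role.
CUSTOM_CERTS_DIR=./certs
mkdir -p "${CUSTOM_CERTS_DIR}"
cp /path/to/your/issued-cert.pem "${CUSTOM_CERTS_DIR}/keepproxy.crt"
cp /path/to/your/private-key.pem "${CUSTOM_CERTS_DIR}/keepproxy.key"
</code></pre>
</notextile>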
+If you want to use valid certificates provided by Let's Encrypt, set the variable SSL_MODE=lets-encrypt and make sure that all the FQDNs that you will use for the public-facing applications (API/controller, Workbench, Keepproxy/Keepweb) are reachable.
 
 h3(#further_customization). Further customization of the installation (modifying the salt pillars and states)
 
-You will need further customization to suit your environment, which can be done editing the Saltstack pillars and states files. Pay particular attention to the pillars/arvados.sls file, where you will need to provide some information that can be retrieved as output of the terraform run.
+You will need further customization to suit your environment, which can be done editing the Saltstack pillars and states files. Pay particular attention to the pillars/arvados.sls file, where you will need to provide some information that describes your environment.
 
 Any extra state file you add under local_config_dir/states will be added to the salt run and applied to the hosts.
 
@@ -139,9 +113,9 @@ h2(#installation_order). Installation order
 
 A few Arvados nodes need to be installed in certain order. The required order is
 
-#. Database
-#. API server
-#. The other nodes can be installed in any order after the two above
+* Database
+* API server
+* The other nodes can be installed in any order after the two above
 
 h2(#run_provision_script). Run the provision.sh script
 
@@ -149,6 +123,8 @@ When you finished customizing the configuration, you are ready to copy the files
 <notextile>
 <pre><code>scp -r provision.sh local* user@host:
+# if you use custom certificates (not Let's Encrypt), make sure to copy those too:
+# scp -r certs user@host:
 ssh user@host sudo ./provision.sh --roles comma,separated,list,of,roles,to,apply
 </code></pre>
 </notextile>
 
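As a concrete instance of the general form above, a host that will serve only the Keep gateway services could be provisioned like this (hypothetical host name; the per-role commands for the suggested layout are listed below):

<notextile>
<pre><code>scp -r provision.sh local* user@keep.example.com:
ssh user@keep.example.com sudo ./provision.sh --config local.params --roles keepproxy,keepweb
</code></pre>
</notextile>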
@@ -165,42 +141,42 @@ arvados: Failed: 0
 
 The distribution of role as described above can be applied running these commands:
 
-#. Database
+h4. Database
 
 <notextile>
 <pre><code>scp -r provision.sh local* user@host:
 ssh user@host sudo ./provision.sh --config local.params --roles database
 </code></pre>
 </notextile>
 
-#. API
+h4. API
 
 <notextile>
 <pre><code>scp -r provision.sh local* user@host:
-ssh user@host sudo ./provision.sh --config local.params --roles api,controller,websocket,dispatcher
+ssh user@host sudo ./provision.sh --config local.params --roles api,controller,websocket,dispatcher,keepbalance
 </code></pre>
 </notextile>
 
-#. Keepstore/s
+h4. Keepstore(s)
 
 <notextile>
 <pre><code>scp -r provision.sh local* user@host:
 ssh user@host sudo ./provision.sh --config local.params --roles keepstore
 </code></pre>
 </notextile>
 
-#. Workbench
+h4. Workbench
 
 <notextile>
 <pre><code>scp -r provision.sh local* user@host:
 ssh user@host sudo ./provision.sh --config local.params --roles workbench,workbench2,webshell
 </code></pre>
 </notextile>
 
-#. Keepproxy / Keepweb
+h4. Keepproxy / Keepweb
 
 <notextile>
 <pre><code>scp -r provision.sh local* user@host:
 ssh user@host sudo ./provision.sh --config local.params --roles keepproxy,keepweb
 </code></pre>
 </notextile>
 
-#. Shell (here we copy the CLI test workflow too)
+h4. Shell (here we copy the CLI test workflow too)
 
 <notextile>
 <pre><code>scp -r provision.sh local* tests user@host:
 ssh user@host sudo ./provision.sh --config local.params --roles shell
 </code></pre>
 </notextile>
@@ -316,3 +292,9 @@ INFO Final output collection d6c69a88147dde9d52a418d50ef788df+123
 INFO Final process status is success
 </code></pre>
 </notextile>
+
+h2(#post_install). After the installation
+
+Once the installation is complete, it is recommended to keep a copy of your local configuration files. Committing them to version control is a good idea.
+
+Re-running the Salt-based installer is not recommended for maintaining and upgrading Arvados; please see "Maintenance and upgrading":{{site.baseurl}}/admin/maintenance-and-upgrading.html for more information.
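A minimal sketch of one way to keep that copy, assuming git is available on the machine where you ran the installer; note that @local.params@ contains tokens and keys, so any repository holding it must stay private:

<notextile>
<pre><code># Sketch: snapshot the deployment configuration in a private git repo.
mkdir arvados-deploy-config
cp local.params arvados-deploy-config/
cp -r local_config_dir arvados-deploy-config/
cd arvados-deploy-config
git init
git add .
git commit -m 'Arvados multi-host deployment configuration'
</code></pre>
</notextile>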