X-Git-Url: https://git.arvados.org/arvados.git/blobdiff_plain/f54cc984969657be50c093b917feb49a19d78c22..f04d5211ed026a4e0cbdca77dad447700eb88772:/doc/install/salt-multi-host.html.textile.liquid

diff --git a/doc/install/salt-multi-host.html.textile.liquid b/doc/install/salt-multi-host.html.textile.liquid
index f3afcd5031..e497240c4c 100644
--- a/doc/install/salt-multi-host.html.textile.liquid
+++ b/doc/install/salt-multi-host.html.textile.liquid
@@ -11,7 +11,6 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 # "Introduction":#introduction
 # "Hosts preparation":#hosts_preparation
-## "Hosts setup using terraform (experimental)":#hosts_setup_using_terraform
 ## "Create a compute image":#create_a_compute_image
 # "Multi host install using the provision.sh script":#multi_host
 # "Choose the desired configuration":#choose_configuration
@@ -65,14 +64,6 @@ Note that these hosts can be virtual machines in your infrastructure and they do
 
 Again, if your infrastructure differs from the setup proposed above (ie, using RDS or an existing DB server), remember that you will need to edit the configuration files for the scripts so they work with your infrastructure.
 
-h3(#hosts_setup_using_terraform). Hosts setup using terraform (AWS, experimental)
-
-We added a few "terraform":https://terraform.io/ scripts (https://github.com/arvados/arvados/tree/main/tools/terraform) to let you create these instances easier in an AWS account. Check "the Arvados terraform documentation":/doc/install/terraform.html for more details.
-
-
-
 h2(#multi_host). Multi host install using the provision.sh script
 
 {% include 'branchname' %}
@@ -106,32 +97,13 @@ cp -r config_examples/multi_host/aws local_config_dir
 
 Edit the variables in the local.params file. Pay attention to the *_INT_IP, *_TOKEN and *KEY variables. Those variables will be used to do a search and replace on the pillars/* in place of any matching __VARIABLE__.
 
-The multi_host include LetsEncrypt salt code to automatically request and install the certificates for the public-facing hosts (API/controller, Workbench, Keepproxy/Keepweb) using AWS' Route53.
-
-If you plan to use custom certificates, please set the variable USE_LETSENCRYPT=no and copy your certificates to the directory specified with the variable @CUSTOM_CERTS_DIR@ (usually "./certs") in the remote directory where you copied the @provision.sh@ script. From this dir, the provision script will install the certificates required for the role you're installing.
+The multi_host example includes Let's Encrypt salt code to automatically request and install the certificates for the public-facing hosts (API/controller, Workbench, Keepproxy/Keepweb) using AWS' Route53.
 
-The script expects cert/key files with these basenames (matching the role except for keepweb, which is split in both downoad / collections):
-
-* "controller"
-* "websocket"
-* "workbench"
-* "workbench2"
-* "webshell"
-* "download" # Part of keepweb
-* "collections" # Part of keepweb
-* "keepproxy"
-
-Ie., for 'keepproxy', the script will lookup for
-
-<notextile>
-<pre><code>${CUSTOM_CERTS_DIR}/keepproxy.crt
-${CUSTOM_CERTS_DIR}/keepproxy.key
-</code></pre>
-</notextile>
+{% include 'install_custom_certificates' %}
 
 h3(#further_customization). Further customization of the installation (modifying the salt pillars and states)
 
-You will need further customization to suit your environment, which can be done editing the Saltstack pillars and states files. Pay particular attention to the pillars/arvados.sls file, where you will need to provide some information that can be retrieved as output of the terraform run.
+You will need further customization to suit your environment, which can be done by editing the Saltstack pillars and states files. Pay particular attention to the pillars/arvados.sls file, where you will need to provide some information that describes your environment.
 
 Any extra state file you add under local_config_dir/states will be added to the salt run and applied to the hosts.
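As background for the local.params step above: the provision script does a plain search-and-replace of @__VARIABLE__@ placeholders in the pillar files. A minimal sketch of that mechanism, assuming a hypothetical pillar file name and an example value (not taken from the Arvados tree):

```shell
# Hypothetical pillar fragment containing a placeholder token.
mkdir -p pillars
printf 'controller_ip: __CONTROLLER_INT_IP__\n' > pillars/example.sls

# Value as it would be set in local.params (example value only).
CONTROLLER_INT_IP=10.0.0.5

# The kind of substitution the provision script performs on pillars/*:
sed -i "s/__CONTROLLER_INT_IP__/${CONTROLLER_INT_IP}/g" pillars/*.sls

cat pillars/example.sls
```

Any @*_INT_IP@, @*_TOKEN@, or @*KEY@ variable follows the same pattern: the name between double underscores in the pillar must match the variable name in local.params.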
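The custom-certificate convention that this diff moves into the shared include expects one cert/key pair per role under @CUSTOM_CERTS_DIR@. A sketch of that layout, using the role basenames from the removed list; the @./certs@ default and the stand-in files are illustrative only:

```shell
# Directory the provision script reads certificates from (default is an assumption).
CUSTOM_CERTS_DIR=./certs
mkdir -p "${CUSTOM_CERTS_DIR}"

# Stand-in files for illustration; a real install copies CA-issued certs/keys here.
for role in controller websocket workbench workbench2 webshell download collections keepproxy; do
  touch "${CUSTOM_CERTS_DIR}/${role}.crt" "${CUSTOM_CERTS_DIR}/${role}.key"
done

# e.g. for the 'keepproxy' role the script looks for keepproxy.crt / keepproxy.key:
ls "${CUSTOM_CERTS_DIR}/keepproxy.crt" "${CUSTOM_CERTS_DIR}/keepproxy.key"
```

Note that keepweb is split into two basenames, "download" and "collections", rather than a single "keepweb" pair.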