---
layout: default
navsection: installguide
title: Single host Arvados
...

{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.

SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}

# "Install Saltstack":#saltstack
# "Single host install using the provision.sh script":#single_host
# "Final steps":#final_steps
## "DNS configuration":#dns_configuration
## "Install root certificate":#ca_root_certificate
# "Initial user and login":#initial_user
# "Test the installed cluster running a simple workflow":#test_install

h2(#saltstack). Install Saltstack

If you already have a Saltstack environment you can skip this section.

The simplest way to get Salt up and running on a node is to use the bootstrap script they provide:
curl -L https://bootstrap.saltstack.com -o /tmp/bootstrap_salt.sh
sudo sh /tmp/bootstrap_salt.sh -XUdfP -x python3
For more information check "Saltstack's documentation":https://docs.saltstack.com/en/latest/topics/installation/index.html

h2(#single_host). Single host install using the provision.sh script

This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.

Use the @provision.sh@ script to deploy Arvados, which is implemented with the @arvados-formula@ in a Saltstack master-less setup:

* edit the variables at the very beginning of the file,
* run the script as root,
* wait for it to finish.

This will install all the main Arvados components to get you up and running. The whole installation procedure takes between 15 and 60 minutes, depending on the host and your network bandwidth. On a virtual machine with 1 core and 1 GB RAM, the initial install takes about 25 minutes. A sketch of these commands follows the example output below.

If everything goes OK, you'll get some final lines stating something like:
arvados: Succeeded: 109 (changed=9)
arvados: Failed:      0
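
For reference, this is a minimal sketch of the steps above, assuming you fetch the scripts by cloning the Arvados git repository (paths and branch names may differ in your setup):

# Get the installer scripts (assumes git is installed on the host)
git clone https://github.com/arvados/arvados.git
cd arvados/tools/salt-install
# Edit the variables at the very beginning of provision.sh (cluster name,
# domain, IP, etc.) with your editor of choice, then run it as root:
sudo ./provision.sh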
h2(#final_steps). Final configuration steps

h3(#dns_configuration). DNS configuration

After the setup is done, you need to set up your DNS to be able to access the cluster. The simplest way to do this is to edit your @/etc/hosts@ file (as root):
export CLUSTER="arva2"
export DOMAIN="arv.local"
export HOST_IP="127.0.0.2"    # This is valid if you are installing directly on your
                              # computer or in a Vagrant VM. If you are installing on a
                              # remote host, change the IP to match that of the host.
echo "${HOST_IP} api keep keep0 collections download ws workbench workbench2 ${CLUSTER}.${DOMAIN} api.${CLUSTER}.${DOMAIN} keep.${CLUSTER}.${DOMAIN} keep0.${CLUSTER}.${DOMAIN} collections.${CLUSTER}.${DOMAIN} download.${CLUSTER}.${DOMAIN} ws.${CLUSTER}.${DOMAIN} workbench.${CLUSTER}.${DOMAIN} workbench2.${CLUSTER}.${DOMAIN}" >> /etc/hosts
h3(#ca_root_certificate). Install root certificate

Arvados uses SSL to encrypt communications. Its UI uses AJAX, which will silently fail if the certificate is not valid or is signed by an unknown Certification Authority. For this reason, the @arvados-formula@ has a helper state to create a root certificate to authorize Arvados services. The @provision.sh@ script will leave a copy of the generated CA's certificate (@arvados-snakeoil-ca.pem@) in the script's directory so you can add it to your workstation.

Installing the root certificate into your web browser will prevent security errors when accessing Arvados services with your web browser.

# Go to the certificate manager in your browser.
#* In Chrome, this can be found under "Settings → Advanced → Manage Certificates" or by entering @chrome://settings/certificates@ in the URL bar.
#* In Firefox, this can be found under "Preferences → Privacy & Security" or by entering @about:preferences#privacy@ in the URL bar and then choosing "View Certificates...".
# Select the "Authorities" tab, then press the "Import" button. Choose @arvados-snakeoil-ca.pem@. The certificate will be added under the "Arvados Formula".

To access your Arvados instance using command line clients (such as arv-get and arv-put) without security errors, install the certificate into the OS certificate storage.

* On Debian/Ubuntu:
cp arvados-root-cert.pem /usr/local/share/ca-certificates/
/usr/sbin/update-ca-certificates
* On CentOS:
cp arvados-root-cert.pem /etc/pki/ca-trust/source/anchors/
/usr/bin/update-ca-trust
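
As a quick sanity check after installing the certificate, you can make an HTTPS request against one of the services; the hostname below assumes the default @arva2.arv.local@ values. If the CA is trusted system-wide, the command should complete without a TLS error:

curl -sS -o /dev/null https://workbench.arva2.arv.local/ && echo "TLS certificate accepted"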
h2(#initial_user). Initial user and login

At this point you should be able to log into the Arvados cluster. If you changed nothing in the @provision.sh@ script, the initial URL will be:

* https://workbench.arva2.arv.local

or, in general, the URL format will be:

* https://workbench.@<cluster>.<domain>@

By default, the provision script creates an initial user for testing purposes. This user is configured as administrator of the newly created cluster. Assuming you didn't change these values in the @provision.sh@ script, the initial credentials are:

* User: 'admin'
* Password: 'password'
* Email: 'admin@arva2.arv.local'

h2(#test_install). Test the installed cluster running a simple workflow

The @provision.sh@ script saves a simple example test workflow in the @/tmp/cluster_tests@ directory. If you want to run it, just change to that directory and run:
cd /tmp/cluster_tests
./run-test.sh
It will create a test user, upload a small workflow, and run it. If everything goes OK, the output should be similar to this (some output was shortened for clarity):
Creating Arvados Standard Docker Images project
Arvados project uuid is 'arva2-j7d0g-0prd8cjlk6kfl7y'
{
 ...
 "uuid":"arva2-o0j2j-n4zu4cak5iifq2a",
 "owner_uuid":"arva2-tpzed-000000000000000",
 ...
}
Uploading arvados/jobs' docker image to the project
2.1.1: Pulling from arvados/jobs
8559a31e96f4: Pulling fs layer
...
Status: Downloaded newer image for arvados/jobs:2.1.1
docker.io/arvados/jobs:2.1.1
2020-11-23 21:43:39 arvados.arv_put[32678] INFO: Creating new cache file at /home/vagrant/.cache/arvados/arv-put/c59256eda1829281424c80f588c7cc4d
2020-11-23 21:43:46 arvados.arv_put[32678] INFO: Collection saved as 'Docker image arvados jobs:2.1.1 sha256:0dd50'
arva2-4zz18-1u5pvbld7cvxuy2
Creating initial user ('admin')
Setting up user ('admin')
{
 "items":[
  {
   ...
   "owner_uuid":"arva2-tpzed-000000000000000",
   ...
   "uuid":"arva2-o0j2j-1ownrdne0ok9iox"
  },
  {
   ...
   "owner_uuid":"arva2-tpzed-000000000000000",
   ...
   "uuid":"arva2-o0j2j-1zbeyhcwxc1tvb7"
  },
  {
   ...
   "email":"admin@arva2.arv.local",
   ...
   "owner_uuid":"arva2-tpzed-000000000000000",
   ...
   "username":"admin",
   "uuid":"arva2-tpzed-3wrm93zmzpshrq2",
   ...
  }
 ],
 "kind":"arvados#HashList"
}
Activating user 'admin'
{
 ...
 "email":"admin@arva2.arv.local",
 ...
 "username":"admin",
 "uuid":"arva2-tpzed-3wrm93zmzpshrq2",
 ...
}
Running test CWL workflow
INFO /usr/bin/cwl-runner 2.1.1, arvados-python-client 2.1.1, cwltool 3.0.20200807132242
INFO Resolved 'hasher-workflow.cwl' to 'file:///tmp/cluster_tests/hasher-workflow.cwl'
...
INFO Using cluster arva2 (https://arva2.arv.local:8443/)
INFO Upload local files: "test.txt"
INFO Uploaded to ea34d971b71d5536b4f6b7d6c69dc7f6+50 (arva2-4zz18-c8uvwqdry4r8jao)
INFO Using collection cache size 256 MiB
INFO [container hasher-workflow.cwl] submitted container_request arva2-xvhdp-v1bkywd58gyocwm
INFO [container hasher-workflow.cwl] arva2-xvhdp-v1bkywd58gyocwm is Final
INFO Overall process status is success
INFO Final output collection d6c69a88147dde9d52a418d50ef788df+123
{
    "hasher_out": {
        "basename": "hasher3.md5sum.txt",
        "class": "File",
        "location": "keep:d6c69a88147dde9d52a418d50ef788df+123/hasher3.md5sum.txt",
        "size": 95
    }
}
INFO Final process status is success
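
Once the workflow finishes, you can retrieve its output with the command line clients, assuming they are installed and configured (@ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ set for your cluster). For example, using @arv-get@ with the final output collection from the sample run above (substitute the collection hash from your own run):

arv-get d6c69a88147dde9d52a418d50ef788df+123/hasher3.md5sum.txt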