navsection: installguide
title: Maintenance and upgrading
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
# "Commercial support":#commercial_support
# "Maintaining Arvados":#maintaining
## "Modification of the config.yml file":#configchange
## "Distribute the configuration file":#distribution
## "Restart the services affected by the change":#restart
# "Upgrading Arvados":#upgrading
h2(#commercial_support). Commercial support
Arvados is "100% open source software":{{site.baseurl}}/user/copying/copying.html. Anyone can download, install, maintain and upgrade it. However, if this is not something you want to spend your time and energy doing, "Curii Corporation":https://curii.com provides managed Arvados installations as well as commercial support for Arvados. Please contact "info@curii.com":mailto:info@curii.com for more information.
If you'd prefer to do things yourself, a few starting points for maintaining and upgrading Arvados can be found below.
h2(#maintaining). Maintaining Arvados
After Arvados is installed, periodic configuration changes may be required to adapt the software to your needs. Arvados uses a unified configuration file, which is normally found at @/etc/arvados/config.yml@.
Making a configuration change to Arvados typically involves three steps:
* modification of the @config.yml@ file
* distribution of the modified file to the machines in the cluster
* restarting of the services affected by the change
h3(#configchange). Modification of the @config.yml@ file
Consult the "configuration reference":{{site.baseurl}}/admin/config.html or another part of the documentation to identify the change to be made.
Preserve a copy of your existing configuration file as a backup, and make the desired modification.
Run @arvados-server config-check@ to make sure the configuration file contains no errors or warnings.
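As a minimal sketch of this edit-and-check cycle (the backup file name and editor command are just examples):

<notextile>
<pre><code># Keep a dated backup of the current configuration before editing it
cp /etc/arvados/config.yml /etc/arvados/config.yml.$(date +%Y%m%d).bak

# Make the desired change with your preferred editor
editor /etc/arvados/config.yml

# Validate the result; fix any reported errors or warnings before distributing the file
arvados-server config-check
</code></pre>
</notextile>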
h3(#distribution). Distribute the configuration file
It is very important to keep the @config.yml@ file in sync across all Arvados system nodes, to avoid issues with services running on different versions of the configuration.
We provide "installer.sh":../install/salt-multi-host.html#installation to distribute configuration changes. You may also use your own orchestration, e.g. @scp@, configuration management software, etc.
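If you distribute the file by hand, a sketch with @scp@ might look like this; the host names shown are placeholders for your own cluster nodes:

<notextile>
<pre><code># Copy the validated configuration to every other Arvados system node
# (workbench.example.com and keep0.example.com are placeholder host names)
for host in workbench.example.com keep0.example.com; do
  scp /etc/arvados/config.yml root@$host:/etc/arvados/config.yml
done
</code></pre>
</notextile>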
h3(#restart). Restart the services affected by the change
If you know which Arvados services use the configuration entries that were modified, restart those services. When in doubt, restart all Arvados system services.
To check for services that have not restarted since the configuration file was updated, run the @arvados-server check@ command on each system node.
To test functionality and check for common problems, run the @arvados-client sudo diagnostics@ command on a system node.
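On a systemd-based node, that sequence might look like the following sketch; the unit names are examples, and the actual set depends on which Arvados services run on each node:

<notextile>
<pre><code># Restart the services affected by the change (unit names vary per node)
systemctl restart arvados-controller keepstore

# Confirm that no service is still running with an outdated configuration
arvados-server check

# Exercise basic cluster functionality and look for common problems
arvados-client sudo diagnostics
</code></pre>
</notextile>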
h2(#upgrading). Upgrading Arvados
Upgrading Arvados typically involves the following steps; a command-line sketch of the package-upgrade and verification steps follows the list:
# Consult the "upgrade notes":{{site.baseurl}}/admin/upgrading.html and the "release notes":https://arvados.org/releases/ for the release you want to upgrade to.
# Wait for the cluster to be idle and stop Arvados services.
# Make a backup of your database, as a precaution.
# Update the configuration file for the new release, if necessary (see "Maintaining Arvados":#maintaining above).
# Update compute nodes:
## (cloud) Rebuild and deploy the "compute node image":{{site.baseurl}}/install/crunch2-cloud/install-compute-node.html
## (Slurm/LSF) Upgrade the @python3-arvados-fuse@ package used on your compute nodes.
# Install new packages using @apt upgrade@ or @dnf upgrade@.
# Wait for the package installation scripts to finish; they perform any necessary data migrations.
# Run @arvados-server config-check@ to detect configuration errors or deprecated entries.
# Verify that the Arvados services were restarted as part of the package upgrades.
# Run @arvados-server check@ to detect services that did not restart properly.
# Run @arvados-client sudo diagnostics@ to test functionality.
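As a rough sketch of the package-upgrade and verification steps on a Debian/Ubuntu system node (substitute @dnf upgrade@ on Red Hat based distributions):

<notextile>
<pre><code># Install the new Arvados packages; the package scripts run any required data migrations
apt update
apt upgrade

# Check the configuration file against the new release
arvados-server config-check

# Detect services that did not restart properly, then test functionality
arvados-server check
arvados-client sudo diagnostics
</code></pre>
</notextile>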