---
layout: default
navsection: installguide
title: Arvados-in-a-box
...
---

{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.

SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}

Arvbox is a Docker-based self-contained development, demonstration and testing environment for Arvados.  It is not intended for production use.

h2. Quick start
<pre>
$ curl -O https://git.arvados.org/arvados.git/blob_plain/refs/heads/main:/tools/arvbox/bin/arvbox
$ chmod +x arvbox
$ ./arvbox start localdemo latest
$ ./arvbox adduser demouser demo@example.com
</pre>

You can now log in as @demouser@ using the password you selected.

h2. Requirements

* Linux 3.x+ and Docker 1.9+
* Minimum of 3 GiB of RAM + additional memory to run jobs
* Minimum of 3 GiB of disk + storage for actual data

h2. Usage
<pre>
$ arvbox
Arvados-in-a-box                   https://doc.arvados.org/install/arvbox.html

start|run <config> [tag]  start arvbox container
stop          stop arvbox container
restart       stop, then run again
status        print some information about current arvbox
ip            print arvbox docker container ip address
host          print arvbox published host
shell         enter shell as root
ashell        enter shell as 'arvbox'
psql          enter postgres console
open          open arvbox workbench in a web browser
root-cert     get copy of root certificate
update        stop, pull latest image, run
build         build arvbox Docker image
reboot        stop, build arvbox Docker image, run
rebuild       build arvbox Docker image, no layer cache
checkpoint    create database backup
restore       restore checkpoint
hotreset      reset database and restart API without restarting container
reset         delete arvbox arvados data (be careful!)
destroy       delete all arvbox code and data (be careful!)
log           tail log of specified service
ls            list directories inside arvbox
cat           get contents of files inside arvbox
pipe          run a bash script piped in from stdin
sv            change state of service inside arvbox
clone         clone dev arvbox
adduser       add a user login
removeuser    remove user login
listusers     list user logins
</pre>

h2. Install root certificate

Arvbox creates a root certificate to authorize Arvbox services.  Installing the root certificate into your web browser will prevent security errors when accessing Arvbox services with your web browser.  Every Arvbox instance generates a new root signing key.

# Export the certificate using @arvbox root-cert@
# Go to the certificate manager in your browser.
#* In Chrome, this can be found under "Settings → Advanced → Manage Certificates" or by entering @chrome://settings/certificates@ in the URL bar.
#* In Firefox, this can be found under "Preferences → Privacy & Security" or entering @about:preferences#privacy@ in the URL bar and then choosing "View Certificates...".
# Select the "Authorities" tab, then press the "Import" button.  Choose @arvbox-root-cert.pem@.

The certificate will be added under the "Arvados testing" organization as "arvbox testing root CA".

To access your Arvbox instance using command line clients (such as @arv-get@ and @arv-put@) without security errors, install the certificate into the OS certificate storage.
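If you have not already exported the certificate, a minimal sketch of that step, run from the directory containing the @arvbox@ script (the output filename shown assumes the default container name used elsewhere on this page):

<pre>
# Copy the Arvbox root certificate into the current directory,
# then confirm the file is present before installing it below.
$ ./arvbox root-cert
$ ls arvbox-root-cert.pem
arvbox-root-cert.pem
</pre>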
h3. On Debian/Ubuntu:

<pre>
cp arvbox-root-cert.pem /usr/local/share/ca-certificates/
/usr/sbin/update-ca-certificates
</pre>
h3. On Red Hat/CentOS:

<pre>
cp arvbox-root-cert.pem /etc/pki/ca-trust/source/anchors/
/usr/bin/update-ca-trust
</pre>
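As an optional sanity check (not part of the original instructions), you can ask OpenSSL to verify the certificate against the system trust store after installing it; if the import succeeded, it should report OK:

<pre>
# Verifies the exported root certificate against the OS trust store;
# expect "arvbox-root-cert.pem: OK" once the root CA is trusted.
$ openssl verify arvbox-root-cert.pem
arvbox-root-cert.pem: OK
</pre>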
h2. Publishing a demo on the network

To make the demo environment reachable from other hosts, use the @publicdemo@ configuration:

<pre>
$ arvbox start publicdemo
</pre>

This attempts to auto-detect the correct IP address to use by taking the IP address of the default route device.  If the auto-detection is wrong, if you want to publish a hostname instead of a raw address, or if you need to access it through a different device (such as a router or firewall), set @ARVBOX_PUBLISH_IP@ to the desired hostname or IP address:
<pre>
$ export ARVBOX_PUBLISH_IP=example.com
$ arvbox start publicdemo
</pre>

Note: this expects to bind the host's port 80 (http) for workbench, so you cannot have a conflicting web server already running on the host.  It does not attempt to bind the host's port 22 (ssh), so the arvbox ssh port is not published.

h2. Notes

Services are designed to install and auto-configure on start or restart.  For example, the service script for keepstore always compiles keepstore from source and registers the daemon with the API server.

Services are run with process supervision, so a service which exits will be restarted.  Dependencies between services are handled by repeatedly trying and failing the service script until dependencies are fulfilled (by other service scripts), enabling the service script to complete.
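The @sv@ and @log@ commands from the usage listing above are useful when inspecting a service that keeps restarting. A hedged sketch, with the caveat that @api@ is an illustrative service name and the service directory path is an assumption based on runit conventions rather than something documented on this page:

<pre>
# List the service directories inside the container (path is an assumption),
# then restart one service and tail its log.
$ ./arvbox ls /etc/service
$ ./arvbox sv restart api
$ ./arvbox log api
</pre>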