Chen Chen <aflyhorse@gmail.com>
Veritas Genetics, Inc. <*@veritasgenetics.com>
Curii Corporation, Inc. <*@curii.com>
+Dante Tsang <dante@dantetsang.com>
+Codex Genetics Ltd <info@codexgenetics.com>
\ No newline at end of file
multi_json (~> 1.0)
websocket-driver (>= 0.2.0)
public_suffix (4.0.3)
- rack (2.0.7)
+ rack (2.2.2)
rack-mini-profiler (1.0.2)
rack (>= 1.2.0)
rack-test (0.6.3)
uglifier (~> 2.0)
BUNDLED WITH
- 1.11
+ 1.16.6
- install/arvbox.html.textile.liquid
- Arvados on Kubernetes:
- install/arvados-on-kubernetes.html.textile.liquid
+ - install/arvados-on-kubernetes-minikube.html.textile.liquid
+ - install/arvados-on-kubernetes-GKE.html.textile.liquid
- Manual installation:
- install/install-manual-prerequisites.html.textile.liquid
- install/packages.html.textile.liquid
+AutoReloadConfig: true
Clusters:
zzzzz:
ManagementToken: e687950a23c3a9bceec28c6223a06c79
---
layout: default
navsection: installguide
-title: Arvados on Kubernetes - Google Kubernetes Engine
+title: Arvados on GKE
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-This page documents the setup of the prerequisites to run the "Arvados on Kubernetes":/install/arvados-on-kubernetes.html @Helm@ chart on @Google Kubernetes Engine@ (GKE).
+This page documents setting up and running the "Arvados on Kubernetes":/install/arvados-on-kubernetes.html @Helm@ chart on @Google Kubernetes Engine@ (GKE).
+
+{% include 'notebox_begin_warning' %}
+This Helm chart does not retain any state after it is deleted. An Arvados cluster created with this Helm chart is entirely ephemeral, and all data stored on the cluster will be deleted when it is shut down. This will be fixed in a future version.
+{% include 'notebox_end' %}
+
+h2. Prerequisites
h3. Install tooling
* Follow the instructions at "https://docs.helm.sh/using_helm/#installing-helm":https://docs.helm.sh/using_helm/#installing-helm
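+
+For example, one common way to install @Helm 3@ on Linux is via its install script (this assumes @curl@ is available; see the Helm documentation above for the currently recommended method):
+
+<pre>
+$ curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
+</pre>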
-h3. Boot the GKE cluster
+h3. Create the GKE cluster
This can be done via the "cloud console":https://console.cloud.google.com/kubernetes/ or via the command line:
<pre>
-$ gcloud container clusters create <CLUSTERNAME> --zone us-central1-a --machine-type n1-standard-2 --cluster-version 1.10
+$ gcloud container clusters create <CLUSTERNAME> --zone us-central1-a --machine-type n1-standard-2 --cluster-version 1.15
</pre>
It takes a few minutes for the cluster to be initialized.
$ kubectl get nodes
</pre>
-Now proceed to the "Initialize helm on the Kubernetes cluster":/install/arvados-on-kubernetes.html#helm section.
+Test @helm@ by running
+
+<pre>
+$ helm ls
+</pre>
+
+There should be no errors. The command will return nothing.
+
+h2(#git). Clone the repository
+
+Clone the repository and navigate to the @arvados-kubernetes/charts/arvados@ directory:
+
+<pre>
+$ git clone https://github.com/arvados/arvados-kubernetes.git
+$ cd arvados-kubernetes/charts/arvados
+</pre>
+
+h2(#Start). Start the Arvados cluster
+
+Next, determine the IP address that the Arvados cluster will use to expose its API, Workbench, etc. If this Arvados cluster needs to be reachable from machines other than the local one, the IP address must be routable from those machines.
+
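+One way to obtain such an address on Google Cloud is to reserve a static external IP with @gcloud@ (the address name @arvados-ip@ here is only an example):
+
+<pre>
+$ gcloud compute addresses create arvados-ip --region us-central1
+$ gcloud compute addresses describe arvados-ip --region us-central1
+</pre>
+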
+Generate self-signed SSL certificates for the Arvados services:
+
+<pre>
+$ ./cert-gen.sh <IP ADDRESS>
+</pre>
+
+The @values.yaml@ file contains a number of variables that can be modified. At a minimum, review and/or modify the values for
+
+<pre>
+ adminUserEmail
+ adminUserPassword
+ superUserSecret
+ anonymousUserSecret
+</pre>
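+
+For example, the relevant section of @values.yaml@ might look like this (illustrative values only; choose your own secrets):
+
+<pre>
+  adminUserEmail: admin@example.com
+  adminUserPassword: SuperSecretPassword
+  superUserSecret: sometoken1234567890abcdefghijklmnopqrstuv
+  anonymousUserSecret: anothertoken1234567890abcdefghijklmnopq
+</pre>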
+
+Now start the Arvados cluster:
+
+<pre>
+$ helm install arvados . --set externalIP=<IP ADDRESS>
+</pre>
+
+At this point, you can use @kubectl@ to see the Arvados cluster boot:
+
+<pre>
+$ kubectl get pods
+$ kubectl get svc
+</pre>
+
+After a few minutes, there shouldn't be any services listed with a 'Pending' external IP address. At that point you can access Arvados Workbench at the IP address specified:
+
+* https://<IP ADDRESS>
+
+with the username and password specified in the @values.yaml@ file.
+
+Alternatively, use the Arvados CLI tools or SDKs. First set the environment variables:
+
+<pre>
+$ export ARVADOS_API_TOKEN=<superUserSecret from values.yaml>
+$ export ARVADOS_API_HOST=<IP ADDRESS>:444
+$ export ARVADOS_API_HOST_INSECURE=true
+</pre>
+
+Test access with:
+
+<pre>
+$ arv user current
+</pre>
+
+h2(#reload). Reload
+
+If you make changes to the Helm chart (e.g. to @values.yaml@), you can reload Arvados with
+
+<pre>
+$ helm upgrade arvados .
+</pre>
+
+h2. Shut down
+
+{% include 'notebox_begin_warning' %}
+This Helm chart does not retain any state after it is deleted. An Arvados cluster created with this Helm chart is entirely ephemeral, and <strong>all data stored on the Arvados cluster will be deleted</strong> when it is shut down. This will be fixed in a future version.
+{% include 'notebox_end' %}
+
+<pre>
+$ helm del arvados
+</pre>
+
+<pre>
+$ gcloud container clusters delete <CLUSTERNAME> --zone us-central1-a
+</pre>
---
layout: default
navsection: installguide
-title: Arvados on Kubernetes - Minikube
+title: Arvados on Minikube
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-This page documents the setup of the prerequisites to run the "Arvados on Kubernetes":/install/arvados-on-kubernetes.html @Helm@ chart on @Minikube@.
+This page documents setting up and running the "Arvados on Kubernetes":/install/arvados-on-kubernetes.html @Helm@ chart on @Minikube@.
+
+{% include 'notebox_begin_warning' %}
+This Helm chart does not retain any state after it is deleted. An Arvados cluster created with this Helm chart is entirely ephemeral, and all data stored on the cluster will be deleted when it is shut down. This will be fixed in a future version.
+{% include 'notebox_end' %}
+
+h2. Prerequisites
h3. Install tooling
$ kubectl get nodes
</pre>
-Now proceed to the "Initialize helm on the Kubernetes cluster":/install/arvados-on-kubernetes.html#helm section.
+Test @helm@ by running
+
+<pre>
+$ helm ls
+</pre>
+
+There should be no errors. The command will return nothing.
+
+h2(#git). Clone the repository
+
+Clone the repository and navigate to the @arvados-kubernetes/charts/arvados@ directory:
+
+<pre>
+$ git clone https://github.com/arvados/arvados-kubernetes.git
+$ cd arvados-kubernetes/charts/arvados
+</pre>
+
+h2(#Start). Start the Arvados cluster
+
+All Arvados services will be accessible on Minikube's IP address. This will be a local IP address; you can see what it is by running:
+
+<pre>
+$ minikube ip
+192.168.39.15
+</pre>
+
+Generate self-signed SSL certificates for the Arvados services:
+
+<pre>
+$ ./cert-gen.sh `minikube ip`
+</pre>
+
+The @values.yaml@ file contains a number of variables that can be modified. At a minimum, review and/or modify the values for
+
+<pre>
+ adminUserEmail
+ adminUserPassword
+ superUserSecret
+ anonymousUserSecret
+</pre>
+
+Now start the Arvados cluster:
+
+<pre>
+$ helm install arvados . --set externalIP=`minikube ip`
+</pre>
+
+Then update the Kubernetes services to use the Minikube IP as their 'external' IP:
+
+<pre>
+$ ./minikube-external-ip.sh
+</pre>
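+
+The script assigns the Minikube IP to each Arvados service. A manual equivalent for a single service might look like this sketch (the service name @arvados-api-server@ is hypothetical; substitute the address reported by @minikube ip@):
+
+<pre>
+$ kubectl patch svc arvados-api-server -p '{"spec":{"externalIPs":["192.168.39.15"]}}'
+</pre>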
+
+At this point, you can use @kubectl@ to see the Arvados cluster boot:
+
+<pre>
+$ kubectl get pods
+$ kubectl get svc
+</pre>
+
+After a few minutes, you can access Arvados Workbench at the Minikube IP address:
+
+* https://<MINIKUBE IP>
+
+with the username and password specified in the @values.yaml@ file.
+
+Alternatively, use the Arvados CLI tools or SDKs. First set the environment variables:
+
+<pre>
+$ export ARVADOS_API_TOKEN=<superUserSecret from values.yaml>
+$ export ARVADOS_API_HOST=<MINIKUBE IP>:444
+$ export ARVADOS_API_HOST_INSECURE=true
+</pre>
+
+Test access with:
+
+<pre>
+$ arv user current
+</pre>
+
+h2(#reload). Reload
+
+If you make changes to the Helm chart (e.g. to @values.yaml@), you can reload Arvados with
+
+<pre>
+$ helm upgrade arvados .
+</pre>
+
+h2. Shut down
+
+{% include 'notebox_begin_warning' %}
+This Helm chart does not retain any state after it is deleted. An Arvados cluster created with this Helm chart is entirely ephemeral, and <strong>all data stored on the Arvados cluster will be deleted</strong> when it is shut down. This will be fixed in a future version.
+{% include 'notebox_end' %}
+
+<pre>
+$ helm del arvados
+</pre>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-Arvados on Kubernetes is implemented as a Helm Chart.
+Arvados on Kubernetes is implemented as a @Helm 3@ chart.
{% include 'notebox_begin_warning' %}
-This Helm Chart does not retain any state after it is deleted. An Arvados cluster created with this Helm Chart is entirely ephemeral, and all data stored on the cluster will be deleted when it is shut down. This will be fixed in a future version.
+This Helm chart does not retain any state after it is deleted. An Arvados cluster created with this Helm chart is entirely ephemeral, and all data stored on the cluster will be deleted when it is shut down. This will be fixed in a future version.
{% include 'notebox_end' %}
h2(#overview). Overview
-This Helm Chart provides a basic, small Arvados cluster.
+This Helm chart provides a basic, small Arvados cluster.
Current limitations, to be addressed in the future:
-* An Arvados cluster created with this Helm Chart is entirely ephemeral, and all data stored on the cluster will be deleted when it is shut down.
-* No dynamic scaling of compute nodes (but you can adjust @values.yaml@ and "reload the Helm Chart":#reload
+* An Arvados cluster created with this Helm chart is entirely ephemeral, and all data stored on the cluster will be deleted when it is shut down.
+* Workbench2 is not present yet
+* No dynamic scaling of compute nodes (but you can adjust @values.yaml@ and reload the Helm chart)
* All compute nodes are the same size
* Compute nodes have no cpu/memory/disk constraints yet
* No git server
h2. Requirements
-* Kubernetes 1.10+ cluster with at least 3 nodes, 2 or more cores per node
-* @kubectl@ and @helm@ installed locally, and able to connect to your Kubernetes cluster
+* Minikube or Google Kubernetes Engine (Kubernetes 1.10+ with at least 3 nodes, 2+ cores per node)
+* @kubectl@ and @Helm 3@ installed locally, and able to connect to your Kubernetes cluster
-If you do not have a Kubernetes cluster already set up, you can use "Google Kubernetes Engine":/install/arvados-on-kubernetes-GKE.html for multi-node development and testing or "another Kubernetes solution":https://kubernetes.io/docs/setup/pick-right-solution/. Minikube is not supported yet.
+Please refer to "Arvados on Minikube":/install/arvados-on-kubernetes-minikube.html or "Arvados on GKE":/install/arvados-on-kubernetes-GKE.html for detailed installation instructions.
-h2(#helm). Initialize helm on the Kubernetes cluster
-
-If you already have helm running on the Kubernetes cluster, proceed directly to "Start the Arvados cluster":#Start below.
-
-<pre>
-$ helm init
-$ kubectl create serviceaccount --namespace kube-system tiller
-$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
-$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
-</pre>
-
-Test @helm@ by running
-
-<pre>
-$ helm ls
-</pre>
-
-There should be no errors. The command will return nothing.
-
-h2(#git). Clone the repository
-
-Clone the repository and nagivate to the @arvados-kubernetes/charts/arvados@ directory:
-
-<pre>
-$ git clone https://github.com/arvados/arvados-kubernetes.git
-$ cd arvados-kubernetes/charts/arvados
-</pre>
-
-h2(#Start). Start the Arvados cluster
-
-Next, determine the IP address that the Arvados cluster will use to expose its API, Workbench, etc. If you want this Arvados cluster to be reachable from places other than the local machine, the IP address will need to be routable as appropriate.
-
-<pre>
-$ ./cert-gen.sh <IP ADDRESS>
-</pre>
-
-The @values.yaml@ file contains a number of variables that can be modified. At a minimum, review and/or modify the values for
-
-<pre>
- adminUserEmail
- adminUserPassword
- superUserSecret
- anonymousUserSecret
-</pre>
-
-Now start the Arvados cluster:
-
-<pre>
-$ helm install --name arvados . --set externalIP=<IP ADDRESS>
-</pre>
-
-At this point, you can use kubectl to see the Arvados cluster boot:
-
-<pre>
-$ kubectl get pods
-$ kubectl get svc
-</pre>
-
-After a few minutes, you can access Arvados Workbench at the IP address specified
-
-* https://<IP ADDRESS>
-
-with the username and password specified in the @values.yaml@ file.
-
-Alternatively, use the Arvados cli tools or SDKs:
-
-Set the environment variables:
-
-<pre>
-$ export ARVADOS_API_TOKEN=<superUserSecret from values.yaml>
-$ export ARVADOS_API_HOST=<STATIC IP>:444
-$ export ARVADOS_API_HOST_INSECURE=true
-</pre>
-
-Test access with:
-
-<pre>
-$ arv user current
-</pre>
-
-h2(#reload). Reload
-
-If you make changes to the Helm Chart (e.g. to @values.yaml@), you can reload Arvados with
-
-<pre>
-$ helm upgrade arvados .
-</pre>
-
-h2. Shut down
-
-{% include 'notebox_begin_warning' %}
-This Helm Chart does not retain any state after it is deleted. An Arvados cluster created with this Helm Chart is entirely ephemeral, and <strong>all data stored on the Arvados cluster will be deleted</strong> when it is shut down. This will be fixed in a future version.
-{% include 'notebox_end' %}
-
-<pre>
-$ helm del arvados --purge
-</pre>
# If the AccessViaHosts section is empty or omitted, all
# keepstore servers will have read/write access to the
# volume.
- "http://<span class="userinput">keep0.ClusterID.example.com</span>:25107/": {}
- "http://<span class="userinput">keep1.ClusterID.example.com</span>:25107/": {ReadOnly: true}
+ "http://<span class="userinput">keep0.ClusterID.example.com</span>:25107": {}
+ "http://<span class="userinput">keep1.ClusterID.example.com</span>:25107": {ReadOnly: true}
Driver: <span class="userinput">Azure</span>
DriverParameters:
# If the AccessViaHosts section is empty or omitted, all
# keepstore servers will have read/write access to the
# volume.
- "http://<span class="userinput">keep0.ClusterID.example.com</span>:25107/": {}
- "http://<span class="userinput">keep1.ClusterID.example.com</span>:25107/": {ReadOnly: true}
+ "http://<span class="userinput">keep0.ClusterID.example.com</span>:25107": {}
+ "http://<span class="userinput">keep1.ClusterID.example.com</span>:25107": {ReadOnly: true}
Driver: <span class="userinput">S3</span>
DriverParameters:
Keepstore:
# No ExternalURL because they are only accessed by the internal subnet.
InternalURLs:
- "http://<span class="userinput">keep0.ClusterID.example.com</span>:25107/": {}
- "http://<span class="userinput">keep1.ClusterID.example.com</span>:25107/": {}
+ "http://<span class="userinput">keep0.ClusterID.example.com</span>:25107": {}
+ "http://<span class="userinput">keep1.ClusterID.example.com</span>:25107": {}
# and so forth
</code></pre>
</notextile>
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239 // indirect
github.com/arvados/cgofuse v1.2.0-arvados1
github.com/aws/aws-sdk-go v1.25.30
+ github.com/bgentry/speakeasy v0.1.0 // indirect
github.com/coreos/go-oidc v2.1.0+incompatible
github.com/coreos/go-systemd v0.0.0-20180108085132-cc4f39464dc7
github.com/dgrijalva/jwt-go v3.1.0+incompatible // indirect
github.com/docker/go-connections v0.3.0 // indirect
github.com/docker/go-units v0.3.3-0.20171221200356-d59758554a3d // indirect
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568 // indirect
+ github.com/fsnotify/fsnotify v1.4.9
github.com/ghodss/yaml v1.0.0
github.com/gliderlabs/ssh v0.2.2 // indirect
github.com/gogo/protobuf v1.1.1
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
+github.com/bgentry/speakeasy v0.1.0 h1:ByYyxL9InA1OWqxJqqp2A5pYHUrCiAL6K3J+LKSsQkY=
+github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/cespare/xxhash/v2 v2.1.0 h1:yTUvW7Vhb89inJ+8irsUqiWjh8iT6sQPZiQzI6ReGkA=
github.com/cespare/xxhash/v2 v2.1.0/go.mod h1:dgIUBU3pDso/gPgZ1osOZ0iQf77oPR28Tjxl5dIMyVM=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/docker/go-units v0.3.3-0.20171221200356-d59758554a3d/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568 h1:BHsljHzVlRcyQhjrss6TZTdY2VfCqZPbv5k3iBFa2ZQ=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
+github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
+github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gliderlabs/ssh v0.2.2 h1:6zsha5zo/TWhRhwqCD3+EarCAgZ2yN28ipRnGPnwkI0=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191105231009-c1f44814a5cd h1:3x5uuvBgE6oaXJjCOvpCC1IpgJogqQ+PqGGU3ZxAgII=
golang.org/x/sys v0.0.0-20191105231009-c1f44814a5cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
import (
"context"
+ "fmt"
"io/ioutil"
+ "net"
"path/filepath"
)
}
func (createCertificates) Run(ctx context.Context, fail func(error), super *Supervisor) error {
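+	// Pick the subjectAltName entry type for the supervisor's listen
+	// address: an IP: entry if ListenHost is an IP address, a DNS:
+	// entry otherwise.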
+ var san string
+ if net.ParseIP(super.ListenHost) != nil {
+ san = fmt.Sprintf("IP:%s", super.ListenHost)
+ } else {
+ san = fmt.Sprintf("DNS:%s", super.ListenHost)
+ }
+
// Generate root key
err := super.RunProgram(ctx, super.tempdir, nil, nil, "openssl", "genrsa", "-out", "rootCA.key", "4096")
if err != nil {
if err != nil {
return err
}
- err = ioutil.WriteFile(filepath.Join(super.tempdir, "server.cfg"), append(defaultconf, []byte(`
-[SAN]
-subjectAltName=DNS:localhost,DNS:localhost.localdomain
-`)...), 0644)
+ err = ioutil.WriteFile(filepath.Join(super.tempdir, "server.cfg"), append(defaultconf, []byte(fmt.Sprintf("\n[SAN]\nsubjectAltName=DNS:localhost,DNS:localhost.localdomain,%s\n", san))...), 0644)
if err != nil {
return err
}
return err
}
// Sign certificate
- err = super.RunProgram(ctx, super.tempdir, nil, nil, "openssl", "x509", "-req", "-in", "server.csr", "-CA", "rootCA.crt", "-CAkey", "rootCA.key", "-CAcreateserial", "-out", "server.crt", "-days", "3650", "-sha256")
+ err = super.RunProgram(ctx, super.tempdir, nil, nil, "openssl", "x509", "-req", "-in", "server.csr", "-CA", "rootCA.crt", "-CAkey", "rootCA.key", "-CAcreateserial", "-out", "server.crt", "-extfile", "server.cfg", "-extensions", "SAN", "-days", "3650", "-sha256")
if err != nil {
return err
}
String() string
}
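+
+// errNeedConfigReload is a sentinel error: the config file has changed
+// and the supervisor should restart with a freshly loaded config.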
+var errNeedConfigReload = errors.New("config changed, restart needed")
+
type bootCommand struct{}
-func (bootCommand) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
- super := &Supervisor{
- Stderr: stderr,
- logger: ctxlog.New(stderr, "json", "info"),
+func (bcmd bootCommand) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
+ logger := ctxlog.New(stderr, "json", "info")
+ ctx := ctxlog.Context(context.Background(), logger)
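+	// Run the boot sequence in a loop, restarting it whenever the
+	// config file changes.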
+ for {
+ err := bcmd.run(ctx, prog, args, stdin, stdout, stderr)
+ if err == errNeedConfigReload {
+ continue
+ } else if err != nil {
+ logger.WithError(err).Info("exiting")
+ return 1
+ } else {
+ return 0
+ }
}
+}
- ctx := ctxlog.Context(context.Background(), super.logger)
+func (bcmd bootCommand) run(ctx context.Context, prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
-
- var err error
- defer func() {
- if err != nil {
- super.logger.WithError(err).Info("exiting")
- }
- }()
+ super := &Supervisor{
+ Stderr: stderr,
+ logger: ctxlog.FromContext(ctx),
+ }
flags := flag.NewFlagSet(prog, flag.ContinueOnError)
flags.SetOutput(stderr)
flags.BoolVar(&super.OwnTemporaryDatabase, "own-temporary-database", false, "bring up a postgres server and create a temporary database")
timeout := flags.Duration("timeout", 0, "maximum time to wait for cluster to be ready")
shutdown := flags.Bool("shutdown", false, "shut down when the cluster becomes ready")
- err = flags.Parse(args)
+ err := flags.Parse(args)
if err == flag.ErrHelp {
- err = nil
- return 0
+ return nil
} else if err != nil {
- return 2
+ return err
} else if *versionFlag {
- return cmd.Version.RunCommand(prog, args, stdin, stdout, stderr)
+ cmd.Version.RunCommand(prog, args, stdin, stdout, stderr)
+ return nil
} else if super.ClusterType != "development" && super.ClusterType != "test" && super.ClusterType != "production" {
- err = fmt.Errorf("cluster type must be 'development', 'test', or 'production'")
- return 2
+ return fmt.Errorf("cluster type must be 'development', 'test', or 'production'")
}
loader.SkipAPICalls = true
cfg, err := loader.Load()
if err != nil {
- return 1
+ return err
}
- super.Start(ctx, cfg)
+ super.Start(ctx, cfg, loader.Path)
defer super.Stop()
var timer *time.Timer
url, ok := super.WaitReady()
if timer != nil && !timer.Stop() {
- err = errors.New("boot timed out")
- return 1
+ return errors.New("boot timed out")
} else if !ok {
- err = errors.New("boot failed")
- return 1
- }
- // Write controller URL to stdout. Nothing else goes to
- // stdout, so this provides an easy way for a calling script
- // to discover the controller URL when everything is ready.
- fmt.Fprintln(stdout, url)
- if *shutdown {
- super.Stop()
+ super.logger.Error("boot failed")
+ } else {
+ // Write controller URL to stdout. Nothing else goes
+ // to stdout, so this provides an easy way for a
+ // calling script to discover the controller URL when
+ // everything is ready.
+ fmt.Fprintln(stdout, url)
+ if *shutdown {
+ super.Stop()
+ }
}
// Wait for signal/crash + orderly shutdown
- <-super.done
- return 0
+ return super.Wait()
}
{"KEEPWEBDL", super.cluster.Services.WebDAVDownload},
{"KEEPPROXY", super.cluster.Services.Keepproxy},
{"GIT", super.cluster.Services.GitHTTP},
+ {"HEALTH", super.cluster.Services.Health},
{"WORKBENCH1", super.cluster.Services.Workbench1},
{"WS", super.cluster.Services.Websocket},
} {
"os/signal"
"os/user"
"path/filepath"
+ "reflect"
"strings"
"sync"
"syscall"
"time"
+ "git.arvados.org/arvados.git/lib/config"
"git.arvados.org/arvados.git/lib/service"
"git.arvados.org/arvados.git/sdk/go/arvados"
"git.arvados.org/arvados.git/sdk/go/ctxlog"
"git.arvados.org/arvados.git/sdk/go/health"
+ "github.com/fsnotify/fsnotify"
"github.com/sirupsen/logrus"
)
ctx context.Context
cancel context.CancelFunc
- done chan struct{}
+ done chan struct{} // closed when child procs/services have shut down
+ err error // error that caused shutdown (valid when done is closed)
healthChecker *health.Aggregator
tasksReady map[string]chan bool
waitShutdown sync.WaitGroup
environ []string // for child processes
}
-func (super *Supervisor) Start(ctx context.Context, cfg *arvados.Config) {
+func (super *Supervisor) Start(ctx context.Context, cfg *arvados.Config, cfgPath string) {
super.ctx, super.cancel = context.WithCancel(ctx)
super.done = make(chan struct{})
go func() {
+ defer close(super.done)
+
sigch := make(chan os.Signal)
signal.Notify(sigch, syscall.SIGINT, syscall.SIGTERM)
defer signal.Stop(sigch)
go func() {
for sig := range sigch {
super.logger.WithField("signal", sig).Info("caught signal")
+ if super.err == nil {
+ super.err = fmt.Errorf("caught signal %s", sig)
+ }
+ super.cancel()
+ }
+ }()
+
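+	// A SIGHUP requests an orderly restart with a reloaded config.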
+ hupch := make(chan os.Signal)
+ signal.Notify(hupch, syscall.SIGHUP)
+ defer signal.Stop(hupch)
+ go func() {
+ for sig := range hupch {
+ super.logger.WithField("signal", sig).Info("caught signal")
+ if super.err == nil {
+ super.err = errNeedConfigReload
+ }
super.cancel()
}
}()
+ if cfgPath != "" && cfgPath != "-" && cfg.AutoReloadConfig {
+ go watchConfig(super.ctx, super.logger, cfgPath, copyConfig(cfg), func() {
+ if super.err == nil {
+ super.err = errNeedConfigReload
+ }
+ super.cancel()
+ })
+ }
+
err := super.run(cfg)
if err != nil {
super.logger.WithError(err).Warn("supervisor shut down")
+ if super.err == nil {
+ super.err = err
+ }
}
- close(super.done)
}()
}
+func (super *Supervisor) Wait() error {
+ <-super.done
+ return super.err
+}
+
func (super *Supervisor) run(cfg *arvados.Config) error {
+ defer super.cancel()
+
cwd, err := os.Getwd()
if err != nil {
return err
if svc.ExternalURL.Host == "" {
if svc == &cluster.Services.Controller ||
svc == &cluster.Services.GitHTTP ||
+ svc == &cluster.Services.Health ||
svc == &cluster.Services.Keepproxy ||
svc == &cluster.Services.WebDAV ||
svc == &cluster.Services.WebDAVDownload ||
}
return ctx.Err()
}
+
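+// copyConfig returns a deep copy of cfg, made by encoding it to JSON
+// and decoding the result into a fresh arvados.Config.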
+func copyConfig(cfg *arvados.Config) *arvados.Config {
+ pr, pw := io.Pipe()
+ go func() {
+ err := json.NewEncoder(pw).Encode(cfg)
+ if err != nil {
+ panic(err)
+ }
+ pw.Close()
+ }()
+ cfg2 := new(arvados.Config)
+ err := json.NewDecoder(pr).Decode(cfg2)
+ if err != nil {
+ panic(err)
+ }
+ return cfg2
+}
+
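+// watchConfig watches cfgPath with fsnotify, reloads the config when
+// the file changes, and calls fn() if the newly loaded config differs
+// from prevcfg.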
+func watchConfig(ctx context.Context, logger logrus.FieldLogger, cfgPath string, prevcfg *arvados.Config, fn func()) {
+ watcher, err := fsnotify.NewWatcher()
+ if err != nil {
+ logger.WithError(err).Error("fsnotify setup failed")
+ return
+ }
+ defer watcher.Close()
+
+ err = watcher.Add(cfgPath)
+ if err != nil {
+ logger.WithError(err).Error("fsnotify watcher failed")
+ return
+ }
+
+ for {
+ select {
+ case <-ctx.Done():
+ return
+ case err, ok := <-watcher.Errors:
+ if !ok {
+ return
+ }
+ logger.WithError(err).Warn("fsnotify watcher reported error")
+ case _, ok := <-watcher.Events:
+ if !ok {
+ return
+ }
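+			// Coalesce a burst of fsnotify events into a single reload.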
+ for len(watcher.Events) > 0 {
+ <-watcher.Events
+ }
+ loader := config.NewLoader(&bytes.Buffer{}, &logrus.Logger{Out: ioutil.Discard})
+ loader.Path = cfgPath
+ loader.SkipAPICalls = true
+ cfg, err := loader.Load()
+ if err != nil {
+ logger.WithError(err).Warn("error reloading config file after change detected; ignoring new config for now")
+ } else if reflect.DeepEqual(cfg, prevcfg) {
+ logger.Debug("config file changed but is still DeepEqual to the existing config")
+ } else {
+ logger.Debug("config changed, notifying supervisor")
+ fn()
+ prevcfg = cfg
+ }
+ }
+ }
+}
code := DumpCommand.RunCommand("arvados config-dump", []string{"-config", "-"}, bytes.NewBufferString(in), &stdout, &stderr)
c.Check(code, check.Equals, 0)
c.Check(stderr.String(), check.Matches, `(?ms).*deprecated or unknown config entry: Clusters.z1234.UnknownKey.*`)
- c.Check(stdout.String(), check.Matches, `(?ms)Clusters:\n z1234:\n.*`)
+ c.Check(stdout.String(), check.Matches, `(?ms)(.*\n)?Clusters:\n z1234:\n.*`)
c.Check(stdout.String(), check.Matches, `(?ms).*\n *ManagementToken: secret\n.*`)
c.Check(stdout.String(), check.Not(check.Matches), `(?ms).*UnknownKey.*`)
}
# implementation. Note that it also disables some new federation
# features and will be removed in a future release.
ForceLegacyAPI14: false
+
+# (Experimental) Restart services automatically when config file
+# changes are detected. Only supported by `arvados-server boot` in
+# dev/test mode.
+AutoReloadConfig: false
# implementation. Note that it also disables some new federation
# features and will be removed in a future release.
ForceLegacyAPI14: false
+
+# (Experimental) Restart services automatically when config file
+# changes are detected. Only supported by ` + "`" + `arvados-server boot` + "`" + ` in
+# dev/test mode.
+AutoReloadConfig: false
`)
},
config: *cfg,
}
- s.testClusters[id].super.Start(context.Background(), &s.testClusters[id].config)
+ s.testClusters[id].super.Start(context.Background(), &s.testClusters[id].config, "-")
}
for _, tc := range s.testClusters {
au, ok := tc.super.WaitReady()
*next[upd.UUID] = upd
}
}
- selectParam := []string{"uuid", "state", "priority", "runtime_constraints", "container_image", "mounts"}
+ selectParam := []string{"uuid", "state", "priority", "runtime_constraints", "container_image", "mounts", "scheduling_parameters"}
limitParam := 1000
mine, err := cq.fetchAll(arvados.ResourceListParams{
logincluster:
type: boolean
default: false
+ arvbox_mode:
+ type: string?
+ default: "dev"
outputs:
arvados_api_token:
type: string
arvbox_data: mkdir/arvbox_data
arvbox_bin: arvbox
branch: branch
+ arvbox_mode: arvbox_mode
out: [cluster_id, container_host, arvbox_data_out, superuser_token]
scatter: [container_name, arvbox_data]
scatterMethod: dotproduct
branch:
type: string
default: master
+ arvbox_mode:
+ type: string?
+ default: "dev"
outputs:
cluster_id:
type: string
- shellQuote: false
valueFrom: |
set -ex
- mkdir -p $ARVBOX_DATA
- if ! test -d $ARVBOX_DATA/arvados ; then
- cd $ARVBOX_DATA
- git clone https://git.arvados.org/arvados.git
+ if test $(inputs.arvbox_mode) = dev ; then
+ mkdir -p $ARVBOX_DATA
+ if ! test -d $ARVBOX_DATA/arvados ; then
+ cd $ARVBOX_DATA
+ git clone https://git.arvados.org/arvados.git
+ fi
+ cd $ARVBOX_DATA/arvados
+ gitver=`git rev-parse HEAD`
+ git fetch
+ git checkout -f $(inputs.branch)
+ git pull
+ pulled=`git rev-parse HEAD`
+ git --no-pager log -n1 $pulled
+ else
+ export ARVBOX_BASE=$(runtime.tmpdir)
+ unset ARVBOX_DATA
fi
- cd $ARVBOX_DATA/arvados
- gitver=`git rev-parse HEAD`
- git fetch
- git checkout -f $(inputs.branch)
- git pull
- pulled=`git rev-parse HEAD`
- git --no-pager log -n1 $pulled
-
cd $(runtime.outdir)
if test "$gitver" = "$pulled" ; then
- $(inputs.arvbox_bin.path) start dev
+ $(inputs.arvbox_bin.path) start $(inputs.arvbox_mode)
else
- $(inputs.arvbox_bin.path) restart dev
+ $(inputs.arvbox_bin.path) restart $(inputs.arvbox_mode)
fi
$(inputs.arvbox_bin.path) status > status.txt
$(inputs.arvbox_bin.path) cat /var/lib/arvados/superuser_token > superuser_token.txt
}()
type Config struct {
- Clusters map[string]Cluster
+ Clusters map[string]Cluster
+ AutoReloadConfig bool
}
// GetConfig returns the current system config, loading it from
MaxPermissionEntries int
MaxUUIDEntries int
}
+
type Cluster struct {
ClusterID string `json:"-"`
ManagementToken string
branch:
type: string
default: master
+ arvbox_mode:
+ type: string?
outputs:
arvados_api_hosts:
type: string[]
in:
arvbox_base: arvbox_base
branch: branch
+ arvbox_mode: arvbox_mode
logincluster:
default: true
out: [arvados_api_hosts, arvados_cluster_ids, arvado_api_host_insecure, superuser_tokens, arvbox_containers, arvbox_bin]
--- /dev/null
+#!/bin/bash
+
+if test -z "$WORKSPACE" ; then
+ echo "WORKSPACE unset"
+ exit 1
+fi
+
+docker stop fedbox1 fedbox2 fedbox3
+docker rm fedbox1 fedbox2 fedbox3
+docker rm fedbox1-data fedbox2-data fedbox3-data
+
+set -ex
+
+mkdir -p $WORKSPACE/tmp
+cd $WORKSPACE/tmp
+virtualenv --python python3 venv3
+. venv3/bin/activate
+
+cd $WORKSPACE/sdk/python
+pip install -e .
+
+cd $WORKSPACE/sdk/cwl
+pip install -e .
+
+export PATH=$PATH:$WORKSPACE/tools/arvbox/bin
+
+mkdir -p $WORKSPACE/tmp/arvbox
+cd $WORKSPACE/sdk/python/tests/fed-migrate
+cwltool arvbox-make-federation.cwl \
+ --arvbox_base $WORKSPACE/tmp/arvbox \
+ --branch $(git rev-parse HEAD) \
+ --arvbox_mode localdemo > fed.json
+
+cwltool fed-migrate.cwl fed.json
proxy_request_buffering off;
}
}
+ upstream health {
+ server {{LISTENHOST}}:{{HEALTHPORT}};
+ }
+ server {
+ listen {{LISTENHOST}}:{{HEALTHSSLPORT}} ssl default_server;
+ server_name health;
+ ssl_certificate "{{SSLCERT}}";
+ ssl_certificate_key "{{SSLKEY}}";
+ location / {
+ proxy_pass http://health;
+ proxy_set_header Host $http_host;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto https;
+ proxy_redirect off;
+
+ proxy_http_version 1.1;
+ proxy_request_buffering off;
+ }
+ }
server {
listen {{LISTENHOST}}:{{KEEPWEBDLSSLPORT}} ssl default_server;
server_name keep-web-dl ~.*;
nginxconf['KEEPPROXYSSLPORT'] = external_port_from_config("Keepproxy")
nginxconf['GITPORT'] = internal_port_from_config("GitHTTP")
nginxconf['GITSSLPORT'] = external_port_from_config("GitHTTP")
+ nginxconf['HEALTHPORT'] = internal_port_from_config("Health")
+ nginxconf['HEALTHSSLPORT'] = external_port_from_config("Health")
nginxconf['WSPORT'] = internal_port_from_config("Websocket")
nginxconf['WSSSLPORT'] = external_port_from_config("Websocket")
nginxconf['WORKBENCH1PORT'] = internal_port_from_config("Workbench1")
workbench1_external_port = find_available_port()
git_httpd_port = find_available_port()
git_httpd_external_port = find_available_port()
+ health_httpd_port = find_available_port()
+ health_httpd_external_port = find_available_port()
keepproxy_port = find_available_port()
keepproxy_external_port = find_available_port()
keepstore_ports = sorted([str(find_available_port()) for _ in xrange(0,4)])
"http://%s:%s"%(localhost, git_httpd_port): {}
},
},
+ "Health": {
+ "ExternalURL": "https://%s:%s" % (localhost, health_httpd_external_port),
+ "InternalURLs": {
+ "http://%s:%s"%(localhost, health_httpd_port): {}
+ },
+ },
"Keepstore": {
"InternalURLs": {
"http://%s:%s"%(localhost, port): {} for port in keepstore_ports
pg (1.1.4)
power_assert (1.1.4)
public_suffix (4.0.3)
- rack (2.0.7)
+ rack (2.2.2)
rack-test (0.6.3)
rack (>= 1.0)
rails (5.0.7.2)
uglifier (~> 2.0)
BUNDLED WITH
- 1.11
+ 1.16.6
return fmt.Errorf("Error setting up arvados client %v", err)
}
+ // If a config file is available, use the keepstores defined there
+ // instead of the legacy autodiscover mechanism via the API server
+ for k := range cluster.Services.Keepstore.InternalURLs {
+ arv.KeepServiceURIs = append(arv.KeepServiceURIs, k.String())
+ }
+
if cluster.SystemLogs.LogLevel == "debug" {
keepclient.DebugPrintf = log.Printf
}
// Tests that require the Keep server running
type ServerRequiredSuite struct{}
+// Gocheck boilerplate
+var _ = Suite(&ServerRequiredConfigYmlSuite{})
+
+// Tests that require the Keep servers running as defined in config.yml
+type ServerRequiredConfigYmlSuite struct{}
+
// Gocheck boilerplate
var _ = Suite(&NoKeepServerSuite{})
arvadostest.StopAPI()
}
+func (s *ServerRequiredConfigYmlSuite) SetUpSuite(c *C) {
+ arvadostest.StartAPI()
+ // config.yml defines 4 keepstores
+ arvadostest.StartKeep(4, false)
+}
+
+func (s *ServerRequiredConfigYmlSuite) SetUpTest(c *C) {
+ arvadostest.ResetEnv()
+}
+
+func (s *ServerRequiredConfigYmlSuite) TearDownSuite(c *C) {
+ arvadostest.StopKeep(4)
+ arvadostest.StopAPI()
+}
+
func (s *NoKeepServerSuite) SetUpSuite(c *C) {
arvadostest.StartAPI()
// We need API to have some keep services listed, but the
arvadostest.StopAPI()
}
-func runProxy(c *C, bogusClientToken bool) *keepclient.KeepClient {
+func runProxy(c *C, bogusClientToken bool, loadKeepstoresFromConfig bool) *keepclient.KeepClient {
cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
c.Assert(err, Equals, nil)
cluster, err := cfg.GetCluster("")
c.Assert(err, Equals, nil)
+ if !loadKeepstoresFromConfig {
+ // Do not load Keepstore InternalURLs from the config file
+ cluster.Services.Keepstore.InternalURLs = make(map[arvados.URL]arvados.ServiceInstance)
+ }
+
cluster.Services.Keepproxy.InternalURLs = map[arvados.URL]arvados.ServiceInstance{arvados.URL{Host: ":0"}: arvados.ServiceInstance{}}
listener = nil
}
func (s *ServerRequiredSuite) TestResponseViaHeader(c *C) {
- runProxy(c, false)
+ runProxy(c, false, false)
defer closeListener()
req, err := http.NewRequest("POST",
}
func (s *ServerRequiredSuite) TestLoopDetection(c *C) {
- kc := runProxy(c, false)
+ kc := runProxy(c, false, false)
defer closeListener()
sr := map[string]string{
}
func (s *ServerRequiredSuite) TestStorageClassesHeader(c *C) {
- kc := runProxy(c, false)
+ kc := runProxy(c, false, false)
defer closeListener()
// Set up fake keepstore to record request headers
}
func (s *ServerRequiredSuite) TestDesiredReplicas(c *C) {
- kc := runProxy(c, false)
+ kc := runProxy(c, false, false)
defer closeListener()
content := []byte("TestDesiredReplicas")
}
func (s *ServerRequiredSuite) TestPutWrongContentLength(c *C) {
- kc := runProxy(c, false)
+ kc := runProxy(c, false, false)
defer closeListener()
content := []byte("TestPutWrongContentLength")
}
func (s *ServerRequiredSuite) TestManyFailedPuts(c *C) {
- kc := runProxy(c, false)
+ kc := runProxy(c, false, false)
defer closeListener()
router.(*proxyHandler).timeout = time.Nanosecond
}
func (s *ServerRequiredSuite) TestPutAskGet(c *C) {
- kc := runProxy(c, false)
+ kc := runProxy(c, false, false)
defer closeListener()
hash := fmt.Sprintf("%x", md5.Sum([]byte("foo")))
}
func (s *ServerRequiredSuite) TestPutAskGetForbidden(c *C) {
- kc := runProxy(c, true)
+ kc := runProxy(c, true, false)
defer closeListener()
hash := fmt.Sprintf("%x+3", md5.Sum([]byte("bar")))
}
func (s *ServerRequiredSuite) TestCorsHeaders(c *C) {
- runProxy(c, false)
+ runProxy(c, false, false)
defer closeListener()
{
}
func (s *ServerRequiredSuite) TestPostWithoutHash(c *C) {
- runProxy(c, false)
+ runProxy(c, false, false)
defer closeListener()
{
// With a valid but non-existing prefix (expect "\n")
// With an invalid prefix (expect error)
func (s *ServerRequiredSuite) TestGetIndex(c *C) {
- kc := runProxy(c, false)
+ getIndexWorker(c, false)
+}
+
+// Test GetIndex
+// Uses config.yml
+// Put one block, with 2 replicas
+// With no prefix (expect the block locator, twice)
+// With an existing prefix (expect the block locator, twice)
+// With a valid but non-existing prefix (expect "\n")
+// With an invalid prefix (expect error)
+func (s *ServerRequiredConfigYmlSuite) TestGetIndex(c *C) {
+ getIndexWorker(c, true)
+}
+
+func getIndexWorker(c *C, useConfig bool) {
+ kc := runProxy(c, false, useConfig)
defer closeListener()
// Put "index-data" blocks
}
func (s *ServerRequiredSuite) TestCollectionSharingToken(c *C) {
- kc := runProxy(c, false)
+ kc := runProxy(c, false, false)
defer closeListener()
hash, _, err := kc.PutB([]byte("shareddata"))
c.Check(err, IsNil)
}
func (s *ServerRequiredSuite) TestPutAskGetInvalidToken(c *C) {
- kc := runProxy(c, false)
+ kc := runProxy(c, false, false)
defer closeListener()
// Put a test block
}
func (s *ServerRequiredSuite) TestAskGetKeepProxyConnectionError(c *C) {
- kc := runProxy(c, false)
+ kc := runProxy(c, false, false)
defer closeListener()
// Point keepproxy at a non-existent keepstore
}
func (s *NoKeepServerSuite) TestAskGetNoKeepServerError(c *C) {
- kc := runProxy(c, false)
+ kc := runProxy(c, false, false)
defer closeListener()
hash := fmt.Sprintf("%x", md5.Sum([]byte("foo")))
}
func (s *ServerRequiredSuite) TestPing(c *C) {
- kc := runProxy(c, false)
+ kc := runProxy(c, false, false)
defer closeListener()
rtr := MakeRESTRouter(kc, 10*time.Second, arvadostest.ManagementToken)
mkdir -p "$PG_DATA" "$VAR_DATA" "$PASSENGER" "$GEMS" "$PIPCACHE" "$NPMCACHE" "$GOSTUFF" "$RLIBS"
if ! test -d "$ARVADOS_ROOT" ; then
- git clone https://github.com/arvados/arvados.git "$ARVADOS_ROOT"
+ git clone https://git.arvados.org/arvados.git "$ARVADOS_ROOT"
fi
if ! test -d "$SSO_ROOT" ; then
git clone https://github.com/arvados/sso-devise-omniauth-provider.git "$SSO_ROOT"
sv stop keepstore1
sv stop keepproxy
cd /usr/src/arvados/services/api
+export DISABLE_DATABASE_ENVIRONMENT_CHECK=1
export RAILS_ENV=development
bundle exec rake db:drop
rm /var/lib/arvados/api_database_setup
RUN sudo -u arvbox /var/lib/arvbox/service/keepproxy/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/arv-git-httpd/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/crunch-dispatch-local/run-service --only-deps
-RUN sudo -u arvbox /var/lib/arvbox/service/websockets/run-service --only-deps
+RUN sudo -u arvbox /var/lib/arvbox/service/websockets/run --only-deps
RUN sudo -u arvbox /usr/local/lib/arvbox/keep-setup.sh --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/sdk/run-service
/usr/local/lib/arvbox/runsu.sh flock /var/lib/arvados/cluster_config.yml.lock /usr/local/lib/arvbox/cluster-config.sh
-exec /usr/local/lib/arvbox/runsu.sh /usr/local/bin/arvados-controller
+exec /usr/local/bin/arvados-controller
+++ /dev/null
-/usr/local/lib/arvbox/runsu.sh
\ No newline at end of file
--- /dev/null
+#!/bin/bash
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+exec 2>&1
+set -ex -o pipefail
+
+. /usr/local/lib/arvbox/common.sh
+. /usr/local/lib/arvbox/go-setup.sh
+
+(cd /usr/local/bin && ln -sf arvados-server arvados-ws)
+
+if test "$1" = "--only-deps" ; then
+ exit
+fi
+
+/usr/local/lib/arvbox/runsu.sh flock /var/lib/arvados/cluster_config.yml.lock /usr/local/lib/arvbox/cluster-config.sh
+
+exec /usr/local/lib/arvbox/runsu.sh /usr/local/bin/arvados-ws
+++ /dev/null
-#!/bin/bash
-# Copyright (C) The Arvados Authors. All rights reserved.
-#
-# SPDX-License-Identifier: AGPL-3.0
-
-exec 2>&1
-set -ex -o pipefail
-
-. /usr/local/lib/arvbox/common.sh
-. /usr/local/lib/arvbox/go-setup.sh
-
-(cd /usr/local/bin && ln -sf arvados-server arvados-ws)
-
-if test "$1" = "--only-deps" ; then
- exit
-fi
-
-/usr/local/lib/arvbox/runsu.sh flock /var/lib/arvados/cluster_config.yml.lock /usr/local/lib/arvbox/cluster-config.sh
-
-exec /usr/local/lib/arvbox/runsu.sh /usr/local/bin/arvados-ws