Merge branch '17152-collection-versions-fixes'
author    Lucas Di Pentima <lucas@di-pentima.com.ar>
          Wed, 9 Dec 2020 21:31:33 +0000 (18:31 -0300)
committer Lucas Di Pentima <lucas@di-pentima.com.ar>
          Wed, 9 Dec 2020 21:31:33 +0000 (18:31 -0300)
Refs #17152

Arvados-DCO-1.1-Signed-off-by: Lucas Di Pentima <lucas@di-pentima.com.ar>

38 files changed:
.gitignore
doc/admin/user-activity.html.textile.liquid
doc/install/arvbox.html.textile.liquid
doc/install/install-api-server.html.textile.liquid
doc/install/salt-single-host.html.textile.liquid
doc/install/salt-vagrant.html.textile.liquid
lib/config/config.default.yml
lib/config/generated_config.go
lib/costanalyzer/costanalyzer.go
lib/costanalyzer/costanalyzer_test.go
sdk/cwl/arvados_cwl/__init__.py
sdk/cwl/arvados_cwl/arvcontainer.py
sdk/cwl/arvados_cwl/arvdocker.py
sdk/cwl/arvados_cwl/arvworkflow.py
sdk/cwl/arvados_cwl/executor.py
sdk/cwl/arvados_cwl/pathmapper.py
sdk/cwl/arvados_cwl/runner.py
sdk/cwl/arvados_cwl/task_queue.py [deleted file]
sdk/cwl/setup.py
sdk/cwl/tests/test_submit.py
sdk/cwl/tests/test_tq.py
sdk/python/tests/run_test_server.py
services/api/lib/create_superuser_token.rb
services/api/test/unit/create_superuser_token_test.rb
services/keep-web/s3.go
services/keep-web/s3_test.go
tools/salt-install/README.md
tools/salt-install/Vagrantfile
tools/salt-install/provision.sh
tools/salt-install/single_host/arvados.sls
tools/salt-install/single_host/nginx_controller_configuration.sls
tools/salt-install/single_host/nginx_keepproxy_configuration.sls
tools/salt-install/single_host/nginx_keepweb_configuration.sls
tools/salt-install/single_host/nginx_webshell_configuration.sls
tools/salt-install/single_host/nginx_websocket_configuration.sls
tools/salt-install/single_host/nginx_workbench2_configuration.sls
tools/salt-install/single_host/nginx_workbench_configuration.sls
tools/salt-install/tests/run-test.sh

index 877ccdf4dfd1da971a3f736d18af06e381869e5d..beb84b3c2034f23e7c3072ac510f4a43722a0c75 100644 (file)
@@ -32,3 +32,5 @@ services/api/config/arvados-clients.yml
 .Rproj.user
 _version.py
 *.bak
+arvados-snakeoil-ca.pem
+.vagrant
index bd8d58833f4d8cd3697c70d7d71b8fa9a74cea0c..21bfb7655ca997f51c5f268d94ac7979b9786c3b 100644 (file)
@@ -35,7 +35,7 @@ Note: depends on the "Arvados Python SDK":../sdk/python/sdk-python.html and its
 
 h2. Usage
 
-Set ARVADOS_API_HOST and ARVADOS_API_TOKEN for an admin user or system root token.
+Set ARVADOS_API_HOST to the API server of the cluster for which the report should be generated. ARVADOS_API_TOKEN needs to be a "v2 token":../admin/scoped-tokens.html for an admin user, or a superuser token (e.g. generated with @script/create_superuser_token.rb@). Please note that in a login cluster federation, the token needs to be issued by the login cluster, but the report should be generated against the API server of the cluster for which it is desired. In other words, ARVADOS_API_HOST would point at the satellite cluster for which the report is desired, but ARVADOS_API_TOKEN would be a token that belongs to a login cluster user.
 
 Run the tool with the option @--days@ giving the number of days to report on.  It will request activity logs from the API and generate a summary report on standard output.
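For illustration, a minimal Python sketch of driving the report in a login cluster federation; the host, token, and the @arv-user-activity@ entry point are placeholder assumptions, not taken from this change:

```python
import os
import subprocess

# Hypothetical values: the satellite cluster's API host, paired with a
# v2-format token issued by the login cluster (not by the satellite).
os.environ["ARVADOS_API_HOST"] = "zzzzz.example.com"
os.environ["ARVADOS_API_TOKEN"] = "v2/aaaaa-gj3su-xxxxxxxxxxxxxxx/itsasecret"

# Generate a 30-day activity report on standard output.
subprocess.run(["arv-user-activity", "--days", "30"], check=True)
```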
 
index c01ec61fa02248e59baaf1fa25ce30bde977b815..3c77ade8da5595fd4aff5886fafe0db046a9b97a 100644 (file)
@@ -80,10 +80,23 @@ Arvbox creates root certificate to authorize Arvbox services.  Installing the ro
 
 The certificate will be added under the "Arvados testing" organization as "arvbox testing root CA".
 
-To access your Arvbox instance using command line clients (such as arv-get and arv-put) without security errors, install the certificate into the OS certificate storage (instructions for Debian/Ubuntu):
+To access your Arvbox instance using command line clients (such as arv-get and arv-put) without security errors, install the certificate into the OS certificate storage.
 
-# copy @arvbox-root-cert.pem@ to @/usr/local/share/ca-certificates/@
-# run @/usr/sbin/update-ca-certificates@
+h3. On Debian/Ubuntu:
+
+<notextile>
+<pre><code>cp arvbox-root-cert.pem /usr/local/share/ca-certificates/
+/usr/sbin/update-ca-certificates
+</code></pre>
+</notextile>
+
+h3. On CentOS:
+
+<notextile>
+<pre><code>cp arvbox-root-cert.pem /etc/pki/ca-trust/source/anchors/
+/usr/bin/update-ca-trust
+</code></pre>
+</notextile>
 
 h2. Configs
 
index b8442eb0603dfd5279572c3ec1bc28e7b5bc4e47..c7303bbba28914ca3ab8be9d5a9e151afa8f83a3 100644 (file)
@@ -51,22 +51,20 @@ h3. Tokens
     API:
       RailsSessionSecretToken: <span class="userinput">"$rails_secret_token"</span>
     Collections:
-      BlobSigningKey: <span class="userinput">"blob_signing_key"</span>
+      BlobSigningKey: <span class="userinput">"$blob_signing_key"</span>
 </code></pre>
 </notextile>
 
-@SystemRootToken@ is used by Arvados system services to authenticate as the system (root) user when communicating with the API server.
+These secret tokens are used to authenticate messages between Arvados components.
+* @SystemRootToken@ is used by Arvados system services to authenticate as the system (root) user when communicating with the API server.
+* @ManagementToken@ is used to authenticate access to system metrics.
+* @API.RailsSessionSecretToken@ is used to sign session cookies.
+* @Collections.BlobSigningKey@ is used to control access to Keep blocks.
 
-@ManagementToken@ is used to authenticate access to system metrics.
-
-@API.RailsSessionSecretToken@ is required by the API server.
-
-@Collections.BlobSigningKey@ is used to control access to Keep blocks.
-
-You can generate a random token for each of these items at the command line like this:
+Each token should be a string of at least 50 alphanumeric characters. You can generate a suitable token with the following command:
 
 <notextile>
-<pre><code>~$ <span class="userinput">tr -dc 0-9a-zA-Z &lt;/dev/urandom | head -c50; echo</span>
+<pre><code>~$ <span class="userinput">tr -dc 0-9a-zA-Z &lt;/dev/urandom | head -c50 ; echo</span>
 </code></pre>
 </notextile>
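Equivalently, a minimal Python sketch that generates one such 50-character alphanumeric secret (same output shape as the @tr@ pipeline above):

```python
import secrets
import string

# Draw 50 random characters from [0-9a-zA-Z]; suitable for any of the
# secret tokens listed above.
alphabet = string.ascii_letters + string.digits
print("".join(secrets.choice(alphabet) for _ in range(50)))
```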
 
index fb41d59ee2a67745ecb2b1ef163f21a2b79cd44b..5bed6d05e77e8602319ef0f8dd1f8c12c3617dbc 100644 (file)
@@ -11,7 +11,9 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 
 # "Install Saltstack":#saltstack
 # "Single host install using the provision.sh script":#single_host
-# "DNS configuration":#final_steps
+# "Final steps":#final_steps
+## "DNS configuration":#dns_configuration
+## "Install root certificate":#ca_root_certificate
 # "Initial user and login":#initial_user
 # "Test the installed cluster running a simple workflow":#test_install
 
@@ -49,7 +51,9 @@ arvados: Failed:      0
 </code></pre>
 </notextile>
 
-h2(#final_steps). DNS configuration
+h2(#final_steps). Final configuration steps
+
+h3(#dns_configuration). DNS configuration
 
 After the setup is done, you need to set up your DNS to be able to access the cluster.
 
@@ -65,6 +69,39 @@ echo "${HOST_IP} api keep keep0 collections download ws workbench workbench2 ${C
 </code></pre>
 </notextile>
 
+h3(#ca_root_certificate). Install root certificate
+
+Arvados uses SSL to encrypt communications. Its UI uses AJAX, which will silently fail if the certificate is not valid or is signed by an unknown Certificate Authority.
+
+For this reason, the @arvados-formula@ has a helper state to create a root certificate to authorize Arvados services. The @provision.sh@ script will leave a copy of the generated CA's certificate (@arvados-snakeoil-ca.pem@) in the script's directory so you can add it to your workstation.
+
+Installing the root certificate into your web browser will prevent security errors when accessing Arvados services.
+
+# Go to the certificate manager in your browser.
+#* In Chrome, this can be found under "Settings &rarr; Advanced &rarr; Manage Certificates" or by entering @chrome://settings/certificates@ in the URL bar.
+#* In Firefox, this can be found under "Preferences &rarr; Privacy & Security" or by entering @about:preferences#privacy@ in the URL bar and then choosing "View Certificates...".
+# Select the "Authorities" tab, then press the "Import" button.  Choose @arvados-snakeoil-ca.pem@.
+
+The certificate will be added under the "Arvados Formula" organization.
+
+To access your Arvados instance using command line clients (such as arv-get and arv-put) without security errors, install the certificate into the OS certificate storage.
+
+* On Debian/Ubuntu:
+
+<notextile>
+<pre><code>cp arvados-snakeoil-ca.pem /usr/local/share/ca-certificates/
+/usr/sbin/update-ca-certificates
+</code></pre>
+</notextile>
+
+* On CentOS:
+
+<notextile>
+<pre><code>cp arvados-snakeoil-ca.pem /etc/pki/ca-trust/source/anchors/
+/usr/bin/update-ca-trust
+</code></pre>
+</notextile>
+
 h2(#initial_user). Initial user and login
 
 At this point you should be able to log into the Arvados cluster.
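As a quick sanity check that the CA is trusted, a minimal Python sketch that opens a TLS connection validated only against the snakeoil CA; the workbench hostname is a placeholder for whatever name was configured in the DNS step above:

```python
import socket
import ssl

# Placeholder: use the workbench name you added to /etc/hosts above.
host = "workbench.example.com"

# Trust only the CA certificate left behind by provision.sh.
ctx = ssl.create_default_context(cafile="arvados-snakeoil-ca.pem")
with ctx.wrap_socket(socket.socket(), server_hostname=host) as sock:
    sock.connect((host, 443))
    print("certificate OK:", sock.getpeercert()["subject"])
```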
index d9aa791f0bad3a5173d6aa9762f2e56359534f5b..ed0d5bca62ed045d307caac880d644747a0141ae 100644 (file)
@@ -10,7 +10,9 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
 # "Vagrant":#vagrant
-# "DNS configuration":#final_steps
+# "Final steps":#final_steps
+## "DNS configuration":#dns_configuration
+## "Install root certificate":#ca_root_certificate
 # "Initial user and login":#initial_user
 # "Test the installed cluster running a simple workflow":#test_install
 
@@ -37,7 +39,9 @@ If you want to reconfigure the running box, you can just:
 </code></pre>
 </notextile>
 
-h2(#final_steps). DNS configuration
+h2(#final_steps). Final configuration steps
+
+h3(#dns_configuration). DNS configuration
 
 After the setup is done, you need to set up your DNS to be able to access the cluster.
 
@@ -53,6 +57,39 @@ echo "${HOST_IP} api keep keep0 collections download ws workbench workbench2 ${C
 </code></pre>
 </notextile>
 
+h3(#ca_root_certificate). Install root certificate
+
+Arvados uses SSL to encrypt communications. Its UI uses AJAX, which will silently fail if the certificate is not valid or is signed by an unknown Certificate Authority.
+
+For this reason, the @arvados-formula@ has a helper state to create a root certificate to authorize Arvados services. The @provision.sh@ script will leave a copy of the generated CA's certificate (@arvados-snakeoil-ca.pem@) in the script's directory so you can add it to your workstation.
+
+Installing the root certificate into your web browser will prevent security errors when accessing Arvados services.
+
+# Go to the certificate manager in your browser.
+#* In Chrome, this can be found under "Settings &rarr; Advanced &rarr; Manage Certificates" or by entering @chrome://settings/certificates@ in the URL bar.
+#* In Firefox, this can be found under "Preferences &rarr; Privacy & Security" or by entering @about:preferences#privacy@ in the URL bar and then choosing "View Certificates...".
+# Select the "Authorities" tab, then press the "Import" button.  Choose @arvados-snakeoil-ca.pem@.
+
+The certificate will be added under the "Arvados Formula" organization.
+
+To access your Arvados instance using command line clients (such as arv-get and arv-put) without security errors, install the certificate into the OS certificate storage.
+
+* On Debian/Ubuntu:
+
+<notextile>
+<pre><code>cp arvados-snakeoil-ca.pem /usr/local/share/ca-certificates/
+/usr/sbin/update-ca-certificates
+</code></pre>
+</notextile>
+
+* On CentOS:
+
+<notextile>
+<pre><code>cp arvados-snakeoil-ca.pem /etc/pki/ca-trust/source/anchors/
+/usr/bin/update-ca-trust
+</code></pre>
+</notextile>
+
 h2(#initial_user). Initial user and login
 
 At this point you should be able to log into the Arvados cluster.
index 005d2738da5641bcadaea77b02fa351ce99298f1..7e16688d9d0b3277dfb9d5f58cb6beab7b09946d 100644 (file)
@@ -12,6 +12,8 @@
 
 Clusters:
   xxxxx:
+    # Token used internally by Arvados components to authenticate to
+    # one another. Use a string of at least 50 random alphanumerics.
     SystemRootToken: ""
 
     # Token to be included in all healthcheck requests. Disabled by default.
index 885bb4b8c5cf87e9552814c1799f69b1aa898746..934131bd8fd90e0085de2532e7f86e43f0709563 100644 (file)
@@ -18,6 +18,8 @@ var DefaultYAML = []byte(`# Copyright (C) The Arvados Authors. All rights reserv
 
 Clusters:
   xxxxx:
+    # Token used internally by Arvados components to authenticate to
+    # one another. Use a string of at least 50 random alphanumerics.
     SystemRootToken: ""
 
     # Token to be included in all healthcheck requests. Disabled by default.
index 4284542b869a07d713d698fbb7eaa1a243e77ec5..d81ade607c6ebc1b1426220077b60c9fa3cad77d 100644 (file)
@@ -44,7 +44,9 @@ func (i *arrayFlags) String() string {
 }
 
 func (i *arrayFlags) Set(value string) error {
-       *i = append(*i, value)
+       for _, s := range strings.Split(value, ",") {
+               *i = append(*i, s)
+       }
        return nil
 }
 
@@ -54,20 +56,26 @@ func parseFlags(prog string, args []string, loader *config.Loader, logger *logru
        flags.Usage = func() {
                fmt.Fprintf(flags.Output(), `
 Usage:
-  %s [options ...]
+  %s [options ...] <uuid> ...
 
        This program analyzes the cost of Arvados container requests. For each uuid
        supplied, it creates a CSV report that lists all the containers used to
        fulfill the container request, together with the machine type and cost of
-       each container.
+       each container. At least one uuid must be specified.
 
        When supplied with the uuid of a container request, it will calculate the
-       cost of that container request and all its children. When suplied with a
-       project uuid or when supplied with multiple container request uuids, it will
-       create a CSV report for each supplied uuid, as well as a CSV file with
-       aggregate cost accounting for all supplied uuids. The aggregate cost report
-       takes container reuse into account: if a container was reused between several
-       container requests, its cost will only be counted once.
+       cost of that container request and all its children.
+
+       When supplied with the uuid of a collection, it will see if there is a
+       container_request uuid in the properties of the collection, and if so, it
+       will calculate the cost of that container request and all its children.
+
+       When supplied with a project uuid or when supplied with multiple container
+       request or collection uuids, it will create a CSV report for each supplied
+       uuid, as well as a CSV file with aggregate cost accounting for all supplied
+       uuids. The aggregate cost report takes container reuse into account: if a
+       container was reused between several container requests, its cost will only
+       be counted once.
 
        To get the node costs, the program queries the Arvados API for current cost
        data for each node type used. This means that the reported cost always
@@ -86,13 +94,18 @@ Usage:
        In order to get the data for the uuids supplied, the ARVADOS_API_HOST and
        ARVADOS_API_TOKEN environment variables must be set.
 
+       This program prints, on stdout, the total dollar amount from the aggregate
+       cost accounting across all provided uuids.
+
+       When the '-output' option is specified, a set of CSV files with cost details
+       will be written to the provided directory.
+
 Options:
 `, prog)
                flags.PrintDefaults()
        }
        loglevel := flags.String("log-level", "info", "logging `level` (debug, info, ...)")
-       flags.StringVar(&resultsDir, "output", "", "output `directory` for the CSV reports (required)")
-       flags.Var(&uuids, "uuid", "Toplevel `project or container request` uuid. May be specified more than once. (required)")
+       flags.StringVar(&resultsDir, "output", "", "output `directory` for the CSV reports")
        flags.BoolVar(&cache, "cache", true, "create and use a local disk cache of Arvados objects")
        err = flags.Parse(args)
        if err == flag.ErrHelp {
@@ -103,6 +116,7 @@ Options:
                exitCode = 2
                return
        }
+       uuids = flags.Args()
 
        if len(uuids) < 1 {
                flags.Usage()
@@ -111,13 +125,6 @@ Options:
                return
        }
 
-       if resultsDir == "" {
-               flags.Usage()
-               err = fmt.Errorf("Error: output directory must be specified")
-               exitCode = 2
-               return
-       }
-
        lvl, err := logrus.ParseLevel(*loglevel)
        if err != nil {
                exitCode = 2
@@ -180,8 +187,8 @@ func addContainerLine(logger *logrus.Logger, node nodeInfo, cr arvados.Container
 
 func loadCachedObject(logger *logrus.Logger, file string, uuid string, object interface{}) (reload bool) {
        reload = true
-       if strings.Contains(uuid, "-j7d0g-") {
-               // We do not cache projects, they have no final state
+       if strings.Contains(uuid, "-j7d0g-") || strings.Contains(uuid, "-4zz18-") {
+               // We do not cache projects or collections, they have no final state
                return
        }
        // See if we have a cached copy of this object
@@ -251,6 +258,8 @@ func loadObject(logger *logrus.Logger, ac *arvados.Client, path string, uuid str
                err = ac.RequestAndDecode(&object, "GET", "arvados/v1/container_requests/"+uuid, nil, nil)
        } else if strings.Contains(uuid, "-dz642-") {
                err = ac.RequestAndDecode(&object, "GET", "arvados/v1/containers/"+uuid, nil, nil)
+       } else if strings.Contains(uuid, "-4zz18-") {
+               err = ac.RequestAndDecode(&object, "GET", "arvados/v1/collections/"+uuid, nil, nil)
        } else {
                err = fmt.Errorf("unsupported object type with UUID %q:\n  %s", uuid, err)
                return
@@ -309,7 +318,6 @@ func getNode(arv *arvadosclient.ArvadosClient, ac *arvados.Client, kc *keepclien
 }
 
 func handleProject(logger *logrus.Logger, uuid string, arv *arvadosclient.ArvadosClient, ac *arvados.Client, kc *keepclient.KeepClient, resultsDir string, cache bool) (cost map[string]float64, err error) {
-
        cost = make(map[string]float64)
 
        var project arvados.Group
@@ -366,9 +374,27 @@ func generateCrCsv(logger *logrus.Logger, uuid string, arv *arvadosclient.Arvado
        var tmpTotalCost float64
        var totalCost float64
 
+       var crUUID = uuid
+       if strings.Contains(uuid, "-4zz18-") {
+               // This is a collection, find the associated container request (if any)
+               var c arvados.Collection
+               err = loadObject(logger, ac, uuid, uuid, cache, &c)
+               if err != nil {
+                       return nil, fmt.Errorf("error loading collection object %s: %s", uuid, err)
+               }
+               value, ok := c.Properties["container_request"]
+               if !ok {
+                       return nil, fmt.Errorf("error: collection %s does not have a 'container_request' property", uuid)
+               }
+               crUUID, ok = value.(string)
+               if !ok {
+                       return nil, fmt.Errorf("error: collection %s does not have a 'container_request' property of the string type", uuid)
+               }
+       }
+
        // This is a container request, find the container
        var cr arvados.ContainerRequest
-       err = loadObject(logger, ac, uuid, uuid, cache, &cr)
+       err = loadObject(logger, ac, crUUID, crUUID, cache, &cr)
        if err != nil {
                return nil, fmt.Errorf("error loading cr object %s: %s", uuid, err)
        }
@@ -424,13 +450,15 @@ func generateCrCsv(logger *logrus.Logger, uuid string, arv *arvadosclient.Arvado
 
        csv += "TOTAL,,,,,,,,," + strconv.FormatFloat(totalCost, 'f', 8, 64) + "\n"
 
-       // Write the resulting CSV file
-       fName := resultsDir + "/" + uuid + ".csv"
-       err = ioutil.WriteFile(fName, []byte(csv), 0644)
-       if err != nil {
-               return nil, fmt.Errorf("error writing file with path %s: %s", fName, err.Error())
+       if resultsDir != "" {
+               // Write the resulting CSV file
+               fName := resultsDir + "/" + uuid + ".csv"
+               err = ioutil.WriteFile(fName, []byte(csv), 0644)
+               if err != nil {
+                       return nil, fmt.Errorf("error writing file with path %s: %s", fName, err.Error())
+               }
+               logger.Infof("\nUUID report in %s\n\n", fName)
        }
-       logger.Infof("\nUUID report in %s\n\n", fName)
 
        return
 }
@@ -440,10 +468,12 @@ func costanalyzer(prog string, args []string, loader *config.Loader, logger *log
        if exitcode != 0 {
                return
        }
-       err = ensureDirectory(logger, resultsDir)
-       if err != nil {
-               exitcode = 3
-               return
+       if resultsDir != "" {
+               err = ensureDirectory(logger, resultsDir)
+               if err != nil {
+                       exitcode = 3
+                       return
+               }
        }
 
        // Arvados Client setup
@@ -474,12 +504,12 @@ func costanalyzer(prog string, args []string, loader *config.Loader, logger *log
                        for k, v := range cost {
                                cost[k] = v
                        }
-               } else if strings.Contains(uuid, "-xvhdp-") {
+               } else if strings.Contains(uuid, "-xvhdp-") || strings.Contains(uuid, "-4zz18-") {
                        // This is a container request
                        var crCsv map[string]float64
                        crCsv, err = generateCrCsv(logger, uuid, arv, ac, kc, resultsDir, cache)
                        if err != nil {
-                               err = fmt.Errorf("Error generating container_request CSV for uuid %s: %s", uuid, err.Error())
+                               err = fmt.Errorf("Error generating CSV for uuid %s: %s", uuid, err.Error())
                                exitcode = 2
                                return
                        }
@@ -492,6 +522,10 @@ func costanalyzer(prog string, args []string, loader *config.Loader, logger *log
                        // "Home" project is not supported by this program. Skip this uuid, but
                        // keep going.
                        logger.Errorf("Cost analysis is not supported for the 'Home' project: %s", uuid)
+               } else {
+                       logger.Errorf("This argument does not look like a uuid: %s\n", uuid)
+                       exitcode = 3
+                       return
                }
        }
 
@@ -515,14 +549,20 @@ func costanalyzer(prog string, args []string, loader *config.Loader, logger *log
 
        csv += "TOTAL," + strconv.FormatFloat(total, 'f', 8, 64) + "\n"
 
-       // Write the resulting CSV file
-       aFile := resultsDir + "/" + time.Now().Format("2006-01-02-15-04-05") + "-aggregate-costaccounting.csv"
-       err = ioutil.WriteFile(aFile, []byte(csv), 0644)
-       if err != nil {
-               err = fmt.Errorf("Error writing file with path %s: %s", aFile, err.Error())
-               exitcode = 1
-               return
+       if resultsDir != "" {
+               // Write the resulting CSV file
+               aFile := resultsDir + "/" + time.Now().Format("2006-01-02-15-04-05") + "-aggregate-costaccounting.csv"
+               err = ioutil.WriteFile(aFile, []byte(csv), 0644)
+               if err != nil {
+                       err = fmt.Errorf("Error writing file with path %s: %s", aFile, err.Error())
+                       exitcode = 1
+                       return
+               }
+               logger.Infof("Aggregate cost accounting for all supplied uuids in %s\n", aFile)
        }
-       logger.Infof("Aggregate cost accounting for all supplied uuids in %s\n", aFile)
+
+       // Output the total dollar amount on stdout
+       fmt.Fprintf(stdout, "%s\n", strconv.FormatFloat(total, 'f', 8, 64))
+
        return
 }
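The new collection handling above can be mirrored with the Arvados Python SDK; a minimal sketch (the collection UUID is a placeholder) of the same 'container_request' property lookup:

```python
import arvados

# Assumes ARVADOS_API_HOST and ARVADOS_API_TOKEN are set, just as
# costanalyzer itself requires.
api = arvados.api("v1")
coll = api.collections().get(uuid="zzzzz-4zz18-xxxxxxxxxxxxxxx").execute()

# Mirror of generateCrCsv: a collection must carry a string-valued
# 'container_request' property to be costed.
cr_uuid = coll["properties"].get("container_request")
if not isinstance(cr_uuid, str):
    raise SystemExit("collection has no usable 'container_request' property")
print("cost would be computed for container request", cr_uuid)
```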
index 4fab93bf4fadb13919b0346109bc175c96084c93..3488f0d8dfd62522a7bfd4774f0ac8d7b903a88b 100644 (file)
@@ -161,7 +161,49 @@ func (*Suite) TestUsage(c *check.C) {
 func (*Suite) TestContainerRequestUUID(c *check.C) {
        var stdout, stderr bytes.Buffer
        // Run costanalyzer with 1 container request uuid
-       exitcode := Command.RunCommand("costanalyzer.test", []string{"-uuid", arvadostest.CompletedContainerRequestUUID, "-output", "results"}, &bytes.Buffer{}, &stdout, &stderr)
+       exitcode := Command.RunCommand("costanalyzer.test", []string{"-output", "results", arvadostest.CompletedContainerRequestUUID}, &bytes.Buffer{}, &stdout, &stderr)
+       c.Check(exitcode, check.Equals, 0)
+       c.Assert(stderr.String(), check.Matches, "(?ms).*supplied uuids in .*")
+
+       uuidReport, err := ioutil.ReadFile("results/" + arvadostest.CompletedContainerRequestUUID + ".csv")
+       c.Assert(err, check.IsNil)
+       c.Check(string(uuidReport), check.Matches, "(?ms).*TOTAL,,,,,,,,,7.01302889")
+       re := regexp.MustCompile(`(?ms).*supplied uuids in (.*?)\n`)
+       matches := re.FindStringSubmatch(stderr.String()) // matches[1] contains a string like 'results/2020-11-02-18-57-45-aggregate-costaccounting.csv'
+
+       aggregateCostReport, err := ioutil.ReadFile(matches[1])
+       c.Assert(err, check.IsNil)
+
+       c.Check(string(aggregateCostReport), check.Matches, "(?ms).*TOTAL,7.01302889")
+}
+
+func (*Suite) TestCollectionUUID(c *check.C) {
+       var stdout, stderr bytes.Buffer
+
+       // Run costanalyzer with 1 collection uuid, without 'container_request' property
+       exitcode := Command.RunCommand("costanalyzer.test", []string{"-output", "results", arvadostest.FooCollection}, &bytes.Buffer{}, &stdout, &stderr)
+       c.Check(exitcode, check.Equals, 2)
+       c.Assert(stderr.String(), check.Matches, "(?ms).*does not have a 'container_request' property.*")
+
+       // Update the collection, attach a 'container_request' property
+       ac := arvados.NewClientFromEnv()
+       var coll arvados.Collection
+
+       // Update collection record
+       err := ac.RequestAndDecode(&coll, "PUT", "arvados/v1/collections/"+arvadostest.FooCollection, nil, map[string]interface{}{
+               "collection": map[string]interface{}{
+                       "properties": map[string]interface{}{
+                               "container_request": arvadostest.CompletedContainerRequestUUID,
+                       },
+               },
+       })
+       c.Assert(err, check.IsNil)
+
+       stdout.Truncate(0)
+       stderr.Truncate(0)
+
+       // Run costanalyzer with 1 collection uuid
+       exitcode = Command.RunCommand("costanalyzer.test", []string{"-output", "results", arvadostest.FooCollection}, &bytes.Buffer{}, &stdout, &stderr)
        c.Check(exitcode, check.Equals, 0)
        c.Assert(stderr.String(), check.Matches, "(?ms).*supplied uuids in .*")
 
@@ -180,7 +222,7 @@ func (*Suite) TestContainerRequestUUID(c *check.C) {
 func (*Suite) TestDoubleContainerRequestUUID(c *check.C) {
        var stdout, stderr bytes.Buffer
        // Run costanalyzer with 2 container request uuids
-       exitcode := Command.RunCommand("costanalyzer.test", []string{"-uuid", arvadostest.CompletedContainerRequestUUID, "-uuid", arvadostest.CompletedContainerRequestUUID2, "-output", "results"}, &bytes.Buffer{}, &stdout, &stderr)
+       exitcode := Command.RunCommand("costanalyzer.test", []string{"-output", "results", arvadostest.CompletedContainerRequestUUID, arvadostest.CompletedContainerRequestUUID2}, &bytes.Buffer{}, &stdout, &stderr)
        c.Check(exitcode, check.Equals, 0)
        c.Assert(stderr.String(), check.Matches, "(?ms).*supplied uuids in .*")
 
@@ -220,7 +262,7 @@ func (*Suite) TestDoubleContainerRequestUUID(c *check.C) {
        c.Assert(err, check.IsNil)
 
        // Run costanalyzer with the project uuid
-       exitcode = Command.RunCommand("costanalyzer.test", []string{"-uuid", arvadostest.AProjectUUID, "-cache=false", "-log-level", "debug", "-output", "results"}, &bytes.Buffer{}, &stdout, &stderr)
+       exitcode = Command.RunCommand("costanalyzer.test", []string{"-cache=false", "-log-level", "debug", "-output", "results", arvadostest.AProjectUUID}, &bytes.Buffer{}, &stdout, &stderr)
        c.Check(exitcode, check.Equals, 0)
        c.Assert(stderr.String(), check.Matches, "(?ms).*supplied uuids in .*")
 
@@ -243,8 +285,19 @@ func (*Suite) TestDoubleContainerRequestUUID(c *check.C) {
 
 func (*Suite) TestMultipleContainerRequestUUIDWithReuse(c *check.C) {
        var stdout, stderr bytes.Buffer
+       // Run costanalyzer with 2 container request uuids, without output directory specified
+       exitcode := Command.RunCommand("costanalyzer.test", []string{arvadostest.CompletedDiagnosticsContainerRequest1UUID, arvadostest.CompletedDiagnosticsContainerRequest2UUID}, &bytes.Buffer{}, &stdout, &stderr)
+       c.Check(exitcode, check.Equals, 0)
+       c.Assert(stderr.String(), check.Not(check.Matches), "(?ms).*supplied uuids in .*")
+
+       // Check that the total amount was printed to stdout
+       c.Check(stdout.String(), check.Matches, "0.01492030\n")
+
+       stdout.Truncate(0)
+       stderr.Truncate(0)
+
        // Run costanalyzer with 2 container request uuids
-       exitcode := Command.RunCommand("costanalyzer.test", []string{"-uuid", arvadostest.CompletedDiagnosticsContainerRequest1UUID, "-uuid", arvadostest.CompletedDiagnosticsContainerRequest2UUID, "-output", "results"}, &bytes.Buffer{}, &stdout, &stderr)
+       exitcode = Command.RunCommand("costanalyzer.test", []string{"-output", "results", arvadostest.CompletedDiagnosticsContainerRequest1UUID, arvadostest.CompletedDiagnosticsContainerRequest2UUID}, &bytes.Buffer{}, &stdout, &stderr)
        c.Check(exitcode, check.Equals, 0)
        c.Assert(stderr.String(), check.Matches, "(?ms).*supplied uuids in .*")
 
index 4bfe272789062018d47e33e1b46394a725f45e97..3f7f7a9722885d3d373d19ffb22c863621e37274 100644 (file)
@@ -201,7 +201,7 @@ def arg_parser():  # type: () -> argparse.ArgumentParser
                         help=argparse.SUPPRESS)
 
     parser.add_argument("--thread-count", type=int,
-                        default=1, help="Number of threads to use for job submit and output collection.")
+                        default=4, help="Number of threads to use for job submit and output collection.")
 
     parser.add_argument("--http-timeout", type=int,
                         default=5*60, dest="http_timeout", help="API request timeout in seconds. Default is 300 seconds (5 minutes).")
index 99d82f3398332d10e0ac0cbf95decb0d5870bd5a..7b81bfb447a54b15674095508f7a95b4ec21c1e1 100644 (file)
@@ -21,8 +21,7 @@ import ruamel.yaml as yaml
 
 from cwltool.errors import WorkflowException
 from cwltool.process import UnsupportedRequirement, shortname
-from cwltool.pathmapper import adjustFileObjs, adjustDirObjs, visit_class
-from cwltool.utils import aslist
+from cwltool.utils import aslist, adjustFileObjs, adjustDirObjs, visit_class
 from cwltool.job import JobBase
 
 import arvados.collection
@@ -235,7 +234,9 @@ class ArvadosContainer(JobBase):
         container_request["container_image"] = arv_docker_get_image(self.arvrunner.api,
                                                                     docker_req,
                                                                     runtimeContext.pull_image,
-                                                                    runtimeContext.project_uuid)
+                                                                    runtimeContext.project_uuid,
+                                                                    runtimeContext.force_docker_pull,
+                                                                    runtimeContext.tmp_outdir_prefix)
 
         network_req, _ = self.get_requirement("NetworkAccess")
         if network_req:
index a8f56ad1d4f30db21c5a59f0fb6df9258c723c58..3c820827132e378ca88efe4eca9e76ea7df468ef 100644 (file)
@@ -18,7 +18,8 @@ logger = logging.getLogger('arvados.cwl-runner')
 cached_lookups = {}
 cached_lookups_lock = threading.Lock()
 
-def arv_docker_get_image(api_client, dockerRequirement, pull_image, project_uuid):
+def arv_docker_get_image(api_client, dockerRequirement, pull_image, project_uuid,
+                         force_pull, tmp_outdir_prefix):
     """Check if a Docker image is available in Keep, if not, upload it using arv-keepdocker."""
 
     if "http://arvados.org/cwl#dockerCollectionPDH" in dockerRequirement:
@@ -48,7 +49,8 @@ def arv_docker_get_image(api_client, dockerRequirement, pull_image, project_uuid
         if not images:
             # Fetch Docker image if necessary.
             try:
-                cwltool.docker.DockerCommandLineJob.get_image(dockerRequirement, pull_image)
+                cwltool.docker.DockerCommandLineJob.get_image(dockerRequirement, pull_image,
+                                                              force_pull, tmp_outdir_prefix)
             except OSError as e:
                 raise WorkflowException("While trying to get Docker image '%s', failed to execute 'docker': %s" % (dockerRequirement["dockerImageId"], e))
 
index 56c6f39e9d76654816e60b30595841f80b106d34..6067ae9f442b70c6d42db62df1581ab32a7cea37 100644 (file)
@@ -17,7 +17,7 @@ from cwltool.pack import pack
 from cwltool.load_tool import fetch_document, resolve_and_validate_document
 from cwltool.process import shortname
 from cwltool.workflow import Workflow, WorkflowException, WorkflowStep
-from cwltool.pathmapper import adjustFileObjs, adjustDirObjs, visit_class
+from cwltool.utils import adjustFileObjs, adjustDirObjs, visit_class
 from cwltool.context import LoadingContext
 
 import ruamel.yaml as yaml
index 68141586decdc7f656ca715e14bec1a6c436528a..947b630bab9d861deebf3772bb1ef53376fb2be4 100644 (file)
@@ -37,7 +37,7 @@ from .arvworkflow import ArvadosWorkflow, upload_workflow
 from .fsaccess import CollectionFsAccess, CollectionFetcher, collectionResolver, CollectionCache, pdh_size
 from .perf import Perf
 from .pathmapper import NoFollowPathMapper
-from .task_queue import TaskQueue
+from cwltool.task_queue import TaskQueue
 from .context import ArvLoadingContext, ArvRuntimeContext
 from ._version import __version__
 
index 5bad290773be9f49ef2e87b10b2dac48e70ef75b..e0b2d25bc5e89155bccc09567998fe1afcda2888 100644 (file)
@@ -21,7 +21,9 @@ import arvados.collection
 from schema_salad.sourceline import SourceLine
 
 from arvados.errors import ApiError
-from cwltool.pathmapper import PathMapper, MapperEnt, abspath, adjustFileObjs, adjustDirObjs
+from cwltool.pathmapper import PathMapper, MapperEnt
+from cwltool.utils import adjustFileObjs, adjustDirObjs
+from cwltool.stdfsaccess import abspath
 from cwltool.workflow import WorkflowException
 
 from .http import http_to_keep
index f0645311651c244c7d4f18dc3710663d01c2c481..42d4b552acc92bc8d76a3ffbbf32edc539682aad 100644 (file)
@@ -31,8 +31,7 @@ import cwltool.workflow
 from cwltool.process import (scandeps, UnsupportedRequirement, normalizeFilesDirs,
                              shortname, Process, fill_in_defaults)
 from cwltool.load_tool import fetch_document
-from cwltool.pathmapper import adjustFileObjs, adjustDirObjs, visit_class
-from cwltool.utils import aslist
+from cwltool.utils import aslist, adjustFileObjs, adjustDirObjs, visit_class
 from cwltool.builder import substitute
 from cwltool.pack import pack
 from cwltool.update import INTERNAL_VERSION
@@ -444,9 +443,14 @@ def upload_docker(arvrunner, tool):
                 # TODO: can be supported by containers API, but not jobs API.
                 raise SourceLine(docker_req, "dockerOutputDirectory", UnsupportedRequirement).makeError(
                     "Option 'dockerOutputDirectory' of DockerRequirement not supported.")
-            arvados_cwl.arvdocker.arv_docker_get_image(arvrunner.api, docker_req, True, arvrunner.project_uuid)
+            arvados_cwl.arvdocker.arv_docker_get_image(arvrunner.api, docker_req, True, arvrunner.project_uuid,
+                                                       arvrunner.runtimeContext.force_docker_pull,
+                                                       arvrunner.runtimeContext.tmp_outdir_prefix)
         else:
-            arvados_cwl.arvdocker.arv_docker_get_image(arvrunner.api, {"dockerPull": "arvados/jobs:"+__version__}, True, arvrunner.project_uuid)
+            arvados_cwl.arvdocker.arv_docker_get_image(arvrunner.api, {"dockerPull": "arvados/jobs:"+__version__},
+                                                       True, arvrunner.project_uuid,
+                                                       arvrunner.runtimeContext.force_docker_pull,
+                                                       arvrunner.runtimeContext.tmp_outdir_prefix)
     elif isinstance(tool, cwltool.workflow.Workflow):
         for s in tool.steps:
             upload_docker(arvrunner, s.embedded_tool)
@@ -479,7 +483,10 @@ def packed_workflow(arvrunner, tool, merged_map):
             if "location" in v and v["location"] in merged_map[cur_id].secondaryFiles:
                 v["secondaryFiles"] = merged_map[cur_id].secondaryFiles[v["location"]]
             if v.get("class") == "DockerRequirement":
-                v["http://arvados.org/cwl#dockerCollectionPDH"] = arvados_cwl.arvdocker.arv_docker_get_image(arvrunner.api, v, True, arvrunner.project_uuid)
+                v["http://arvados.org/cwl#dockerCollectionPDH"] = arvados_cwl.arvdocker.arv_docker_get_image(arvrunner.api, v, True,
+                                                                                                             arvrunner.project_uuid,
+                                                                                                             arvrunner.runtimeContext.force_docker_pull,
+                                                                                                             arvrunner.runtimeContext.tmp_outdir_prefix)
             for l in v:
                 visit(v[l], cur_id)
         if isinstance(v, list):
@@ -584,7 +591,9 @@ def arvados_jobs_image(arvrunner, img):
     """Determine if the right arvados/jobs image version is available.  If not, try to pull and upload it."""
 
     try:
-        return arvados_cwl.arvdocker.arv_docker_get_image(arvrunner.api, {"dockerPull": img}, True, arvrunner.project_uuid)
+        return arvados_cwl.arvdocker.arv_docker_get_image(arvrunner.api, {"dockerPull": img}, True, arvrunner.project_uuid,
+                                                          arvrunner.runtimeContext.force_docker_pull,
+                                                          arvrunner.runtimeContext.tmp_outdir_prefix)
     except Exception as e:
         raise Exception("Docker image %s is not available\n%s" % (img, e) )
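For reference, a hypothetical call showing the widened @arv_docker_get_image@ signature introduced by this change; the image tag and argument values are illustrative only:

```python
import arvados
from arvados_cwl.arvdocker import arv_docker_get_image

api = arvados.api("v1")
# The last two positional arguments are new in this change: they thread
# cwltool's force_docker_pull and tmp_outdir_prefix settings through.
pdh = arv_docker_get_image(api, {"dockerPull": "arvados/jobs:2.1.0"},
                           True,     # pull_image
                           None,     # project_uuid
                           False,    # force_pull
                           "/tmp/")  # tmp_outdir_prefix
print(pdh)
```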
 
diff --git a/sdk/cwl/arvados_cwl/task_queue.py b/sdk/cwl/arvados_cwl/task_queue.py
deleted file mode 100644 (file)
index d75fec6..0000000
+++ /dev/null
@@ -1,77 +0,0 @@
-# Copyright (C) The Arvados Authors. All rights reserved.
-#
-# SPDX-License-Identifier: Apache-2.0
-
-from future import standard_library
-standard_library.install_aliases()
-from builtins import range
-from builtins import object
-
-import queue
-import threading
-import logging
-
-logger = logging.getLogger('arvados.cwl-runner')
-
-class TaskQueue(object):
-    def __init__(self, lock, thread_count):
-        self.thread_count = thread_count
-        self.task_queue = queue.Queue(maxsize=self.thread_count)
-        self.task_queue_threads = []
-        self.lock = lock
-        self.in_flight = 0
-        self.error = None
-
-        for r in range(0, self.thread_count):
-            t = threading.Thread(target=self.task_queue_func)
-            self.task_queue_threads.append(t)
-            t.start()
-
-    def task_queue_func(self):
-        while True:
-            task = self.task_queue.get()
-            if task is None:
-                return
-            try:
-                task()
-            except Exception as e:
-                logger.exception("Unhandled exception running task")
-                self.error = e
-
-            with self.lock:
-                self.in_flight -= 1
-
-    def add(self, task, unlock, check_done):
-        if self.thread_count > 1:
-            with self.lock:
-                self.in_flight += 1
-        else:
-            task()
-            return
-
-        while True:
-            try:
-                unlock.release()
-                if check_done.is_set():
-                    return
-                self.task_queue.put(task, block=True, timeout=3)
-                return
-            except queue.Full:
-                pass
-            finally:
-                unlock.acquire()
-
-
-    def drain(self):
-        try:
-            # Drain queue
-            while not self.task_queue.empty():
-                self.task_queue.get(True, .1)
-        except queue.Empty:
-            pass
-
-    def join(self):
-        for t in self.task_queue_threads:
-            self.task_queue.put(None)
-        for t in self.task_queue_threads:
-            t.join()
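Since the class now comes from cwltool, a minimal usage sketch, assuming the upstream @cwltool.task_queue.TaskQueue@ keeps the interface of the module deleted above (constructor takes a lock and a thread count; @add()@ expects a held lock and an event):

```python
import threading
from cwltool.task_queue import TaskQueue

lock = threading.Lock()
done = threading.Event()
tq = TaskQueue(threading.Lock(), 2)

# add() temporarily releases the caller's lock while enqueueing, so it
# must be called with that lock held.
with lock:
    tq.add(lambda: print("task ran"), lock, done)

tq.join()   # tell worker threads to exit, then wait for them
tq.drain()  # discard anything still queued
```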
index 334c636dd422e091a2fb760be0b098117b21f0c4..a2fba730c5e6241704a5fc2c1df3bee47d588104 100644 (file)
@@ -39,7 +39,7 @@ setup(name='arvados-cwl-runner',
       # file to determine what version of cwltool and schema-salad to
       # build.
       install_requires=[
-          'cwltool==3.0.20200807132242',
+          'cwltool==3.0.20201121085451',
           'schema-salad==7.0.20200612160654',
           'arvados-python-client{}'.format(pysdk_dep),
           'setuptools',
index 517ca000bb3674817592855879a64370b4d6381c..7aaa27fd63a1e9ee9414062ab8544a9ab3d7d17d 100644 (file)
@@ -303,7 +303,7 @@ def stubs(func):
             'state': 'Committed',
             'command': ['arvados-cwl-runner', '--local', '--api=containers',
                         '--no-log-timestamps', '--disable-validate', '--disable-color',
-                        '--eval-timeout=20', '--thread-count=1',
+                        '--eval-timeout=20', '--thread-count=4',
                         '--enable-reuse', "--collection-cache-size=256", '--debug', '--on-error=continue',
                         '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json'],
             'name': 'submit_wf.cwl',
@@ -413,7 +413,7 @@ class TestSubmit(unittest.TestCase):
         expect_container["command"] = [
             'arvados-cwl-runner', '--local', '--api=containers',
             '--no-log-timestamps', '--disable-validate', '--disable-color',
-            '--eval-timeout=20', '--thread-count=1',
+            '--eval-timeout=20', '--thread-count=4',
             '--disable-reuse', "--collection-cache-size=256",
             '--debug', '--on-error=continue',
             '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
@@ -437,7 +437,7 @@ class TestSubmit(unittest.TestCase):
         expect_container["command"] = [
             'arvados-cwl-runner', '--local', '--api=containers',
             '--no-log-timestamps', '--disable-validate', '--disable-color',
-            '--eval-timeout=20', '--thread-count=1',
+            '--eval-timeout=20', '--thread-count=4',
             '--disable-reuse', "--collection-cache-size=256", '--debug', '--on-error=continue',
             '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
         expect_container["use_existing"] = False
@@ -469,7 +469,7 @@ class TestSubmit(unittest.TestCase):
         expect_container = copy.deepcopy(stubs.expect_container_spec)
         expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
                                        '--no-log-timestamps', '--disable-validate', '--disable-color',
-                                       '--eval-timeout=20', '--thread-count=1',
+                                       '--eval-timeout=20', '--thread-count=4',
                                        '--enable-reuse', "--collection-cache-size=256",
                                        '--debug', '--on-error=stop',
                                        '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
@@ -492,7 +492,7 @@ class TestSubmit(unittest.TestCase):
         expect_container = copy.deepcopy(stubs.expect_container_spec)
         expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
                                        '--no-log-timestamps', '--disable-validate', '--disable-color',
-                                       '--eval-timeout=20', '--thread-count=1',
+                                       '--eval-timeout=20', '--thread-count=4',
                                        '--enable-reuse', "--collection-cache-size=256",
                                        "--output-name="+output_name, '--debug', '--on-error=continue',
                                        '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
@@ -514,7 +514,7 @@ class TestSubmit(unittest.TestCase):
         expect_container = copy.deepcopy(stubs.expect_container_spec)
         expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
                                        '--no-log-timestamps', '--disable-validate', '--disable-color',
-                                       '--eval-timeout=20', '--thread-count=1',
+                                       '--eval-timeout=20', '--thread-count=4',
                                        '--enable-reuse', "--collection-cache-size=256", "--debug",
                                        "--storage-classes=foo", '--on-error=continue',
                                        '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
@@ -525,7 +525,7 @@ class TestSubmit(unittest.TestCase):
                          stubs.expect_container_request_uuid + '\n')
         self.assertEqual(exited, 0)
 
-    @mock.patch("arvados_cwl.task_queue.TaskQueue")
+    @mock.patch("cwltool.task_queue.TaskQueue")
     @mock.patch("arvados_cwl.arvworkflow.ArvadosWorkflow.job")
     @mock.patch("arvados_cwl.executor.ArvCwlExecutor.make_output_collection")
     @stubs
@@ -546,7 +546,7 @@ class TestSubmit(unittest.TestCase):
         make_output.assert_called_with(u'Output of submit_wf.cwl', ['foo'], '', 'zzzzz-4zz18-zzzzzzzzzzzzzzzz')
         self.assertEqual(exited, 0)
 
-    @mock.patch("arvados_cwl.task_queue.TaskQueue")
+    @mock.patch("cwltool.task_queue.TaskQueue")
     @mock.patch("arvados_cwl.arvworkflow.ArvadosWorkflow.job")
     @mock.patch("arvados_cwl.executor.ArvCwlExecutor.make_output_collection")
     @stubs
@@ -577,7 +577,7 @@ class TestSubmit(unittest.TestCase):
         expect_container = copy.deepcopy(stubs.expect_container_spec)
         expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
                                        '--no-log-timestamps', '--disable-validate', '--disable-color',
-                                       '--eval-timeout=20', '--thread-count=1',
+                                       '--eval-timeout=20', '--thread-count=4',
                                        '--enable-reuse', "--collection-cache-size=256", '--debug',
                                        '--on-error=continue',
                                        "--intermediate-output-ttl=3600",
@@ -600,7 +600,7 @@ class TestSubmit(unittest.TestCase):
         expect_container = copy.deepcopy(stubs.expect_container_spec)
         expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
                                        '--no-log-timestamps', '--disable-validate', '--disable-color',
-                                       '--eval-timeout=20', '--thread-count=1',
+                                       '--eval-timeout=20', '--thread-count=4',
                                        '--enable-reuse', "--collection-cache-size=256",
                                        '--debug', '--on-error=continue',
                                        "--trash-intermediate",
@@ -624,7 +624,7 @@ class TestSubmit(unittest.TestCase):
         expect_container = copy.deepcopy(stubs.expect_container_spec)
         expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
                                        '--no-log-timestamps', '--disable-validate', '--disable-color',
-                                       '--eval-timeout=20', '--thread-count=1',
+                                       '--eval-timeout=20', '--thread-count=4',
                                        '--enable-reuse', "--collection-cache-size=256",
                                        "--output-tags="+output_tags, '--debug', '--on-error=continue',
                                        '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
@@ -701,7 +701,7 @@ class TestSubmit(unittest.TestCase):
             'container_image': '999999999999999999999999999999d3+99',
             'command': ['arvados-cwl-runner', '--local', '--api=containers',
                         '--no-log-timestamps', '--disable-validate', '--disable-color',
-                        '--eval-timeout=20', '--thread-count=1',
+                        '--eval-timeout=20', '--thread-count=4',
                         '--enable-reuse', "--collection-cache-size=256", '--debug', '--on-error=continue',
                         '/var/lib/cwl/workflow/expect_arvworkflow.cwl#main', '/var/lib/cwl/cwl.input.json'],
             'cwd': '/var/spool/cwl',
@@ -796,7 +796,7 @@ class TestSubmit(unittest.TestCase):
             'container_image': "999999999999999999999999999999d3+99",
             'command': ['arvados-cwl-runner', '--local', '--api=containers',
                         '--no-log-timestamps', '--disable-validate', '--disable-color',
-                        '--eval-timeout=20', '--thread-count=1',
+                        '--eval-timeout=20', '--thread-count=4',
                         '--enable-reuse', "--collection-cache-size=256", '--debug', '--on-error=continue',
                         '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json'],
             'cwd': '/var/spool/cwl',
@@ -860,7 +860,7 @@ class TestSubmit(unittest.TestCase):
         expect_container["owner_uuid"] = project_uuid
         expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
                                        '--no-log-timestamps', '--disable-validate', '--disable-color',
-                                       "--eval-timeout=20", "--thread-count=1",
+                                       "--eval-timeout=20", "--thread-count=4",
                                        '--enable-reuse', "--collection-cache-size=256", '--debug',
                                        '--on-error=continue',
                                        '--project-uuid='+project_uuid,
@@ -882,7 +882,7 @@ class TestSubmit(unittest.TestCase):
         expect_container = copy.deepcopy(stubs.expect_container_spec)
         expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
                                        '--no-log-timestamps', '--disable-validate', '--disable-color',
-                                       '--eval-timeout=60.0', '--thread-count=1',
+                                       '--eval-timeout=60.0', '--thread-count=4',
                                        '--enable-reuse', "--collection-cache-size=256",
                                        '--debug', '--on-error=continue',
                                        '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
@@ -903,7 +903,7 @@ class TestSubmit(unittest.TestCase):
         expect_container = copy.deepcopy(stubs.expect_container_spec)
         expect_container["command"] = ['arvados-cwl-runner', '--local', '--api=containers',
                                        '--no-log-timestamps', '--disable-validate', '--disable-color',
-                                       '--eval-timeout=20', '--thread-count=1',
+                                       '--eval-timeout=20', '--thread-count=4',
                                        '--enable-reuse', "--collection-cache-size=500",
                                        '--debug', '--on-error=continue',
                                        '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
@@ -995,7 +995,7 @@ class TestSubmit(unittest.TestCase):
         }
         expect_container['command'] = ['arvados-cwl-runner', '--local', '--api=containers',
                         '--no-log-timestamps', '--disable-validate', '--disable-color',
-                        '--eval-timeout=20', '--thread-count=1',
+                        '--eval-timeout=20', '--thread-count=4',
                         '--enable-reuse', "--collection-cache-size=512", '--debug', '--on-error=continue',
                         '/var/lib/cwl/workflow.json#main', '/var/lib/cwl/cwl.input.json']
 
@@ -1061,7 +1061,7 @@ class TestSubmit(unittest.TestCase):
                 "--disable-validate",
                 "--disable-color",
                 "--eval-timeout=20",
-                '--thread-count=1',
+                '--thread-count=4',
                 "--enable-reuse",
                 "--collection-cache-size=256",
                 '--debug',
index a094890650e1a3049f177e9f01ec2330df7c7451..05e5116d722fb75a59973cb4bfc0373999dff50d 100644 (file)
@@ -11,7 +11,7 @@ import logging
 import os
 import threading
 
-from arvados_cwl.task_queue import TaskQueue
+from cwltool.task_queue import TaskQueue
 
 def success_task():
     pass
index 0cb4151ac3b040292c0af4b5c8b5eef888b48096..c79aa4e945924f782b93fc7c76389b657fe6ecc7 100644 (file)
@@ -73,6 +73,7 @@ if not os.path.exists(TEST_TMPDIR):
 my_api_host = None
 _cached_config = {}
 _cached_db_config = {}
+_already_used_port = {}
 
 def find_server_pid(PID_PATH, wait=10):
     now = time.time()
@@ -181,11 +182,15 @@ def find_available_port():
     would take care of the races, and this wouldn't be needed at all.
     """
 
-    sock = socket.socket()
-    sock.bind(('0.0.0.0', 0))
-    port = sock.getsockname()[1]
-    sock.close()
-    return port
+    global _already_used_port
+    while True:
+        sock = socket.socket()
+        sock.bind(('0.0.0.0', 0))
+        port = sock.getsockname()[1]
+        sock.close()
+        if port not in _already_used_port:
+            _already_used_port[port] = True
+            return port
 
 def _wait_until_port_listens(port, timeout=10, warn=True):
     """Wait for a process to start listening on the given port.
index 57eac048a9595b6d2bb5e139d7a191caa3e70017..7a18d970582786e860a0378717340e5e6ba3f3f1 100755 (executable)
@@ -54,7 +54,7 @@ module CreateSuperUserToken
         end
       end
 
-      api_client_auth.api_token
+      "v2/" + api_client_auth.uuid + "/" + api_client_auth.api_token
     end
   end
 end
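The returned token now uses the Arvados "v2" format; a small Python sketch of its shape (UUID and secret are placeholders):

```python
# v2 tokens are "v2/<api_client_authorization uuid>/<secret>", which is
# what the updated tests below match against.
token = "v2/zzzzz-gj3su-xxxxxxxxxxxxxxx/atesttoken"
prefix, uuid, secret = token.split("/", 2)
assert prefix == "v2"
print(uuid, secret)
```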
index e95e0f2264d20e3767e07363e8a151c7915135ff..3c6dcbdbbc51e12f952bb949474c166a576c9665 100644 (file)
@@ -9,11 +9,11 @@ require 'create_superuser_token'
 class CreateSuperUserTokenTest < ActiveSupport::TestCase
   include CreateSuperUserToken
 
-  test "create superuser token twice and expect same resutls" do
+  test "create superuser token twice and expect same results" do
     # Create a token with some string
     token1 = create_superuser_token 'atesttoken'
     assert_not_nil token1
-    assert_equal token1, 'atesttoken'
+    assert_match(/atesttoken$/, token1)
 
     # Create token again; this time, we should get the one created earlier
     token2 = create_superuser_token
@@ -25,7 +25,7 @@ class CreateSuperUserTokenTest < ActiveSupport::TestCase
     # Create a token with some string
     token1 = create_superuser_token 'atesttoken'
     assert_not_nil token1
-    assert_equal token1, 'atesttoken'
+    assert_match(/\/atesttoken$/, token1)
 
     # Create token again with some other string and expect the existing superuser token back
     token2 = create_superuser_token 'someothertokenstring'
@@ -33,37 +33,26 @@ class CreateSuperUserTokenTest < ActiveSupport::TestCase
     assert_equal token1, token2
   end
 
-  test "create superuser token twice and expect same results" do
-    # Create a token with some string
-    token1 = create_superuser_token 'atesttoken'
-    assert_not_nil token1
-    assert_equal token1, 'atesttoken'
-
-    # Create token again with that same superuser token and expect it back
-    token2 = create_superuser_token 'atesttoken'
-    assert_not_nil token2
-    assert_equal token1, token2
-  end
-
   test "create superuser token and invoke again with some other valid token" do
     # Create a token with some string
     token1 = create_superuser_token 'atesttoken'
     assert_not_nil token1
-    assert_equal token1, 'atesttoken'
+    assert_match(/\/atesttoken$/, token1)
 
     su_token = api_client_authorizations("system_user").api_token
     token2 = create_superuser_token su_token
-    assert_equal token2, su_token
+    assert_equal token2.split('/')[2], su_token
   end
 
   test "create superuser token, expire it, and create again" do
     # Create a token with some string
     token1 = create_superuser_token 'atesttoken'
     assert_not_nil token1
-    assert_equal token1, 'atesttoken'
+    assert_match(/\/atesttoken$/, token1)
 
     # Expire this token and call create again; expect a new token created
-    apiClientAuth = ApiClientAuthorization.where(api_token: token1).first
+    apiClientAuth = ApiClientAuthorization.where(api_token: 'atesttoken').first
+    refute_nil apiClientAuth
     Thread.current[:user] = users(:admin)
     apiClientAuth.update_attributes expires_at: '2000-10-10'
 
index 7fb90789a55b164bd2465bf8d2454cf2d30e4f52..97201d2922116bf7db8cd8368557b4247093a074 100644 (file)
@@ -249,11 +249,14 @@ func (h *handler) serveS3(w http.ResponseWriter, r *http.Request) bool {
        fs.ForwardSlashNameSubstitution(h.Config.cluster.Collections.ForwardSlashNameSubstitution)
 
        var objectNameGiven bool
+       var bucketName string
        fspath := "/by_id"
        if id := parseCollectionIDFromDNSName(r.Host); id != "" {
                fspath += "/" + id
+               bucketName = id
                objectNameGiven = strings.Count(strings.TrimSuffix(r.URL.Path, "/"), "/") > 0
        } else {
+               bucketName = strings.SplitN(strings.TrimPrefix(r.URL.Path, "/"), "/", 2)[0]
                objectNameGiven = strings.Count(strings.TrimSuffix(r.URL.Path, "/"), "/") > 1
        }
        fspath += r.URL.Path
@@ -268,7 +271,7 @@ func (h *handler) serveS3(w http.ResponseWriter, r *http.Request) bool {
                        fmt.Fprintln(w, `<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"/>`)
                } else {
                        // ListObjects
-                       h.s3list(w, r, fs)
+                       h.s3list(bucketName, w, r, fs)
                }
                return true
        case r.Method == http.MethodGet || r.Method == http.MethodHead:
@@ -504,15 +507,13 @@ func walkFS(fs arvados.CustomFileSystem, path string, isRoot bool, fn func(path
 
 var errDone = errors.New("done")
 
-func (h *handler) s3list(w http.ResponseWriter, r *http.Request, fs arvados.CustomFileSystem) {
+func (h *handler) s3list(bucket string, w http.ResponseWriter, r *http.Request, fs arvados.CustomFileSystem) {
        var params struct {
-               bucket    string
                delimiter string
                marker    string
                maxKeys   int
                prefix    string
        }
-       params.bucket = strings.SplitN(r.URL.Path[1:], "/", 2)[0]
        params.delimiter = r.FormValue("delimiter")
        params.marker = r.FormValue("marker")
        if mk, _ := strconv.ParseInt(r.FormValue("max-keys"), 10, 64); mk > 0 && mk < s3MaxKeys {
@@ -522,7 +523,7 @@ func (h *handler) s3list(w http.ResponseWriter, r *http.Request, fs arvados.Cust
        }
        params.prefix = r.FormValue("prefix")
 
-       bucketdir := "by_id/" + params.bucket
+       bucketdir := "by_id/" + bucket
        // walkpath is the directory (relative to bucketdir) we need
        // to walk: the innermost directory that is guaranteed to
        // contain all paths that have the requested prefix. Examples:
@@ -557,7 +558,7 @@ func (h *handler) s3list(w http.ResponseWriter, r *http.Request, fs arvados.Cust
        }
        resp := listResp{
                ListResp: s3.ListResp{
-                       Name:      strings.SplitN(r.URL.Path[1:], "/", 2)[0],
+                       Name:      bucket,
                        Prefix:    params.prefix,
                        Delimiter: params.delimiter,
                        Marker:    params.marker,
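
Taken together, these hunks make serveS3 determine the bucket name once, up front, for both S3 addressing styles: virtual-host style, where the bucket is encoded in the Host header, and path style, where it is the first path segment. The name is then passed to s3list explicitly instead of being re-derived from the URL path, which gave the wrong answer for virtual-host-style requests. A rough sketch of that dispatch, with lookupCollectionID as a hypothetical stand-in for keep-web's parseCollectionIDFromDNSName:

    package main

    import (
        "fmt"
        "net/http"
        "net/http/httptest"
        "strings"
    )

    // bucketName mimics the dispatch above. lookupCollectionID returns a
    // non-empty ID when the Host header itself names a bucket.
    func bucketName(r *http.Request, lookupCollectionID func(string) string) string {
        if id := lookupCollectionID(r.Host); id != "" {
            // Virtual-host style: bucket encoded in the hostname.
            return id
        }
        // Path style: bucket is the first path segment.
        return strings.SplitN(strings.TrimPrefix(r.URL.Path, "/"), "/", 2)[0]
    }

    func main() {
        vhost := httptest.NewRequest("GET", "https://mybucket.example.com/key.txt", nil)
        path := httptest.NewRequest("GET", "https://s3.example.com/mybucket/key.txt", nil)
        stub := func(host string) string {
            if strings.HasPrefix(host, "mybucket.") {
                return "mybucket"
            }
            return ""
        }
        fmt.Println(bucketName(vhost, stub), bucketName(path, stub))
    }
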
index b9a6d85ecb1a6bacc75b2a9b94b3ebc54c851f7e..a6aab357e301e4b4703f2c72ef4a1a490ab65766 100644 (file)
@@ -10,6 +10,7 @@ import (
        "fmt"
        "io/ioutil"
        "net/http"
+       "net/http/httptest"
        "net/url"
        "os"
        "os/exec"
@@ -303,7 +304,7 @@ func (s *IntegrationSuite) TestS3ProjectPutObjectNotSupported(c *check.C) {
                err = bucket.PutReader(trial.path, bytes.NewReader(buf), int64(len(buf)), trial.contentType, s3.Private, s3.Options{})
                c.Check(err.(*s3.Error).StatusCode, check.Equals, 400)
                c.Check(err.(*s3.Error).Code, check.Equals, `InvalidArgument`)
-               c.Check(err, check.ErrorMatches, `(mkdir "by_id/zzzzz-j7d0g-[a-z0-9]{15}/newdir2?"|open "/zzzzz-j7d0g-[a-z0-9]{15}/newfile") failed: invalid argument`)
+               c.Check(err, check.ErrorMatches, `(mkdir "/by_id/zzzzz-j7d0g-[a-z0-9]{15}/newdir2?"|open "/zzzzz-j7d0g-[a-z0-9]{15}/newfile") failed: invalid argument`)
 
                _, err = bucket.GetReader(trial.path)
                c.Check(err.(*s3.Error).StatusCode, check.Equals, 404)
@@ -440,6 +441,70 @@ func (stage *s3stage) writeBigDirs(c *check.C, dirs int, filesPerDir int) {
        c.Assert(fs.Sync(), check.IsNil)
 }
 
+func (s *IntegrationSuite) TestS3VirtualHostStyleRequests(c *check.C) {
+       stage := s.s3setup(c)
+       defer stage.teardown(c)
+       for _, trial := range []struct {
+               url            string
+               method         string
+               body           string
+               responseCode   int
+               responseRegexp []string
+       }{
+               {
+                       url:            "https://" + stage.collbucket.Name + ".example.com/",
+                       method:         "GET",
+                       responseCode:   http.StatusOK,
+                       responseRegexp: []string{`(?ms).*sailboat\.txt.*`},
+               },
+               {
+                       url:            "https://" + strings.Replace(stage.coll.PortableDataHash, "+", "-", -1) + ".example.com/",
+                       method:         "GET",
+                       responseCode:   http.StatusOK,
+                       responseRegexp: []string{`(?ms).*sailboat\.txt.*`},
+               },
+               {
+                       url:            "https://" + stage.projbucket.Name + ".example.com/?prefix=" + stage.coll.Name + "/&delimiter=/",
+                       method:         "GET",
+                       responseCode:   http.StatusOK,
+                       responseRegexp: []string{`(?ms).*sailboat\.txt.*`},
+               },
+               {
+                       url:            "https://" + stage.projbucket.Name + ".example.com/" + stage.coll.Name + "/sailboat.txt",
+                       method:         "GET",
+                       responseCode:   http.StatusOK,
+                       responseRegexp: []string{`⛵\n`},
+               },
+               {
+                       url:          "https://" + stage.projbucket.Name + ".example.com/" + stage.coll.Name + "/beep",
+                       method:       "PUT",
+                       body:         "boop",
+                       responseCode: http.StatusOK,
+               },
+               {
+                       url:            "https://" + stage.projbucket.Name + ".example.com/" + stage.coll.Name + "/beep",
+                       method:         "GET",
+                       responseCode:   http.StatusOK,
+                       responseRegexp: []string{`boop`},
+               },
+       } {
+               url, err := url.Parse(trial.url)
+               c.Assert(err, check.IsNil)
+               req, err := http.NewRequest(trial.method, url.String(), bytes.NewReader([]byte(trial.body)))
+               c.Assert(err, check.IsNil)
+               req.Header.Set("Authorization", "AWS "+arvadostest.ActiveTokenV2+":none")
+               rr := httptest.NewRecorder()
+               s.testServer.Server.Handler.ServeHTTP(rr, req)
+               resp := rr.Result()
+               c.Check(resp.StatusCode, check.Equals, trial.responseCode)
+               body, err := ioutil.ReadAll(resp.Body)
+               c.Assert(err, check.IsNil)
+               for _, re := range trial.responseRegexp {
+                       c.Check(string(body), check.Matches, re)
+               }
+       }
+}
+
 func (s *IntegrationSuite) TestS3GetBucketVersioning(c *check.C) {
        stage := s.s3setup(c)
        defer stage.teardown(c)
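
The new virtual-host test drives the keep-web handler in-process through httptest.NewRecorder rather than a live listener, which is what lets it supply arbitrary Host headers such as bucket.example.com. The same pattern in isolation, stripped of the Arvados specifics:

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
        "net/http/httptest"
    )

    func main() {
        // Any http.Handler can be exercised without opening a socket;
        // the recorder captures status, headers, and body.
        h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "host:", r.Host)
        })
        req := httptest.NewRequest("GET", "https://mybucket.example.com/", nil)
        rr := httptest.NewRecorder()
        h.ServeHTTP(rr, req)
        resp := rr.Result()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Print(resp.StatusCode, " ", string(body))
    }
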
index 3175224d037525096d4e2ac56957dab90cc4795e..10d08b414adfdc726e586be2b03a8b1c8b2afdd4 100644 (file)
@@ -17,4 +17,4 @@ and run it as root.
 There's an example `Vagrantfile` also, to install it in a vagrant box if you want
 to try it locally.
 
-For more information, please read https://doc.arvados.org/v2.1/install/install-using-salt.html
+For more information, please read https://doc.arvados.org/main/install/salt-single-host.html
index ed3466ddebd81182540b234d63bbd221efd4b8e9..6966ea83452f74558a4749f44d01d3b076a629d6 100644 (file)
@@ -33,6 +33,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
     arv.vm.provision "shell",
                      path: "provision.sh",
                      args: [
+                       # "--debug",
                        "--test",
                        "--vagrant",
                        "--ssl-port=8443"
index a207d019875a7b43bc2dfed1e116f4a9410ab1a5..a4d55c2216dff7064be0f028473395b0ef97a7f5 100755 (executable)
@@ -1,4 +1,4 @@
-#!/bin/bash 
+#!/bin/bash
 
 # Copyright (C) The Arvados Authors. All rights reserved.
 #
@@ -48,6 +48,13 @@ VERSION="latest"
 ##########################################################
 # Usually there's no need to modify things below this line
 
+# Formula versions
+ARVADOS_TAG="v1.1.3"
+POSTGRES_TAG="v0.41.3"
+NGINX_TAG="v2.4.0"
+DOCKER_TAG="v1.0.0"
+LOCALE_TAG="v0.3.4"
+
 set -o pipefail
 
 # capture the directory that the script is running from
@@ -139,7 +146,7 @@ file_roots:
   base:
     - ${S_DIR}
     - ${F_DIR}/*
-    - ${F_DIR}/*/test/salt/states
+    - ${F_DIR}/*/test/salt/states/examples
 
 pillar_roots:
   base:
@@ -154,8 +161,8 @@ mkdir -p ${P_DIR}
 cat > ${S_DIR}/top.sls << EOFTSLS
 base:
   '*':
-    - example_single_host_host_entries
-    - example_add_snakeoil_certs
+    - single_host.host_entries
+    - single_host.snakeoil_certs
     - locale
     - nginx.passenger
     - postgres
@@ -182,12 +189,13 @@ base:
     - postgresql
 EOFPSLS
 
-
 # Get the formula and dependencies
 cd ${F_DIR} || exit 1
-for f in postgres arvados nginx docker locale; do
-  git clone https://github.com/saltstack-formulas/${f}-formula.git
-done
+git clone --branch "${ARVADOS_TAG}" https://github.com/saltstack-formulas/arvados-formula.git
+git clone --branch "${DOCKER_TAG}" https://github.com/saltstack-formulas/docker-formula.git
+git clone --branch "${LOCALE_TAG}" https://github.com/saltstack-formulas/locale-formula.git
+git clone --branch "${NGINX_TAG}" https://github.com/saltstack-formulas/nginx-formula.git
+git clone --branch "${POSTGRES_TAG}" https://github.com/saltstack-formulas/postgres-formula.git
 
 if [ "x${BRANCH}" != "x" ]; then
   cd ${F_DIR}/arvados-formula || exit 1
@@ -258,9 +266,16 @@ if [ "x${RESTORE_PSQL}" = "xyes" ]; then
 fi
 # END FIXME! #16992 Temporary fix for psql call in arvados-api-server
 
-# If running in a vagrant VM, add default user to docker group
+# Leave a copy of the Arvados CA so the user can copy it where it's required
+echo "Copying the Arvados CA certificate to the installer dir, so you can import it"
+# If running in a vagrant VM, also add default user to docker group
 if [ "x${VAGRANT}" = "xyes" ]; then
-  usermod -a -G docker vagrant 
+  cp /etc/ssl/certs/arvados-snakeoil-ca.pem /vagrant
+
+  echo "Adding the vagrant user to the docker group"
+  usermod -a -G docker vagrant
+else
+  cp /etc/ssl/certs/arvados-snakeoil-ca.pem ${SCRIPT_DIR}
 fi
 
 # Test that the installation finished correctly
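
Leaving arvados-snakeoil-ca.pem next to the installer (or in /vagrant) matters because every client that talks to this single-host cluster must trust that CA explicitly. As one example of what to do with the copied file, a Go client could load it into its TLS root pool; a sketch, assuming the file sits in the current directory and the cluster answers on port 8443:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io/ioutil"
        "net/http"
    )

    func main() {
        // Path assumed: the CA file copied out by provision.sh above.
        pem, err := ioutil.ReadFile("arvados-snakeoil-ca.pem")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(pem) {
            panic("no certificates found in PEM file")
        }
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        }}
        // Hypothetical endpoint; substitute the real cluster and domain.
        resp, err := client.Get("https://workbench.xxxxx.example.com:8443/")
        if err != nil {
            panic(err)
        }
        fmt.Println(resp.Status)
    }
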
index dffd6575e02dc768daa7696e4a4eb94ee2146036..a06244270c237150f159220fffaab4de1a9f2f19 100644 (file)
@@ -73,7 +73,7 @@ arvados:
     tls:
       # certificate: ''
       # key: ''
-      # required to test with snakeoil certs
+      # required to test with arvados-snakeoil certs
       insecure: true
 
     ### TOKENS
index 7c99d2dea7538e042321b1f78b1811cb8224723b..00c3b3a13e6d10f04a4f677e02d86913e3289f17 100644 (file)
@@ -52,8 +52,7 @@ nginx:
               - proxy_set_header: 'X-Real-IP $remote_addr'
               - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
               - proxy_set_header: 'X-External-Client $external_client'
-            # - include: 'snippets/letsencrypt.conf'
-            - include: 'snippets/snakeoil.conf'
+            - include: 'snippets/arvados-snakeoil.conf'
             - access_log: /var/log/nginx/__CLUSTER__.__DOMAIN__.access.log combined
             - error_log: /var/log/nginx/__CLUSTER__.__DOMAIN__.error.log
             - client_max_body_size: 128m
index fc4854e5a8d35fad8cabb628f3383a5a4f65dc83..6554f79a7c44d1f66ac17ce4e4d4b9db4ff7d2e2 100644 (file)
@@ -52,7 +52,6 @@ nginx:
             - client_max_body_size: 64M
             - proxy_http_version: '1.1'
             - proxy_request_buffering: 'off'
-            # - include: 'snippets/letsencrypt.conf'
-            - include: 'snippets/snakeoil.conf'
+            - include: 'snippets/arvados-snakeoil.conf'
             - access_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.access.log combined
             - error_log: /var/log/nginx/keepproxy.__CLUSTER__.__DOMAIN__.error.log
index 513c0393e0e98b2f9f48634527d083c1e415b1a3..cc871b9da14af308163348d85b4a0afe69b6be24 100644 (file)
@@ -52,7 +52,6 @@ nginx:
             - client_max_body_size: 0
             - proxy_http_version: '1.1'
             - proxy_request_buffering: 'off'
-            # - include: 'snippets/letsencrypt.conf'
-            - include: 'snippets/snakeoil.conf'
+            - include: 'snippets/arvados-snakeoil.conf'
             - access_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.access.log combined
             - error_log: /var/log/nginx/collections.__CLUSTER__.__DOMAIN__.error.log
index 495de82d235e2bebb511b56e559110f16b4573c5..a0756b7ce5504df125225017bf16edc3422ef6b3 100644 (file)
@@ -68,8 +68,7 @@ nginx:
                 - add_header: "'Access-Control-Allow-Methods' 'GET, POST, OPTIONS'"
                 - add_header: "'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'"
 
-            # - include: 'snippets/letsencrypt.conf'
-            - include: 'snippets/snakeoil.conf'
+            - include: 'snippets/arvados-snakeoil.conf'
             - access_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.access.log combined
             - error_log: /var/log/nginx/webshell.__CLUSTER__.__DOMAIN__.error.log
 
index 1848a8737ea0fb8e10ff9c884ac87111b2a17b42..ebe03f733745b1f168822deb3171e45183bc13b9 100644 (file)
@@ -53,7 +53,6 @@ nginx:
             - client_max_body_size: 64M
             - proxy_http_version: '1.1'
             - proxy_request_buffering: 'off'
-            # - include: 'snippets/letsencrypt.conf'
-            - include: 'snippets/snakeoil.conf'
+            - include: 'snippets/arvados-snakeoil.conf'
             - access_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.access.log combined
             - error_log: /var/log/nginx/ws.__CLUSTER__.__DOMAIN__.error.log
index 733397adf31556f70c0a47415a2649bb01c441da..8930be408cb0f56350ab3af1d1ab071530bf03b5 100644 (file)
@@ -42,8 +42,7 @@ nginx:
               - 'if (-f $document_root/maintenance.html)':
                 - return: 503
             - location /config.json:
-              - return: {{ "200 '" ~ '{"API_HOST":"__CLUSTER__.__DOMAIN__"}' ~ "'" }}
-            # - include: 'snippets/letsencrypt.conf'
-            - include: 'snippets/snakeoil.conf'
+              - return: {{ "200 '" ~ '{"API_HOST":"__CLUSTER__.__DOMAIN__:__HOST_SSL_PORT__"}' ~ "'" }}
+            - include: 'snippets/arvados-snakeoil.conf'
             - access_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.access.log combined
             - error_log: /var/log/nginx/workbench2.__CLUSTER__.__DOMAIN__.error.log
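
Adding __HOST_SSL_PORT__ to the API_HOST served from /config.json matters because Workbench2 uses that value verbatim as its API endpoint; without an explicit port, an https URL falls back to 443 rather than the 8443 this single-host setup listens on. A small Go illustration of that default-port behavior:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // Without an explicit port, clients use the scheme default (443)
        // rather than the 8443 the single-host install listens on.
        for _, host := range []string{"xxxxx.example.com", "xxxxx.example.com:8443"} {
            u, err := url.Parse("https://" + host + "/arvados/v1/config")
            if err != nil {
                panic(err)
            }
            port := u.Port()
            if port == "" {
                port = "443 (scheme default)"
            }
            fmt.Printf("%s -> port %s\n", u.Host, port)
        }
    }
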
index 9a382e777cc4dbf81f2834b63816c495977c64f2..be571ca77e84ba6208175a431b051a8b72bd5926 100644 (file)
@@ -54,8 +54,7 @@ nginx:
               - proxy_set_header: 'Host $http_host'
               - proxy_set_header: 'X-Real-IP $remote_addr'
               - proxy_set_header: 'X-Forwarded-For $proxy_add_x_forwarded_for'
-            # - include: 'snippets/letsencrypt.conf'
-            - include: 'snippets/snakeoil.conf'
+            - include: 'snippets/arvados-snakeoil.conf'
             - access_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.access.log combined
             - error_log: /var/log/nginx/workbench.__CLUSTER__.__DOMAIN__.error.log
 
index cf61d92b5db2d594e748cc8eee4db9f79b237b3a..8d9de6fdf0b12e338208fa8ba2fcd89b5b995139 100755 (executable)
@@ -7,6 +7,15 @@ export ARVADOS_API_TOKEN=changemesystemroottoken
 export ARVADOS_API_HOST=__CLUSTER__.__DOMAIN__:__HOST_SSL_PORT__
 export ARVADOS_API_HOST_INSECURE=true
 
+set -o pipefail
+
+# First, validate that the CA is installed and that we can query it with no errors.
+if ! curl -s -o /dev/null https://workbench.${ARVADOS_API_HOST}/users/welcome?return_to=%2F; then
+  echo "The Arvados CA was not correctly installed. Although some components will work,"
+  echo "others won't. Please verify that the CA cert file was installed correctly and"
+  echo "retry running these tests."
+  exit 1
+fi
 
 # https://doc.arvados.org/v2.0/install/install-jobs-image.html
 echo "Creating Arvados Standard Docker Images project"