Merge branch '17395-container-output-storage-class' into main
author Peter Amstutz <peter.amstutz@curii.com>
Thu, 1 Jul 2021 20:13:58 +0000 (16:13 -0400)
committer Peter Amstutz <peter.amstutz@curii.com>
Thu, 1 Jul 2021 20:13:58 +0000 (16:13 -0400)
refs #17395

Arvados-DCO-1.1-Signed-off-by: Peter Amstutz <peter.amstutz@curii.com>

35 files changed:
build/version-at-commit.sh
doc/admin/keep-recovering-data.html.textile.liquid
doc/admin/restricting-upload-download.html.textile.liquid [new file with mode: 0644]
doc/admin/upgrading.html.textile.liquid
doc/install/install-manual-prerequisites.html.textile.liquid
doc/install/salt-multi-host.html.textile.liquid
doc/install/salt-single-host.html.textile.liquid
doc/install/salt-vagrant.html.textile.liquid
doc/sdk/go/example.html.textile.liquid
doc/user/cwl/cwl-runner.html.textile.liquid
doc/user/cwl/federated-workflows.html.textile.liquid
lib/config/config.default.yml
lib/config/export.go
lib/config/generated_config.go
sdk/go/arvados/config.go
sdk/go/arvados/fs_collection.go
sdk/go/arvadostest/fixtures.go
services/api/test/functional/arvados/v1/job_reuse_controller_test.rb
services/arv-git-httpd/server_test.go
services/keep-web/cache.go
services/keep-web/handler.go
services/keep-web/handler_test.go
services/keep-web/s3.go
services/keep-web/server_test.go
services/keepproxy/keepproxy.go
services/keepproxy/keepproxy_test.go
tools/arvbox/lib/arvbox/docker/Dockerfile.demo
tools/arvbox/lib/arvbox/docker/service/gitolite/run-service
tools/salt-install/config_examples/single_host/multiple_hostnames/states/snakeoil_certs.sls
tools/salt-install/config_examples/single_host/single_hostname/states/snakeoil_certs.sls
tools/salt-install/local.params.example.multiple_hosts
tools/salt-install/local.params.example.single_host_multiple_hostnames
tools/salt-install/local.params.example.single_host_single_hostname
tools/salt-install/provision.sh
tools/user-activity/arvados_user_activity/main.py

index fc60d53e0f20870b355aacd359ec3e3b99ed6a21..e42b8753934b07581aa69b52950a8cdad5bc521d 100755 (executable)
@@ -14,12 +14,12 @@ devsuffix="~dev"
 #
 # 1. commit is directly tagged.  print that.
 #
-# 2. commit is on master or a development branch, the nearest tag is older
-#    than commit where this branch joins master.
+# 2. commit is on main or a development branch, the nearest tag is older
+#    than commit where this branch joins main.
 #    -> take greatest version tag in repo X.Y.Z and assign X.(Y+1).0
 #
 # 3. commit is on a release branch, the nearest tag is newer
-#    than the commit where this branch joins master.
+#    than the commit where this branch joins main.
 #    -> take nearest tag X.Y.Z and assign X.Y.(Z+1)
 
 tagged=$(git tag --points-at "$commit")
@@ -28,13 +28,13 @@ if [[ -n "$tagged" ]] ; then
     echo $tagged
 else
     # 1. get the nearest tag with 'git describe'
-    # 2. get the merge base between this commit and master
+    # 2. get the merge base between this commit and main
     # 3. if the tag is an ancestor of the merge base,
     #    (tag is older than merge base) increment minor version
     #    else, tag is newer than merge base, so increment point version
 
     nearest_tag=$(git describe --tags --abbrev=0 --match "$versionglob" "$commit")
-    merge_base=$(git merge-base origin/master "$commit")
+    merge_base=$(git merge-base origin/main "$commit")
 
     if git merge-base --is-ancestor "$nearest_tag" "$merge_base" ; then
         # x.(y+1).0~devTIMESTAMP, where x.y.z is the newest version that does not contain $commit
index 3a9f51e569794b406d4f57838ecf7ded528a5830..14e25dd11c5bd17da6808f40553033e813246b09 100644 (file)
@@ -106,4 +106,4 @@ For blocks which were trashed long enough ago that they've been deleted, it may
 * Delete the affected collections so that job reuse doesn't attempt to reuse them (it's likely that if one block is missing, they all are, so they're unlikely to contain any useful data)
 * Resubmit any container requests for which you want the output collections regenerated
 
-The Arvados repository contains a tool that can be used to generate a report to help with this task at "arvados/tools/keep-xref/keep-xref.py":https://github.com/arvados/arvados/blob/master/tools/keep-xref/keep-xref.py
+The Arvados repository contains a tool that can be used to generate a report to help with this task at "arvados/tools/keep-xref/keep-xref.py":https://github.com/arvados/arvados/blob/main/tools/keep-xref/keep-xref.py
diff --git a/doc/admin/restricting-upload-download.html.textile.liquid b/doc/admin/restricting-upload-download.html.textile.liquid
new file mode 100644 (file)
index 0000000..ea10752
--- /dev/null
@@ -0,0 +1,148 @@
+---
+layout: default
+navsection: admin
+title: Restricting upload or download
+...
+
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+For some use cases, you may want to limit the ability of users to upload or download data from outside the cluster.  (By "outside" we mean from networks other than the cluster's own private network).  For example, this makes it possible to share restricted data sets with users so that they may run their own data analysis on the cluster, while preventing them from easily downloading the data set to their local workstation.
+
+This feature exists in addition to the existing Arvados permission system.  Users can only download from collections they have @read@ access to, and can only upload to projects and collections they have @write@ access to.
+
+There are two services involved in accessing data from outside the cluster.
+
+h2. Keepproxy Permissions
+
+Permitting @keepproxy@ makes it possible to use @arv-put@ and @arv-get@, and to upload from Workbench 1.  It works in terms of individual 64 MiB Keep blocks.  It logs each time a user uploads or downloads an individual block.
+
+The default policy allows anyone to upload or download.
+
+<pre>
+    Collections:
+      KeepproxyPermission:
+        User:
+          Download: true
+          Upload: true
+        Admin:
+          Download: true
+          Upload: true
+</pre>
+
+If you create a sharing link as an admin user and then give someone the token from the sharing link to download a file using @arv-get@, the download permission will be restricted based on the "User" role rather than the "Admin" role, because the downloader is anonymous.
+
+h2. WebDAV and S3 API Permissions
+
+Permitting @WebDAV@ makes it possible to use WebDAV and the S3 API, to download from Workbench 1, and to upload/download with Workbench 2.  It works in terms of individual files.  It logs each time a user uploads or downloads a file.  When @WebDAVLogEvents@ is enabled (the default), it also adds an entry to the API server @logs@ table.
+
+When a user attempts to upload or download through a service without the corresponding permission, they will receive a @403 Forbidden@ response.  This applies only to file content.
+
+Denying download permission does not deny access to XML file listings with PROPFIND, or to auto-generated HTML documents containing file listings.
+
+Denying upload permission does not deny other operations that modify collections without directly accessing file content, such as MOVE and COPY.
+
+The default policy allows anyone to upload or download.
+
+<pre>
+    Collections:
+      WebDAVPermission:
+        User:
+          Download: true
+          Upload: true
+        Admin:
+          Download: true
+          Upload: true
+      WebDAVLogEvents: true
+</pre>
+
+If you create a sharing link as an admin user and then give someone the token from the sharing link to download a file over HTTP (WebDAV or S3 API), the download permission will be restricted based on the "User" role rather than the "Admin" role, because the downloader is anonymous.
+
+h2. Shell node and container permissions
+
+Be aware that even when upload and download from outside the network are not allowed, a user who has access to a shell node or runs a container still has internal access to Keep.  (This is necessary to be able to run workflows.)  From the shell node or container, a user could send data outside the network by some other method, although this requires more intent than accidentally clicking on a link and downloading a file.  It is possible to set up a firewall to prevent shell and compute nodes from making connections to hosts outside the private network.  Exactly how to configure firewalls is out of scope for this page, as it depends on the specific network infrastructure of your cluster.
+
+h2. Choosing a policy
+
+The distinction between WebDAV and Keepproxy is important for auditing.  WebDAV records 'upload' and 'download' events on the API server that are included in the "User Activity Report":user-activity.html , whereas @keepproxy@ only logs uploads and downloads of individual blocks, which requires a reverse lookup to determine the collection(s) and file(s) a block is associated with.
+
+You set separate permissions for @WebDAV@ and @Keepproxy@, with separate policies for regular users and admin users.
+
+These policies apply only to access from outside the cluster, using Workbench or Arvados CLI tools.
+
+The @WebDAVLogEvents@ option should be enabled if you intend to run the "User Activity Report":user-activity.html .  If you don't need audits, or you are running a site that mostly serves public data to anonymous downloaders, you can disable it to avoid the extra API server request.
+
+h3. Audited downloads
+
+For ease of access auditing, this policy prevents downloads using @arv-get@.  Downloads through WebDAV and S3 API are permitted, but logged.  Uploads are allowed.
+
+<pre>
+    Collections:
+      WebDAVPermission:
+        User:
+          Download: true
+          Upload: true
+        Admin:
+          Download: true
+          Upload: true
+
+      KeepproxyPermission:
+        User:
+          Download: false
+          Upload: true
+        Admin:
+          Download: false
+          Upload: true
+      WebDAVLogEvents: true
+</pre>
+
+h3. Disallow downloads by regular users
+
+This policy prevents regular users (non-admin) from downloading data.  Uploading is allowed.  This supports the case where restricted data sets are shared with users so that they may run their own data analysis on the cluster, while preventing them from downloading the data set to their local workstation.  Be aware that users won't be able to download the results of their analysis, either, requiring an admin in the loop or some other process to release results.
+
+<pre>
+    Collections:
+      WebDAVPermission:
+        User:
+          Download: false
+          Upload: true
+        Admin:
+          Download: true
+          Upload: true
+
+      KeepproxyPermission:
+        User:
+          Download: false
+          Upload: true
+        Admin:
+          Download: true
+          Upload: true
+      WebDAVLogEvents: true
+</pre>
+
+h3. Disallow uploads by regular users
+
+This policy is suitable for an installation where data is being shared with a group of users who are allowed to download the data, but not permitted to store their own data on the cluster.
+
+<pre>
+    Collections:
+      WebDAVPermission:
+        User:
+          Download: true
+          Upload: false
+        Admin:
+          Download: true
+          Upload: true
+
+      KeepproxyPermission:
+        User:
+          Download: true
+          Upload: false
+        Admin:
+          Download: true
+          Upload: true
+      WebDAVLogEvents: true
+</pre>
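
The @WebDAVLogEvents@ setting introduced in this commit posts @file_upload@ and @file_download@ entries to the API server @logs@ table, which is what the audit policies above rely on.  Below is a minimal sketch (not part of this commit) of how an administrator might query those events with the Go SDK, using the same calls as the new tests in services/keep-web/handler_test.go; the @NewClientFromEnv@ setup and the specific fields printed are illustrative assumptions.

<pre>
package main

import (
	"fmt"
	"log"

	"git.arvados.org/arvados.git/sdk/go/arvados"
)

func main() {
	// Assumes ARVADOS_API_HOST and ARVADOS_API_TOKEN are set for an admin user.
	client := arvados.NewClientFromEnv()

	// List the 20 most recent "file_download" log events.
	limit := 20
	var logs arvados.LogList
	err := client.RequestAndDecode(&logs, "GET", "arvados/v1/logs", nil,
		arvados.ResourceListParams{
			Filters: []arvados.Filter{{Attr: "event_type", Operator: "=", Operand: "file_download"}},
			Limit:   &limit,
			Order:   "created_at desc",
		})
	if err != nil {
		log.Fatal(err)
	}
	for _, ent := range logs.Items {
		// object_uuid is the downloading user; the collection and file
		// path are recorded in the log properties.
		fmt.Println(ent.ObjectUUID,
			ent.Properties["collection_uuid"],
			ent.Properties["collection_file_path"])
	}
}
</pre>
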
index 803b399be22bf058121b85e46ff975c2a71262aa..13f093394ba481d92d25a56a08fa0f3d04285d30 100644 (file)
@@ -414,7 +414,7 @@ h2(#v1_3_3). v1.3.3 (2019-05-14)
 
 This release corrects a potential data loss issue, if you are running Arvados 1.3.0 or 1.3.1 we strongly recommended disabling @keep-balance@ until you can upgrade to 1.3.3 or 1.4.0. With keep-balance disabled, there is no chance of data loss.
 
-We've put together a "wiki page":https://dev.arvados.org/projects/arvados/wiki/Recovering_lost_data which outlines how to recover blocks which have been put in the trash, but not yet deleted, as well as how to identify any collections which have missing blocks so that they can be regenerated. The keep-balance component has been enhanced to provide a list of missing blocks and affected collections and we've provided a "utility script":https://github.com/arvados/arvados/blob/master/tools/keep-xref/keep-xref.py  which can be used to identify the workflows that generated those collections and who ran those workflows, so that they can be rerun.
+We've put together a "wiki page":https://dev.arvados.org/projects/arvados/wiki/Recovering_lost_data which outlines how to recover blocks which have been put in the trash, but not yet deleted, as well as how to identify any collections which have missing blocks so that they can be regenerated. The keep-balance component has been enhanced to provide a list of missing blocks and affected collections and we've provided a "utility script":https://github.com/arvados/arvados/blob/main/tools/keep-xref/keep-xref.py  which can be used to identify the workflows that generated those collections and who ran those workflows, so that they can be rerun.
 
 h2(#v1_3_0). v1.3.0 (2018-12-05)
 
index 73b54c462e91776ae76150e233863f97e34f8d6d..1f0186e33aba5a257da8ce5afb9886dd5f9e9ce3 100644 (file)
@@ -48,7 +48,7 @@ Arvados consists of many components, some of which may be omitted (at the cost o
 
 table(table table-bordered table-condensed).
 |\3=. *Core*|
-|"Postgres database":install-postgresql.html |Stores data for the API server.|Required.|
+|"PostgreSQL database":install-postgresql.html |Stores data for the API server.|Required.|
 |"API server":install-api-server.html |Core Arvados logic for managing users, groups, collections, containers, and enforcing permissions.|Required.|
 |\3=. *Keep (storage)*|
 |"Keepstore":install-keepstore.html |Stores content-addressed blocks in a variety of backends (local filesystem, cloud object storage).|Required.|
@@ -75,6 +75,10 @@ Choose which backend you will use to authenticate users.
 * LDAP login to authenticate users by username/password using the LDAP protocol, supported by many services such as OpenLDAP and Active Directory.
 * PAM login to authenticate users by username/password according to the PAM configuration on the controller node.
 
+h2(#postgresql). PostgreSQL
+
+Arvados works well with a standalone PostgreSQL installation. When deploying on AWS, Aurora RDS also works but Aurora Serverless is not recommended.
+
 h2(#storage). Storage backend
 
 Choose which backend you will use for storing and retrieving content-addressed Keep blocks.
@@ -104,7 +108,7 @@ For a production installation, this is a reasonable starting point:
 <div class="offset1">
 table(table table-bordered table-condensed).
 |_. Function|_. Number of nodes|_. Recommended specs|
-|Postgres database, Arvados API server, Arvados controller, Git, Websockets, Container dispatcher|1|16+ GiB RAM, 4+ cores, fast disk for database|
+|PostgreSQL database, Arvados API server, Arvados controller, Git, Websockets, Container dispatcher|1|16+ GiB RAM, 4+ cores, fast disk for database|
 |Workbench, Keepproxy, Keep-web, Keep-balance|1|8 GiB RAM, 2+ cores|
 |Keepstore servers ^1^|2+|4 GiB RAM|
 |Compute worker nodes ^1^|0+ |Depends on workload; scaled dynamically in the cloud|
index 89dfc1717d5cf7acdd01f53a9344a3665f0864eb..2e4f49b0196c7f8d140e74438899fff76cade2f6 100644 (file)
@@ -68,7 +68,7 @@ Again, if your infrastructure differs from the setup proposed above (ie, using R
 
 h3(#hosts_setup_using_terraform). Hosts setup using terraform (AWS, experimental)
 
-We added a few "terraform":https://terraform.io/ scripts (https://github.com/arvados/arvados/tree/master/tools/terraform) to let you create these instances easier in an AWS account. Check "the Arvados terraform documentation":/doc/install/terraform.html for more details.
+We added a few "terraform":https://terraform.io/ scripts (https://github.com/arvados/arvados/tree/main/tools/terraform) to let you create these instances more easily in an AWS account. Check "the Arvados terraform documentation":/doc/install/terraform.html for more details.
 
 
 
@@ -78,7 +78,7 @@ h2(#multi_host). Multi host install using the provision.sh script
 {% if site.current_version %}
 {% assign branchname = site.current_version | slice: 1, 5 | append: '-dev' %}
 {% else %}
-{% assign branchname = 'master' %}
+{% assign branchname = 'main' %}
 {% endif %}
 
 This is a package-based installation method. Start with the @provision.sh@ script which is available by cloning the @{{ branchname }}@ branch from "https://git.arvados.org/arvados.git":https://git.arvados.org/arvados.git . The @provision.sh@ script and its supporting files can be found in the "arvados/tools/salt-install":https://git.arvados.org/arvados.git/tree/refs/heads/{{ branchname }}:/tools/salt-install directory in the Arvados git repository.
@@ -91,7 +91,7 @@ After setting up a few variables in a config file (next step), you'll be ready t
 
 h3(#create_a_compute_image). Create a compute image
 
-In a multi-host installation, containers are dispatched in docker daemons running in the <i>compute instances</i>, which need some special setup. We provide a "compute image builder script":https://github.com/arvados/arvados/tree/master/tools/compute-images that you can use to build a template image following "these instructions":https://doc.arvados.org/main/install/crunch2-cloud/install-compute-node.html . Once you have that image created, you can use the image ID in the Arvados configuration in the next steps.
+In a multi-host installation, containers are dispatched in docker daemons running in the <i>compute instances</i>, which need some special setup. We provide a "compute image builder script":https://github.com/arvados/arvados/tree/main/tools/compute-images that you can use to build a template image following "these instructions":https://doc.arvados.org/main/install/crunch2-cloud/install-compute-node.html . Once you have that image created, you can use the image ID in the Arvados configuration in the next steps.
 
 h2(#choose_configuration). Choose the desired configuration
 
index 39eb47965a8b4ea3e7d3ffa8dcf7f0ed8b8c2dd2..6a066f77b0a9d6b96270dff35604779e33a8227d 100644 (file)
@@ -28,7 +28,7 @@ h2(#single_host). Single host install using the provision.sh script
 {% if site.current_version %}
 {% assign branchname = site.current_version | slice: 1, 5 | append: '-dev' %}
 {% else %}
-{% assign branchname = 'master' %}
+{% assign branchname = 'main' %}
 {% endif %}
 
 This is a package-based installation method. Start with the @provision.sh@ script which is available by cloning the @{{ branchname }}@ branch from "https://git.arvados.org/arvados.git":https://git.arvados.org/arvados.git .  The @provision.sh@ script and its supporting files can be found in the "arvados/tools/salt-install":https://git.arvados.org/arvados.git/tree/refs/heads/{{ branchname }}:/tools/salt-install directory in the Arvados git repository.
index c913e2041caffc91b6f74bed51a66cad2e975923..8ba4b324e56632a98d81240605673cbf1a2ace23 100644 (file)
@@ -18,7 +18,7 @@ SPDX-License-Identifier: CC-BY-SA-3.0
 
 h2(#vagrant). Vagrant
 
-This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/master/tools/salt-install directory in the Arvados git repository.
+This is a package-based installation method. The Salt scripts are available from the "tools/salt-install":https://github.com/arvados/arvados/tree/main/tools/salt-install directory in the Arvados git repository.
 
 A @Vagrantfile@ is provided to install Arvados in a virtual machine on your computer using "Vagrant":https://www.vagrantup.com/.
 
index 688c45bf34b2681243dbc2816f60e5a04911203e..031791fde1d8c6dde440bd8eacac49637257beec 100644 (file)
@@ -78,4 +78,4 @@ You can save this source as a .go file and run it:
 
 <notextile>{% code example_sdk_go as go %}</notextile>
 
-A few more usage examples can be found in the "services/keepproxy":https://dev.arvados.org/projects/arvados/repository/revisions/master/show/services/keepproxy and "sdk/go/keepclient":https://dev.arvados.org/projects/arvados/repository/revisions/master/show/sdk/go/keepclient directories in the arvados source tree.
+A few more usage examples can be found in the "services/keepproxy":https://dev.arvados.org/projects/arvados/repository/revisions/main/show/services/keepproxy and "sdk/go/keepclient":https://dev.arvados.org/projects/arvados/repository/revisions/main/show/sdk/go/keepclient directories in the arvados source tree.
index 442a60b04f968706f604b53a2a7484d3f1daeb83..b108de551a077d4c24626386eb3a86b745457a09 100644 (file)
@@ -22,7 +22,7 @@ This tutorial will demonstrate how to submit a workflow at the command line usin
 
 h2(#get-files). Get the tutorial files
 
-The tutorial files are located in the documentation section of the Arvados source repository, which can be found on "git.arvados.org":https://git.arvados.org/arvados.git/tree/HEAD:/doc/user/cwl/bwa-mem or "github":https://github.com/arvados/arvados/tree/master/doc/user/cwl/bwa-mem
+The tutorial files are located in the documentation section of the Arvados source repository, which can be found on "git.arvados.org":https://git.arvados.org/arvados.git/tree/HEAD:/doc/user/cwl/bwa-mem or "github":https://github.com/arvados/arvados/tree/main/doc/user/cwl/bwa-mem
 
 <notextile>
 <pre><code>~$ <span class="userinput">git clone https://git.arvados.org/arvados.git</span>
index bf5f1fc059202ffcdffc50c50c5f975517f60ed3..a93aac56b1eabd005ed119271f65b2264e1d042e 100644 (file)
@@ -17,7 +17,7 @@ For more information, visit the "architecture":{{site.baseurl}}/architecture/fed
 
 h2. Get the example files
 
-The tutorial files are located in the "documentation section of the Arvados source repository:":https://github.com/arvados/arvados/tree/master/doc/user/cwl/federated or "see below":#fed-example
+The tutorial files are located in the "documentation section of the Arvados source repository:":https://github.com/arvados/arvados/tree/main/doc/user/cwl/federated or "see below":#fed-example
 
 <notextile>
 <pre><code>~$ <span class="userinput">git clone https://github.com/arvados/arvados</span>
index 645da56718d6ae10bda4fb7dd366541d034a20f8..e28d5cbb7f0cd09b3ad559f6cab0c9a9967c10d5 100644 (file)
@@ -551,6 +551,34 @@ Clusters:
         # Persistent sessions.
         MaxSessions: 100
 
+      # Selectively set permissions for regular users and admins to
+      # download or upload data files using the upload/download
+      # features for Workbench, WebDAV and S3 API support.
+      WebDAVPermission:
+        User:
+          Download: true
+          Upload: true
+        Admin:
+          Download: true
+          Upload: true
+
+      # Selectively set permissions for regular users and admins to be
+      # able to download or upload blocks using arv-put and
+      # arv-get from outside the cluster.
+      KeepproxyPermission:
+        User:
+          Download: true
+          Upload: true
+        Admin:
+          Download: true
+          Upload: true
+
+      # Post upload / download events to the API server logs table, so
+      # that they can be included in the arv-user-activity report.
+      # You can disable this if you find that it is creating excess
+      # load on the API server and you don't need it.
+      WebDAVLogEvents: true
+
     Login:
       # One of the following mechanisms (Google, PAM, LDAP, or
       # LoginCluster) should be enabled; see
index 32a528b3c73835cef815f056d781941386a92695..8753b52f27e17dee80958fd32ab25e46cda2f9fb 100644 (file)
@@ -106,6 +106,9 @@ var whitelist = map[string]bool{
        "Collections.TrashSweepInterval":                      false,
        "Collections.TrustAllContent":                         false,
        "Collections.WebDAVCache":                             false,
+       "Collections.KeepproxyPermission":                     false,
+       "Collections.WebDAVPermission":                        false,
+       "Collections.WebDAVLogEvents":                         false,
        "Containers":                                          true,
        "Containers.CloudVMs":                                 false,
        "Containers.CrunchRunArgumentsList":                   false,
index 1bdc269c083b00b4e3e6d273c87d5a6c97726412..b15bf7eebc29facda6f1a0e2670c5482f83055cf 100644 (file)
@@ -557,6 +557,34 @@ Clusters:
         # Persistent sessions.
         MaxSessions: 100
 
+      # Selectively set permissions for regular users and admins to
+      # download or upload data files using the upload/download
+      # features for Workbench, WebDAV and S3 API support.
+      WebDAVPermission:
+        User:
+          Download: true
+          Upload: true
+        Admin:
+          Download: true
+          Upload: true
+
+      # Selectively set permissions for regular users and admins to be
+      # able to download or upload blocks using arv-put and
+      # arv-get from outside the cluster.
+      KeepproxyPermission:
+        User:
+          Download: true
+          Upload: true
+        Admin:
+          Download: true
+          Upload: true
+
+      # Post upload / download events to the API server logs table, so
+      # that they can be included in the arv-user-activity report.
+      # You can disable this if you find that it is creating excess
+      # load on the API server and you don't need it.
+      WebDAVLogEvents: true
+
     Login:
       # One of the following mechanisms (Google, PAM, LDAP, or
       # LoginCluster) should be enabled; see
index 7fdea2c74059224c0718fe60fcd099df7acb3843..6e59828a3cbf5656fef1e6c7fc790ca9d3b6268f 100644 (file)
@@ -68,6 +68,16 @@ type WebDAVCacheConfig struct {
        MaxSessions          int
 }
 
+type UploadDownloadPermission struct {
+       Upload   bool
+       Download bool
+}
+
+type UploadDownloadRolePermissions struct {
+       User  UploadDownloadPermission
+       Admin UploadDownloadPermission
+}
+
 type Cluster struct {
        ClusterID       string `json:"-"`
        ManagementToken string
@@ -130,6 +140,10 @@ type Cluster struct {
                BalanceTimeout           Duration
 
                WebDAVCache WebDAVCacheConfig
+
+               KeepproxyPermission UploadDownloadRolePermissions
+               WebDAVPermission    UploadDownloadRolePermissions
+               WebDAVLogEvents     bool
        }
        Git struct {
                GitCommand   string
index 22e2b31d57e08d6c5dc813017c62b950f61aac01..b743ab368e33f69a5c1710d63dc410af8a380ffc 100644 (file)
@@ -674,6 +674,7 @@ func (dn *dirnode) Child(name string, replace func(inode) (inode, error)) (inode
                        if err != nil {
                                return nil, err
                        }
+                       coll.UUID = dn.fs.uuid
                        data, err := json.Marshal(&coll)
                        if err == nil {
                                data = append(data, '\n')
index a4d7e88b2354ab6c4258e5bbd0269e1247e25497..4b7ad6dd59fa426e8b1e71c546ee43a851d99c54 100644 (file)
@@ -10,6 +10,7 @@ const (
        ActiveToken             = "3kg6k6lzmp9kj5cpkcoxie963cmvjahbt2fod9zru30k1jqdmi"
        ActiveTokenUUID         = "zzzzz-gj3su-077z32aux8dg2s1"
        ActiveTokenV2           = "v2/zzzzz-gj3su-077z32aux8dg2s1/3kg6k6lzmp9kj5cpkcoxie963cmvjahbt2fod9zru30k1jqdmi"
+       AdminUserUUID           = "zzzzz-tpzed-d9tiejq69daie8f"
        AdminToken              = "4axaw8zxe0qm22wa6urpp5nskcne8z88cvbupv653y1njyi05h"
        AdminTokenUUID          = "zzzzz-gj3su-027z32aux8dg2s1"
        AnonymousToken          = "4kg6k6lzmp9kj4cpkcoxie964cmvjahbt4fod9zru44k4jqdmi"
@@ -30,6 +31,8 @@ const (
        UserAgreementPDH        = "b519d9cb706a29fc7ea24dbea2f05851+93"
        HelloWorldPdh           = "55713e6a34081eb03609e7ad5fcad129+62"
 
+       MultilevelCollection1 = "zzzzz-4zz18-pyw8yp9g3pr7irn"
+
        AProjectUUID    = "zzzzz-j7d0g-v955i6s2oi1cbso"
        ASubprojectUUID = "zzzzz-j7d0g-axqo7eu9pwvna1x"
 
index 02c5c6892ce8e01abbbd8278024e8bec42af613f..46cfac5c9a841899f3267788141bb8c2163f12a0 100644 (file)
@@ -16,7 +16,7 @@ class Arvados::V1::JobReuseControllerTest < ActionController::TestCase
   BASE_FILTERS = {
     'repository' => ['=', 'active/foo'],
     'script' => ['=', 'hash'],
-    'script_version' => ['in git', 'master'],
+    'script_version' => ['in git', 'main'],
     'docker_image_locator' => ['=', nil],
     'arvados_sdk_version' => ['=', nil],
   }
index cba82fe3f299177851d847189bf9313d112f438d..386205d37f10db75a988fef59137e97c57b15a95 100644 (file)
@@ -39,7 +39,7 @@ func (s *GitSuite) TestPathVariants(c *check.C) {
 func (s *GitSuite) TestReadonly(c *check.C) {
        err := s.RunGit(c, spectatorToken, "fetch", "active/foo.git")
        c.Assert(err, check.Equals, nil)
-       err = s.RunGit(c, spectatorToken, "push", "active/foo.git", "master:newbranchfail")
+       err = s.RunGit(c, spectatorToken, "push", "active/foo.git", "main:newbranchfail")
        c.Assert(err, check.ErrorMatches, `.*HTTP (code = )?403.*`)
        _, err = os.Stat(s.tmpRepoRoot + "/zzzzz-s0uqq-382brsig8rp3666.git/refs/heads/newbranchfail")
        c.Assert(err, check.FitsTypeOf, &os.PathError{})
@@ -48,7 +48,7 @@ func (s *GitSuite) TestReadonly(c *check.C) {
 func (s *GitSuite) TestReadwrite(c *check.C) {
        err := s.RunGit(c, activeToken, "fetch", "active/foo.git")
        c.Assert(err, check.Equals, nil)
-       err = s.RunGit(c, activeToken, "push", "active/foo.git", "master:newbranch")
+       err = s.RunGit(c, activeToken, "push", "active/foo.git", "main:newbranch")
        c.Assert(err, check.Equals, nil)
        _, err = os.Stat(s.tmpRepoRoot + "/zzzzz-s0uqq-382brsig8rp3666.git/refs/heads/newbranch")
        c.Assert(err, check.Equals, nil)
@@ -104,7 +104,7 @@ func (s *GitSuite) makeArvadosRepo(c *check.C) {
        msg, err := exec.Command("git", "init", "--bare", s.tmpRepoRoot+"/zzzzz-s0uqq-arvadosrepo0123.git").CombinedOutput()
        c.Log(string(msg))
        c.Assert(err, check.Equals, nil)
-       msg, err = exec.Command("git", "--git-dir", s.tmpRepoRoot+"/zzzzz-s0uqq-arvadosrepo0123.git", "fetch", "../../.git", "HEAD:master").CombinedOutput()
+       msg, err = exec.Command("git", "--git-dir", s.tmpRepoRoot+"/zzzzz-s0uqq-arvadosrepo0123.git", "fetch", "../../.git", "HEAD:main").CombinedOutput()
        c.Log(string(msg))
        c.Assert(err, check.Equals, nil)
 }
index 9bdecdca1c40cfd2662197e39f4c129fc146932e..a52af804841fb58f4b837893ae83e3cb76d960b4 100644 (file)
@@ -131,8 +131,12 @@ type cachedPermission struct {
 }
 
 type cachedSession struct {
-       expire time.Time
-       fs     atomic.Value
+       expire        time.Time
+       fs            atomic.Value
+       client        *arvados.Client
+       arvadosclient *arvadosclient.ArvadosClient
+       keepclient    *keepclient.KeepClient
+       user          atomic.Value
 }
 
 func (c *cache) setup() {
@@ -213,7 +217,7 @@ func (c *cache) ResetSession(token string) {
 
 // Get a long-lived CustomFileSystem suitable for doing a read operation
 // with the given token.
-func (c *cache) GetSession(token string) (arvados.CustomFileSystem, error) {
+func (c *cache) GetSession(token string) (arvados.CustomFileSystem, *cachedSession, error) {
        c.setupOnce.Do(c.setup)
        now := time.Now()
        ent, _ := c.sessions.Get(token)
@@ -224,6 +228,17 @@ func (c *cache) GetSession(token string) (arvados.CustomFileSystem, error) {
                sess = &cachedSession{
                        expire: now.Add(c.config.TTL.Duration()),
                }
+               var err error
+               sess.client, err = arvados.NewClientFromConfig(c.cluster)
+               if err != nil {
+                       return nil, nil, err
+               }
+               sess.client.AuthToken = token
+               sess.arvadosclient, err = arvadosclient.New(sess.client)
+               if err != nil {
+                       return nil, nil, err
+               }
+               sess.keepclient = keepclient.New(sess.arvadosclient)
                c.sessions.Add(token, sess)
        } else if sess.expire.Before(now) {
                c.metrics.sessionMisses.Inc()
@@ -234,22 +249,12 @@ func (c *cache) GetSession(token string) (arvados.CustomFileSystem, error) {
        go c.pruneSessions()
        fs, _ := sess.fs.Load().(arvados.CustomFileSystem)
        if fs != nil && !expired {
-               return fs, nil
+               return fs, sess, nil
        }
-       ac, err := arvados.NewClientFromConfig(c.cluster)
-       if err != nil {
-               return nil, err
-       }
-       ac.AuthToken = token
-       arv, err := arvadosclient.New(ac)
-       if err != nil {
-               return nil, err
-       }
-       kc := keepclient.New(arv)
-       fs = ac.SiteFileSystem(kc)
+       fs = sess.client.SiteFileSystem(sess.keepclient)
        fs.ForwardSlashNameSubstitution(c.cluster.Collections.ForwardSlashNameSubstitution)
        sess.fs.Store(fs)
-       return fs, nil
+       return fs, sess, nil
 }
 
 // Remove all expired session cache entries, then remove more entries
@@ -464,3 +469,35 @@ func (c *cache) lookupCollection(key string) *arvados.Collection {
        c.metrics.collectionHits.Inc()
        return ent.collection
 }
+
+func (c *cache) GetTokenUser(token string) (*arvados.User, error) {
+       // Get and cache user record associated with this
+       // token.  We need to know their UUID for logging, and
+       // whether they are an admin or not for certain
+       // permission checks.
+
+       // Get/create session entry
+       _, sess, err := c.GetSession(token)
+       if err != nil {
+               return nil, err
+       }
+
+       // See if the user is already set, and if so, return it
+       user, _ := sess.user.Load().(*arvados.User)
+       if user != nil {
+               return user, nil
+       }
+
+       // Fetch the user record
+       c.metrics.apiCalls.Inc()
+       var current arvados.User
+
+       err = sess.client.RequestAndDecode(&current, "GET", "/arvados/v1/users/current", nil, nil)
+       if err != nil {
+               return nil, err
+       }
+
+       // Stash the user record for next time
+       sess.user.Store(&current)
+       return &current, nil
+}
index 81925421dc53b322fc9eb731422f85907e245fd5..97ec95e3aac3f96111ab49014635ae742073b4e8 100644 (file)
@@ -398,6 +398,7 @@ func (h *handler) ServeHTTP(wOrig http.ResponseWriter, r *http.Request) {
        defer h.clientPool.Put(arv)
 
        var collection *arvados.Collection
+       var tokenUser *arvados.User
        tokenResult := make(map[string]int)
        for _, arv.ApiToken = range tokens {
                var err error
@@ -483,7 +484,17 @@ func (h *handler) ServeHTTP(wOrig http.ResponseWriter, r *http.Request) {
                return
        }
 
+       // Check configured permission
+       _, sess, err := h.Config.Cache.GetSession(arv.ApiToken)
+       tokenUser, err = h.Config.Cache.GetTokenUser(arv.ApiToken)
+
        if webdavMethod[r.Method] {
+               if !h.userPermittedToUploadOrDownload(r.Method, tokenUser) {
+                       http.Error(w, "Not permitted", http.StatusForbidden)
+                       return
+               }
+               h.logUploadOrDownload(r, sess.arvadosclient, nil, strings.Join(targetPath, "/"), collection, tokenUser)
+
                if writeMethod[r.Method] {
                        // Save the collection only if/when all
                        // webdav->filesystem operations succeed --
@@ -538,6 +549,12 @@ func (h *handler) ServeHTTP(wOrig http.ResponseWriter, r *http.Request) {
        } else if stat.IsDir() {
                h.serveDirectory(w, r, collection.Name, fs, openPath, true)
        } else {
+               if !h.userPermittedToUploadOrDownload(r.Method, tokenUser) {
+                       http.Error(w, "Not permitted", http.StatusForbidden)
+                       return
+               }
+               h.logUploadOrDownload(r, sess.arvadosclient, nil, strings.Join(targetPath, "/"), collection, tokenUser)
+
                http.ServeContent(w, r, basename, stat.ModTime(), f)
                if wrote := int64(w.WroteBodyBytes()); wrote != stat.Size() && w.WroteStatus() == http.StatusOK {
                        // If we wrote fewer bytes than expected, it's
@@ -583,7 +600,8 @@ func (h *handler) serveSiteFS(w http.ResponseWriter, r *http.Request, tokens []s
                http.Error(w, errReadOnly.Error(), http.StatusMethodNotAllowed)
                return
        }
-       fs, err := h.Config.Cache.GetSession(tokens[0])
+
+       fs, sess, err := h.Config.Cache.GetSession(tokens[0])
        if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
@@ -606,6 +624,14 @@ func (h *handler) serveSiteFS(w http.ResponseWriter, r *http.Request, tokens []s
                }
                return
        }
+
+       tokenUser, err := h.Config.Cache.GetTokenUser(tokens[0])
+       if !h.userPermittedToUploadOrDownload(r.Method, tokenUser) {
+               http.Error(w, "Not permitted", http.StatusForbidden)
+               return
+       }
+       h.logUploadOrDownload(r, sess.arvadosclient, fs, r.URL.Path, nil, tokenUser)
+
        if r.Method == "GET" {
                _, basename := filepath.Split(r.URL.Path)
                applyContentDispositionHdr(w, r, basename, attachment)
@@ -836,3 +862,117 @@ func (h *handler) seeOtherWithCookie(w http.ResponseWriter, r *http.Request, loc
        io.WriteString(w, html.EscapeString(redir))
        io.WriteString(w, `">Continue</A>`)
 }
+
+func (h *handler) userPermittedToUploadOrDownload(method string, tokenUser *arvados.User) bool {
+       var permitDownload bool
+       var permitUpload bool
+       if tokenUser != nil && tokenUser.IsAdmin {
+               permitUpload = h.Config.cluster.Collections.WebDAVPermission.Admin.Upload
+               permitDownload = h.Config.cluster.Collections.WebDAVPermission.Admin.Download
+       } else {
+               permitUpload = h.Config.cluster.Collections.WebDAVPermission.User.Upload
+               permitDownload = h.Config.cluster.Collections.WebDAVPermission.User.Download
+       }
+       if (method == "PUT" || method == "POST") && !permitUpload {
+               // Disallow operations that upload new files.
+               // Permit webdav operations that move existing files around.
+               return false
+       } else if method == "GET" && !permitDownload {
+               // Disallow downloading file contents.
+               // Permit webdav operations like PROPFIND that retrieve metadata
+               // but not file contents.
+               return false
+       }
+       return true
+}
+
+func (h *handler) logUploadOrDownload(
+       r *http.Request,
+       client *arvadosclient.ArvadosClient,
+       fs arvados.CustomFileSystem,
+       filepath string,
+       collection *arvados.Collection,
+       user *arvados.User) {
+
+       log := ctxlog.FromContext(r.Context())
+       props := make(map[string]string)
+       props["reqPath"] = r.URL.Path
+       var useruuid string
+       if user != nil {
+               log = log.WithField("user_uuid", user.UUID).
+                       WithField("user_full_name", user.FullName)
+               useruuid = user.UUID
+       } else {
+               useruuid = fmt.Sprintf("%s-tpzed-anonymouspublic", h.Config.cluster.ClusterID)
+       }
+       if collection == nil && fs != nil {
+               collection, filepath = h.determineCollection(fs, filepath)
+       }
+       if collection != nil {
+               log = log.WithField("collection_uuid", collection.UUID).
+                       WithField("collection_file_path", filepath)
+               props["collection_uuid"] = collection.UUID
+               props["collection_file_path"] = filepath
+       }
+       if r.Method == "PUT" || r.Method == "POST" {
+               log.Info("File upload")
+               if h.Config.cluster.Collections.WebDAVLogEvents {
+                       go func() {
+                               lr := arvadosclient.Dict{"log": arvadosclient.Dict{
+                                       "object_uuid": useruuid,
+                                       "event_type":  "file_upload",
+                                       "properties":  props}}
+                               err := client.Create("logs", lr, nil)
+                               if err != nil {
+                                       log.WithError(err).Error("Failed to create upload log event on API server")
+                               }
+                       }()
+               }
+       } else if r.Method == "GET" {
+               if collection != nil && collection.PortableDataHash != "" {
+                       log = log.WithField("portable_data_hash", collection.PortableDataHash)
+                       props["portable_data_hash"] = collection.PortableDataHash
+               }
+               log.Info("File download")
+               if h.Config.cluster.Collections.WebDAVLogEvents {
+                       go func() {
+                               lr := arvadosclient.Dict{"log": arvadosclient.Dict{
+                                       "object_uuid": useruuid,
+                                       "event_type":  "file_download",
+                                       "properties":  props}}
+                               err := client.Create("logs", lr, nil)
+                               if err != nil {
+                                       log.WithError(err).Error("Failed to create download log event on API server")
+                               }
+                       }()
+               }
+       }
+}
+
+func (h *handler) determineCollection(fs arvados.CustomFileSystem, path string) (*arvados.Collection, string) {
+       segments := strings.Split(path, "/")
+       var i int
+       for i = 0; i < len(segments); i++ {
+               dir := append([]string{}, segments[0:i]...)
+               dir = append(dir, ".arvados#collection")
+               f, err := fs.OpenFile(strings.Join(dir, "/"), os.O_RDONLY, 0)
+               if f != nil {
+                       defer f.Close()
+               }
+               if err != nil {
+                       if !os.IsNotExist(err) {
+                               return nil, ""
+                       }
+                       continue
+               }
+               // err is nil so we found it.
+               decoder := json.NewDecoder(f)
+               var collection arvados.Collection
+               err = decoder.Decode(&collection)
+               if err != nil {
+                       return nil, ""
+               }
+               return &collection, strings.Join(segments[i:], "/")
+       }
+       return nil, ""
+}
index 8715ab24f35c0312fcca8152dc464ab60aa582af..e883e806ccf509fc87a73f592300619e61901fd3 100644 (file)
@@ -9,6 +9,7 @@ import (
        "context"
        "fmt"
        "html"
+       "io"
        "io/ioutil"
        "net/http"
        "net/http/httptest"
@@ -92,8 +93,9 @@ func (s *UnitSuite) TestEmptyResponse(c *check.C) {
 
                // If we return no content because the client sent an
                // If-Modified-Since header, our response should be
-               // 304, and we should not emit a log message.
-               {true, true, http.StatusNotModified, ``},
+               // 304.  We still expect a "File download" log since it
+               // counts as a file access for auditing.
+               {true, true, http.StatusNotModified, `(?ms).*msg="File download".*`},
        } {
                c.Logf("trial: %+v", trial)
                arvadostest.StartKeep(2, true)
@@ -1185,3 +1187,187 @@ func copyHeader(h http.Header) http.Header {
        }
        return hc
 }
+
+func (s *IntegrationSuite) checkUploadDownloadRequest(c *check.C, h *handler, req *http.Request,
+       successCode int, direction string, perm bool, userUuid string, collectionUuid string, filepath string) {
+
+       client := s.testServer.Config.Client
+       client.AuthToken = arvadostest.AdminToken
+       var logentries arvados.LogList
+       limit1 := 1
+       err := client.RequestAndDecode(&logentries, "GET", "arvados/v1/logs", nil,
+               arvados.ResourceListParams{
+                       Limit: &limit1,
+                       Order: "created_at desc"})
+       c.Check(err, check.IsNil)
+       c.Check(logentries.Items, check.HasLen, 1)
+       lastLogId := logentries.Items[0].ID
+       nextLogId := lastLogId
+
+       var logbuf bytes.Buffer
+       logger := logrus.New()
+       logger.Out = &logbuf
+       resp := httptest.NewRecorder()
+       req = req.WithContext(ctxlog.Context(context.Background(), logger))
+       h.ServeHTTP(resp, req)
+
+       if perm {
+               c.Check(resp.Result().StatusCode, check.Equals, successCode)
+               c.Check(logbuf.String(), check.Matches, `(?ms).*msg="File `+direction+`".*`)
+               c.Check(logbuf.String(), check.Not(check.Matches), `(?ms).*level=error.*`)
+
+               count := 0
+               for ; nextLogId == lastLogId && count < 20; count++ {
+                       time.Sleep(50 * time.Millisecond)
+                       err = client.RequestAndDecode(&logentries, "GET", "arvados/v1/logs", nil,
+                               arvados.ResourceListParams{
+                                       Filters: []arvados.Filter{arvados.Filter{Attr: "event_type", Operator: "=", Operand: "file_" + direction}},
+                                       Limit:   &limit1,
+                                       Order:   "created_at desc",
+                               })
+                       c.Check(err, check.IsNil)
+                       if len(logentries.Items) > 0 {
+                               nextLogId = logentries.Items[0].ID
+                       }
+               }
+               c.Check(count, check.Not(check.Equals), 20)
+               c.Check(logentries.Items[0].ObjectUUID, check.Equals, userUuid)
+               c.Check(logentries.Items[0].Properties["collection_uuid"], check.Equals, collectionUuid)
+               c.Check(logentries.Items[0].Properties["collection_file_path"], check.Equals, filepath)
+       } else {
+               c.Check(resp.Result().StatusCode, check.Equals, http.StatusForbidden)
+               c.Check(logbuf.String(), check.Equals, "")
+       }
+}
+
+func (s *IntegrationSuite) TestDownloadLoggingPermission(c *check.C) {
+       config := newConfig(s.ArvConfig)
+       h := handler{Config: config}
+       u := mustParseURL("http://" + arvadostest.FooCollection + ".keep-web.example/foo")
+
+       config.cluster.Collections.TrustAllContent = true
+
+       for _, adminperm := range []bool{true, false} {
+               for _, userperm := range []bool{true, false} {
+                       config.cluster.Collections.WebDAVPermission.Admin.Download = adminperm
+                       config.cluster.Collections.WebDAVPermission.User.Download = userperm
+
+                       // Test admin permission
+                       req := &http.Request{
+                               Method:     "GET",
+                               Host:       u.Host,
+                               URL:        u,
+                               RequestURI: u.RequestURI(),
+                               Header: http.Header{
+                                       "Authorization": {"Bearer " + arvadostest.AdminToken},
+                               },
+                       }
+                       s.checkUploadDownloadRequest(c, &h, req, http.StatusOK, "download", adminperm,
+                               arvadostest.AdminUserUUID, arvadostest.FooCollection, "foo")
+
+                       // Test user permission
+                       req = &http.Request{
+                               Method:     "GET",
+                               Host:       u.Host,
+                               URL:        u,
+                               RequestURI: u.RequestURI(),
+                               Header: http.Header{
+                                       "Authorization": {"Bearer " + arvadostest.ActiveToken},
+                               },
+                       }
+                       s.checkUploadDownloadRequest(c, &h, req, http.StatusOK, "download", userperm,
+                               arvadostest.ActiveUserUUID, arvadostest.FooCollection, "foo")
+               }
+       }
+
+       config.cluster.Collections.WebDAVPermission.User.Download = true
+
+       for _, tryurl := range []string{"http://" + arvadostest.MultilevelCollection1 + ".keep-web.example/dir1/subdir/file1",
+               "http://keep-web/users/active/multilevel_collection_1/dir1/subdir/file1"} {
+
+               u = mustParseURL(tryurl)
+               req := &http.Request{
+                       Method:     "GET",
+                       Host:       u.Host,
+                       URL:        u,
+                       RequestURI: u.RequestURI(),
+                       Header: http.Header{
+                               "Authorization": {"Bearer " + arvadostest.ActiveToken},
+                       },
+               }
+               s.checkUploadDownloadRequest(c, &h, req, http.StatusOK, "download", true,
+                       arvadostest.ActiveUserUUID, arvadostest.MultilevelCollection1, "dir1/subdir/file1")
+       }
+
+       u = mustParseURL("http://" + strings.Replace(arvadostest.FooCollectionPDH, "+", "-", 1) + ".keep-web.example/foo")
+       req := &http.Request{
+               Method:     "GET",
+               Host:       u.Host,
+               URL:        u,
+               RequestURI: u.RequestURI(),
+               Header: http.Header{
+                       "Authorization": {"Bearer " + arvadostest.ActiveToken},
+               },
+       }
+       s.checkUploadDownloadRequest(c, &h, req, http.StatusOK, "download", true,
+               arvadostest.ActiveUserUUID, arvadostest.FooCollection, "foo")
+}
+
+func (s *IntegrationSuite) TestUploadLoggingPermission(c *check.C) {
+       config := newConfig(s.ArvConfig)
+       h := handler{Config: config}
+
+       for _, adminperm := range []bool{true, false} {
+               for _, userperm := range []bool{true, false} {
+
+                       arv := s.testServer.Config.Client
+                       arv.AuthToken = arvadostest.ActiveToken
+
+                       var coll arvados.Collection
+                       err := arv.RequestAndDecode(&coll,
+                               "POST",
+                               "/arvados/v1/collections",
+                               nil,
+                               map[string]interface{}{
+                                       "ensure_unique_name": true,
+                                       "collection": map[string]interface{}{
+                                               "name": "test collection",
+                                       },
+                               })
+                       c.Assert(err, check.Equals, nil)
+
+                       u := mustParseURL("http://" + coll.UUID + ".keep-web.example/bar")
+
+                       config.cluster.Collections.WebDAVPermission.Admin.Upload = adminperm
+                       config.cluster.Collections.WebDAVPermission.User.Upload = userperm
+
+                       // Test admin permission
+                       req := &http.Request{
+                               Method:     "PUT",
+                               Host:       u.Host,
+                               URL:        u,
+                               RequestURI: u.RequestURI(),
+                               Header: http.Header{
+                                       "Authorization": {"Bearer " + arvadostest.AdminToken},
+                               },
+                               Body: io.NopCloser(bytes.NewReader([]byte("bar"))),
+                       }
+                       s.checkUploadDownloadRequest(c, &h, req, http.StatusCreated, "upload", adminperm,
+                               arvadostest.AdminUserUUID, coll.UUID, "bar")
+
+                       // Test user permission
+                       req = &http.Request{
+                               Method:     "PUT",
+                               Host:       u.Host,
+                               URL:        u,
+                               RequestURI: u.RequestURI(),
+                               Header: http.Header{
+                                       "Authorization": {"Bearer " + arvadostest.ActiveToken},
+                               },
+                               Body: io.NopCloser(bytes.NewReader([]byte("bar"))),
+                       }
+                       s.checkUploadDownloadRequest(c, &h, req, http.StatusCreated, "upload", userperm,
+                               arvadostest.ActiveUserUUID, coll.UUID, "bar")
+               }
+       }
+}
index 6ea9bf9f7a8383cc10ed82e108b76bb3ca97585b..e6262374d640f9ed526ef38c816fa785bce1d3d5 100644 (file)
@@ -24,7 +24,9 @@ import (
        "time"
 
        "git.arvados.org/arvados.git/sdk/go/arvados"
+       "git.arvados.org/arvados.git/sdk/go/arvadosclient"
        "git.arvados.org/arvados.git/sdk/go/ctxlog"
+       "git.arvados.org/arvados.git/sdk/go/keepclient"
        "github.com/AdRoll/goamz/s3"
 )
 
@@ -309,19 +311,25 @@ func (h *handler) serveS3(w http.ResponseWriter, r *http.Request) bool {
 
        var err error
        var fs arvados.CustomFileSystem
+       var arvclient *arvadosclient.ArvadosClient
        if r.Method == http.MethodGet || r.Method == http.MethodHead {
                // Use a single session (cached FileSystem) across
                // multiple read requests.
-               fs, err = h.Config.Cache.GetSession(token)
+               var sess *cachedSession
+               fs, sess, err = h.Config.Cache.GetSession(token)
                if err != nil {
                        s3ErrorResponse(w, InternalError, err.Error(), r.URL.Path, http.StatusInternalServerError)
                        return true
                }
+               arvclient = sess.arvadosclient
        } else {
                // Create a FileSystem for this request, to avoid
                // exposing incomplete write operations to concurrent
                // requests.
-               _, kc, client, release, err := h.getClients(r.Header.Get("X-Request-Id"), token)
+               var kc *keepclient.KeepClient
+               var release func()
+               var client *arvados.Client
+               arvclient, kc, client, release, err = h.getClients(r.Header.Get("X-Request-Id"), token)
                if err != nil {
                        s3ErrorResponse(w, InternalError, err.Error(), r.URL.Path, http.StatusInternalServerError)
                        return true
@@ -396,6 +404,14 @@ func (h *handler) serveS3(w http.ResponseWriter, r *http.Request) bool {
                        s3ErrorResponse(w, NoSuchKey, "The specified key does not exist.", r.URL.Path, http.StatusNotFound)
                        return true
                }
+
+               tokenUser, err := h.Config.Cache.GetTokenUser(token)
+               if !h.userPermittedToUploadOrDownload(r.Method, tokenUser) {
+                       http.Error(w, "Not permitted", http.StatusForbidden)
+                       return true
+               }
+               h.logUploadOrDownload(r, arvclient, fs, fspath, nil, tokenUser)
+
                // shallow copy r, and change URL path
                r := *r
                r.URL.Path = fspath
@@ -479,6 +495,14 @@ func (h *handler) serveS3(w http.ResponseWriter, r *http.Request) bool {
                                return true
                        }
                        defer f.Close()
+
+                       tokenUser, err := h.Config.Cache.GetTokenUser(token)
+                       if !h.userPermittedToUploadOrDownload(r.Method, tokenUser) {
+                               http.Error(w, "Not permitted", http.StatusForbidden)
+                               return true
+                       }
+                       h.logUploadOrDownload(r, arvclient, fs, fspath, nil, tokenUser)
+
                        _, err = io.Copy(f, r.Body)
                        if err != nil {
                                err = fmt.Errorf("write to %q failed: %w", r.URL.Path, err)
index 5c68eb4249d0a7da7c4ea04717d8295041d29cd0..a65a48892ae75d709ae578e3354fe58803bddac1 100644 (file)
@@ -34,6 +34,7 @@ var _ = check.Suite(&IntegrationSuite{})
 // IntegrationSuite tests need an API server and a keep-web server
 type IntegrationSuite struct {
        testServer *server
+       ArvConfig  *arvados.Config
 }
 
 func (s *IntegrationSuite) TestNoToken(c *check.C) {
@@ -389,7 +390,7 @@ func (s *IntegrationSuite) TestMetrics(c *check.C) {
        c.Check(summaries["request_duration_seconds/get/404"].SampleCount, check.Equals, "1")
        c.Check(summaries["time_to_status_seconds/get/404"].SampleCount, check.Equals, "1")
        c.Check(counters["arvados_keepweb_collectioncache_requests//"].Value, check.Equals, int64(2))
-       c.Check(counters["arvados_keepweb_collectioncache_api_calls//"].Value, check.Equals, int64(1))
+       c.Check(counters["arvados_keepweb_collectioncache_api_calls//"].Value, check.Equals, int64(2))
        c.Check(counters["arvados_keepweb_collectioncache_hits//"].Value, check.Equals, int64(1))
        c.Check(counters["arvados_keepweb_collectioncache_pdh_hits//"].Value, check.Equals, int64(1))
        c.Check(counters["arvados_keepweb_collectioncache_permission_hits//"].Value, check.Equals, int64(1))
@@ -446,6 +447,7 @@ func (s *IntegrationSuite) SetUpTest(c *check.C) {
        cfg.cluster.ManagementToken = arvadostest.ManagementToken
        cfg.cluster.SystemRootToken = arvadostest.SystemRootToken
        cfg.cluster.Users.AnonymousUserToken = arvadostest.AnonymousToken
+       s.ArvConfig = arvCfg
        s.testServer = &server{Config: cfg}
        err = s.testServer.Start(ctxlog.TestLogger(c))
        c.Assert(err, check.Equals, nil)
index 740ba9b1cbf849462a20f251effea867fba48c29..dd67aff797281829cd21da95e0400448775b9539 100644 (file)
@@ -16,7 +16,6 @@ import (
        "os/signal"
        "regexp"
        "strings"
-       "sync"
        "syscall"
        "time"
 
@@ -29,6 +28,7 @@ import (
        "github.com/coreos/go-systemd/daemon"
        "github.com/ghodss/yaml"
        "github.com/gorilla/mux"
+       lru "github.com/hashicorp/golang-lru"
        log "github.com/sirupsen/logrus"
 )
 
@@ -163,45 +163,53 @@ func run(logger log.FieldLogger, cluster *arvados.Cluster) error {
        signal.Notify(term, syscall.SIGINT)
 
        // Start serving requests.
-       router = MakeRESTRouter(kc, time.Duration(keepclient.DefaultProxyRequestTimeout), cluster.ManagementToken)
+       router, err = MakeRESTRouter(kc, time.Duration(keepclient.DefaultProxyRequestTimeout), cluster, logger)
+       if err != nil {
+               return err
+       }
        return http.Serve(listener, httpserver.AddRequestIDs(httpserver.LogRequests(router)))
 }
 
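+// TokenCacheEntry pairs a cached token's expiry time with the user record
+// returned by the API server, so a cache hit also recovers the requesting user.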
+type TokenCacheEntry struct {
+       expire int64
+       user   *arvados.User
+}
+
 type APITokenCache struct {
-       tokens     map[string]int64
-       lock       sync.Mutex
+       tokens     *lru.TwoQueueCache
        expireTime int64
 }
 
-// RememberToken caches the token and set an expire time.  If we already have
-// an expire time on the token, it is not updated.
-func (cache *APITokenCache) RememberToken(token string) {
-       cache.lock.Lock()
-       defer cache.lock.Unlock()
-
+// RememberToken caches the token and set an expire time.  If the
+// token is already in the cache, it is not updated.
+func (cache *APITokenCache) RememberToken(token string, user *arvados.User) {
        now := time.Now().Unix()
-       if cache.tokens[token] == 0 {
-               cache.tokens[token] = now + cache.expireTime
+       _, ok := cache.tokens.Get(token)
+       if !ok {
+               cache.tokens.Add(token, TokenCacheEntry{
+                       expire: now + cache.expireTime,
+                       user:   user,
+               })
        }
 }
 
 // RecallToken checks if the cached token is known and still believed to be
 // valid.
-func (cache *APITokenCache) RecallToken(token string) bool {
-       cache.lock.Lock()
-       defer cache.lock.Unlock()
+func (cache *APITokenCache) RecallToken(token string) (bool, *arvados.User) {
+       val, ok := cache.tokens.Get(token)
+       if !ok {
+               return false, nil
+       }
 
+       cacheEntry := val.(TokenCacheEntry)
        now := time.Now().Unix()
-       if cache.tokens[token] == 0 {
-               // Unknown token
-               return false
-       } else if now < cache.tokens[token] {
+       if now < cacheEntry.expire {
                // Token is known and still valid
-               return true
+               return true, cacheEntry.user
        } else {
                // Token is expired
-               cache.tokens[token] = 0
-               return false
+               cache.tokens.Remove(token)
+               return false, nil
        }
 }
 
@@ -216,10 +224,10 @@ func GetRemoteAddress(req *http.Request) string {
        return req.RemoteAddr
 }
 
-func CheckAuthorizationHeader(kc *keepclient.KeepClient, cache *APITokenCache, req *http.Request) (pass bool, tok string) {
+func (h *proxyHandler) CheckAuthorizationHeader(req *http.Request) (pass bool, tok string, user *arvados.User) {
        parts := strings.SplitN(req.Header.Get("Authorization"), " ", 2)
        if len(parts) < 2 || !(parts[0] == "OAuth2" || parts[0] == "Bearer") || len(parts[1]) == 0 {
-               return false, ""
+               return false, "", nil
        }
        tok = parts[1]
 
@@ -234,29 +242,56 @@ func CheckAuthorizationHeader(kc *keepclient.KeepClient, cache *APITokenCache, r
                op = "write"
        }
 
-       if cache.RecallToken(op + ":" + tok) {
+       if ok, user := h.APITokenCache.RecallToken(op + ":" + tok); ok {
                // Valid in the cache, short circuit
-               return true, tok
+               return true, tok, user
        }
 
        var err error
-       arv := *kc.Arvados
+       arv := *h.KeepClient.Arvados
        arv.ApiToken = tok
        arv.RequestID = req.Header.Get("X-Request-Id")
-       if op == "read" {
-               err = arv.Call("HEAD", "keep_services", "", "accessible", nil, nil)
-       } else {
-               err = arv.Call("HEAD", "users", "", "current", nil, nil)
+       user = &arvados.User{}
+       userCurrentError := arv.Call("GET", "users", "", "current", nil, user)
+       err = userCurrentError
+       if err != nil && op == "read" {
+               apiError, ok := err.(arvadosclient.APIServerError)
+               if ok && apiError.HttpStatusCode == http.StatusForbidden {
+                       // If it was a scoped "sharing" token it will
+                       // return 403 instead of 401 for the current
+                       // user check.  If it is a download operation
+                       // and they have permission to read the
+                       // keep_services table, we can allow it.
+                       err = arv.Call("HEAD", "keep_services", "", "accessible", nil, nil)
+               }
        }
        if err != nil {
                log.Printf("%s: CheckAuthorizationHeader error: %v", GetRemoteAddress(req), err)
-               return false, ""
+               return false, "", nil
+       }
+
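+       // Apply the cluster's KeepproxyPermission settings: admins and regular
+       // users can each be allowed or denied downloads (read) and uploads (write)
+       // through the proxy.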
+       if userCurrentError == nil && user.IsAdmin {
+               // checking userCurrentError is probably redundant,
+               // IsAdmin would be false anyway. But can't hurt.
+               if op == "read" && !h.cluster.Collections.KeepproxyPermission.Admin.Download {
+                       return false, "", nil
+               }
+               if op == "write" && !h.cluster.Collections.KeepproxyPermission.Admin.Upload {
+                       return false, "", nil
+               }
+       } else {
+               if op == "read" && !h.cluster.Collections.KeepproxyPermission.User.Download {
+                       return false, "", nil
+               }
+               if op == "write" && !h.cluster.Collections.KeepproxyPermission.User.Upload {
+                       return false, "", nil
+               }
        }
 
        // Success!  Update cache
-       cache.RememberToken(op + ":" + tok)
+       h.APITokenCache.RememberToken(op+":"+tok, user)
 
-       return true, tok
+       return true, tok, user
 }
 
 // We need to make a private copy of the default http transport early
@@ -273,11 +308,13 @@ type proxyHandler struct {
        *APITokenCache
        timeout   time.Duration
        transport *http.Transport
+       logger    log.FieldLogger
+       cluster   *arvados.Cluster
 }
 
 // MakeRESTRouter returns an http.Handler that passes GET and PUT
 // requests to the appropriate handlers.
-func MakeRESTRouter(kc *keepclient.KeepClient, timeout time.Duration, mgmtToken string) http.Handler {
+func MakeRESTRouter(kc *keepclient.KeepClient, timeout time.Duration, cluster *arvados.Cluster, logger log.FieldLogger) (http.Handler, error) {
        rest := mux.NewRouter()
 
        transport := defaultTransport
@@ -289,15 +326,22 @@ func MakeRESTRouter(kc *keepclient.KeepClient, timeout time.Duration, mgmtToken
        transport.TLSClientConfig = arvadosclient.MakeTLSConfig(kc.Arvados.ApiInsecure)
        transport.TLSHandshakeTimeout = keepclient.DefaultTLSHandshakeTimeout
 
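+       // Bound the token cache with a two-queue LRU (up to 500 entries) rather
+       // than letting a plain map grow without limit.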
+       cacheQ, err := lru.New2Q(500)
+       if err != nil {
+               return nil, fmt.Errorf("Error from lru.New2Q: %v", err)
+       }
+
        h := &proxyHandler{
                Handler:    rest,
                KeepClient: kc,
                timeout:    timeout,
                transport:  &transport,
                APITokenCache: &APITokenCache{
-                       tokens:     make(map[string]int64),
+                       tokens:     cacheQ,
                        expireTime: 300,
                },
+               logger:  logger,
+               cluster: cluster,
        }
 
        rest.HandleFunc(`/{locator:[0-9a-f]{32}\+.*}`, h.Get).Methods("GET", "HEAD")
@@ -316,19 +360,19 @@ func MakeRESTRouter(kc *keepclient.KeepClient, timeout time.Duration, mgmtToken
        rest.HandleFunc(`/`, h.Options).Methods("OPTIONS")
 
        rest.Handle("/_health/{check}", &health.Handler{
-               Token:  mgmtToken,
+               Token:  cluster.ManagementToken,
                Prefix: "/_health/",
        }).Methods("GET")
 
        rest.NotFoundHandler = InvalidPathHandler{}
-       return h
+       return h, nil
 }
 
 var errLoopDetected = errors.New("loop detected")
 
-func (*proxyHandler) checkLoop(resp http.ResponseWriter, req *http.Request) error {
+func (h *proxyHandler) checkLoop(resp http.ResponseWriter, req *http.Request) error {
        if via := req.Header.Get("Via"); strings.Index(via, " "+viaAlias) >= 0 {
-               log.Printf("proxy loop detected (request has Via: %q): perhaps keepproxy is misidentified by gateway config as an external client, or its keep_services record does not have service_type=proxy?", via)
+               h.logger.Printf("proxy loop detected (request has Via: %q): perhaps keepproxy is misidentified by gateway config as an external client, or its keep_services record does not have service_type=proxy?", via)
                http.Error(resp, errLoopDetected.Error(), http.StatusInternalServerError)
                return errLoopDetected
        }
@@ -354,7 +398,7 @@ func (h *proxyHandler) Options(resp http.ResponseWriter, req *http.Request) {
        SetCorsHeaders(resp)
 }
 
-var errBadAuthorizationHeader = errors.New("Missing or invalid Authorization header")
+var errBadAuthorizationHeader = errors.New("Missing or invalid Authorization header, or method not allowed")
 var errContentLengthMismatch = errors.New("Actual length != expected content length")
 var errMethodNotSupported = errors.New("Method not supported")
 
@@ -384,7 +428,8 @@ func (h *proxyHandler) Get(resp http.ResponseWriter, req *http.Request) {
 
        var pass bool
        var tok string
-       if pass, tok = CheckAuthorizationHeader(kc, h.APITokenCache, req); !pass {
+       var user *arvados.User
+       if pass, tok, user = h.CheckAuthorizationHeader(req); !pass {
                status, err = http.StatusForbidden, errBadAuthorizationHeader
                return
        }
@@ -398,6 +443,18 @@ func (h *proxyHandler) Get(resp http.ResponseWriter, req *http.Request) {
 
        locator = removeHint.ReplaceAllString(locator, "$1")
 
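+       // Log the block download, recording only the hash and size portions of
+       // the locator, tagged with the requesting user when known.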
+       if locator != "" {
+               parts := strings.SplitN(locator, "+", 3)
+               if len(parts) >= 2 {
+                       logger := h.logger
+                       if user != nil {
+                               logger = logger.WithField("user_uuid", user.UUID).
+                                       WithField("user_full_name", user.FullName)
+                       }
+                       logger.WithField("locator", fmt.Sprintf("%s+%s", parts[0], parts[1])).Infof("Block download")
+               }
+       }
+
        switch req.Method {
        case "HEAD":
                expectLength, proxiedURI, err = kc.Ask(locator)
@@ -498,7 +555,8 @@ func (h *proxyHandler) Put(resp http.ResponseWriter, req *http.Request) {
 
        var pass bool
        var tok string
-       if pass, tok = CheckAuthorizationHeader(kc, h.APITokenCache, req); !pass {
+       var user *arvados.User
+       if pass, tok, user = h.CheckAuthorizationHeader(req); !pass {
                err = errBadAuthorizationHeader
                status = http.StatusForbidden
                return
@@ -531,6 +589,18 @@ func (h *proxyHandler) Put(resp http.ResponseWriter, req *http.Request) {
                locatorOut, wroteReplicas, err = kc.PutHR(locatorIn, req.Body, expectLength)
        }
 
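+       // Log the block upload the same way, using the locator returned by the keepstore PUT.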
+       if locatorOut != "" {
+               parts := strings.SplitN(locatorOut, "+", 3)
+               if len(parts) >= 2 {
+                       logger := h.logger
+                       if user != nil {
+                               logger = logger.WithField("user_uuid", user.UUID).
+                                       WithField("user_full_name", user.FullName)
+                       }
+                       logger.WithField("locator", fmt.Sprintf("%s+%s", parts[0], parts[1])).Infof("Block upload")
+               }
+       }
+
        // Tell the client how many successful PUTs we accomplished
        resp.Header().Set(keepclient.XKeepReplicasStored, fmt.Sprintf("%d", wroteReplicas))
 
@@ -585,7 +655,7 @@ func (h *proxyHandler) Index(resp http.ResponseWriter, req *http.Request) {
        }()
 
        kc := h.makeKeepClient(req)
-       ok, token := CheckAuthorizationHeader(kc, h.APITokenCache, req)
+       ok, token, _ := h.CheckAuthorizationHeader(req)
        if !ok {
                status, err = http.StatusForbidden, errBadAuthorizationHeader
                return
index c569a05e74d970efa98248b7f9ca95ff14657fdb..2d4266d8d591ca1f304f0ca1a18993d20348e8be 100644 (file)
@@ -26,6 +26,7 @@ import (
        "git.arvados.org/arvados.git/sdk/go/keepclient"
        log "github.com/sirupsen/logrus"
 
+       "gopkg.in/check.v1"
        . "gopkg.in/check.v1"
 )
 
@@ -120,7 +121,7 @@ func (s *NoKeepServerSuite) TearDownSuite(c *C) {
        arvadostest.StopAPI()
 }
 
-func runProxy(c *C, bogusClientToken bool, loadKeepstoresFromConfig bool) *keepclient.KeepClient {
+func runProxy(c *C, bogusClientToken bool, loadKeepstoresFromConfig bool, kp *arvados.UploadDownloadRolePermissions) (*keepclient.KeepClient, *bytes.Buffer) {
        cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
        c.Assert(err, Equals, nil)
        cluster, err := cfg.GetCluster("")
@@ -133,9 +134,16 @@ func runProxy(c *C, bogusClientToken bool, loadKeepstoresFromConfig bool) *keepc
 
        cluster.Services.Keepproxy.InternalURLs = map[arvados.URL]arvados.ServiceInstance{{Host: ":0"}: {}}
 
+       if kp != nil {
+               cluster.Collections.KeepproxyPermission = *kp
+       }
+
        listener = nil
+       logbuf := &bytes.Buffer{}
+       logger := log.New()
+       logger.Out = logbuf
        go func() {
-               run(log.New(), cluster)
+               run(logger, cluster)
                defer closeListener()
        }()
        waitForListener()
@@ -153,11 +161,11 @@ func runProxy(c *C, bogusClientToken bool, loadKeepstoresFromConfig bool) *keepc
        kc.SetServiceRoots(sr, sr, sr)
        kc.Arvados.External = true
 
-       return kc
+       return kc, logbuf
 }
 
 func (s *ServerRequiredSuite) TestResponseViaHeader(c *C) {
-       runProxy(c, false, false)
+       runProxy(c, false, false, nil)
        defer closeListener()
 
        req, err := http.NewRequest("POST",
@@ -184,7 +192,7 @@ func (s *ServerRequiredSuite) TestResponseViaHeader(c *C) {
 }
 
 func (s *ServerRequiredSuite) TestLoopDetection(c *C) {
-       kc := runProxy(c, false, false)
+       kc, _ := runProxy(c, false, false, nil)
        defer closeListener()
 
        sr := map[string]string{
@@ -202,7 +210,7 @@ func (s *ServerRequiredSuite) TestLoopDetection(c *C) {
 }
 
 func (s *ServerRequiredSuite) TestStorageClassesHeader(c *C) {
-       kc := runProxy(c, false, false)
+       kc, _ := runProxy(c, false, false, nil)
        defer closeListener()
 
        // Set up fake keepstore to record request headers
@@ -251,7 +259,7 @@ func (s *ServerRequiredSuite) TestStorageClassesConfirmedHeader(c *C) {
 }
 
 func (s *ServerRequiredSuite) TestDesiredReplicas(c *C) {
-       kc := runProxy(c, false, false)
+       kc, _ := runProxy(c, false, false, nil)
        defer closeListener()
 
        content := []byte("TestDesiredReplicas")
@@ -268,7 +276,7 @@ func (s *ServerRequiredSuite) TestDesiredReplicas(c *C) {
 }
 
 func (s *ServerRequiredSuite) TestPutWrongContentLength(c *C) {
-       kc := runProxy(c, false, false)
+       kc, _ := runProxy(c, false, false, nil)
        defer closeListener()
 
        content := []byte("TestPutWrongContentLength")
@@ -279,7 +287,8 @@ func (s *ServerRequiredSuite) TestPutWrongContentLength(c *C) {
        // fixes the invalid Content-Length header. In order to test
        // our server behavior, we have to call the handler directly
        // using an httptest.ResponseRecorder.
-       rtr := MakeRESTRouter(kc, 10*time.Second, "")
+       rtr, err := MakeRESTRouter(kc, 10*time.Second, &arvados.Cluster{}, log.New())
+       c.Assert(err, check.IsNil)
 
        type testcase struct {
                sendLength   string
@@ -307,7 +316,7 @@ func (s *ServerRequiredSuite) TestPutWrongContentLength(c *C) {
 }
 
 func (s *ServerRequiredSuite) TestManyFailedPuts(c *C) {
-       kc := runProxy(c, false, false)
+       kc, _ := runProxy(c, false, false, nil)
        defer closeListener()
        router.(*proxyHandler).timeout = time.Nanosecond
 
@@ -334,7 +343,7 @@ func (s *ServerRequiredSuite) TestManyFailedPuts(c *C) {
 }
 
 func (s *ServerRequiredSuite) TestPutAskGet(c *C) {
-       kc := runProxy(c, false, false)
+       kc, logbuf := runProxy(c, false, false, nil)
        defer closeListener()
 
        hash := fmt.Sprintf("%x", md5.Sum([]byte("foo")))
@@ -370,6 +379,9 @@ func (s *ServerRequiredSuite) TestPutAskGet(c *C) {
                c.Check(rep, Equals, 2)
                c.Check(err, Equals, nil)
                c.Log("Finished PutB (expected success)")
+
+               c.Check(logbuf.String(), Matches, `(?ms).*msg="Block upload" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="TestCase Administrator" user_uuid=zzzzz-tpzed-d9tiejq69daie8f.*`)
+               logbuf.Reset()
        }
 
        {
@@ -377,6 +389,8 @@ func (s *ServerRequiredSuite) TestPutAskGet(c *C) {
                c.Assert(err, Equals, nil)
                c.Check(blocklen, Equals, int64(3))
                c.Log("Finished Ask (expected success)")
+               c.Check(logbuf.String(), Matches, `(?ms).*msg="Block download" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="TestCase Administrator" user_uuid=zzzzz-tpzed-d9tiejq69daie8f.*`)
+               logbuf.Reset()
        }
 
        {
@@ -387,6 +401,8 @@ func (s *ServerRequiredSuite) TestPutAskGet(c *C) {
                c.Check(all, DeepEquals, []byte("foo"))
                c.Check(blocklen, Equals, int64(3))
                c.Log("Finished Get (expected success)")
+               c.Check(logbuf.String(), Matches, `(?ms).*msg="Block download" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="TestCase Administrator" user_uuid=zzzzz-tpzed-d9tiejq69daie8f.*`)
+               logbuf.Reset()
        }
 
        {
@@ -411,7 +427,7 @@ func (s *ServerRequiredSuite) TestPutAskGet(c *C) {
 }
 
 func (s *ServerRequiredSuite) TestPutAskGetForbidden(c *C) {
-       kc := runProxy(c, true, false)
+       kc, _ := runProxy(c, true, false, nil)
        defer closeListener()
 
        hash := fmt.Sprintf("%x+3", md5.Sum([]byte("bar")))
@@ -426,18 +442,116 @@ func (s *ServerRequiredSuite) TestPutAskGetForbidden(c *C) {
 
        blocklen, _, err := kc.Ask(hash)
        c.Check(err, FitsTypeOf, &keepclient.ErrNotFound{})
-       c.Check(err, ErrorMatches, ".*not found.*")
+       c.Check(err, ErrorMatches, ".*HTTP 403.*")
        c.Check(blocklen, Equals, int64(0))
 
        _, blocklen, _, err = kc.Get(hash)
        c.Check(err, FitsTypeOf, &keepclient.ErrNotFound{})
-       c.Check(err, ErrorMatches, ".*not found.*")
+       c.Check(err, ErrorMatches, ".*HTTP 403.*")
        c.Check(blocklen, Equals, int64(0))
+}
+
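+// testPermission starts keepproxy with the given role permissions, then runs a
+// put/get round trip as either the admin or the active user and checks that
+// each step is allowed or refused accordingly.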
+func testPermission(c *C, admin bool, perm arvados.UploadDownloadPermission) {
+       kp := arvados.UploadDownloadRolePermissions{}
+       if admin {
+               kp.Admin = perm
+               kp.User = arvados.UploadDownloadPermission{Upload: true, Download: true}
+       } else {
+               kp.Admin = arvados.UploadDownloadPermission{Upload: true, Download: true}
+               kp.User = perm
+       }
+
+       kc, logbuf := runProxy(c, false, false, &kp)
+       defer closeListener()
+       if admin {
+               kc.Arvados.ApiToken = arvadostest.AdminToken
+       } else {
+               kc.Arvados.ApiToken = arvadostest.ActiveToken
+       }
+
+       hash := fmt.Sprintf("%x", md5.Sum([]byte("foo")))
+       var hash2 string
+
+       {
+               var rep int
+               var err error
+               hash2, rep, err = kc.PutB([]byte("foo"))
+
+               if perm.Upload {
+                       c.Check(hash2, Matches, fmt.Sprintf(`^%s\+3(\+.+)?$`, hash))
+                       c.Check(rep, Equals, 2)
+                       c.Check(err, Equals, nil)
+                       c.Log("Finished PutB (expected success)")
+                       if admin {
+                               c.Check(logbuf.String(), Matches, `(?ms).*msg="Block upload" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="TestCase Administrator" user_uuid=zzzzz-tpzed-d9tiejq69daie8f.*`)
+                       } else {
+                               c.Check(logbuf.String(), Matches, `(?ms).*msg="Block upload" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="Active User" user_uuid=zzzzz-tpzed-xurymjxw79nv3jz.*`)
+                       }
+               } else {
+                       c.Check(hash2, Equals, "")
+                       c.Check(rep, Equals, 0)
+                       c.Check(err, FitsTypeOf, keepclient.InsufficientReplicasError(errors.New("")))
+               }
+               logbuf.Reset()
+       }
+       if perm.Upload {
+               // can't test download without upload.
+
+               reader, blocklen, _, err := kc.Get(hash2)
+               if perm.Download {
+                       c.Assert(err, Equals, nil)
+                       all, err := ioutil.ReadAll(reader)
+                       c.Check(err, IsNil)
+                       c.Check(all, DeepEquals, []byte("foo"))
+                       c.Check(blocklen, Equals, int64(3))
+                       c.Log("Finished Get (expected success)")
+                       if admin {
+                               c.Check(logbuf.String(), Matches, `(?ms).*msg="Block download" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="TestCase Administrator" user_uuid=zzzzz-tpzed-d9tiejq69daie8f.*`)
+                       } else {
+                               c.Check(logbuf.String(), Matches, `(?ms).*msg="Block download" locator=acbd18db4cc2f85cedef654fccc4a4d8\+3 user_full_name="Active User" user_uuid=zzzzz-tpzed-xurymjxw79nv3jz.*`)
+                       }
+               } else {
+                       c.Check(err, FitsTypeOf, &keepclient.ErrNotFound{})
+                       c.Check(err, ErrorMatches, ".*Missing or invalid Authorization header, or method not allowed.*")
+                       c.Check(blocklen, Equals, int64(0))
+               }
+               logbuf.Reset()
+       }
 
 }
 
+func (s *ServerRequiredSuite) TestPutGetPermission(c *C) {
+
+       for _, adminperm := range []bool{true, false} {
+               for _, userperm := range []bool{true, false} {
+
+                       testPermission(c, true,
+                               arvados.UploadDownloadPermission{
+                                       Upload:   adminperm,
+                                       Download: true,
+                               })
+                       testPermission(c, true,
+                               arvados.UploadDownloadPermission{
+                                       Upload:   true,
+                                       Download: adminperm,
+                               })
+                       testPermission(c, false,
+                               arvados.UploadDownloadPermission{
+                                       Upload:   userperm,
+                                       Download: true,
+                               })
+                       testPermission(c, false,
+                               arvados.UploadDownloadPermission{
+                                       Upload:   true,
+                                       Download: userperm,
+                               })
+               }
+       }
+}
+
 func (s *ServerRequiredSuite) TestCorsHeaders(c *C) {
-       runProxy(c, false, false)
+       runProxy(c, false, false, nil)
        defer closeListener()
 
        {
@@ -468,7 +582,7 @@ func (s *ServerRequiredSuite) TestCorsHeaders(c *C) {
 }
 
 func (s *ServerRequiredSuite) TestPostWithoutHash(c *C) {
-       runProxy(c, false, false)
+       runProxy(c, false, false, nil)
        defer closeListener()
 
        {
@@ -526,7 +640,7 @@ func (s *ServerRequiredConfigYmlSuite) TestGetIndex(c *C) {
 }
 
 func getIndexWorker(c *C, useConfig bool) {
-       kc := runProxy(c, false, useConfig)
+       kc, _ := runProxy(c, false, useConfig, nil)
        defer closeListener()
 
        // Put "index-data" blocks
@@ -589,7 +703,7 @@ func getIndexWorker(c *C, useConfig bool) {
 }
 
 func (s *ServerRequiredSuite) TestCollectionSharingToken(c *C) {
-       kc := runProxy(c, false, false)
+       kc, _ := runProxy(c, false, false, nil)
        defer closeListener()
        hash, _, err := kc.PutB([]byte("shareddata"))
        c.Check(err, IsNil)
@@ -602,7 +716,7 @@ func (s *ServerRequiredSuite) TestCollectionSharingToken(c *C) {
 }
 
 func (s *ServerRequiredSuite) TestPutAskGetInvalidToken(c *C) {
-       kc := runProxy(c, false, false)
+       kc, _ := runProxy(c, false, false, nil)
        defer closeListener()
 
        // Put a test block
@@ -630,16 +744,16 @@ func (s *ServerRequiredSuite) TestPutAskGetInvalidToken(c *C) {
                        _, _, _, err = kc.Get(hash)
                        c.Assert(err, FitsTypeOf, &keepclient.ErrNotFound{})
                        c.Check(err.(*keepclient.ErrNotFound).Temporary(), Equals, false)
-                       c.Check(err, ErrorMatches, ".*HTTP 403 \"Missing or invalid Authorization header\".*")
+                       c.Check(err, ErrorMatches, ".*HTTP 403 \"Missing or invalid Authorization header, or method not allowed\".*")
                }
 
                _, _, err = kc.PutB([]byte("foo"))
-               c.Check(err, ErrorMatches, ".*403.*Missing or invalid Authorization header")
+               c.Check(err, ErrorMatches, ".*403.*Missing or invalid Authorization header, or method not allowed")
        }
 }
 
 func (s *ServerRequiredSuite) TestAskGetKeepProxyConnectionError(c *C) {
-       kc := runProxy(c, false, false)
+       kc, _ := runProxy(c, false, false, nil)
        defer closeListener()
 
        // Point keepproxy at a non-existent keepstore
@@ -665,7 +779,7 @@ func (s *ServerRequiredSuite) TestAskGetKeepProxyConnectionError(c *C) {
 }
 
 func (s *NoKeepServerSuite) TestAskGetNoKeepServerError(c *C) {
-       kc := runProxy(c, false, false)
+       kc, _ := runProxy(c, false, false, nil)
        defer closeListener()
 
        hash := fmt.Sprintf("%x", md5.Sum([]byte("foo")))
@@ -688,10 +802,11 @@ func (s *NoKeepServerSuite) TestAskGetNoKeepServerError(c *C) {
 }
 
 func (s *ServerRequiredSuite) TestPing(c *C) {
-       kc := runProxy(c, false, false)
+       kc, _ := runProxy(c, false, false, nil)
        defer closeListener()
 
-       rtr := MakeRESTRouter(kc, 10*time.Second, arvadostest.ManagementToken)
+       rtr, err := MakeRESTRouter(kc, 10*time.Second, &arvados.Cluster{ManagementToken: arvadostest.ManagementToken}, log.New())
+       c.Assert(err, check.IsNil)
 
        req, err := http.NewRequest("GET",
                "http://"+listener.Addr().String()+"/_health/ping",
index 92d4e70881460b335bc1444c5bd8bb7e1f8d695e..c285d53cacde9fd81781dfde4bbd1a43566db30c 100644 (file)
@@ -5,7 +5,7 @@
 FROM arvados/arvbox-base
 ARG arvados_version
 ARG composer_version=arvados-fork
-ARG workbench2_version=master
+ARG workbench2_version=main
 
 RUN cd /usr/src && \
     git clone --no-checkout https://git.arvados.org/arvados.git && \
index c60c15bfc53887e45cfeb84dd8f119aedcf544ee..698367b8a6766d399b976c8632c7cc80c3b3a6bc 100755 (executable)
@@ -101,7 +101,7 @@ fi
 if ! test -d $ARVADOS_CONTAINER_PATH/git/repositories/$repo_uuid.git ; then
     git clone --bare /usr/src/arvados $ARVADOS_CONTAINER_PATH/git/repositories/$repo_uuid.git
 else
-    git --git-dir=$ARVADOS_CONTAINER_PATH/git/repositories/$repo_uuid.git fetch -f /usr/src/arvados master:master
+    git --git-dir=$ARVADOS_CONTAINER_PATH/git/repositories/$repo_uuid.git fetch -f /usr/src/arvados main:main
 fi
 
 cd /usr/src/arvados/services/api
index 466d41d423490f30e0b2a4f9b4f06fd47d5b08f4..fb1473def250dea3405890a54de90070d248fae0 100644 (file)
@@ -30,7 +30,7 @@ arvados_test_salt_states_examples_single_host_snakeoil_certs_dependencies_pkg_in
       - ca-certificates
 
 arvados_test_salt_states_examples_single_host_snakeoil_certs_arvados_snake_oil_ca_cmd_run:
-  # Taken from https://github.com/arvados/arvados/blob/master/tools/arvbox/lib/arvbox/docker/service/certificate/run
+  # Taken from https://github.com/arvados/arvados/blob/main/tools/arvbox/lib/arvbox/docker/service/certificate/run
   cmd.run:
     - name: |
         # These dirs are not too CentOS-ish, but this is a helper script
index d88adbc5366ab93a25c44355126865551c052796..130fb5e937affe145b06c9f75b0ec2f6540003c8 100644 (file)
@@ -30,7 +30,7 @@ arvados_test_salt_states_examples_single_host_snakeoil_certs_dependencies_pkg_in
       - ca-certificates
 
 arvados_test_salt_states_examples_single_host_snakeoil_certs_arvados_snake_oil_ca_cmd_run:
-  # Taken from https://github.com/arvados/arvados/blob/master/tools/arvbox/lib/arvbox/docker/service/certificate/run
+  # Taken from https://github.com/arvados/arvados/blob/main/tools/arvbox/lib/arvbox/docker/service/certificate/run
   cmd.run:
     - name: |
         # These dirs are not too CentOS-ish, but this is a helper script
index f5e40ff153f92889f6293398e7bc2350c3356561..17b7b888846fca194a04f60af829dd5ee271a4e5 100644 (file)
@@ -82,6 +82,7 @@ LE_AWS_SECRET_ACCESS_KEY="thisistherandomstringthatisyoursecretkey"
 # Extra states to apply. If you use your own subdir, change this value accordingly
 # EXTRA_STATES_DIR="${CONFIG_DIR}/states"
 
+# These are ARVADOS-related settings.
 # Which release of Arvados repo you want to use
 RELEASE="production"
 # Which version of Arvados you want to install. Defaults to latest stable
@@ -90,13 +91,13 @@ RELEASE="production"
 # This is an arvados-formula setting.
 # If branch is set, the script will switch to it before running salt
 # Usually not needed, only used for testing
-# BRANCH="master"
+# BRANCH="main"
 
 ##########################################################
 # Usually there's no need to modify things below this line
 
 # Formulas versions
-# ARVADOS_TAG="v1.1.4"
+# ARVADOS_TAG="2.2.0"
 # POSTGRES_TAG="v0.41.6"
 # NGINX_TAG="temp-fix-missing-statements-in-pillar"
 # DOCKER_TAG="v1.0.0"
index 6dd47722c19d67ef3b0c49a0be7976cb3fcae63d..ae54e7437a83db83b7373eaa6ef87d70aa31e8b5 100644 (file)
@@ -54,6 +54,7 @@ USE_LETSENCRYPT="no"
 # Extra states to apply. If you use your own subdir, change this value accordingly
 # EXTRA_STATES_DIR="${CONFIG_DIR}/states"
 
+# These are ARVADOS-related settings.
 # Which release of Arvados repo you want to use
 RELEASE="production"
 # Which version of Arvados you want to install. Defaults to latest stable
@@ -62,13 +63,13 @@ RELEASE="production"
 # This is an arvados-formula setting.
 # If branch is set, the script will switch to it before running salt
 # Usually not needed, only used for testing
-# BRANCH="master"
+# BRANCH="main"
 
 ##########################################################
 # Usually there's no need to modify things below this line
 
 # Formulas versions
-# ARVADOS_TAG="v1.1.4"
+# ARVADOS_TAG="2.2.0"
 # POSTGRES_TAG="v0.41.6"
 # NGINX_TAG="temp-fix-missing-statements-in-pillar"
 # DOCKER_TAG="v1.0.0"
index fda42a9c745ad5f7feb3e219ea85bd29b681c503..a35bd45bffc258d7c3a8dd4b59eb564bfc13c4b8 100644 (file)
@@ -63,6 +63,7 @@ USE_LETSENCRYPT="no"
 # Extra states to apply. If you use your own subdir, change this value accordingly
 # EXTRA_STATES_DIR="${CONFIG_DIR}/states"
 
+# These are ARVADOS-related settings.
 # Which release of Arvados repo you want to use
 RELEASE="production"
 # Which version of Arvados you want to install. Defaults to latest stable
@@ -71,13 +72,13 @@ RELEASE="production"
 # This is an arvados-formula setting.
 # If branch is set, the script will switch to it before running salt
 # Usually not needed, only used for testing
-# BRANCH="master"
+# BRANCH="main"
 
 ##########################################################
 # Usually there's no need to modify things below this line
 
 # Formulas versions
-# ARVADOS_TAG="v1.1.4"
+# ARVADOS_TAG="2.2.0"
 # POSTGRES_TAG="v0.41.6"
 # NGINX_TAG="temp-fix-missing-statements-in-pillar"
 # DOCKER_TAG="v1.0.0"
index 5cd376844dd5ca7ae5f6a198e1bf41a494d374c7..7ac120e5fd89179f75fcf13608679edfaa2b45e5 100755 (executable)
@@ -1,4 +1,4 @@
-#!/bin/bash -x
+#!/usr/bin/env bash
 
 # Copyright (C) The Arvados Authors. All rights reserved.
 #
@@ -21,7 +21,6 @@ usage() {
   echo >&2
   echo >&2 "${0} options:"
   echo >&2 "  -d, --debug                                 Run salt installation in debug mode"
-  echo >&2 "  -p <N>, --ssl-port <N>                      SSL port to use for the web applications"
   echo >&2 "  -c <local.params>, --config <local.params>  Path to the local.params config file"
   echo >&2 "  -t, --test                                  Test installation running a CWL workflow"
   echo >&2 "  -r, --roles                                 List of Arvados roles to apply to the host, comma separated"
@@ -39,17 +38,35 @@ usage() {
   echo >&2 "                                                workbench2"
   echo >&2 "                                              Defaults to applying them all"
   echo >&2 "  -h, --help                                  Display this help and exit"
+  echo >&2 "  --dump-config <dest_dir>                    Dumps the pillars and states to a directory"
+  echo >&2 "                                              This parameter does not perform any installation at all. It's"
+  echo >&2 "                                              intended to give you a parsed sot of configuration files so"
+  echo >&2 "                                              you can inspect them or use them in you Saltstack infrastructure."
+  echo >&2 "                                              It"
+  echo >&2 "                                                - parses the pillar and states templates,"
+  echo >&2 "                                                - downloads the helper formulas with their desired versions,"
+  echo >&2 "                                                - prepares the 'top.sls' files both for pillars and states"
+  echo >&2 "                                                  for the selected role/s"
+  echo >&2 "                                                - writes the resulting files into <dest_dir>"
   echo >&2 "  -v, --vagrant                               Run in vagrant and use the /vagrant shared dir"
   echo >&2
 }
 
 arguments() {
   # NOTE: This requires GNU getopt (part of the util-linux package on Debian-based distros).
+  if ! which getopt > /dev/null; then
+    echo >&2 "GNU getopt is required to run this script. Please install it and re-reun it"
+    exit 1
+  fi
+
   TEMP=$(getopt -o c:dhp:r:tv \
-    --long config:,debug,help,ssl-port:,roles:,test,vagrant \
+    --long config:,debug,dump-config:,help,roles:,test,vagrant \
     -n "${0}" -- "${@}")
 
-  if [ ${?} != 0 ] ; then echo "GNU getopt missing? Use -h for help"; exit 1 ; fi
+  if [ ${?} != 0 ];
+    then echo "Please check the parameters you entered and re-run again"
+    exit 1
+  fi
   # Note the quotes around `$TEMP': they are essential!
   eval set -- "$TEMP"
 
@@ -62,9 +79,23 @@ arguments() {
       -d | --debug)
         LOG_LEVEL="debug"
         shift
+        set -x
         ;;
-      -p | --ssl-port)
-        CONTROLLER_EXT_SSL_PORT=${2}
+      --dump-config)
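+        # Normalize the destination to an absolute path so the state/pillar/test dirs derived below resolve correctly.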
+        if [[ ${2} = /* ]]; then
+          DUMP_SALT_CONFIG_DIR=${2}
+        else
+          DUMP_SALT_CONFIG_DIR=${PWD}/${2}
+        fi
+        ## states
+        S_DIR="${DUMP_SALT_CONFIG_DIR}/salt"
+        ## formulas
+        F_DIR="${DUMP_SALT_CONFIG_DIR}/formulas"
+        ## pillars
+        P_DIR="${DUMP_SALT_CONFIG_DIR}/pillars"
+        ## tests
+        T_DIR="${DUMP_SALT_CONFIG_DIR}/tests"
+        DUMP_CONFIG="yes"
         shift 2
         ;;
       -r | --roles)
@@ -102,6 +133,7 @@ arguments() {
 
 CONFIG_FILE="${SCRIPT_DIR}/local.params"
 CONFIG_DIR="local_config_dir"
+DUMP_CONFIG="no"
 LOG_LEVEL="info"
 CONTROLLER_EXT_SSL_PORT=443
 TESTS_DIR="tests"
@@ -127,44 +159,54 @@ WEBSOCKET_EXT_SSL_PORT=8002
 WORKBENCH1_EXT_SSL_PORT=443
 WORKBENCH2_EXT_SSL_PORT=3001
 
+## These are ARVADOS-related parameters
 # For a stable release, change RELEASE "production" and VERSION to the
 # package version (including the iteration, e.g. X.Y.Z-1) of the
 # release.
+# The "local.params.example.*" files already set "RELEASE=production"
+# to deploy production-ready packages
 RELEASE="development"
 VERSION="latest"
 
-# The arvados-formula version.  For a stable release, this should be a
+# These are arvados-formula-related parameters
+# An arvados-formula tag. For a stable release, this should be a
 # branch name (e.g. X.Y-dev) or tag for the release.
-ARVADOS_TAG="main"
+# ARVADOS_TAG="2.2.0"
+# BRANCH="main"
 
 # Other formula versions we depend on
 POSTGRES_TAG="v0.41.6"
-NGINX_TAG="v2.7.4"
+NGINX_TAG="temp-fix-missing-statements-in-pillar"
 DOCKER_TAG="v1.0.0"
 LOCALE_TAG="v0.3.4"
 LETSENCRYPT_TAG="v2.1.0"
 
 # Salt's dir
+DUMP_SALT_CONFIG_DIR=""
 ## states
 S_DIR="/srv/salt"
 ## formulas
 F_DIR="/srv/formulas"
-##pillars
+## pillars
 P_DIR="/srv/pillars"
+## tests
+T_DIR="/tmp/cluster_tests"
 
 arguments ${@}
 
 if [ -s ${CONFIG_FILE} ]; then
   source ${CONFIG_FILE}
 else
-  echo >&2 "Please create a '${CONFIG_FILE}' file with initial values, as described in"
+  echo >&2 "You don't seem to have a config file with initial values."
+  echo >&2 "Please create a '${CONFIG_FILE}' file as described in"
   echo >&2 "  * https://doc.arvados.org/install/salt-single-host.html#single_host, or"
   echo >&2 "  * https://doc.arvados.org/install/salt-multi-host.html#multi_host_multi_hostnames"
   exit 1
 fi
 
 if [ ! -d ${CONFIG_DIR} ]; then
-  echo >&2 "Please create a '${CONFIG_DIR}' with initial values, as described in"
+  echo >&2 "You don't seem to have a config directory with pillars and states."
+  echo >&2 "Please create a '${CONFIG_DIR}' directory (as configured in your '${CONFIG_FILE}'). Please see"
   echo >&2 "  * https://doc.arvados.org/install/salt-single-host.html#single_host, or"
   echo >&2 "  * https://doc.arvados.org/install/salt-multi-host.html#multi_host_multi_hostnames"
   exit 1
@@ -176,7 +218,7 @@ if grep -q 'fixme_or_this_wont_work' ${CONFIG_FILE} ; then
   exit 1
 fi
 
-if ! grep -E '^[[:alnum:]]{5}$' <<<${CLUSTER} ; then
+if ! grep -qE '^[[:alnum:]]{5}$' <<<${CLUSTER} ; then
   echo >&2 "ERROR: <CLUSTER> must be exactly 5 alphanumeric characters long"
   echo >&2 "Fix the cluster name in the 'local.params' file and re-run the provision script"
   exit 1
@@ -187,20 +229,23 @@ if [ "x${HOSTNAME_EXT}" = "x" ] ; then
   HOSTNAME_EXT="${CLUSTER}.${DOMAIN}"
 fi
 
-apt-get update
-apt-get install -y curl git jq
-
-if which salt-call; then
-  echo "Salt already installed"
+if [ "${DUMP_CONFIG}" = "yes" ]; then
+  echo "The provision installer will just dump a config under ${DUMP_SALT_CONFIG_DIR} and exit"
 else
-  curl -L https://bootstrap.saltstack.com -o /tmp/bootstrap_salt.sh
-  sh /tmp/bootstrap_salt.sh -XdfP -x python3
-  /bin/systemctl stop salt-minion.service
-  /bin/systemctl disable salt-minion.service
-fi
+  apt-get update
+  apt-get install -y curl git jq
+
+  if which salt-call; then
+    echo "Salt already installed"
+  else
+    curl -L https://bootstrap.saltstack.com -o /tmp/bootstrap_salt.sh
+    sh /tmp/bootstrap_salt.sh -XdfP -x python3
+    /bin/systemctl stop salt-minion.service
+    /bin/systemctl disable salt-minion.service
+  fi
 
-# Set salt to masterless mode
-cat > /etc/salt/minion << EOFSM
+  # Set salt to masterless mode
+  cat > /etc/salt/minion << EOFSM
 file_client: local
 file_roots:
   base:
@@ -211,23 +256,36 @@ pillar_roots:
   base:
     - ${P_DIR}
 EOFSM
+fi
 
-mkdir -p ${S_DIR} ${F_DIR} ${P_DIR}
+mkdir -p ${S_DIR} ${F_DIR} ${P_DIR} ${T_DIR}
 
 # Get the formula and dependencies
 cd ${F_DIR} || exit 1
-git clone --branch "${ARVADOS_TAG}"     https://git.arvados.org/arvados-formula.git
-git clone --branch "${DOCKER_TAG}"      https://github.com/saltstack-formulas/docker-formula.git
-git clone --branch "${LOCALE_TAG}"      https://github.com/saltstack-formulas/locale-formula.git
-git clone --branch "${NGINX_TAG}"       https://github.com/saltstack-formulas/nginx-formula.git
-git clone --branch "${POSTGRES_TAG}"    https://github.com/saltstack-formulas/postgres-formula.git
-git clone --branch "${LETSENCRYPT_TAG}" https://github.com/saltstack-formulas/letsencrypt-formula.git
+echo "Cloning formulas"
+rm -rf ${F_DIR}/* || exit 1
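+# Clone each formula into a fixed directory name and pin it to the tag requested above.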
+git clone --quiet https://github.com/saltstack-formulas/docker-formula.git ${F_DIR}/docker
+( cd docker && git checkout --quiet tags/"${DOCKER_TAG}" -b "${DOCKER_TAG}" )
+
+git clone --quiet https://github.com/saltstack-formulas/locale-formula.git ${F_DIR}/locale
+( cd locale && git checkout --quiet tags/"${LOCALE_TAG}" -b "${LOCALE_TAG}" )
+
+git clone --quiet https://github.com/netmanagers/nginx-formula.git ${F_DIR}/nginx
+( cd nginx && git checkout --quiet tags/"${NGINX_TAG}" -b "${NGINX_TAG}" )
+
+git clone --quiet https://github.com/saltstack-formulas/postgres-formula.git ${F_DIR}/postgres
+( cd postgres && git checkout --quiet tags/"${POSTGRES_TAG}" -b "${POSTGRES_TAG}" )
+
+git clone --quiet https://github.com/saltstack-formulas/letsencrypt-formula.git ${F_DIR}/letsencrypt
+( cd letsencrypt && git checkout --quiet tags/"${LETSENCRYPT_TAG}" -b "${LETSENCRYPT_TAG}" )
+
+git clone --quiet https://git.arvados.org/arvados-formula.git ${F_DIR}/arvados
 
 # If we want to try a specific branch of the formula
 if [ "x${BRANCH}" != "x" ]; then
-  cd ${F_DIR}/arvados-formula || exit 1
-  git checkout -t origin/"${BRANCH}" -b "${BRANCH}"
-  cd -
+  ( cd ${F_DIR}/arvados && git checkout --quiet -t origin/"${BRANCH}" -b "${BRANCH}" )
+elif [ "x${ARVADOS_TAG}" != "x" ]; then
+  ( cd ${F_DIR}/arvados && git checkout --quiet tags/"${ARVADOS_TAG}" -b "${ARVADOS_TAG}" )
 fi
 
 if [ "x${VAGRANT}" = "xyes" ]; then
@@ -242,6 +300,8 @@ fi
 
 SOURCE_STATES_DIR="${EXTRA_STATES_DIR}"
 
+echo "Writing pillars and states"
+
 # Replace variables (cluster,  domain, etc) in the pillars, states and tests
 # to ease deployment for newcomers
 if [ ! -d "${SOURCE_PILLARS_DIR}" ]; then
@@ -293,7 +353,7 @@ if [ "x${TEST}" = "xyes" ] && [ ! -d "${SOURCE_TESTS_DIR}" ]; then
   echo "You requested to run tests, but ${SOURCE_TESTS_DIR} does not exist or is not a directory. Exiting."
   exit 1
 fi
-mkdir -p /tmp/cluster_tests
+mkdir -p ${T_DIR}
 # Replace cluster and domain name in the test files
 for f in $(ls "${SOURCE_TESTS_DIR}"/*); do
   sed "s#__CLUSTER__#${CLUSTER}#g;
@@ -305,9 +365,9 @@ for f in $(ls "${SOURCE_TESTS_DIR}"/*); do
        s#__INITIAL_USER__#${INITIAL_USER}#g;
        s#__DATABASE_PASSWORD__#${DATABASE_PASSWORD}#g;
        s#__SYSTEM_ROOT_TOKEN__#${SYSTEM_ROOT_TOKEN}#g" \
-  "${f}" > "/tmp/cluster_tests"/$(basename "${f}")
+  "${f}" > ${T_DIR}/$(basename "${f}")
 done
-chmod 755 /tmp/cluster_tests/run-test.sh
+chmod 755 ${T_DIR}/run-test.sh
 
 # Replace helper state files that differ from the formula's examples
 if [ -d "${SOURCE_STATES_DIR}" ]; then
@@ -499,6 +559,11 @@ else
   done
 fi
 
+if [ "${DUMP_CONFIG}" = "yes" ]; then
+  # We won't run the rest of the script because we're just dumping the config
+  exit 0
+fi
+
 # FIXME! #16992 Temporary fix for psql call in arvados-api-server
 if [ -e /root/.psqlrc ]; then
   if ! ( grep 'pset pager off' /root/.psqlrc ); then
@@ -541,6 +606,6 @@ fi
 
 # Test that the installation finished correctly
 if [ "x${TEST}" = "xyes" ]; then
-  cd /tmp/cluster_tests
+  cd ${T_DIR}
   ./run-test.sh
 fi
index 959f16d8985f0ccaa4aad88449db0b40e6dbe698..997da57e052db81a25306507b23b3f60935b129e 100755 (executable)
@@ -41,6 +41,13 @@ def getuserinfo(arv, uuid):
                                                        arv.config()["Services"]["Workbench1"]["ExternalURL"],
                                                        uuid, prof)
 
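+# Cache collection names looked up by getCollectionName() below, so repeated
+# download/upload events for the same collection only cost one API call.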
+collectionNameCache = {}
+def getCollectionName(arv, uuid):
+    if uuid not in collectionNameCache:
+        u = arv.collections().get(uuid=uuid).execute()
+        collectionNameCache[uuid] = u["name"]
+    return collectionNameCache[uuid]
+
 def getname(u):
     return "\"%s\" (%s)" % (u["name"], u["uuid"])
 
@@ -137,6 +144,19 @@ def main(arguments=None):
             else:
                 users[owner].append("%s Deleted collection %s %s" % (event_at, getname(e["properties"]["old_attributes"]), loguuid))
 
+        elif e["event_type"] == "file_download":
+            users[e["object_uuid"]].append("%s Downloaded file \"%s\" from \"%s\" (%s) (%s)" % (event_at,
+                                           e["properties"].get("collection_file_path") or e["properties"].get("reqPath"),
+                                           getCollectionName(arv, e["properties"].get("collection_uuid")),
+                                           e["properties"].get("collection_uuid"),
+                                           e["properties"].get("portable_data_hash")))
+
+        elif e["event_type"] == "file_upload":
+            users[e["object_uuid"]].append("%s Uploaded file \"%s\" to \"%s\" (%s)" % (event_at,
+                                           e["properties"].get("collection_file_path") or e["properties"].get("reqPath"),
+                                           getCollectionName(arv, e["properties"].get("collection_uuid")),
+                                           e["properties"].get("collection_uuid")))
+
         else:
             users[owner].append("%s %s %s %s" % (e["event_type"], e["object_kind"], e["object_uuid"], loguuid))