Merge branch '20825-cwl-separate-runner' refs #20825
author    Peter Amstutz <peter.amstutz@curii.com>
          Thu, 19 Oct 2023 18:44:52 +0000 (14:44 -0400)
committer Peter Amstutz <peter.amstutz@curii.com>
          Thu, 19 Oct 2023 18:44:52 +0000 (14:44 -0400)
Arvados-DCO-1.1-Signed-off-by: Peter Amstutz <peter.amstutz@curii.com>

44 files changed:
doc/admin/user-management-cli.html.textile.liquid
doc/admin/user-management.html.textile.liquid
doc/user/topics/arv-copy.html.textile.liquid
lib/install/deps.go
sdk/cwl/arvados_cwl/__init__.py
sdk/cwl/arvados_cwl/arvcontainer.py
sdk/cwl/arvados_cwl/arvtool.py
sdk/cwl/arvados_cwl/arvworkflow.py
sdk/cwl/arvados_cwl/context.py
sdk/cwl/arvados_cwl/executor.py
sdk/cwl/arvados_cwl/pathmapper.py
sdk/cwl/arvados_cwl/runner.py
sdk/cwl/setup.py
sdk/cwl/tests/test_container.py
sdk/cwl/tests/test_submit.py
sdk/cwl/tests/tool/submit_tool_map.cwl [new file with mode: 0644]
sdk/cwl/tests/wf/expect_upload_wrapper_map.cwl [new file with mode: 0644]
sdk/cwl/tests/wf/submit_wf_map.cwl [new file with mode: 0644]
sdk/python/arvados/collection.py
sdk/python/arvados/commands/arv_copy.py
sdk/python/arvados/http_to_keep.py
sdk/python/arvados/util.py
sdk/python/tests/test_http.py
services/api/app/models/group.rb
services/api/db/migrate/20231013000000_compute_permission_index.rb [new file with mode: 0644]
services/api/db/structure.sql
services/api/lib/update_permissions.rb
services/workbench2/package.json
services/workbench2/src/components/dropdown-menu/dropdown-menu.tsx
services/workbench2/src/views-components/baner/banner.tsx
services/workbench2/src/views-components/main-app-bar/notifications-menu.tsx
services/workbench2/yarn.lock
tools/salt-install/config_examples/multi_host/aws/pillars/nginx_workbench_configuration.sls
tools/salt-install/config_examples/multi_host/aws/states/custom_certs.sls
tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_keepproxy_configuration.sls
tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_keepweb_configuration.sls
tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_webshell_configuration.sls
tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_websocket_configuration.sls
tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_workbench_configuration.sls
tools/salt-install/config_examples/single_host/multiple_hostnames/states/custom_certs.sls
tools/salt-install/config_examples/single_host/multiple_hostnames/states/snakeoil_certs.sls
tools/salt-install/config_examples/single_host/single_hostname/states/custom_certs.sls
tools/salt-install/config_examples/single_host/single_hostname/states/snakeoil_certs.sls
tools/salt-install/provision.sh

index 949ce6a5527a6a763aace11943bf19fb61f6b631..c2d4743ddfdf5b58372ac9b31dfff9452eb2db26 100644 (file)
@@ -40,7 +40,7 @@ h3. Deactivate user
 
 When deactivating a user, you may also want to "reassign ownership of their data":{{site.baseurl}}/admin/reassign-ownership.html .
 
-h3. Directly activate user
+h3(#activate-user). Directly activate user
 
 <notextile>
 <pre><code>$ <span class="userinput">arv user update --uuid "zzzzz-tpzed-3kz0nwtjehhl0u4" --user '{"is_active":true}'</span>
index 296660d01bda247653b68958a0b9f67f15aa5d24..7d30ee88d1e70cbca7eb046e967e337abd154ac0 100644 (file)
@@ -10,13 +10,28 @@ Copyright (C) The Arvados Authors. All rights reserved.
 SPDX-License-Identifier: CC-BY-SA-3.0
 {% endcomment %}
 
+# "Authentication":#authentication
+## "Federated Authentication":#federated_auth
+# "User activation":#user_activation
+# "User agreements and self-activation":#user_agreements
+# "User profile":#user_profile
+# "User visibility":#user_visibility
+# "Pre-setup user by email address":#pre-activated
+# "Pre-activate federated user":#pre-activated-fed
+# "Auto-setup federated users from trusted clusters":#auto_setup_federated
+# "Activation flows":#activation_flows
+## "Private instance":#activation_flow_private
+## "Federated instance":#federated
+## "Open instance":#activation_flow_open
+# "Service Accounts":#service_accounts
+
 {% comment %}
 TODO: Link to relevant workbench documentation when it gets written
 {% endcomment %}
 
 This page describes how user accounts are created, set up and activated.
 
-h2. Authentication
+h2(#authentication). Authentication
 
 "Browser login and management of API tokens is described here.":{{site.baseurl}}/api/tokens.html
 
@@ -30,11 +45,11 @@ If no user account is found, a new user account is created with the information
 
 If a user account has been "linked":{{site.baseurl}}/user/topics/link-accounts.html or "migrated":merge-remote-account.html the API server may follow internal redirects (@redirect_to_user_uuid@) to select the linked or migrated user account.
 
-h3. Federated Authentication
+h3(#federated_auth). Federated Authentication
 
 A federated user follows a slightly different flow.  The client presents a token issued by the remote cluster.  The local API server contacts the remote cluster to verify the user's identity.  This results in a user object (representing the remote user) being created on the local cluster.  If the user cannot be verified, the token will be rejected.  If the user is inactive on the remote cluster, a user record will be created, but it will also be inactive.
 
-h2. User activation
+h2(#user_activation). User activation
 
 This section describes the different user account states.
 
@@ -94,13 +109,13 @@ The @user_agreements/sign@ endpoint creates a Link object:
 
 The @user_agreements/signatures@ endpoint returns the list of Link objects that represent signatures by the current user (created by @sign@).
 
-h2. User profile
+h2(#user_profile). User profile
 
 The fields making up the user profile are described in @Workbench.UserProfileFormFields@ .  See "Configuration reference":config.html .
 
 The user profile is checked by workbench after checking if user agreements need to be signed.  The values entered are stored in the @properties@ field on the user object.  Unlike user agreements, the requirement to fill out the user profile is not enforced by the API server.
 
-h2. User visibility
+h2(#user_visibility). User visibility
 
 Initially, a user is not part of any groups and will not be able to interact with other users on the system.  The admin should determine who the user is permitted to interact with and use Workbench or the "command line":group-management.html#add to create and add the user to the appropriate group(s).
 
@@ -118,7 +133,7 @@ $ arv user setup --uuid clsr1-tpzed-1234567890abcdf
 
 2. When the user logs in the first time, the email address will be recognized and the user will be associated with the existing user object.
 
-h2. Pre-activate federated user
+h2(#pre-activated-fed). Pre-activate federated user
 
 1. As admin, create a user object with the @uuid@ of the federated user (this is the user's uuid on their home cluster, called @clsr2@ in this example):
 
@@ -128,13 +143,13 @@ $ arv user create --user '{"uuid": "clsr2-tpzed-1234567890abcdf", "email": "foo@
 
 2. When the user logs in, they will be associated with the existing user object.
 
-h2. Auto-setup federated users from trusted clusters
+h2(#auto_setup_federated). Auto-setup federated users from trusted clusters
 
 By setting @ActivateUsers: true@ for each federated cluster in @RemoteClusters@, a federated user from one of the listed clusters will be automatically set up and activated on this cluster.  See configuration example in "Federated instance":#federated .
 
-h2. Activation flows
+h2(#activation_flows). Activation flows
 
-h3. Private instance
+h3(#activation_flow_private). Private instance
 
 Policy: users must be manually set up by the admin.
 
@@ -171,7 +186,7 @@ RemoteClusters:
 # Because 'clsr2' has @ActivateUsers@ the user is set up and activated.
 # User can immediately start using Workbench.
 
-h3. Open instance
+h3(#activation_flow_open). Open instance
 
 Policy: anybody who shows up and signs the agreements is activated.
 
@@ -187,3 +202,11 @@ Users:
 # Workbench presents user with list of user agreements, user reads and clicks "sign" for each one.
 # Workbench tries to activate user.
 # User is activated.
+
+h2(#service_accounts). Service Accounts
+
+For automation purposes, you can create service accounts that aren't tied to an external authorization system. These accounts don't differ much from standard user accounts; they just cannot be accessed through a normal login mechanism.
+
+As an admin, you can create accounts as described in the "user pre-setup section above":#pre-activated and then "activate them by updating their @is_active@ field":{{site.baseurl}}/admin/user-management-cli.html#activate-user.
+
+Once a service account is created, you can "use an admin account to set up a token":{{site.baseurl}}/admin/user-management-cli.html#create-token for it, so that the required automations can authenticate. If your security policies require it, these tokens can be given a limited lifetime using the @expires_at@ field, as well as a "limited scope":{{site.baseurl}}/admin/scoped-tokens.html. You can read more about tokens at "the API reference page":{{site.baseurl}}/api/methods/api_client_authorizations.html.
\ No newline at end of file
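+
+For example, a token for a service account might be created like this (the uuid and expiration date below are illustrative):
+
+<notextile>
+<pre><code>$ <span class="userinput">arv api_client_authorization create --api-client-authorization '{"owner_uuid": "zzzzz-tpzed-1234567890abcdf", "expires_at": "2024-12-31T23:59:59Z"}'</span>
+</code></pre>
+</notextile>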
index 15c9623224dd9440703b883c8d97dff2b97fab0c..a05620d62d7502d54a17d8e03df094d05810c64f 100644 (file)
@@ -15,7 +15,7 @@ This tutorial describes how to copy Arvados objects from one cluster to another
 
 h2. arv-copy
 
-@arv-copy@ allows users to copy collections, workflow definitions and projects from one cluster to another.
+@arv-copy@ allows users to copy collections, workflow definitions and projects from one cluster to another.  You can also use @arv-copy@ to import resources from HTTP URLs into Keep.
 
 For projects, @arv-copy@ will copy all the collections and workflow definitions owned by the project, and recursively copy subprojects.
 
@@ -71,10 +71,14 @@ Additionally, if you need to specify the storage classes where to save the copie
 
 h3. How to copy a workflow
 
+Copying workflows requires @arvados-cwl-runner@ to be available in your @$PATH@.
+
 We will use the uuid @jutro-7fd4e-mkmmq53m1ze6apx@ as an example workflow.
 
+@arv-copy@ will infer that the source cluster is @jutro@ from the object uuid, and that the destination cluster is @pirca@ from @--project-uuid@.
+
 <notextile>
-<pre><code>~$ <span class="userinput">arv-copy --src jutro --dst pirca --project-uuid pirca-j7d0g-ecak8knpefz8ere jutro-7fd4e-mkmmq53m1ze6apx</span>
+<pre><code>~$ <span class="userinput">arv-copy --project-uuid pirca-j7d0g-ecak8knpefz8ere jutro-7fd4e-mkmmq53m1ze6apx</span>
 ae480c5099b81e17267b7445e35b4bc7+180: 23M / 23M 100.0%
 2463fa9efeb75e099685528b3b9071e0+438: 156M / 156M 100.0%
 jutro-4zz18-vvvqlops0a0kpdl: 94M / 94M 100.0%
@@ -91,8 +95,10 @@ h3. How to copy a project
 
 We will use the uuid @jutro-j7d0g-xj19djofle3aryq@ as an example project.
 
+@arv-copy@ will infer that the source cluster is @jutro@ from the source project uuid, and that the destination cluster is @pirca@ from @--project-uuid@.
+
 <notextile>
-<pre><code>~$ <span class="userinput">peteramstutz@shell:~$ arv-copy --project-uuid pirca-j7d0g-lr8sq3tx3ovn68k jutro-j7d0g-xj19djofle3aryq
+<pre><code>~$ <span class="userinput">arv-copy --project-uuid pirca-j7d0g-lr8sq3tx3ovn68k jutro-j7d0g-xj19djofle3aryq</span>
 2021-09-08 21:29:32 arvados.arv-copy[6377] INFO:
 2021-09-08 21:29:32 arvados.arv-copy[6377] INFO: Success: created copy with uuid pirca-j7d0g-ig9gvu5piznducp
 </code></pre>
@@ -101,3 +107,23 @@ We will use the uuid @jutro-j7d0g-xj19djofle3aryq@ as an example project.
 The name and description of the original project will be used for the destination copy.  If a project already exists with the same name, collections and workflow definitions will be copied into the project with the same name.
 
+If you would like to copy the project but not its subprojects, you can use the @--no-recursive@ flag.
+
+h3. Importing HTTP resources to Keep
+
+You can also use @arv-copy@ to copy the contents of an HTTP URL into Keep.  When you do this, Arvados keeps track of the original URL the resource came from.  This allows you to refer to the resource by its original URL in workflow inputs, while actually reading from the local copy in Keep.
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv-copy --project-uuid tordo-j7d0g-lr8sq3tx3ovn68k https://example.com/index.html</span>
+tordo-4zz18-dhpb6y9km2byb94
+2023-10-06 10:15:36 arvados.arv-copy[374147] INFO: Success: created copy with uuid tordo-4zz18-dhpb6y9km2byb94
+</code></pre>
+</notextile>
+
+In addition, when importing from HTTP URLs, you may use @--src@ to specify a cluster other than the destination. This tells @arv-copy@ to search that cluster for a collection associated with the URL and, if one is found, copy the collection from there instead of downloading from the original URL.
+
+The following @arv-copy@ command line options affect the behavior of HTTP import.
+
+table(table table-bordered table-condensed).
+|_. Option |_. Description |
+|==--varying-url-params== VARYING_URL_PARAMS|A comma-separated list of URL query parameters that should be ignored when storing HTTP URLs in Keep.|
+|==--prefer-cached-downloads==|If an HTTP URL is already stored in Keep, skip the upstream URL freshness check (the import will not notice if the upstream content has changed, but it will also not fail if the upstream is unavailable).|
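+
+For example, if the source URL carries a signature query parameter that changes on every request, an import might look like this (the parameter name @Signature@ is illustrative):
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv-copy --project-uuid tordo-j7d0g-lr8sq3tx3ovn68k --varying-url-params Signature --prefer-cached-downloads https://example.com/index.html</span>
+</code></pre>
+</notextile>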
index 263f0180976c4e55d05f1f6f992cb84d4d556d41..c4f104f30ac142055dcd94a5b724f1ef7e453b78 100644 (file)
@@ -231,7 +231,7 @@ func (inst *installCommand) RunCommand(prog string, args []string, stdin io.Read
        }
 
        if dev || test {
-               if havedockerversion, err := exec.Command("docker", "--version").CombinedOutput(); err == nil {
+               if havedockerversion, err2 := exec.Command("docker", "--version").CombinedOutput(); err2 == nil {
                        logger.Printf("%s installed, assuming that version is ok", bytes.TrimSuffix(havedockerversion, []byte("\n")))
                } else if osv.Debian {
                        var codename string
@@ -240,6 +240,8 @@ func (inst *installCommand) RunCommand(prog string, args []string, stdin io.Read
                                codename = "buster"
                        case 11:
                                codename = "bullseye"
+                       case 12:
+                               codename = "bookworm"
                        default:
                                err = fmt.Errorf("don't know how to install docker-ce for debian %d", osv.Major)
                                return 1
@@ -261,13 +263,17 @@ DEBIAN_FRONTEND=noninteractive apt-get --yes --no-install-recommends install doc
                }
 
                err = inst.runBash(`
-add="fs.inotify.max_user_watches=524288"
-if ! grep -F -- "$add" /etc/sysctl.conf; then
-    echo "$add" | tee -a /etc/sysctl.conf
+key=fs.inotify.max_user_watches
+min=524288
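+# raise the limit only if the current value is below the required minimum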
+if [[ "$(sysctl --values "${key}")" -lt "${min}" ]]; then
+    sysctl "${key}=${min}"
+    # writing sysctl worked, so we should make it permanent
+    echo "${key}=${min}" | tee -a /etc/sysctl.conf
     sysctl -p
 fi
 `, stdout, stderr)
                if err != nil {
+                       err = fmt.Errorf("couldn't set fs.inotify.max_user_watches value. (Is this a docker container? Fix this on the docker host by adding fs.inotify.max_user_watches=524288 to /etc/sysctl.conf and running `sysctl -p`)")
                        return 1
                }
        }
index 7968fb1e2b2900452e2535e30dad39e285ae83c4..fd3b7a5d16b6e62909f6b1391ead5198aafe01bf 100644 (file)
@@ -123,6 +123,8 @@ def arg_parser():  # type: () -> argparse.ArgumentParser
     exgroup.add_argument("--create-workflow", action="store_true", help="Register an Arvados workflow that can be run from Workbench")
     exgroup.add_argument("--update-workflow", metavar="UUID", help="Update an existing Arvados workflow with the given UUID.")
 
+    exgroup.add_argument("--print-keep-deps", action="store_true", help="To assist copying, print a list of Keep collections that this workflow depends on.")
+
     exgroup = parser.add_mutually_exclusive_group()
     exgroup.add_argument("--wait", action="store_true", help="After submitting workflow runner, wait for completion.",
                         default=True, dest="wait")
@@ -324,7 +326,9 @@ def main(args=sys.argv[1:],
             return 1
         arvargs.work_api = want_api
 
-    if (arvargs.create_workflow or arvargs.update_workflow) and not arvargs.job_order:
+    workflow_op = arvargs.create_workflow or arvargs.update_workflow or arvargs.print_keep_deps
+
+    if workflow_op and not arvargs.job_order:
         job_order_object = ({}, "")
 
     add_arv_hints()
@@ -416,9 +420,11 @@ def main(args=sys.argv[1:],
         # unit tests.
         stdout = None
 
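+    # Record the default runner/jobs image on the loading context so that
+    # tools without an explicit DockerRequirement can fall back to it.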
+    executor.loadingContext.default_docker_image = arvargs.submit_runner_image or "arvados/jobs:"+__version__
+
     if arvargs.workflow.startswith("arvwf:") or workflow_uuid_pattern.match(arvargs.workflow) or arvargs.workflow.startswith("keep:"):
         executor.loadingContext.do_validate = False
-        if arvargs.submit:
+        if arvargs.submit and not workflow_op:
             executor.fast_submit = True
 
     return cwltool.main.main(args=arvargs,
@@ -431,4 +437,4 @@ def main(args=sys.argv[1:],
                              custom_schema_callback=add_arv_hints,
                              loadingContext=executor.loadingContext,
                              runtimeContext=executor.toplevel_runtimeContext,
-                             input_required=not (arvargs.create_workflow or arvargs.update_workflow))
+                             input_required=not workflow_op)
index 8a58066c19287c09f03c0072319d095a71ffb044..6e3e42975e75385fe1bf1a2e1d5b5070773d4e8c 100644 (file)
@@ -560,13 +560,19 @@ class RunnerContainer(Runner):
                 }
                 self.job_order[param] = {"$include": mnt}
 
+        container_image = arvados_jobs_image(self.arvrunner, self.jobs_image, runtimeContext)
+
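+        # The workflow wrapper may pin the runner image via the
+        # WorkflowRunnerResources hint; if acrContainerImage is present,
+        # it takes precedence over the default jobs image.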
+        workflow_runner_req, _ = self.embedded_tool.get_requirement("http://arvados.org/cwl#WorkflowRunnerResources")
+        if workflow_runner_req and workflow_runner_req.get("acrContainerImage"):
+            container_image = workflow_runner_req.get("acrContainerImage")
+
         container_req = {
             "name": self.name,
             "output_path": "/var/spool/cwl",
             "cwd": "/var/spool/cwl",
             "priority": self.priority,
             "state": "Committed",
-            "container_image": arvados_jobs_image(self.arvrunner, self.jobs_image, runtimeContext),
+            "container_image": container_image,
             "mounts": {
                 "/var/lib/cwl/cwl.input.json": {
                     "kind": "json",
index b66e8ad3aac6b73b3bb086a60a1403c8a6cf7a64..86fecc0a1dbb6e1e0687b8f2cf96f8f8ba44f5da 100644 (file)
@@ -10,6 +10,7 @@ from ._version import __version__
 from functools import partial
 from schema_salad.sourceline import SourceLine
 from cwltool.errors import WorkflowException
+from arvados.util import portable_data_hash_pattern
 
 def validate_cluster_target(arvrunner, runtimeContext):
     if (runtimeContext.submit_runner_cluster and
@@ -61,8 +62,12 @@ class ArvadosCommandTool(CommandLineTool):
 
         (docker_req, docker_is_req) = self.get_requirement("DockerRequirement")
         if not docker_req:
-            self.hints.append({"class": "DockerRequirement",
-                               "dockerPull": "arvados/jobs:"+__version__})
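+            # The default image may be given as a portable data hash of an
+            # image collection already in Keep, or as a Docker image name.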
+            if portable_data_hash_pattern.match(loadingContext.default_docker_image):
+                self.hints.append({"class": "DockerRequirement",
+                                   "http://arvados.org/cwl#dockerCollectionPDH": loadingContext.default_docker_image})
+            else:
+                self.hints.append({"class": "DockerRequirement",
+                                   "dockerPull": loadingContext.default_docker_image})
 
         self.arvrunner = arvrunner
 
index 6b6634bcc9f5f6a3b684a83c6ab3cfb7bca17772..c592b83dc7739b142fb51ffff25a630a5494f5fc 100644 (file)
@@ -29,7 +29,7 @@ from cwltool.load_tool import fetch_document, resolve_and_validate_document
 from cwltool.process import shortname, uniquename
 from cwltool.workflow import Workflow, WorkflowException, WorkflowStep
 from cwltool.utils import adjustFileObjs, adjustDirObjs, visit_class, normalizeFilesDirs
-from cwltool.context import LoadingContext
+from cwltool.context import LoadingContext, getdefault
 
 from schema_salad.ref_resolver import file_uri, uri_file_path
 
@@ -43,6 +43,7 @@ from .pathmapper import ArvPathMapper, trim_listing
 from .arvtool import ArvadosCommandTool, set_cluster_target
 from ._version import __version__
 from .util import common_prefix
+from .arvdocker import arv_docker_get_image
 
 from .perf import Perf
 
@@ -179,14 +180,14 @@ def rel_ref(s, baseuri, urlexpander, merged_map, jobmapper):
 def is_basetype(tp):
     return _basetype_re.match(tp) is not None
 
-def update_refs(d, baseuri, urlexpander, merged_map, jobmapper, runtimeContext, prefix, replacePrefix):
+def update_refs(api, d, baseuri, urlexpander, merged_map, jobmapper, runtimeContext, prefix, replacePrefix):
     if isinstance(d, MutableSequence):
         for i, s in enumerate(d):
             if prefix and isinstance(s, str):
                 if s.startswith(prefix):
                     d[i] = replacePrefix+s[len(prefix):]
             else:
-                update_refs(s, baseuri, urlexpander, merged_map, jobmapper, runtimeContext, prefix, replacePrefix)
+                update_refs(api, s, baseuri, urlexpander, merged_map, jobmapper, runtimeContext, prefix, replacePrefix)
     elif isinstance(d, MutableMapping):
         for field in ("id", "name"):
             if isinstance(d.get(field), str) and d[field].startswith("_:"):
@@ -199,8 +200,8 @@ def update_refs(d, baseuri, urlexpander, merged_map, jobmapper, runtimeContext,
             baseuri = urlexpander(d["name"], baseuri, scoped_id=True)
 
         if d.get("class") == "DockerRequirement":
-            dockerImageId = d.get("dockerImageId") or d.get("dockerPull")
-            d["http://arvados.org/cwl#dockerCollectionPDH"] = runtimeContext.cached_docker_lookups.get(dockerImageId)
+            d["http://arvados.org/cwl#dockerCollectionPDH"] = arv_docker_get_image(api, d, False,
+                                                                                   runtimeContext)
 
         for field in d:
             if field in ("location", "run", "name") and isinstance(d[field], str):
@@ -223,15 +224,21 @@ def update_refs(d, baseuri, urlexpander, merged_map, jobmapper, runtimeContext,
                     if isinstance(d["inputs"][inp], str) and not is_basetype(d["inputs"][inp]):
                         d["inputs"][inp] = rel_ref(d["inputs"][inp], baseuri, urlexpander, merged_map, jobmapper)
                     if isinstance(d["inputs"][inp], MutableMapping):
-                        update_refs(d["inputs"][inp], baseuri, urlexpander, merged_map, jobmapper, runtimeContext, prefix, replacePrefix)
+                        update_refs(api, d["inputs"][inp], baseuri, urlexpander, merged_map, jobmapper, runtimeContext, prefix, replacePrefix)
                 continue
 
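+            # requirements/hints may also be given in map form; resolve any
+            # DockerRequirement found there to a Keep collection PDH as well.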
+            if field in ("requirements", "hints") and isinstance(d[field], MutableMapping):
+                dr = d[field].get("DockerRequirement")
+                if dr:
+                    dr["http://arvados.org/cwl#dockerCollectionPDH"] = arv_docker_get_image(api, dr, False,
+                                                                                            runtimeContext)
+
             if field == "$schemas":
                 for n, s in enumerate(d["$schemas"]):
                     d["$schemas"][n] = rel_ref(d["$schemas"][n], baseuri, urlexpander, merged_map, jobmapper)
                 continue
 
-            update_refs(d[field], baseuri, urlexpander, merged_map, jobmapper, runtimeContext, prefix, replacePrefix)
+            update_refs(api, d[field], baseuri, urlexpander, merged_map, jobmapper, runtimeContext, prefix, replacePrefix)
 
 
 def fix_schemadef(req, baseuri, urlexpander, merged_map, jobmapper, pdh):
@@ -293,7 +300,8 @@ def upload_workflow(arvRunner, tool, job_order, project_uuid,
     # Find the longest common prefix among all the file names.  We'll
     # use this to recreate the directory structure in a keep
     # collection with correct relative references.
-    prefix = common_prefix(firstfile, all_files)
+    prefix = common_prefix(firstfile, all_files) if firstfile else ""
+
 
     col = arvados.collection.Collection(api_client=arvRunner.api)
 
@@ -327,7 +335,7 @@ def upload_workflow(arvRunner, tool, job_order, project_uuid,
 
         # 2. find $import, $include, $schema, run, location
         # 3. update field value
-        update_refs(result, w, tool.doc_loader.expand_url, merged_map, jobmapper, runtimeContext, "", "")
+        update_refs(arvRunner.api, result, w, tool.doc_loader.expand_url, merged_map, jobmapper, runtimeContext, "", "")
 
         # Write the updated file to the collection.
         with col.open(w[len(prefix):], "wt") as f:
@@ -412,9 +420,10 @@ def upload_workflow(arvRunner, tool, job_order, project_uuid,
         wf_runner_resources = {"class": "http://arvados.org/cwl#WorkflowRunnerResources"}
         hints.append(wf_runner_resources)
 
-    wf_runner_resources["acrContainerImage"] = arvados_jobs_image(arvRunner,
-                                                                  submit_runner_image or "arvados/jobs:"+__version__,
-                                                                  runtimeContext)
+    if "acrContainerImage" not in wf_runner_resources:
+        wf_runner_resources["acrContainerImage"] = arvados_jobs_image(arvRunner,
+                                                                      submit_runner_image or "arvados/jobs:"+__version__,
+                                                                      runtimeContext)
 
     if submit_runner_ram:
         wf_runner_resources["ramMin"] = submit_runner_ram
@@ -484,7 +493,7 @@ def upload_workflow(arvRunner, tool, job_order, project_uuid,
         if r["class"] == "SchemaDefRequirement":
             wrapper["requirements"][i] = fix_schemadef(r, main["id"], tool.doc_loader.expand_url, merged_map, jobmapper, col.portable_data_hash())
 
-    update_refs(wrapper, main["id"], tool.doc_loader.expand_url, merged_map, jobmapper, runtimeContext, main["id"]+"#", "#main/")
+    update_refs(arvRunner.api, wrapper, main["id"], tool.doc_loader.expand_url, merged_map, jobmapper, runtimeContext, main["id"]+"#", "#main/")
 
     doc = {"cwlVersion": "v1.2", "$graph": [wrapper]}
 
@@ -594,8 +603,18 @@ class ArvadosWorkflow(Workflow):
         self.dynamic_resource_req = []
         self.static_resource_req = []
         self.wf_reffiles = []
-        self.loadingContext = loadingContext
-        super(ArvadosWorkflow, self).__init__(toolpath_object, loadingContext)
+        self.loadingContext = loadingContext.copy()
+
+        self.requirements = copy.deepcopy(getdefault(loadingContext.requirements, []))
+        tool_requirements = toolpath_object.get("requirements", [])
+        self.hints = copy.deepcopy(getdefault(loadingContext.hints, []))
+        tool_hints = toolpath_object.get("hints", [])
+
+        workflow_runner_req, _ = self.get_requirement("http://arvados.org/cwl#WorkflowRunnerResources")
+        if workflow_runner_req and workflow_runner_req.get("acrContainerImage"):
+            self.loadingContext.default_docker_image = workflow_runner_req.get("acrContainerImage")
+
+        super(ArvadosWorkflow, self).__init__(toolpath_object, self.loadingContext)
         self.cluster_target_req, _ = self.get_requirement("http://arvados.org/cwl#ClusterTarget")
 
 
index 86812a419a07c3cbe3088478f9d15ce02d5c0b4a..0439cb5b15cb64d1c449e39114358d564dc21b86 100644 (file)
@@ -7,6 +7,7 @@ from collections import namedtuple
 
 class ArvLoadingContext(LoadingContext):
     def __init__(self, kwargs=None):
+        self.default_docker_image = None
         super(ArvLoadingContext, self).__init__(kwargs)
 
 class ArvRuntimeContext(RuntimeContext):
@@ -43,6 +44,7 @@ class ArvRuntimeContext(RuntimeContext):
         self.varying_url_params = ""
         self.prefer_cached_downloads = False
         self.cached_docker_lookups = {}
+        self.print_keep_deps = False
         self.git_info = {}
 
         super(ArvRuntimeContext, self).__init__(kwargs)
index 43d7b60006113919265be7cc146f5da6d12a6a76..2db6a9bfe2a3de1c6f4036a4795ce88924954036 100644 (file)
@@ -34,7 +34,7 @@ from arvados.errors import ApiError
 
 import arvados_cwl.util
 from .arvcontainer import RunnerContainer, cleanup_name_for_collection
-from .runner import Runner, upload_docker, upload_job_order, upload_workflow_deps, make_builder, update_from_merged_map
+from .runner import Runner, upload_docker, upload_job_order, upload_workflow_deps, make_builder, update_from_merged_map, print_keep_deps
 from .arvtool import ArvadosCommandTool, validate_cluster_target, ArvadosExpressionTool
 from .arvworkflow import ArvadosWorkflow, upload_workflow, make_workflow_record
 from .fsaccess import CollectionFsAccess, CollectionFetcher, collectionResolver, CollectionCache, pdh_size
@@ -651,6 +651,10 @@ The 'jobs' API is no longer supported.
             runtimeContext.copy_deps = True
             runtimeContext.match_local_docker = True
 
+        if runtimeContext.print_keep_deps:
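+        # --print-keep-deps is a read-only query: don't copy dependencies
+        # between clusters or substitute locally-built Docker images.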
+            runtimeContext.copy_deps = False
+            runtimeContext.match_local_docker = False
+
         if runtimeContext.update_workflow and self.project_uuid is None:
             # If we are updating a workflow, make sure anything that
             # gets uploaded goes into the same parent project, unless
@@ -673,12 +677,10 @@ The 'jobs' API is no longer supported.
         # are going to wait for the result, and always_submit_runner
         # is false, then we don't submit a runner process.
 
-        submitting = (runtimeContext.update_workflow or
-                      runtimeContext.create_workflow or
-                      (runtimeContext.submit and not
+        submitting = (runtimeContext.submit and not
                        (updated_tool.tool["class"] == "CommandLineTool" and
                         runtimeContext.wait and
-                        not runtimeContext.always_submit_runner)))
+                        not runtimeContext.always_submit_runner))
 
         loadingContext = self.loadingContext.copy()
         loadingContext.do_validate = False
@@ -704,7 +706,7 @@ The 'jobs' API is no longer supported.
         loadingContext.skip_resolve_all = True
 
         workflow_wrapper = None
-        if submitting and not self.fast_submit:
+        if (submitting and not self.fast_submit) or runtimeContext.update_workflow or runtimeContext.create_workflow or runtimeContext.print_keep_deps:
             # upload workflow and get back the workflow wrapper
 
             workflow_wrapper = upload_workflow(self, tool, job_order,
@@ -727,6 +729,11 @@ The 'jobs' API is no longer supported.
                 self.stdout.write(uuid + "\n")
                 return (None, "success")
 
+            if runtimeContext.print_keep_deps:
+                # Just find and print out all the collection dependencies and exit
+                print_keep_deps(self, runtimeContext, merged_map, tool)
+                return (None, "success")
+
             # Did not register a workflow, we're going to submit
             # it instead.
             loadingContext.loader.idx.clear()
index 539188fddd995b9cda5c58c89f1f8ef1dd96293a..448facf776823c68f5c706cc0ec1707460222cf7 100644 (file)
@@ -109,9 +109,10 @@ class ArvPathMapper(PathMapper):
                         # passthrough, we'll download it later.
                         self._pathmap[src] = MapperEnt(src, src, srcobj["class"], True)
                     else:
-                        keepref = "keep:%s/%s" % http_to_keep(self.arvrunner.api, self.arvrunner.project_uuid, src,
+                        results = http_to_keep(self.arvrunner.api, self.arvrunner.project_uuid, src,
                                                               varying_url_params=self.arvrunner.toplevel_runtimeContext.varying_url_params,
                                                               prefer_cached_downloads=self.arvrunner.toplevel_runtimeContext.prefer_cached_downloads)
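+                        # http_to_keep() returns a tuple whose first two
+                        # elements are the collection PDH and the file name.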
+                        keepref = "keep:%s/%s" % (results[0], results[1])
                         logger.info("%s is %s", src, keepref)
                         self._pathmap[src] = MapperEnt(keepref, keepref, srcobj["class"], True)
                 except Exception as e:
index 763d9d7e1219b7f713324c13636b6f92d66a8962..f52768d3d39662ec9ce67e91a930f50587b7f5b9 100644 (file)
@@ -948,3 +948,42 @@ class Runner(Process):
             self.arvrunner.output_callback({}, "permanentFail")
         else:
             self.arvrunner.output_callback(outputs, processStatus)
+
+
+def print_keep_deps_visitor(api, runtimeContext, references, doc_loader, tool):
+    def collect_locators(obj):
+        loc = obj.get("location", "")
+
+        g = arvados.util.keepuri_pattern.match(loc)
+        if g:
+            references.add(g[1])
+
+        if obj.get("class") == "http://arvados.org/cwl#WorkflowRunnerResources" and "acrContainerImage" in obj:
+            references.add(obj["acrContainerImage"])
+
+        if obj.get("class") == "DockerRequirement":
+            references.add(arvados_cwl.arvdocker.arv_docker_get_image(api, obj, False, runtimeContext))
+
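+    # scandeps() walks the tool document and collects File/Directory
+    # references; collect_locators() then extracts Keep identifiers from each.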
+    sc_result = scandeps(tool["id"], tool,
+                         set(),
+                         set(("location", "id")),
+                         None, urljoin=doc_loader.fetcher.urljoin,
+                         nestdirs=False)
+
+    visit_class(sc_result, ("File", "Directory"), collect_locators)
+    visit_class(tool, ("DockerRequirement", "http://arvados.org/cwl#WorkflowRunnerResources"), collect_locators)
+
+
+def print_keep_deps(arvRunner, runtimeContext, merged_map, tool):
+    references = set()
+
+    tool.visit(partial(print_keep_deps_visitor, arvRunner.api, runtimeContext, references, tool.doc_loader))
+
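+    # merged_map records where files referenced by the workflow were
+    # uploaded; collect any Keep references recorded there as well.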
+    for mm in merged_map:
+        for k, v in merged_map[mm].resolved.items():
+            g = arvados.util.keepuri_pattern.match(v)
+            if g:
+                references.add(g[1])
+
+    json.dump(sorted(references), arvRunner.stdout)
+    print(file=arvRunner.stdout)
index 1be2c119c2ff4c455a27e093597af206ee2c24e6..9e20efd2a3a5d9f38460c712a1a36b9f2e870582 100644 (file)
@@ -44,7 +44,9 @@ setup(name='arvados-cwl-runner',
           'msgpack==1.0.3',
           'importlib-metadata<5',
           'setuptools>=40.3.0',
-          'zipp<3.16.0'
+
+          # zipp 3.16 dropped support for Python 3.7
+          'zipp<3.16.0; python_version<"3.8"'
       ],
       data_files=[
           ('share/doc/arvados-cwl-runner', ['LICENSE-2.0.txt', 'README.rst']),
index a2f404d7ebe9b24b3d726d6057be14848b088cdd..8e3a8ab85e66e70ff76f4a8357e262ec543b084c 100644 (file)
@@ -85,7 +85,8 @@ class TestContainer(unittest.TestCase):
              "construct_tool_object": runner.arv_make_tool,
              "fetcher_constructor": functools.partial(arvados_cwl.CollectionFetcher, api_client=runner.api, fs_access=fs_access),
              "loader": Loader({}),
-             "metadata": cmap({"cwlVersion": INTERNAL_VERSION, "http://commonwl.org/cwltool#original_cwlVersion": "v1.0"})
+             "metadata": cmap({"cwlVersion": INTERNAL_VERSION, "http://commonwl.org/cwltool#original_cwlVersion": "v1.0"}),
+             "default_docker_image": "arvados/jobs:"+arvados_cwl.__version__
              })
         runtimeContext = arvados_cwl.context.ArvRuntimeContext(
             {"work_api": "containers",
@@ -1463,7 +1464,8 @@ class TestWorkflow(unittest.TestCase):
              "make_fs_access": make_fs_access,
              "loader": document_loader,
              "metadata": {"cwlVersion": INTERNAL_VERSION, "http://commonwl.org/cwltool#original_cwlVersion": "v1.0"},
-             "construct_tool_object": runner.arv_make_tool})
+             "construct_tool_object": runner.arv_make_tool,
+             "default_docker_image": "arvados/jobs:"+arvados_cwl.__version__})
         runtimeContext = arvados_cwl.context.ArvRuntimeContext(
             {"work_api": "containers",
              "basedir": "",
index 9dad245254c50cfac4df2f2734bd41fe59f1ab61..c8bf1279511cd8591104af5b196b4938dd71eb88 100644 (file)
@@ -1180,7 +1180,7 @@ class TestSubmit(unittest.TestCase):
                                         "out": [
                                             {"id": "#main/step/out"}
                                         ],
-                                        "run": "keep:7628e49da34b93de9f4baf08a6212817+247/secret_wf.cwl"
+                                        "run": "keep:991302581d01db470345a131480e623b+247/secret_wf.cwl"
                                     }
                                 ]
                             }
@@ -1737,3 +1737,55 @@ class TestCreateWorkflow(unittest.TestCase):
         self.assertEqual(stubs.capture_stdout.getvalue(),
                          stubs.expect_workflow_uuid + '\n')
         self.assertEqual(exited, 0)
+
+    @stubs()
+    def test_create_map(self, stubs):
+        # test uploading a document that uses objects instead of arrays
+        # for certain fields like inputs and requirements.
+
+        project_uuid = 'zzzzz-j7d0g-zzzzzzzzzzzzzzz'
+        stubs.api.groups().get().execute.return_value = {"group_class": "project"}
+
+        exited = arvados_cwl.main(
+            ["--create-workflow", "--debug",
+             "--api=containers",
+             "--project-uuid", project_uuid,
+             "--disable-git",
+             "tests/wf/submit_wf_map.cwl", "tests/submit_test_job.json"],
+            stubs.capture_stdout, sys.stderr, api_client=stubs.api)
+
+        stubs.api.pipeline_templates().create.refute_called()
+        stubs.api.container_requests().create.refute_called()
+
+        expect_workflow = StripYAMLComments(
+            open("tests/wf/expect_upload_wrapper_map.cwl").read().rstrip())
+
+        body = {
+            "workflow": {
+                "owner_uuid": project_uuid,
+                "name": "submit_wf_map.cwl",
+                "description": "",
+                "definition": expect_workflow,
+            }
+        }
+        stubs.api.workflows().create.assert_called_with(
+            body=JsonDiffMatcher(body))
+
+        self.assertEqual(stubs.capture_stdout.getvalue(),
+                         stubs.expect_workflow_uuid + '\n')
+        self.assertEqual(exited, 0)
+
+
+class TestPrintKeepDeps(unittest.TestCase):
+    @stubs()
+    def test_print_keep_deps(self, stubs):
+        # test --print-keep-deps which is used by arv-copy
+
+        exited = arvados_cwl.main(
+            ["--print-keep-deps", "--debug",
+             "tests/wf/submit_wf_map.cwl"],
+            stubs.capture_stdout, sys.stderr, api_client=stubs.api)
+
+        self.assertEqual(stubs.capture_stdout.getvalue(),
+                         '["5d373e7629203ce39e7c22af98a0f881+52", "999999999999999999999999999999d4+99"]' + '\n')
+        self.assertEqual(exited, 0)
diff --git a/sdk/cwl/tests/tool/submit_tool_map.cwl b/sdk/cwl/tests/tool/submit_tool_map.cwl
new file mode 100644 (file)
index 0000000..7a833d4
--- /dev/null
@@ -0,0 +1,24 @@
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+# Test case for arvados-cwl-runner
+#
+# Used to test whether scanning a tool file for dependencies (e.g. default
+# value blub.txt) and uploading to Keep works as intended.
+
+class: CommandLineTool
+cwlVersion: v1.0
+requirements:
+  DockerRequirement:
+    dockerPull: debian:buster-slim
+inputs:
+  x:
+    type: File
+    default:
+      class: File
+      location: blub.txt
+    inputBinding:
+      position: 1
+outputs: []
+baseCommand: cat
diff --git a/sdk/cwl/tests/wf/expect_upload_wrapper_map.cwl b/sdk/cwl/tests/wf/expect_upload_wrapper_map.cwl
new file mode 100644 (file)
index 0000000..8f98f47
--- /dev/null
@@ -0,0 +1,88 @@
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+{
+    "$graph": [
+        {
+            "class": "Workflow",
+            "hints": [
+                {
+                    "acrContainerImage": "999999999999999999999999999999d3+99",
+                    "class": "http://arvados.org/cwl#WorkflowRunnerResources"
+                }
+            ],
+            "id": "#main",
+            "inputs": [
+                {
+                    "default": {
+                        "basename": "blorp.txt",
+                        "class": "File",
+                        "location": "keep:169f39d466a5438ac4a90e779bf750c7+53/blorp.txt",
+                        "nameext": ".txt",
+                        "nameroot": "blorp",
+                        "size": 16
+                    },
+                    "id": "#main/x",
+                    "type": "File"
+                },
+                {
+                    "default": {
+                        "basename": "99999999999999999999999999999998+99",
+                        "class": "Directory",
+                        "location": "keep:99999999999999999999999999999998+99"
+                    },
+                    "id": "#main/y",
+                    "type": "Directory"
+                },
+                {
+                    "default": {
+                        "basename": "anonymous",
+                        "class": "Directory",
+                        "listing": [
+                            {
+                                "basename": "renamed.txt",
+                                "class": "File",
+                                "location": "keep:99999999999999999999999999999998+99/file1.txt",
+                                "nameext": ".txt",
+                                "nameroot": "renamed",
+                                "size": 0
+                            }
+                        ]
+                    },
+                    "id": "#main/z",
+                    "type": "Directory"
+                }
+            ],
+            "outputs": [],
+            "requirements": [
+                {
+                    "class": "SubworkflowFeatureRequirement"
+                }
+            ],
+            "steps": [
+                {
+                    "id": "#main/submit_wf_map.cwl",
+                    "in": [
+                        {
+                            "id": "#main/step/x",
+                            "source": "#main/x"
+                        },
+                        {
+                            "id": "#main/step/y",
+                            "source": "#main/y"
+                        },
+                        {
+                            "id": "#main/step/z",
+                            "source": "#main/z"
+                        }
+                    ],
+                    "label": "submit_wf_map.cwl",
+                    "out": [],
+                    "run": "keep:2b94b65162db72023301a582e085646f+290/wf/submit_wf_map.cwl"
+                }
+            ]
+        }
+    ],
+    "cwlVersion": "v1.2"
+}
diff --git a/sdk/cwl/tests/wf/submit_wf_map.cwl b/sdk/cwl/tests/wf/submit_wf_map.cwl
new file mode 100644 (file)
index 0000000..e8bb9cf
--- /dev/null
@@ -0,0 +1,25 @@
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+# Test case for arvados-cwl-runner
+#
+# Used to test whether scanning a workflow file for dependencies
+# (e.g. submit_tool.cwl) and uploading to Keep works as intended.
+
+class: Workflow
+cwlVersion: v1.2
+inputs:
+  x:
+    type: File
+  y:
+    type: Directory
+  z:
+    type: Directory
+outputs: []
+steps:
+  step1:
+    in:
+      x: x
+    out: []
+    run: ../tool/submit_tool_map.cwl
index bfb43be5eb85401e332915419f2a52ea71eb2e19..9e6bd06071b5653a560da72a0f666217d7c0102c 100644 (file)
@@ -1,6 +1,16 @@
 # Copyright (C) The Arvados Authors. All rights reserved.
 #
 # SPDX-License-Identifier: Apache-2.0
+"""Tools to work with Arvados collections
+
+This module provides high-level interfaces to create, read, and update
+Arvados collections. Most users will want to instantiate `Collection`
+objects, and use methods like `Collection.open` and `Collection.mkdirs` to
+read and write data in the collection. Refer to the Arvados Python SDK
+cookbook for [an introduction to using the Collection class][cookbook].
+
+[cookbook]: https://doc.arvados.org/sdk/python/cookbook.html#working-with-collections
+"""
 
 from __future__ import absolute_import
 from future.utils import listitems, listvalues, viewkeys
@@ -35,15 +45,65 @@ import arvados.util
 import arvados.events as events
 from arvados.retry import retry_method
 
+from typing import (
+    Any,
+    Callable,
+    Dict,
+    IO,
+    Iterator,
+    List,
+    Mapping,
+    Optional,
+    Tuple,
+    Union,
+)
+
+if sys.version_info < (3, 8):
+    from typing_extensions import Literal
+else:
+    from typing import Literal
+
 _logger = logging.getLogger('arvados.collection')
 
+ADD = "add"
+"""Argument value for `Collection` methods to represent an added item"""
+DEL = "del"
+"""Argument value for `Collection` methods to represent a removed item"""
+MOD = "mod"
+"""Argument value for `Collection` methods to represent a modified item"""
+TOK = "tok"
+"""Argument value for `Collection` methods to represent an item with token differences"""
+FILE = "file"
+"""`create_type` value for `Collection.find_or_create`"""
+COLLECTION = "collection"
+"""`create_type` value for `Collection.find_or_create`"""
+
+ChangeList = List[Union[
+    Tuple[Literal[ADD, DEL], str, 'Collection'],
+    Tuple[Literal[MOD, TOK], str, 'Collection', 'Collection'],
+]]
+ChangeType = Literal[ADD, DEL, MOD, TOK]
+CollectionItem = Union[ArvadosFile, 'Collection']
+ChangeCallback = Callable[[ChangeType, 'Collection', str, CollectionItem], object]
+CreateType = Literal[COLLECTION, FILE]
+Properties = Dict[str, Any]
+StorageClasses = List[str]
+
 class CollectionBase(object):
-    """Abstract base class for Collection classes."""
+    """Abstract base class for Collection classes
+
+    .. ATTENTION:: Internal
+       This class is meant to be used by other parts of the SDK. User code
+       should instantiate or subclass `Collection` or one of its subclasses
+       directly.
+    """
 
     def __enter__(self):
+        """Enter a context block with this collection instance"""
         return self
 
     def __exit__(self, exc_type, exc_value, traceback):
+        """Exit a context block with this collection instance"""
         pass
 
     def _my_keep(self):
@@ -52,12 +112,13 @@ class CollectionBase(object):
                                            num_retries=self.num_retries)
         return self._keep_client
 
-    def stripped_manifest(self):
-        """Get the manifest with locator hints stripped.
+    def stripped_manifest(self) -> str:
+        """Create a copy of the collection manifest with only size hints
 
-        Return the manifest for the current collection with all
-        non-portable hints (i.e., permission signatures and other
-        hints other than size hints) removed from the locators.
+        This method returns a string with the current collection's manifest
+        text, with all non-portable locator hints (such as permission hints
+        and remote cluster hints) removed. The only hints in the returned
+        manifest will be size hints.
         """
         raw = self.manifest_text()
         clean = []
@@ -96,709 +157,379 @@ class _WriterFile(_FileLikeObjectBase):
         self.dest.flush_data()
 
 
-class CollectionWriter(CollectionBase):
-    """Deprecated, use Collection instead."""
+class RichCollectionBase(CollectionBase):
+    """Base class for Collection classes
 
-    @arvados.util._deprecated('3.0', 'arvados.collection.Collection')
-    def __init__(self, api_client=None, num_retries=0, replication=None):
-        """Instantiate a CollectionWriter.
+    .. ATTENTION:: Internal
+       This class is meant to be used by other parts of the SDK. User code
+       should instantiate or subclass `Collection` or one of its subclasses
+       directly.
+    """
 
-        CollectionWriter lets you build a new Arvados Collection from scratch.
-        Write files to it.  The CollectionWriter will upload data to Keep as
-        appropriate, and provide you with the Collection manifest text when
-        you're finished.
+    def __init__(self, parent=None):
+        self.parent = parent
+        self._committed = False
+        self._has_remote_blocks = False
+        self._callback = None
+        self._items = {}
 
-        Arguments:
-        * api_client: The API client to use to look up Collections.  If not
-          provided, CollectionReader will build one from available Arvados
-          configuration.
-        * num_retries: The default number of times to retry failed
-          service requests.  Default 0.  You may change this value
-          after instantiation, but note those changes may not
-          propagate to related objects like the Keep client.
-        * replication: The number of copies of each block to store.
-          If this argument is None or not supplied, replication is
-          the server-provided default if available, otherwise 2.
-        """
-        self._api_client = api_client
-        self.num_retries = num_retries
-        self.replication = (2 if replication is None else replication)
-        self._keep_client = None
-        self._data_buffer = []
-        self._data_buffer_len = 0
-        self._current_stream_files = []
-        self._current_stream_length = 0
-        self._current_stream_locators = []
-        self._current_stream_name = '.'
-        self._current_file_name = None
-        self._current_file_pos = 0
-        self._finished_streams = []
-        self._close_file = None
-        self._queued_file = None
-        self._queued_dirents = deque()
-        self._queued_trees = deque()
-        self._last_open = None
+    def _my_api(self):
+        raise NotImplementedError()
 
-    def __exit__(self, exc_type, exc_value, traceback):
-        if exc_type is None:
-            self.finish()
+    def _my_keep(self):
+        raise NotImplementedError()
 
-    def do_queued_work(self):
-        # The work queue consists of three pieces:
-        # * _queued_file: The file object we're currently writing to the
-        #   Collection.
-        # * _queued_dirents: Entries under the current directory
-        #   (_queued_trees[0]) that we want to write or recurse through.
-        #   This may contain files from subdirectories if
-        #   max_manifest_depth == 0 for this directory.
-        # * _queued_trees: Directories that should be written as separate
-        #   streams to the Collection.
-        # This function handles the smallest piece of work currently queued
-        # (current file, then current directory, then next directory) until
-        # no work remains.  The _work_THING methods each do a unit of work on
-        # THING.  _queue_THING methods add a THING to the work queue.
-        while True:
-            if self._queued_file:
-                self._work_file()
-            elif self._queued_dirents:
-                self._work_dirents()
-            elif self._queued_trees:
-                self._work_trees()
-            else:
-                break
+    def _my_block_manager(self):
+        raise NotImplementedError()
 
-    def _work_file(self):
-        while True:
-            buf = self._queued_file.read(config.KEEP_BLOCK_SIZE)
-            if not buf:
-                break
-            self.write(buf)
-        self.finish_current_file()
-        if self._close_file:
-            self._queued_file.close()
-        self._close_file = None
-        self._queued_file = None
+    def writable(self) -> bool:
+        """Indicate whether this collection object can be modified
 
-    def _work_dirents(self):
-        path, stream_name, max_manifest_depth = self._queued_trees[0]
-        if stream_name != self.current_stream_name():
-            self.start_new_stream(stream_name)
-        while self._queued_dirents:
-            dirent = self._queued_dirents.popleft()
-            target = os.path.join(path, dirent)
-            if os.path.isdir(target):
-                self._queue_tree(target,
-                                 os.path.join(stream_name, dirent),
-                                 max_manifest_depth - 1)
-            else:
-                self._queue_file(target, dirent)
-                break
-        if not self._queued_dirents:
-            self._queued_trees.popleft()
+        This method returns `False` if this object is a `CollectionReader`,
+        else `True`.
+        """
+        raise NotImplementedError()
 
-    def _work_trees(self):
-        path, stream_name, max_manifest_depth = self._queued_trees[0]
-        d = arvados.util.listdir_recursive(
-            path, max_depth = (None if max_manifest_depth == 0 else 0))
-        if d:
-            self._queue_dirents(stream_name, d)
-        else:
-            self._queued_trees.popleft()
+    def root_collection(self) -> 'Collection':
+        """Get this collection's root collection object
 
-    def _queue_file(self, source, filename=None):
-        assert (self._queued_file is None), "tried to queue more than one file"
-        if not hasattr(source, 'read'):
-            source = open(source, 'rb')
-            self._close_file = True
-        else:
-            self._close_file = False
-        if filename is None:
-            filename = os.path.basename(source.name)
-        self.start_new_file(filename)
-        self._queued_file = source
+        If you open a subcollection with `Collection.find`, calling this method
+        on that subcollection returns the source Collection object.
+        """
+        raise NotImplementedError()
 
-    def _queue_dirents(self, stream_name, dirents):
-        assert (not self._queued_dirents), "tried to queue more than one tree"
-        self._queued_dirents = deque(sorted(dirents))
+    def stream_name(self) -> str:
+        """Get the name of the manifest stream represented by this collection
 
-    def _queue_tree(self, path, stream_name, max_manifest_depth):
-        self._queued_trees.append((path, stream_name, max_manifest_depth))
+        If you open a subcollection with `Collection.find`, calling this method
+        on that subcollection returns the name of the stream you opened.
+        """
+        raise NotImplementedError()
 
-    def write_file(self, source, filename=None):
-        self._queue_file(source, filename)
-        self.do_queued_work()
+    @synchronized
+    def has_remote_blocks(self) -> bool:
+        """Indiciate whether the collection refers to remote data
 
-    def write_directory_tree(self,
-                             path, stream_name='.', max_manifest_depth=-1):
-        self._queue_tree(path, stream_name, max_manifest_depth)
-        self.do_queued_work()
+        Returns `True` if the collection manifest includes any Keep locators
+        with a remote hint (`+R`), else `False`.
+        """
+        if self._has_remote_blocks:
+            return True
+        for item in self:
+            if self[item].has_remote_blocks():
+                return True
+        return False
 
-    def write(self, newdata):
-        if isinstance(newdata, bytes):
-            pass
-        elif isinstance(newdata, str):
-            newdata = newdata.encode()
-        elif hasattr(newdata, '__iter__'):
-            for s in newdata:
-                self.write(s)
-            return
-        self._data_buffer.append(newdata)
-        self._data_buffer_len += len(newdata)
-        self._current_stream_length += len(newdata)
-        while self._data_buffer_len >= config.KEEP_BLOCK_SIZE:
-            self.flush_data()
+    @synchronized
+    def set_has_remote_blocks(self, val: bool) -> None:
+        """Cache whether this collection refers to remote blocks
 
-    def open(self, streampath, filename=None):
-        """open(streampath[, filename]) -> file-like object
+        .. ATTENTION:: Internal
+           This method is only meant to be used by other Collection methods.
 
-        Pass in the path of a file to write to the Collection, either as a
-        single string or as two separate stream name and file name arguments.
-        This method returns a file-like object you can write to add it to the
-        Collection.
+        Set this collection's cached "has remote blocks" flag to the given
+        value.
+        """
+        self._has_remote_blocks = val
+        if self.parent:
+            self.parent.set_has_remote_blocks(val)
 
-        You may only have one file object from the Collection open at a time,
-        so be sure to close the object when you're done.  Using the object in
-        a with statement makes that easy::
+    @must_be_writable
+    @synchronized
+    def find_or_create(
+            self,
+            path: str,
+            create_type: CreateType,
+    ) -> CollectionItem:
+        """Get the item at the given path, creating it if necessary
+
+        If `path` refers to a stream in this collection, returns a
+        corresponding `Subcollection` object. If `path` refers to a file in
+        this collection, returns a corresponding
+        `arvados.arvfile.ArvadosFile` object. If `path` does not exist in
+        this collection, then this method creates a new object and returns
+        it, creating parent streams as needed. The type of object created is
+        determined by the value of `create_type`.
+
+        Arguments:
+
+        * path: str --- The path to find or create within this collection.
 
-          with cwriter.open('./doc/page1.txt') as outfile:
-              outfile.write(page1_data)
-          with cwriter.open('./doc/page2.txt') as outfile:
-              outfile.write(page2_data)
+        * create_type: Literal[COLLECTION, FILE] --- The type of object to
+          create at `path` if one does not exist. Passing `COLLECTION`
+          creates a stream and returns the corresponding
+          `Subcollection`. Passing `FILE` creates a new file and returns the
+          corresponding `arvados.arvfile.ArvadosFile`.
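+
+        A minimal sketch (the path is illustrative; assumes `coll` is a
+        writable `Collection`):
+
+            # Creates the `stats` stream and an empty file inside it if needed
+            out_file = coll.find_or_create('stats/counts.txt', FILE)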
         """
-        if filename is None:
-            streampath, filename = split(streampath)
-        if self._last_open and not self._last_open.closed:
-            raise errors.AssertionError(
-                u"can't open '{}' when '{}' is still open".format(
-                    filename, self._last_open.name))
-        if streampath != self.current_stream_name():
-            self.start_new_stream(streampath)
-        self.set_current_file_name(filename)
-        self._last_open = _WriterFile(self, filename)
-        return self._last_open
+        pathcomponents = path.split("/", 1)
+        if pathcomponents[0]:
+            item = self._items.get(pathcomponents[0])
+            if len(pathcomponents) == 1:
+                if item is None:
+                    # create new file
+                    if create_type == COLLECTION:
+                        item = Subcollection(self, pathcomponents[0])
+                    else:
+                        item = ArvadosFile(self, pathcomponents[0])
+                    self._items[pathcomponents[0]] = item
+                    self.set_committed(False)
+                    self.notify(ADD, self, pathcomponents[0], item)
+                return item
+            else:
+                if item is None:
+                    # create new collection
+                    item = Subcollection(self, pathcomponents[0])
+                    self._items[pathcomponents[0]] = item
+                    self.set_committed(False)
+                    self.notify(ADD, self, pathcomponents[0], item)
+                if isinstance(item, RichCollectionBase):
+                    return item.find_or_create(pathcomponents[1], create_type)
+                else:
+                    raise IOError(errno.ENOTDIR, "Not a directory", pathcomponents[0])
+        else:
+            return self
 
-    def flush_data(self):
-        data_buffer = b''.join(self._data_buffer)
-        if data_buffer:
-            self._current_stream_locators.append(
-                self._my_keep().put(
-                    data_buffer[0:config.KEEP_BLOCK_SIZE],
-                    copies=self.replication))
-            self._data_buffer = [data_buffer[config.KEEP_BLOCK_SIZE:]]
-            self._data_buffer_len = len(self._data_buffer[0])
+    @synchronized
+    def find(self, path: str) -> CollectionItem:
+        """Get the item at the given path
 
-    def start_new_file(self, newfilename=None):
-        self.finish_current_file()
-        self.set_current_file_name(newfilename)
+        If `path` refers to a stream in this collection, returns a
+        corresponding `Subcollection` object. If `path` refers to a file in
+        this collection, returns a corresponding
+        `arvados.arvfile.ArvadosFile` object. If `path` does not exist in
+        this collection, this method returns `None`. If a non-final component
+        of `path` refers to a file, this method raises `NotADirectoryError`.
 
-    def set_current_file_name(self, newfilename):
-        if re.search(r'[\t\n]', newfilename):
-            raise errors.AssertionError(
-                "Manifest filenames cannot contain whitespace: %s" %
-                newfilename)
-        elif re.search(r'\x00', newfilename):
-            raise errors.AssertionError(
-                "Manifest filenames cannot contain NUL characters: %s" %
-                newfilename)
-        self._current_file_name = newfilename
+        Arguments:
 
-    def current_file_name(self):
-        return self._current_file_name
+        * path: str --- The path to find within this collection.
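+
+        For example (the path is illustrative):
+
+            arv_file = coll.find('data/sample.fastq')
+            if arv_file is None:
+                print('no such file')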
+        """
+        if not path:
+            raise errors.ArgumentError("Parameter 'path' is empty.")
 
-    def finish_current_file(self):
-        if self._current_file_name is None:
-            if self._current_file_pos == self._current_stream_length:
-                return
-            raise errors.AssertionError(
-                "Cannot finish an unnamed file " +
-                "(%d bytes at offset %d in '%s' stream)" %
-                (self._current_stream_length - self._current_file_pos,
-                 self._current_file_pos,
-                 self._current_stream_name))
-        self._current_stream_files.append([
-                self._current_file_pos,
-                self._current_stream_length - self._current_file_pos,
-                self._current_file_name])
-        self._current_file_pos = self._current_stream_length
-        self._current_file_name = None
+        pathcomponents = path.split("/", 1)
+        if pathcomponents[0] == '':
+            raise IOError(errno.ENOTDIR, "Not a directory", pathcomponents[0])
 
-    def start_new_stream(self, newstreamname='.'):
-        self.finish_current_stream()
-        self.set_current_stream_name(newstreamname)
+        item = self._items.get(pathcomponents[0])
+        if item is None:
+            return None
+        elif len(pathcomponents) == 1:
+            return item
+        else:
+            if isinstance(item, RichCollectionBase):
+                if pathcomponents[1]:
+                    return item.find(pathcomponents[1])
+                else:
+                    return item
+            else:
+                raise IOError(errno.ENOTDIR, "Not a directory", pathcomponents[0])
 
-    def set_current_stream_name(self, newstreamname):
-        if re.search(r'[\t\n]', newstreamname):
-            raise errors.AssertionError(
-                "Manifest stream names cannot contain whitespace: '%s'" %
-                (newstreamname))
-        self._current_stream_name = '.' if newstreamname=='' else newstreamname
+    @synchronized
+    def mkdirs(self, path: str) -> 'Subcollection':
+        """Create and return a subcollection at `path`
 
-    def current_stream_name(self):
-        return self._current_stream_name
+        If `path` exists within this collection, raises `FileExistsError`.
+        Otherwise, creates a stream at that path and returns the
+        corresponding `Subcollection`.
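+
+        A short sketch (the path is illustrative):
+
+            subcollection = coll.mkdirs('analysis/round2')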
+        """
+        if self.find(path) is not None:
+            raise IOError(errno.EEXIST, "Directory or file exists", path)
 
-    def finish_current_stream(self):
-        self.finish_current_file()
-        self.flush_data()
-        if not self._current_stream_files:
-            pass
-        elif self._current_stream_name is None:
-            raise errors.AssertionError(
-                "Cannot finish an unnamed stream (%d bytes in %d files)" %
-                (self._current_stream_length, len(self._current_stream_files)))
-        else:
-            if not self._current_stream_locators:
-                self._current_stream_locators.append(config.EMPTY_BLOCK_LOCATOR)
-            self._finished_streams.append([self._current_stream_name,
-                                           self._current_stream_locators,
-                                           self._current_stream_files])
-        self._current_stream_files = []
-        self._current_stream_length = 0
-        self._current_stream_locators = []
-        self._current_stream_name = None
-        self._current_file_pos = 0
-        self._current_file_name = None
+        return self.find_or_create(path, COLLECTION)
 
-    def finish(self):
-        """Store the manifest in Keep and return its locator.
+    def open(
+            self,
+            path: str,
+            mode: str="r",
+            encoding: Optional[str]=None,
+    ) -> IO:
+        """Open a file-like object within the collection
 
-        This is useful for storing manifest fragments (task outputs)
-        temporarily in Keep during a Crunch job.
+        This method returns a file-like object that can read and/or write the
+        file located at `path` within the collection. If you open a
+        nonexistent `path` for writing, the file is created with
+        `find_or_create`.
+        If the file cannot be opened for any other reason, this method raises
+        `OSError` with an appropriate errno.
 
-        In other cases you should make a collection instead, by
-        sending manifest_text() to the API server's "create
-        collection" endpoint.
-        """
-        return self._my_keep().put(self.manifest_text().encode(),
-                                   copies=self.replication)
+        Arguments:
 
-    def portable_data_hash(self):
-        stripped = self.stripped_manifest().encode()
-        return '{}+{}'.format(hashlib.md5(stripped).hexdigest(), len(stripped))
+        * path: str --- The path of the file to open within this collection
 
-    def manifest_text(self):
-        self.finish_current_stream()
-        manifest = ''
+        * mode: str --- The mode to open this file. Supports all the same
+          values as `builtins.open`.
 
-        for stream in self._finished_streams:
-            if not re.search(r'^\.(/.*)?$', stream[0]):
-                manifest += './'
-            manifest += stream[0].replace(' ', '\\040')
-            manifest += ' ' + ' '.join(stream[1])
-            manifest += ' ' + ' '.join("%d:%d:%s" % (sfile[0], sfile[1], sfile[2].replace(' ', '\\040')) for sfile in stream[2])
-            manifest += "\n"
+        * encoding: str | None --- The text encoding of the file. Only used
+          when the file is opened in text mode. The default is
+          platform-dependent.
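+
+        A minimal usage sketch (the path is illustrative; assumes `coll` is
+        a writable `Collection`):
+
+            with coll.open('results/output.txt', 'w') as out_file:
+                out_file.write('done')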
+        """
+        if not re.search(r'^[rwa][bt]?\+?$', mode):
+            raise errors.ArgumentError("Invalid mode {!r}".format(mode))
 
-        return manifest
+        if mode[0] == 'r' and '+' not in mode:
+            fclass = ArvadosFileReader
+            arvfile = self.find(path)
+        elif not self.writable():
+            raise IOError(errno.EROFS, "Collection is read only")
+        else:
+            fclass = ArvadosFileWriter
+            arvfile = self.find_or_create(path, FILE)
 
-    def data_locators(self):
-        ret = []
-        for name, locators, files in self._finished_streams:
-            ret += locators
-        return ret
+        if arvfile is None:
+            raise IOError(errno.ENOENT, "File not found", path)
+        if not isinstance(arvfile, ArvadosFile):
+            raise IOError(errno.EISDIR, "Is a directory", path)
 
-    def save_new(self, name=None):
-        return self._api_client.collections().create(
-            ensure_unique_name=True,
-            body={
-                'name': name,
-                'manifest_text': self.manifest_text(),
-            }).execute(num_retries=self.num_retries)
+        if mode[0] == 'w':
+            arvfile.truncate(0)
 
+        binmode = mode[0] + 'b' + re.sub('[bt]', '', mode[1:])
+        f = fclass(arvfile, mode=binmode, num_retries=self.num_retries)
+        if 'b' not in mode:
+            bufferclass = io.BufferedRandom if f.writable() else io.BufferedReader
+            f = io.TextIOWrapper(bufferclass(WrappableFile(f)), encoding=encoding)
+        return f
 
-class ResumableCollectionWriter(CollectionWriter):
-    """Deprecated, use Collection instead."""
+    def modified(self) -> bool:
+        """Indicate whether this collection has an API server record
 
-    STATE_PROPS = ['_current_stream_files', '_current_stream_length',
-                   '_current_stream_locators', '_current_stream_name',
-                   '_current_file_name', '_current_file_pos', '_close_file',
-                   '_data_buffer', '_dependencies', '_finished_streams',
-                   '_queued_dirents', '_queued_trees']
+        Returns `True` if there are changes to this collection that have not
+        been saved to the API server, `False` otherwise.
+        """
+        return not self.committed()
 
-    @arvados.util._deprecated('3.0', 'arvados.collection.Collection')
-    def __init__(self, api_client=None, **kwargs):
-        self._dependencies = {}
-        super(ResumableCollectionWriter, self).__init__(api_client, **kwargs)
+    @synchronized
+    def committed(self) -> bool:
+        """Indicate whether this collection is in sync with its API server record
 
-    @classmethod
-    def from_state(cls, state, *init_args, **init_kwargs):
-        # Try to build a new writer from scratch with the given state.
-        # If the state is not suitable to resume (because files have changed,
-        # been deleted, aren't predictable, etc.), raise a
-        # StaleWriterStateError.  Otherwise, return the initialized writer.
-        # The caller is responsible for calling writer.do_queued_work()
-        # appropriately after it's returned.
-        writer = cls(*init_args, **init_kwargs)
-        for attr_name in cls.STATE_PROPS:
-            attr_value = state[attr_name]
-            attr_class = getattr(writer, attr_name).__class__
-            # Coerce the value into the same type as the initial value, if
-            # needed.
-            if attr_class not in (type(None), attr_value.__class__):
-                attr_value = attr_class(attr_value)
-            setattr(writer, attr_name, attr_value)
-        # Check dependencies before we try to resume anything.
-        if any(KeepLocator(ls).permission_expired()
-               for ls in writer._current_stream_locators):
-            raise errors.StaleWriterStateError(
-                "locators include expired permission hint")
-        writer.check_dependencies()
-        if state['_current_file'] is not None:
-            path, pos = state['_current_file']
-            try:
-                writer._queued_file = open(path, 'rb')
-                writer._queued_file.seek(pos)
-            except IOError as error:
-                raise errors.StaleWriterStateError(
-                    u"failed to reopen active file {}: {}".format(path, error))
-        return writer
+        Returns `True` if this collection's state matches the record loaded
+        from or saved to the API server, `False` otherwise.
+        """
+        return self._committed
 
-    def check_dependencies(self):
-        for path, orig_stat in listitems(self._dependencies):
-            if not S_ISREG(orig_stat[ST_MODE]):
-                raise errors.StaleWriterStateError(u"{} not file".format(path))
-            try:
-                now_stat = tuple(os.stat(path))
-            except OSError as error:
-                raise errors.StaleWriterStateError(
-                    u"failed to stat {}: {}".format(path, error))
-            if ((not S_ISREG(now_stat[ST_MODE])) or
-                (orig_stat[ST_MTIME] != now_stat[ST_MTIME]) or
-                (orig_stat[ST_SIZE] != now_stat[ST_SIZE])):
-                raise errors.StaleWriterStateError(u"{} changed".format(path))
+    @synchronized
+    def set_committed(self, value: bool=True) -> None:
+        """Cache whether this collection has an API server record
 
-    def dump_state(self, copy_func=lambda x: x):
-        state = {attr: copy_func(getattr(self, attr))
-                 for attr in self.STATE_PROPS}
-        if self._queued_file is None:
-            state['_current_file'] = None
-        else:
-            state['_current_file'] = (os.path.realpath(self._queued_file.name),
-                                      self._queued_file.tell())
-        return state
+        .. ATTENTION:: Internal
+           This method is only meant to be used by other Collection methods.
 
-    def _queue_file(self, source, filename=None):
-        try:
-            src_path = os.path.realpath(source)
-        except Exception:
-            raise errors.AssertionError(u"{} not a file path".format(source))
-        try:
-            path_stat = os.stat(src_path)
-        except OSError as stat_error:
-            path_stat = None
-        super(ResumableCollectionWriter, self)._queue_file(source, filename)
-        fd_stat = os.fstat(self._queued_file.fileno())
-        if not S_ISREG(fd_stat.st_mode):
-            # We won't be able to resume from this cache anyway, so don't
-            # worry about further checks.
-            self._dependencies[source] = tuple(fd_stat)
-        elif path_stat is None:
-            raise errors.AssertionError(
-                u"could not stat {}: {}".format(source, stat_error))
-        elif path_stat.st_ino != fd_stat.st_ino:
-            raise errors.AssertionError(
-                u"{} changed between open and stat calls".format(source))
+        Set this collection's cached "committed" flag to the given
+        value and propagates it as needed.
+        """
+        if value == self._committed:
+            return
+        if value:
+            for k,v in listitems(self._items):
+                v.set_committed(True)
+            self._committed = True
         else:
-            self._dependencies[src_path] = tuple(fd_stat)
+            self._committed = False
+            if self.parent is not None:
+                self.parent.set_committed(False)
 
-    def write(self, data):
-        if self._queued_file is None:
-            raise errors.AssertionError(
-                "resumable writer can't accept unsourced data")
-        return super(ResumableCollectionWriter, self).write(data)
+    @synchronized
+    def __iter__(self) -> Iterator[str]:
+        """Iterate names of streams and files in this collection
 
+        This method does not recurse. It only iterates the contents of this
+        collection's corresponding stream.
+        """
+        return iter(viewkeys(self._items))
 
-ADD = "add"
-DEL = "del"
-MOD = "mod"
-TOK = "tok"
-FILE = "file"
-COLLECTION = "collection"
+    @synchronized
+    def __getitem__(self, k: str) -> CollectionItem:
+        """Get a `arvados.arvfile.ArvadosFile` or `Subcollection` in this collection
 
-class RichCollectionBase(CollectionBase):
-    """Base class for Collections and Subcollections.
+        This method does not recurse. If you want to search a path, use
+        `RichCollectionBase.find` instead.
+        """
+        return self._items[k]
 
-    Implements the majority of functionality relating to accessing items in the
-    Collection.
-
-    """
-
-    def __init__(self, parent=None):
-        self.parent = parent
-        self._committed = False
-        self._has_remote_blocks = False
-        self._callback = None
-        self._items = {}
-
-    def _my_api(self):
-        raise NotImplementedError()
-
-    def _my_keep(self):
-        raise NotImplementedError()
-
-    def _my_block_manager(self):
-        raise NotImplementedError()
-
-    def writable(self):
-        raise NotImplementedError()
-
-    def root_collection(self):
-        raise NotImplementedError()
-
-    def notify(self, event, collection, name, item):
-        raise NotImplementedError()
-
-    def stream_name(self):
-        raise NotImplementedError()
+    @synchronized
+    def __contains__(self, k: str) -> bool:
+        """Indicate whether this collection has an item with this name
 
+        This method does not recurse. If you want to check a path, use
+        `RichCollectionBase.exists` instead.
+        """
+        return k in self._items
 
     @synchronized
-    def has_remote_blocks(self):
-        """Recursively check for a +R segment locator signature."""
-
-        if self._has_remote_blocks:
-            return True
-        for item in self:
-            if self[item].has_remote_blocks():
-                return True
-        return False
+    def __len__(self) -> int:
+        """Get the number of items directly contained in this collection
 
-    @synchronized
-    def set_has_remote_blocks(self, val):
-        self._has_remote_blocks = val
-        if self.parent:
-            self.parent.set_has_remote_blocks(val)
+        This method does not recurse. It only counts the streams and files
+        in this collection's corresponding stream.
+        """
+        return len(self._items)
 
     @must_be_writable
     @synchronized
-    def find_or_create(self, path, create_type):
-        """Recursively search the specified file path.
-
-        May return either a `Collection` or `ArvadosFile`.  If not found, will
-        create a new item at the specified path based on `create_type`.  Will
-        create intermediate subcollections needed to contain the final item in
-        the path.
-
-        :create_type:
-          One of `arvados.collection.FILE` or
-          `arvados.collection.COLLECTION`.  If the path is not found, and value
-          of create_type is FILE then create and return a new ArvadosFile for
-          the last path component.  If COLLECTION, then create and return a new
-          Collection for the last path component.
+    def __delitem__(self, p: str) -> None:
+        """Delete an item from this collection's stream
 
+        This method does not recurse. If you want to remove an item by
+        path, use `RichCollectionBase.remove` instead.
         """
-
-        pathcomponents = path.split("/", 1)
-        if pathcomponents[0]:
-            item = self._items.get(pathcomponents[0])
-            if len(pathcomponents) == 1:
-                if item is None:
-                    # create new file
-                    if create_type == COLLECTION:
-                        item = Subcollection(self, pathcomponents[0])
-                    else:
-                        item = ArvadosFile(self, pathcomponents[0])
-                    self._items[pathcomponents[0]] = item
-                    self.set_committed(False)
-                    self.notify(ADD, self, pathcomponents[0], item)
-                return item
-            else:
-                if item is None:
-                    # create new collection
-                    item = Subcollection(self, pathcomponents[0])
-                    self._items[pathcomponents[0]] = item
-                    self.set_committed(False)
-                    self.notify(ADD, self, pathcomponents[0], item)
-                if isinstance(item, RichCollectionBase):
-                    return item.find_or_create(pathcomponents[1], create_type)
-                else:
-                    raise IOError(errno.ENOTDIR, "Not a directory", pathcomponents[0])
-        else:
-            return self
+        del self._items[p]
+        self.set_committed(False)
+        self.notify(DEL, self, p, None)
 
     @synchronized
-    def find(self, path):
-        """Recursively search the specified file path.
-
-        May return either a Collection or ArvadosFile. Return None if not
-        found.
-        If path is invalid (ex: starts with '/'), an IOError exception will be
-        raised.
+    def keys(self) -> Iterator[str]:
+        """Iterate names of streams and files in this collection
 
+        This method does not recurse. It only iterates the contents of this
+        collection's corresponding stream.
         """
-        if not path:
-            raise errors.ArgumentError("Parameter 'path' is empty.")
-
-        pathcomponents = path.split("/", 1)
-        if pathcomponents[0] == '':
-            raise IOError(errno.ENOTDIR, "Not a directory", pathcomponents[0])
-
-        item = self._items.get(pathcomponents[0])
-        if item is None:
-            return None
-        elif len(pathcomponents) == 1:
-            return item
-        else:
-            if isinstance(item, RichCollectionBase):
-                if pathcomponents[1]:
-                    return item.find(pathcomponents[1])
-                else:
-                    return item
-            else:
-                raise IOError(errno.ENOTDIR, "Not a directory", pathcomponents[0])
+        return self._items.keys()
 
     @synchronized
-    def mkdirs(self, path):
-        """Recursive subcollection create.
-
-        Like `os.makedirs()`.  Will create intermediate subcollections needed
-        to contain the leaf subcollection path.
-
-        """
-
-        if self.find(path) != None:
-            raise IOError(errno.EEXIST, "Directory or file exists", path)
-
-        return self.find_or_create(path, COLLECTION)
-
-    def open(self, path, mode="r", encoding=None):
-        """Open a file-like object for access.
-
-        :path:
-          path to a file in the collection
-        :mode:
-          a string consisting of "r", "w", or "a", optionally followed
-          by "b" or "t", optionally followed by "+".
-          :"b":
-            binary mode: write() accepts bytes, read() returns bytes.
-          :"t":
-            text mode (default): write() accepts strings, read() returns strings.
-          :"r":
-            opens for reading
-          :"r+":
-            opens for reading and writing.  Reads/writes share a file pointer.
-          :"w", "w+":
-            truncates to 0 and opens for reading and writing.  Reads/writes share a file pointer.
-          :"a", "a+":
-            opens for reading and writing.  All writes are appended to
-            the end of the file.  Writing does not affect the file pointer for
-            reading.
+    def values(self) -> List[CollectionItem]:
+        """Get a list of objects in this collection's stream
 
+        The return value includes a `Subcollection` for every stream, and an
+        `arvados.arvfile.ArvadosFile` for every file, directly within this
+        collection's stream.  This method does not recurse.
         """
-
-        if not re.search(r'^[rwa][bt]?\+?$', mode):
-            raise errors.ArgumentError("Invalid mode {!r}".format(mode))
-
-        if mode[0] == 'r' and '+' not in mode:
-            fclass = ArvadosFileReader
-            arvfile = self.find(path)
-        elif not self.writable():
-            raise IOError(errno.EROFS, "Collection is read only")
-        else:
-            fclass = ArvadosFileWriter
-            arvfile = self.find_or_create(path, FILE)
-
-        if arvfile is None:
-            raise IOError(errno.ENOENT, "File not found", path)
-        if not isinstance(arvfile, ArvadosFile):
-            raise IOError(errno.EISDIR, "Is a directory", path)
-
-        if mode[0] == 'w':
-            arvfile.truncate(0)
-
-        binmode = mode[0] + 'b' + re.sub('[bt]', '', mode[1:])
-        f = fclass(arvfile, mode=binmode, num_retries=self.num_retries)
-        if 'b' not in mode:
-            bufferclass = io.BufferedRandom if f.writable() else io.BufferedReader
-            f = io.TextIOWrapper(bufferclass(WrappableFile(f)), encoding=encoding)
-        return f
-
-    def modified(self):
-        """Determine if the collection has been modified since last commited."""
-        return not self.committed()
-
-    @synchronized
-    def committed(self):
-        """Determine if the collection has been committed to the API server."""
-        return self._committed
+        return listvalues(self._items)
 
     @synchronized
-    def set_committed(self, value=True):
-        """Recursively set committed flag.
+    def items(self) -> List[Tuple[str, CollectionItem]]:
+        """Get a list of `(name, object)` tuples from this collection's stream
 
-        If value is True, set committed to be True for this and all children.
-
-        If value is False, set committed to be False for this and all parents.
+        The return value includes a `Subcollection` for every stream, and an
+        `arvados.arvfile.ArvadosFile` for every file, directly within this
+        collection's stream.  This method does not recurse.
         """
-        if value == self._committed:
-            return
-        if value:
-            for k,v in listitems(self._items):
-                v.set_committed(True)
-            self._committed = True
-        else:
-            self._committed = False
-            if self.parent is not None:
-                self.parent.set_committed(False)
+        return listitems(self._items)
 
-    @synchronized
-    def __iter__(self):
-        """Iterate over names of files and collections contained in this collection."""
-        return iter(viewkeys(self._items))
+    def exists(self, path: str) -> bool:
+        """Indicate whether this collection includes an item at `path`
 
-    @synchronized
-    def __getitem__(self, k):
-        """Get a file or collection that is directly contained by this collection.
+        This method returns `True` if `path` refers to a stream or file within
+        this collection, else `False`.
 
-        If you want to search a path, use `find()` instead.
+        Arguments:
 
+        * path: str --- The path to check for existence within this collection.
         """
-        return self._items[k]
-
-    @synchronized
-    def __contains__(self, k):
-        """Test if there is a file or collection a directly contained by this collection."""
-        return k in self._items
-
-    @synchronized
-    def __len__(self):
-        """Get the number of items directly contained in this collection."""
-        return len(self._items)
+        return self.find(path) is not None
 
     @must_be_writable
     @synchronized
-    def __delitem__(self, p):
-        """Delete an item by name which is directly contained by this collection."""
-        del self._items[p]
-        self.set_committed(False)
-        self.notify(DEL, self, p, None)
-
-    @synchronized
-    def keys(self):
-        """Get a list of names of files and collections directly contained in this collection."""
-        return self._items.keys()
-
-    @synchronized
-    def values(self):
-        """Get a list of files and collection objects directly contained in this collection."""
-        return listvalues(self._items)
-
-    @synchronized
-    def items(self):
-        """Get a list of (name, object) tuples directly contained in this collection."""
-        return listitems(self._items)
+    def remove(self, path: str, recursive: bool=False) -> None:
+        """Remove the file or stream at `path`
 
-    def exists(self, path):
-        """Test if there is a file or collection at `path`."""
-        return self.find(path) is not None
+        Arguments:
 
-    @must_be_writable
-    @synchronized
-    def remove(self, path, recursive=False):
-        """Remove the file or subcollection (directory) at `path`.
+        * path: str --- The path of the item to remove from the collection
 
-        :recursive:
-          Specify whether to remove non-empty subcollections (True), or raise an error (False).
+        * recursive: bool --- Controls the method's behavior if `path` refers
+          to a nonempty stream. If `False` (the default), this method raises
+          `OSError` with errno `ENOTEMPTY`. If `True`, this method removes all
+          items under the stream.
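+
+        For example (paths are illustrative):
+
+            coll.remove('tmp/scratch.txt')
+            coll.remove('tmp', recursive=True)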
         """
-
         if not path:
             raise errors.ArgumentError("Parameter 'path' is empty.")
 
@@ -825,26 +556,33 @@ class RichCollectionBase(CollectionBase):
 
     @must_be_writable
     @synchronized
-    def add(self, source_obj, target_name, overwrite=False, reparent=False):
-        """Copy or move a file or subcollection to this collection.
+    def add(
+            self,
+            source_obj: CollectionItem,
+            target_name: str,
+            overwrite: bool=False,
+            reparent: bool=False,
+    ) -> None:
+        """Copy or move a file or subcollection object to this collection
 
-        :source_obj:
-          An ArvadosFile, or Subcollection object
+        Arguments:
 
-        :target_name:
-          Destination item name.  If the target name already exists and is a
-          file, this will raise an error unless you specify `overwrite=True`.
+        * source_obj: arvados.arvfile.ArvadosFile | Subcollection --- The file or subcollection
+          to add to this collection.
 
-        :overwrite:
-          Whether to overwrite target file if it already exists.
+        * target_name: str --- The path inside this collection where
+          `source_obj` should be added.
 
-        :reparent:
-          If True, source_obj will be moved from its parent collection to this collection.
-          If False, source_obj will be copied and the parent collection will be
-          unmodified.
+        * overwrite: bool --- Controls the behavior of this method when the
+          collection already contains an object at `target_name`. If `False`
+          (the default), this method will raise `FileExistsError`. If `True`,
+          the object at `target_name` will be replaced with `source_obj`.
 
+        * reparent: bool --- Controls whether this method copies or moves
+          `source_obj`. If `False` (the default), `source_obj` is copied into
+          this collection. If `True`, `source_obj` is moved into this
+          collection.
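+
+        A sketch (assumes `other_coll` is another collection object that
+        contains the source file):
+
+            src_file = other_coll.find('inputs/reads.fastq')
+            coll.add(src_file, 'reads.fastq', overwrite=True)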
         """
-
         if target_name in self and not overwrite:
             raise IOError(errno.EEXIST, "File already exists", target_name)
 
@@ -911,92 +649,117 @@ class RichCollectionBase(CollectionBase):
 
     @must_be_writable
     @synchronized
-    def copy(self, source, target_path, source_collection=None, overwrite=False):
-        """Copy a file or subcollection to a new path in this collection.
+    def copy(
+            self,
+            source: Union[str, CollectionItem],
+            target_path: str,
+            source_collection: Optional['RichCollectionBase']=None,
+            overwrite: bool=False,
+    ) -> None:
+        """Copy a file or subcollection object to this collection
 
-        :source:
-          A string with a path to source file or subcollection, or an actual ArvadosFile or Subcollection object.
+        Arguments:
 
-        :target_path:
-          Destination file or path.  If the target path already exists and is a
-          subcollection, the item will be placed inside the subcollection.  If
-          the target path already exists and is a file, this will raise an error
-          unless you specify `overwrite=True`.
+        * source: str | arvados.arvfile.ArvadosFile |
+          arvados.collection.Subcollection --- The file or subcollection to
+          copy into this collection. If `source` is a str, the object will be
+          found by looking up this path from `source_collection` (see
+          below).
 
-        :source_collection:
-          Collection to copy `source_path` from (default `self`)
+        * target_path: str --- The path inside this collection where the
+          source object should be added.
 
-        :overwrite:
-          Whether to overwrite target file if it already exists.
-        """
+        * source_collection: arvados.collection.Collection | None --- The
+          collection to find the source object from when `source` is a
+          path. Defaults to the current collection (`self`).
 
+        * overwrite: bool --- Controls the behavior of this method when the
+          collection already contains an object at `target_path`. If `False`
+          (the default), this method will raise `FileExistsError`. If `True`,
+          the object at `target_path` will be replaced with the source object.
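+
+        For example, copying a file within the same collection (paths are
+        illustrative):
+
+            coll.copy('inputs/reads.fastq', 'backup/reads.fastq')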
+        """
         source_obj, target_dir, target_name = self._get_src_target(source, target_path, source_collection, True)
         target_dir.add(source_obj, target_name, overwrite, False)
 
     @must_be_writable
     @synchronized
-    def rename(self, source, target_path, source_collection=None, overwrite=False):
-        """Move a file or subcollection from `source_collection` to a new path in this collection.
+    def rename(
+            self,
+            source: Union[str, CollectionItem],
+            target_path: str,
+            source_collection: Optional['RichCollectionBase']=None,
+            overwrite: bool=False,
+    ) -> None:
+        """Move a file or subcollection object to this collection
+
+        Arguments:
 
-        :source:
-          A string with a path to source file or subcollection.
+        * source: str | arvados.arvfile.ArvadosFile |
+          arvados.collection.Subcollection --- The file or subcollection to
+          move into this collection. If `source` is a str, the object will be
+          found by looking up this path from `source_collection` (see
+          below).
 
-        :target_path:
-          Destination file or path.  If the target path already exists and is a
-          subcollection, the item will be placed inside the subcollection.  If
-          the target path already exists and is a file, this will raise an error
-          unless you specify `overwrite=True`.
+        * target_path: str --- The path inside this collection where the
+          source object should be added.
 
-        :source_collection:
-          Collection to copy `source_path` from (default `self`)
+        * source_collection: arvados.collection.Collection | None --- The
+          collection to find the source object from when `source` is a
+          path. Defaults to the current collection (`self`).
 
-        :overwrite:
-          Whether to overwrite target file if it already exists.
+        * overwrite: bool --- Controls the behavior of this method when the
+          collection already contains an object at `target_path`. If `False`
+          (the default), this method will raise `FileExistsError`. If `True`,
+          the object at `target_path` will be replaced with the source object.
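+
+        For example (paths are illustrative):
+
+            coll.rename('draft.txt', 'final.txt', overwrite=True)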
         """
-
         source_obj, target_dir, target_name = self._get_src_target(source, target_path, source_collection, False)
         if not source_obj.writable():
             raise IOError(errno.EROFS, "Source collection is read only", source)
         target_dir.add(source_obj, target_name, overwrite, True)
 
-    def portable_manifest_text(self, stream_name="."):
-        """Get the manifest text for this collection, sub collections and files.
+    def portable_manifest_text(self, stream_name: str=".") -> str:
+        """Get the portable manifest text for this collection
 
-        This method does not flush outstanding blocks to Keep.  It will return
-        a normalized manifest with access tokens stripped.
+        The portable manifest text is normalized, and does not include access
+        tokens. This method does not flush outstanding blocks to Keep.
 
-        :stream_name:
-          Name to use for this stream (directory)
+        Arguments:
 
+        * stream_name: str --- The name to use for this collection's stream in
+          the generated manifest. Default `'.'`.
         """
         return self._get_manifest_text(stream_name, True, True)
 
     @synchronized
-    def manifest_text(self, stream_name=".", strip=False, normalize=False,
-                      only_committed=False):
-        """Get the manifest text for this collection, sub collections and files.
-
-        This method will flush outstanding blocks to Keep.  By default, it will
-        not normalize an unmodified manifest or strip access tokens.
+    def manifest_text(
+            self,
+            stream_name: str=".",
+            strip: bool=False,
+            normalize: bool=False,
+            only_committed: bool=False,
+    ) -> str:
+        """Get the manifest text for this collection
 
-        :stream_name:
-          Name to use for this stream (directory)
+        Arguments:
 
-        :strip:
-          If True, remove signing tokens from block locators if present.
-          If False (default), block locators are left unchanged.
+        * stream_name: str --- The name to use for this collection's stream in
+          the generated manifest. Default `'.'`.
 
-        :normalize:
-          If True, always export the manifest text in normalized form
-          even if the Collection is not modified.  If False (default) and the collection
-          is not modified, return the original manifest text even if it is not
-          in normalized form.
+        * strip: bool --- Controls whether or not the returned manifest text
+          includes access tokens. If `False` (the default), the manifest text
+          will include access tokens. If `True`, the manifest text will not
+          include access tokens.
 
-        :only_committed:
-          If True, don't commit pending blocks.
+        * normalize: bool --- Controls whether or not the returned manifest
+          text is normalized. Default `False`.
 
+        * only_committed: bool --- Controls whether or not this method uploads
+          pending data to Keep before building and returning the manifest text.
+          If `False` (the default), this method will finish uploading all data
+          to Keep, then return the final manifest. If `True`, this method will
+          build and return a manifest that only refers to the data that has
+          finished uploading at the time this method was called.
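+
+        For example, to get a normalized manifest with access tokens
+        stripped:
+
+            text = coll.manifest_text(strip=True, normalize=True)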
         """
-
         if not only_committed:
             self._my_block_manager().commit_all()
         return self._get_manifest_text(stream_name, strip, normalize,
@@ -1075,11 +838,27 @@ class RichCollectionBase(CollectionBase):
         return remote_blocks
 
     @synchronized
-    def diff(self, end_collection, prefix=".", holding_collection=None):
-        """Generate list of add/modify/delete actions.
+    def diff(
+            self,
+            end_collection: 'RichCollectionBase',
+            prefix: str=".",
+            holding_collection: Optional['Collection']=None,
+    ) -> ChangeList:
+        """Build a list of differences between this collection and another
+
+        Arguments:
+
+        * end_collection: arvados.collection.RichCollectionBase --- A
+          collection object with the desired end state. The returned diff
+          list will describe how to go from the current collection object
+          `self` to `end_collection`.
 
-        When given to `apply`, will change `self` to match `end_collection`
+        * prefix: str --- The name to use for this collection's stream in
+          the diff list. Default `'.'`.
 
+        * holding_collection: arvados.collection.Collection | None --- A
+          collection object used to hold objects for the returned diff
+          list. By default, a new empty collection is created.
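+
+        A sketch of the diff/apply round trip (assumes `mine` and `theirs`
+        are both `Collection` objects):
+
+            changes = mine.diff(theirs)
+            mine.apply(changes)  # `mine` now matches `theirs`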
         """
         changes = []
         if holding_collection is None:
@@ -1101,12 +880,20 @@ class RichCollectionBase(CollectionBase):
 
     @must_be_writable
     @synchronized
-    def apply(self, changes):
-        """Apply changes from `diff`.
+    def apply(self, changes: ChangeList) -> None:
+        """Apply a list of changes from to this collection
 
-        If a change conflicts with a local change, it will be saved to an
-        alternate path indicating the conflict.
+        This method takes a list of changes generated by
+        `RichCollectionBase.diff` and applies it to this
+        collection. Afterward, the state of this collection object will
+        match the state of `end_collection` passed to `diff`. If a change
+        conflicts with a local change, it will be saved to an alternate path
+        indicating the conflict.
 
+        Arguments:
+
+        * changes: arvados.collection.ChangeList --- The list of differences
+          generated by `RichCollectionBase.diff`.
         """
         if changes:
             self.set_committed(False)
@@ -1148,8 +935,8 @@ class RichCollectionBase(CollectionBase):
                 # else, the file is modified or already removed, in either
                 # case we don't want to try to remove it.
 
-    def portable_data_hash(self):
-        """Get the portable data hash for this collection's manifest."""
+    def portable_data_hash(self) -> str:
+        """Get the portable data hash for this collection's manifest"""
         if self._manifest_locator and self.committed():
             # If the collection is already saved on the API server, and it's committed
             # then return API server's PDH response.
@@ -1159,25 +946,64 @@ class RichCollectionBase(CollectionBase):
             return '{}+{}'.format(hashlib.md5(stripped).hexdigest(), len(stripped))
 
     @synchronized
-    def subscribe(self, callback):
+    def subscribe(self, callback: ChangeCallback) -> None:
+        """Set a notify callback for changes to this collection
+
+        Arguments:
+
+        * callback: arvados.collection.ChangeCallback --- The callable to
+          call each time the collection is changed.
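+
+        A sketch with a simple logging callback (the callback signature
+        matches `RichCollectionBase.notify`):
+
+            def log_change(event, collection, name, item):
+                print(event, name)
+
+            coll.subscribe(log_change)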
+        """
         if self._callback is None:
             self._callback = callback
         else:
             raise errors.ArgumentError("A callback is already set on this collection.")
 
     @synchronized
-    def unsubscribe(self):
+    def unsubscribe(self) -> None:
+        """Remove any notify callback set for changes to this collection"""
         if self._callback is not None:
             self._callback = None
 
     @synchronized
-    def notify(self, event, collection, name, item):
+    def notify(
+            self,
+            event: ChangeType,
+            collection: 'RichCollectionBase',
+            name: str,
+            item: CollectionItem,
+    ) -> None:
+        """Notify any subscribed callback about a change to this collection
+
+        .. ATTENTION:: Internal
+           This method is only meant to be used by other Collection methods.
+
+        If a callback has been registered with `RichCollectionBase.subscribe`,
+        it will be called with information about a change to this collection.
+        Then this notification will be propagated to this collection's root.
+
+        Arguments:
+
+        * event: Literal[ADD, DEL, MOD, TOK] --- The type of modification to
+          the collection.
+
+        * collection: arvados.collection.RichCollectionBase --- The
+          collection that was modified.
+
+        * name: str --- The name of the file or stream within `collection` that
+          was modified.
+
+        * item: arvados.arvfile.ArvadosFile |
+          arvados.collection.Subcollection --- The new contents at `name`
+          within `collection`.
+        """
         if self._callback:
             self._callback(event, collection, name, item)
         self.root_collection().notify(event, collection, name, item)
 
     @synchronized
-    def __eq__(self, other):
+    def __eq__(self, other: Any) -> bool:
+        """Indicate whether this collection object is equal to another"""
         if other is self:
             return True
         if not isinstance(other, RichCollectionBase):
@@ -1191,101 +1017,97 @@ class RichCollectionBase(CollectionBase):
                 return False
         return True
 
-    def __ne__(self, other):
+    def __ne__(self, other: Any) -> bool:
+        """Indicate whether this collection object is not equal to another"""
         return not self.__eq__(other)
 
     @synchronized
-    def flush(self):
-        """Flush bufferblocks to Keep."""
+    def flush(self) -> None:
+        """Upload any pending data to Keep"""
         for e in listvalues(self):
             e.flush()
 
 
 class Collection(RichCollectionBase):
-    """Represents the root of an Arvados Collection.
-
-    This class is threadsafe.  The root collection object, all subcollections
-    and files are protected by a single lock (i.e. each access locks the entire
-    collection).
-
-    Brief summary of
-    useful methods:
-
-    :To read an existing file:
-      `c.open("myfile", "r")`
-
-    :To write a new file:
-      `c.open("myfile", "w")`
-
-    :To determine if a file exists:
-      `c.find("myfile") is not None`
+    """Read and manipulate an Arvados collection
 
-    :To copy a file:
-      `c.copy("source", "dest")`
-
-    :To delete a file:
-      `c.remove("myfile")`
-
-    :To save to an existing collection record:
-      `c.save()`
-
-    :To save a new collection record:
-    `c.save_new()`
-
-    :To merge remote changes into this object:
-      `c.update()`
-
-    Must be associated with an API server Collection record (during
-    initialization, or using `save_new`) to use `save` or `update`
+    This class provides a high-level interface to create, read, and update
+    Arvados collections and their contents. Refer to the Arvados Python SDK
+    cookbook for [an introduction to using the Collection class][cookbook].
 
+    [cookbook]: https://doc.arvados.org/sdk/python/cookbook.html#working-with-collections
     """
 
-    def __init__(self, manifest_locator_or_text=None,
-                 api_client=None,
-                 keep_client=None,
-                 num_retries=10,
-                 parent=None,
-                 apiconfig=None,
-                 block_manager=None,
-                 replication_desired=None,
-                 storage_classes_desired=None,
-                 put_threads=None):
-        """Collection constructor.
-
-        :manifest_locator_or_text:
-          An Arvados collection UUID, portable data hash, raw manifest
-          text, or (if creating an empty collection) None.
-
-        :parent:
-          the parent Collection, may be None.
-
-        :apiconfig:
-          A dict containing keys for ARVADOS_API_HOST and ARVADOS_API_TOKEN.
-          Prefer this over supplying your own api_client and keep_client (except in testing).
-          Will use default config settings if not specified.
+    def __init__(self, manifest_locator_or_text: Optional[str]=None,
+                 api_client: Optional['arvados.api_resources.ArvadosAPIClient']=None,
+                 keep_client: Optional['arvados.keep.KeepClient']=None,
+                 num_retries: int=10,
+                 parent: Optional['Collection']=None,
+                 apiconfig: Optional[Mapping[str, str]]=None,
+                 block_manager: Optional['arvados.arvfile._BlockManager']=None,
+                 replication_desired: Optional[int]=None,
+                 storage_classes_desired: Optional[List[str]]=None,
+                 put_threads: Optional[int]=None):
+        """Initialize a Collection object
 
-        :api_client:
-          The API client object to use for requests.  If not specified, create one using `apiconfig`.
-
-        :keep_client:
-          the Keep client to use for requests.  If not specified, create one using `apiconfig`.
-
-        :num_retries:
-          the number of retries for API and Keep requests.
-
-        :block_manager:
-          the block manager to use.  If not specified, create one.
-
-        :replication_desired:
-          How many copies should Arvados maintain. If None, API server default
-          configuration applies. If not None, this value will also be used
-          for determining the number of block copies being written.
-
-        :storage_classes_desired:
-          A list of storage class names where to upload the data. If None,
-          the keep client is expected to store the data into the cluster's
-          default storage class(es).
+        Arguments:
 
+        * manifest_locator_or_text: str | None --- This string can contain a
+          collection manifest text, portable data hash, or UUID. When given a
+          portable data hash or UUID, this instance will load a collection
+          record from the API server. Otherwise, this instance will represent a
+          new collection without an API server record. The default value `None`
+          instantiates a new collection with an empty manifest.
+
+        * api_client: arvados.api_resources.ArvadosAPIClient | None --- The
+          Arvados API client object this instance uses to make requests. If
+          none is given, this instance creates its own client using the
+          settings from `apiconfig` (see below). If your client instantiates
+          many Collection objects, you can help limit memory utilization by
+          calling `arvados.api.api` to construct an
+          `arvados.safeapi.ThreadSafeApiCache`, and use that as the `api_client`
+          for every Collection.
+
+        * keep_client: arvados.keep.KeepClient | None --- The Keep client
+          object this instance uses to make requests. If none is given, this
+          instance creates its own client using its `api_client`.
+
+        * num_retries: int --- The number of times that client requests are
+          retried. Default 10.
+
+        * parent: arvados.collection.Collection | None --- The parent Collection
+          object of this instance, if any. This argument is primarily used by
+          other Collection methods; user client code shouldn't need to use it.
+
+        * apiconfig: Mapping[str, str] | None --- A mapping with entries for
+          `ARVADOS_API_HOST`, `ARVADOS_API_TOKEN`, and optionally
+          `ARVADOS_API_HOST_INSECURE`. When no `api_client` is provided, the
+          Collection object constructs one from these settings. If no
+          mapping is provided, calls `arvados.config.settings` to get these
+          parameters from user configuration.
+
+        * block_manager: arvados.arvfile._BlockManager | None --- The
+          _BlockManager object used by this instance to coordinate reading
+          and writing Keep data blocks. If none is given, this instance
+          constructs its own. This argument is primarily used by other
+          Collection methods; user client code shouldn't need to use it.
+
+        * replication_desired: int | None --- This controls both the value of
+          the `replication_desired` field on API collection records saved by
+          this class, as well as the number of Keep services that the object
+          writes new data blocks to. If none is given, uses the default value
+          configured for the cluster.
+
+        * storage_classes_desired: list[str] | None --- This controls both
+          the value of the `storage_classes_desired` field on API collection
+          records saved by this class, as well as selecting which specific
+          Keep services the object writes new data blocks to. If none is
+          given, defaults to an empty list.
+
+        * put_threads: int | None --- The number of threads to run
+          simultaneously to upload data blocks to Keep. This value is used when
+          building a new `block_manager`. It is unused when a `block_manager`
+          is provided.
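+
+        A minimal construction sketch (the UUID is a placeholder):
+
+            import arvados
+            import arvados.collection
+            coll = arvados.collection.Collection(
+                'zzzzz-4zz18-12345abcde12345',
+                api_client=arvados.api('v1'),
+            )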
         """
 
         if storage_classes_desired and type(storage_classes_desired) is not list:
@@ -1339,19 +1161,33 @@ class Collection(RichCollectionBase):
             except errors.SyntaxError as e:
                 raise errors.ArgumentError("Error processing manifest text: %s", str(e)) from None
 
-    def storage_classes_desired(self):
+    def storage_classes_desired(self) -> List[str]:
+        """Get this collection's `storage_classes_desired` value"""
         return self._storage_classes_desired or []
 
-    def root_collection(self):
+    def root_collection(self) -> 'Collection':
         return self
 
-    def get_properties(self):
+    def get_properties(self) -> Properties:
+        """Get this collection's properties
+
+        This method always returns a dict. If this collection object does not
+        have an associated API record, or that record does not have any
+        properties set, this method returns an empty dict.
+        """
         if self._api_response and self._api_response["properties"]:
             return self._api_response["properties"]
         else:
             return {}
 
-    def get_trash_at(self):
+    def get_trash_at(self) -> Optional[datetime.datetime]:
+        """Get this collection's `trash_at` field
+
+        This method parses the `trash_at` field of the collection's API
+        record and returns a datetime from it. If that field is not set, or
+        this collection object does not have an associated API record,
+        returns None.
+        """
         if self._api_response and self._api_response["trash_at"]:
             try:
                 return ciso8601.parse_datetime(self._api_response["trash_at"])
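A sketch of using these accessors, assuming `coll` is a `Collection` instance:

    props = coll.get_properties()    # always a dict, possibly empty
    trash_at = coll.get_trash_at()   # datetime.datetime or None
    if trash_at is not None:
        print('scheduled for trash at', trash_at.isoformat())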
@@ -1360,21 +1196,57 @@ class Collection(RichCollectionBase):
         else:
             return None
 
-    def stream_name(self):
+    def stream_name(self) -> str:
         return "."
 
-    def writable(self):
+    def writable(self) -> bool:
         return True
 
     @synchronized
-    def known_past_version(self, modified_at_and_portable_data_hash):
+    def known_past_version(
+            self,
+            modified_at_and_portable_data_hash: Tuple[Optional[str], Optional[str]]
+    ) -> bool:
+        """Indicate whether an API record for this collection has been seen before
+
+        As this collection object loads records from the API server, it records
+        their `modified_at` and `portable_data_hash` fields. This method accepts
+        a 2-tuple with values for those fields, and returns `True` if the
+        combination was previously loaded.
+        """
         return modified_at_and_portable_data_hash in self._past_versions
 
     @synchronized
     @retry_method
-    def update(self, other=None, num_retries=None):
-        """Merge the latest collection on the API server with the current collection."""
+    def update(
+            self,
+            other: Optional['Collection']=None,
+            num_retries: Optional[int]=None,
+    ) -> None:
+        """Merge another collection's contents into this one
+
+        This method compares the manifest of this collection instance with
+        another, then updates this instance's manifest with changes from the
+        other, renaming files to flag conflicts where necessary.
+
+        When called without any arguments, this method reloads the collection's
+        API record, and updates this instance with any changes that have
+        appeared server-side. If this instance does not have a corresponding
+        API record, this method raises `arvados.errors.ArgumentError`.
+
+        Arguments:
+
+        * other: arvados.collection.Collection | None --- The collection
+          whose contents should be merged into this instance. When not
+          provided, this method reloads this collection's API record and
+          constructs a Collection object from it (raising
+          `arvados.errors.ArgumentError`, as noted above, if this instance
+          has no corresponding API record).
 
+        * num_retries: int | None --- The number of times to retry reloading
+          the collection's API record from the API server. If not specified,
+          uses the `num_retries` provided when this instance was constructed.
+        """
         if other is None:
             if self._manifest_locator is None:
                 raise errors.ArgumentError("`other` is None but collection does not have a manifest_locator uuid")
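A sketch of the refresh behavior described above (UUID hypothetical):

    coll = arvados.collection.Collection('zzzzz-4zz18-zzzzzzzzzzzzzzz')
    # ... the record may change server-side in the meantime ...
    coll.update()                # reload the API record and merge changes
    coll.update(num_retries=3)   # same, with an explicit retry count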
@@ -1467,32 +1339,65 @@ class Collection(RichCollectionBase):
         return self
 
     def __exit__(self, exc_type, exc_value, traceback):
-        """Support scoped auto-commit in a with: block."""
-        if exc_type is None:
+        """Exit a context with this collection instance
+
+        If no exception was raised inside the context block, and this
+        collection is writable and has a corresponding API record, that
+        record will be updated to match the state of this instance at the end
+        of the block.
+        """
+        if exc_type is None:
             if self.writable() and self._has_collection_uuid():
                 self.save()
         self.stop_threads()
 
-    def stop_threads(self):
+    def stop_threads(self) -> None:
+        """Stop background Keep upload/download threads"""
         if self._block_manager is not None:
             self._block_manager.stop_threads()
 
     @synchronized
-    def manifest_locator(self):
-        """Get the manifest locator, if any.
-
-        The manifest locator will be set when the collection is loaded from an
-        API server record or the portable data hash of a manifest.
-
-        The manifest locator will be None if the collection is newly created or
-        was created directly from manifest text.  The method `save_new()` will
-        assign a manifest locator.
-
+    def manifest_locator(self) -> Optional[str]:
+        """Get this collection's manifest locator, if any
+
+        * If this collection instance is associated with an API record with a
+          UUID, return that.
+        * Otherwise, if this collection instance was loaded from an API record
+          by portable data hash, return that.
+        * Otherwise, return `None`.
         """
         return self._manifest_locator
 
     @synchronized
-    def clone(self, new_parent=None, new_name=None, readonly=False, new_config=None):
+    def clone(
+            self,
+            new_parent: Optional['Collection']=None,
+            new_name: Optional[str]=None,
+            readonly: bool=False,
+            new_config: Optional[Mapping[str, str]]=None,
+    ) -> 'Collection':
+        """Create a Collection object with the same contents as this instance
+
+        This method creates a new Collection object with contents that match
+        this instance's. The new collection will not be associated with any API
+        record.
+
+        Arguments:
+
+        * new_parent: arvados.collection.Collection | None --- This value is
+          passed to the new Collection's constructor as the `parent`
+          argument.
+
+        * new_name: str | None --- This value is unused.
+
+        * readonly: bool --- If this value is true, this method constructs and
+          returns a `CollectionReader`. Otherwise, it returns a mutable
+          `Collection`. Default `False`.
+
+        * new_config: Mapping[str, str] | None --- This value is passed to the
+          new Collection's constructor as `apiconfig`. If no value is provided,
+          defaults to the configuration passed to this instance's constructor.
+        """
         if new_config is None:
             new_config = self._config
         if readonly:
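`__exit__` above enables the scoped auto-commit pattern; a minimal sketch, assuming a writable collection with an API record (UUID hypothetical):

    with arvados.collection.Collection('zzzzz-4zz18-zzzzzzzzzzzzzzz') as coll:
        with coll.open('data.txt', 'w') as f:
            f.write('hello\n')
    # On a clean exit from the block, coll.save() has run and the
    # background threads have been stopped.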
@@ -1504,31 +1409,31 @@ class Collection(RichCollectionBase):
         return newcollection
 
     @synchronized
-    def api_response(self):
-        """Returns information about this Collection fetched from the API server.
-
-        If the Collection exists in Keep but not the API server, currently
-        returns None.  Future versions may provide a synthetic response.
+    def api_response(self) -> Optional[Dict[str, Any]]:
+        """Get this instance's associated API record
 
+        If this Collection instance has an associated API record, return it.
+        Otherwise, return `None`.
         """
         return self._api_response
 
-    def find_or_create(self, path, create_type):
-        """See `RichCollectionBase.find_or_create`"""
+    def find_or_create(
+            self,
+            path: str,
+            create_type: CreateType,
+    ) -> CollectionItem:
         if path == ".":
             return self
         else:
             return super(Collection, self).find_or_create(path[2:] if path.startswith("./") else path, create_type)
 
-    def find(self, path):
-        """See `RichCollectionBase.find`"""
+    def find(self, path: str) -> CollectionItem:
         if path == ".":
             return self
         else:
             return super(Collection, self).find(path[2:] if path.startswith("./") else path)
 
-    def remove(self, path, recursive=False):
-        """See `RichCollectionBase.remove`"""
+    def remove(self, path: str, recursive: bool=False) -> None:
         if path == ".":
             raise errors.ArgumentError("Cannot remove '.'")
         else:
@@ -1537,49 +1442,52 @@ class Collection(RichCollectionBase):
     @must_be_writable
     @synchronized
     @retry_method
-    def save(self,
-             properties=None,
-             storage_classes=None,
-             trash_at=None,
-             merge=True,
-             num_retries=None,
-             preserve_version=False):
-        """Save collection to an existing collection record.
-
-        Commit pending buffer blocks to Keep, merge with remote record (if
-        merge=True, the default), and update the collection record. Returns
-        the current manifest text.
-
-        Will raise AssertionError if not associated with a collection record on
-        the API server.  If you want to save a manifest to Keep only, see
-        `save_new()`.
-
-        :properties:
-          Additional properties of collection. This value will replace any existing
-          properties of collection.
-
-        :storage_classes:
-          Specify desirable storage classes to be used when writing data to Keep.
-
-        :trash_at:
-          A collection is *expiring* when it has a *trash_at* time in the future.
-          An expiring collection can be accessed as normal,
-          but is scheduled to be trashed automatically at the *trash_at* time.
-
-        :merge:
-          Update and merge remote changes before saving.  Otherwise, any
-          remote changes will be ignored and overwritten.
-
-        :num_retries:
-          Retry count on API calls (if None,  use the collection default)
-
-        :preserve_version:
-          If True, indicate that the collection content being saved right now
-          should be preserved in a version snapshot if the collection record is
-          updated in the future. Requires that the API server has
-          Collections.CollectionVersioning enabled, if not, setting this will
-          raise an exception.
+    def save(
+            self,
+            properties: Optional[Properties]=None,
+            storage_classes: Optional[StorageClasses]=None,
+            trash_at: Optional[datetime.datetime]=None,
+            merge: bool=True,
+            num_retries: Optional[int]=None,
+            preserve_version: bool=False,
+    ) -> str:
+        """Save collection to an existing API record
+
+        This method updates the instance's corresponding API record to match
+        the instance's state. If this instance does not have a corresponding API
+        record yet, raises `AssertionError`. (To create a new API record, use
+        `Collection.save_new`.) This method returns the saved collection
+        manifest.
 
+        Arguments:
+
+        * properties: dict[str, Any] | None --- If provided, the API record will
+          be updated with these properties. Note this will completely replace
+          any existing properties.
+
+        * storage_classes: list[str] | None --- If provided, the API record will
+          be updated with this value in the `storage_classes_desired` field.
+          This value will also be saved on the instance and used for any
+          changes that follow.
+
+        * trash_at: datetime.datetime | None --- If provided, the API record
+          will be updated with this value in the `trash_at` field.
+
+        * merge: bool --- If `True` (the default), this method will first
+          reload this collection's API record, and merge any new contents into
+          this instance before saving changes. See `Collection.update` for
+          details.
+
+        * num_retries: int | None --- The number of times to retry reloading
+          the collection's API record from the API server. If not specified,
+          uses the `num_retries` provided when this instance was constructed.
+
+        * preserve_version: bool --- This value will be passed directly
+          to the underlying API call. If `True`, the Arvados API will
+          preserve the versions of this collection both immediately before
+          and after the update. If `True` when the API server is not
+          configured with collection versioning, this method raises
+          `arvados.errors.ArgumentError`.
         """
         if properties and type(properties) is not dict:
             raise errors.ArgumentError("properties must be dictionary type.")
@@ -1643,60 +1551,66 @@ class Collection(RichCollectionBase):
     @must_be_writable
     @synchronized
     @retry_method
-    def save_new(self, name=None,
-                 create_collection_record=True,
-                 owner_uuid=None,
-                 properties=None,
-                 storage_classes=None,
-                 trash_at=None,
-                 ensure_unique_name=False,
-                 num_retries=None,
-                 preserve_version=False):
-        """Save collection to a new collection record.
-
-        Commit pending buffer blocks to Keep and, when create_collection_record
-        is True (default), create a new collection record.  After creating a
-        new collection record, this Collection object will be associated with
-        the new record used by `save()`. Returns the current manifest text.
-
-        :name:
-          The collection name.
-
-        :create_collection_record:
-           If True, create a collection record on the API server.
-           If False, only commit blocks to Keep and return the manifest text.
-
-        :owner_uuid:
-          the user, or project uuid that will own this collection.
-          If None, defaults to the current user.
-
-        :properties:
-          Additional properties of collection. This value will replace any existing
-          properties of collection.
-
-        :storage_classes:
-          Specify desirable storage classes to be used when writing data to Keep.
-
-        :trash_at:
-          A collection is *expiring* when it has a *trash_at* time in the future.
-          An expiring collection can be accessed as normal,
-          but is scheduled to be trashed automatically at the *trash_at* time.
-
-        :ensure_unique_name:
-          If True, ask the API server to rename the collection
-          if it conflicts with a collection with the same name and owner.  If
-          False, a name conflict will result in an error.
-
-        :num_retries:
-          Retry count on API calls (if None,  use the collection default)
-
-        :preserve_version:
-          If True, indicate that the collection content being saved right now
-          should be preserved in a version snapshot if the collection record is
-          updated in the future. Requires that the API server has
-          Collections.CollectionVersioning enabled, if not, setting this will
-          raise an exception.
+    def save_new(
+            self,
+            name: Optional[str]=None,
+            create_collection_record: bool=True,
+            owner_uuid: Optional[str]=None,
+            properties: Optional[Properties]=None,
+            storage_classes: Optional[StorageClasses]=None,
+            trash_at: Optional[datetime.datetime]=None,
+            ensure_unique_name: bool=False,
+            num_retries: Optional[int]=None,
+            preserve_version: bool=False,
+    ) -> str:
+        """Save collection to a new API record
+
+        This method finishes uploading new data blocks and (optionally)
+        creates a new API collection record with the provided data. If a new
+        record is created, this instance becomes associated with that record
+        for future updates like `save()`. This method returns the saved
+        collection manifest.
+
+        Arguments:
+
+        * name: str | None --- The `name` field to use on the new collection
+          record. If not specified, a generic default name is generated.
+
+        * create_collection_record: bool --- If `True` (the default), creates a
+          collection record on the API server. If `False`, the method finishes
+          all data uploads and only returns the resulting collection manifest
+          without sending it to the API server.
+
+        * owner_uuid: str | None --- The `owner_uuid` field to use on the
+          new collection record.
 
+        * properties: dict[str, Any] | None --- The `properties` field to use on
+          the new collection record.
+
+        * storage_classes: list[str] | None --- The
+          `storage_classes_desired` field to use on the new collection record.
+
+        * trash_at: datetime.datetime | None --- The `trash_at` field to use
+          on the new collection record.
+
+        * ensure_unique_name: bool --- This value is passed directly to the
+          Arvados API when creating the collection record. If `True`, the API
+          server may modify the submitted `name` to ensure the collection's
+          `name`+`owner_uuid` combination is unique. If `False` (the default)
+          and a collection already exists with the same `name`+`owner_uuid`
+          combination, creating the collection record will raise a validation
+          error.
+
+        * num_retries: int | None --- The number of times to retry reloading
+          the collection's API record from the API server. If not specified,
+          uses the `num_retries` provided when this instance was constructed.
+
+        * preserve_version: bool --- This value will be passed directly
+          to the underlying API call. If `True`, the Arvados API will
+          preserve the versions of this collection both immediately before
+          and after the update. If `True` when the API server is not
+          configured with collection versioning, this method raises
+          `arvados.errors.ArgumentError`.
         """
         if properties and type(properties) is not dict:
             raise errors.ArgumentError("properties must be dictionary type.")
@@ -1834,17 +1748,24 @@ class Collection(RichCollectionBase):
         self.set_committed(True)
 
     @synchronized
-    def notify(self, event, collection, name, item):
+    def notify(
+            self,
+            event: ChangeType,
+            collection: 'RichCollectionBase',
+            name: str,
+            item: CollectionItem,
+    ) -> None:
         if self._callback:
             self._callback(event, collection, name, item)
 
 
 class Subcollection(RichCollectionBase):
-    """This is a subdirectory within a collection that doesn't have its own API
-    server record.
-
-    Subcollection locking falls under the umbrella lock of its root collection.
+    """Read and manipulate a stream/directory within an Arvados collection
 
+    This class represents a single stream (like a directory) within an Arvados
+    `Collection`. It is returned by `Collection.find` and provides the same API.
+    Operations that work on the API collection record propagate to the parent
+    `Collection` object.
     """
 
     def __init__(self, parent, name):
@@ -1854,10 +1775,10 @@ class Subcollection(RichCollectionBase):
         self.name = name
         self.num_retries = parent.num_retries
 
-    def root_collection(self):
+    def root_collection(self) -> 'Collection':
         return self.parent.root_collection()
 
-    def writable(self):
+    def writable(self) -> bool:
         return self.root_collection().writable()
 
     def _my_api(self):
@@ -1869,11 +1790,15 @@ class Subcollection(RichCollectionBase):
     def _my_block_manager(self):
         return self.root_collection()._my_block_manager()
 
-    def stream_name(self):
+    def stream_name(self) -> str:
         return os.path.join(self.parent.stream_name(), self.name)
 
     @synchronized
-    def clone(self, new_parent, new_name):
+    def clone(
+            self,
+            new_parent: Optional['Collection']=None,
+            new_name: Optional[str]=None,
+    ) -> 'Subcollection':
         c = Subcollection(new_parent, new_name)
         c._clonefrom(self)
         return c
@@ -1900,11 +1825,11 @@ class Subcollection(RichCollectionBase):
 
 
 class CollectionReader(Collection):
-    """A read-only collection object.
-
-    Initialize from a collection UUID or portable data hash, or raw
-    manifest text.  See `Collection` constructor for detailed options.
+    """Read-only `Collection` subclass
 
+    This class will never create or update any API collection records. You can
+    use this class for additional code safety when you only need to read
+    existing collections.
     """
     def __init__(self, manifest_locator_or_text, *args, **kwargs):
         self._in_init = True
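A sketch of the read-only usage this class is meant for (portable data hash hypothetical):

    reader = arvados.collection.CollectionReader(
        '0123456789abcdef0123456789abcdef+1234')
    with reader.open('data.txt') as f:
        print(f.read())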
@@ -1918,7 +1843,7 @@ class CollectionReader(Collection):
         # all_streams() and all_files()
         self._streams = None
 
-    def writable(self):
+    def writable(self) -> bool:
         return self._in_init
 
     def _populate_streams(orig_func):
@@ -1935,16 +1860,10 @@ class CollectionReader(Collection):
             return orig_func(self, *args, **kwargs)
         return populate_streams_wrapper
 
+    @arvados.util._deprecated('3.0', 'Collection iteration')
     @_populate_streams
     def normalize(self):
-        """Normalize the streams returned by `all_streams`.
-
-        This method is kept for backwards compatability and only affects the
-        behavior of `all_streams()` and `all_files()`
-
-        """
-
-        # Rearrange streams
+        """Normalize the streams returned by `all_streams`"""
         streams = {}
         for s in self.all_streams():
             for f in s.all_files():
@@ -1971,3 +1890,423 @@ class CollectionReader(Collection):
         for s in self.all_streams():
             for f in s.all_files():
                 yield f
+
+
+class CollectionWriter(CollectionBase):
+    """Create a new collection from scratch
+
+    .. WARNING:: Deprecated
+       This class is deprecated. Prefer `arvados.collection.Collection`
+       instead.
+    """
+
+    @arvados.util._deprecated('3.0', 'arvados.collection.Collection')
+    def __init__(self, api_client=None, num_retries=0, replication=None):
+        """Instantiate a CollectionWriter.
+
+        CollectionWriter lets you build a new Arvados Collection from scratch.
+        Write files to it.  The CollectionWriter will upload data to Keep as
+        appropriate, and provide you with the Collection manifest text when
+        you're finished.
+
+        Arguments:
+        * api_client: The API client to use to look up Collections.  If not
+          provided, CollectionReader will build one from available Arvados
+          configuration.
+        * num_retries: The default number of times to retry failed
+          service requests.  Default 0.  You may change this value
+          after instantiation, but note those changes may not
+          propagate to related objects like the Keep client.
+        * replication: The number of copies of each block to store.
+          If this argument is None or not supplied, replication is
+          the server-provided default if available, otherwise 2.
+        """
+        self._api_client = api_client
+        self.num_retries = num_retries
+        self.replication = (2 if replication is None else replication)
+        self._keep_client = None
+        self._data_buffer = []
+        self._data_buffer_len = 0
+        self._current_stream_files = []
+        self._current_stream_length = 0
+        self._current_stream_locators = []
+        self._current_stream_name = '.'
+        self._current_file_name = None
+        self._current_file_pos = 0
+        self._finished_streams = []
+        self._close_file = None
+        self._queued_file = None
+        self._queued_dirents = deque()
+        self._queued_trees = deque()
+        self._last_open = None
+
+    def __exit__(self, exc_type, exc_value, traceback):
+        if exc_type is None:
+            self.finish()
+
+    def do_queued_work(self):
+        # The work queue consists of three pieces:
+        # * _queued_file: The file object we're currently writing to the
+        #   Collection.
+        # * _queued_dirents: Entries under the current directory
+        #   (_queued_trees[0]) that we want to write or recurse through.
+        #   This may contain files from subdirectories if
+        #   max_manifest_depth == 0 for this directory.
+        # * _queued_trees: Directories that should be written as separate
+        #   streams to the Collection.
+        # This function handles the smallest piece of work currently queued
+        # (current file, then current directory, then next directory) until
+        # no work remains.  The _work_THING methods each do a unit of work on
+        # THING.  _queue_THING methods add a THING to the work queue.
+        while True:
+            if self._queued_file:
+                self._work_file()
+            elif self._queued_dirents:
+                self._work_dirents()
+            elif self._queued_trees:
+                self._work_trees()
+            else:
+                break
+
+    def _work_file(self):
+        while True:
+            buf = self._queued_file.read(config.KEEP_BLOCK_SIZE)
+            if not buf:
+                break
+            self.write(buf)
+        self.finish_current_file()
+        if self._close_file:
+            self._queued_file.close()
+        self._close_file = None
+        self._queued_file = None
+
+    def _work_dirents(self):
+        path, stream_name, max_manifest_depth = self._queued_trees[0]
+        if stream_name != self.current_stream_name():
+            self.start_new_stream(stream_name)
+        while self._queued_dirents:
+            dirent = self._queued_dirents.popleft()
+            target = os.path.join(path, dirent)
+            if os.path.isdir(target):
+                self._queue_tree(target,
+                                 os.path.join(stream_name, dirent),
+                                 max_manifest_depth - 1)
+            else:
+                self._queue_file(target, dirent)
+                break
+        if not self._queued_dirents:
+            self._queued_trees.popleft()
+
+    def _work_trees(self):
+        path, stream_name, max_manifest_depth = self._queued_trees[0]
+        d = arvados.util.listdir_recursive(
+            path, max_depth = (None if max_manifest_depth == 0 else 0))
+        if d:
+            self._queue_dirents(stream_name, d)
+        else:
+            self._queued_trees.popleft()
+
+    def _queue_file(self, source, filename=None):
+        assert (self._queued_file is None), "tried to queue more than one file"
+        if not hasattr(source, 'read'):
+            source = open(source, 'rb')
+            self._close_file = True
+        else:
+            self._close_file = False
+        if filename is None:
+            filename = os.path.basename(source.name)
+        self.start_new_file(filename)
+        self._queued_file = source
+
+    def _queue_dirents(self, stream_name, dirents):
+        assert (not self._queued_dirents), "tried to queue more than one tree"
+        self._queued_dirents = deque(sorted(dirents))
+
+    def _queue_tree(self, path, stream_name, max_manifest_depth):
+        self._queued_trees.append((path, stream_name, max_manifest_depth))
+
+    def write_file(self, source, filename=None):
+        self._queue_file(source, filename)
+        self.do_queued_work()
+
+    def write_directory_tree(self,
+                             path, stream_name='.', max_manifest_depth=-1):
+        self._queue_tree(path, stream_name, max_manifest_depth)
+        self.do_queued_work()
+
+    def write(self, newdata):
+        if isinstance(newdata, bytes):
+            pass
+        elif isinstance(newdata, str):
+            newdata = newdata.encode()
+        elif hasattr(newdata, '__iter__'):
+            for s in newdata:
+                self.write(s)
+            return
+        self._data_buffer.append(newdata)
+        self._data_buffer_len += len(newdata)
+        self._current_stream_length += len(newdata)
+        while self._data_buffer_len >= config.KEEP_BLOCK_SIZE:
+            self.flush_data()
+
+    def open(self, streampath, filename=None):
+        """open(streampath[, filename]) -> file-like object
+
+        Pass in the path of a file to write to the Collection, either as a
+        single string or as two separate stream name and file name arguments.
+        This method returns a file-like object you can write to add it to the
+        Collection.
+
+        You may only have one file object from the Collection open at a time,
+        so be sure to close the object when you're done.  Using the object in
+        a with statement makes that easy:
+
+            with cwriter.open('./doc/page1.txt') as outfile:
+                outfile.write(page1_data)
+            with cwriter.open('./doc/page2.txt') as outfile:
+                outfile.write(page2_data)
+        """
+        if filename is None:
+            streampath, filename = split(streampath)
+        if self._last_open and not self._last_open.closed:
+            raise errors.AssertionError(
+                u"can't open '{}' when '{}' is still open".format(
+                    filename, self._last_open.name))
+        if streampath != self.current_stream_name():
+            self.start_new_stream(streampath)
+        self.set_current_file_name(filename)
+        self._last_open = _WriterFile(self, filename)
+        return self._last_open
+
+    def flush_data(self):
+        data_buffer = b''.join(self._data_buffer)
+        if data_buffer:
+            self._current_stream_locators.append(
+                self._my_keep().put(
+                    data_buffer[0:config.KEEP_BLOCK_SIZE],
+                    copies=self.replication))
+            self._data_buffer = [data_buffer[config.KEEP_BLOCK_SIZE:]]
+            self._data_buffer_len = len(self._data_buffer[0])
+
+    def start_new_file(self, newfilename=None):
+        self.finish_current_file()
+        self.set_current_file_name(newfilename)
+
+    def set_current_file_name(self, newfilename):
+        if re.search(r'[\t\n]', newfilename):
+            raise errors.AssertionError(
+                "Manifest filenames cannot contain whitespace: %s" %
+                newfilename)
+        elif re.search(r'\x00', newfilename):
+            raise errors.AssertionError(
+                "Manifest filenames cannot contain NUL characters: %s" %
+                newfilename)
+        self._current_file_name = newfilename
+
+    def current_file_name(self):
+        return self._current_file_name
+
+    def finish_current_file(self):
+        if self._current_file_name is None:
+            if self._current_file_pos == self._current_stream_length:
+                return
+            raise errors.AssertionError(
+                "Cannot finish an unnamed file " +
+                "(%d bytes at offset %d in '%s' stream)" %
+                (self._current_stream_length - self._current_file_pos,
+                 self._current_file_pos,
+                 self._current_stream_name))
+        self._current_stream_files.append([
+                self._current_file_pos,
+                self._current_stream_length - self._current_file_pos,
+                self._current_file_name])
+        self._current_file_pos = self._current_stream_length
+        self._current_file_name = None
+
+    def start_new_stream(self, newstreamname='.'):
+        self.finish_current_stream()
+        self.set_current_stream_name(newstreamname)
+
+    def set_current_stream_name(self, newstreamname):
+        if re.search(r'[\t\n]', newstreamname):
+            raise errors.AssertionError(
+                "Manifest stream names cannot contain whitespace: '%s'" %
+                (newstreamname))
+        self._current_stream_name = '.' if newstreamname=='' else newstreamname
+
+    def current_stream_name(self):
+        return self._current_stream_name
+
+    def finish_current_stream(self):
+        self.finish_current_file()
+        self.flush_data()
+        if not self._current_stream_files:
+            pass
+        elif self._current_stream_name is None:
+            raise errors.AssertionError(
+                "Cannot finish an unnamed stream (%d bytes in %d files)" %
+                (self._current_stream_length, len(self._current_stream_files)))
+        else:
+            if not self._current_stream_locators:
+                self._current_stream_locators.append(config.EMPTY_BLOCK_LOCATOR)
+            self._finished_streams.append([self._current_stream_name,
+                                           self._current_stream_locators,
+                                           self._current_stream_files])
+        self._current_stream_files = []
+        self._current_stream_length = 0
+        self._current_stream_locators = []
+        self._current_stream_name = None
+        self._current_file_pos = 0
+        self._current_file_name = None
+
+    def finish(self):
+        """Store the manifest in Keep and return its locator.
+
+        This is useful for storing manifest fragments (task outputs)
+        temporarily in Keep during a Crunch job.
+
+        In other cases you should make a collection instead, by
+        sending manifest_text() to the API server's "create
+        collection" endpoint.
+        """
+        return self._my_keep().put(self.manifest_text().encode(),
+                                   copies=self.replication)
+
+    def portable_data_hash(self):
+        stripped = self.stripped_manifest().encode()
+        return '{}+{}'.format(hashlib.md5(stripped).hexdigest(), len(stripped))
+
+    def manifest_text(self):
+        self.finish_current_stream()
+        manifest = ''
+
+        for stream in self._finished_streams:
+            if not re.search(r'^\.(/.*)?$', stream[0]):
+                manifest += './'
+            manifest += stream[0].replace(' ', '\\040')
+            manifest += ' ' + ' '.join(stream[1])
+            manifest += ' ' + ' '.join("%d:%d:%s" % (sfile[0], sfile[1], sfile[2].replace(' ', '\\040')) for sfile in stream[2])
+            manifest += "\n"
+
+        return manifest
+
+    def data_locators(self):
+        ret = []
+        for name, locators, files in self._finished_streams:
+            ret += locators
+        return ret
+
+    def save_new(self, name=None):
+        return self._api_client.collections().create(
+            ensure_unique_name=True,
+            body={
+                'name': name,
+                'manifest_text': self.manifest_text(),
+            }).execute(num_retries=self.num_retries)
+
+
+class ResumableCollectionWriter(CollectionWriter):
+    """CollectionWriter that can serialize internal state to disk
+
+    .. WARNING:: Deprecated
+       This class is deprecated. Prefer `arvados.collection.Collection`
+       instead.
+    """
+
+    STATE_PROPS = ['_current_stream_files', '_current_stream_length',
+                   '_current_stream_locators', '_current_stream_name',
+                   '_current_file_name', '_current_file_pos', '_close_file',
+                   '_data_buffer', '_dependencies', '_finished_streams',
+                   '_queued_dirents', '_queued_trees']
+
+    @arvados.util._deprecated('3.0', 'arvados.collection.Collection')
+    def __init__(self, api_client=None, **kwargs):
+        self._dependencies = {}
+        super(ResumableCollectionWriter, self).__init__(api_client, **kwargs)
+
+    @classmethod
+    def from_state(cls, state, *init_args, **init_kwargs):
+        # Try to build a new writer from scratch with the given state.
+        # If the state is not suitable to resume (because files have changed,
+        # been deleted, aren't predictable, etc.), raise a
+        # StaleWriterStateError.  Otherwise, return the initialized writer.
+        # The caller is responsible for calling writer.do_queued_work()
+        # appropriately after it's returned.
+        writer = cls(*init_args, **init_kwargs)
+        for attr_name in cls.STATE_PROPS:
+            attr_value = state[attr_name]
+            attr_class = getattr(writer, attr_name).__class__
+            # Coerce the value into the same type as the initial value, if
+            # needed.
+            if attr_class not in (type(None), attr_value.__class__):
+                attr_value = attr_class(attr_value)
+            setattr(writer, attr_name, attr_value)
+        # Check dependencies before we try to resume anything.
+        if any(KeepLocator(ls).permission_expired()
+               for ls in writer._current_stream_locators):
+            raise errors.StaleWriterStateError(
+                "locators include expired permission hint")
+        writer.check_dependencies()
+        if state['_current_file'] is not None:
+            path, pos = state['_current_file']
+            try:
+                writer._queued_file = open(path, 'rb')
+                writer._queued_file.seek(pos)
+            except IOError as error:
+                raise errors.StaleWriterStateError(
+                    u"failed to reopen active file {}: {}".format(path, error))
+        return writer
+
+    def check_dependencies(self):
+        for path, orig_stat in listitems(self._dependencies):
+            if not S_ISREG(orig_stat[ST_MODE]):
+                raise errors.StaleWriterStateError(u"{} not file".format(path))
+            try:
+                now_stat = tuple(os.stat(path))
+            except OSError as error:
+                raise errors.StaleWriterStateError(
+                    u"failed to stat {}: {}".format(path, error))
+            if ((not S_ISREG(now_stat[ST_MODE])) or
+                (orig_stat[ST_MTIME] != now_stat[ST_MTIME]) or
+                (orig_stat[ST_SIZE] != now_stat[ST_SIZE])):
+                raise errors.StaleWriterStateError(u"{} changed".format(path))
+
+    def dump_state(self, copy_func=lambda x: x):
+        state = {attr: copy_func(getattr(self, attr))
+                 for attr in self.STATE_PROPS}
+        if self._queued_file is None:
+            state['_current_file'] = None
+        else:
+            state['_current_file'] = (os.path.realpath(self._queued_file.name),
+                                      self._queued_file.tell())
+        return state
+
+    def _queue_file(self, source, filename=None):
+        try:
+            src_path = os.path.realpath(source)
+        except Exception:
+            raise errors.AssertionError(u"{} not a file path".format(source))
+        try:
+            path_stat = os.stat(src_path)
+        except OSError as stat_error:
+            path_stat = None
+            # The exception variable is unbound once the except clause
+            # ends, so keep its message for the error report below.
+            stat_error_msg = str(stat_error)
+        super(ResumableCollectionWriter, self)._queue_file(source, filename)
+        fd_stat = os.fstat(self._queued_file.fileno())
+        if not S_ISREG(fd_stat.st_mode):
+            # We won't be able to resume from this cache anyway, so don't
+            # worry about further checks.
+            self._dependencies[source] = tuple(fd_stat)
+        elif path_stat is None:
+            raise errors.AssertionError(
+                u"could not stat {}: {}".format(source, stat_error_msg))
+        elif path_stat.st_ino != fd_stat.st_ino:
+            raise errors.AssertionError(
+                u"{} changed between open and stat calls".format(source))
+        else:
+            self._dependencies[src_path] = tuple(fd_stat)
+
+    def write(self, data):
+        if self._queued_file is None:
+            raise errors.AssertionError(
+                "resumable writer can't accept unsourced data")
+        return super(ResumableCollectionWriter, self).write(data)
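A sketch of the checkpoint/resume cycle these methods support. The paths are hypothetical, and pickle is used here because the dumped state includes deques, which plain JSON cannot represent directly:

    import pickle

    writer = ResumableCollectionWriter()
    writer.write_file('/tmp/input.dat')          # hypothetical input path
    with open('/tmp/writer.state', 'wb') as f:
        pickle.dump(writer.dump_state(), f)

    # Later, possibly in a new process:
    with open('/tmp/writer.state', 'rb') as f:
        writer = ResumableCollectionWriter.from_state(pickle.load(f))
    writer.do_queued_work()   # caller's responsibility after from_state
    print(writer.manifest_text())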
index 41b500a52f02f785c3e1b61c92003ede93823f40..7f5245db863acd0c9c446ce6328b58f237125a3c 100755 (executable)
@@ -36,6 +36,9 @@ import logging
 import tempfile
 import urllib.parse
 import io
+import json
+import queue
+import threading
 
 import arvados
 import arvados.config
@@ -43,6 +46,7 @@ import arvados.keep
 import arvados.util
 import arvados.commands._util as arv_cmd
 import arvados.commands.keepdocker
+import arvados.http_to_keep
 import ruamel.yaml as yaml
 
 from arvados._version import __version__
@@ -105,6 +109,11 @@ def main():
     copy_opts.add_argument(
         '--storage-classes', dest='storage_classes',
         help='Comma separated list of storage classes to be used when saving data to the destination Arvados instance.')
+    copy_opts.add_argument("--varying-url-params", type=str, default="",
+                        help="A comma separated list of URL query parameters that should be ignored when storing HTTP URLs in Keep.")
+
+    copy_opts.add_argument("--prefer-cached-downloads", action="store_true", default=False,
+                        help="If a HTTP URL is found in Keep, skip upstream URL freshness check (will not notice if the upstream has changed, but also not error if upstream is unavailable).")
 
     copy_opts.add_argument(
         'object_uuid',
@@ -125,7 +134,7 @@ def main():
     else:
         logger.setLevel(logging.INFO)
 
-    if not args.source_arvados:
+    if not args.source_arvados and arvados.util.uuid_pattern.match(args.object_uuid):
         args.source_arvados = args.object_uuid[:5]
 
     # Create API clients for the source and destination instances
@@ -137,28 +146,39 @@ def main():
 
     # Identify the kind of object we have been given, and begin copying.
     t = uuid_type(src_arv, args.object_uuid)
-    if t == 'Collection':
-        set_src_owner_uuid(src_arv.collections(), args.object_uuid, args)
-        result = copy_collection(args.object_uuid,
-                                 src_arv, dst_arv,
-                                 args)
-    elif t == 'Workflow':
-        set_src_owner_uuid(src_arv.workflows(), args.object_uuid, args)
-        result = copy_workflow(args.object_uuid, src_arv, dst_arv, args)
-    elif t == 'Group':
-        set_src_owner_uuid(src_arv.groups(), args.object_uuid, args)
-        result = copy_project(args.object_uuid, src_arv, dst_arv, args.project_uuid, args)
-    else:
-        abort("cannot copy object {} of type {}".format(args.object_uuid, t))
+
+    try:
+        if t == 'Collection':
+            set_src_owner_uuid(src_arv.collections(), args.object_uuid, args)
+            result = copy_collection(args.object_uuid,
+                                     src_arv, dst_arv,
+                                     args)
+        elif t == 'Workflow':
+            set_src_owner_uuid(src_arv.workflows(), args.object_uuid, args)
+            result = copy_workflow(args.object_uuid, src_arv, dst_arv, args)
+        elif t == 'Group':
+            set_src_owner_uuid(src_arv.groups(), args.object_uuid, args)
+            result = copy_project(args.object_uuid, src_arv, dst_arv, args.project_uuid, args)
+        elif t == 'httpURL':
+            result = copy_from_http(args.object_uuid, src_arv, dst_arv, args)
+        else:
+            abort("cannot copy object {} of type {}".format(args.object_uuid, t))
+    except Exception as e:
+        logger.error("%s", e, exc_info=args.verbose)
+        exit(1)
 
     # Clean up any outstanding temp git repositories.
     for d in listvalues(local_repo_dir):
         shutil.rmtree(d, ignore_errors=True)
 
+    if not result:
+        exit(1)
+
     # If no exception was thrown and the response does not have an
     # error_token field, presume success
-    if 'error_token' in result or 'uuid' not in result:
-        logger.error("API server returned an error result: {}".format(result))
+    if result is None or 'error_token' in result or 'uuid' not in result:
+        if result:
+            logger.error("API server returned an error result: {}".format(result))
         exit(1)
 
     print(result['uuid'])
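With this dispatch in place, arv-copy accepts an HTTP(S) URL in place of a UUID. A hypothetical invocation using the new flags defined above:

    arv-copy --project-uuid zzzzz-j7d0g-zzzzzzzzzzzzzzz \
        --varying-url-params 'AWSAccessKeyId,Signature' \
        --prefer-cached-downloads \
        https://example.com/dataset.tar.gz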
@@ -307,21 +327,26 @@ def copy_workflow(wf_uuid, src, dst, args):
 
     # copy collections and docker images
     if args.recursive and wf["definition"]:
-        wf_def = yaml.safe_load(wf["definition"])
-        if wf_def is not None:
-            locations = []
-            docker_images = {}
-            graph = wf_def.get('$graph', None)
-            if graph is not None:
-                workflow_collections(graph, locations, docker_images)
-            else:
-                workflow_collections(wf_def, locations, docker_images)
+        env = {"ARVADOS_API_HOST": urllib.parse.urlparse(src._rootDesc["rootUrl"]).netloc,
+               "ARVADOS_API_TOKEN": src.api_token,
+               "PATH": os.environ["PATH"]}
+        try:
+            result = subprocess.run(["arvados-cwl-runner", "--quiet", "--print-keep-deps", "arvwf:"+wf_uuid],
+                                    capture_output=True, env=env)
+        except FileNotFoundError:
+            no_arv_copy = True
+        else:
+            no_arv_copy = result.returncode == 2
 
-            if locations:
-                copy_collections(locations, src, dst, args)
+        if no_arv_copy:
+            raise Exception('Copying workflows requires arvados-cwl-runner 2.7.1 or later to be installed in PATH.')
+        elif result.returncode != 0:
+            raise Exception('There was an error getting Keep dependencies from workflow using arvados-cwl-runner --print-keep-deps')
 
-            for image in docker_images:
-                copy_docker_image(image, docker_images[image], src, dst, args)
+        locations = json.loads(result.stdout)
+
+        if locations:
+            copy_collections(locations, src, dst, args)
 
     # copy the workflow itself
     del wf['uuid']
@@ -565,6 +590,125 @@ def copy_collection(obj_uuid, src, dst, args):
     else:
         progress_writer = None
 
+    # Walk the manifest and put each block locator on the 'get' queue.
+    # 'get' threads fetch blocks from the source cluster and put them on
+    # the 'put' queue; 'put' threads upload blocks to the destination
+    # cluster and record the new locator in dst_locators.
+    #
+    # After the whole manifest has been processed this way, we walk it a
+    # second time and build dst_manifest using dst_locators.
+
+    lock = threading.Lock()
+
+    # the get queue should be unbounded because we'll add all the
+    # block hashes we want to get, but these are small
+    get_queue = queue.Queue()
+
+    threadcount = 4
+
+    # the put queue contains full data blocks
+    # and if 'get' is faster than 'put' we could end up consuming
+    # a great deal of RAM if it isn't bounded.
+    put_queue = queue.Queue(threadcount)
+    transfer_error = []
+
+    def get_thread():
+        while True:
+            word = get_queue.get()
+            if word is None:
+                put_queue.put(None)
+                get_queue.task_done()
+                return
+
+            blockhash = arvados.KeepLocator(word).md5sum
+            with lock:
+                if blockhash in dst_locators:
+                    # Already uploaded
+                    get_queue.task_done()
+                    continue
+
+            try:
+                logger.debug("Getting block %s", word)
+                data = src_keep.get(word)
+                put_queue.put((word, data))
+            except Exception as e:
+                logger.error("Error getting block %s: %s", word, e)
+                transfer_error.append(e)
+                try:
+                    # Drain the 'get' queue so we end early
+                    while True:
+                        get_queue.get(False)
+                        get_queue.task_done()
+                except queue.Empty:
+                    pass
+            finally:
+                get_queue.task_done()
+
+    def put_thread():
+        nonlocal bytes_written
+        while True:
+            item = put_queue.get()
+            if item is None:
+                put_queue.task_done()
+                return
+
+            word, data = item
+            loc = arvados.KeepLocator(word)
+            blockhash = loc.md5sum
+            with lock:
+                if blockhash in dst_locators:
+                    # Already uploaded
+                    put_queue.task_done()
+                    continue
+
+            try:
+                logger.debug("Putting block %s (%s bytes)", blockhash, loc.size)
+                dst_locator = dst_keep.put(data, classes=(args.storage_classes or []))
+                with lock:
+                    dst_locators[blockhash] = dst_locator
+                    bytes_written += loc.size
+                    if progress_writer:
+                        progress_writer.report(obj_uuid, bytes_written, bytes_expected)
+            except Exception as e:
+                logger.error("Error putting block %s (%s bytes): %s", blockhash, loc.size, e)
+                try:
+                    # Drain the 'get' queue so we end early
+                    while True:
+                        get_queue.get(False)
+                        get_queue.task_done()
+                except queue.Empty:
+                    pass
+                transfer_error.append(e)
+            finally:
+                put_queue.task_done()
+
+    for line in manifest.splitlines():
+        words = line.split()
+        for word in words[1:]:
+            try:
+                loc = arvados.KeepLocator(word)
+            except ValueError:
+                # If 'word' can't be parsed as a locator,
+                # presume it's a filename.
+                continue
+
+            get_queue.put(word)
+
+    for i in range(0, threadcount):
+        get_queue.put(None)
+
+    for i in range(0, threadcount):
+        threading.Thread(target=get_thread, daemon=True).start()
+
+    for i in range(0, threadcount):
+        threading.Thread(target=put_thread, daemon=True).start()
+
+    get_queue.join()
+    put_queue.join()
+
+    if len(transfer_error) > 0:
+        return {"error_token": "Failed to transfer blocks"}
+
     for line in manifest.splitlines():
         words = line.split()
         dst_manifest.write(words[0])
@@ -578,16 +722,6 @@ def copy_collection(obj_uuid, src, dst, args):
                 dst_manifest.write(word)
                 continue
             blockhash = loc.md5sum
-            # copy this block if we haven't seen it before
-            # (otherwise, just reuse the existing dst_locator)
-            if blockhash not in dst_locators:
-                logger.debug("Copying block %s (%s bytes)", blockhash, loc.size)
-                if progress_writer:
-                    progress_writer.report(obj_uuid, bytes_written, bytes_expected)
-                data = src_keep.get(word)
-                dst_locator = dst_keep.put(data, classes=(args.storage_classes or []))
-                dst_locators[blockhash] = dst_locator
-                bytes_written += loc.size
             dst_manifest.write(' ')
             dst_manifest.write(dst_locators[blockhash])
         dst_manifest.write("\n")
@@ -756,6 +890,10 @@ def git_rev_parse(rev, repo):
 def uuid_type(api, object_uuid):
     if re.match(arvados.util.keep_locator_pattern, object_uuid):
         return 'Collection'
+
+    if object_uuid.startswith("http:") or object_uuid.startswith("https:"):
+        return 'httpURL'
+
     p = object_uuid.split('-')
     if len(p) == 3:
         type_prefix = p[1]
@@ -765,6 +903,27 @@ def uuid_type(api, object_uuid):
                 return k
     return None
 
+
+def copy_from_http(url, src, dst, args):
+
+    project_uuid = args.project_uuid
+    varying_url_params = args.varying_url_params
+    prefer_cached_downloads = args.prefer_cached_downloads
+
+    cached = arvados.http_to_keep.check_cached_url(src, project_uuid, url, {},
+                                                   varying_url_params=varying_url_params,
+                                                   prefer_cached_downloads=prefer_cached_downloads)
+    if cached[2] is not None:
+        return copy_collection(cached[2], src, dst, args)
+
+    cached = arvados.http_to_keep.http_to_keep(dst, project_uuid, url,
+                                               varying_url_params=varying_url_params,
+                                               prefer_cached_downloads=prefer_cached_downloads)
+
+    if cached is not None:
+        return {"uuid": cached[2]}
+
+
 def abort(msg, code=1):
     logger.info("arv-copy: %s", msg)
     exit(code)
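A sketch of how `uuid_type` now classifies inputs (values hypothetical; `api` is an Arvados API client):

    uuid_type(api, 'https://example.com/file.txt')           # 'httpURL'
    uuid_type(api, '0123456789abcdef0123456789abcdef+1234')  # 'Collection'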
index 16c3dc4778cede64160d331456f334d84dfc5832..1da8cf4946c652bfca3208ca632821144ccab1f5 100644 (file)
@@ -182,6 +182,10 @@ class _Downloader(PyCurlHelper):
         mt = re.match(r'^HTTP\/(\d(\.\d)?) ([1-5]\d\d) ([^\r\n\x00-\x08\x0b\x0c\x0e-\x1f\x7f]*)\r\n$', self._headers["x-status-line"])
         code = int(mt.group(3))
 
+        if not self.name:
+            logger.error("Cannot determine filename from URL or headers")
+            return
+
         if code == 200:
             self.target = self.collection.open(self.name, "wb")
 
@@ -191,6 +195,13 @@ class _Downloader(PyCurlHelper):
             self._first_chunk = False
 
         self.count += len(chunk)
+
+        if self.target is None:
+            # "If this number is not equal to the size of the byte
+            # string, this signifies an error and libcurl will abort
+            # the request."
+            return 0
+
         self.target.write(chunk)
         loopnow = time.time()
         if (loopnow - self.checkpoint) < 20:
@@ -238,16 +249,10 @@ def _etag_quote(etag):
         return '"' + etag + '"'
 
 
-def http_to_keep(api, project_uuid, url,
-                 utcnow=datetime.datetime.utcnow, varying_url_params="",
-                 prefer_cached_downloads=False):
-    """Download a file over HTTP and upload it to keep, with HTTP headers as metadata.
-
-    Before downloading the URL, checks to see if the URL already
-    exists in Keep and applies HTTP caching policy, the
-    varying_url_params and prefer_cached_downloads flags in order to
-    decide whether to use the version in Keep or re-download it.
-    """
+def check_cached_url(api, project_uuid, url, etags,
+                     utcnow=datetime.datetime.utcnow,
+                     varying_url_params="",
+                     prefer_cached_downloads=False):
 
     logger.info("Checking Keep for %s", url)
 
@@ -270,8 +275,6 @@ def http_to_keep(api, project_uuid, url,
 
     now = utcnow()
 
-    etags = {}
-
     curldownloader = _Downloader(api)
 
     for item in items:
@@ -287,13 +290,13 @@ def http_to_keep(api, project_uuid, url,
         if prefer_cached_downloads or _fresh_cache(cache_url, properties, now):
             # HTTP caching rules say we should use the cache
             cr = arvados.collection.CollectionReader(item["portable_data_hash"], api_client=api)
-            return (item["portable_data_hash"], next(iter(cr.keys())) )
+            return (item["portable_data_hash"], next(iter(cr.keys())), item["uuid"], clean_url, now)
 
         if not _changed(cache_url, clean_url, properties, now, curldownloader):
             # Etag didn't change, same content, just update headers
             api.collections().update(uuid=item["uuid"], body={"collection":{"properties": properties}}).execute()
             cr = arvados.collection.CollectionReader(item["portable_data_hash"], api_client=api)
-            return (item["portable_data_hash"], next(iter(cr.keys())))
+            return (item["portable_data_hash"], next(iter(cr.keys())), item["uuid"], clean_url, now)
 
         for etagstr in ("Etag", "ETag"):
             if etagstr in properties[cache_url] and len(properties[cache_url][etagstr]) > 2:
@@ -301,6 +304,31 @@ def http_to_keep(api, project_uuid, url,
 
     logger.debug("Found ETag values %s", etags)
 
+    return (None, None, None, clean_url, now)
+
+
+def http_to_keep(api, project_uuid, url,
+                 utcnow=datetime.datetime.utcnow, varying_url_params="",
+                 prefer_cached_downloads=False):
+    """Download a file over HTTP and upload it to keep, with HTTP headers as metadata.
+
+    Before downloading the URL, checks to see if the URL already
+    exists in Keep and applies HTTP caching policy, the
+    varying_url_params and prefer_cached_downloads flags in order to
+    decide whether to use the version in Keep or re-download it.
+    """
+
+    etags = {}
+    cache_result = check_cached_url(api, project_uuid, url, etags,
+                                    utcnow, varying_url_params,
+                                    prefer_cached_downloads)
+
+    if cache_result[0] is not None:
+        return cache_result
+
+    clean_url = cache_result[3]
+    now = cache_result[4]
+
     properties = {}
     headers = {}
     if etags:
@@ -309,6 +337,8 @@ def http_to_keep(api, project_uuid, url,
 
     logger.info("Beginning download of %s", url)
 
+    curldownloader = _Downloader(api)
+
     req = curldownloader.download(url, headers)
 
     c = curldownloader.collection
@@ -326,7 +356,7 @@ def http_to_keep(api, project_uuid, url,
         item["properties"].update(properties)
         api.collections().update(uuid=item["uuid"], body={"collection":{"properties": item["properties"]}}).execute()
         cr = arvados.collection.CollectionReader(item["portable_data_hash"], api_client=api)
-        return (item["portable_data_hash"], list(cr.keys())[0])
+        return (item["portable_data_hash"], list(cr.keys())[0], item["uuid"], clean_url, now)
 
     logger.info("Download complete")
 
@@ -344,4 +374,4 @@ def http_to_keep(api, project_uuid, url,
 
     api.collections().update(uuid=c.manifest_locator(), body={"collection":{"properties": properties}}).execute()
 
-    return (c.portable_data_hash(), curldownloader.name)
+    return (c.portable_data_hash(), curldownloader.name, c.manifest_locator(), clean_url, now)
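Note on the changed contract: http_to_keep() (and the new check_cached_url() helper it calls) now returns a five-element tuple (portable_data_hash, filename, collection_uuid, clean_url, checked_at) instead of the old two-element (portable_data_hash, filename). A minimal caller sketch, assuming a configured Arvados API client; the URL is illustrative:

    import arvados
    from arvados.http_to_keep import http_to_keep

    api = arvados.api()  # assumes ARVADOS_API_HOST/ARVADOS_API_TOKEN are set
    pdh, filename, coll_uuid, clean_url, checked_at = http_to_keep(
        api, None, "https://example.com/data.bin",  # illustrative URL
        prefer_cached_downloads=True)
    print(f"keep:{pdh}/{filename} (collection {coll_uuid}, checked {checked_at})")

Callers that unpack the old two-element result need updating, as the test changes below show.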
index 1ee5f6355a96a04a08aeba0c3e961a6386c26c4f..88adc8879b6c86ed211040c6a5fa2244c68d4549 100644 (file)
@@ -24,9 +24,9 @@ CR_UNCOMMITTED = 'Uncommitted'
 CR_COMMITTED = 'Committed'
 CR_FINAL = 'Final'
 
-keep_locator_pattern = re.compile(r'[0-9a-f]{32}\+\d+(\+\S+)*')
-signed_locator_pattern = re.compile(r'[0-9a-f]{32}\+\d+(\+\S+)*\+A\S+(\+\S+)*')
-portable_data_hash_pattern = re.compile(r'[0-9a-f]{32}\+\d+')
+keep_locator_pattern = re.compile(r'[0-9a-f]{32}\+[0-9]+(\+\S+)*')
+signed_locator_pattern = re.compile(r'[0-9a-f]{32}\+[0-9]+(\+\S+)*\+A\S+(\+\S+)*')
+portable_data_hash_pattern = re.compile(r'[0-9a-f]{32}\+[0-9]+')
 uuid_pattern = re.compile(r'[a-z0-9]{5}-[a-z0-9]{5}-[a-z0-9]{15}')
 collection_uuid_pattern = re.compile(r'[a-z0-9]{5}-4zz18-[a-z0-9]{15}')
 group_uuid_pattern = re.compile(r'[a-z0-9]{5}-j7d0g-[a-z0-9]{15}')
@@ -34,7 +34,9 @@ user_uuid_pattern = re.compile(r'[a-z0-9]{5}-tpzed-[a-z0-9]{15}')
 link_uuid_pattern = re.compile(r'[a-z0-9]{5}-o0j2j-[a-z0-9]{15}')
 job_uuid_pattern = re.compile(r'[a-z0-9]{5}-8i9sb-[a-z0-9]{15}')
 container_uuid_pattern = re.compile(r'[a-z0-9]{5}-dz642-[a-z0-9]{15}')
-manifest_pattern = re.compile(r'((\S+)( +[a-f0-9]{32}(\+\d+)(\+\S+)*)+( +\d+:\d+:\S+)+$)+', flags=re.MULTILINE)
+manifest_pattern = re.compile(r'((\S+)( +[a-f0-9]{32}(\+[0-9]+)(\+\S+)*)+( +[0-9]+:[0-9]+:\S+)+$)+', flags=re.MULTILINE)
+keep_file_locator_pattern = re.compile(r'([0-9a-f]{32}\+[0-9]+)/(.*)')
+keepuri_pattern = re.compile(r'keep:([0-9a-f]{32}\+[0-9]+)/(.*)')
 
 def _deprecated(version=None, preferred=None):
     """Mark a callable as deprecated in the SDK
index 381a61e2aa044103d53470d4bfe84fa99569344a..bce57eda61b7be549af205525bb0c81eba9e035d 100644 (file)
@@ -97,7 +97,9 @@ class TestHttpToKeep(unittest.TestCase):
         utcnow.return_value = datetime.datetime(2018, 5, 15)
 
         r = http_to_keep(api, None, "http://example.com/file1.txt", utcnow=utcnow)
-        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt"))
+        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt",
+                             'zzzzz-4zz18-zzzzzzzzzzzzzz3', 'http://example.com/file1.txt',
+                             datetime.datetime(2018, 5, 15, 0, 0)))
 
         assert mockobj.url == b"http://example.com/file1.txt"
         assert mockobj.perform_was_called is True
@@ -146,7 +148,9 @@ class TestHttpToKeep(unittest.TestCase):
         utcnow.return_value = datetime.datetime(2018, 5, 16)
 
         r = http_to_keep(api, None, "http://example.com/file1.txt", utcnow=utcnow)
-        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt"))
+        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt",
+                             'zzzzz-4zz18-zzzzzzzzzzzzzz3', 'http://example.com/file1.txt',
+                             datetime.datetime(2018, 5, 16, 0, 0)))
 
         assert mockobj.perform_was_called is False
 
@@ -185,7 +189,8 @@ class TestHttpToKeep(unittest.TestCase):
         utcnow.return_value = datetime.datetime(2018, 5, 16)
 
         r = http_to_keep(api, None, "http://example.com/file1.txt", utcnow=utcnow)
-        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt"))
+        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt", 'zzzzz-4zz18-zzzzzzzzzzzzzz3',
+                             'http://example.com/file1.txt', datetime.datetime(2018, 5, 16, 0, 0)))
 
         assert mockobj.perform_was_called is False
 
@@ -224,7 +229,10 @@ class TestHttpToKeep(unittest.TestCase):
         utcnow.return_value = datetime.datetime(2018, 5, 17)
 
         r = http_to_keep(api, None, "http://example.com/file1.txt", utcnow=utcnow)
-        self.assertEqual(r, ("99999999999999999999999999999997+99", "file1.txt"))
+        self.assertEqual(r, ("99999999999999999999999999999997+99", "file1.txt",
+                             'zzzzz-4zz18-zzzzzzzzzzzzzz4',
+                             'http://example.com/file1.txt', datetime.datetime(2018, 5, 17, 0, 0)))
+
 
         assert mockobj.url == b"http://example.com/file1.txt"
         assert mockobj.perform_was_called is True
@@ -278,7 +286,9 @@ class TestHttpToKeep(unittest.TestCase):
         utcnow.return_value = datetime.datetime(2018, 5, 17)
 
         r = http_to_keep(api, None, "http://example.com/file1.txt", utcnow=utcnow)
-        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt"))
+        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt",
+                             'zzzzz-4zz18-zzzzzzzzzzzzzz3', 'http://example.com/file1.txt',
+                             datetime.datetime(2018, 5, 17, 0, 0)))
 
         cm.open.assert_not_called()
 
@@ -315,7 +325,10 @@ class TestHttpToKeep(unittest.TestCase):
         utcnow.return_value = datetime.datetime(2018, 5, 15)
 
         r = http_to_keep(api, None, "http://example.com/download?fn=/file1.txt", utcnow=utcnow)
-        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt"))
+        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt",
+                             'zzzzz-4zz18-zzzzzzzzzzzzzz3',
+                             'http://example.com/download?fn=/file1.txt',
+                             datetime.datetime(2018, 5, 15, 0, 0)))
 
         assert mockobj.url == b"http://example.com/download?fn=/file1.txt"
 
@@ -369,7 +382,9 @@ class TestHttpToKeep(unittest.TestCase):
         utcnow.return_value = datetime.datetime(2018, 5, 17)
 
         r = http_to_keep(api, None, "http://example.com/file1.txt", utcnow=utcnow)
-        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt"))
+        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt",
+                             'zzzzz-4zz18-zzzzzzzzzzzzzz3', 'http://example.com/file1.txt',
+                             datetime.datetime(2018, 5, 17, 0, 0)))
 
         print(mockobj.req_headers)
         assert mockobj.req_headers == ["Accept: application/octet-stream", "If-None-Match: \"123456\""]
@@ -418,7 +433,8 @@ class TestHttpToKeep(unittest.TestCase):
         utcnow.return_value = datetime.datetime(2018, 5, 17)
 
         r = http_to_keep(api, None, "http://example.com/file1.txt", utcnow=utcnow, prefer_cached_downloads=True)
-        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt"))
+        self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt", 'zzzzz-4zz18-zzzzzzzzzzzzzz3',
+                             'http://example.com/file1.txt', datetime.datetime(2018, 5, 17, 0, 0)))
 
         assert mockobj.perform_was_called is False
         cm.open.assert_not_called()
@@ -465,7 +481,8 @@ class TestHttpToKeep(unittest.TestCase):
 
             r = http_to_keep(api, None, "http://example.com/file1.txt?KeyId=123&Signature=456&Expires=789",
                                               utcnow=utcnow, varying_url_params="KeyId,Signature,Expires")
-            self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt"))
+            self.assertEqual(r, ("99999999999999999999999999999998+99", "file1.txt", 'zzzzz-4zz18-zzzzzzzzzzzzzz3',
+                                 'http://example.com/file1.txt', datetime.datetime(2018, 5, 17, 0, 0)))
 
             assert mockobj.perform_was_called is True
             cm.open.assert_not_called()
index 5c0aeba589aa65867bd30c48dd56fd9a8fce3193..0c9248b819e81091867b126d20f5c2797e0d8e64 100644 (file)
@@ -163,63 +163,70 @@ class Group < ArvadosModel
     #   Remove groups that don't belong from trash
     #   Add/update groups that do belong in the trash
 
-    temptable = "group_subtree_#{rand(2**64).to_s(10)}"
-    ActiveRecord::Base.connection.exec_query(
-      "create temporary table #{temptable} on commit drop " +
-      "as select * from project_subtree_with_trash_at($1, LEAST($2, $3)::timestamp)",
+    frozen_descendants = ActiveRecord::Base.connection.exec_query(%{
+with temptable as (select * from project_subtree_with_trash_at($1, LEAST($2, $3)::timestamp))
+  select uuid from frozen_groups, temptable where uuid = target_uuid
+},
       "Group.update_trash.select",
       [[nil, self.uuid],
        [nil, TrashedGroup.find_by_group_uuid(self.owner_uuid).andand.trash_at],
        [nil, self.trash_at]])
-    frozen_descendants = ActiveRecord::Base.connection.exec_query(
-      "select uuid from frozen_groups, #{temptable} where uuid = target_uuid",
-      "Group.update_trash.check_frozen")
     if frozen_descendants.any?
       raise ArgumentError.new("cannot trash project containing frozen project #{frozen_descendants[0]["uuid"]}")
     end
-    ActiveRecord::Base.connection.exec_delete(
-      "delete from trashed_groups where group_uuid in (select target_uuid from #{temptable} where trash_at is NULL)",
-      "Group.update_trash.delete")
-    ActiveRecord::Base.connection.exec_query(
-      "insert into trashed_groups (group_uuid, trash_at) "+
-      "select target_uuid as group_uuid, trash_at from #{temptable} where trash_at is not NULL " +
-      "on conflict (group_uuid) do update set trash_at=EXCLUDED.trash_at",
-      "Group.update_trash.insert")
-    ActiveRecord::Base.connection.exec_query(
-      "select container_uuid from container_requests where " +
-      "owner_uuid in (select target_uuid from #{temptable}) and " +
-      "requesting_container_uuid is NULL and state = 'Committed' and container_uuid is not NULL",
-      "Group.update_trash.update_priorities").each do |container_uuid|
+
+    ActiveRecord::Base.connection.exec_query(%{
+with temptable as (select * from project_subtree_with_trash_at($1, LEAST($2, $3)::timestamp)),
+
+delete_rows as (delete from trashed_groups where group_uuid in (select target_uuid from temptable where trash_at is NULL)),
+
+insert_rows as (insert into trashed_groups (group_uuid, trash_at)
+  select target_uuid as group_uuid, trash_at from temptable where trash_at is not NULL
+  on conflict (group_uuid) do update set trash_at=EXCLUDED.trash_at)
+
+select container_uuid from container_requests where
+  owner_uuid in (select target_uuid from temptable) and
+  requesting_container_uuid is NULL and state = 'Committed' and container_uuid is not NULL
+},
+      "Group.update_trash.select",
+      [[nil, self.uuid],
+       [nil, TrashedGroup.find_by_group_uuid(self.owner_uuid).andand.trash_at],
+       [nil, self.trash_at]]).each do |container_uuid|
       update_priorities container_uuid["container_uuid"]
     end
   end
 
   def update_frozen
     return unless saved_change_to_frozen_by_uuid? || saved_change_to_owner_uuid?
-    temptable = "group_subtree_#{rand(2**64).to_s(10)}"
-    ActiveRecord::Base.connection.exec_query(
-      "create temporary table #{temptable} on commit drop as select * from project_subtree_with_is_frozen($1,$2)",
-      "Group.update_frozen.select",
-      [[nil, self.uuid],
-       [nil, !self.frozen_by_uuid.nil?]])
+
     if frozen_by_uuid
-      rows = ActiveRecord::Base.connection.exec_query(
-        "select cr.uuid, cr.state from container_requests cr, #{temptable} frozen " +
-        "where cr.owner_uuid = frozen.uuid and frozen.is_frozen " +
-        "and cr.state not in ($1, $2) limit 1",
-        "Group.update_frozen.check_container_requests",
-        [[nil, ContainerRequest::Uncommitted],
-         [nil, ContainerRequest::Final]])
+      rows = ActiveRecord::Base.connection.exec_query(%{
+with temptable as (select * from project_subtree_with_is_frozen($1,$2))
+
+select cr.uuid, cr.state from container_requests cr, temptable frozen
+  where cr.owner_uuid = frozen.uuid and frozen.is_frozen
+  and cr.state not in ($3, $4) limit 1
+},
+                                                      "Group.update_frozen.check_container_requests",
+                                                      [[nil, self.uuid],
+                                                       [nil, !self.frozen_by_uuid.nil?],
+                                                       [nil, ContainerRequest::Uncommitted],
+                                                       [nil, ContainerRequest::Final]])
       if rows.any?
         raise ArgumentError.new("cannot freeze project containing container request #{rows.first['uuid']} with state = #{rows.first['state']}")
       end
     end
-    ActiveRecord::Base.connection.exec_delete(
-      "delete from frozen_groups where uuid in (select uuid from #{temptable} where not is_frozen)",
-      "Group.update_frozen.delete")
-    ActiveRecord::Base.connection.exec_query(
-      "insert into frozen_groups (uuid) select uuid from #{temptable} where is_frozen on conflict do nothing",
-      "Group.update_frozen.insert")
+
+    ActiveRecord::Base.connection.exec_query(%{
+with temptable as (select * from project_subtree_with_is_frozen($1,$2)),
+
+delete_rows as (delete from frozen_groups where uuid in (select uuid from temptable where not is_frozen))
+
+insert into frozen_groups (uuid) select uuid from temptable where is_frozen on conflict do nothing
+}, "Group.update_frozen.update",
+      [[nil, self.uuid],
+       [nil, !self.frozen_by_uuid.nil?]])
+
   end
 
   def before_ownership_change
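The rewrite above folds what used to be a temporary table plus several follow-up statements into one query whose data-modifying CTEs perform the delete and insert in a single round trip. A standalone sketch of the same pattern with psycopg2 (the DSN and UUID are illustrative, not part of Arvados):

    import psycopg2  # sketch only; assumes a reachable PostgreSQL server

    conn = psycopg2.connect("dbname=example")  # illustrative DSN
    with conn, conn.cursor() as cur:
        # One statement: compute the subtree once, then delete and
        # insert from it via data-modifying CTEs.
        cur.execute("""
            WITH subtree AS (
                SELECT uuid, is_frozen
                FROM project_subtree_with_is_frozen(%s, %s)),
            delete_rows AS (
                DELETE FROM frozen_groups
                WHERE uuid IN (SELECT uuid FROM subtree WHERE NOT is_frozen))
            INSERT INTO frozen_groups (uuid)
                SELECT uuid FROM subtree WHERE is_frozen
                ON CONFLICT DO NOTHING
        """, ("zzzzz-j7d0g-000000000000000", True))

Besides saving round trips, this avoids creating and dropping a catalog entry for a temp table on every trash/freeze update.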
diff --git a/services/api/db/migrate/20231013000000_compute_permission_index.rb b/services/api/db/migrate/20231013000000_compute_permission_index.rb
new file mode 100644 (file)
index 0000000..ecd85ef
--- /dev/null
@@ -0,0 +1,27 @@
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+class ComputePermissionIndex < ActiveRecord::Migration[5.2]
+  def up
+    # The inner part of compute_permission_subgraph has a query clause like this:
+    #
+    #    where u.perm_origin_uuid = m.target_uuid AND m.traverse_owned
+    #         AND (m.user_uuid = m.target_uuid or m.target_uuid not like '_____-tpzed-_______________')
+    #
+    # Without a matching index, this ends up doing a sequential scan on
+    # materialized_permissions, which can easily have millions of
+    # rows, so we fully index the table for this query.  In one test,
+    # adding this index brought the compute_permission_subgraph query
+    # from over 6 seconds down to 250ms.
+    #
+    ActiveRecord::Base.connection.execute "drop index if exists index_materialized_permissions_target_is_not_user"
+    ActiveRecord::Base.connection.execute %{
+create index index_materialized_permissions_target_is_not_user on materialized_permissions (target_uuid, traverse_owned, (user_uuid = target_uuid or target_uuid not like '_____-tpzed-_______________'));
+}
+  end
+
+  def down
+    ActiveRecord::Base.connection.execute "drop index if exists index_materialized_permissions_target_is_not_user"
+  end
+end
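To check that the new expression index is picked up, one can EXPLAIN the predicate quoted in the migration comment (a sketch: the literal parameter stands in for the join column, the DSN is illustrative, and the plan you actually get depends on table size and statistics):

    import psycopg2

    conn = psycopg2.connect("dbname=example")  # illustrative DSN
    with conn, conn.cursor() as cur:
        cur.execute("""
            EXPLAIN SELECT * FROM materialized_permissions m
            WHERE m.target_uuid = %s AND m.traverse_owned
              AND (m.user_uuid = m.target_uuid
                   OR m.target_uuid NOT LIKE '_____-tpzed-_______________')
        """, ("zzzzz-j7d0g-000000000000000",))
        for (line,) in cur.fetchall():
            # On a large table this should show an Index Scan using
            # index_materialized_permissions_target_is_not_user instead
            # of a Seq Scan.
            print(line)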
index a26db2e5dbd659b73ee414aaf7768f4fb4a8e983..c0d4263d97aa6bdde258688d091b5cf69fdd47af 100644 (file)
@@ -62,10 +62,10 @@ with
      permission (permission origin is self).
   */
   perm_from_start(perm_origin_uuid, target_uuid, val, traverse_owned) as (
-    
+
 WITH RECURSIVE
         traverse_graph(origin_uuid, target_uuid, val, traverse_owned, starting_set) as (
-            
+
              values (perm_origin_uuid, starting_uuid, starting_perm,
                     should_traverse_owned(starting_uuid, starting_perm),
                     (perm_origin_uuid = starting_uuid or starting_uuid not like '_____-tpzed-_______________'))
@@ -107,10 +107,10 @@ case (edges.edge_id = perm_edge_id)
        can_manage permission granted by ownership.
   */
   additional_perms(perm_origin_uuid, target_uuid, val, traverse_owned) as (
-    
+
 WITH RECURSIVE
         traverse_graph(origin_uuid, target_uuid, val, traverse_owned, starting_set) as (
-            
+
     select edges.tail_uuid as origin_uuid, edges.head_uuid as target_uuid, edges.val,
            should_traverse_owned(edges.head_uuid, edges.val),
            edges.head_uuid like '_____-j7d0g-_______________'
@@ -342,30 +342,6 @@ SET default_tablespace = '';
 
 SET default_with_oids = false;
 
---
--- Name: groups; Type: TABLE; Schema: public; Owner: -
---
-
-CREATE TABLE public.groups (
-    id bigint NOT NULL,
-    uuid character varying(255),
-    owner_uuid character varying(255),
-    created_at timestamp without time zone NOT NULL,
-    modified_by_client_uuid character varying(255),
-    modified_by_user_uuid character varying(255),
-    modified_at timestamp without time zone,
-    name character varying(255) NOT NULL,
-    description character varying(524288),
-    updated_at timestamp without time zone NOT NULL,
-    group_class character varying(255),
-    trash_at timestamp without time zone,
-    is_trashed boolean DEFAULT false NOT NULL,
-    delete_at timestamp without time zone,
-    properties jsonb DEFAULT '{}'::jsonb,
-    frozen_by_uuid character varying
-);
-
-
 --
 -- Name: api_client_authorizations; Type: TABLE; Schema: public; Owner: -
 --
@@ -690,6 +666,30 @@ CREATE TABLE public.frozen_groups (
 );
 
 
+--
+-- Name: groups; Type: TABLE; Schema: public; Owner: -
+--
+
+CREATE TABLE public.groups (
+    id bigint NOT NULL,
+    uuid character varying(255),
+    owner_uuid character varying(255),
+    created_at timestamp without time zone NOT NULL,
+    modified_by_client_uuid character varying(255),
+    modified_by_user_uuid character varying(255),
+    modified_at timestamp without time zone,
+    name character varying(255) NOT NULL,
+    description character varying(524288),
+    updated_at timestamp without time zone NOT NULL,
+    group_class character varying(255),
+    trash_at timestamp without time zone,
+    is_trashed boolean DEFAULT false NOT NULL,
+    delete_at timestamp without time zone,
+    properties jsonb DEFAULT '{}'::jsonb,
+    frozen_by_uuid character varying
+);
+
+
 --
 -- Name: groups_id_seq; Type: SEQUENCE; Schema: public; Owner: -
 --
@@ -2590,6 +2590,13 @@ CREATE INDEX index_logs_on_summary ON public.logs USING btree (summary);
 CREATE UNIQUE INDEX index_logs_on_uuid ON public.logs USING btree (uuid);
 
 
+--
+-- Name: index_materialized_permissions_target_is_not_user; Type: INDEX; Schema: public; Owner: -
+--
+
+CREATE INDEX index_materialized_permissions_target_is_not_user ON public.materialized_permissions USING btree (target_uuid, traverse_owned, ((((user_uuid)::text = (target_uuid)::text) OR ((target_uuid)::text !~~ '_____-tpzed-_______________'::text))));
+
+
 --
 -- Name: index_nodes_on_created_at; Type: INDEX; Schema: public; Owner: -
 --
@@ -3308,6 +3315,5 @@ INSERT INTO "schema_migrations" (version) VALUES
 ('20230503224107'),
 ('20230815160000'),
 ('20230821000000'),
-('20230922000000');
-
-
+('20230922000000'),
+('20231013000000');
index b7e5476404869f6a89603302eafe828397acd1c5..5c8072b48a5594b18200725feb0ab5515b72c963 100644 (file)
@@ -100,44 +100,41 @@ def update_permissions perm_origin_uuid, starting_uuid, perm_level, edge_id=nil
     # tested this on Postgres 9.6, so in the future we should reevaluate
     # the performance & query plan on Postgres 12.
     #
+    # Update: as of 2023-10-13, incorrect merge join behavior is still
+    # observed on at least one major user installation that is using
+    # Postgres 14, so it seems this workaround is still needed.
+    #
     # https://git.furworks.de/opensourcemirror/postgresql/commit/a314c34079cf06d05265623dd7c056f8fa9d577f
     #
     # Disable merge join for just this query (also local for this transaction), then reenable it.
     ActiveRecord::Base.connection.exec_query "SET LOCAL enable_mergejoin to false;"
 
-    temptable_perms = "temp_perms_#{rand(2**64).to_s(10)}"
-    ActiveRecord::Base.connection.exec_query %{
-create temporary table #{temptable_perms} on commit drop
-as select * from compute_permission_subgraph($1, $2, $3, $4)
-},
-                                             'update_permissions.select',
-                                             [[nil, perm_origin_uuid],
-                                              [nil, starting_uuid],
-                                              [nil, perm_level],
-                                              [nil, edge_id]]
-
-    ActiveRecord::Base.connection.exec_query "SET LOCAL enable_mergejoin to true;"
-
-    # Now that we have recomputed a set of permissions, delete any
-    # rows from the materialized_permissions table where (target_uuid,
-    # user_uuid) is not present or has perm_level=0 in the recomputed
-    # set.
-    ActiveRecord::Base.connection.exec_delete %{
-delete from #{PERMISSION_VIEW} where
-  target_uuid in (select target_uuid from #{temptable_perms}) and
-  not exists (select 1 from #{temptable_perms}
-              where target_uuid=#{PERMISSION_VIEW}.target_uuid and
-                    user_uuid=#{PERMISSION_VIEW}.user_uuid and
-                    val>0)
-},
-                                              "update_permissions.delete"
-
-    # Now insert-or-update permissions in the recomputed set.  The
-    # WHERE clause is important to avoid redundantly updating rows
-    # that haven't actually changed.
     ActiveRecord::Base.connection.exec_query %{
+with temptable_perms as (
+  select * from compute_permission_subgraph($1, $2, $3, $4)),
+
+/*
+    Now that we have recomputed a set of permissions, delete any
+    rows from the materialized_permissions table where (target_uuid,
+    user_uuid) is not present or has perm_level=0 in the recomputed
+    set.
+*/
+delete_rows as (
+  delete from #{PERMISSION_VIEW} where
+    target_uuid in (select target_uuid from temptable_perms) and
+    not exists (select 1 from temptable_perms
+                where target_uuid=#{PERMISSION_VIEW}.target_uuid and
+                      user_uuid=#{PERMISSION_VIEW}.user_uuid and
+                      val>0)
+)
+
+/*
+  Now insert-or-update permissions in the recomputed set.  The
+  WHERE clause is important to avoid redundantly updating rows
+  that haven't actually changed.
+*/
 insert into #{PERMISSION_VIEW} (user_uuid, target_uuid, perm_level, traverse_owned)
-  select user_uuid, target_uuid, val as perm_level, traverse_owned from #{temptable_perms} where val>0
+  select user_uuid, target_uuid, val as perm_level, traverse_owned from temptable_perms where val>0
 on conflict (user_uuid, target_uuid) do update
 set perm_level=EXCLUDED.perm_level, traverse_owned=EXCLUDED.traverse_owned
 where #{PERMISSION_VIEW}.user_uuid=EXCLUDED.user_uuid and
@@ -145,7 +142,11 @@ where #{PERMISSION_VIEW}.user_uuid=EXCLUDED.user_uuid and
        (#{PERMISSION_VIEW}.perm_level != EXCLUDED.perm_level or
         #{PERMISSION_VIEW}.traverse_owned != EXCLUDED.traverse_owned);
 },
-                                             "update_permissions.insert"
+                                             'update_permissions.select',
+                                             [[nil, perm_origin_uuid],
+                                              [nil, starting_uuid],
+                                              [nil, perm_level],
+                                              [nil, edge_id]]
 
     if perm_level>0
       check_permissions_against_full_refresh
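One detail worth noting: the old code explicitly ran "SET LOCAL enable_mergejoin to true;" after the temp-table query, and the rewrite drops the re-enable step. That is safe because SET LOCAL only lasts until the enclosing transaction ends (the context comment above, "then reenable it", is now slightly stale). A sketch of that scoping behavior, with an illustrative DSN:

    import psycopg2

    conn = psycopg2.connect("dbname=example")  # illustrative DSN
    with conn.cursor() as cur:
        cur.execute("SET LOCAL enable_mergejoin TO false")
        cur.execute("SHOW enable_mergejoin")
        print(cur.fetchone()[0])  # 'off' for the rest of this transaction
    conn.commit()                 # transaction ends; SET LOCAL expires
    with conn.cursor() as cur:
        cur.execute("SHOW enable_mergejoin")
        print(cur.fetchone()[0])  # back to the server default, e.g. 'on'
    conn.close()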
index bc480e4c9806e7a951cbed2da8f50b0f5ba90943..63cf8a11a5e30cac4c0ae968fed90fec24b0bce1 100644 (file)
@@ -37,9 +37,9 @@
     "is-image": "3.0.0",
     "js-yaml": "3.13.1",
     "jssha": "2.3.1",
-    "jszip": "3.1.5",
+    "jszip": "^3.10.1",
     "lodash": "^4.17.21",
-    "lodash-es": "4.17.14",
+    "lodash-es": "^4.17.21",
     "lodash.mergewith": "4.6.2",
     "lodash.template": "4.5.0",
     "material-ui-pickers": "^2.2.4",
@@ -61,7 +61,7 @@
     "react-router": "4.3.1",
     "react-router-dom": "4.3.1",
     "react-router-redux": "5.0.0-alpha.9",
-    "react-rte": "0.16.3",
+    "react-rte": "^0.16.5",
     "react-scripts": "3.4.4",
     "react-splitter-layout": "3.0.1",
     "react-transition-group": "2.5.0",
     "enzyme": "3.11.0",
     "enzyme-adapter-react-16": "1.15.6",
     "jest-localstorage-mock": "2.2.0",
-    "node-sass": "^4.9.4",
-    "node-sass-chokidar": "1.5.0",
+    "node-sass": "^9.0.0",
+    "node-sass-chokidar": "^2.0.0",
     "redux-devtools": "3.4.1",
     "redux-mock-store": "1.5.4",
     "ts-mock-imports": "1.3.7",
index bb661bc288b3ec0bb1c30bc02f67f41c255174ea..39cce0483496237bbd65202e6da4955836c8b687 100644 (file)
@@ -2,11 +2,11 @@
 //
 // SPDX-License-Identifier: AGPL-3.0
 
-import React from 'react';
-import Menu from '@material-ui/core/Menu';
-import IconButton from '@material-ui/core/IconButton';
-import { PopoverOrigin } from '@material-ui/core/Popover';
-import { Tooltip } from '@material-ui/core';
+import React from "react";
+import Menu from "@material-ui/core/Menu";
+import IconButton from "@material-ui/core/IconButton";
+import { PopoverOrigin } from "@material-ui/core/Popover";
+import { Tooltip } from "@material-ui/core";
 
 interface DropdownMenuProps {
     id: string;
@@ -20,12 +20,12 @@ interface DropdownMenuState {
 
 export class DropdownMenu extends React.Component<DropdownMenuProps, DropdownMenuState> {
     state = {
-        anchorEl: undefined
+        anchorEl: undefined,
     };
 
     transformOrigin: PopoverOrigin = {
         vertical: -50,
-        horizontal: 0
+        horizontal: 0,
     };
 
     render() {
@@ -33,7 +33,9 @@ export class DropdownMenu extends React.Component<DropdownMenuProps, DropdownMen
         const { anchorEl } = this.state;
         return (
             <div>
-                <Tooltip title={title}>
+                <Tooltip
+                    title={title}
+                    disableFocusListener>
                     <IconButton
                         aria-owns={anchorEl ? id : undefined}
                         aria-haspopup="true"
@@ -57,9 +59,9 @@ export class DropdownMenu extends React.Component<DropdownMenuProps, DropdownMen
 
     handleClose = () => {
         this.setState({ anchorEl: undefined });
-    }
+    };
 
     handleOpen = (event: React.MouseEvent<HTMLButtonElement>) => {
         this.setState({ anchorEl: event.currentTarget });
-    }
+    };
 }
index 9fae638107e5aba50e07e4a77427ce50c2b962a1..7e39186c09f8d852f7868ed26579dd0a4e619eba 100644 (file)
@@ -2,39 +2,40 @@
 //
 // SPDX-License-Identifier: AGPL-3.0
 
-import React, { useState, useCallback, useEffect } from 'react';
+import React, { useState, useCallback, useEffect } from "react";
 import { Dialog, DialogContent, DialogActions, Button, StyleRulesCallback, withStyles, WithStyles } from "@material-ui/core";
 import { connect } from "react-redux";
 import { RootState } from "store/store";
 import bannerActions from "store/banner/banner-action";
-import { ArvadosTheme } from 'common/custom-theme';
-import servicesProvider from 'common/service-provider';
-import { Dispatch } from 'redux';
+import { ArvadosTheme } from "common/custom-theme";
+import servicesProvider from "common/service-provider";
+import { Dispatch } from "redux";
 
-type CssRules = 'dialogContent' | 'dialogContentIframe';
+type CssRules = "dialogContent" | "dialogContentIframe";
 
 const styles: StyleRulesCallback<CssRules> = (theme: ArvadosTheme) => ({
     dialogContent: {
-        minWidth: '550px',
-        minHeight: '500px',
-        display: 'block'
+        minWidth: "550px",
+        minHeight: "500px",
+        display: "block",
     },
     dialogContentIframe: {
-        minWidth: '550px',
-        minHeight: '500px'
-    }
+        minWidth: "550px",
+        minHeight: "500px",
+    },
 });
 
 interface BannerProps {
     isOpen: boolean;
     bannerUUID?: string;
     keepWebInlineServiceUrl: string;
-};
+}
 
-type BannerComponentProps = BannerProps & WithStyles<CssRules> & {
-    openBanner: Function,
-    closeBanner: Function,
-};
+type BannerComponentProps = BannerProps &
+    WithStyles<CssRules> & {
+        openBanner: Function;
+        closeBanner: Function;
+    };
 
 const mapStateToProps = (state: RootState): BannerProps => ({
     isOpen: state.banner.isOpen,
@@ -47,27 +48,23 @@ const mapDispatchToProps = (dispatch: Dispatch) => ({
     closeBanner: () => dispatch<any>(bannerActions.closeBanner()),
 });
 
-export const BANNER_LOCAL_STORAGE_KEY = 'bannerFileData';
+export const BANNER_LOCAL_STORAGE_KEY = "bannerFileData";
 
 export const BannerComponent = (props: BannerComponentProps) => {
-    const { 
-        isOpen,
-        openBanner,
-        closeBanner,
-        bannerUUID,
-        keepWebInlineServiceUrl
-    } = props;
-    const [bannerContents, setBannerContents] = useState(`<h1>Loading ...</h1>`)
+    const { isOpen, openBanner, closeBanner, bannerUUID, keepWebInlineServiceUrl } = props;
+    const [bannerContents, setBannerContents] = useState(`<h1>Loading ...</h1>`);
 
     const onConfirm = useCallback(() => {
         closeBanner();
-    }, [closeBanner])
+    }, [closeBanner]);
 
     useEffect(() => {
         if (!!bannerUUID && bannerUUID !== "") {
-            servicesProvider.getServices().collectionService.files(bannerUUID)
+            servicesProvider
+                .getServices()
+                .collectionService.files(bannerUUID)
                 .then(results => {
-                    const bannerFileData = results.find(({name}) => name === 'banner.html');
+                    const bannerFileData = results.find(({ name }) => name === "banner.html");
                     const result = localStorage.getItem(BANNER_LOCAL_STORAGE_KEY);
 
                     if (result && result === JSON.stringify(bannerFileData) && !isOpen) {
@@ -75,7 +72,8 @@ export const BannerComponent = (props: BannerComponentProps) => {
                     }
 
                     if (bannerFileData) {
-                        servicesProvider.getServices()
+                        servicesProvider
+                            .getServices()
                             .collectionService.getFileContents(bannerFileData)
                             .then(data => {
                                 setBannerContents(data);
@@ -88,24 +86,28 @@ export const BannerComponent = (props: BannerComponentProps) => {
     }, [bannerUUID, keepWebInlineServiceUrl, openBanner, isOpen]);
 
     return (
-        <Dialog open={isOpen}>
-            <div data-cy='confirmation-dialog'>
+        <Dialog
+            open={isOpen}
+            maxWidth="md"
+        >
+            <div data-cy="confirmation-dialog">
                 <DialogContent className={props.classes.dialogContent}>
                     <div dangerouslySetInnerHTML={{ __html: bannerContents }}></div>
                 </DialogContent>
-                <DialogActions style={{ margin: '0px 24px 24px' }}>
+                <DialogActions style={{ margin: "0px 24px 24px" }}>
                     <Button
-                        data-cy='confirmation-dialog-ok-btn'
-                        variant='contained'
-                        color='primary'
-                        type='submit'
-                        onClick={onConfirm}>
+                        data-cy="confirmation-dialog-ok-btn"
+                        variant="contained"
+                        color="primary"
+                        type="submit"
+                        onClick={onConfirm}
+                    >
                         Close
                     </Button>
                 </DialogActions>
             </div>
         </Dialog>
     );
-}
+};
 
 export const Banner = withStyles(styles)(connect(mapStateToProps, mapDispatchToProps)(BannerComponent));
index ca97a612bb11875460c17eeed6487266b9c46cf3..89fd2e9184793bf7cd790d7d052c43ad1917d1d8 100644 (file)
@@ -26,11 +26,11 @@ const mapDispatchToProps = (dispatch: Dispatch) => ({
 type NotificationsMenuProps = {
     isOpen: boolean;
     bannerUUID?: string;
-}
+};
 
 type NotificationsMenuComponentProps = NotificationsMenuProps & {
     openBanner: any;
-}
+};
 
 export const NotificationsMenuComponent = (props: NotificationsMenuComponentProps) => {
     const { isOpen, openBanner } = props;
@@ -39,41 +39,58 @@ export const NotificationsMenuComponent = (props: NotificationsMenuComponentProp
     const menuItems: any[] = [];
 
     if (!isOpen && bannerResult) {
-        menuItems.push(<MenuItem><span onClick={openBanner}>Restore Banner</span></MenuItem>);
+        menuItems.push(
+            <MenuItem onClick={openBanner}>
+                <span>Restore Banner</span>
+            </MenuItem>
+        );
     }
 
     const toggleTooltips = useCallback(() => {
         if (tooltipResult) {
             localStorage.removeItem(TOOLTIP_LOCAL_STORAGE_KEY);
         } else {
-            localStorage.setItem(TOOLTIP_LOCAL_STORAGE_KEY, 'true');
+            localStorage.setItem(TOOLTIP_LOCAL_STORAGE_KEY, "true");
         }
         window.location.reload();
     }, [tooltipResult]);
 
     if (tooltipResult) {
-        menuItems.push(<MenuItem><span onClick={toggleTooltips}>Enable tooltips</span></MenuItem>);
+        menuItems.push(
+            <MenuItem onClick={toggleTooltips}>
+                <span>Enable tooltips</span>
+            </MenuItem>
+        );
     } else {
-        menuItems.push(<MenuItem><span onClick={toggleTooltips}>Disable tooltips</span></MenuItem>);
+        menuItems.push(
+            <MenuItem onClick={toggleTooltips}>
+                <span>Disable tooltips</span>
+            </MenuItem>
+        );
     }
 
     if (menuItems.length === 0) {
         menuItems.push(<MenuItem>You are up to date</MenuItem>);
     }
 
-    return (<DropdownMenu
-        icon={
-            <Badge
-                badgeContent={0}
-                color="primary">
-                <NotificationIcon />
-            </Badge>}
-        id="account-menu"
-        title="Notifications">
-        {
-            menuItems.map((item, i) => <div key={i}>{item}</div>)
-        }
-    </DropdownMenu>);
-}
+    return (
+        <DropdownMenu
+            icon={
+                <Badge
+                    badgeContent={0}
+                    color="primary"
+                >
+                    <NotificationIcon />
+                </Badge>
+            }
+            id="account-menu"
+            title="Notifications"
+        >
+            {menuItems.map((item, i) => (
+                <div key={i}>{item}</div>
+            ))}
+        </DropdownMenu>
+    );
+};
 
 export const NotificationsMenu = connect(mapStateToProps, mapDispatchToProps)(NotificationsMenuComponent);
index 826baac8d0272f3efba02111cdde57a471387fb2..54b1883af9c78132d6a9ca1b6463d47eaf2ae52b 100644 (file)
@@ -1765,7 +1765,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"@gar/promisify@npm:^1.0.1":
+"@gar/promisify@npm:^1.0.1, @gar/promisify@npm:^1.1.3":
   version: 1.1.3
   resolution: "@gar/promisify@npm:1.1.3"
   checksum: 4059f790e2d07bf3c3ff3e0fec0daa8144fe35c1f6e0111c9921bd32106adaa97a4ab096ad7dab1e28ee6a9060083c4d1a4ada42a7f5f3f7a96b8812e2b757c1
@@ -2182,6 +2182,16 @@ __metadata:
   languageName: node
   linkType: hard
 
+"@npmcli/fs@npm:^2.1.0":
+  version: 2.1.2
+  resolution: "@npmcli/fs@npm:2.1.2"
+  dependencies:
+    "@gar/promisify": ^1.1.3
+    semver: ^7.3.5
+  checksum: 405074965e72d4c9d728931b64d2d38e6ea12066d4fad651ac253d175e413c06fe4350970c783db0d749181da8fe49c42d3880bd1cbc12cd68e3a7964d820225
+  languageName: node
+  linkType: hard
+
 "@npmcli/move-file@npm:^1.0.1":
   version: 1.1.2
   resolution: "@npmcli/move-file@npm:1.1.2"
@@ -2192,6 +2202,16 @@ __metadata:
   languageName: node
   linkType: hard
 
+"@npmcli/move-file@npm:^2.0.0":
+  version: 2.0.1
+  resolution: "@npmcli/move-file@npm:2.0.1"
+  dependencies:
+    mkdirp: ^1.0.4
+    rimraf: ^3.0.2
+  checksum: 52dc02259d98da517fae4cb3a0a3850227bdae4939dda1980b788a7670636ca2b4a01b58df03dd5f65c1e3cb70c50fa8ce5762b582b3f499ec30ee5ce1fd9380
+  languageName: node
+  linkType: hard
+
 "@phenomnomnominal/tsquery@npm:^3.0.0":
   version: 3.0.0
   resolution: "@phenomnomnominal/tsquery@npm:3.0.0"
@@ -2410,6 +2430,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"@tootallnate/once@npm:1":
+  version: 1.1.2
+  resolution: "@tootallnate/once@npm:1.1.2"
+  checksum: e1fb1bbbc12089a0cb9433dc290f97bddd062deadb6178ce9bcb93bb7c1aecde5e60184bc7065aec42fe1663622a213493c48bbd4972d931aae48315f18e1be9
+  languageName: node
+  linkType: hard
+
 "@tootallnate/once@npm:2":
   version: 2.0.0
   resolution: "@tootallnate/once@npm:2.0.0"
@@ -2642,6 +2669,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"@types/minimist@npm:^1.2.0":
+  version: 1.2.3
+  resolution: "@types/minimist@npm:1.2.3"
+  checksum: 666ea4f8c39dcbdfbc3171fe6b3902157c845cc9cb8cee33c10deb706cda5e0cc80f98ace2d6d29f6774b0dc21180c96cd73c592a1cbefe04777247c7ba0e84b
+  languageName: node
+  linkType: hard
+
 "@types/node@npm:*, @types/node@npm:15.12.4":
   version: 15.12.4
   resolution: "@types/node@npm:15.12.4"
@@ -2649,6 +2683,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"@types/normalize-package-data@npm:^2.4.0":
+  version: 2.4.2
+  resolution: "@types/normalize-package-data@npm:2.4.2"
+  checksum: 2132e4054711e6118de967ae3a34f8c564e58d71fbcab678ec2c34c14659f638a86c35a0fd45237ea35a4a03079cf0a485e3f97736ffba5ed647bfb5da086b03
+  languageName: node
+  linkType: hard
+
 "@types/parse-json@npm:^4.0.0":
   version: 4.0.0
   resolution: "@types/parse-json@npm:4.0.0"
@@ -3327,6 +3368,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"agentkeepalive@npm:^4.1.3":
+  version: 4.5.0
+  resolution: "agentkeepalive@npm:4.5.0"
+  dependencies:
+    humanize-ms: ^1.2.1
+  checksum: 13278cd5b125e51eddd5079f04d6fe0914ac1b8b91c1f3db2c1822f99ac1a7457869068997784342fe455d59daaff22e14fb7b8c3da4e741896e7e31faf92481
+  languageName: node
+  linkType: hard
+
 "agentkeepalive@npm:^4.2.1":
   version: 4.2.1
   resolution: "agentkeepalive@npm:4.2.1"
@@ -3451,27 +3501,20 @@ __metadata:
   linkType: hard
 
 "ansi-regex@npm:^3.0.0":
-  version: 3.0.0
-  resolution: "ansi-regex@npm:3.0.0"
-  checksum: 2ad11c416f81c39f5c65eafc88cf1d71aa91d76a2f766e75e457c2a3c43e8a003aadbf2966b61c497aa6a6940a36412486c975b3270cdfc3f413b69826189ec3
+  version: 3.0.1
+  resolution: "ansi-regex@npm:3.0.1"
+  checksum: 09daf180c5f59af9850c7ac1bd7fda85ba596cc8cbeb210826e90755f06c818af86d9fa1e6e8322fab2c3b9e9b03f56c537b42241139f824dd75066a1e7257cc
   languageName: node
   linkType: hard
 
 "ansi-regex@npm:^4.0.0, ansi-regex@npm:^4.1.0":
-  version: 4.1.0
-  resolution: "ansi-regex@npm:4.1.0"
-  checksum: 97aa4659538d53e5e441f5ef2949a3cffcb838e57aeaad42c4194e9d7ddb37246a6526c4ca85d3940a9d1e19b11cc2e114530b54c9d700c8baf163c31779baf8
-  languageName: node
-  linkType: hard
-
-"ansi-regex@npm:^5.0.0":
-  version: 5.0.0
-  resolution: "ansi-regex@npm:5.0.0"
-  checksum: b1bb4e992a5d96327bb4f72eaba9f8047f1d808d273ad19d399e266bfcc7fb19a4d1a127a32f7bc61fe46f1a94a4d04ec4c424e3fbe184929aa866323d8ed4ce
+  version: 4.1.1
+  resolution: "ansi-regex@npm:4.1.1"
+  checksum: b1a6ee44cb6ecdabaa770b2ed500542714d4395d71c7e5c25baa631f680fb2ad322eb9ba697548d498a6fd366949fc8b5bfcf48d49a32803611f648005b01888
   languageName: node
   linkType: hard
 
-"ansi-regex@npm:^5.0.1":
+"ansi-regex@npm:^5.0.0, ansi-regex@npm:^5.0.1":
   version: 5.0.1
   resolution: "ansi-regex@npm:5.0.1"
   checksum: 2aa4bb54caf2d622f1afdad09441695af2a83aa3fe8b8afa581d205e57ed4261c183c4d3877cee25794443fde5876417d859c108078ab788d6af7e4fe52eb66b
@@ -3537,7 +3580,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"aproba@npm:^1.0.3, aproba@npm:^1.1.1":
+"aproba@npm:^1.1.1":
   version: 1.2.0
   resolution: "aproba@npm:1.2.0"
   checksum: 0fca141966559d195072ed047658b6e6c4fe92428c385dd38e288eacfc55807e7b4989322f030faff32c0f46bb0bc10f1e0ac32ec22d25315a1e5bbc0ebb76dc
@@ -3551,23 +3594,23 @@ __metadata:
   languageName: node
   linkType: hard
 
-"are-we-there-yet@npm:^3.0.0":
-  version: 3.0.0
-  resolution: "are-we-there-yet@npm:3.0.0"
+"are-we-there-yet@npm:^2.0.0":
+  version: 2.0.0
+  resolution: "are-we-there-yet@npm:2.0.0"
   dependencies:
     delegates: ^1.0.0
     readable-stream: ^3.6.0
-  checksum: 348edfdd931b0b50868b55402c01c3f64df1d4c229ab6f063539a5025fd6c5f5bb8a0cab409bbed8d75d34762d22aa91b7c20b4204eb8177063158d9ba792981
+  checksum: 6c80b4fd04ecee6ba6e737e0b72a4b41bdc64b7d279edfc998678567ff583c8df27e27523bc789f2c99be603ffa9eaa612803da1d886962d2086e7ff6fa90c7c
   languageName: node
   linkType: hard
 
-"are-we-there-yet@npm:~1.1.2":
-  version: 1.1.5
-  resolution: "are-we-there-yet@npm:1.1.5"
+"are-we-there-yet@npm:^3.0.0":
+  version: 3.0.0
+  resolution: "are-we-there-yet@npm:3.0.0"
   dependencies:
     delegates: ^1.0.0
-    readable-stream: ^2.0.6
-  checksum: 9a746b1dbce4122f44002b0c39fbba5b2c6f52c00e88b6ccba6fc68652323f8a1355a20e8ab94846995626d8de3bf67669a3b4a037dff0885db14607168f2b15
+    readable-stream: ^3.6.0
+  checksum: 348edfdd931b0b50868b55402c01c3f64df1d4c229ab6f063539a5025fd6c5f5bb8a0cab409bbed8d75d34762d22aa91b7c20b4204eb8177063158d9ba792981
   languageName: node
   linkType: hard
 
@@ -3796,17 +3839,17 @@ __metadata:
     jest-localstorage-mock: 2.2.0
     js-yaml: 3.13.1
     jssha: 2.3.1
-    jszip: 3.1.5
+    jszip: ^3.10.1
     lodash: ^4.17.21
-    lodash-es: 4.17.14
+    lodash-es: ^4.17.21
     lodash.mergewith: 4.6.2
     lodash.template: 4.5.0
     material-ui-pickers: ^2.2.4
     mem: 4.0.0
     mime: ^3.0.0
     moment: 2.29.1
-    node-sass: ^4.9.4
-    node-sass-chokidar: 1.5.0
+    node-sass: ^9.0.0
+    node-sass-chokidar: ^2.0.0
     parse-duration: 0.4.4
     prop-types: 15.7.2
     query-string: 6.9.0
@@ -3822,7 +3865,7 @@ __metadata:
     react-router: 4.3.1
     react-router-dom: 4.3.1
     react-router-redux: 5.0.0-alpha.9
-    react-rte: 0.16.3
+    react-rte: ^0.16.5
     react-scripts: 3.4.4
     react-splitter-layout: 3.0.1
     react-transition-group: 2.5.0
@@ -3938,18 +3981,18 @@ __metadata:
   linkType: hard
 
 "async@npm:^2.6.2":
-  version: 2.6.3
-  resolution: "async@npm:2.6.3"
+  version: 2.6.4
+  resolution: "async@npm:2.6.4"
   dependencies:
     lodash: ^4.17.14
-  checksum: 5e5561ff8fca807e88738533d620488ac03a5c43fce6c937451f7e35f943d33ad06c24af3f681a48cca3d2b0002b3118faff0a128dc89438a9bf0226f712c499
+  checksum: a52083fb32e1ebe1d63e5c5624038bb30be68ff07a6c8d7dfe35e47c93fc144bd8652cbec869e0ac07d57dde387aa5f1386be3559cdee799cb1f789678d88e19
   languageName: node
   linkType: hard
 
 "async@npm:^3.2.0":
-  version: 3.2.0
-  resolution: "async@npm:3.2.0"
-  checksum: 6739fae769e6c9f76b272558f118ef041d45c979c573a8fe93f8cfbc32eb9c92da032e9effe6bbcc9b1131292cde6c4a9e61a442894aa06a262addd8dd3adda1
+  version: 3.2.4
+  resolution: "async@npm:3.2.4"
+  checksum: 43d07459a4e1d09b84a20772414aa684ff4de085cbcaec6eea3c7a8f8150e8c62aa6cd4e699fe8ee93c3a5b324e777d34642531875a0817a35697522c1b02e89
   languageName: node
   linkType: hard
 
@@ -4035,11 +4078,11 @@ __metadata:
   linkType: hard
 
 "axios@npm:^0.21.1":
-  version: 0.21.1
-  resolution: "axios@npm:0.21.1"
+  version: 0.21.4
+  resolution: "axios@npm:0.21.4"
   dependencies:
-    follow-redirects: ^1.10.0
-  checksum: c87915fa0b18c15c63350112b6b3563a3e2ae524d7707de0a73d2e065e0d30c5d3da8563037bc29d4cc1b7424b5a350cb7274fa52525c6c04a615fe561c6ab11
+    follow-redirects: ^1.14.0
+  checksum: 44245f24ac971e7458f3120c92f9d66d1fc695e8b97019139de5b0cc65d9b8104647db01e5f46917728edfc0cfd88eb30fc4c55e6053eef4ace76768ce95ff3c
   languageName: node
   linkType: hard
 
@@ -4484,15 +4527,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"block-stream@npm:*":
-  version: 0.0.9
-  resolution: "block-stream@npm:0.0.9"
-  dependencies:
-    inherits: ~2.0.0
-  checksum: 72733cbb816181b7c92449e7b650247c02122f743526ce9d948ff68afc27d8709106cd62f2c876c6d8cd3977e0204a014f38d22805974008039bd3bed35f2cbd
-  languageName: node
-  linkType: hard
-
 "bluebird@npm:^3.5.5, bluebird@npm:^3.7.2":
   version: 3.7.2
   resolution: "bluebird@npm:3.7.2"
@@ -4563,6 +4597,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"brace-expansion@npm:^2.0.1":
+  version: 2.0.1
+  resolution: "brace-expansion@npm:2.0.1"
+  dependencies:
+    balanced-match: ^1.0.0
+  checksum: a61e7cd2e8a8505e9f0036b3b6108ba5e926b4b55089eeb5550cd04a471fe216c96d4fe7e4c7f995c728c554ae20ddfc4244cad10aef255e72b62930afd233d1
+  languageName: node
+  linkType: hard
+
 "braces@npm:^2.3.1, braces@npm:^2.3.2":
   version: 2.3.2
   resolution: "braces@npm:2.3.2"
@@ -4581,7 +4624,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"braces@npm:^3.0.1, braces@npm:~3.0.2":
+"braces@npm:^3.0.2, braces@npm:~3.0.2":
   version: 3.0.2
   resolution: "braces@npm:3.0.2"
   dependencies:
@@ -4708,17 +4751,16 @@ __metadata:
   linkType: hard
 
 "browserslist@npm:^4.0.0, browserslist@npm:^4.12.0, browserslist@npm:^4.16.6, browserslist@npm:^4.6.2, browserslist@npm:^4.6.4, browserslist@npm:^4.9.1":
-  version: 4.16.6
-  resolution: "browserslist@npm:4.16.6"
+  version: 4.22.1
+  resolution: "browserslist@npm:4.22.1"
   dependencies:
-    caniuse-lite: ^1.0.30001219
-    colorette: ^1.2.2
-    electron-to-chromium: ^1.3.723
-    escalade: ^3.1.1
-    node-releases: ^1.1.71
+    caniuse-lite: ^1.0.30001541
+    electron-to-chromium: ^1.4.535
+    node-releases: ^2.0.13
+    update-browserslist-db: ^1.0.13
   bin:
     browserslist: cli.js
-  checksum: 3dffc86892d2dcfcfc66b52519b7e5698ae070b4fc92ab047e760efc4cae0474e9e70bbe10d769c8d3491b655ef3a2a885b88e7196c83cc5dc0a46dfdba8b70c
+  checksum: 7e6b10c53f7dd5d83fd2b95b00518889096382539fed6403829d447e05df4744088de46a571071afb447046abc3c66ad06fbc790e70234ec2517452e32ffd862
   languageName: node
   linkType: hard
 
@@ -4847,7 +4889,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"cacache@npm:^15.3.0":
+"cacache@npm:^15.2.0, cacache@npm:^15.3.0":
   version: 15.3.0
   resolution: "cacache@npm:15.3.0"
   dependencies:
@@ -4873,6 +4915,32 @@ __metadata:
   languageName: node
   linkType: hard
 
+"cacache@npm:^16.1.0":
+  version: 16.1.3
+  resolution: "cacache@npm:16.1.3"
+  dependencies:
+    "@npmcli/fs": ^2.1.0
+    "@npmcli/move-file": ^2.0.0
+    chownr: ^2.0.0
+    fs-minipass: ^2.1.0
+    glob: ^8.0.1
+    infer-owner: ^1.0.4
+    lru-cache: ^7.7.1
+    minipass: ^3.1.6
+    minipass-collect: ^1.0.2
+    minipass-flush: ^1.0.5
+    minipass-pipeline: ^1.2.4
+    mkdirp: ^1.0.4
+    p-map: ^4.0.0
+    promise-inflight: ^1.0.1
+    rimraf: ^3.0.2
+    ssri: ^9.0.0
+    tar: ^6.1.11
+    unique-filename: ^2.0.0
+  checksum: d91409e6e57d7d9a3a25e5dcc589c84e75b178ae8ea7de05cbf6b783f77a5fae938f6e8fda6f5257ed70000be27a681e1e44829251bfffe4c10216002f8f14e6
+  languageName: node
+  linkType: hard
+
 "cache-base@npm:^1.0.1":
   version: 1.0.1
   resolution: "cache-base@npm:1.0.1"
@@ -4966,6 +5034,17 @@ __metadata:
   languageName: node
   linkType: hard
 
+"camelcase-keys@npm:^6.2.2":
+  version: 6.2.2
+  resolution: "camelcase-keys@npm:6.2.2"
+  dependencies:
+    camelcase: ^5.3.1
+    map-obj: ^4.0.0
+    quick-lru: ^4.0.1
+  checksum: 43c9af1adf840471e54c68ab3e5fe8a62719a6b7dbf4e2e86886b7b0ff96112c945736342b837bd2529ec9d1c7d1934e5653318478d98e0cf22c475c04658e2a
+  languageName: node
+  linkType: hard
+
 "camelcase@npm:5.3.1, camelcase@npm:^5.0.0, camelcase@npm:^5.3.1":
   version: 5.3.1
   resolution: "camelcase@npm:5.3.1"
@@ -5006,13 +5085,20 @@ __metadata:
   languageName: node
   linkType: hard
 
-"caniuse-lite@npm:^1.0.0, caniuse-lite@npm:^1.0.30000981, caniuse-lite@npm:^1.0.30001035, caniuse-lite@npm:^1.0.30001109, caniuse-lite@npm:^1.0.30001219":
+"caniuse-lite@npm:^1.0.0, caniuse-lite@npm:^1.0.30000981, caniuse-lite@npm:^1.0.30001035, caniuse-lite@npm:^1.0.30001109":
   version: 1.0.30001486
   resolution: "caniuse-lite@npm:1.0.30001486"
   checksum: 5e8c2ba2679e4ad17dea6d2761a6449b814441bfeac81af6cc9d58af187df6af4b79b27befcbfc4a557e720b21c0399a7d1911c8705922e38938dcc0f40b5d4b
   languageName: node
   linkType: hard
 
+"caniuse-lite@npm:^1.0.30001541":
+  version: 1.0.30001543
+  resolution: "caniuse-lite@npm:1.0.30001543"
+  checksum: 1a65c8b0b93913b6241c7d66e1e1f3ea0f194f7e140eefe500512641c2eb4df285991ec9869a1ba2856ea6f6d21e9f3d7bcd91971b5fb1721e3fa0390feec6f1
+  languageName: node
+  linkType: hard
+
 "capture-exit@npm:^2.0.0":
   version: 2.0.0
   resolution: "capture-exit@npm:2.0.0"
@@ -5047,7 +5133,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"chalk@npm:^1.0.0, chalk@npm:^1.1.1, chalk@npm:^1.1.3":
+"chalk@npm:^1.0.0, chalk@npm:^1.1.3":
   version: 1.1.3
   resolution: "chalk@npm:1.1.3"
   dependencies:
@@ -5060,13 +5146,13 @@ __metadata:
   languageName: node
   linkType: hard
 
-"chalk@npm:^4.0.0, chalk@npm:^4.1.0":
-  version: 4.1.1
-  resolution: "chalk@npm:4.1.1"
+"chalk@npm:^4.0.0, chalk@npm:^4.1.0, chalk@npm:^4.1.2":
+  version: 4.1.2
+  resolution: "chalk@npm:4.1.2"
   dependencies:
     ansi-styles: ^4.1.0
     supports-color: ^7.1.0
-  checksum: 036e973e665ba1a32c975e291d5f3d549bceeb7b1b983320d4598fb75d70fe20c5db5d62971ec0fe76cdbce83985a00ee42372416abfc3a5584465005a7855ed
+  checksum: fe75c9d5c76a7a98d45495b91b2172fa3b7a09e0cc9370e5c8feb1c567b85c4288e2b3fded7cfdd7359ac28d6b3844feb8b82b8686842e93d23c827c417e83fc
   languageName: node
   linkType: hard
 
@@ -5143,8 +5229,8 @@ __metadata:
   linkType: hard
 
 "chokidar@npm:^3.3.0, chokidar@npm:^3.4.0, chokidar@npm:^3.4.1":
-  version: 3.5.2
-  resolution: "chokidar@npm:3.5.2"
+  version: 3.5.3
+  resolution: "chokidar@npm:3.5.3"
   dependencies:
     anymatch: ~3.1.2
     braces: ~3.0.2
@@ -5157,7 +5243,7 @@ __metadata:
   dependenciesMeta:
     fsevents:
       optional: true
-  checksum: d1fda32fcd67d9f6170a8468ad2630a3c6194949c9db3f6a91b16478c328b2800f433fb5d2592511b6cb145a47c013ea1cce60b432b1a001ae3ee978a8bffc2d
+  checksum: b49fcde40176ba007ff361b198a2d35df60d9bb2a5aab228279eb810feae9294a6b4649ab15981304447afe1e6ffbf4788ad5db77235dc770ab777c6e771980c
   languageName: node
   linkType: hard
 
@@ -5346,6 +5432,17 @@ __metadata:
   languageName: node
   linkType: hard
 
+"cliui@npm:^8.0.1":
+  version: 8.0.1
+  resolution: "cliui@npm:8.0.1"
+  dependencies:
+    string-width: ^4.2.0
+    strip-ansi: ^6.0.1
+    wrap-ansi: ^7.0.0
+  checksum: 79648b3b0045f2e285b76fb2e24e207c6db44323581e421c3acbd0e86454cba1b37aea976ab50195a49e7384b871e6dfb2247ad7dec53c02454ac6497394cb56
+  languageName: node
+  linkType: hard
+
 "clone-deep@npm:^0.2.4":
   version: 0.2.4
   resolution: "clone-deep@npm:0.2.4"
@@ -5454,7 +5551,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"color-support@npm:^1.1.3":
+"color-support@npm:^1.1.2, color-support@npm:^1.1.3":
   version: 1.1.3
   resolution: "color-support@npm:1.1.3"
   bin:
@@ -5473,7 +5570,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"colorette@npm:^1.2.1, colorette@npm:^1.2.2":
+"colorette@npm:^1.2.1":
   version: 1.2.2
   resolution: "colorette@npm:1.2.2"
   checksum: 69fec14ddaedd0f5b00e4bae40dc4bc61f7050ebdc82983a595d6fd64e650b9dc3c033fff378775683138e992e0ddd8717ac7c7cec4d089679dcfbe3cd921b04
@@ -5611,7 +5708,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"console-control-strings@npm:^1.0.0, console-control-strings@npm:^1.1.0, console-control-strings@npm:~1.1.0":
+"console-control-strings@npm:^1.0.0, console-control-strings@npm:^1.1.0":
   version: 1.1.0
   resolution: "console-control-strings@npm:1.1.0"
   checksum: 8755d76787f94e6cf79ce4666f0c5519906d7f5b02d4b884cf41e11dcd759ed69c57da0670afd9236d229a46e0f9cf519db0cd829c6dca820bb5a5c3def584ed
@@ -5755,13 +5852,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"core-js@npm:~2.3.0":
-  version: 2.3.0
-  resolution: "core-js@npm:2.3.0"
-  checksum: eb2e9e82d71e646e91abc9480ee4da8a4c02606418ea83602daae5988b4ba558a233f1a29dc8d660e2e4aaa7f6e4297b6c3089b55b0e7292917eef07a3952972
-  languageName: node
-  linkType: hard
-
 "core-util-is@npm:1.0.2, core-util-is@npm:~1.0.0":
   version: 1.0.2
   resolution: "core-util-is@npm:1.0.2"
@@ -5832,11 +5922,11 @@ __metadata:
   linkType: hard
 
 "cross-fetch@npm:^3.0.4":
-  version: 3.1.4
-  resolution: "cross-fetch@npm:3.1.4"
+  version: 3.1.8
+  resolution: "cross-fetch@npm:3.1.8"
   dependencies:
-    node-fetch: 2.6.1
-  checksum: 2107e5e633aa327bdacab036b1907c7ddd28651ede0c1d4fd14db04510944d56849a8255e2f5b8f9a1da0e061b6cee943f6819fe29ed9a130195e7fadd82a4ff
+    node-fetch: ^2.6.12
+  checksum: 78f993fa099eaaa041122ab037fe9503ecbbcb9daef234d1d2e0b9230a983f64d645d088c464e21a247b825a08dc444a6e7064adfa93536d3a9454b4745b3632
   languageName: node
   linkType: hard
 
@@ -5851,16 +5941,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"cross-spawn@npm:^3.0.0":
-  version: 3.0.1
-  resolution: "cross-spawn@npm:3.0.1"
-  dependencies:
-    lru-cache: ^4.0.1
-    which: ^1.2.9
-  checksum: a029a5028629ce2b7773e341b57415b344b6e46b98b39b308822c3b524e8e92e15f10c4ca3384e90722b882dfce2cc8e10edc8e84ee1394afe9744c4a1082776
-  languageName: node
-  linkType: hard
-
 "cross-spawn@npm:^6.0.0, cross-spawn@npm:^6.0.5":
   version: 6.0.5
   resolution: "cross-spawn@npm:6.0.5"
@@ -5874,7 +5954,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"cross-spawn@npm:^7.0.0":
+"cross-spawn@npm:^7.0.0, cross-spawn@npm:^7.0.3":
   version: 7.0.3
   resolution: "cross-spawn@npm:7.0.3"
   dependencies:
@@ -5997,15 +6077,15 @@ __metadata:
   linkType: hard
 
 "css-select@npm:^4.1.3":
-  version: 4.1.3
-  resolution: "css-select@npm:4.1.3"
+  version: 4.3.0
+  resolution: "css-select@npm:4.3.0"
   dependencies:
     boolbase: ^1.0.0
-    css-what: ^5.0.0
-    domhandler: ^4.2.0
-    domutils: ^2.6.0
-    nth-check: ^2.0.0
-  checksum: 40928f1aa6c71faf36430e7f26bcbb8ab51d07b98b754caacb71906400a195df5e6c7020a94f2982f02e52027b9bd57c99419220cf7020968c3415f14e4be5f8
+    css-what: ^6.0.1
+    domhandler: ^4.3.1
+    domutils: ^2.8.0
+    nth-check: ^2.0.1
+  checksum: d6202736839194dd7f910320032e7cfc40372f025e4bf21ca5bf6eb0a33264f322f50ba9c0adc35dadd342d3d6fae5ca244779a4873afbfa76561e343f2058e0
   languageName: node
   linkType: hard
 
@@ -6038,13 +6118,20 @@ __metadata:
   languageName: node
   linkType: hard
 
-"css-what@npm:^3.2.1, css-what@npm:^5.0.0, css-what@npm:^5.0.1":
+"css-what@npm:^3.2.1, css-what@npm:^5.0.1":
   version: 5.0.1
   resolution: "css-what@npm:5.0.1"
   checksum: 7a3de33a1c130d32d711cce4e0fa747be7a9afe6b5f2c6f3d56bc2765f150f6034f5dd5fe263b9359a1c371c01847399602d74b55322c982742b336d998602cd
   languageName: node
   linkType: hard
 
+"css-what@npm:^6.0.1":
+  version: 6.1.0
+  resolution: "css-what@npm:6.1.0"
+  checksum: b975e547e1e90b79625918f84e67db5d33d896e6de846c9b584094e529f0c63e2ab85ee33b9daffd05bff3a146a1916bec664e18bb76dd5f66cbff9fc13b2bbe
+  languageName: node
+  linkType: hard
+
 "css@npm:^2.0.0":
   version: 2.2.4
   resolution: "css@npm:2.2.4"
@@ -6382,7 +6469,29 @@ __metadata:
   languageName: node
   linkType: hard
 
-"decamelize@npm:^1.1.1, decamelize@npm:^1.1.2, decamelize@npm:^1.2.0":
+"debug@npm:^4.3.3":
+  version: 4.3.4
+  resolution: "debug@npm:4.3.4"
+  dependencies:
+    ms: 2.1.2
+  peerDependenciesMeta:
+    supports-color:
+      optional: true
+  checksum: 3dbad3f94ea64f34431a9cbf0bafb61853eda57bff2880036153438f50fb5a84f27683ba0d8e5426bf41a8c6ff03879488120cf5b3a761e77953169c0600a708
+  languageName: node
+  linkType: hard
+
+"decamelize-keys@npm:^1.1.0":
+  version: 1.1.1
+  resolution: "decamelize-keys@npm:1.1.1"
+  dependencies:
+    decamelize: ^1.1.0
+    map-obj: ^1.0.0
+  checksum: fc645fe20b7bda2680bbf9481a3477257a7f9304b1691036092b97ab04c0ab53e3bf9fcc2d2ae382536568e402ec41fb11e1d4c3836a9abe2d813dd9ef4311e0
+  languageName: node
+  linkType: hard
+
+"decamelize@npm:^1.1.0, decamelize@npm:^1.1.1, decamelize@npm:^1.1.2, decamelize@npm:^1.2.0":
   version: 1.2.0
   resolution: "decamelize@npm:1.2.0"
   checksum: ad8c51a7e7e0720c70ec2eeb1163b66da03e7616d7b98c9ef43cce2416395e84c1e9548dd94f5f6ffecfee9f8b94251fc57121a8b021f2ff2469b2bae247b8aa
@@ -6390,9 +6499,9 @@ __metadata:
   linkType: hard
 
 "decode-uri-component@npm:^0.2.0":
-  version: 0.2.0
-  resolution: "decode-uri-component@npm:0.2.0"
-  checksum: f3749344ab9305ffcfe4bfe300e2dbb61fc6359e2b736812100a3b1b6db0a5668cba31a05e4b45d4d63dbf1a18dfa354cd3ca5bb3ededddabb8cd293f4404f94
+  version: 0.2.2
+  resolution: "decode-uri-component@npm:0.2.2"
+  checksum: 95476a7d28f267292ce745eac3524a9079058bbb35767b76e3ee87d42e34cd0275d2eb19d9d08c3e167f97556e8a2872747f5e65cbebcac8b0c98d83e285f139
   languageName: node
   linkType: hard
 
@@ -6769,6 +6878,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"domhandler@npm:^4.3.1":
+  version: 4.3.1
+  resolution: "domhandler@npm:4.3.1"
+  dependencies:
+    domelementtype: ^2.2.0
+  checksum: 4c665ceed016e1911bf7d1dadc09dc888090b64dee7851cccd2fcf5442747ec39c647bb1cb8c8919f8bbdd0f0c625a6bafeeed4b2d656bbecdbae893f43ffaaa
+  languageName: node
+  linkType: hard
+
 "domutils@npm:^1.7.0":
   version: 1.7.0
   resolution: "domutils@npm:1.7.0"
@@ -6779,7 +6897,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"domutils@npm:^2.5.2, domutils@npm:^2.6.0, domutils@npm:^2.7.0":
+"domutils@npm:^2.5.2, domutils@npm:^2.7.0":
   version: 2.7.0
   resolution: "domutils@npm:2.7.0"
   dependencies:
@@ -6790,6 +6908,17 @@ __metadata:
   languageName: node
   linkType: hard
 
+"domutils@npm:^2.8.0":
+  version: 2.8.0
+  resolution: "domutils@npm:2.8.0"
+  dependencies:
+    dom-serializer: ^1.0.1
+    domelementtype: ^2.2.0
+    domhandler: ^4.2.0
+  checksum: abf7434315283e9aadc2a24bac0e00eab07ae4313b40cc239f89d84d7315ebdfd2fb1b5bf750a96bc1b4403d7237c7b2ebf60459be394d625ead4ca89b934391
+  languageName: node
+  linkType: hard
+
 "dot-case@npm:^3.0.4":
   version: 3.0.4
   resolution: "dot-case@npm:3.0.4"
@@ -6945,13 +7074,20 @@ __metadata:
   languageName: node
   linkType: hard
 
-"electron-to-chromium@npm:^1.3.378, electron-to-chromium@npm:^1.3.723":
+"electron-to-chromium@npm:^1.3.378":
   version: 1.3.758
   resolution: "electron-to-chromium@npm:1.3.758"
   checksum: 2fec13dcdd1b24a2314d309566bd08c7f0ce383787e64ea43c14a7fc2a11c8a76fdb9a56ce7a1da6137e1ef46365f999d10c656f2fb6b9ff792ea3ae808ebb86
   languageName: node
   linkType: hard
 
+"electron-to-chromium@npm:^1.4.535":
+  version: 1.4.540
+  resolution: "electron-to-chromium@npm:1.4.540"
+  checksum: 78a48690a5cca3f89544d4e33a11e3101adb0b220da64078f67e167b396cbcd85044853cb88a9453444796599fe157c190ca5ebd00e9daf668ed5a9df3d0bba8
+  languageName: node
+  linkType: hard
+
 "elegant-spinner@npm:^1.0.1":
   version: 1.0.1
   resolution: "elegant-spinner@npm:1.0.1"
@@ -7009,7 +7145,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"encoding@npm:^0.1.11, encoding@npm:^0.1.13":
+"encoding@npm:^0.1.11, encoding@npm:^0.1.12, encoding@npm:^0.1.13":
   version: 0.1.13
   resolution: "encoding@npm:0.1.13"
   dependencies:
@@ -7228,13 +7364,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"es6-promise@npm:~3.0.2":
-  version: 3.0.2
-  resolution: "es6-promise@npm:3.0.2"
-  checksum: f9d6cabf3fa5cff33ddd9791c190b4ae83f372489b62c81d5c19dc10afd2e59736a31e20994f80fc54151c39c00ccc493b11b5b9dfc5e605eff597f239650da5
-  languageName: node
-  linkType: hard
-
 "es6-symbol@npm:^3.1.1, es6-symbol@npm:~3.1.3":
   version: 3.1.3
   resolution: "es6-symbol@npm:3.1.3"
@@ -7620,11 +7749,9 @@ __metadata:
   linkType: hard
 
 "eventsource@npm:^1.0.7":
-  version: 1.1.0
-  resolution: "eventsource@npm:1.1.0"
-  dependencies:
-    original: ^1.0.0
-  checksum: 78338b7e75ec471cb793efb3319e0c4d2bf00fb638a2e3f888ad6d98cd1e3d4492a29f554c0921c7b2ac5130c3a732a1a0056739f6e2f548d714aec685e5da7e
+  version: 1.1.2
+  resolution: "eventsource@npm:1.1.2"
+  checksum: fe8f2ac3c70b1b63ee3cef5c0a28680cb00b5747bfda1d9835695fab3ed602be41c5c799b1fc997b34b02633573fead25b12b036bdf5212f23a6aa9f59212e9b
   languageName: node
   linkType: hard
 
@@ -7879,17 +8006,16 @@ __metadata:
   languageName: node
   linkType: hard
 
-"fast-glob@npm:^3.1.1":
-  version: 3.2.5
-  resolution: "fast-glob@npm:3.2.5"
+"fast-glob@npm:^3.2.9":
+  version: 3.3.1
+  resolution: "fast-glob@npm:3.3.1"
   dependencies:
     "@nodelib/fs.stat": ^2.0.2
     "@nodelib/fs.walk": ^1.2.3
-    glob-parent: ^5.1.0
+    glob-parent: ^5.1.2
     merge2: ^1.3.0
-    micromatch: ^4.0.2
-    picomatch: ^2.2.1
-  checksum: 5d6772c9b63dbb739d60b5630851e1f2cbf9744119e0968eac44c9f8cbc2d3d5cb4f2f0c74715ccb23daa336c87bea42186ed367e6c991afee61cd3d967320eb
+    micromatch: ^4.0.4
+  checksum: b6f3add6403e02cf3a798bfbb1183d0f6da2afd368f27456010c0bc1f9640aea308243d4cb2c0ab142f618276e65ecb8be1661d7c62a7b4e5ba774b9ce5432e5
   languageName: node
   linkType: hard
 
@@ -7951,8 +8077,8 @@ __metadata:
   linkType: hard
 
 "fbjs@npm:^0.8.1":
-  version: 0.8.17
-  resolution: "fbjs@npm:0.8.17"
+  version: 0.8.18
+  resolution: "fbjs@npm:0.8.18"
   dependencies:
     core-js: ^1.0.0
     isomorphic-fetch: ^2.1.1
@@ -7960,8 +8086,8 @@ __metadata:
     object-assign: ^4.1.0
     promise: ^7.1.1
     setimmediate: ^1.0.5
-    ua-parser-js: ^0.7.18
-  checksum: e969aeb175ccf97d8818aab9907a78f253568e0cc1b8762621c5d235bf031419d7e700f16f7711e89dfd1e0fce2b87a05f8a2800f18df0a96258f0780615fd8b
+    ua-parser-js: ^0.7.30
+  checksum: 668731b946a765908c9cbe51d5160f973abb78004b3d122587c3e930e3e1ddcc0ce2b17f2a8637dc9d733e149aa580f8d3035a35cc2d3bc78b78f1b19aab90e2
   languageName: node
   linkType: hard
 
@@ -8136,7 +8262,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"find-up@npm:4.1.0, find-up@npm:^4.0.0":
+"find-up@npm:4.1.0, find-up@npm:^4.0.0, find-up@npm:^4.1.0":
   version: 4.1.0
   resolution: "find-up@npm:4.1.0"
   dependencies:
@@ -8209,13 +8335,13 @@ __metadata:
   languageName: node
   linkType: hard
 
-"follow-redirects@npm:^1.0.0, follow-redirects@npm:^1.10.0":
-  version: 1.14.1
-  resolution: "follow-redirects@npm:1.14.1"
+"follow-redirects@npm:^1.0.0, follow-redirects@npm:^1.14.0":
+  version: 1.15.3
+  resolution: "follow-redirects@npm:1.15.3"
   peerDependenciesMeta:
     debug:
       optional: true
-  checksum: 7381a55bdc6951c5c1ab73a8da99d9fa4c0496ce72dba92cd2ac2babe0e3ebde9b81c5bca889498ad95984bc773d713284ca2bb17f1b1e1416e5f6531e39a488
+  checksum: 584da22ec5420c837bd096559ebfb8fe69d82512d5585004e36a3b4a6ef6d5905780e0c74508c7b72f907d1fa2b7bd339e613859e9c304d0dc96af2027fd0231
   languageName: node
   linkType: hard
 
@@ -8363,7 +8489,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"fs-minipass@npm:^2.0.0":
+"fs-minipass@npm:^2.0.0, fs-minipass@npm:^2.1.0":
   version: 2.1.0
   resolution: "fs-minipass@npm:2.1.0"
   dependencies:
@@ -8450,7 +8576,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"fstream@npm:1.0.12, fstream@npm:^1.0.0, fstream@npm:^1.0.12":
+"fstream@npm:1.0.12":
   version: 1.0.12
   resolution: "fstream@npm:1.0.12"
   dependencies:
@@ -8495,6 +8621,23 @@ __metadata:
   languageName: node
   linkType: hard
 
+"gauge@npm:^3.0.0":
+  version: 3.0.2
+  resolution: "gauge@npm:3.0.2"
+  dependencies:
+    aproba: ^1.0.3 || ^2.0.0
+    color-support: ^1.1.2
+    console-control-strings: ^1.0.0
+    has-unicode: ^2.0.1
+    object-assign: ^4.1.1
+    signal-exit: ^3.0.0
+    string-width: ^4.2.3
+    strip-ansi: ^6.0.1
+    wide-align: ^1.1.2
+  checksum: 81296c00c7410cdd48f997800155fbead4f32e4f82109be0719c63edc8560e6579946cc8abd04205297640691ec26d21b578837fd13a4e96288ab4b40b1dc3e9
+  languageName: node
+  linkType: hard
+
 "gauge@npm:^4.0.0":
   version: 4.0.2
   resolution: "gauge@npm:4.0.2"
@@ -8512,22 +8655,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"gauge@npm:~2.7.3":
-  version: 2.7.4
-  resolution: "gauge@npm:2.7.4"
-  dependencies:
-    aproba: ^1.0.3
-    console-control-strings: ^1.0.0
-    has-unicode: ^2.0.0
-    object-assign: ^4.1.0
-    signal-exit: ^3.0.0
-    string-width: ^1.0.1
-    strip-ansi: ^3.0.1
-    wide-align: ^1.1.0
-  checksum: a89b53cee65579b46832e050b5f3a79a832cc422c190de79c6b8e2e15296ab92faddde6ddf2d376875cbba2b043efa99b9e1ed8124e7365f61b04e3cee9d40ee
-  languageName: node
-  linkType: hard
-
 "gaze@npm:^1.0.0":
   version: 1.1.3
   resolution: "gaze@npm:1.1.3"
@@ -8636,7 +8763,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"glob-parent@npm:^5.0.0, glob-parent@npm:^5.1.0, glob-parent@npm:~5.1.2":
+"glob-parent@npm:^5.0.0, glob-parent@npm:^5.1.2, glob-parent@npm:~5.1.2":
   version: 5.1.2
   resolution: "glob-parent@npm:5.1.2"
   dependencies:
@@ -8666,6 +8793,19 @@ __metadata:
   languageName: node
   linkType: hard
 
+"glob@npm:^8.0.1":
+  version: 8.1.0
+  resolution: "glob@npm:8.1.0"
+  dependencies:
+    fs.realpath: ^1.0.0
+    inflight: ^1.0.4
+    inherits: 2
+    minimatch: ^5.0.1
+    once: ^1.3.0
+  checksum: 92fbea3221a7d12075f26f0227abac435de868dd0736a17170663783296d0dd8d3d532a5672b4488a439bf5d7fb85cdd07c11185d6cd39184f0385cbdfb86a47
+  languageName: node
+  linkType: hard
+
 "global-dirs@npm:^2.0.1":
   version: 2.1.0
   resolution: "global-dirs@npm:2.1.0"
@@ -8734,16 +8874,16 @@ __metadata:
   linkType: hard
 
 "globby@npm:^11.0.3":
-  version: 11.0.4
-  resolution: "globby@npm:11.0.4"
+  version: 11.1.0
+  resolution: "globby@npm:11.1.0"
   dependencies:
     array-union: ^2.1.0
     dir-glob: ^3.0.1
-    fast-glob: ^3.1.1
-    ignore: ^5.1.4
-    merge2: ^1.3.0
+    fast-glob: ^3.2.9
+    ignore: ^5.2.0
+    merge2: ^1.4.1
     slash: ^3.0.0
-  checksum: d3e02d5e459e02ffa578b45f040381c33e3c0538ed99b958f0809230c423337999867d7b0dbf752ce93c46157d3bbf154d3fff988a93ccaeb627df8e1841775b
+  checksum: b4be8885e0cfa018fc783792942d53926c35c50b3aefd3fdcfb9d22c627639dc26bd2327a40a0b74b074100ce95bb7187bfeae2f236856aa3de183af7a02aea6
   languageName: node
   linkType: hard
 
@@ -8826,6 +8966,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"hard-rejection@npm:^2.1.0":
+  version: 2.1.0
+  resolution: "hard-rejection@npm:2.1.0"
+  checksum: 7baaf80a0c7fff4ca79687b4060113f1529589852152fa935e6787a2bc96211e784ad4588fb3048136ff8ffc9dfcf3ae385314a5b24db32de20bea0d1597f9dc
+  languageName: node
+  linkType: hard
+
 "harmony-reflect@npm:^1.4.6":
   version: 1.6.2
   resolution: "harmony-reflect@npm:1.6.2"
@@ -8870,7 +9017,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"has-unicode@npm:^2.0.0, has-unicode@npm:^2.0.1":
+"has-unicode@npm:^2.0.1":
   version: 2.0.1
   resolution: "has-unicode@npm:2.0.1"
   checksum: 1eab07a7436512db0be40a710b29b5dc21fa04880b7f63c9980b706683127e3c1b57cb80ea96d47991bdae2dfe479604f6a1ba410106ee1046a41d1bd0814400
@@ -9027,6 +9174,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"hosted-git-info@npm:^4.0.1":
+  version: 4.1.0
+  resolution: "hosted-git-info@npm:4.1.0"
+  dependencies:
+    lru-cache: ^6.0.0
+  checksum: c3f87b3c2f7eb8c2748c8f49c0c2517c9a95f35d26f4bf54b2a8cba05d2e668f3753548b6ea366b18ec8dadb4e12066e19fa382a01496b0ffa0497eb23cbe461
+  languageName: node
+  linkType: hard
+
 "hpack.js@npm:^2.1.6":
   version: 2.1.6
   resolution: "hpack.js@npm:2.1.6"
@@ -9132,9 +9288,9 @@ __metadata:
   linkType: hard
 
 "http-cache-semantics@npm:^4.1.0":
-  version: 4.1.0
-  resolution: "http-cache-semantics@npm:4.1.0"
-  checksum: 974de94a81c5474be07f269f9fd8383e92ebb5a448208223bfb39e172a9dbc26feff250192ecc23b9593b3f92098e010406b0f24bd4d588d631f80214648ed42
+  version: 4.1.1
+  resolution: "http-cache-semantics@npm:4.1.1"
+  checksum: 83ac0bc60b17a3a36f9953e7be55e5c8f41acc61b22583060e8dedc9dd5e3607c823a88d0926f9150e571f90946835c7fe150732801010845c72cd8bbff1a236
   languageName: node
   linkType: hard
 
@@ -9190,6 +9346,17 @@ __metadata:
   languageName: node
   linkType: hard
 
+"http-proxy-agent@npm:^4.0.1":
+  version: 4.0.1
+  resolution: "http-proxy-agent@npm:4.0.1"
+  dependencies:
+    "@tootallnate/once": 1
+    agent-base: 6
+    debug: 4
+  checksum: c6a5da5a1929416b6bbdf77b1aca13888013fe7eb9d59fc292e25d18e041bb154a8dfada58e223fc7b76b9b2d155a87e92e608235201f77d34aa258707963a82
+  languageName: node
+  linkType: hard
+
 "http-proxy-agent@npm:^5.0.0":
   version: 5.0.0
   resolution: "http-proxy-agent@npm:5.0.0"
@@ -9339,10 +9506,10 @@ __metadata:
   languageName: node
   linkType: hard
 
-"ignore@npm:^5.1.4":
-  version: 5.1.8
-  resolution: "ignore@npm:5.1.8"
-  checksum: 967abadb61e2cb0e5c5e8c4e1686ab926f91bc1a4680d994b91947d3c65d04c3ae126dcdf67f08e0feeb8ff8407d453e641aeeddcc47a3a3cca359f283cf6121
+"ignore@npm:^5.2.0":
+  version: 5.2.4
+  resolution: "ignore@npm:5.2.4"
+  checksum: 3d4c309c6006e2621659311783eaea7ebcd41fe4ca1d78c91c473157ad6666a57a2df790fe0d07a12300d9aac2888204d7be8d59f9aaf665b1c7fcdb432517ef
   languageName: node
   linkType: hard
 
@@ -9438,18 +9605,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"in-publish@npm:^2.0.0":
-  version: 2.0.1
-  resolution: "in-publish@npm:2.0.1"
-  bin:
-    in-install: in-install.js
-    in-publish: in-publish.js
-    not-in-install: not-in-install.js
-    not-in-publish: not-in-publish.js
-  checksum: 5efde2992a1e76550614a5a2c51f53669d9f3ee3a11d364de22b0c77c41de0b87c52c4c9b04375eaa276761b1944dd2b166323894d2344192328ffe85927ad38
-  languageName: node
-  linkType: hard
-
 "indefinite-observable@npm:^1.0.1":
   version: 1.0.2
   resolution: "indefinite-observable@npm:1.0.2"
@@ -9634,6 +9789,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"ip@npm:^2.0.0":
+  version: 2.0.0
+  resolution: "ip@npm:2.0.0"
+  checksum: cfcfac6b873b701996d71ec82a7dd27ba92450afdb421e356f44044ed688df04567344c36cbacea7d01b1c39a4c732dc012570ebe9bebfb06f27314bca625349
+  languageName: node
+  linkType: hard
+
 "ipaddr.js@npm:1.9.1, ipaddr.js@npm:^1.9.0":
   version: 1.9.1
   resolution: "ipaddr.js@npm:1.9.1"
@@ -9778,6 +9940,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"is-core-module@npm:^2.5.0":
+  version: 2.13.0
+  resolution: "is-core-module@npm:2.13.0"
+  dependencies:
+    has: ^1.0.3
+  checksum: 053ab101fb390bfeb2333360fd131387bed54e476b26860dc7f5a700bbf34a0ec4454f7c8c4d43e8a0030957e4b3db6e16d35e1890ea6fb654c833095e040355
+  languageName: node
+  linkType: hard
+
 "is-data-descriptor@npm:^0.1.4":
   version: 0.1.4
   resolution: "is-data-descriptor@npm:0.1.4"
@@ -10037,7 +10208,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"is-plain-obj@npm:^1.0.0":
+"is-plain-obj@npm:^1.0.0, is-plain-obj@npm:^1.1.0":
   version: 1.1.0
   resolution: "is-plain-obj@npm:1.1.0"
   checksum: 0ee04807797aad50859652a7467481816cbb57e5cc97d813a7dcd8915da8195dc68c436010bf39d195226cde6a2d352f4b815f16f26b7bf486a5754290629931
@@ -10769,7 +10940,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"js-base64@npm:^2.1.8":
+"js-base64@npm:^2.1.8, js-base64@npm:^2.4.9":
   version: 2.6.4
   resolution: "js-base64@npm:2.6.4"
   checksum: 5f4084078d6c46f8529741d110df84b14fac3276b903760c21fa8cc8521370d607325dfe1c1a9fbbeaae1ff8e602665aaeef1362427d8fef704f9e3659472ce8
@@ -10937,10 +11108,10 @@ __metadata:
   languageName: node
   linkType: hard
 
-"json-schema@npm:0.2.3":
-  version: 0.2.3
-  resolution: "json-schema@npm:0.2.3"
-  checksum: bbc2070988fb5f2a2266a31b956f1b5660e03ea7eaa95b33402901274f625feb586ae0c485e1df854fde40a7f0dc679f3b3ca8e5b8d31f8ea07a0d834de785c7
+"json-schema@npm:0.4.0":
+  version: 0.4.0
+  resolution: "json-schema@npm:0.4.0"
+  checksum: 66389434c3469e698da0df2e7ac5a3281bcff75e797a5c127db7c5b56270e01ae13d9afa3c03344f76e32e81678337a8c912bdbb75101c62e487dc3778461d72
   languageName: node
   linkType: hard
 
@@ -10984,24 +11155,22 @@ __metadata:
   linkType: hard
 
 "json5@npm:^1.0.1":
-  version: 1.0.1
-  resolution: "json5@npm:1.0.1"
+  version: 1.0.2
+  resolution: "json5@npm:1.0.2"
   dependencies:
     minimist: ^1.2.0
   bin:
     json5: lib/cli.js
-  checksum: e76ea23dbb8fc1348c143da628134a98adf4c5a4e8ea2adaa74a80c455fc2cdf0e2e13e6398ef819bfe92306b610ebb2002668ed9fc1af386d593691ef346fc3
+  checksum: 866458a8c58a95a49bef3adba929c625e82532bcff1fe93f01d29cb02cac7c3fe1f4b79951b7792c2da9de0b32871a8401a6e3c5b36778ad852bf5b8a61165d7
   languageName: node
   linkType: hard
 
 "json5@npm:^2.1.2":
-  version: 2.2.0
-  resolution: "json5@npm:2.2.0"
-  dependencies:
-    minimist: ^1.2.5
+  version: 2.2.3
+  resolution: "json5@npm:2.2.3"
   bin:
     json5: lib/cli.js
-  checksum: e88fc5274bb58fc99547baa777886b069d2dd96d9cfc4490b305fd16d711dabd5979e35a4f90873cefbeb552e216b041a304fe56702bedba76e19bc7845f208d
+  checksum: 2a7436a93393830bce797d4626275152e37e877b265e94ca69c99e3d20c2b9dab021279146a39cdb700e71b2dd32a4cebd1514cd57cee102b1af906ce5040349
   languageName: node
   linkType: hard
 
@@ -11038,14 +11207,14 @@ __metadata:
   linkType: hard
 
 "jsprim@npm:^1.2.2":
-  version: 1.4.1
-  resolution: "jsprim@npm:1.4.1"
+  version: 1.4.2
+  resolution: "jsprim@npm:1.4.2"
   dependencies:
     assert-plus: 1.0.0
     extsprintf: 1.3.0
-    json-schema: 0.2.3
+    json-schema: 0.4.0
     verror: 1.10.0
-  checksum: 6bcb20ec265ae18bb48e540a6da2c65f9c844f7522712d6dfcb01039527a49414816f4869000493363f1e1ea96cbad00e46188d5ecc78257a19f152467587373
+  checksum: 2ad1b9fdcccae8b3d580fa6ced25de930eaa1ad154db21bbf8478a4d30bbbec7925b5f5ff29b933fba9412b16a17bd484a8da4fdb3663b5e27af95dd693bab2a
   languageName: node
   linkType: hard
 
@@ -11137,16 +11306,15 @@ __metadata:
   languageName: node
   linkType: hard
 
-"jszip@npm:3.1.5":
-  version: 3.1.5
-  resolution: "jszip@npm:3.1.5"
+"jszip@npm:^3.10.1":
+  version: 3.10.1
+  resolution: "jszip@npm:3.10.1"
   dependencies:
-    core-js: ~2.3.0
-    es6-promise: ~3.0.2
-    lie: ~3.1.0
+    lie: ~3.3.0
     pako: ~1.0.2
-    readable-stream: ~2.0.6
-  checksum: 2d0464089d7a4604c7b7586d089b7aa39fbcfe7cc058f7c066b3c92b43f3b94f69362d1b6dd8252049f5729e1fc452a788703382cbce6d77f607d3ce1227b231
+    readable-stream: ~2.3.6
+    setimmediate: ^1.0.5
+  checksum: abc77bfbe33e691d4d1ac9c74c8851b5761fba6a6986630864f98d876f3fcc2d36817dfc183779f32c00157b5d53a016796677298272a714ae096dfe6b1c8b60
   languageName: node
   linkType: hard
 
@@ -11198,7 +11366,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"kind-of@npm:^6.0.0, kind-of@npm:^6.0.2":
+"kind-of@npm:^6.0.0, kind-of@npm:^6.0.2, kind-of@npm:^6.0.3":
   version: 6.0.3
   resolution: "kind-of@npm:6.0.3"
   checksum: 3ab01e7b1d440b22fe4c31f23d8d38b4d9b91d9f291df683476576493d5dfd2e03848a8b05813dd0c3f0e835bc63f433007ddeceb71f05cb25c45ae1b19c6d3b
@@ -11285,12 +11453,12 @@ __metadata:
   languageName: node
   linkType: hard
 
-"lie@npm:~3.1.0":
-  version: 3.1.1
-  resolution: "lie@npm:3.1.1"
+"lie@npm:~3.3.0":
+  version: 3.3.0
+  resolution: "lie@npm:3.3.0"
   dependencies:
     immediate: ~3.0.5
-  checksum: 6da9f2121d2dbd15f1eca44c0c7e211e66a99c7b326ec8312645f3648935bc3a658cf0e9fa7b5f10144d9e2641500b4f55bd32754607c3de945b5f443e50ddd1
+  checksum: 33102302cf19766f97919a6a98d481e01393288b17a6aa1f030a3542031df42736edde8dab29ffdbf90bebeffc48c761eb1d064dc77592ca3ba3556f9fe6d2a8
   languageName: node
   linkType: hard
 
@@ -11421,24 +11589,24 @@ __metadata:
   linkType: hard
 
 "loader-utils@npm:^1.1.0, loader-utils@npm:^1.2.3, loader-utils@npm:^1.4.0":
-  version: 1.4.0
-  resolution: "loader-utils@npm:1.4.0"
+  version: 1.4.2
+  resolution: "loader-utils@npm:1.4.2"
   dependencies:
     big.js: ^5.2.2
     emojis-list: ^3.0.0
     json5: ^1.0.1
-  checksum: d150b15e7a42ac47d935c8b484b79e44ff6ab4c75df7cc4cb9093350cf014ec0b17bdb60c5d6f91a37b8b218bd63b973e263c65944f58ca2573e402b9a27e717
+  checksum: eb6fb622efc0ffd1abdf68a2022f9eac62bef8ec599cf8adb75e94d1d338381780be6278534170e99edc03380a6d29bc7eb1563c89ce17c5fed3a0b17f1ad804
   languageName: node
   linkType: hard
 
 "loader-utils@npm:^2.0.0":
-  version: 2.0.0
-  resolution: "loader-utils@npm:2.0.0"
+  version: 2.0.4
+  resolution: "loader-utils@npm:2.0.4"
   dependencies:
     big.js: ^5.2.2
     emojis-list: ^3.0.0
     json5: ^2.1.2
-  checksum: 6856423131b50b6f5f259da36f498cfd7fc3c3f8bb17777cf87fdd9159e797d4ba4288d9a96415fd8da62c2906960e88f74711dee72d03a9003bddcd0d364a51
+  checksum: a5281f5fff1eaa310ad5e1164095689443630f3411e927f95031ab4fb83b4a98f388185bb1fe949e8ab8d4247004336a625e9255c22122b815bb9a4c5d8fc3b7
   languageName: node
   linkType: hard
 
@@ -11471,14 +11639,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"lodash-es@npm:4.17.14":
-  version: 4.17.14
-  resolution: "lodash-es@npm:4.17.14"
-  checksum: 56d39dc8e76ac366eae79d4e8d7c19bd2f8981b640a46942bf2d88fa871b2e083e48fe2b895c84ed139e13c0b466cac22ea27d7394be04f2ba62c518392c39be
-  languageName: node
-  linkType: hard
-
-"lodash-es@npm:^4.17.10, lodash-es@npm:^4.17.5, lodash-es@npm:^4.2.1":
+"lodash-es@npm:^4.17.10, lodash-es@npm:^4.17.21, lodash-es@npm:^4.17.5, lodash-es@npm:^4.2.1":
   version: 4.17.21
   resolution: "lodash-es@npm:4.17.21"
   checksum: 05cbffad6e2adbb331a4e16fbd826e7faee403a1a04873b82b42c0f22090f280839f85b95393f487c1303c8a3d2a010048bf06151a6cbe03eee4d388fb0a12d2
@@ -11671,16 +11832,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"lru-cache@npm:^4.0.1":
-  version: 4.1.5
-  resolution: "lru-cache@npm:4.1.5"
-  dependencies:
-    pseudomap: ^1.0.2
-    yallist: ^2.1.2
-  checksum: 4bb4b58a36cd7dc4dcec74cbe6a8f766a38b7426f1ff59d4cf7d82a2aa9b9565cd1cb98f6ff60ce5cd174524868d7bc9b7b1c294371851356066ca9ac4cf135a
-  languageName: node
-  linkType: hard
-
 "lru-cache@npm:^5.1.1":
   version: 5.1.1
   resolution: "lru-cache@npm:5.1.1"
@@ -11706,6 +11857,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"lru-cache@npm:^7.7.1":
+  version: 7.18.3
+  resolution: "lru-cache@npm:7.18.3"
+  checksum: e550d772384709deea3f141af34b6d4fa392e2e418c1498c078de0ee63670f1f46f5eee746e8ef7e69e1c895af0d4224e62ee33e66a543a14763b0f2e74c1356
+  languageName: node
+  linkType: hard
+
 "make-dir@npm:^2.0.0, make-dir@npm:^2.1.0":
   version: 2.1.0
   resolution: "make-dir@npm:2.1.0"
@@ -11749,8 +11907,56 @@ __metadata:
   languageName: node
   linkType: hard
 
-"makeerror@npm:1.0.x":
-  version: 1.0.11
+"make-fetch-happen@npm:^10.0.4":
+  version: 10.2.1
+  resolution: "make-fetch-happen@npm:10.2.1"
+  dependencies:
+    agentkeepalive: ^4.2.1
+    cacache: ^16.1.0
+    http-cache-semantics: ^4.1.0
+    http-proxy-agent: ^5.0.0
+    https-proxy-agent: ^5.0.0
+    is-lambda: ^1.0.1
+    lru-cache: ^7.7.1
+    minipass: ^3.1.6
+    minipass-collect: ^1.0.2
+    minipass-fetch: ^2.0.3
+    minipass-flush: ^1.0.5
+    minipass-pipeline: ^1.2.4
+    negotiator: ^0.6.3
+    promise-retry: ^2.0.1
+    socks-proxy-agent: ^7.0.0
+    ssri: ^9.0.0
+  checksum: 2332eb9a8ec96f1ffeeea56ccefabcb4193693597b132cd110734d50f2928842e22b84cfa1508e921b8385cdfd06dda9ad68645fed62b50fff629a580f5fb72c
+  languageName: node
+  linkType: hard
+
+"make-fetch-happen@npm:^9.1.0":
+  version: 9.1.0
+  resolution: "make-fetch-happen@npm:9.1.0"
+  dependencies:
+    agentkeepalive: ^4.1.3
+    cacache: ^15.2.0
+    http-cache-semantics: ^4.1.0
+    http-proxy-agent: ^4.0.1
+    https-proxy-agent: ^5.0.0
+    is-lambda: ^1.0.1
+    lru-cache: ^6.0.0
+    minipass: ^3.1.3
+    minipass-collect: ^1.0.2
+    minipass-fetch: ^1.3.2
+    minipass-flush: ^1.0.5
+    minipass-pipeline: ^1.2.4
+    negotiator: ^0.6.2
+    promise-retry: ^2.0.1
+    socks-proxy-agent: ^6.0.0
+    ssri: ^8.0.0
+  checksum: 0eb371c85fdd0b1584fcfdf3dc3c62395761b3c14658be02620c310305a9a7ecf1617a5e6fb30c1d081c5c8aaf177fa133ee225024313afabb7aa6a10f1e3d04
+  languageName: node
+  linkType: hard
+
+"makeerror@npm:1.0.x":
+  version: 1.0.11
   resolution: "makeerror@npm:1.0.11"
   dependencies:
     tmpl: 1.0.x
@@ -11788,6 +11994,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"map-obj@npm:^4.0.0":
+  version: 4.3.0
+  resolution: "map-obj@npm:4.3.0"
+  checksum: fbc554934d1a27a1910e842bc87b177b1a556609dd803747c85ece420692380827c6ae94a95cce4407c054fa0964be3bf8226f7f2cb2e9eeee432c7c1985684e
+  languageName: node
+  linkType: hard
+
 "map-visit@npm:^1.0.0":
   version: 1.0.0
   resolution: "map-visit@npm:1.0.0"
@@ -11911,6 +12124,26 @@ __metadata:
   languageName: node
   linkType: hard
 
+"meow@npm:^9.0.0":
+  version: 9.0.0
+  resolution: "meow@npm:9.0.0"
+  dependencies:
+    "@types/minimist": ^1.2.0
+    camelcase-keys: ^6.2.2
+    decamelize: ^1.2.0
+    decamelize-keys: ^1.1.0
+    hard-rejection: ^2.1.0
+    minimist-options: 4.1.0
+    normalize-package-data: ^3.0.0
+    read-pkg-up: ^7.0.1
+    redent: ^3.0.0
+    trim-newlines: ^3.0.0
+    type-fest: ^0.18.0
+    yargs-parser: ^20.2.3
+  checksum: 99799c47247f4daeee178e3124f6ef6f84bde2ba3f37652865d5d8f8b8adcf9eedfc551dd043e2455cd8206545fd848e269c0c5ab6b594680a0ad4d3617c9639
+  languageName: node
+  linkType: hard
+
 "merge-deep@npm:^3.0.2":
   version: 3.0.3
   resolution: "merge-deep@npm:3.0.3"
@@ -11936,7 +12169,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"merge2@npm:^1.2.3, merge2@npm:^1.3.0":
+"merge2@npm:^1.2.3, merge2@npm:^1.3.0, merge2@npm:^1.4.1":
   version: 1.4.1
   resolution: "merge2@npm:1.4.1"
   checksum: 7268db63ed5169466540b6fb947aec313200bcf6d40c5ab722c22e242f651994619bcd85601602972d3c85bd2cc45a358a4c61937e9f11a061919a1da569b0c2
@@ -11978,13 +12211,13 @@ __metadata:
   languageName: node
   linkType: hard
 
-"micromatch@npm:^4.0.2":
-  version: 4.0.4
-  resolution: "micromatch@npm:4.0.4"
+"micromatch@npm:^4.0.4":
+  version: 4.0.5
+  resolution: "micromatch@npm:4.0.5"
   dependencies:
-    braces: ^3.0.1
-    picomatch: ^2.2.3
-  checksum: ef3d1c88e79e0a68b0e94a03137676f3324ac18a908c245a9e5936f838079fcc108ac7170a5fadc265a9c2596963462e402841406bda1a4bb7b68805601d631c
+    braces: ^3.0.2
+    picomatch: ^2.3.1
+  checksum: 02a17b671c06e8fefeeb6ef996119c1e597c942e632a21ef589154f23898c9c6a9858526246abb14f8bca6e77734aa9dcf65476fca47cedfb80d9577d52843fc
   languageName: node
   linkType: hard
 
@@ -12057,6 +12290,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"min-indent@npm:^1.0.0":
+  version: 1.0.1
+  resolution: "min-indent@npm:1.0.1"
+  checksum: bfc6dd03c5eaf623a4963ebd94d087f6f4bbbfd8c41329a7f09706b0cb66969c4ddd336abeb587bc44bc6f08e13bf90f0b374f9d71f9f01e04adc2cd6f083ef1
+  languageName: node
+  linkType: hard
+
 "mini-css-extract-plugin@npm:0.9.0":
   version: 0.9.0
   resolution: "mini-css-extract-plugin@npm:0.9.0"
@@ -12085,7 +12325,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"minimatch@npm:3.0.4, minimatch@npm:^3.0.4, minimatch@npm:~3.0.2":
+"minimatch@npm:3.0.4":
   version: 3.0.4
   resolution: "minimatch@npm:3.0.4"
   dependencies:
@@ -12094,10 +12334,48 @@ __metadata:
   languageName: node
   linkType: hard
 
+"minimatch@npm:^3.0.4":
+  version: 3.1.2
+  resolution: "minimatch@npm:3.1.2"
+  dependencies:
+    brace-expansion: ^1.1.7
+  checksum: c154e566406683e7bcb746e000b84d74465b3a832c45d59912b9b55cd50dee66e5c4b1e5566dba26154040e51672f9aa450a9aef0c97cfc7336b78b7afb9540a
+  languageName: node
+  linkType: hard
+
+"minimatch@npm:^5.0.1":
+  version: 5.1.6
+  resolution: "minimatch@npm:5.1.6"
+  dependencies:
+    brace-expansion: ^2.0.1
+  checksum: 7564208ef81d7065a370f788d337cd80a689e981042cb9a1d0e6580b6c6a8c9279eba80010516e258835a988363f99f54a6f711a315089b8b42694f5da9d0d77
+  languageName: node
+  linkType: hard
+
+"minimatch@npm:~3.0.2":
+  version: 3.0.8
+  resolution: "minimatch@npm:3.0.8"
+  dependencies:
+    brace-expansion: ^1.1.7
+  checksum: 850cca179cad715133132693e6963b0db64ab0988c4d211415b087fc23a3e46321e2c5376a01bf5623d8782aba8bdf43c571e2e902e51fdce7175c7215c29f8b
+  languageName: node
+  linkType: hard
+
+"minimist-options@npm:4.1.0":
+  version: 4.1.0
+  resolution: "minimist-options@npm:4.1.0"
+  dependencies:
+    arrify: ^1.0.1
+    is-plain-obj: ^1.1.0
+    kind-of: ^6.0.3
+  checksum: 8c040b3068811e79de1140ca2b708d3e203c8003eb9a414c1ab3cd467fc5f17c9ca02a5aef23bedc51a7f8bfbe77f87e9a7e31ec81fba304cda675b019496f4e
+  languageName: node
+  linkType: hard
+
 "minimist@npm:^1.1.1, minimist@npm:^1.1.3, minimist@npm:^1.2.0, minimist@npm:^1.2.5":
-  version: 1.2.5
-  resolution: "minimist@npm:1.2.5"
-  checksum: 86706ce5b36c16bfc35c5fe3dbb01d5acdc9a22f2b6cc810b6680656a1d2c0e44a0159c9a3ba51fb072bb5c203e49e10b51dcd0eec39c481f4c42086719bae52
+  version: 1.2.8
+  resolution: "minimist@npm:1.2.8"
+  checksum: 75a6d645fb122dad29c06a7597bddea977258957ed88d7a6df59b5cd3fe4a527e253e9bbf2e783e4b73657f9098b96a5fe96ab8a113655d4109108577ecf85b0
   languageName: node
   linkType: hard
 
@@ -12110,6 +12388,21 @@ __metadata:
   languageName: node
   linkType: hard
 
+"minipass-fetch@npm:^1.3.2":
+  version: 1.4.1
+  resolution: "minipass-fetch@npm:1.4.1"
+  dependencies:
+    encoding: ^0.1.12
+    minipass: ^3.1.0
+    minipass-sized: ^1.0.3
+    minizlib: ^2.0.0
+  dependenciesMeta:
+    encoding:
+      optional: true
+  checksum: ec93697bdb62129c4e6c0104138e681e30efef8c15d9429dd172f776f83898471bc76521b539ff913248cc2aa6d2b37b652c993504a51cc53282563640f29216
+  languageName: node
+  linkType: hard
+
 "minipass-fetch@npm:^2.0.2":
   version: 2.0.3
   resolution: "minipass-fetch@npm:2.0.3"
@@ -12125,6 +12418,21 @@ __metadata:
   languageName: node
   linkType: hard
 
+"minipass-fetch@npm:^2.0.3":
+  version: 2.1.2
+  resolution: "minipass-fetch@npm:2.1.2"
+  dependencies:
+    encoding: ^0.1.13
+    minipass: ^3.1.6
+    minipass-sized: ^1.0.3
+    minizlib: ^2.1.2
+  dependenciesMeta:
+    encoding:
+      optional: true
+  checksum: 3f216be79164e915fc91210cea1850e488793c740534985da017a4cbc7a5ff50506956d0f73bb0cb60e4fe91be08b6b61ef35101706d3ef5da2c8709b5f08f91
+  languageName: node
+  linkType: hard
+
 "minipass-flush@npm:^1.0.5":
   version: 1.0.5
   resolution: "minipass-flush@npm:1.0.5"
@@ -12161,6 +12469,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"minipass@npm:^3.1.0, minipass@npm:^3.1.3":
+  version: 3.3.6
+  resolution: "minipass@npm:3.3.6"
+  dependencies:
+    yallist: ^4.0.0
+  checksum: a30d083c8054cee83cdcdc97f97e4641a3f58ae743970457b1489ce38ee1167b3aaf7d815cd39ec7a99b9c40397fd4f686e83750e73e652b21cb516f6d845e48
+  languageName: node
+  linkType: hard
+
 "minipass@npm:^3.1.6":
   version: 3.1.6
   resolution: "minipass@npm:3.1.6"
@@ -12170,7 +12487,14 @@ __metadata:
   languageName: node
   linkType: hard
 
-"minizlib@npm:^2.1.1, minizlib@npm:^2.1.2":
+"minipass@npm:^5.0.0":
+  version: 5.0.0
+  resolution: "minipass@npm:5.0.0"
+  checksum: 425dab288738853fded43da3314a0b5c035844d6f3097a8e3b5b29b328da8f3c1af6fc70618b32c29ff906284cf6406b6841376f21caaadd0793c1d5a6a620ea
+  languageName: node
+  linkType: hard
+
+"minizlib@npm:^2.0.0, minizlib@npm:^2.1.1, minizlib@npm:^2.1.2":
   version: 2.1.2
   resolution: "minizlib@npm:2.1.2"
   dependencies:
@@ -12218,7 +12542,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"mkdirp@npm:>=0.5 0, mkdirp@npm:^0.5.0, mkdirp@npm:^0.5.1, mkdirp@npm:^0.5.3, mkdirp@npm:^0.5.4, mkdirp@npm:^0.5.5, mkdirp@npm:~0.5.1":
+"mkdirp@npm:>=0.5 0, mkdirp@npm:^0.5.1, mkdirp@npm:^0.5.3, mkdirp@npm:^0.5.4, mkdirp@npm:^0.5.5, mkdirp@npm:~0.5.1":
   version: 0.5.5
   resolution: "mkdirp@npm:0.5.5"
   dependencies:
@@ -12238,13 +12562,20 @@ __metadata:
   languageName: node
   linkType: hard
 
-"moment@npm:2.29.1, moment@npm:^2.27.0":
+"moment@npm:2.29.1":
   version: 2.29.1
   resolution: "moment@npm:2.29.1"
   checksum: 1e14d5f422a2687996be11dd2d50c8de3bd577c4a4ca79ba5d02c397242a933e5b941655de6c8cb90ac18f01cc4127e55b4a12ae3c527a6c0a274e455979345e
   languageName: node
   linkType: hard
 
+"moment@npm:^2.27.0":
+  version: 2.29.4
+  resolution: "moment@npm:2.29.4"
+  checksum: 0ec3f9c2bcba38dc2451b1daed5daded747f17610b92427bebe1d08d48d8b7bdd8d9197500b072d14e326dd0ccf3e326b9e3d07c5895d3d49e39b6803b76e80e
+  languageName: node
+  linkType: hard
+
 "moo@npm:^0.5.0":
   version: 0.5.1
   resolution: "moo@npm:0.5.1"
@@ -12329,6 +12660,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"nan@npm:^2.17.0":
+  version: 2.18.0
+  resolution: "nan@npm:2.18.0"
+  dependencies:
+    node-gyp: latest
+  checksum: 4fe42f58456504eab3105c04a5cffb72066b5f22bd45decf33523cb17e7d6abc33cca2a19829407b9000539c5cb25f410312d4dc5b30220167a3594896ea6a0a
+  languageName: node
+  linkType: hard
+
 "nanomatch@npm:^1.2.9":
   version: 1.2.13
   resolution: "nanomatch@npm:1.2.13"
@@ -12379,7 +12719,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"negotiator@npm:^0.6.3":
+"negotiator@npm:^0.6.2, negotiator@npm:^0.6.3":
   version: 0.6.3
   resolution: "negotiator@npm:0.6.3"
   checksum: b8ffeb1e262eff7968fc90a2b6767b04cfd9842582a9d0ece0af7049537266e7b2506dfb1d107a32f06dd849ab2aea834d5830f7f4d0e5cb7d36e1ae55d021d9
@@ -12430,13 +12770,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"node-fetch@npm:2.6.1":
-  version: 2.6.1
-  resolution: "node-fetch@npm:2.6.1"
-  checksum: 91075bedd57879117e310fbcc36983ad5d699e522edb1ebcdc4ee5294c982843982652925c3532729fdc86b2d64a8a827797a745f332040d91823c8752ee4d7c
-  languageName: node
-  linkType: hard
-
 "node-fetch@npm:^1.0.1":
   version: 1.7.3
   resolution: "node-fetch@npm:1.7.3"
@@ -12447,6 +12780,20 @@ __metadata:
   languageName: node
   linkType: hard
 
+"node-fetch@npm:^2.6.12":
+  version: 2.7.0
+  resolution: "node-fetch@npm:2.7.0"
+  dependencies:
+    whatwg-url: ^5.0.0
+  peerDependencies:
+    encoding: ^0.1.0
+  peerDependenciesMeta:
+    encoding:
+      optional: true
+  checksum: d76d2f5edb451a3f05b15115ec89fc6be39de37c6089f1b6368df03b91e1633fd379a7e01b7ab05089a25034b2023d959b47e59759cb38d88341b2459e89d6e5
+  languageName: node
+  linkType: hard
+
 "node-forge@npm:^0.10.0":
   version: 0.10.0
   resolution: "node-forge@npm:0.10.0"
@@ -12454,25 +12801,23 @@ __metadata:
   languageName: node
   linkType: hard
 
-"node-gyp@npm:^3.8.0":
-  version: 3.8.0
-  resolution: "node-gyp@npm:3.8.0"
+"node-gyp@npm:^8.4.1":
+  version: 8.4.1
+  resolution: "node-gyp@npm:8.4.1"
   dependencies:
-    fstream: ^1.0.0
-    glob: ^7.0.3
-    graceful-fs: ^4.1.2
-    mkdirp: ^0.5.0
-    nopt: 2 || 3
-    npmlog: 0 || 1 || 2 || 3 || 4
-    osenv: 0
-    request: ^2.87.0
-    rimraf: 2
-    semver: ~5.3.0
-    tar: ^2.0.0
-    which: 1
+    env-paths: ^2.2.0
+    glob: ^7.1.4
+    graceful-fs: ^4.2.6
+    make-fetch-happen: ^9.1.0
+    nopt: ^5.0.0
+    npmlog: ^6.0.0
+    rimraf: ^3.0.2
+    semver: ^7.3.5
+    tar: ^6.1.2
+    which: ^2.0.2
   bin:
-    node-gyp: ./bin/node-gyp.js
-  checksum: e99d740db6f5462cfd2f03fdfa89bae7e509e37f158d78a2fec0c858984cceb801723510656110d8f1d0ecf69cc2ceba8b477d22aac3e69ce8094db19dff6b2b
+    node-gyp: bin/node-gyp.js
+  checksum: 341710b5da39d3660e6a886b37e210d33f8282047405c2e62c277bcc744c7552c5b8b972ebc3a7d5c2813794e60cc48c3ebd142c46d6e0321db4db6c92dd0355
   languageName: node
   linkType: hard
 
@@ -12554,66 +12899,84 @@ __metadata:
   languageName: node
   linkType: hard
 
-"node-releases@npm:^1.1.52, node-releases@npm:^1.1.71":
+"node-releases@npm:^1.1.52":
   version: 1.1.73
   resolution: "node-releases@npm:1.1.73"
   checksum: 44a6caec3330538a669c156fa84833725ae92b317585b106e08ab292c14da09f30cb913c10f1a7402180a51b10074832d4e045b6c3512d74c37d86b41a69e63b
   languageName: node
   linkType: hard
 
-"node-sass-chokidar@npm:1.5.0":
-  version: 1.5.0
-  resolution: "node-sass-chokidar@npm:1.5.0"
+"node-releases@npm:^2.0.13":
+  version: 2.0.13
+  resolution: "node-releases@npm:2.0.13"
+  checksum: 17ec8f315dba62710cae71a8dad3cd0288ba943d2ece43504b3b1aa8625bf138637798ab470b1d9035b0545996f63000a8a926e0f6d35d0996424f8b6d36dda3
+  languageName: node
+  linkType: hard
+
+"node-sass-chokidar@npm:^2.0.0":
+  version: 2.0.0
+  resolution: "node-sass-chokidar@npm:2.0.0"
   dependencies:
     async-foreach: ^0.1.3
     chokidar: ^3.4.0
     get-stdin: ^4.0.1
     glob: ^7.0.3
     meow: ^3.7.0
-    node-sass: ^4.14.1
+    node-sass: ^7.0.1
     sass-graph: ^2.2.4
     stdout-stream: ^1.4.0
   bin:
     node-sass-chokidar: bin/node-sass-chokidar
-  checksum: fb3197b1dcc06b7b3c8e7d2e63ab9397745466f2e78871f8ba112f3740f7092f37f6668bc25a0d7bea82fe8a78b4d8dd009151eb0f041dc62029e76a38004e8d
+  checksum: 5aeffc93cddf5cc32d0e86de4999e56e3cdccb1d86b5ed211e2d661f4e579bac19c078ca791662e2aaff9752ba2e18ce87324c07de5b3222064a4c9703856d9c
   languageName: node
   linkType: hard
 
-"node-sass@npm:^4.14.1, node-sass@npm:^4.9.4":
-  version: 4.14.1
-  resolution: "node-sass@npm:4.14.1"
+"node-sass@npm:^7.0.1":
+  version: 7.0.3
+  resolution: "node-sass@npm:7.0.3"
   dependencies:
     async-foreach: ^0.1.3
-    chalk: ^1.1.1
-    cross-spawn: ^3.0.0
+    chalk: ^4.1.2
+    cross-spawn: ^7.0.3
     gaze: ^1.0.0
     get-stdin: ^4.0.1
     glob: ^7.0.3
-    in-publish: ^2.0.0
     lodash: ^4.17.15
-    meow: ^3.7.0
-    mkdirp: ^0.5.1
+    meow: ^9.0.0
     nan: ^2.13.2
-    node-gyp: ^3.8.0
-    npmlog: ^4.0.0
+    node-gyp: ^8.4.1
+    npmlog: ^5.0.0
     request: ^2.88.0
-    sass-graph: 2.2.5
+    sass-graph: ^4.0.1
     stdout-stream: ^1.4.0
     true-case-path: ^1.0.2
   bin:
     node-sass: bin/node-sass
-  checksum: 6894709e7d8c4482fd0d53ce8473fd7c3ddf38ef36a109bbda96aca750e7c28777e89fcf277c9e032ca69328062f10a12be61e01a385ed0d221fbbdfd0ac7448
+  checksum: 7d577d0fb68948959f367341e6cfc2858aa37abc5fadbd9e6b477ed0d192bebf7f8516d0b53c27be30ab05d5cd62d8a9bab08cc4442ef901b02cb51d864b4419
   languageName: node
   linkType: hard
 
-"nopt@npm:2 || 3":
-  version: 3.0.6
-  resolution: "nopt@npm:3.0.6"
+"node-sass@npm:^9.0.0":
+  version: 9.0.0
+  resolution: "node-sass@npm:9.0.0"
   dependencies:
-    abbrev: 1
+    async-foreach: ^0.1.3
+    chalk: ^4.1.2
+    cross-spawn: ^7.0.3
+    gaze: ^1.0.0
+    get-stdin: ^4.0.1
+    glob: ^7.0.3
+    lodash: ^4.17.15
+    make-fetch-happen: ^10.0.4
+    meow: ^9.0.0
+    nan: ^2.17.0
+    node-gyp: ^8.4.1
+    sass-graph: ^4.0.1
+    stdout-stream: ^1.4.0
+    true-case-path: ^2.2.1
   bin:
-    nopt: ./bin/nopt.js
-  checksum: 7f8579029a0d7cb3341c6b1610b31e363f708b7aaaaf3580e3ec5ae8528d1f3a79d350d8bfa331776e6c6703a5a148b72edd9b9b4c1dd55874d8e70e963d1e20
+    node-sass: bin/node-sass
+  checksum: b15fa76b1564c37d65cde7556731e3c09b49c74a6919cd5cff6f71ddbe454bd1ad9e458f5f02f0f81f43919b8755b5f56cf657fa4e32a0a2644a48fbc07147bb
   languageName: node
   linkType: hard
 
@@ -12628,7 +12991,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"normalize-package-data@npm:^2.3.2, normalize-package-data@npm:^2.3.4":
+"normalize-package-data@npm:^2.3.2, normalize-package-data@npm:^2.3.4, normalize-package-data@npm:^2.5.0":
   version: 2.5.0
   resolution: "normalize-package-data@npm:2.5.0"
   dependencies:
@@ -12640,6 +13003,18 @@ __metadata:
   languageName: node
   linkType: hard
 
+"normalize-package-data@npm:^3.0.0":
+  version: 3.0.3
+  resolution: "normalize-package-data@npm:3.0.3"
+  dependencies:
+    hosted-git-info: ^4.0.1
+    is-core-module: ^2.5.0
+    semver: ^7.3.4
+    validate-npm-package-license: ^3.0.1
+  checksum: bbcee00339e7c26fdbc760f9b66d429258e2ceca41a5df41f5df06cc7652de8d82e8679ff188ca095cad8eff2b6118d7d866af2b68400f74602fbcbce39c160a
+  languageName: node
+  linkType: hard
+
 "normalize-path@npm:^2.1.1":
   version: 2.1.1
   resolution: "normalize-path@npm:2.1.1"
@@ -12707,15 +13082,15 @@ __metadata:
   languageName: node
   linkType: hard
 
-"npmlog@npm:0 || 1 || 2 || 3 || 4, npmlog@npm:^4.0.0":
-  version: 4.1.2
-  resolution: "npmlog@npm:4.1.2"
+"npmlog@npm:^5.0.0":
+  version: 5.0.1
+  resolution: "npmlog@npm:5.0.1"
   dependencies:
-    are-we-there-yet: ~1.1.2
-    console-control-strings: ~1.1.0
-    gauge: ~2.7.3
-    set-blocking: ~2.0.0
-  checksum: edbda9f95ec20957a892de1839afc6fb735054c3accf6fbefe767bac9a639fd5cea2baeac6bd2bcd50a85cb54924d57d9886c81c7fbc2332c2ddd19227504192
+    are-we-there-yet: ^2.0.0
+    console-control-strings: ^1.1.0
+    gauge: ^3.0.0
+    set-blocking: ^2.0.0
+  checksum: 516b2663028761f062d13e8beb3f00069c5664925871a9b57989642ebe09f23ab02145bf3ab88da7866c4e112cafff72401f61a672c7c8a20edc585a7016ef5f
   languageName: node
   linkType: hard
 
@@ -12740,12 +13115,12 @@ __metadata:
   languageName: node
   linkType: hard
 
-"nth-check@npm:^2.0.0":
-  version: 2.0.0
-  resolution: "nth-check@npm:2.0.0"
+"nth-check@npm:^2.0.1":
+  version: 2.1.1
+  resolution: "nth-check@npm:2.1.1"
   dependencies:
     boolbase: ^1.0.0
-  checksum: a22eb19616719d46a5b517f76c32e67e4a2b6a229d67ba2f3efb296e24d79687d52b904c2298cd16510215d5d2a419f8ba671f5957a3b4b73905f62ba7aafa3b
+  checksum: 5afc3dafcd1573b08877ca8e6148c52abd565f1d06b1eb08caf982e3fa289a82f2cae697ffb55b5021e146d60443f1590a5d6b944844e944714a5b549675bcd3
   languageName: node
   linkType: hard
 
@@ -13003,15 +13378,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"original@npm:^1.0.0":
-  version: 1.0.2
-  resolution: "original@npm:1.0.2"
-  dependencies:
-    url-parse: ^1.4.3
-  checksum: 8dca9311dab50c8953366127cb86b7c07bf547d6aa6dc6873a75964b7563825351440557e5724d9c652c5e99043b8295624f106af077f84bccf19592e421beb9
-  languageName: node
-  linkType: hard
-
 "os-browserify@npm:^0.3.0":
   version: 0.3.0
   resolution: "os-browserify@npm:0.3.0"
@@ -13035,23 +13401,13 @@ __metadata:
   languageName: node
   linkType: hard
 
-"os-tmpdir@npm:^1.0.0, os-tmpdir@npm:^1.0.1, os-tmpdir@npm:~1.0.2":
+"os-tmpdir@npm:^1.0.1, os-tmpdir@npm:~1.0.2":
   version: 1.0.2
   resolution: "os-tmpdir@npm:1.0.2"
   checksum: 5666560f7b9f10182548bf7013883265be33620b1c1b4a4d405c25be2636f970c5488ff3e6c48de75b55d02bde037249fe5dbfbb4c0fb7714953d56aed062e6d
   languageName: node
   linkType: hard
 
-"osenv@npm:0":
-  version: 0.1.5
-  resolution: "osenv@npm:0.1.5"
-  dependencies:
-    os-homedir: ^1.0.0
-    os-tmpdir: ^1.0.0
-  checksum: 779d261920f2a13e5e18cf02446484f12747d3f2ff82280912f52b213162d43d312647a40c332373cbccd5e3fb8126915d3bfea8dde4827f70f82da76e52d359
-  languageName: node
-  linkType: hard
-
 "ospath@npm:^1.2.2":
   version: 1.2.2
   resolution: "ospath@npm:1.2.2"
@@ -13482,13 +13838,34 @@ __metadata:
   languageName: node
   linkType: hard
 
-"picomatch@npm:^2.0.4, picomatch@npm:^2.2.1, picomatch@npm:^2.2.3":
+"picocolors@npm:^0.2.1":
+  version: 0.2.1
+  resolution: "picocolors@npm:0.2.1"
+  checksum: 3b0f441f0062def0c0f39e87b898ae7461c3a16ffc9f974f320b44c799418cabff17780ee647fda42b856a1dc45897e2c62047e1b546d94d6d5c6962f45427b2
+  languageName: node
+  linkType: hard
+
+"picocolors@npm:^1.0.0":
+  version: 1.0.0
+  resolution: "picocolors@npm:1.0.0"
+  checksum: a2e8092dd86c8396bdba9f2b5481032848525b3dc295ce9b57896f931e63fc16f79805144321f72976383fc249584672a75cc18d6777c6b757603f372f745981
+  languageName: node
+  linkType: hard
+
+"picomatch@npm:^2.0.4, picomatch@npm:^2.2.1":
   version: 2.3.0
   resolution: "picomatch@npm:2.3.0"
   checksum: 16818720ea7c5872b6af110760dee856c8e4cd79aed1c7a006d076b1cc09eff3ae41ca5019966694c33fbd2e1cc6ea617ab10e4adac6df06556168f13be3fca2
   languageName: node
   linkType: hard
 
+"picomatch@npm:^2.3.1":
+  version: 2.3.1
+  resolution: "picomatch@npm:2.3.1"
+  checksum: 050c865ce81119c4822c45d3c84f1ced46f93a0126febae20737bd05ca20589c564d6e9226977df859ed5e03dc73f02584a2b0faad36e896936238238b0446cf
+  languageName: node
+  linkType: hard
+
 "pify@npm:^2.0.0, pify@npm:^2.2.0":
   version: 2.3.0
   resolution: "pify@npm:2.3.0"
@@ -14421,13 +14798,12 @@ __metadata:
   linkType: hard
 
 "postcss@npm:^7, postcss@npm:^7.0.0, postcss@npm:^7.0.1, postcss@npm:^7.0.14, postcss@npm:^7.0.17, postcss@npm:^7.0.2, postcss@npm:^7.0.23, postcss@npm:^7.0.27, postcss@npm:^7.0.32, postcss@npm:^7.0.5, postcss@npm:^7.0.6":
-  version: 7.0.36
-  resolution: "postcss@npm:7.0.36"
+  version: 7.0.39
+  resolution: "postcss@npm:7.0.39"
   dependencies:
-    chalk: ^2.4.2
+    picocolors: ^0.2.1
     source-map: ^0.6.1
-    supports-color: ^6.1.0
-  checksum: 4cfc0989b9ad5d0e8971af80d87f9c5beac5c84cb89ff22ad69852edf73c0a2fa348e7e0a135b5897bf893edad0fe86c428769050431ad9b532f072ff530828d
+  checksum: 4ac793f506c23259189064bdc921260d869a115a82b5e713973c5af8e94fbb5721a5cc3e1e26840500d7e1f1fa42a209747c5b1a151918a9bc11f0d7ed9048e3
   languageName: node
   linkType: hard
 
@@ -14493,13 +14869,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"process-nextick-args@npm:~1.0.6":
-  version: 1.0.7
-  resolution: "process-nextick-args@npm:1.0.7"
-  checksum: 41224fbc803ac6c96907461d4dfc20942efa3ca75f2d521bcf7cf0e89f8dec127fb3fb5d76746b8fb468a232ea02d84824fae08e027aec185fd29049c66d49f8
-  languageName: node
-  linkType: hard
-
 "process-nextick-args@npm:~2.0.0":
   version: 2.0.1
   resolution: "process-nextick-args@npm:2.0.1"
@@ -14616,13 +14985,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"pseudomap@npm:^1.0.2":
-  version: 1.0.2
-  resolution: "pseudomap@npm:1.0.2"
-  checksum: 856c0aae0ff2ad60881168334448e898ad7a0e45fe7386d114b150084254c01e200c957cf378378025df4e052c7890c5bd933939b0e0d2ecfcc1dc2f0b2991f5
-  languageName: node
-  linkType: hard
-
 "psl@npm:^1.1.28":
   version: 1.8.0
   resolution: "psl@npm:1.8.0"
@@ -14711,9 +15073,9 @@ __metadata:
   linkType: hard
 
 "qs@npm:~6.5.2":
-  version: 6.5.2
-  resolution: "qs@npm:6.5.2"
-  checksum: 24af7b9928ba2141233fba2912876ff100403dba1b08b20c3b490da9ea6c636760445ea2211a079e7dfa882a5cf8f738337b3748c8bdd0f93358fa8881d2db8f
+  version: 6.5.3
+  resolution: "qs@npm:6.5.3"
+  checksum: 6f20bf08cabd90c458e50855559539a28d00b2f2e7dddcb66082b16a43188418cb3cb77cbd09268bcef6022935650f0534357b8af9eeb29bf0f27ccb17655692
   languageName: node
   linkType: hard
 
@@ -14766,6 +15128,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"quick-lru@npm:^4.0.1":
+  version: 4.0.1
+  resolution: "quick-lru@npm:4.0.1"
+  checksum: bea46e1abfaa07023e047d3cf1716a06172c4947886c053ede5c50321893711577cb6119360f810cc3ffcd70c4d7db4069c3cee876b358ceff8596e062bd1154
+  languageName: node
+  linkType: hard
+
 "raf@npm:^3.4.1":
   version: 3.4.1
   resolution: "raf@npm:3.4.1"
@@ -15077,9 +15446,9 @@ __metadata:
   languageName: node
   linkType: hard
 
-"react-rte@npm:0.16.3":
-  version: 0.16.3
-  resolution: "react-rte@npm:0.16.3"
+"react-rte@npm:^0.16.5":
+  version: 0.16.5
+  resolution: "react-rte@npm:0.16.5"
   dependencies:
     babel-runtime: ^6.23.0
     class-autobind: ^0.1.4
@@ -15092,9 +15461,9 @@ __metadata:
     draft-js-utils: ">=0.2.0"
     immutable: ^3.8.1
   peerDependencies:
-    react: 0.14.x || 15.x.x || 16.x.x
-    react-dom: 0.14.x || 15.x.x || 16.x.x
-  checksum: 812ed35161bea266cbdf42da0173398834eba0166328a01ae521c86b29b573ed25107985d3a077344ecd30536804376c0d94cb7d534abecdbc1dbf4d7af8bdc4
+    react: 0.14.x || 15.x.x || 16.x.x || 17.x.x
+    react-dom: 0.14.x || 15.x.x || 16.x.x || 17.x.x
+  checksum: 3af94acd7790989c44babc7b1327a0a047a1a7fd03f13d5c1ef2d276e949d7346a8b1b875b8457c2624e5c0cdcb6e3980f967280c52ff2f92d8234debec01c03
   languageName: node
   linkType: hard
 
@@ -15299,6 +15668,17 @@ __metadata:
   languageName: node
   linkType: hard
 
+"read-pkg-up@npm:^7.0.1":
+  version: 7.0.1
+  resolution: "read-pkg-up@npm:7.0.1"
+  dependencies:
+    find-up: ^4.1.0
+    read-pkg: ^5.2.0
+    type-fest: ^0.8.1
+  checksum: e4e93ce70e5905b490ca8f883eb9e48b5d3cebc6cd4527c25a0d8f3ae2903bd4121c5ab9c5a3e217ada0141098eeb661313c86fa008524b089b8ed0b7f165e44
+  languageName: node
+  linkType: hard
+
 "read-pkg@npm:^1.0.0":
   version: 1.1.0
   resolution: "read-pkg@npm:1.1.0"
@@ -15332,7 +15712,19 @@ __metadata:
   languageName: node
   linkType: hard
 
-"readable-stream@npm:1 || 2, readable-stream@npm:^2.0.0, readable-stream@npm:^2.0.1, readable-stream@npm:^2.0.2, readable-stream@npm:^2.0.6, readable-stream@npm:^2.1.5, readable-stream@npm:^2.2.2, readable-stream@npm:^2.3.3, readable-stream@npm:^2.3.6, readable-stream@npm:~2.3.6":
+"read-pkg@npm:^5.2.0":
+  version: 5.2.0
+  resolution: "read-pkg@npm:5.2.0"
+  dependencies:
+    "@types/normalize-package-data": ^2.4.0
+    normalize-package-data: ^2.5.0
+    parse-json: ^5.0.0
+    type-fest: ^0.6.0
+  checksum: eb696e60528b29aebe10e499ba93f44991908c57d70f2d26f369e46b8b9afc208ef11b4ba64f67630f31df8b6872129e0a8933c8c53b7b4daf0eace536901222
+  languageName: node
+  linkType: hard
+
+"readable-stream@npm:1 || 2, readable-stream@npm:^2.0.0, readable-stream@npm:^2.0.1, readable-stream@npm:^2.0.2, readable-stream@npm:^2.1.5, readable-stream@npm:^2.2.2, readable-stream@npm:^2.3.3, readable-stream@npm:^2.3.6, readable-stream@npm:~2.3.6":
   version: 2.3.7
   resolution: "readable-stream@npm:2.3.7"
   dependencies:
@@ -15358,20 +15750,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"readable-stream@npm:~2.0.6":
-  version: 2.0.6
-  resolution: "readable-stream@npm:2.0.6"
-  dependencies:
-    core-util-is: ~1.0.0
-    inherits: ~2.0.1
-    isarray: ~1.0.0
-    process-nextick-args: ~1.0.6
-    string_decoder: ~0.10.x
-    util-deprecate: ~1.0.1
-  checksum: 5258b248531e58cbd855dab6a67dde3f4939f78a6d7707042ce61a74fe3421a7596405bc9c8970484dc9b2d929136e6cc40985f76759b9264a0a273f6136ed3b
-  languageName: node
-  linkType: hard
-
 "readdirp@npm:^2.2.1":
   version: 2.2.1
   resolution: "readdirp@npm:2.2.1"
@@ -15468,6 +15846,16 @@ __metadata:
   languageName: node
   linkType: hard
 
+"redent@npm:^3.0.0":
+  version: 3.0.0
+  resolution: "redent@npm:3.0.0"
+  dependencies:
+    indent-string: ^4.0.0
+    strip-indent: ^3.0.0
+  checksum: fa1ef20404a2d399235e83cc80bd55a956642e37dd197b4b612ba7327bf87fa32745aeb4a1634b2bab25467164ab4ed9c15be2c307923dd08b0fe7c52431ae6b
+  languageName: node
+  linkType: hard
+
 "redux-devtools-extension@npm:^2.13.9":
   version: 2.13.9
   resolution: "redux-devtools-extension@npm:2.13.9"
@@ -16182,31 +16570,31 @@ __metadata:
   languageName: node
   linkType: hard
 
-"sass-graph@npm:2.2.5":
-  version: 2.2.5
-  resolution: "sass-graph@npm:2.2.5"
+"sass-graph@npm:^2.2.4":
+  version: 2.2.6
+  resolution: "sass-graph@npm:2.2.6"
   dependencies:
     glob: ^7.0.0
     lodash: ^4.0.0
     scss-tokenizer: ^0.2.3
-    yargs: ^13.3.2
+    yargs: ^7.0.0
   bin:
     sassgraph: bin/sassgraph
-  checksum: 283b6e5a38c8b4fca77cdc4fc1da9641679120dba80e89361c82b6a3975f90d01cc78129f9f8fd148822e5a648f540c58c9a38b8c2b11ca97abc4f381613c013
+  checksum: 1fb1719c659fdea00a9f55be9722c5902c3d1f1a0919d2e5ceb8a318064f2b214981d98b7d7fecaafc25f522302f919a948351e4ae1d1680b9c045d563550a93
   languageName: node
   linkType: hard
 
-"sass-graph@npm:^2.2.4":
-  version: 2.2.6
-  resolution: "sass-graph@npm:2.2.6"
+"sass-graph@npm:^4.0.1":
+  version: 4.0.1
+  resolution: "sass-graph@npm:4.0.1"
   dependencies:
     glob: ^7.0.0
-    lodash: ^4.0.0
-    scss-tokenizer: ^0.2.3
-    yargs: ^7.0.0
+    lodash: ^4.17.11
+    scss-tokenizer: ^0.4.3
+    yargs: ^17.2.1
   bin:
     sassgraph: bin/sassgraph
-  checksum: 1fb1719c659fdea00a9f55be9722c5902c3d1f1a0919d2e5ceb8a318064f2b214981d98b7d7fecaafc25f522302f919a948351e4ae1d1680b9c045d563550a93
+  checksum: 896f99253bd77a429a95e483ebddee946e195b61d3f84b3e1ccf8ad843265ec0585fa40bf55fbf354c5f57eb9fd0349834a8b190cd2161ab1234cb9af10e3601
   languageName: node
   linkType: hard
 
@@ -16303,6 +16691,16 @@ __metadata:
   languageName: node
   linkType: hard
 
+"scss-tokenizer@npm:^0.4.3":
+  version: 0.4.3
+  resolution: "scss-tokenizer@npm:0.4.3"
+  dependencies:
+    js-base64: ^2.4.9
+    source-map: ^0.7.3
+  checksum: f3697bb155ae23d88c7cd0275988a73231fe675fbbd250b4e56849ba66319fc249a597f3799a92f9890b12007f00f8f6a7f441283e634679e2acdb2287a341d1
+  languageName: node
+  linkType: hard
+
 "select-hose@npm:^2.0.0":
   version: 2.0.0
   resolution: "select-hose@npm:2.0.0"
@@ -16320,15 +16718,15 @@ __metadata:
   linkType: hard
 
 "semver@npm:2 || 3 || 4 || 5, semver@npm:^5.3.0, semver@npm:^5.4.1, semver@npm:^5.5.0, semver@npm:^5.5.1, semver@npm:^5.6.0, semver@npm:^5.7.0, semver@npm:^5.7.1":
-  version: 5.7.1
-  resolution: "semver@npm:5.7.1"
+  version: 5.7.2
+  resolution: "semver@npm:5.7.2"
   bin:
-    semver: ./bin/semver
-  checksum: 57fd0acfd0bac382ee87cd52cd0aaa5af086a7dc8d60379dfe65fea491fb2489b6016400813930ecd61fd0952dae75c115287a1b16c234b1550887117744dfaf
+    semver: bin/semver
+  checksum: fb4ab5e0dd1c22ce0c937ea390b4a822147a9c53dbd2a9a0132f12fe382902beef4fbf12cf51bb955248d8d15874ce8cd89532569756384f994309825f10b686
   languageName: node
   linkType: hard
 
-"semver@npm:6.3.0, semver@npm:^6.0.0, semver@npm:^6.1.1, semver@npm:^6.1.2, semver@npm:^6.2.0, semver@npm:^6.3.0":
+"semver@npm:6.3.0":
   version: 6.3.0
   resolution: "semver@npm:6.3.0"
   bin:
@@ -16346,23 +16744,23 @@ __metadata:
   languageName: node
   linkType: hard
 
-"semver@npm:^7.3.5":
-  version: 7.3.5
-  resolution: "semver@npm:7.3.5"
-  dependencies:
-    lru-cache: ^6.0.0
+"semver@npm:^6.0.0, semver@npm:^6.1.1, semver@npm:^6.1.2, semver@npm:^6.2.0, semver@npm:^6.3.0":
+  version: 6.3.1
+  resolution: "semver@npm:6.3.1"
   bin:
     semver: bin/semver.js
-  checksum: 5eafe6102bea2a7439897c1856362e31cc348ccf96efd455c8b5bc2c61e6f7e7b8250dc26b8828c1d76a56f818a7ee907a36ae9fb37a599d3d24609207001d60
+  checksum: ae47d06de28836adb9d3e25f22a92943477371292d9b665fb023fae278d345d508ca1958232af086d85e0155aee22e313e100971898bbb8d5d89b8b1d4054ca2
   languageName: node
   linkType: hard
 
-"semver@npm:~5.3.0":
-  version: 5.3.0
-  resolution: "semver@npm:5.3.0"
+"semver@npm:^7.3.4, semver@npm:^7.3.5":
+  version: 7.5.4
+  resolution: "semver@npm:7.5.4"
+  dependencies:
+    lru-cache: ^6.0.0
   bin:
-    semver: ./bin/semver
-  checksum: 2717b14299c76a4b35aec0aafebca22a3644da2942d2a4095f26e36d77a9bbe17a9a3a5199795f83edd26323d5c22024a2d9d373a038dec4e023156fa166d314
+    semver: bin/semver.js
+  checksum: 12d8ad952fa353b0995bf180cdac205a4068b759a140e5d3c608317098b3575ac2f1e09182206bf2eb26120e1c0ed8fb92c48c592f6099680de56bb071423ca3
   languageName: node
   linkType: hard
 
@@ -16423,7 +16821,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"set-blocking@npm:^2.0.0, set-blocking@npm:~2.0.0":
+"set-blocking@npm:^2.0.0":
   version: 2.0.0
   resolution: "set-blocking@npm:2.0.0"
   checksum: 6e65a05f7cf7ebdf8b7c75b101e18c0b7e3dff4940d480efed8aad3a36a4005140b660fa1d804cb8bce911cac290441dc728084a30504d3516ac2ff7ad607b02
@@ -16719,6 +17117,17 @@ __metadata:
   languageName: node
   linkType: hard
 
+"socks-proxy-agent@npm:^6.0.0":
+  version: 6.2.1
+  resolution: "socks-proxy-agent@npm:6.2.1"
+  dependencies:
+    agent-base: ^6.0.2
+    debug: ^4.3.3
+    socks: ^2.6.2
+  checksum: 9ca089d489e5ee84af06741135c4b0d2022977dad27ac8d649478a114cdce87849e8d82b7c22b51501a4116e231241592946fc7fae0afc93b65030ee57084f58
+  languageName: node
+  linkType: hard
+
 "socks-proxy-agent@npm:^6.1.1":
   version: 6.1.1
   resolution: "socks-proxy-agent@npm:6.1.1"
@@ -16730,6 +17139,17 @@ __metadata:
   languageName: node
   linkType: hard
 
+"socks-proxy-agent@npm:^7.0.0":
+  version: 7.0.0
+  resolution: "socks-proxy-agent@npm:7.0.0"
+  dependencies:
+    agent-base: ^6.0.2
+    debug: ^4.3.3
+    socks: ^2.6.2
+  checksum: 720554370154cbc979e2e9ce6a6ec6ced205d02757d8f5d93fe95adae454fc187a5cbfc6b022afab850a5ce9b4c7d73e0f98e381879cf45f66317a4895953846
+  languageName: node
+  linkType: hard
+
 "socks@npm:^2.6.1":
   version: 2.6.2
   resolution: "socks@npm:2.6.2"
@@ -16740,6 +17160,16 @@ __metadata:
   languageName: node
   linkType: hard
 
+"socks@npm:^2.6.2":
+  version: 2.7.1
+  resolution: "socks@npm:2.7.1"
+  dependencies:
+    ip: ^2.0.0
+    smart-buffer: ^4.2.0
+  checksum: 259d9e3e8e1c9809a7f5c32238c3d4d2a36b39b83851d0f573bfde5f21c4b1288417ce1af06af1452569cd1eb0841169afd4998f0e04ba04656f6b7f0e46d748
+  languageName: node
+  linkType: hard
+
 "sort-keys@npm:^1.0.0":
   version: 1.1.2
   resolution: "sort-keys@npm:1.1.2"
@@ -16818,6 +17248,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"source-map@npm:^0.7.3":
+  version: 0.7.4
+  resolution: "source-map@npm:0.7.4"
+  checksum: 01cc5a74b1f0e1d626a58d36ad6898ea820567e87f18dfc9d24a9843a351aaa2ec09b87422589906d6ff1deed29693e176194dc88bcae7c9a852dc74b311dbf5
+  languageName: node
+  linkType: hard
+
 "spdx-correct@npm:^3.0.0":
   version: 3.1.1
   resolution: "spdx-correct@npm:3.1.1"
@@ -16942,7 +17379,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"ssri@npm:^8.0.1":
+"ssri@npm:^8.0.0, ssri@npm:^8.0.1":
   version: 8.0.1
   resolution: "ssri@npm:8.0.1"
   dependencies:
@@ -16951,6 +17388,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"ssri@npm:^9.0.0":
+  version: 9.0.1
+  resolution: "ssri@npm:9.0.1"
+  dependencies:
+    minipass: ^3.1.1
+  checksum: fb58f5e46b6923ae67b87ad5ef1c5ab6d427a17db0bead84570c2df3cd50b4ceb880ebdba2d60726588272890bae842a744e1ecce5bd2a2a582fccd5068309eb
+  languageName: node
+  linkType: hard
+
 "stable@npm:^0.1.8":
   version: 0.1.8
   resolution: "stable@npm:0.1.8"
@@ -17096,7 +17542,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"string-width@npm:^1.0.2 || 2, string-width@npm:^2.1.1":
+"string-width@npm:^2.1.1":
   version: 2.1.1
   resolution: "string-width@npm:2.1.1"
   dependencies:
@@ -17184,13 +17630,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"string_decoder@npm:~0.10.x":
-  version: 0.10.31
-  resolution: "string_decoder@npm:0.10.31"
-  checksum: fe00f8e303647e5db919948ccb5ce0da7dea209ab54702894dd0c664edd98e5d4df4b80d6fabf7b9e92b237359d21136c95bf068b2f7760b772ca974ba970202
-  languageName: node
-  linkType: hard
-
 "string_decoder@npm:~1.1.1":
   version: 1.1.1
   resolution: "string_decoder@npm:1.1.1"
@@ -17211,7 +17650,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"strip-ansi@npm:6.0.0, strip-ansi@npm:^6.0.0":
+"strip-ansi@npm:6.0.0":
   version: 6.0.0
   resolution: "strip-ansi@npm:6.0.0"
   dependencies:
@@ -17247,7 +17686,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"strip-ansi@npm:^6.0.1":
+"strip-ansi@npm:^6.0.0, strip-ansi@npm:^6.0.1":
   version: 6.0.1
   resolution: "strip-ansi@npm:6.0.1"
   dependencies:
@@ -17307,6 +17746,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"strip-indent@npm:^3.0.0":
+  version: 3.0.0
+  resolution: "strip-indent@npm:3.0.0"
+  dependencies:
+    min-indent: ^1.0.0
+  checksum: 18f045d57d9d0d90cd16f72b2313d6364fd2cb4bf85b9f593523ad431c8720011a4d5f08b6591c9d580f446e78855c5334a30fb91aa1560f5d9f95ed1b4a0530
+  languageName: node
+  linkType: hard
+
 "strip-json-comments@npm:^3.0.1":
   version: 3.1.1
   resolution: "strip-json-comments@npm:3.1.1"
@@ -17439,28 +17887,17 @@ __metadata:
   languageName: node
   linkType: hard
 
-"tar@npm:^2.0.0":
-  version: 2.2.2
-  resolution: "tar@npm:2.2.2"
-  dependencies:
-    block-stream: "*"
-    fstream: ^1.0.12
-    inherits: 2
-  checksum: c0c3727d529077423cf771f9f9c06edaaff82034d05d685806d3cee69d334ee8e6f394ee8d02dbd294cdecb95bb22625703279caff24bdb90b17e59de03a4733
-  languageName: node
-  linkType: hard
-
-"tar@npm:^6.0.2, tar@npm:^6.1.2":
-  version: 6.1.11
-  resolution: "tar@npm:6.1.11"
+"tar@npm:^6.0.2, tar@npm:^6.1.11, tar@npm:^6.1.2":
+  version: 6.2.0
+  resolution: "tar@npm:6.2.0"
   dependencies:
     chownr: ^2.0.0
     fs-minipass: ^2.0.0
-    minipass: ^3.0.0
+    minipass: ^5.0.0
     minizlib: ^2.1.1
     mkdirp: ^1.0.3
     yallist: ^4.0.0
-  checksum: a04c07bb9e2d8f46776517d4618f2406fb977a74d914ad98b264fc3db0fe8224da5bec11e5f8902c5b9bcb8ace22d95fbe3c7b36b8593b7dfc8391a25898f32f
+  checksum: db4d9fe74a2082c3a5016630092c54c8375ff3b280186938cfd104f2e089c4fd9bad58688ef6be9cf186a889671bf355c7cda38f09bbf60604b281715ca57f5c
   languageName: node
   linkType: hard
 
@@ -17503,15 +17940,15 @@ __metadata:
   linkType: hard
 
 "terser@npm:^4.1.2, terser@npm:^4.6.12, terser@npm:^4.6.3":
-  version: 4.8.0
-  resolution: "terser@npm:4.8.0"
+  version: 4.8.1
+  resolution: "terser@npm:4.8.1"
   dependencies:
     commander: ^2.20.0
     source-map: ~0.6.1
     source-map-support: ~0.5.12
   bin:
     terser: bin/terser
-  checksum: f980789097d4f856c1ef4b9a7ada37beb0bb022fb8aa3057968862b5864ad7c244253b3e269c9eb0ab7d0caf97b9521273f2d1cf1e0e942ff0016e0583859c71
+  checksum: b342819bf7e82283059aaa3f22bb74deb1862d07573ba5a8947882190ad525fd9b44a15074986be083fd379c58b9a879457a330b66dcdb77b485c44267f9a55a
   languageName: node
   linkType: hard
 
@@ -17630,9 +18067,9 @@ __metadata:
   linkType: hard
 
 "tmpl@npm:1.0.x":
-  version: 1.0.4
-  resolution: "tmpl@npm:1.0.4"
-  checksum: 72c93335044b5b8771207d2e9cf71e8c26b110d0f0f924f6d6c06b509d89552c7c0e4086a574ce4f05110ac40c1faf6277ecba7221afeb57ebbab70d8de39cc4
+  version: 1.0.5
+  resolution: "tmpl@npm:1.0.5"
+  checksum: cd922d9b853c00fe414c5a774817be65b058d54a2d01ebb415840960406c669a0fc632f66df885e24cb022ec812739199ccbdb8d1164c3e513f85bfca5ab2873
   languageName: node
   linkType: hard
 
@@ -17730,7 +18167,14 @@ __metadata:
   languageName: node
   linkType: hard
 
-"trim-newlines@npm:^1.0.0":
+"tr46@npm:~0.0.3":
+  version: 0.0.3
+  resolution: "tr46@npm:0.0.3"
+  checksum: 726321c5eaf41b5002e17ffbd1fb7245999a073e8979085dacd47c4b4e8068ff5777142fc6726d6ca1fd2ff16921b48788b87225cbc57c72636f6efa8efbffe3
+  languageName: node
+  linkType: hard
+
+"trim-newlines@npm:^1.0.0, trim-newlines@npm:^3.0.0":
   version: 3.0.1
   resolution: "trim-newlines@npm:3.0.1"
   checksum: b530f3fadf78e570cf3c761fb74fef655beff6b0f84b29209bac6c9622db75ad1417f4a7b5d54c96605dcd72734ad44526fef9f396807b90839449eb543c6206
@@ -17753,6 +18197,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"true-case-path@npm:^2.2.1":
+  version: 2.2.1
+  resolution: "true-case-path@npm:2.2.1"
+  checksum: fd5f1c2a87a122a65ffb1f84b580366be08dac7f552ea0fa4b5a6ab0a013af950b0e752beddb1c6c1652e6d6a2b293b7b3fd86a5a1706242ad365b68f1b5c6f1
+  languageName: node
+  linkType: hard
+
 "ts-mock-imports@npm:1.3.7":
   version: 1.3.7
   resolution: "ts-mock-imports@npm:1.3.7"
@@ -17913,6 +18364,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"type-fest@npm:^0.18.0":
+  version: 0.18.1
+  resolution: "type-fest@npm:0.18.1"
+  checksum: e96dcee18abe50ec82dab6cbc4751b3a82046da54c52e3b2d035b3c519732c0b3dd7a2fa9df24efd1a38d953d8d4813c50985f215f1957ee5e4f26b0fe0da395
+  languageName: node
+  linkType: hard
+
 "type-fest@npm:^0.21.3":
   version: 0.21.3
   resolution: "type-fest@npm:0.21.3"
@@ -17920,6 +18378,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"type-fest@npm:^0.6.0":
+  version: 0.6.0
+  resolution: "type-fest@npm:0.6.0"
+  checksum: b2188e6e4b21557f6e92960ec496d28a51d68658018cba8b597bd3ef757721d1db309f120ae987abeeda874511d14b776157ff809f23c6d1ce8f83b9b2b7d60f
+  languageName: node
+  linkType: hard
+
 "type-fest@npm:^0.8.1":
   version: 0.8.1
   resolution: "type-fest@npm:0.8.1"
@@ -17978,10 +18443,10 @@ __metadata:
   languageName: node
   linkType: hard
 
-"ua-parser-js@npm:^0.7.18":
-  version: 0.7.24
-  resolution: "ua-parser-js@npm:0.7.24"
-  checksum: 722e0291fe6ad0d439cd29c4cd919f4e1b7262fe78e4c2149756180f8ad723ae04713839115eeb8738aca6d6258a743668090fb1e1417bc1fba27acc815a84e2
+"ua-parser-js@npm:^0.7.18, ua-parser-js@npm:^0.7.30":
+  version: 0.7.36
+  resolution: "ua-parser-js@npm:0.7.36"
+  checksum: 04e18e7f6bf4964a10d74131ea9784c7f01d0c2d3b96f73340ac0a1f8e83d010b99fd7d425e7a2100fa40c58b72f6201408cbf4baa2df1103637f96fb59f2a30
   languageName: node
   linkType: hard
 
@@ -18070,6 +18535,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"unique-filename@npm:^2.0.0":
+  version: 2.0.1
+  resolution: "unique-filename@npm:2.0.1"
+  dependencies:
+    unique-slug: ^3.0.0
+  checksum: 807acf3381aff319086b64dc7125a9a37c09c44af7620bd4f7f3247fcd5565660ac12d8b80534dcbfd067e6fe88a67e621386dd796a8af828d1337a8420a255f
+  languageName: node
+  linkType: hard
+
 "unique-slug@npm:^2.0.0":
   version: 2.0.2
   resolution: "unique-slug@npm:2.0.2"
@@ -18079,6 +18553,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"unique-slug@npm:^3.0.0":
+  version: 3.0.0
+  resolution: "unique-slug@npm:3.0.0"
+  dependencies:
+    imurmurhash: ^0.1.4
+  checksum: 49f8d915ba7f0101801b922062ee46b7953256c93ceca74303bd8e6413ae10aa7e8216556b54dc5382895e8221d04f1efaf75f945c2e4a515b4139f77aa6640c
+  languageName: node
+  linkType: hard
+
 "universalify@npm:^0.1.0":
   version: 0.1.2
   resolution: "universalify@npm:0.1.2"
@@ -18131,6 +18614,20 @@ __metadata:
   languageName: node
   linkType: hard
 
+"update-browserslist-db@npm:^1.0.13":
+  version: 1.0.13
+  resolution: "update-browserslist-db@npm:1.0.13"
+  dependencies:
+    escalade: ^3.1.1
+    picocolors: ^1.0.0
+  peerDependencies:
+    browserslist: ">= 4.21.0"
+  bin:
+    update-browserslist-db: cli.js
+  checksum: 1e47d80182ab6e4ad35396ad8b61008ae2a1330221175d0abd37689658bdb61af9b705bfc41057fd16682474d79944fb2d86767c5ed5ae34b6276b9bed353322
+  languageName: node
+  linkType: hard
+
 "uri-js@npm:^4.2.2":
   version: 4.4.1
   resolution: "uri-js@npm:4.4.1"
@@ -18165,12 +18662,12 @@ __metadata:
   linkType: hard
 
 "url-parse@npm:^1.4.3":
-  version: 1.5.1
-  resolution: "url-parse@npm:1.5.1"
+  version: 1.5.10
+  resolution: "url-parse@npm:1.5.10"
   dependencies:
     querystringify: ^2.1.1
     requires-port: ^1.0.0
-  checksum: ce5c400db52d83b941944502000081e2338e46834cf16f2888961dc034ea5d49dbeb85ac8fdbe28c3fe738c09320a71a2f6d9286b748895cd464b1e208b6b991
+  checksum: fbdba6b1d83336aca2216bbdc38ba658d9cfb8fc7f665eb8b17852de638ff7d1a162c198a8e4ed66001ddbf6c9888d41e4798912c62b4fd777a31657989f7bdf
   languageName: node
   linkType: hard
 
@@ -18437,6 +18934,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"webidl-conversions@npm:^3.0.0":
+  version: 3.0.1
+  resolution: "webidl-conversions@npm:3.0.1"
+  checksum: c92a0a6ab95314bde9c32e1d0a6dfac83b578f8fa5f21e675bc2706ed6981bc26b7eb7e6a1fab158e5ce4adf9caa4a0aee49a52505d4d13c7be545f15021b17c
+  languageName: node
+  linkType: hard
+
 "webidl-conversions@npm:^4.0.2":
   version: 4.0.2
   resolution: "webidl-conversions@npm:4.0.2"
@@ -18624,6 +19128,16 @@ __metadata:
   languageName: node
   linkType: hard
 
+"whatwg-url@npm:^5.0.0":
+  version: 5.0.0
+  resolution: "whatwg-url@npm:5.0.0"
+  dependencies:
+    tr46: ~0.0.3
+    webidl-conversions: ^3.0.0
+  checksum: b8daed4ad3356cc4899048a15b2c143a9aed0dfae1f611ebd55073310c7b910f522ad75d727346ad64203d7e6c79ef25eafd465f4d12775ca44b90fa82ed9e2c
+  languageName: node
+  linkType: hard
+
 "whatwg-url@npm:^6.4.1":
   version: 6.5.0
   resolution: "whatwg-url@npm:6.5.0"
@@ -18673,7 +19187,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"which@npm:1, which@npm:^1.2.9, which@npm:^1.3.0, which@npm:^1.3.1":
+"which@npm:^1.2.9, which@npm:^1.3.0, which@npm:^1.3.1":
   version: 1.3.1
   resolution: "which@npm:1.3.1"
   dependencies:
@@ -18695,16 +19209,7 @@ __metadata:
   languageName: node
   linkType: hard
 
-"wide-align@npm:^1.1.0":
-  version: 1.1.3
-  resolution: "wide-align@npm:1.1.3"
-  dependencies:
-    string-width: ^1.0.2 || 2
-  checksum: d09c8012652a9e6cab3e82338d1874a4d7db2ad1bd19ab43eb744acf0b9b5632ec406bdbbbb970a8f4771a7d5ef49824d038ba70aa884e7723f5b090ab87134d
-  languageName: node
-  linkType: hard
-
-"wide-align@npm:^1.1.5":
+"wide-align@npm:^1.1.2, wide-align@npm:^1.1.5":
   version: 1.1.5
   resolution: "wide-align@npm:1.1.5"
   dependencies:
@@ -18714,9 +19219,9 @@ __metadata:
   linkType: hard
 
 "word-wrap@npm:~1.2.3":
-  version: 1.2.3
-  resolution: "word-wrap@npm:1.2.3"
-  checksum: 30b48f91fcf12106ed3186ae4fa86a6a1842416df425be7b60485de14bec665a54a68e4b5156647dec3a70f25e84d270ca8bc8cd23182ed095f5c7206a938c1f
+  version: 1.2.5
+  resolution: "word-wrap@npm:1.2.5"
+  checksum: f93ba3586fc181f94afdaff3a6fef27920b4b6d9eaefed0f428f8e07adea2a7f54a5f2830ce59406c8416f033f86902b91eb824072354645eea687dff3691ccb
   languageName: node
   linkType: hard
 
@@ -19045,13 +19550,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"yallist@npm:^2.1.2":
-  version: 2.1.2
-  resolution: "yallist@npm:2.1.2"
-  checksum: 9ba99409209f485b6fcb970330908a6d41fa1c933f75e08250316cce19383179a6b70a7e0721b89672ebb6199cc377bf3e432f55100da6a7d6e11902b0a642cb
-  languageName: node
-  linkType: hard
-
 "yallist@npm:^3.0.2":
   version: 3.1.1
   resolution: "yallist@npm:3.1.1"
@@ -19096,13 +19594,20 @@ __metadata:
   languageName: node
   linkType: hard
 
-"yargs-parser@npm:^20.2.2":
+"yargs-parser@npm:^20.2.2, yargs-parser@npm:^20.2.3":
   version: 20.2.9
   resolution: "yargs-parser@npm:20.2.9"
   checksum: 8bb69015f2b0ff9e17b2c8e6bfe224ab463dd00ca211eece72a4cd8a906224d2703fb8a326d36fdd0e68701e201b2a60ed7cf81ce0fd9b3799f9fe7745977ae3
   languageName: node
   linkType: hard
 
+"yargs-parser@npm:^21.1.1":
+  version: 21.1.1
+  resolution: "yargs-parser@npm:21.1.1"
+  checksum: ed2d96a616a9e3e1cc7d204c62ecc61f7aaab633dcbfab2c6df50f7f87b393993fe6640d017759fe112d0cb1e0119f2b4150a87305cc873fd90831c6a58ccf1c
+  languageName: node
+  linkType: hard
+
 "yargs-parser@npm:^5.0.1":
   version: 5.0.1
   resolution: "yargs-parser@npm:5.0.1"
@@ -19146,6 +19651,21 @@ __metadata:
   languageName: node
   linkType: hard
 
+"yargs@npm:^17.2.1":
+  version: 17.7.2
+  resolution: "yargs@npm:17.7.2"
+  dependencies:
+    cliui: ^8.0.1
+    escalade: ^3.1.1
+    get-caller-file: ^2.0.5
+    require-directory: ^2.1.1
+    string-width: ^4.2.3
+    y18n: ^5.0.5
+    yargs-parser: ^21.1.1
+  checksum: 73b572e863aa4a8cbef323dd911d79d193b772defd5a51aab0aca2d446655216f5002c42c5306033968193bdbf892a7a4c110b0d77954a7fdf563e653967b56a
+  languageName: node
+  linkType: hard
+
 "yargs@npm:^7.0.0":
   version: 7.1.2
   resolution: "yargs@npm:7.1.2"
diff --git a/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_workbench_configuration.sls b/tools/salt-install/config_examples/multi_host/aws/pillars/nginx_workbench_configuration.sls
index 87ed7c4507fa21b5d10f875556ca312e7d9fa537..e610ec158ffc380275ba5619bdc86036061e804d 100644 (file)
@@ -33,6 +33,16 @@ nginx:
         requires:
           __CERT_REQUIRES__
         config:
+          # Maps WB1 '/actions?uuid=X' URLs to their equivalent on WB2
+          - 'map $request_uri $actions_redirect':
+            - '~^/actions\?uuid=(.*-4zz18-.*)': '/collections/$1'
+            - '~^/actions\?uuid=(.*-j7d0g-.*)': '/projects/$1'
+            - '~^/actions\?uuid=(.*-tpzed-.*)': '/projects/$1'
+            - '~^/actions\?uuid=(.*-7fd4e-.*)': '/workflows/$1'
+            - '~^/actions\?uuid=(.*-xvhdp-.*)': '/processes/$1'
+            - '~^/actions\?uuid=(.*)': '/'
+            - default: 0
+
           - server:
             - server_name: workbench.__DOMAIN__
             - listen:
@@ -49,6 +59,10 @@ nginx:
     # rewrite ^/projects.* /projects redirect;
     # rewrite ^/trash /trash redirect;
 
+            # WB1 '/actions?uuid=X' URL Redirects
+            - 'if ($actions_redirect)':
+              - return: '301 $actions_redirect'
+
     # Redirects that include a uuid
             - rewrite: '^/work_units/(.*) /processes/$1 redirect'
             - rewrite: '^/container_requests/(.*) /processes/$1 redirect'
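The map block above derives a Workbench 2 destination from the UUID's type infix (-4zz18- collections, -j7d0g- projects, -tpzed- users, -7fd4e- workflows, -xvhdp- container requests), with a default of 0 so the 'if ($actions_redirect)' guard is falsy when nothing matched. A hypothetical smoke test for the rendered config; the hostname and UUIDs below are placeholders:

    # A collection UUID should 301 to /collections/<uuid>
    curl -sI 'https://workbench.example.com/actions?uuid=zzzzz-4zz18-0123456789abcde' | grep -i '^location:'
    # Any other /actions?uuid=... URL should 301 to the site root
    curl -sI 'https://workbench.example.com/actions?uuid=unrecognized' | grep -i '^location:'
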
diff --git a/tools/salt-install/config_examples/multi_host/aws/states/custom_certs.sls b/tools/salt-install/config_examples/multi_host/aws/states/custom_certs.sls
index 5a7d9a269a5817c0c8be6570703b2d48b6f485d0..132a2d63828520740b82f2236e3e889b1435c88a 100644 (file)
@@ -24,18 +24,40 @@ extra_custom_certs_file_directory_certs_dir:
   {%- for cert in certs %}
     {%- set cert_file = 'arvados-' ~ cert ~ '.pem' %}
     {%- set key_file = 'arvados-' ~ cert ~ '.key' %}
-    {% for c in [cert_file, key_file] %}
-extra_custom_certs_file_copy_{{ c }}:
+extra_custom_certs_{{ cert }}_cert_file_copy:
   file.copy:
-    - name: {{ dest_cert_dir }}/{{ c }}
-    - source: {{ orig_cert_dir }}/{{ c }}
+    - name: {{ dest_cert_dir }}/{{ cert_file }}
+    - source: {{ orig_cert_dir }}/{{ cert_file }}
     - force: true
     - user: root
     - group: root
     - mode: 0640
-    - unless: cmp {{ dest_cert_dir }}/{{ c }} {{ orig_cert_dir }}/{{ c }}
+    - unless: cmp {{ dest_cert_dir }}/{{ cert_file }} {{ orig_cert_dir }}/{{ cert_file }}
     - require:
       - file: extra_custom_certs_file_directory_certs_dir
-    {%- endfor %}
+
+extra_custom_certs_{{ cert }}_key_file_copy:
+  file.copy:
+    - name: {{ dest_cert_dir }}/{{ key_file }}
+    - source: {{ orig_cert_dir }}/{{ key_file }}
+    - force: true
+    - user: root
+    - group: root
+    - mode: 0640
+    - unless: cmp {{ dest_cert_dir }}/{{ key_file }} {{ orig_cert_dir }}/{{ key_file }}
+    - require:
+      - file: extra_custom_certs_file_directory_certs_dir
+
+extra_nginx_service_reload_on_{{ cert }}_certs_changes:
+  cmd.run:
+    - name: systemctl reload nginx
+    - require:
+      - file: extra_custom_certs_{{ cert }}_cert_file_copy
+      - file: extra_custom_certs_{{ cert }}_key_file_copy
+    - onchanges:
+      - file: extra_custom_certs_{{ cert }}_cert_file_copy
+      - file: extra_custom_certs_{{ cert }}_key_file_copy
+    - onlyif:
+      - test "$(openssl rsa -modulus -noout -in {{ dest_cert_dir }}/{{ key_file }})" = "$(openssl x509 -modulus -noout -in {{ dest_cert_dir }}/{{ cert_file }})"
   {%- endfor %}
 {%- endif %}
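Two details in the state above are worth noting: onchanges makes the reload fire only when a copied file actually changed, and the onlyif guard skips the reload when the key and certificate do not belong together. Expanded by hand, the guard is roughly the following; the paths are illustrative and the modulus comparison applies to RSA pairs:

    key=/etc/nginx/ssl/arvados-keepproxy.key
    crt=/etc/nginx/ssl/arvados-keepproxy.pem
    # Both commands print "Modulus=..."; identical output means the key matches the cert
    if [ "$(openssl rsa -modulus -noout -in "$key")" = "$(openssl x509 -modulus -noout -in "$crt")" ]; then
        systemctl reload nginx
    fi
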
diff --git a/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_keepproxy_configuration.sls b/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_keepproxy_configuration.sls
index 89412e42403d14e7e35b6ad1003b1adef437b449..85e711dc6aea2d5c75c017fc1a2d4bd9db2947c7 100644 (file)
@@ -33,7 +33,7 @@ nginx:
         enabled: true
         overwrite: true
         requires:
-          file: extra_custom_certs_file_copy_arvados-keepproxy.pem
+          file: extra_custom_certs_keepproxy_cert_file_copy
         config:
           - server:
             - server_name: keep.__CLUSTER__.__DOMAIN__
diff --git a/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_keepweb_configuration.sls b/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_keepweb_configuration.sls
index 5859d4cfa4d3cf33f6c44471b967c5f505bb7f92..daa1f319299db4491970a5a34852f5afa276ad50 100644 (file)
@@ -39,7 +39,7 @@ nginx:
         enabled: true
         overwrite: true
         requires:
-          file: extra_custom_certs_file_copy_arvados-{{ vh }}.pem
+          file: extra_custom_certs_{{ vh }}_cert_file_copy
         config:
           - server:
             - server_name: {{ vh }}.__CLUSTER__.__DOMAIN__
diff --git a/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_webshell_configuration.sls b/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_webshell_configuration.sls
index 1afc7ab80500a575711613cbca7a248cc9be0e26..541921ca31efe41ba3871930ca90e54d961915c3 100644 (file)
@@ -55,7 +55,7 @@ nginx:
         enabled: true
         overwrite: true
         requires:
-          file: extra_custom_certs_file_copy_arvados-webshell.pem
+          file: extra_custom_certs_webshell_cert_file_copy
         config:
           - server:
             - server_name: webshell.__CLUSTER__.__DOMAIN__
diff --git a/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_websocket_configuration.sls b/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_websocket_configuration.sls
index 2a1f241836bf3d3b327e0461fdd63a37a2665d96..f9864f109d5d260a9204aec0525e765e691c810a 100644 (file)
@@ -33,7 +33,7 @@ nginx:
         enabled: true
         overwrite: true
         requires:
-          file: extra_custom_certs_file_copy_arvados-websocket.pem
+          file: extra_custom_certs_websocket_cert_file_copy
         config:
           - server:
             - server_name: ws.__CLUSTER__.__DOMAIN__
diff --git a/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_workbench_configuration.sls b/tools/salt-install/config_examples/single_host/multiple_hostnames/pillars/nginx_workbench_configuration.sls
index 87ed7c4507fa21b5d10f875556ca312e7d9fa537..e610ec158ffc380275ba5619bdc86036061e804d 100644 (file)
@@ -33,6 +33,16 @@ nginx:
         requires:
           __CERT_REQUIRES__
         config:
+          # Maps WB1 '/actions?uuid=X' URLs to their equivalent on WB2
+          - 'map $request_uri $actions_redirect':
+            - '~^/actions\?uuid=(.*-4zz18-.*)': '/collections/$1'
+            - '~^/actions\?uuid=(.*-j7d0g-.*)': '/projects/$1'
+            - '~^/actions\?uuid=(.*-tpzed-.*)': '/projects/$1'
+            - '~^/actions\?uuid=(.*-7fd4e-.*)': '/workflows/$1'
+            - '~^/actions\?uuid=(.*-xvhdp-.*)': '/processes/$1'
+            - '~^/actions\?uuid=(.*)': '/'
+            - default: 0
+
           - server:
             - server_name: workbench.__DOMAIN__
             - listen:
@@ -49,6 +59,10 @@ nginx:
     # rewrite ^/projects.* /projects redirect;
     # rewrite ^/trash /trash redirect;
 
+            # WB1 '/actions?uuid=X' URL Redirects
+            - 'if ($actions_redirect)':
+              - return: '301 $actions_redirect'
+
     # Redirects that include a uuid
             - rewrite: '^/work_units/(.*) /processes/$1 redirect'
             - rewrite: '^/container_requests/(.*) /processes/$1 redirect'
diff --git a/tools/salt-install/config_examples/single_host/multiple_hostnames/states/custom_certs.sls b/tools/salt-install/config_examples/single_host/multiple_hostnames/states/custom_certs.sls
index 3b2be59f368c353793bec874b9cf9dae1adde896..cf8874c2d59757969b0fd20b8b2071ba40fc50ec 100644 (file)
@@ -15,19 +15,41 @@ extra_custom_certs_file_directory_certs_dir:
 
   {%- for cert in certs %}
     {%- set cert_file = 'arvados-' ~ cert ~ '.pem' %}
-    {#- set csr_file = 'arvados-' ~ cert ~ '.csr' #}
     {%- set key_file = 'arvados-' ~ cert ~ '.key' %}
-    {% for c in [cert_file, key_file] %}
-extra_custom_certs_file_copy_{{ c }}:
+extra_custom_certs_{{ cert }}_cert_file_copy:
   file.copy:
-    - name: {{ dest_cert_dir }}/{{ c }}
-    - source: {{ orig_cert_dir }}/{{ c }}
+    - name: {{ dest_cert_dir }}/{{ cert_file }}
+    - source: {{ orig_cert_dir }}/{{ cert_file }}
     - force: true
     - user: root
     - group: root
-    - unless: cmp {{ dest_cert_dir }}/{{ c }} {{ orig_cert_dir }}/{{ c }}
+    - mode: 0640
+    - unless: cmp {{ dest_cert_dir }}/{{ cert_file }} {{ orig_cert_dir }}/{{ cert_file }}
     - require:
       - file: extra_custom_certs_file_directory_certs_dir
-    {%- endfor %}
+
+extra_custom_certs_{{ cert }}_key_file_copy:
+  file.copy:
+    - name: {{ dest_cert_dir }}/{{ key_file }}
+    - source: {{ orig_cert_dir }}/{{ key_file }}
+    - force: true
+    - user: root
+    - group: root
+    - mode: 0640
+    - unless: cmp {{ dest_cert_dir }}/{{ key_file }} {{ orig_cert_dir }}/{{ key_file }}
+    - require:
+      - file: extra_custom_certs_file_directory_certs_dir
+
+extra_nginx_service_reload_on_{{ cert }}_certs_changes:
+  cmd.run:
+    - name: systemctl reload nginx
+    - require:
+      - file: extra_custom_certs_{{ cert }}_cert_file_copy
+      - file: extra_custom_certs_{{ cert }}_key_file_copy
+    - onchanges:
+      - file: extra_custom_certs_{{ cert }}_cert_file_copy
+      - file: extra_custom_certs_{{ cert }}_key_file_copy
+    - onlyif:
+      - test "$(openssl rsa -modulus -noout -in {{ dest_cert_dir }}/{{ key_file }})" = "$(openssl x509 -modulus -noout -in {{ dest_cert_dir }}/{{ cert_file }})"
   {%- endfor %}
 {%- endif %}
diff --git a/tools/salt-install/config_examples/single_host/multiple_hostnames/states/snakeoil_certs.sls b/tools/salt-install/config_examples/single_host/multiple_hostnames/states/snakeoil_certs.sls
index 5f83582bc3c32e496c555383c2ad004ec312c8ec..6518646a74bbd40a19199f15df9156c9d9ce4e28 100644 (file)
@@ -173,8 +173,8 @@ extra_snakeoil_certs_arvados_snakeoil_cert_{{ vh }}_cmd_run:
       - pkg: extra_snakeoil_certs_dependencies_pkg_installed
       - cmd: extra_snakeoil_certs_arvados_snakeoil_ca_cmd_run
     - require_in:
-      - file: extra_custom_certs_file_copy_arvados-{{ vh }}.pem
-      - file: extra_custom_certs_file_copy_arvados-{{ vh }}.key
+      - file: extra_custom_certs_{{ vh }}_cert_file_copy
+      - file: extra_custom_certs_{{ vh }}_key_file_copy
 
   {%- if grains.get('os_family') == 'Debian' %}
extra_snakeoil_certs_certs_permissions_{{ vh }}_cmd_run:
diff --git a/tools/salt-install/config_examples/single_host/single_hostname/states/custom_certs.sls b/tools/salt-install/config_examples/single_host/single_hostname/states/custom_certs.sls
index 3b2be59f368c353793bec874b9cf9dae1adde896..cf8874c2d59757969b0fd20b8b2071ba40fc50ec 100644 (file)
@@ -15,19 +15,41 @@ extra_custom_certs_file_directory_certs_dir:
 
   {%- for cert in certs %}
     {%- set cert_file = 'arvados-' ~ cert ~ '.pem' %}
-    {#- set csr_file = 'arvados-' ~ cert ~ '.csr' #}
     {%- set key_file = 'arvados-' ~ cert ~ '.key' %}
-    {% for c in [cert_file, key_file] %}
-extra_custom_certs_file_copy_{{ c }}:
+extra_custom_certs_{{ cert }}_cert_file_copy:
   file.copy:
-    - name: {{ dest_cert_dir }}/{{ c }}
-    - source: {{ orig_cert_dir }}/{{ c }}
+    - name: {{ dest_cert_dir }}/{{ cert_file }}
+    - source: {{ orig_cert_dir }}/{{ cert_file }}
     - force: true
     - user: root
     - group: root
-    - unless: cmp {{ dest_cert_dir }}/{{ c }} {{ orig_cert_dir }}/{{ c }}
+    - mode: 0640
+    - unless: cmp {{ dest_cert_dir }}/{{ cert_file }} {{ orig_cert_dir }}/{{ cert_file }}
     - require:
       - file: extra_custom_certs_file_directory_certs_dir
-    {%- endfor %}
+
+extra_custom_certs_{{ cert }}_key_file_copy:
+  file.copy:
+    - name: {{ dest_cert_dir }}/{{ key_file }}
+    - source: {{ orig_cert_dir }}/{{ key_file }}
+    - force: true
+    - user: root
+    - group: root
+    - mode: 0640
+    - unless: cmp {{ dest_cert_dir }}/{{ key_file }} {{ orig_cert_dir }}/{{ key_file }}
+    - require:
+      - file: extra_custom_certs_file_directory_certs_dir
+
+extra_nginx_service_reload_on_{{ cert }}_certs_changes:
+  cmd.run:
+    - name: systemctl reload nginx
+    - require:
+      - file: extra_custom_certs_{{ cert }}_cert_file_copy
+      - file: extra_custom_certs_{{ cert }}_key_file_copy
+    - onchanges:
+      - file: extra_custom_certs_{{ cert }}_cert_file_copy
+      - file: extra_custom_certs_{{ cert }}_key_file_copy
+    - onlyif:
+      - test "$(openssl rsa -modulus -noout -in {{ dest_cert_dir }}/{{ key_file }})" = "$(openssl x509 -modulus -noout -in {{ dest_cert_dir }}/{{ cert_file }})"
   {%- endfor %}
 {%- endif %}
diff --git a/tools/salt-install/config_examples/single_host/single_hostname/states/snakeoil_certs.sls b/tools/salt-install/config_examples/single_host/single_hostname/states/snakeoil_certs.sls
index 0ee79491830f58e3c47933b71dfd164f1d5001da..2cee5c9b49bd73750ebf07646d4113ff0a67a37a 100644 (file)
@@ -143,8 +143,8 @@ extra_snakeoil_certs_arvados_snakeoil_cert___HOSTNAME_EXT___cmd_run:
       - pkg: extra_snakeoil_certs_dependencies_pkg_installed
       - cmd: extra_snakeoil_certs_arvados_snakeoil_ca_cmd_run
     - require_in:
-      - file: extra_custom_certs_file_copy_arvados-__HOSTNAME_EXT__.pem
-      - file: extra_custom_certs_file_copy_arvados-__HOSTNAME_EXT__.key
+      - file: extra_custom_certs___HOSTNAME_EXT___cert_file_copy
+      - file: extra_custom_certs___HOSTNAME_EXT___key_file_copy
 
   {%- if grains.get('os_family') == 'Debian' %}
 extra_snakeoil_certs_certs_permissions___HOSTNAME_EXT___cmd_run:
diff --git a/tools/salt-install/provision.sh b/tools/salt-install/provision.sh
index beac6b035362472248bdc2a550ee58fa7c2fa70d..a93899a61a338fa10c54f361a4688f1acc4b6c8c 100755 (executable)
@@ -676,7 +676,7 @@ if [ -z "${ROLES:-}" ]; then
       grep -q ${CERT_NAME} ${P_DIR}/extra_custom_certs.sls || echo "  - ${CERT_NAME}" >> ${P_DIR}/extra_custom_certs.sls
 
       # As the pillar differs whether we use LE or custom certs, we need to do a final edition on them
-      sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${CERT_NAME}.pem/g;
+      sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_${CERT_NAME}_cert_file_copy/g;
               s#__CERT_PEM__#/etc/nginx/ssl/arvados-${CERT_NAME}.pem#g;
               s#__CERT_KEY__#/etc/nginx/ssl/arvados-${CERT_NAME}.key#g" \
       ${P_DIR}/nginx_${c}_configuration.sls
@@ -766,7 +766,7 @@ else
         elif [ "${SSL_MODE}" = "bring-your-own" ]; then
           grep -q "ssl_key_encrypted" ${PILLARS_TOP} || echo "    - ssl_key_encrypted" >> ${PILLARS_TOP}
           for SVC in grafana prometheus; do
-            sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${SVC}.pem/g;
+            sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_${SVC}_cert_file_copy/g;
                     s#__CERT_PEM__#/etc/nginx/ssl/arvados-${SVC}.pem#g;
                     s#__CERT_KEY__#/etc/nginx/ssl/arvados-${SVC}.key#g" \
               ${P_DIR}/nginx_${SVC}_configuration.sls
@@ -804,7 +804,7 @@ else
           fi
         elif [ "${SSL_MODE}" = "bring-your-own" ]; then
           grep -q "ssl_key_encrypted" ${PILLARS_TOP} || echo "    - ssl_key_encrypted" >> ${PILLARS_TOP}
-          sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${R}.pem/g;
+          sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_${R}_cert_file_copy/g;
                   s#__CERT_PEM__#/etc/nginx/ssl/arvados-${R}.pem#g;
                   s#__CERT_KEY__#/etc/nginx/ssl/arvados-${R}.key#g" \
             ${P_DIR}/nginx_${R}_configuration.sls
@@ -860,7 +860,7 @@ else
             ${P_DIR}/nginx_${R}_configuration.sls
           else
             grep -q "ssl_key_encrypted" ${PILLARS_TOP} || echo "    - ssl_key_encrypted" >> ${PILLARS_TOP}
-            sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${R}.pem/g;
+            sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_${R}_cert_file_copy/g;
                     s#__CERT_PEM__#/etc/nginx/ssl/arvados-${R}.pem#g;
                     s#__CERT_KEY__#/etc/nginx/ssl/arvados-${R}.key#g" \
             ${P_DIR}/nginx_${R}_configuration.sls
@@ -949,14 +949,14 @@ else
           # Special case for keepweb
           if [ ${R} = "keepweb" ]; then
             for kwsub in download collections; do
-              sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${kwsub}.pem/g;
+              sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_${kwsub}_cert_file_copy/g;
                       s#__CERT_PEM__#/etc/nginx/ssl/arvados-${kwsub}.pem#g;
                       s#__CERT_KEY__#/etc/nginx/ssl/arvados-${kwsub}.key#g" \
               ${P_DIR}/nginx_${kwsub}_configuration.sls
               grep -q ${kwsub} ${P_DIR}/extra_custom_certs.sls || echo "  - ${kwsub}" >> ${P_DIR}/extra_custom_certs.sls
             done
           else
-            sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_file_copy_arvados-${R}.pem/g;
+            sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_${R}_cert_file_copy/g;
                     s#__CERT_PEM__#/etc/nginx/ssl/arvados-${R}.pem#g;
                     s#__CERT_KEY__#/etc/nginx/ssl/arvados-${R}.key#g" \
             ${P_DIR}/nginx_${R}_configuration.sls
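Throughout provision.sh the pattern is the same: each generated nginx pillar carries a __CERT_REQUIRES__ placeholder, and the sed calls above rewrite it to point at the renamed per-service copy state. A minimal sketch of one such substitution, using a hypothetical service name:

    CERT_NAME=controller
    # Point the pillar's cert requirement at the per-service copy state
    sed -i "s/__CERT_REQUIRES__/file: extra_custom_certs_${CERT_NAME}_cert_file_copy/g" \
        "nginx_${CERT_NAME}_configuration.sls"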