Thomas Mooney <tmooney@genome.wustl.edu>
Chen Chen <aflyhorse@gmail.com>
Veritas Genetics, Inc. <*@veritasgenetics.com>
+Curii Corporation, Inc. <*@curii.com>
--- /dev/null
+Arvados Code of Conduct
+=======================
+
+The Arvados Project is dedicated to providing a harassment-free experience for
+everyone. We do not tolerate harassment of participants in any form.
+
+This code of conduct applies to all Arvados Project spaces both online and off:
+Gitter chat, Redmine issues, wiki, mailing lists, forums, video chats, and any other
+Arvados spaces. Anyone who violates this code of conduct may be sanctioned or
+expelled from these spaces at the discretion of the Arvados Team.
+
+Some Arvados Project spaces may have additional rules in place, which will be
+made clearly available to participants. Participants are responsible for
+knowing and abiding by these rules.
+
+Harassment includes, but is not limited to:
+
+ - Offensive comments related to gender, gender identity and expression, sexual
+orientation, disability, mental illness, neuro(a)typicality, physical
+appearance, body size, age, race, or religion.
+ - Unwelcome comments regarding a person’s lifestyle choices and practices,
+including those related to food, health, parenting, drugs, and employment.
+ - Deliberate misgendering or use of [dead](https://www.quora.com/What-is-deadnaming/answer/Nancy-C-Walker)
+or rejected names.
+ - Gratuitous or off-topic sexual images or behaviour in spaces where they’re not
+appropriate.
 - Physical contact and simulated physical contact (e.g., textual descriptions like
+“\*hug\*” or “\*backrub\*”) without consent or after a request to stop.
+ - Threats of violence.
+ - Incitement of violence towards any individual, including encouraging a person
+to commit suicide or to engage in self-harm.
+ - Deliberate intimidation.
+ - Stalking or following.
+ - Harassing photography or recording, including logging online activity for
+harassment purposes.
+ - Sustained disruption of discussion.
+ - Unwelcome sexual attention.
+ - Pattern of inappropriate social contact, such as requesting/assuming
+inappropriate levels of intimacy with others.
+ - Continued one-on-one communication after requests to cease.
+ - Deliberate “outing” of any aspect of a person’s identity without their consent
+except as necessary to protect vulnerable people from intentional abuse.
+ - Publication of non-harassing private communication.
+
+The Arvados Project prioritizes marginalized people’s safety over privileged
+people’s comfort. The Arvados Leadership Team will not act on complaints regarding:
+
+ - ‘Reverse’ -isms, including ‘reverse racism,’ ‘reverse sexism,’ and ‘cisphobia’
+ - Reasonable communication of boundaries, such as “leave me alone,” “go away,” or
+“I’m not discussing this with you.”
+ - Communicating in a [tone](http://geekfeminism.wikia.com/wiki/Tone_argument)
+you don’t find congenial
+
+Reporting
+---------
+
+If you are being harassed by a member of the Arvados Project, notice that someone
+else is being harassed, or have any other concerns, please contact the Arvados
+Project Team at contact@arvados.org. If the person who is harassing
+you is on the team, they will recuse themselves from handling your incident. We
+will respond as promptly as we can.
+
+This code of conduct applies to Arvados Project spaces, but if you are being
+harassed by a member of Arvados Project outside our spaces, we still want to
+know about it. We will take all good-faith reports of harassment by Arvados Project
+members, especially the Arvados Team, seriously. This includes harassment
+outside our spaces and harassment that took place at any point in time. The
+abuse team reserves the right to exclude people from the Arvados Project based on
+their past behavior, including behavior outside Arvados Project spaces and
+behavior towards people who are not in the Arvados Project.
+
+In order to protect volunteers from abuse and burnout, we reserve the right to
+reject any report we believe to have been made in bad faith. Reports intended
+to silence legitimate criticism may be deleted without response.
+
+We will respect confidentiality requests for the purpose of protecting victims
+of abuse. At our discretion, we may publicly name a person about whom we’ve
+received harassment complaints, or privately warn third parties about them, if
+we believe that doing so will increase the safety of Arvados Project members or
+the general public. We will not name harassment victims without their
+affirmative consent.
+
+Consequences
+------------
+
+Participants asked to stop any harassing behavior are expected to comply
+immediately.
+
+If a participant engages in harassing behavior, the Arvados Team may
+take any action they deem appropriate, up to and including expulsion from all
+Arvados Project spaces and identification of the participant as a harasser to other
+Arvados Project members or the general public.
+
+This anti-harassment policy is based on the [example policy from the Geek
+Feminism wiki](http://geekfeminism.wikia.com/wiki/Community_anti-harassment/Policy),
+created by the Geek Feminism community.
--- /dev/null
+[comment]: # (Copyright © The Arvados Authors. All rights reserved.)
+[comment]: # ()
+[comment]: # (SPDX-License-Identifier: CC-BY-SA-3.0)
+
+# Contributing
+
+Arvados is free software, which means it is free for all to use, learn
+from, and improve. We encourage contributions from the community that
+improve Arvados for everyone. Some examples of contributions are bug
+reports, bug fixes, new features, and scripts or documentation that help
+with using, administering, or installing Arvados. We also love to
+hear about Arvados success stories.
+
+Those interested in contributing should begin by joining the [Arvados community
+channel](https://gitter.im/arvados/community) and telling us about your interest.
+
+Contributors should also create an account at https://dev.arvados.org
+to be able to create and comment on bug tracker issues. The
+Arvados public bug tracker is located at
+https://dev.arvados.org/projects/arvados/issues .
+
+Contributors may also be interested in the [development road map](https://dev.arvados.org/issues/gantt?utf8=%E2%9C%93&set_filter=1&gantt=1&f%5B%5D=project_id&op%5Bproject_id%5D=%3D&v%5Bproject_id%5D%5B%5D=49&f%5B%5D=&zoom=1).
+
+# Development
+
+Git repositories for primary development are located at
+https://git.arvados.org/ and can also be browsed at
+https://dev.arvados.org/projects/arvados/repository . Every push to
+the master branch is also mirrored to Github at
+https://github.com/arvados/arvados .
+
+Visit [Hacking Arvados](https://dev.arvados.org/projects/arvados/wiki/Hacking) for
+detailed information about setting up an Arvados development
+environment, development process, coding standards, and notes about specific components.
+
+If you wish to build the Arvados documentation from a local git clone, see
+[doc/README.textile](doc/README.textile) for instructions.
+
+# Pull requests
+
+The preferred method for making contributions is through Github pull requests.
+
+This is the general contribution process:
+
+1. Fork the Arvados repository using the Github "Fork" button.
+2. Clone your fork, make your changes, commit to your fork.
+3. Every commit message must have a DCO sign-off and every file must have an SPDX license (see below).
+4. Add yourself to the [AUTHORS](AUTHORS) file.
+5. When your fork is ready, use Github to create a Pull Request against `arvados:master`.
+6. Notify the core team about your pull request through the [Arvados development
+channel](https://gitter.im/arvados/development) or by other means.
+7. A member of the core team will review the pull request. They may have questions or comments, or request changes.
+8. When the contribution is ready, a member of the core team will
+merge the pull request into the master branch, which will
+automatically resolve the pull request.
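The local side of steps 2 and 3 can be sketched as follows. This is an illustration only: a throwaway local bare repository stands in for your Github fork, and the branch name, file, commit message, and author are example values; in practice you would clone your fork's URL.

```shell
# Stand-in for a Github fork: a local bare repository (illustration only).
rm -rf /tmp/arvados-fork.git /tmp/arvados-work
git init -q --bare /tmp/arvados-fork.git

# Step 2: clone the "fork", make a change on a branch, and commit.
git clone -q /tmp/arvados-fork.git /tmp/arvados-work
cd /tmp/arvados-work
git checkout -q -b my-feature
echo "example change" > example.txt
git add example.txt

# Step 3: the commit message carries the DCO sign-off (see below).
git -c user.name="Joe Smith" -c user.email="joe.smith@example.com" \
    commit -q -m "Add example file

Arvados-DCO-1.1-Signed-off-by: Joe Smith <joe.smith@example.com>"

# Push the branch back to the fork, ready for a pull request.
git push -q origin my-feature
```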
+
+The Arvados project does not require a contributor agreement in advance, but does require each commit message include a [Developer Certificate of Origin](https://dev.arvados.org/projects/arvados/wiki/Developer_Certificate_Of_Origin). Please ensure *every git commit message* includes `Arvados-DCO-1.1-Signed-off-by`. If you have already made commits without it, fix them with `git commit --amend` or `git rebase`.
+
+The Developer Certificate of Origin line looks like this:
+
+```
+Arvados-DCO-1.1-Signed-off-by: Joe Smith <joe.smith@example.com>
+```
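For the most recent commit, one way to add a missing sign-off is `git commit --amend`. The sketch below demonstrates this in a throwaway repository; the commit message and author shown are examples:

```shell
# Throwaway repository with one commit that lacks the sign-off.
rm -rf /tmp/dco-demo
git init -q /tmp/dco-demo
git -C /tmp/dco-demo -c user.name="Joe Smith" -c user.email="joe.smith@example.com" \
    commit -q --allow-empty -m "Fix typo in docs"

# Rewrite the last commit message so it ends with the DCO line.
git -C /tmp/dco-demo -c user.name="Joe Smith" -c user.email="joe.smith@example.com" \
    commit -q --amend -m "Fix typo in docs

Arvados-DCO-1.1-Signed-off-by: Joe Smith <joe.smith@example.com>"

# The sign-off is now part of the commit message.
git -C /tmp/dco-demo log -1 --format=%B
```

For commits further back in history, `git rebase -i` with `reword` works the same way, one commit at a time.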
+
+New files must also include `SPDX-License-Identifier` at the top with one of the three Arvados open source licenses. See [COPYING](COPYING) for details.
+
+# Continuous integration
+
+Continuous integration is hosted at https://ci.arvados.org/
+
+Currently, external contributors cannot trigger builds. We are investigating integration with Github pull requests for the future.
+
+[![Build Status](https://ci.arvados.org/buildStatus/icon?job=run-tests)](https://ci.arvados.org/job/run-tests/)
+
+[![Go Report Card](https://goreportcard.com/badge/github.com/arvados/arvados)](https://goreportcard.com/report/github.com/arvados/arvados)
AGPL-3.0: agpl-3.0.txt
Apache-2.0: apache-2.0.txt
CC-BY-SA-3.0: cc-by-sa-3.0.txt
+
+As a general rule, code in the sdk/ directory is licensed Apache-2.0,
+documentation in the doc/ directory is licensed CC-BY-SA-3.0, and
+everything else is licensed AGPL-3.0.
\ No newline at end of file
[comment]: # ()
[comment]: # (SPDX-License-Identifier: CC-BY-SA-3.0)
-[Arvados](https://arvados.org) is a free software distributed computing platform
-for bioinformatics, data science, and high throughput analysis of massive data
-sets. Arvados supports a variety of cloud, cluster and HPC environments.
+[![Join the chat at https://gitter.im/arvados/community](https://badges.gitter.im/arvados/community.svg)](https://gitter.im/arvados/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) | [Installing Arvados](https://doc.arvados.org/install/index.html) | [Installing Client SDKs](https://doc.arvados.org/sdk/index.html) | [Report a bug](https://dev.arvados.org/projects/arvados/issues/new) | [Development and Contributing](CONTRIBUTING.md)
-Arvados consists of:
+<img align="right" src="doc/images/dax.png" height="240px">
-* *Keep*: a petabyte-scale content-addressed distributed storage system for managing and
- storing collections of files, accessible via HTTP and FUSE mount.
+[Arvados](https://arvados.org) is an open source platform for
+managing, processing, and sharing genomic and other large scientific
+and biomedical data. With Arvados, bioinformaticians run and scale
+compute-intensive workflows, developers create biomedical
+applications, and IT administrators manage large compute and storage
+resources.
-* *Crunch*: a Docker-based cluster and HPC workflow engine designed providing
- strong versioning, reproducibilty, and provenance of computations.
+The key components of Arvados are:
-* Related services and components including a web workbench for managing files
- and compute jobs, REST APIs, SDKs, and other tools.
+* *Keep*: Keep is the Arvados storage system for managing and storing large
+collections of files. Keep combines content addressing and a
+distributed storage architecture resulting in both high reliability
+and high throughput. Every file stored in Keep can be accurately
+verified every time it is retrieved. Keep supports the creation of
+collections as a flexible way to define data sets without having to
+re-organize or needlessly copy data. Keep works on a wide range of
+underlying filesystems and object stores.
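The verification property described above comes from content addressing: a block's identifier is a hash of its bytes, so a reader can re-hash what it retrieves and compare. The following is only a toy sketch of that idea, not the Keep API or wire protocol (Keep's actual locators are MD5-based, with additional size and permission hints); the paths used here are arbitrary:

```shell
# Toy content-addressed store: the identifier is the MD5 of the content.
mkdir -p /tmp/toy-keep
printf 'GATTACA' > /tmp/block
id=$(md5sum /tmp/block | cut -d' ' -f1)
cp /tmp/block "/tmp/toy-keep/$id"

# On retrieval, re-hash the data and compare it with the identifier.
retrieved="/tmp/toy-keep/$id"
if [ "$(md5sum "$retrieved" | cut -d' ' -f1)" = "$id" ]; then
    echo "verified: $id"
fi
```

Because the identifier is derived from the content, identical blocks stored twice map to the same identifier, which is also what makes deduplication natural in such a store.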
-## Quick start
+* *Crunch*: Crunch is the orchestration system for running [Common Workflow Language](https://www.commonwl.org) workflows. It is
+designed to maintain data provenance and workflow
+reproducibility. Crunch automatically tracks data inputs and outputs
+through Keep and executes workflow processes in Docker containers. In
+a cloud environment, Crunch optimizes costs by scaling compute on demand.
-Veritas Genetics maintains a public installation of Arvados for evaluation and trial use, the [Arvados Playground](https://playground.arvados.org). A Google account is required to log in.
+* *Workbench*: The Workbench web application allows users to interactively access
+Arvados functionality. It is especially helpful for querying and
+browsing data, visualizing provenance, and tracking the progress of
+workflows.
+
+* *Command Line tools*: The command line interface (CLI) provides convenient
+access to Arvados functionality from the command line.
+
+* *API and SDKs*: Arvados is designed to be integrated with existing infrastructure. All
+the services in Arvados are accessed through a RESTful API. SDKs are
+available for Python, Go, R, Perl, Ruby, and Java.
+
+# Quick start
To try out Arvados on your local workstation, you can use Arvbox, which
provides Arvados components pre-installed in a Docker container (requires
Docker). To configure Arvbox to be accessible over a network and for other options see
http://doc.arvados.org/install/arvbox.html for details.
-## Documentation
+# Documentation
-Complete documentation, including a User Guide, Installation documentation and
-API documentation is available at http://doc.arvados.org/
+Complete documentation, including the [User Guide](https://doc.arvados.org/user/index.html), [Installation documentation](https://doc.arvados.org/install/index.html), [Administrator documentation](https://doc.arvados.org/admin/index.html) and
+[API documentation](https://doc.arvados.org/api/index.html) is available at http://doc.arvados.org/
If you wish to build the Arvados documentation from a local git clone, see
-doc/README.textile for instructions.
+[doc/README.textile](doc/README.textile) for instructions.
-## Community
+# Community
-[![Join the chat at https://gitter.im/curoverse/arvados](https://badges.gitter.im/curoverse/arvados.svg)](https://gitter.im/curoverse/arvados?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+[![Join the chat at https://gitter.im/arvados/community](https://badges.gitter.im/arvados/community.svg)](https://gitter.im/arvados/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
-The [curoverse/arvados channel](https://gitter.im/curoverse/arvados)
+The [Arvados community](https://gitter.im/arvados/community)
channel at [gitter.im](https://gitter.im) is available for live
discussion and support.
-The
-[Arvados user mailing list](http://lists.arvados.org/mailman/listinfo/arvados)
-is a forum for general discussion, questions, and news about Arvados
-development. The
-[Arvados developer mailing list](http://lists.arvados.org/mailman/listinfo/arvados-dev)
-is a forum for more technical discussion, intended for developers and
-contributors to Arvados.
+The [Arvados development](https://gitter.im/arvados/development)
+channel at [gitter.im](https://gitter.im) is used to coordinate development.
-## Development
+The [Arvados user mailing list](http://lists.arvados.org/mailman/listinfo/arvados)
+is used to announce new versions and other news.
-[![Build Status](https://ci.curoverse.com/buildStatus/icon?job=run-tests)](https://ci.curoverse.com/job/run-tests/)
-[![Go Report Card](https://goreportcard.com/badge/github.com/curoverse/arvados)](https://goreportcard.com/report/github.com/curoverse/arvados)
+All participants are expected to abide by the [Arvados Code of Conduct](CODE_OF_CONDUCT.md).
-The Arvados public bug tracker is located at https://dev.arvados.org/projects/arvados/issues
+# Reporting bugs
-Continuous integration is hosted at https://ci.curoverse.com/
+[Report a bug](https://dev.arvados.org/projects/arvados/issues/new) on [dev.arvados.org](https://dev.arvados.org).
-Instructions for setting up a development environment and working on specific
-components can be found on the
-["Hacking Arvados" page of the Arvados wiki](https://dev.arvados.org/projects/arvados/wiki/Hacking).
+# Development and Contributing
-## Contributing
+See [CONTRIBUTING](CONTRIBUTING.md) for information about Arvados development and how to contribute to the Arvados project.
-When making a pull request, please ensure *every git commit message* includes a one-line [Developer Certificate of Origin](https://dev.arvados.org/projects/arvados/wiki/Developer_Certificate_Of_Origin). If you have already made commits without it, fix them with `git commit --amend` or `git rebase`.
+The [development road map](https://dev.arvados.org/issues/gantt?utf8=%E2%9C%93&set_filter=1&gantt=1&f%5B%5D=project_id&op%5Bproject_id%5D=%3D&v%5Bproject_id%5D%5B%5D=49&f%5B%5D=&zoom=1) outlines some of the project priorities over the next twelve months.
-## Licensing
+# Licensing
-Arvados is Free Software. See COPYING for information about Arvados Free
-Software licenses.
+Arvados is Free Software. See [COPYING](COPYING) for information about the open source licenses used in Arvados.
source 'https://rubygems.org'
gem 'rails', '~> 5.0.0'
-gem 'arvados', git: 'https://github.com/curoverse/arvados.git', glob: 'sdk/ruby/arvados.gemspec'
+gem 'arvados', git: 'https://github.com/arvados/arvados.git', glob: 'sdk/ruby/arvados.gemspec'
-gem 'activerecord-nulldb-adapter', git: 'https://github.com/curoverse/nulldb'
+gem 'activerecord-nulldb-adapter', git: 'https://github.com/arvados/nulldb'
gem 'multi_json'
gem 'oj'
gem 'sass'
# Wiselinks hasn't been updated for many years and it's using deprecated methods
# Use our own Wiselinks fork until this PR is accepted:
# https://github.com/igor-alexandrov/wiselinks/pull/116
-# gem 'wiselinks', git: 'https://github.com/curoverse/wiselinks.git', branch: 'rails-5.1-compatibility'
+# gem 'wiselinks', git: 'https://github.com/arvados/wiselinks.git', branch: 'rails-5.1-compatibility'
gem 'sshkey'
gem 'httpclient', '~> 2.5'
# This fork has Rails 4 compatible routes
-gem 'themes_for_rails', git: 'https://github.com/curoverse/themes_for_rails'
+gem 'themes_for_rails', git: 'https://github.com/arvados/themes_for_rails'
gem "deep_merge", :require => 'deep_merge/rails_compat'
GIT
- remote: https://github.com/curoverse/arvados.git
- revision: dd9f2403f43bcb93da5908ddde57d8c0491bb4c2
+ remote: https://github.com/arvados/arvados.git
+ revision: c210114aa8c77ba0bb8e4d487fc1507b40f9560f
glob: sdk/ruby/arvados.gemspec
specs:
- arvados (1.4.1.20191019025325)
+ arvados (1.5.0.pre20200114202620)
activesupport (>= 3)
andand (~> 1.3, >= 1.3.3)
arvados-google-api-client (>= 0.7, < 0.8.9)
jwt (>= 0.1.5, < 2)
GIT
- remote: https://github.com/curoverse/nulldb
+ remote: https://github.com/arvados/nulldb
revision: d8e0073b665acdd2537c5eb15178a60f02f4b413
specs:
activerecord-nulldb-adapter (0.3.9)
activerecord (>= 2.0.0)
GIT
- remote: https://github.com/curoverse/themes_for_rails
+ remote: https://github.com/arvados/themes_for_rails
revision: ddf6e592b3b6493ea0c2de7b5d3faa120ed35be0
specs:
themes_for_rails (0.5.1)
rails-dom-testing (>= 1, < 3)
railties (>= 4.2.0)
thor (>= 0.14, < 2.0)
- json (2.2.0)
+ json (2.3.0)
jwt (1.5.6)
launchy (2.4.3)
addressable (~> 2.3)
nokogiri (>= 1.5.9)
mail (2.7.1)
mini_mime (>= 0.1.1)
- memoist (0.16.0)
+ memoist (0.16.2)
metaclass (0.0.4)
method_source (0.9.2)
mime-types (3.2.2)
net-ssh-gateway (2.0.0)
net-ssh (>= 4.0.0)
nio4r (2.3.1)
- nokogiri (1.10.5)
+ nokogiri (1.10.8)
mini_portile2 (~> 2.4.0)
npm-rails (0.2.1)
rails (>= 3.2)
cliver (~> 0.3.1)
multi_json (~> 1.0)
websocket-driver (>= 0.2.0)
- public_suffix (4.0.1)
+ public_suffix (4.0.3)
rack (2.0.7)
rack-mini-profiler (1.0.2)
rack (>= 1.2.0)
method_source
rake (>= 0.8.7)
thor (>= 0.18.1, < 2.0)
- rake (12.3.2)
+ rake (13.0.1)
raphael-rails (2.1.2)
rb-fsevent (0.10.3)
rb-inotify (0.10.0)
thor (0.20.3)
thread_safe (0.3.6)
tilt (2.0.9)
- tzinfo (1.2.5)
+ tzinfo (1.2.6)
thread_safe (~> 0.1)
uglifier (2.7.2)
execjs (>= 0.3.0)
uglifier (~> 2.0)
BUNDLED WITH
- 1.17.3
+ 1.11
# star / unstar the current project
def star
- links = Link.where(tail_uuid: current_user.uuid,
+ links = Link.where(owner_uuid: current_user.uuid,
head_uuid: @object.uuid,
link_class: 'star')
# the browser can't.
f.json { render opts.merge(json: {success: false, errors: @errors}) }
f.html { render({action: 'error'}.merge(opts)) }
+ f.all { render({action: 'error', formats: 'text'}.merge(opts)) }
end
end
helper_method :my_starred_projects
def my_starred_projects user
return if defined?(@starred_projects) && @starred_projects
- links = Link.filter([['tail_uuid', '=', user.uuid],
+ links = Link.filter([['owner_uuid', 'in', ["#{Rails.configuration.ClusterID}-j7d0g-fffffffffffffff", user.uuid]],
['link_class', '=', 'star'],
['head_uuid', 'is_a', 'arvados#group']]).with_count("none").select(%w(head_uuid))
uuids = links.collect { |x| x.head_uuid }
# Prefer the attachment-only-host when we want an attachment
# (and when there is no preview link configured)
tmpl = Rails.configuration.Services.WebDAVDownload.ExternalURL.to_s
- elsif not Rails.configuration.Workbench.TrustAllContent
+ elsif not Rails.configuration.Collections.TrustAllContent
check_uri = URI.parse(tmpl.sub("*", munged_id))
if opts[:query_token] and
(check_uri.host.nil? or (
def show_pane_list
if current_user.andand.is_admin
- super | %w(Admin)
+ %w(Admin) | super
else
super
end
end
def webshell
- return render_not_found if Rails.configuration.Workbench.ShellInABoxURL == URI("")
- webshell_url = URI(Rails.configuration.Workbench.ShellInABoxURL)
+ return render_not_found if Rails.configuration.Services.WebShell.ExternalURL == URI("")
+ webshell_url = URI(Rails.configuration.Services.WebShell.ExternalURL)
if webshell_url.host.index("*") != nil
webshell_url.host = webshell_url.host.sub("*", @object.hostname)
else
end
def current_api_host
- "#{Rails.configuration.Services.Controller.ExternalURL.hostname}:#{Rails.configuration.Services.Controller.ExternalURL.port}"
+ if Rails.configuration.Services.Controller.ExternalURL.port == 443
+ "#{Rails.configuration.Services.Controller.ExternalURL.hostname}"
+ else
+ "#{Rails.configuration.Services.Controller.ExternalURL.hostname}:#{Rails.configuration.Services.Controller.ExternalURL.port}"
+ end
end
def current_uuid_prefix
my_child_containers = my_children.map(&:container_uuid).compact.uniq
grandchildren = {}
my_child_containers.each { |c| grandchildren[c] = []} if my_child_containers.any?
- reqs = ContainerRequest.select(cols).where(requesting_container_uuid: my_child_containers).with_count("none").results if my_child_containers.any?
+ reqs = ContainerRequest.select(cols).where(requesting_container_uuid: my_child_containers).order(["requesting_container_uuid", "uuid"]).with_count("none").results if my_child_containers.any?
reqs.each {|cr| grandchildren[cr.requesting_container_uuid] << cr} if reqs
my_children.each do |cr|
--- /dev/null
+<%# Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: AGPL-3.0 %>
+
+Oh... fiddlesticks.
+
+Sorry, I had some trouble handling your request.
+
+<% if @errors.is_a? Array then @errors.each do |error| %>
+<%= error %>
+<% end end %>
<div class="modal-header">
<button type="button" class="close" onClick="reset_form()" data-dismiss="modal" aria-hidden="true">×</button>
<div>
- <div class="col-sm-6"> <h4 class="modal-title">Setup Shell Account</h4> </div>
+ <div class="col-sm-6"> <h4 class="modal-title">Setup Account</h4> </div>
<div class="spinner spinner-32px spinner-h-center col-sm-1" hidden="true"></div>
</div>
<br/>
<% end %>
</div>
<div class="form-group">
- <label for="vm_uuid">Virtual Machine</label>
+ <label for="vm_uuid">Virtual Machine (optional)</label>
<select class="form-control" name="vm_uuid">
<option value="" <%= 'selected' unless selected_vm %>>
Choose One:
</select>
</div>
<div class="groups-group">
- <label for="groups">Groups for virtual machine (comma separated list)</label>
+ <label for="groups">Groups for virtual machine (comma separated list) (optional)</label>
<input class="form-control" id="groups" maxlength="250" name="groups" type="text" value="<%=groups%>">
</div>
</div>
<div class="row">
<div class="col-md-6">
+
<p>
- As an admin, you can log in as this user. When you’ve
- finished, you will need to log out and log in again with your
- own account.
+ This page enables you to <a href="https://doc.arvados.org/master/admin/user-management.html">manage users</a>.
</p>
- <blockquote>
- <%= button_to "Log in as #{@object.full_name}", sudo_user_url(id: @object.uuid), class: 'btn btn-primary' %>
- </blockquote>
-
<p>
- As an admin, you can setup a shell account for this user.
- The login name is automatically generated from the user's e-mail address.
+ This button sets up a user. After setup, they will be able to use
+ Arvados. This dialog box also allows you to optionally set up a
+ shell account for this user. The login name is automatically
+ generated from the user's e-mail address.
</p>
- <blockquote>
- <%= link_to "Setup shell account #{'for ' if @object.full_name.present?} #{@object.full_name}", setup_popup_user_url(id: @object.uuid), {class: 'btn btn-primary', :remote => true, 'data-toggle' => "modal", 'data-target' => '#user-setup-modal-window'} %>
- </blockquote>
+ <%= link_to "Setup account #{'for ' if @object.full_name.present?} #{@object.full_name}", setup_popup_user_url(id: @object.uuid), {class: 'btn btn-primary', :remote => true, 'data-toggle' => "modal", 'data-target' => '#user-setup-modal-window'} %>
- <p>
+ <p style="margin-top: 3em">
As an admin, you can deactivate and reset this user. This will
remove all repository/VM permissions for the user. If you
"setup" the user again, the user will have to sign the user
- agreement again.
+ agreement again. You may also want to <a href="https://doc.arvados.org/master/admin/reassign-ownership.html">reassign data ownership</a>.
+ </p>
+
+ <%= button_to "Deactivate #{@object.full_name}", unsetup_user_url(id: @object.uuid), class: 'btn btn-primary', data: {confirm: "Are you sure you want to deactivate #{@object.full_name}?"} %>
+
+ <p style="margin-top: 3em">
+ As an admin, you can log in as this user. When you’ve
+ finished, you will need to log out and log in again with your
+ own account.
</p>
- <blockquote>
- <%= button_to "Deactivate #{@object.full_name}", unsetup_user_url(id: @object.uuid), class: 'btn btn-primary', data: {confirm: "Are you sure you want to deactivate #{@object.full_name}?"} %>
- </blockquote>
+ <%= button_to "Log in as #{@object.full_name}", sudo_user_url(id: @object.uuid), class: 'btn btn-primary' %>
</div>
<div class="col-md-6">
<div class="panel panel-default">
<td style="word-break:break-all;">
<% if @my_vm_logins[vm[:uuid]] %>
<% @my_vm_logins[vm[:uuid]].each do |login| %>
- <code>ssh <%= login %>@<%= vm[:hostname] %>.<%= current_uuid_prefix || 'xyzzy' %></code>
+ <code>ssh <%= login %>@<%= vm[:hostname] %><%=Rails.configuration.Workbench.SSHHelpHostSuffix%></code>
<% end %>
<% end %>
</td>
<% end %>
</div>
</div>
- <p>In order to access virtual machines using SSH, <%= link_to ssh_keys_user_path(current_user) do%> add an SSH key to your account<%end%> and add a section like this to your SSH configuration file ( <i>~/.ssh/config</i>):</p>
- <pre>Host *.<%= current_uuid_prefix || 'xyzzy' %>
- TCPKeepAlive yes
- ServerAliveInterval 60
- ProxyCommand ssh -p2222 turnout@switchyard.<%= current_api_host || 'xyzzy.arvadosapi.com' %> -x -a $SSH_PROXY_FLAGS %h
- </pre>
+
+<p>In order to access virtual machines using SSH, <%= link_to ssh_keys_user_path(current_user) do%>add an SSH key to your account<%end%>.</p>
+
+<%= raw(Rails.configuration.Workbench.SSHHelpPageHTML) %>
SPDX-License-Identifier: AGPL-3.0 %>
-<p>
-Sample <code>~/.ssh/config</code> section:
-</p>
-
-<pre>
-Host *.arvados
- ProxyCommand ssh -p2222 turnout@switchyard.<%= current_api_host || 'xyzzy.arvadosapi.com' %> -x -a $SSH_PROXY_FLAGS %h
-<% if @objects.first.andand.current_user_logins.andand.first %>
- User <%= @objects.first.current_user_logins.andand.first %>
-<% end %>
-</pre>
-
-<p>
-Sample login command:
-</p>
-
-<pre>
-ssh <%= @objects.first.andand.hostname.andand.sub('.'+current_api_host,'') or 'vm-hostname' %>.arvados
-</pre>
-
-<p>
- See also:
- <%= link_to raw('Arvados Docs → User Guide → SSH access'),
- "#{Rails.configuration.Workbench.ArvadosDocsite}/user/getting_started/ssh-access-unix.html",
- target: "_blank"%>.
-</p>
+<%= raw(Rails.configuration.Workbench.SSHHelpPageHTML) %>
Bundler.require(:default, Rails.env)
+if ENV["ARVADOS_RAILS_LOG_TO_STDOUT"]
+ Rails.logger = ActiveSupport::TaggedLogging.new(Logger.new(STDOUT))
+end
+
module ArvadosWorkbench
class Application < Rails::Application
case "$TARGET" in
centos*)
- fpm_depends+=(git)
+ fpm_depends+=(git bison make automake gcc gcc-c++ graphviz)
;;
debian* | ubuntu*)
- fpm_depends+=(git g++)
+ fpm_depends+=(git g++ bison zlib1g-dev make graphviz)
;;
esac
end
test "Redirect to keep_web_url via #{id_type} when trust_all_content enabled" do
- Rails.configuration.Workbench.TrustAllContent = true
+ Rails.configuration.Collections.TrustAllContent = true
setup_for_keep_web('https://collections.example',
'https://download.example')
tok = api_token('active')
[false, true].each do |trust_all_content|
test "Redirect preview to keep_web_download_url when preview is disabled and trust_all_content is #{trust_all_content}" do
- Rails.configuration.Workbench.TrustAllContent = trust_all_content
+ Rails.configuration.Collections.TrustAllContent = trust_all_content
setup_for_keep_web "", 'https://download.example/'
tok = api_token('active')
id = api_fixture('collections')['w_a_z_file']['uuid']
end
['', ' asc', ' desc'].each do |direction|
- test "projects#show tab partial orders correctly by #{direction}" do
+ test "projects#show tab partial orders correctly by created_at#{direction}" do
_test_tab_content_order direction
end
end
find('a', text: 'Show').
click
+ click_link 'Attributes'
+
assert page.has_text? 'modified_by_user_uuid'
page.within(:xpath, '//span[@data-name="is_active"]') do
assert_equal "false", text, "Expected new user's is_active to be false"
click_link 'Advanced'
click_link 'Metadata'
- assert page.has_text? 'can_login' # make sure page is rendered / ready
+ assert page.has_text? 'can_read' # make sure page is rendered / ready
assert page.has_no_text? 'VirtualMachine:'
end
# Setup user
click_link 'Admin'
- assert page.has_text? 'As an admin, you can setup'
+ assert page.has_text? 'This button sets up a user'
- click_link 'Setup shell account for Active User'
+ click_link 'Setup account for Active User'
within '.modal-content' do
find 'label', text: 'Virtual Machine'
end
visit user_url
+ click_link 'Attributes'
assert page.has_text? 'modified_by_client_uuid'
click_link 'Advanced'
# Click on Setup button again and this time also choose a VM
click_link 'Admin'
- click_link 'Setup shell account for Active User'
+ click_link 'Setup account for Active User'
within '.modal-content' do
select("testvm.shell", :from => 'vm_uuid')
end
visit user_url
+ click_link 'Attributes'
find '#Attributes', text: 'modified_by_client_uuid'
click_link 'Advanced'
click_link 'Metadata'
assert page.has_text? 'VirtualMachine: testvm.shell'
assert page.has_text? '["test group one", "test-group-two"]'
+ vm_links = all("a", text: "VirtualMachine:")
+ assert_equal(2, vm_links.size)
end
test "unsetup active user" do
user_url = page.current_url
# Verify that is_active is set
- find('a,button', text: 'Attributes').click
+ click_link 'Attributes'
assert page.has_text? 'modified_by_user_uuid'
page.within(:xpath, '//span[@data-name="is_active"]') do
assert_equal "true", text, "Expected user's is_active to be true"
# poltergeist returns true for confirm(), so we don't need to accept.
end
+ click_link 'Attributes'
+
# Should now be back in the Attributes tab for the user
assert page.has_text? 'modified_by_user_uuid'
page.within(:xpath, '//span[@data-name="is_active"]') do
# setup user again and verify links present
click_link 'Admin'
- click_link 'Setup shell account for Active User'
+ click_link 'Setup account for Active User'
within '.modal-content' do
select("testvm.shell", :from => 'vm_uuid')
end
visit user_url
+ click_link 'Attributes'
assert page.has_text? 'modified_by_client_uuid'
click_link 'Advanced'
# Setup user
click_link 'Admin'
- assert page.has_text? 'As an admin, you can setup'
+ assert page.has_text? 'This button sets up a user'
click_link 'Add new group'
n += 1
raise if n > 2 || e.is_a?(Skip)
STDERR.puts "Test failed, retrying (##{n})"
+ ActiveSupport::TestCase.reset_api_fixtures_now
retry
end
rescue *PASSTHROUGH_EXCEPTIONS
WORKSPACE=path Path to the Arvados source tree to build packages from
CWLTOOL=path (optional) Path to cwltool git repository.
SALAD=path (optional) Path to schema_salad git repository.
-PYCMD=pythonexec (optional) Specify the python executable to use in the docker image. Defaults to "python".
+PYCMD=pythonexec (optional) Specify the python executable to use in the docker image. Defaults to "python3".
EOF
cd "$WORKSPACE"
-py=python
+py=python3
pipcmd=pip
if [[ -n "$PYCMD" ]] ; then
py="$PYCMD"
- if [[ $py = python3 ]] ; then
- pipcmd=pip3
- fi
+fi
+if [[ $py = python3 ]] ; then
+ pipcmd=pip3
fi
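The hunk above fixes a subtle default-handling bug: `pipcmd=pip3` used to be set only inside the `PYCMD` branch, so the new `py=python3` default still ran with `pipcmd=pip`. A standalone sketch of the corrected selection (the `choose_pipcmd` helper is hypothetical, for illustration only):

```shell
#!/bin/bash
# Hypothetical helper mirroring the corrected logic above: the pip command
# must track the python executable whether or not PYCMD was supplied.
choose_pipcmd() {
    local py="${1:-python3}"   # mirrors the new py=python3 default
    local pipcmd=pip
    if [[ $py = python3 ]]; then
        pipcmd=pip3
    fi
    echo "$pipcmd"
}

choose_pipcmd             # prints "pip3" (the default)
choose_pipcmd python2.7   # prints "pip"
```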
(cd sdk/python && python setup.py sdist)
cwl_runner_version=$(cd sdk/python && nohash_version_from_git 1.0)
fi
-docker build --build-arg sdk=$sdk --build-arg runner=$runner --build-arg salad=$salad --build-arg cwltool=$cwltool --build-arg pythoncmd=$py --build-arg pipcmd=$pipcmd -f "$WORKSPACE/sdk/dev-jobs.dockerfile" -t arvados/jobs:$cwl_runner_version "$WORKSPACE/sdk"
+set -x
+docker build --no-cache --build-arg sdk=$sdk --build-arg runner=$runner --build-arg salad=$salad --build-arg cwltool=$cwltool --build-arg pythoncmd=$py --build-arg pipcmd=$pipcmd -f "$WORKSPACE/sdk/dev-jobs.dockerfile" -t arvados/jobs:$cwl_runner_version "$WORKSPACE/sdk"
echo arv-keepdocker arvados/jobs $cwl_runner_version
arv-keepdocker arvados/jobs $cwl_runner_version
using_fork=true
if [[ $using_fork = true ]]; then
- LIBCLOUD_PIN_SRC="https://github.com/curoverse/libcloud/archive/apache-libcloud-$LIBCLOUD_PIN.zip"
+ LIBCLOUD_PIN_SRC="https://github.com/arvados/libcloud/archive/apache-libcloud-$LIBCLOUD_PIN.zip"
else
LIBCLOUD_PIN_SRC=""
fi
#
# SPDX-License-Identifier: AGPL-3.0
-all: centos7/generated debian9/generated ubuntu1604/generated ubuntu1804/generated
+all: centos7/generated debian9/generated debian10/generated ubuntu1604/generated ubuntu1804/generated
centos7/generated: common-generated-all
test -d centos7/generated || mkdir centos7/generated
test -d debian9/generated || mkdir debian9/generated
cp -rlt debian9/generated common-generated/*
+debian10/generated: common-generated-all
+ test -d debian10/generated || mkdir debian10/generated
+ cp -rlt debian10/generated common-generated/*
+
ubuntu1604/generated: common-generated-all
test -d ubuntu1604/generated || mkdir ubuntu1604/generated
cp -rlt ubuntu1604/generated common-generated/*
test -d ubuntu1804/generated || mkdir ubuntu1804/generated
cp -rlt ubuntu1804/generated common-generated/*
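These recipes all use the same `test -d … || mkdir …` plus `cp -rlt` idiom: create the per-distro directory on demand, then hard-link the shared `common-generated` artifacts into it, so every distro directory shares a single on-disk copy. A minimal standalone sketch, with temp directories standing in for the real tree (assumes GNU `cp`/`stat`):

```shell
#!/bin/bash
set -e
# Stand-ins for common-generated/ and debian10/generated/.
src=$(mktemp -d)
dst=$(mktemp -d)/generated
echo tarball > "$src/go.tar.gz"

test -d "$dst" || mkdir -p "$dst"   # create the target only if missing
cp -rlt "$dst" "$src"/*             # -l: hard-link instead of copying data

# Hard links share an inode, so each per-distro "copy" costs no extra disk.
[ "$(stat -c %i "$src/go.tar.gz")" = "$(stat -c %i "$dst/go.tar.gz")" ] \
    && echo linked
```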
-GOTARBALL=go1.12.7.linux-amd64.tar.gz
+GOTARBALL=go1.13.4.linux-amd64.tar.gz
NODETARBALL=node-v6.11.2-linux-x64.tar.xz
RVMKEY1=mpapis.asc
RVMKEY2=pkuczynski.asc
# SPDX-License-Identifier: AGPL-3.0
FROM centos:7
-MAINTAINER Ward Vandewege <ward@curoverse.com>
+MAINTAINER Arvados Package Maintainers <packaging@arvados.org>
# Install dependencies.
RUN yum -q -y install make automake gcc gcc-c++ libyaml-devel patch readline-devel zlib-devel libffi-devel openssl-devel bzip2 libtool bison sqlite-devel rpm-build git perl-ExtUtils-MakeMaker libattr-devel nss-devel libcurl-devel which tar unzip scl-utils centos-release-scl postgresql-devel python-devel python-setuptools fuse-devel xz-libs git python-virtualenv wget
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2 && \
/usr/local/rvm/bin/rvm-exec default gem install fpm --version 1.10.2
# Install Bash 4.4.12 // see https://dev.arvados.org/issues/15612
&& ln -sf /usr/local/src/bash-4.4.12/bash /bin/bash
# Install golang binary
-ADD generated/go1.12.7.linux-amd64.tar.gz /usr/local/
+ADD generated/go1.13.4.linux-amd64.tar.gz /usr/local/
RUN ln -s /usr/local/go/bin/go /usr/local/bin/
# Install nodejs and npm
RUN wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN rpm -ivh epel-release-latest-7.noarch.rpm
-RUN git clone --depth 1 git://git.curoverse.com/arvados.git /tmp/arvados && cd /tmp/arvados/services/api && /usr/local/rvm/bin/rvm-exec default bundle && cd /tmp/arvados/apps/workbench && /usr/local/rvm/bin/rvm-exec default bundle
+RUN git clone --depth 1 git://git.arvados.org/arvados.git /tmp/arvados && cd /tmp/arvados/services/api && /usr/local/rvm/bin/rvm-exec default bundle && cd /tmp/arvados/apps/workbench && /usr/local/rvm/bin/rvm-exec default bundle
# The version of setuptools that comes with CentOS is way too old
-RUN pip install --upgrade setuptools
+RUN pip install --upgrade 'setuptools<45'
ENV WORKSPACE /arvados
CMD ["scl", "enable", "rh-python36", "/usr/local/rvm/bin/rvm-exec default bash /jenkins/run-build-packages.sh --target centos7"]
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+## Don't use debian:10 here: rvm's precompiled binaries are keyed to the codename 'buster'
+FROM debian:buster
+MAINTAINER Arvados Package Maintainers <packaging@arvados.org>
+
+ENV DEBIAN_FRONTEND noninteractive
+
+# Install dependencies.
+RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python2.7-dev python3 python-setuptools python3-setuptools python3-pip libcurl4-gnutls-dev curl git procps libattr1-dev libfuse-dev libgnutls28-dev libpq-dev python-pip unzip python3-venv python3-dev
+
+# Install virtualenv
+RUN /usr/bin/pip install 'virtualenv<20'
+
+# Install RVM
+ADD generated/mpapis.asc /tmp/
+ADD generated/pkuczynski.asc /tmp/
+RUN gpg --import --no-tty /tmp/mpapis.asc && \
+ gpg --import --no-tty /tmp/pkuczynski.asc && \
+ curl -L https://get.rvm.io | bash -s stable && \
+ /usr/local/rvm/bin/rvm install 2.5 && \
+ /usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2 && \
+ /usr/local/rvm/bin/rvm-exec default gem install fpm --version 1.10.2
+
+# Install golang binary
+ADD generated/go1.13.4.linux-amd64.tar.gz /usr/local/
+RUN ln -s /usr/local/go/bin/go /usr/local/bin/
+
+# Install nodejs and npm
+ADD generated/node-v6.11.2-linux-x64.tar.xz /usr/local/
+RUN ln -s /usr/local/node-v6.11.2-linux-x64/bin/* /usr/local/bin/
+
+RUN git clone --depth 1 git://git.arvados.org/arvados.git /tmp/arvados && cd /tmp/arvados/services/api && /usr/local/rvm/bin/rvm-exec default bundle && cd /tmp/arvados/apps/workbench && /usr/local/rvm/bin/rvm-exec default bundle
+
+ENV WORKSPACE /arvados
+CMD ["/usr/local/rvm/bin/rvm-exec", "default", "bash", "/jenkins/run-build-packages.sh", "--target", "debian10"]
RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python2.7-dev python3 python-setuptools python3-setuptools python3-pip libcurl4-gnutls-dev curl git procps libattr1-dev libfuse-dev libgnutls28-dev libpq-dev python-pip unzip python3-venv python3-dev
# Install virtualenv
-RUN /usr/bin/pip install virtualenv
+RUN /usr/bin/pip install 'virtualenv<20'
# Install RVM
ADD generated/mpapis.asc /tmp/
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2 && \
/usr/local/rvm/bin/rvm-exec default gem install fpm --version 1.10.2
# Install golang binary
-ADD generated/go1.12.7.linux-amd64.tar.gz /usr/local/
+ADD generated/go1.13.4.linux-amd64.tar.gz /usr/local/
RUN ln -s /usr/local/go/bin/go /usr/local/bin/
# Install nodejs and npm
ADD generated/node-v6.11.2-linux-x64.tar.xz /usr/local/
RUN ln -s /usr/local/node-v6.11.2-linux-x64/bin/* /usr/local/bin/
-RUN git clone --depth 1 git://git.curoverse.com/arvados.git /tmp/arvados && cd /tmp/arvados/services/api && /usr/local/rvm/bin/rvm-exec default bundle && cd /tmp/arvados/apps/workbench && /usr/local/rvm/bin/rvm-exec default bundle
+RUN git clone --depth 1 git://git.arvados.org/arvados.git /tmp/arvados && cd /tmp/arvados/services/api && /usr/local/rvm/bin/rvm-exec default bundle && cd /tmp/arvados/apps/workbench && /usr/local/rvm/bin/rvm-exec default bundle
ENV WORKSPACE /arvados
CMD ["/usr/local/rvm/bin/rvm-exec", "default", "bash", "/jenkins/run-build-packages.sh", "--target", "debian9"]
# SPDX-License-Identifier: AGPL-3.0
FROM ubuntu:xenial
-MAINTAINER Ward Vandewege <ward@curoverse.com>
+MAINTAINER Arvados Package Maintainers <packaging@arvados.org>
ENV DEBIAN_FRONTEND noninteractive
RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python2.7-dev python3 python-setuptools python3-setuptools python3-pip libcurl4-gnutls-dev libgnutls-dev curl git libattr1-dev libfuse-dev libpq-dev python-pip unzip tzdata python3-venv python3-dev
# Install virtualenv
-RUN /usr/bin/pip install virtualenv
+RUN /usr/bin/pip install 'virtualenv<20'
# Install RVM
ADD generated/mpapis.asc /tmp/
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2 && \
/usr/local/rvm/bin/rvm-exec default gem install fpm --version 1.10.2
# Install golang binary
-ADD generated/go1.12.7.linux-amd64.tar.gz /usr/local/
+ADD generated/go1.13.4.linux-amd64.tar.gz /usr/local/
RUN ln -s /usr/local/go/bin/go /usr/local/bin/
# Install nodejs and npm
ADD generated/node-v6.11.2-linux-x64.tar.xz /usr/local/
RUN ln -s /usr/local/node-v6.11.2-linux-x64/bin/* /usr/local/bin/
-RUN git clone --depth 1 git://git.curoverse.com/arvados.git /tmp/arvados && cd /tmp/arvados/services/api && /usr/local/rvm/bin/rvm-exec default bundle && cd /tmp/arvados/apps/workbench && /usr/local/rvm/bin/rvm-exec default bundle
+RUN git clone --depth 1 git://git.arvados.org/arvados.git /tmp/arvados && cd /tmp/arvados/services/api && /usr/local/rvm/bin/rvm-exec default bundle && cd /tmp/arvados/apps/workbench && /usr/local/rvm/bin/rvm-exec default bundle
ENV WORKSPACE /arvados
CMD ["/usr/local/rvm/bin/rvm-exec", "default", "bash", "/jenkins/run-build-packages.sh", "--target", "ubuntu1604"]
# SPDX-License-Identifier: AGPL-3.0
FROM ubuntu:bionic
-MAINTAINER Ward Vandewege <ward@curoverse.com>
+MAINTAINER Arvados Package Maintainers <packaging@arvados.org>
ENV DEBIAN_FRONTEND noninteractive
RUN /usr/bin/apt-get update && /usr/bin/apt-get install -q -y python2.7-dev python3 python-setuptools python3-pip libcurl4-gnutls-dev libgnutls28-dev curl git libattr1-dev libfuse-dev libpq-dev python-pip unzip tzdata python3-venv python3-dev
# Install virtualenv
-RUN /usr/bin/pip install virtualenv
+RUN /usr/bin/pip install 'virtualenv<20'
# Install RVM
ADD generated/mpapis.asc /tmp/
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
/usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2 && \
/usr/local/rvm/bin/rvm-exec default gem install fpm --version 1.10.2
# Install golang binary
-ADD generated/go1.12.7.linux-amd64.tar.gz /usr/local/
+ADD generated/go1.13.4.linux-amd64.tar.gz /usr/local/
RUN ln -s /usr/local/go/bin/go /usr/local/bin/
# Install nodejs and npm
ADD generated/node-v6.11.2-linux-x64.tar.xz /usr/local/
RUN ln -s /usr/local/node-v6.11.2-linux-x64/bin/* /usr/local/bin/
-RUN git clone --depth 1 git://git.curoverse.com/arvados.git /tmp/arvados && cd /tmp/arvados/services/api && /usr/local/rvm/bin/rvm-exec default bundle && cd /tmp/arvados/apps/workbench && /usr/local/rvm/bin/rvm-exec default bundle
+RUN git clone --depth 1 git://git.arvados.org/arvados.git /tmp/arvados && cd /tmp/arvados/services/api && /usr/local/rvm/bin/rvm-exec default bundle && cd /tmp/arvados/apps/workbench && /usr/local/rvm/bin/rvm-exec default bundle
ENV WORKSPACE /arvados
CMD ["/usr/local/rvm/bin/rvm-exec", "default", "bash", "/jenkins/run-build-packages.sh", "--target", "ubuntu1804"]
#
# SPDX-License-Identifier: AGPL-3.0
-all: centos7/generated debian9/generated ubuntu1604/generated ubuntu1804/generated
+all: centos7/generated debian9/generated debian10/generated ubuntu1604/generated ubuntu1804/generated
centos7/generated: common-generated-all
test -d centos7/generated || mkdir centos7/generated
test -d debian9/generated || mkdir debian9/generated
cp -rlt debian9/generated common-generated/*
+debian10/generated: common-generated-all
+ test -d debian10/generated || mkdir debian10/generated
+ cp -rlt debian10/generated common-generated/*
+
ubuntu1604/generated: common-generated-all
test -d ubuntu1604/generated || mkdir ubuntu1604/generated
cp -rlt ubuntu1604/generated common-generated/*
# SPDX-License-Identifier: AGPL-3.0
FROM centos:7
-MAINTAINER Ward Vandewege <wvandewege@veritasgenetics.com>
+MAINTAINER Arvados Package Maintainers <packaging@arvados.org>
# Install dependencies.
RUN yum -q -y install scl-utils centos-release-scl which tar wget
gpg --import --no-tty /tmp/pkuczynski.asc && \
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.3 && \
- /usr/local/rvm/bin/rvm alias create default ruby-2.3
+ /usr/local/rvm/bin/rvm alias create default ruby-2.3 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2
# Install Bash 4.4.12 // see https://dev.arvados.org/issues/15612
RUN cd /usr/local/src \
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+FROM debian:buster
+MAINTAINER Arvados Package Maintainers <packaging@arvados.org>
+
+ENV DEBIAN_FRONTEND noninteractive
+
+# Install dependencies
+RUN apt-get update && \
+ apt-get -y install --no-install-recommends curl ca-certificates gpg procps gpg-agent
+
+# Install RVM
+ADD generated/mpapis.asc /tmp/
+ADD generated/pkuczynski.asc /tmp/
+RUN gpg --import --no-tty /tmp/mpapis.asc && \
+ gpg --import --no-tty /tmp/pkuczynski.asc && \
+ curl -L https://get.rvm.io | bash -s stable && \
+ /usr/local/rvm/bin/rvm install 2.5 && \
+ /usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2
+
+# udev daemon can't start in a container, so don't try.
+RUN mkdir -p /etc/udev/disabled
+
+RUN echo "deb file:///arvados/packages/debian10/ /" >>/etc/apt/sources.list
# SPDX-License-Identifier: AGPL-3.0
FROM debian:stretch
-MAINTAINER Ward Vandewege <wvandewege@veritasgenetics.com>
+MAINTAINER Arvados Package Maintainers <packaging@arvados.org>
ENV DEBIAN_FRONTEND noninteractive
gpg --import --no-tty /tmp/pkuczynski.asc && \
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
- /usr/local/rvm/bin/rvm alias create default ruby-2.5
+ /usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2
# udev daemon can't start in a container, so don't try.
RUN mkdir -p /etc/udev/disabled
# SPDX-License-Identifier: AGPL-3.0
FROM ubuntu:xenial
-MAINTAINER Ward Vandewege <wvandewege@veritasgenetics.com>
+MAINTAINER Arvados Package Maintainers <packaging@arvados.org>
ENV DEBIAN_FRONTEND noninteractive
gpg --import --no-tty /tmp/pkuczynski.asc && \
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
- /usr/local/rvm/bin/rvm alias create default ruby-2.5
+ /usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2
# udev daemon can't start in a container, so don't try.
RUN mkdir -p /etc/udev/disabled
# SPDX-License-Identifier: AGPL-3.0
FROM ubuntu:bionic
-MAINTAINER Ward Vandewege <wvandewege@veritasgenetics.com>
+MAINTAINER Arvados Package Maintainers <packaging@arvados.org>
ENV DEBIAN_FRONTEND noninteractive
gpg --import --no-tty /tmp/pkuczynski.asc && \
curl -L https://get.rvm.io | bash -s stable && \
/usr/local/rvm/bin/rvm install 2.5 && \
- /usr/local/rvm/bin/rvm alias create default ruby-2.5
+ /usr/local/rvm/bin/rvm alias create default ruby-2.5 && \
+ /usr/local/rvm/bin/rvm-exec default gem install bundler --version 2.0.2
# udev daemon can't start in a container, so don't try.
RUN mkdir -p /etc/udev/disabled
dpkg-query --show > "$ARV_PACKAGES_DIR/$1.before"
-apt-get $DASHQQ_UNLESS_DEBUG update
+apt-get $DASHQQ_UNLESS_DEBUG --allow-insecure-repositories update
apt-get $DASHQQ_UNLESS_DEBUG -y --allow-unauthenticated install "$1" >"$STDOUT_IF_DEBUG" 2>"$STDERR_IF_DEBUG"
--- /dev/null
+deb-common-test-packages.sh
\ No newline at end of file
setup_confdirs /etc/arvados "$CONFIG_PATH"
setup_conffile environments/production.rb environments/production.rb.example \
|| true
- setup_conffile application.yml application.yml.example || APPLICATION_READY=0
- if [ -n "$RAILSPKG_DATABASE_LOAD_TASK" ]; then
- setup_conffile database.yml database.yml.example || DATABASE_READY=0
- fi
setup_extra_conffiles
echo "... done."
export RAILS_ENV=production
if ! $COMMAND_PREFIX bundle --version >/dev/null; then
- run_and_report "Installing bundle" $COMMAND_PREFIX gem install bundle
+ run_and_report "Installing bundler" $COMMAND_PREFIX gem install bundler --version 1.17.3
fi
run_and_report "Running bundle install" \
fi
if [ 11 = "$RAILSPKG_SUPPORTS_CONFIG_CHECK$APPLICATION_READY" ]; then
- run_and_report "Checking application.yml for completeness" \
+ run_and_report "Checking configuration for completeness" \
$COMMAND_PREFIX bundle exec rake config:check || APPLICATION_READY=0
fi
configure_version
fi
-report_not_ready "$DATABASE_READY" "$CONFIG_PATH/database.yml"
if printf '%s\n' "$CONFIG_PATH" | grep -Fqe "sso"; then
report_not_ready "$APPLICATION_READY" "$CONFIG_PATH/application.yml"
+ report_not_ready "$DATABASE_READY" "$CONFIG_PATH/database.yml"
else
report_not_ready "$APPLICATION_READY" "/etc/arvados/config.yml"
fi
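The branch above keys off the install path: SSO server packages still carry per-app `application.yml`/`database.yml`, while every other Rails package now reports readiness against the unified `/etc/arvados/config.yml`. A toy sketch of that dispatch (`ready_file_for` and the example paths are hypothetical):

```shell
#!/bin/bash
# Hypothetical helper: pick which config file an install report should
# point at, based on whether the package path looks like the SSO server.
ready_file_for() {
    local config_path="$1"
    if printf '%s\n' "$config_path" | grep -Fqe "sso"; then
        echo "$config_path/application.yml"
    else
        echo "/etc/arvados/config.yml"
    fi
}

ready_file_for /var/www/arvados-sso/current/config   # …/config/application.yml
ready_file_for /var/www/arvados-api/current/config   # /etc/arvados/config.yml
```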
WORKSPACE=/path/to/arvados $(basename $0) [options]
--target <target>
- Distribution to build packages for (default: debian9)
+ Distribution to build packages for (default: debian10)
--command
Build command to execute (default: use built-in Docker image command)
--test-packages
Build only a specific package
--only-test <package>
Test only a specific package
+--force-build
+ Build even if the package exists upstream or if it has already been
+ built locally
--force-test
Test even if there is no new untested package
--build-version <string>
fi
PARSEDOPTS=$(getopt --name "$0" --longoptions \
- help,debug,test-packages,target:,command:,only-test:,force-test,only-build:,build-version: \
+ help,debug,test-packages,target:,command:,only-test:,force-test,only-build:,force-build,build-version: \
-- "" "$@")
if [ $? -ne 0 ]; then
exit 1
fi
-TARGET=debian9
+TARGET=debian10
+FORCE_BUILD=0
COMMAND=
DEBUG=
--force-test)
FORCE_TEST=true
;;
+ --force-build)
+ FORCE_BUILD=1
+ ;;
--only-build)
ONLY_BUILD="$2"; shift
;;
--env ARVADOS_BUILDING_ITERATION="$ARVADOS_BUILDING_ITERATION" \
--env ARVADOS_DEBUG=$ARVADOS_DEBUG \
--env "ONLY_BUILD=$ONLY_BUILD" \
+ --env "FORCE_BUILD=$FORCE_BUILD" \
"$IMAGE" $COMMAND
then
echo
--debug
Output debug information (default: false)
--target
- Distribution to build packages for (default: debian9)
+ Distribution to build packages for (default: debian10)
WORKSPACE=path Path to the Arvados SSO source tree to build packages from
EXITCODE=0
DEBUG=${ARVADOS_DEBUG:-0}
-TARGET=debian9
+TARGET=debian10
PARSEDOPTS=$(getopt --name "$0" --longoptions \
help,build-bundle-packages,debug,target: \
--debug
Output debug information (default: false)
--target <target>
- Distribution to build packages for (default: debian9)
+ Distribution to build packages for (default: debian10)
--only-build <package>
Build only a specific package (or $ONLY_BUILD from environment)
+--force-build
+ Build even if the package exists upstream or if it has already been
+ built locally
--command
Build command to execute (defaults to the run command defined in the
Docker image)
# set to --no-cache-dir to disable pip caching
CACHE_FLAG=
-MAINTAINER="Ward Vandewege <wvandewege@veritasgenetics.com>"
-VENDOR="Veritas Genetics, Inc."
+MAINTAINER="Arvados Package Maintainers <packaging@arvados.org>"
+VENDOR="The Arvados Project"
# End of user configuration
DEBUG=${ARVADOS_DEBUG:-0}
+FORCE_BUILD=${FORCE_BUILD:-0}
EXITCODE=0
-TARGET=debian9
+TARGET=debian10
COMMAND=
PARSEDOPTS=$(getopt --name "$0" --longoptions \
- help,build-bundle-packages,debug,target:,only-build: \
+ help,build-bundle-packages,debug,target:,only-build:,force-build \
-- "" "$@")
if [ $? -ne 0 ]; then
exit 1
--only-build)
ONLY_BUILD="$2"; shift
;;
+ --force-build)
+ FORCE_BUILD=1
+ ;;
--debug)
DEBUG=1
;;
exit 1
fi
-PYTHON2_FPM_INSTALLER=(--python-easyinstall "$(find_python_program easy_install-$PYTHON2_VERSION easy_install)")
-install3=$(find_python_program easy_install-$PYTHON3_VERSION easy_install3 pip-$PYTHON3_VERSION pip3)
-if [[ $install3 =~ easy_ ]]; then
- PYTHON3_FPM_INSTALLER=(--python-easyinstall "$install3")
-else
- PYTHON3_FPM_INSTALLER=(--python-pip "$install3")
-fi
-
RUN_BUILD_PACKAGES_PATH="`dirname \"$0\"`"
RUN_BUILD_PACKAGES_PATH="`( cd \"$RUN_BUILD_PACKAGES_PATH\" && pwd )`" # absolutized and normalized
if [ -z "$RUN_BUILD_PACKAGES_PATH" ] ; then
# Go binaries
cd $WORKSPACE/packages/$TARGET
export GOPATH=$(mktemp -d)
-go get github.com/kardianos/govendor
package_go_binary cmd/arvados-client arvados-client \
"Arvados command line tool (beta)"
package_go_binary cmd/arvados-server arvados-server \
"Dispatch Crunch containers on the local system"
package_go_binary services/crunch-dispatch-slurm crunch-dispatch-slurm \
"Dispatch Crunch containers to a SLURM cluster"
-package_go_binary services/crunch-run crunch-run \
+package_go_binary cmd/arvados-server crunch-run \
"Supervise a single Crunch container"
package_go_binary services/crunchstat crunchstat \
"Gather cpu/memory/network statistics of running Crunch jobs"
# The Python SDK - Should be built first because it's needed by others
fpm_build_virtualenv "arvados-python-client" "sdk/python"
-# Arvados cwl runner
-fpm_build_virtualenv "arvados-cwl-runner" "sdk/cwl"
+# The Python SDK - Python3 package
+fpm_build_virtualenv "arvados-python-client" "sdk/python" "python3"
+
+# Arvados cwl runner - Only supports Python3 now
+fpm_build_virtualenv "arvados-cwl-runner" "sdk/cwl" "python3"
# The PAM module
fpm_build_virtualenv "libpam-arvados" "sdk/pam"
# The Arvados crunchstat-summary tool
fpm_build_virtualenv "crunchstat-summary" "tools/crunchstat-summary"
-# The Python SDK - Python3 package
-fpm_build_virtualenv "arvados-python-client" "sdk/python" "python3"
-
# The Docker image cleaner
fpm_build_virtualenv "arvados-docker-cleaner" "services/dockercleaner" "python3"
rm -rf "$WORKSPACE/cwltest"
fi
git clone https://github.com/common-workflow-language/cwltest.git
+# last cwltest release that supports Python 2.7
+(cd cwltest && git checkout 1.0.20190906212748)
# signal to our build script that we want a cwltest executable installed in /usr/bin/
mkdir cwltest/bin && touch cwltest/bin/cwltest
fpm_build_virtualenv "cwltest" "cwltest"
WORKSPACE=/path/to/arvados $(basename $0) [options]
--target <target>
- Distribution to build packages for (default: debian9)
+ Distribution to build packages for (default: debian10)
--upload
If the build and test steps are successful, upload the packages
to a remote apt repository (default: false)
exit 1
fi
-TARGET=debian9
+TARGET=debian10
UPLOAD=0
RC=0
DEBUG=
if [ ${#failures[@]} -eq 0 ]; then
if [[ "$RC" != 0 ]]; then
- echo "/usr/local/arvados-dev/jenkins/run_upload_packages_testing.py -H jenkinsapt@apt.arvados.org -o Port=2222 --workspace $WORKSPACE $TARGET"
- /usr/local/arvados-dev/jenkins/run_upload_packages_testing.py -H jenkinsapt@apt.arvados.org -o Port=2222 --workspace $WORKSPACE $TARGET
+ echo "/usr/local/arvados-dev/jenkins/run_upload_packages.py --repo testing -H jenkinsapt@apt.arvados.org -o Port=2222 --workspace $WORKSPACE $TARGET"
+ /usr/local/arvados-dev/jenkins/run_upload_packages.py --repo testing -H jenkinsapt@apt.arvados.org -o Port=2222 --workspace $WORKSPACE $TARGET
else
- echo "/usr/local/arvados-dev/jenkins/run_upload_packages.py -H jenkinsapt@apt.arvados.org -o Port=2222 --workspace $WORKSPACE $TARGET"
- /usr/local/arvados-dev/jenkins/run_upload_packages.py -H jenkinsapt@apt.arvados.org -o Port=2222 --workspace $WORKSPACE $TARGET
+ echo "/usr/local/arvados-dev/jenkins/run_upload_packages.py --repo dev -H jenkinsapt@apt.arvados.org -o Port=2222 --workspace $WORKSPACE $TARGET"
+ /usr/local/arvados-dev/jenkins/run_upload_packages.py --repo dev -H jenkinsapt@apt.arvados.org -o Port=2222 --workspace $WORKSPACE $TARGET
fi
else
echo "Skipping package upload, there were errors building and/or testing the packages"
format_last_commit_here() {
local format="$1"; shift
- TZ=UTC git log -n1 --first-parent "--format=format:$format" .
+ local dir="${1:-.}"; shift
+ TZ=UTC git log -n1 --first-parent "--format=format:$format" "$dir"
}
version_from_git() {
# Output the version being built, or if we're building a
# dev/prerelease, output a version number based on the git log for
- # the current working directory.
+ # the given $subdir.
+ local minorversion="$1"; shift # unused
+ local subdir="$1"; shift
if [[ -n "$ARVADOS_BUILDING_VERSION" ]]; then
echo "$ARVADOS_BUILDING_VERSION"
return
fi
- local git_ts git_hash prefix
- if [[ -n "$1" ]] ; then
- prefix="$1"
- else
- prefix="0.1"
- fi
-
- declare $(format_last_commit_here "git_ts=%ct git_hash=%h")
- ARVADOS_BUILDING_VERSION="$(git tag -l |sort -V -r |head -n1).$(date -ud "@$git_ts" +%Y%m%d%H%M%S)"
+ local git_ts git_hash
+ declare $(format_last_commit_here "git_ts=%ct git_hash=%h" "$subdir")
+ ARVADOS_BUILDING_VERSION="$($WORKSPACE/build/version-at-commit.sh $git_hash)"
echo "$ARVADOS_BUILDING_VERSION"
}
}
timestamp_from_git() {
- format_last_commit_here "%ct"
+ local subdir="$1"; shift
+ format_last_commit_here "%ct" "$subdir"
}
handle_python_package () {
# to another variable that is passed in as the first argument to this function.
# see https://www.gnu.org/software/bash/manual/html_node/Shell-Parameters.html
local -n __returnvar="$1"; shift
- local src_path="$1"; shift
-
- mkdir -p "$GOPATH/src/git.curoverse.com"
- ln -sfn "$WORKSPACE" "$GOPATH/src/git.curoverse.com/arvados.git"
- (cd "$GOPATH/src/git.curoverse.com/arvados.git" && "$GOPATH/bin/govendor" sync -v)
+ local oldpwd="$PWD"
- cd "$GOPATH/src/git.curoverse.com/arvados.git/$src_path"
- local version="$(version_from_git)"
- local timestamp="$(timestamp_from_git)"
+ cd "$WORKSPACE"
+ go mod download
# Update the version number and build a new package if the vendor
# bundle has changed, or the command imports anything from the
# Arvados SDK and the SDK has changed.
- declare -a checkdirs=(vendor)
- if grep -qr git.curoverse.com/arvados .; then
+ declare -a checkdirs=(go.mod go.sum)
+ while [ -n "$1" ]; do
+ checkdirs+=("$1")
+ shift
+ done
+ if grep -qr git.arvados.org/arvados .; then
checkdirs+=(sdk/go lib)
fi
+ local timestamp=0
for dir in ${checkdirs[@]}; do
- cd "$GOPATH/src/git.curoverse.com/arvados.git/$dir"
- ts="$(timestamp_from_git)"
+ cd "$WORKSPACE"
+ ts="$(timestamp_from_git "$dir")"
if [[ "$ts" -gt "$timestamp" ]]; then
- version=$(version_from_git)
+ version=$(version_from_git "" "$dir")
timestamp="$ts"
fi
done
-
+ cd "$oldpwd"
__returnvar="$version"
}
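After this change, `calculate_go_package_version` scans `go.mod`, `go.sum`, any caller-supplied paths, and (when the source imports the SDK) `sdk/go`/`lib`, keeping the version from whichever path has the newest first-parent commit. A git-free sketch of that scan (`pick_newest` and the `dir:timestamp` argument encoding are made up for illustration; the real code calls `timestamp_from_git`/`version_from_git` per path):

```shell
#!/bin/bash
# Each argument is "path:unix_timestamp", standing in for a git lookup.
# The path with the newest timestamp wins, exactly like the loop above.
pick_newest() {
    local newest_dir= newest_ts=0 entry dir ts
    for entry in "$@"; do
        dir="${entry%%:*}"
        ts="${entry##*:}"
        if [[ "$ts" -gt "$newest_ts" ]]; then
            newest_dir="$dir"
            newest_ts="$ts"
        fi
    done
    echo "$newest_dir"
}

pick_newest go.mod:1570000000 go.sum:1569000000 sdk/go:1572000000   # sdk/go
```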
return 1
fi
- go get -ldflags "-X git.curoverse.com/arvados.git/lib/cmd.version=${go_package_version} -X main.version=${go_package_version}" "git.curoverse.com/arvados.git/$src_path"
+ go get -ldflags "-X git.arvados.org/arvados.git/lib/cmd.version=${go_package_version} -X main.version=${go_package_version}" "git.arvados.org/arvados.git/$src_path"
local -a switches=()
systemd_unit="$WORKSPACE/${src_path}/${prog}.service"
rails_package_version() {
local pkgname="$1"; shift
+ local srcdir="$1"; shift
if [[ -n "$ARVADOS_BUILDING_VERSION" ]]; then
echo "$ARVADOS_BUILDING_VERSION"
return
fi
local version="$(version_from_git)"
if [ $pkgname = "arvados-api-server" -o $pkgname = "arvados-workbench" ] ; then
- local P="$PWD"
- cd $WORKSPACE
- local arvados_server_version
- calculate_go_package_version arvados_server_version cmd/arvados-server
- cd $P
- if [ $arvados_server_version > $version ] ; then
- version=$arvados_server_version
- fi
+ calculate_go_package_version version cmd/arvados-server "$srcdir"
fi
echo $version
}
cd $srcdir
- local version="$(rails_package_version $pkgname)"
+ local version="$(rails_package_version "$pkgname" "$srcdir")"
cd $tmppwd
# sure it gets picked up by the test and/or upload steps.
# Get the list of packages from the repos
- if [[ "$FORMAT" == "deb" ]]; then
+ if [[ "$FORCE_BUILD" == "1" ]]; then
+ echo "Package $full_pkgname build forced with --force-build, building"
+ elif [[ "$FORMAT" == "deb" ]]; then
declare -A dd
dd[debian9]=stretch
dd[debian10]=buster
local srcdir="$1"; shift
cd "$srcdir"
local license_path="$1"; shift
- local version="$(rails_package_version $pkgname)"
+ local version="$(rails_package_version "$pkgname" "$srcdir")"
echo "$version" >package-build.version
local scripts_dir="$(mktemp --tmpdir -d "$pkgname-XXXXXXXX.scripts")" && \
(
rm -rf dist/*
# Get the latest setuptools
- if ! $pip install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U setuptools; then
+ if ! $pip install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U 'setuptools<45'; then
echo "Error, unable to upgrade setuptools with"
- echo " $pip install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U setuptools"
+ echo " $pip install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U 'setuptools<45'"
exit 1
fi
# filter a useless warning (when building the cwltest package) from the stderr output
fi
echo "pip version: `build/usr/share/$python/dist/$PYTHON_PKG/bin/$pip --version`"
- if ! build/usr/share/$python/dist/$PYTHON_PKG/bin/$pip install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U setuptools; then
+ if ! build/usr/share/$python/dist/$PYTHON_PKG/bin/$pip install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U 'setuptools<45'; then
echo "Error, unable to upgrade setuptools with"
- echo " build/usr/share/$python/dist/$PYTHON_PKG/bin/$pip install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U setuptools"
+ echo " build/usr/share/$python/dist/$PYTHON_PKG/bin/$pip install $DASHQ_UNLESS_DEBUG $CACHE_FLAG -U 'setuptools<45'"
exit 1
fi
echo "setuptools version: `build/usr/share/$python/dist/$PYTHON_PKG/bin/$python -c 'import setuptools; print(setuptools.__version__)'`"
lib/controller/router
lib/controller/rpc
lib/crunchstat
+lib/crunch-run
lib/cloud
lib/cloud/azure
lib/cloud/cloudtest
lib/dispatchcloud/scheduler
lib/dispatchcloud/ssh_executor
lib/dispatchcloud/worker
+lib/mount
lib/service
services/api
services/arv-git-httpd
services/login-sync
services/nodemanager
services/nodemanager_integration
-services/crunch-run
services/crunch-dispatch-local
services/crunch-dispatch-slurm
services/ws
sdk/go/asyncbuf
sdk/go/stats
sdk/go/crunchrunner
-sdk/cwl
+sdk/cwl:py3
sdk/R
sdk/java-v2
tools/sync-groups
checkhealth() {
svc="$1"
- base=$(python -c "import yaml; print list(yaml.safe_load(file('$ARVADOS_CONFIG'))['Clusters']['zzzzz']['Services']['$1']['InternalURLs'].keys())[0]")
+ base=$("${VENVDIR}/bin/python" -c "import yaml; print list(yaml.safe_load(file('$ARVADOS_CONFIG'))['Clusters']['zzzzz']['Services']['$1']['InternalURLs'].keys())[0]")
url="$base/_health/ping"
if ! curl -Ss -H "Authorization: Bearer e687950a23c3a9bceec28c6223a06c79" "${url}" | tee -a /dev/stderr | grep '"OK"'; then
echo "${url} failed"
dd="https://${1}/discovery/v1/apis/arvados/v1/rest"
if ! (set -o pipefail; curl -fsk "$dd" | grep -q ^{ ); then
echo >&2 "ERROR: could not retrieve discovery doc from RailsAPI at $dd"
- tail -v $WORKSPACE/services/api/log/test.log
+ tail -v $WORKSPACE/tmp/railsapi.log
return 1
fi
echo "${dd} ok"
|| fatal 'rvm gemset setup'
rvm env
+ (bundle version | grep -q 2.0.2) || gem install bundler -v 2.0.2
+ bundle="$(which bundle)"
+ echo "$bundle"
+ "$bundle" version | grep 2.0.2 || fatal 'install bundler'
else
# When our "bundle install"s need to install new gems to
# satisfy dependencies, we want them to go where "gem install
echo "Will install dependencies to $(gem env gemdir)"
echo "Will install arvados gems to $tmpdir_gem_home"
echo "Gem search path is GEM_PATH=$GEM_PATH"
+ bundle="$(gem env gempath | cut -f1 -d:)/bin/bundle"
+ (
+ export HOME=$GEMHOME
+ ("$bundle" version | grep -q 2.0.2) \
+ || gem install --user bundler -v 2.0.2
+ "$bundle" version | tee /dev/stderr | grep -q 'version 2'
+ ) || fatal 'install bundler'
fi
- bundle config || gem install bundler \
- || fatal 'install bundler'
}
with_test_gemset() {
}
install_env() {
- (
- set -e
- mkdir -p "$GOPATH/src/git.curoverse.com"
- if [[ ! -h "$GOPATH/src/git.curoverse.com/arvados.git" ]]; then
- for d in \
- "$GOPATH/src/git.curoverse.com/arvados.git/tmp/GOPATH" \
- "$GOPATH/src/git.curoverse.com/arvados.git/tmp" \
- "$GOPATH/src/git.curoverse.com/arvados.git/arvados" \
- "$GOPATH/src/git.curoverse.com/arvados.git"; do
- [[ -h "$d" ]] && rm "$d"
- [[ -d "$d" ]] && rmdir "$d"
- done
- fi
- ln -vsfT "$WORKSPACE" "$GOPATH/src/git.curoverse.com/arvados.git"
- go get -v github.com/kardianos/govendor
- cd "$GOPATH/src/git.curoverse.com/arvados.git"
- go get -v -d ...
- "$GOPATH/bin/govendor" sync
- which goimports >/dev/null || go get golang.org/x/tools/cmd/goimports
- ) || fatal "Go setup failed"
+ go mod download || fatal "Go deps failed"
+ which goimports >/dev/null || go get golang.org/x/tools/cmd/goimports || fatal "Go setup failed"
setup_virtualenv "$VENVDIR" --python python2.7
. "$VENVDIR/bin/activate"
# Needed for run_test_server.py which is used by certain (non-Python) tests.
- pip install --no-cache-dir PyYAML future \
- || fatal "pip install PyYAML failed"
+ (
+ set -e
+ "${VENVDIR}/bin/pip" install PyYAML
+ "${VENV3DIR}/bin/pip" install PyYAML
+ cd "$WORKSPACE/sdk/python"
+ python setup.py install
+ ) || fatal "installing PyYAML and sdk/python failed"
# Preinstall libcloud if using a fork; otherwise nodemanager "pip
# install" won't pick it up by default.
EOF
fi
-
- if ! which bundler >/dev/null
- then
- gem install --user-install bundler || fatal 'Could not install bundler'
- fi
}
retry() {
stop_services
check_arvados_config "$1"
;;
- gofmt | govendor | doc | lib/cli | lib/cloud/azure | lib/cloud/ec2 | lib/cloud/cloudtest | lib/cmd | lib/dispatchcloud/ssh_executor | lib/dispatchcloud/worker)
+ gofmt | doc | lib/cli | lib/cloud/azure | lib/cloud/ec2 | lib/cloud/cloudtest | lib/cmd | lib/dispatchcloud/ssh_executor | lib/dispatchcloud/worker)
check_arvados_config "$1"
# don't care whether services are running
;;
go_ldflags() {
version=${ARVADOS_VERSION:-$(git log -n1 --format=%H)-dev}
- echo "-X git.curoverse.com/arvados.git/lib/cmd.version=${version} -X main.version=${version}"
+ echo "-X git.arvados.org/arvados.git/lib/cmd.version=${version} -X main.version=${version}"
}
do_test_once() {
then
covername="coverage-$(echo "$1" | sed -e 's/\//_/g')"
coverflags=("-covermode=count" "-coverprofile=$WORKSPACE/tmp/.$covername.tmp")
- # We do "go get -t" here to catch compilation errors
+ # We do "go install" here to catch compilation errors
# before trying "go test". Otherwise, coverage-reporting
# mode makes Go show the wrong line numbers when reporting
# compilation errors.
- go get -ldflags "$(go_ldflags)" -t "git.curoverse.com/arvados.git/$1" && \
- cd "$GOPATH/src/git.curoverse.com/arvados.git/$1" && \
+ go install -ldflags "$(go_ldflags)" "$WORKSPACE/$1" && \
+ cd "$WORKSPACE/$1" && \
if [[ -n "${testargs[$1]}" ]]
then
# "go test -check.vv giturl" doesn't work, but this
else
# The above form gets verbose even when testargs is
# empty, so use this form in such cases:
- go test ${short:+-short} ${coverflags[@]} "git.curoverse.com/arvados.git/$1"
+ go test ${short:+-short} ${coverflags[@]} "git.arvados.org/arvados.git/$1"
fi
result=${result:-$?}
if [[ -f "$WORKSPACE/tmp/.$covername.tmp" ]]
result=1
elif [[ "$2" == "go" ]]
then
- go get -ldflags "$(go_ldflags)" -t "git.curoverse.com/arvados.git/$1"
+ go install -ldflags "$(go_ldflags)" "$WORKSPACE/$1"
elif [[ "$2" == "pip" ]]
then
# $3 can name a path directory for us to use, including trailing
cd "$WORKSPACE/$1" \
&& "${3}python" setup.py sdist rotate --keep=1 --match .tar.gz \
&& cd "$WORKSPACE" \
- && "${3}pip" install --no-cache-dir --quiet "$WORKSPACE/$1/dist"/*.tar.gz \
- && "${3}pip" install --no-cache-dir --quiet --no-deps --ignore-installed "$WORKSPACE/$1/dist"/*.tar.gz
+ && "${3}pip" install --no-cache-dir "$WORKSPACE/$1/dist"/*.tar.gz \
+ && "${3}pip" install --no-cache-dir --no-deps --ignore-installed "$WORKSPACE/$1/dist"/*.tar.gz
elif [[ "$2" != "" ]]
then
"install_$2"
(
set -e
echo "(Running bundle install --local. 'could not find package' messages are OK.)"
- if ! bundle install --local --no-deployment; then
+ if ! "$bundle" install --local --no-deployment; then
echo "(Running bundle install again, without --local.)"
- bundle install --no-deployment
+ "$bundle" install --no-deployment
fi
- bundle package --all
+ "$bundle" package
)
}
install_services/api() {
stop_services
+ check_arvados_config "services/api"
cd "$WORKSPACE/services/api" \
- && RAILS_ENV=test bundle_install_trylocal
+ && RAILS_ENV=test bundle_install_trylocal \
+ || return 1
rm -f config/environments/test.rb
cp config/environments/test.rb.example config/environments/test.rb
# database, so that we can drop it. This assumes the current user
# is a postgresql superuser.
cd "$WORKSPACE/services/api" \
- && test_database=$(python -c "import yaml; print yaml.safe_load(file('$ARVADOS_CONFIG'))['Clusters']['zzzzz']['PostgreSQL']['Connection']['dbname']") \
+ && test_database=$("${VENVDIR}/bin/python" -c "import yaml; print yaml.safe_load(file('$ARVADOS_CONFIG'))['Clusters']['zzzzz']['PostgreSQL']['Connection']['dbname']") \
&& psql "$test_database" -c "SELECT pg_terminate_backend (pg_stat_activity.pid::int) FROM pg_stat_activity WHERE pg_stat_activity.datname = '$test_database';" 2>/dev/null
mkdir -p "$WORKSPACE/services/api/tmp/pids"
&& git --git-dir internal.git init \
|| return 1
-
- (cd "$WORKSPACE/services/api"
- export RAILS_ENV=test
- if bundle exec rails db:environment:set ; then
- bundle exec rake db:drop
- fi
- bundle exec rake db:setup \
- && bundle exec rake db:fixtures:load
- )
+ (
+ set -e
+ cd "$WORKSPACE/services/api"
+ export RAILS_ENV=test
+ if "$bundle" exec rails db:environment:set ; then
+ "$bundle" exec rake db:drop
+ fi
+ "$bundle" exec rake db:setup
+ "$bundle" exec rake db:fixtures:load
+ ) || return 1
}
declare -a pythonstuff
sdk/pam
sdk/python
sdk/python:py3
- sdk/cwl
sdk/cwl:py3
services/dockercleaner:py3
services/fuse
cd "$WORKSPACE/apps/workbench" \
&& mkdir -p tmp/cache \
&& RAILS_ENV=test bundle_install_trylocal \
- && RAILS_ENV=test RAILS_GROUPS=assets bundle exec rake npm:install
+ && RAILS_ENV=test RAILS_GROUPS=assets "$bundle" exec rake npm:install
}
test_doc() {
ARVADOS_API_HOST=qr1hi.arvadosapi.com
# Make sure python-epydoc is installed or the next line won't
# do much good!
- PYTHONPATH=$WORKSPACE/sdk/python/ bundle exec rake linkchecker baseurl=file://$WORKSPACE/doc/.site/ arvados_workbench_host=https://workbench.$ARVADOS_API_HOST arvados_api_host=$ARVADOS_API_HOST
+ PYTHONPATH=$WORKSPACE/sdk/python/ "$bundle" exec rake linkchecker baseurl=file://$WORKSPACE/doc/.site/ arvados_workbench_host=https://workbench.$ARVADOS_API_HOST arvados_api_host=$ARVADOS_API_HOST
)
}
[[ -z "$(gofmt -e -d $dirs | tee -a /dev/stderr)" ]]
}
-test_govendor() {
- (
- set -e
- cd "$GOPATH/src/git.curoverse.com/arvados.git"
- # Remove cached source dirs in workdir. Otherwise, they will
- # not qualify as +missing or +external below, and we won't be
- # able to detect that they're missing from vendor/vendor.json.
- rm -rf vendor/*/
- go get -v -d ...
- "$GOPATH/bin/govendor" sync
- if [[ -n $("$GOPATH/bin/govendor" list +unused +missing +external | tee /dev/stderr) ]]; then
- echo >&2 "vendor/vendor.json has unused or missing dependencies -- try:
-
-(export GOPATH=\"${GOPATH}\"; cd \$GOPATH/src/git.curoverse.com/arvados.git && \$GOPATH/bin/govendor add +missing +external && \$GOPATH/bin/govendor remove +unused)
-
-"
- return 1
- fi
- )
-}
-
test_services/api() {
rm -f "$WORKSPACE/services/api/git-commit.version"
cd "$WORKSPACE/services/api" \
- && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} bundle exec rake test TESTOPTS=\'-v -d\' ${testargs[services/api]}
+ && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} "$bundle" exec rake test TESTOPTS=\'-v -d\' ${testargs[services/api]}
}
test_sdk/ruby() {
cd "$WORKSPACE/sdk/ruby" \
- && bundle exec rake test TESTOPTS=-v ${testargs[sdk/ruby]}
+ && "$bundle" exec rake test TESTOPTS=-v ${testargs[sdk/ruby]}
}
test_sdk/R() {
test_sdk/cli() {
cd "$WORKSPACE/sdk/cli" \
&& mkdir -p /tmp/keep \
- && KEEP_LOCAL_STORE=/tmp/keep bundle exec rake test TESTOPTS=-v ${testargs[sdk/cli]}
+ && KEEP_LOCAL_STORE=/tmp/keep "$bundle" exec rake test TESTOPTS=-v ${testargs[sdk/cli]}
}
test_sdk/java-v2() {
test_services/login-sync() {
cd "$WORKSPACE/services/login-sync" \
- && bundle exec rake test TESTOPTS=-v ${testargs[services/login-sync]}
+ && "$bundle" exec rake test TESTOPTS=-v ${testargs[services/login-sync]}
}
test_services/nodemanager_integration() {
test_apps/workbench_units() {
local TASK="test:units"
cd "$WORKSPACE/apps/workbench" \
- && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} bundle exec rake ${TASK} TESTOPTS=\'-v -d\' ${testargs[apps/workbench]} ${testargs[apps/workbench_units]}
+ && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} "$bundle" exec rake ${TASK} TESTOPTS=\'-v -d\' ${testargs[apps/workbench]} ${testargs[apps/workbench_units]}
}
test_apps/workbench_functionals() {
local TASK="test:functionals"
cd "$WORKSPACE/apps/workbench" \
- && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} bundle exec rake ${TASK} TESTOPTS=\'-v -d\' ${testargs[apps/workbench]} ${testargs[apps/workbench_functionals]}
+ && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} "$bundle" exec rake ${TASK} TESTOPTS=\'-v -d\' ${testargs[apps/workbench]} ${testargs[apps/workbench_functionals]}
}
test_apps/workbench_integration() {
local TASK="test:integration"
cd "$WORKSPACE/apps/workbench" \
- && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} bundle exec rake ${TASK} TESTOPTS=\'-v -d\' ${testargs[apps/workbench]} ${testargs[apps/workbench_integration]}
+ && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} "$bundle" exec rake ${TASK} TESTOPTS=\'-v -d\' ${testargs[apps/workbench]} ${testargs[apps/workbench_integration]}
}
test_apps/workbench_benchmark() {
local TASK="test:benchmark"
cd "$WORKSPACE/apps/workbench" \
- && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} bundle exec rake ${TASK} ${testargs[apps/workbench_benchmark]}
+ && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} "$bundle" exec rake ${TASK} ${testargs[apps/workbench_benchmark]}
}
test_apps/workbench_profile() {
local TASK="test:profile"
cd "$WORKSPACE/apps/workbench" \
- && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} bundle exec rake ${TASK} ${testargs[apps/workbench_profile]}
+ && eval env RAILS_ENV=test ${short:+RAILS_TEST_SHORT=1} "$bundle" exec rake ${TASK} ${testargs[apps/workbench_profile]}
}
install_deps() {
do_install sdk/cli
do_install sdk/perl
do_install sdk/python pip
+ do_install sdk/python pip "${VENV3DIR}/bin/"
do_install sdk/ruby
do_install services/api
do_install services/arv-git-httpd go
fi
do_test gofmt
- do_test govendor
do_test doc
do_test sdk/ruby
do_test sdk/R
${verb}_${target}
;;
*)
- testargs["$target"]="${opts}"
+ argstarget=${target%:py3}
+ testargs["$argstarget"]="${opts}"
tt="${testfuncargs[${target}]}"
tt="${tt:-$target}"
do_$verb $tt
--- /dev/null
+#!/bin/bash
+
+set -e -o pipefail
+commit="$1"
+versionglob="[0-9].[0-9]*.[0-9]*"
+devsuffix=".dev"
+
+# automatically assign version
+#
+# handles the following cases:
+#
+# 1. commit is directly tagged. print that.
+#
+# 2. commit is on master or a development branch, and the nearest tag is
+# older than the commit where this branch joins master.
+# -> take greatest version tag in repo X.Y.Z and assign X.(Y+1).0
+#
+# 3. commit is on a release branch, and the nearest tag is newer
+# than the commit where this branch joins master.
+# -> take nearest tag X.Y.Z and assign X.Y.(Z+1)
+
+tagged=$(git tag --points-at "$commit")
+
+if [[ -n "$tagged" ]] ; then
+ echo $tagged
+else
+ # 1. get the nearest tag with 'git describe'
+ # 2. get the merge base between this commit and master
+ # 3. if the tag is an ancestor of the merge base,
+ # (tag is older than merge base) increment minor version
+ # else, tag is newer than merge base, so increment point version
+
+ nearest_tag=$(git describe --tags --abbrev=0 --match "$versionglob" "$commit")
+ merge_base=$(git merge-base origin/master "$commit")
+
+ if git merge-base --is-ancestor "$nearest_tag" "$merge_base" ; then
+ # x.(y+1).0.devTIMESTAMP, where x.y.z is the newest version that does not contain $commit
+ # grep reads the list of tags (-f) that contain $commit and filters them out (-v)
+ # this prevents a newer tag from retroactively changing the versions of everything before it
+ v=$(git tag | grep -vFf <(git tag --contains "$commit") | sort -Vr | head -n1 | perl -pe 's/\.(\d+)\.\d+/".".($1+1).".0"/e')
+ else
+ # x.y.(z+1).devTIMESTAMP, where x.y.z is the latest released ancestor of $commit
+ v=$(echo $nearest_tag | perl -pe 's/(\d+)$/$1+1/e')
+ fi
+ isodate=$(TZ=UTC git log -n1 --format=%cd --date=iso "$commit")
+ ts=$(TZ=UTC date --date="$isodate" "+%Y%m%d%H%M%S")
+ echo "${v}${devsuffix}${ts}"
+fi
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+all:
+ @printf "*** note *** due to an xgo limitation, this only works when the working tree is in GOPATH\n\n"
+ go mod download
+ docker build --tag=cgofuse --build-arg=http_proxy="$(http_proxy)" --build-arg=https_proxy="$(https_proxy)" https://github.com/arvados/cgofuse.git
+ go run github.com/karalabe/xgo --image=cgofuse --targets=linux/amd64,linux/386,darwin/amd64,darwin/386,windows/amd64,windows/386 .
+ install arvados-* "$(GOPATH)"/bin/
+ rm --interactive=never arvados-*
import (
"os"
- "git.curoverse.com/arvados.git/lib/cli"
- "git.curoverse.com/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/cli"
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/mount"
)
var (
"user": cli.APICall,
"virtual_machine": cli.APICall,
"workflow": cli.APICall,
+
+ "mount": mount.Command,
})
)
Type=notify
EnvironmentFile=-/etc/arvados/environment
ExecStart=/usr/bin/arvados-controller
+# Set a reasonable default for the open file limit
+LimitNOFILE=65536
Restart=always
RestartSec=1
Type=notify
EnvironmentFile=-/etc/arvados/environment
ExecStart=/usr/bin/arvados-dispatch-cloud
+# Set a reasonable default for the open file limit
+LimitNOFILE=65536
Restart=always
RestartSec=1
import (
"os"
- "git.curoverse.com/arvados.git/lib/cloud/cloudtest"
- "git.curoverse.com/arvados.git/lib/cmd"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/lib/controller"
- "git.curoverse.com/arvados.git/lib/dispatchcloud"
+ "git.arvados.org/arvados.git/lib/boot"
+ "git.arvados.org/arvados.git/lib/cloud/cloudtest"
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/controller"
+ "git.arvados.org/arvados.git/lib/crunchrun"
+ "git.arvados.org/arvados.git/lib/dispatchcloud"
)
var (
"-version": cmd.Version,
"--version": cmd.Version,
+ "boot": boot.Command,
"cloudtest": cloudtest.Command,
"config-check": config.CheckCommand,
"config-dump": config.DumpCommand,
"config-defaults": config.DumpDefaultsCommand,
"controller": controller.Command,
+ "crunch-run": crunchrun.Command,
"dispatch-cloud": dispatchcloud.Command,
})
)
#
# SPDX-License-Identifier: CC-BY-SA-3.0
+# As a convenience to the documentation writer, you can touch a file
+# called 'no-sdk' in the 'doc' directory and it will suppress
+# generating the documentation for the SDKs, which (the R docs
+# especially) take a fair bit of time and slow down the edit-preview
+# cycle.
+
require "rubygems"
require "colorize"
+module Zenweb
+ class Site
+ @binary_files = %w[png jpg gif eot svg ttf woff2? ico pdf m4a t?gz xlsx]
+ end
+end
+
task :generate => [ :realclean, 'sdk/python/arvados/index.html', 'sdk/R/arvados/index.html', 'sdk/java-v2/javadoc/index.html' ] do
vars = ['baseurl', 'arvados_cluster_uuid', 'arvados_api_host', 'arvados_workbench_host']
vars.each do |v|
end
end
+file ["install/new_cluster_checklist_Azure.xlsx", "install/new_cluster_checklist_AWS.xlsx"] do |t|
+ cp(t, t)
+end
+
file "sdk/python/arvados/index.html" do |t|
+ if ENV['NO_SDK'] || File.exists?("no-sdk")
+ next
+ end
`which epydoc`
if $? == 0
STDERR.puts `epydoc --html --parse-only -o sdk/python/arvados ../sdk/python/arvados/ 2>&1`
end
file "sdk/R/arvados/index.html" do |t|
+ if ENV['NO_SDK'] || File.exists?("no-sdk")
+ next
+ end
`which R`
if $? == 0
tgt = Dir.pwd
Dir.mkdir("sdk/R")
Dir.mkdir("sdk/R/arvados")
+ puts("tgt", tgt)
+ cp('css/R.css', 'sdk/R/arvados')
docfiles = []
Dir.chdir("../sdk/R/") do
STDERR.puts `Rscript createDoc.R README.Rmd #{tgt}/sdk/R/README.md 2>&1`
end
file "sdk/java-v2/javadoc/index.html" do |t|
+ if ENV['NO_SDK'] || File.exists?("no-sdk")
+ next
+ end
`which java`
if $? == 0
`which gradle`
Dir.chdir("../sdk/java-v2") do
STDERR.puts `gradle javadoc 2>&1`
raise if $? != 0
+ puts `sed -i "s/@import.*dejavu.css.*//g" build/docs/javadoc/stylesheet.css`
+ raise if $? != 0
end
cp_r("../sdk/java-v2/build/docs/javadoc", "sdk/java-v2")
raise if $? != 0
require "zenweb/tasks"
load "zenweb-textile.rb"
load "zenweb-liquid.rb"
+load "zenweb-fix-body.rb"
task :extra_wirings do
$website.pages["sdk/python/python.html.textile.liquid"].depends_on("sdk/python/arvados/index.html")
- R:
- sdk/R/index.html.md
- sdk/R/arvados/index.html.textile.liquid
- - Perl:
- - sdk/perl/index.html.textile.liquid
- - sdk/perl/example.html.textile.liquid
- Ruby:
- sdk/ruby/index.html.textile.liquid
- sdk/ruby/example.html.textile.liquid
- Java v1:
- sdk/java/index.html.textile.liquid
- sdk/java/example.html.textile.liquid
+ - Perl:
+ - sdk/perl/index.html.textile.liquid
+ - sdk/perl/example.html.textile.liquid
api:
- Concepts:
- api/index.html.textile.liquid
- api/methods/authorized_keys.html.textile.liquid
- api/methods/groups.html.textile.liquid
- api/methods/users.html.textile.liquid
+ - api/methods/user_agreements.html.textile.liquid
- System resources:
- api/methods/keep_services.html.textile.liquid
- api/methods/links.html.textile.liquid
- api/methods/container_requests.html.textile.liquid
- api/methods/containers.html.textile.liquid
- api/methods/workflows.html.textile.liquid
+ - Management (admin/system):
+ - api/dispatch.html.textile.liquid
- Jobs engine (legacy):
- api/crunch-scripts.html.textile.liquid
- api/methods/jobs.html.textile.liquid
admin:
- Topics:
- admin/index.html.textile.liquid
- - Configuration:
- - admin/config.html.textile.liquid
- - admin/federation.html.textile.liquid
- - Upgrading and migrations:
- - admin/upgrading.html.textile.liquid
- - admin/config-migration.html.textile.liquid
- Users and Groups:
- - install/cheat_sheet.html.textile.liquid
- - admin/activation.html.textile.liquid
+ - admin/user-management.html.textile.liquid
+ - admin/reassign-ownership.html.textile.liquid
+ - admin/user-management-cli.html.textile.liquid
+ - admin/group-management.html.textile.liquid
+ - admin/federation.html.textile.liquid
- admin/merge-remote-account.html.textile.liquid
- admin/migrating-providers.html.textile.liquid
- user/topics/arvados-sync-groups.html.textile.liquid
+ - admin/scoped-tokens.html.textile.liquid
- Monitoring:
- - admin/health-checks.html.textile.liquid
+ - admin/logging.html.textile.liquid
- admin/metrics.html.textile.liquid
+ - admin/health-checks.html.textile.liquid
- admin/management-token.html.textile.liquid
- - Cloud:
- - admin/storage-classes.html.textile.liquid
- - admin/spot-instances.html.textile.liquid
- - admin/cloudtest.html.textile.liquid
- Data Management:
- admin/collection-versioning.html.textile.liquid
- admin/collection-managed-properties.html.textile.liquid
- admin/keep-balance.html.textile.liquid
- admin/controlling-container-reuse.html.textile.liquid
- admin/logs-table-management.html.textile.liquid
+ - admin/workbench2-vocabulary.html.textile.liquid
+ - admin/storage-classes.html.textile.liquid
+ - Cloud:
+ - admin/spot-instances.html.textile.liquid
+ - admin/cloudtest.html.textile.liquid
- Other:
- - admin/troubleshooting.html.textile.liquid
- install/migrate-docker19.html.textile.liquid
- admin/upgrade-crunch2.html.textile.liquid
installguide:
- install/arvados-on-kubernetes.html.textile.liquid
- Manual installation:
- install/install-manual-prerequisites.html.textile.liquid
- - install/install-components.html.textile.liquid
+ - install/packages.html.textile.liquid
+ - admin/upgrading.html.textile.liquid
+ - Configuration:
+ - install/config.html.textile.liquid
+ - admin/config-migration.html.textile.liquid
+ - admin/config.html.textile.liquid
- Core:
- - install/install-postgresql.html.textile.liquid
- install/install-api-server.html.textile.liquid
- - install/install-controller.html.textile.liquid
- Keep:
- install/install-keepstore.html.textile.liquid
- install/configure-fs-storage.html.textile.liquid
- install/install-keep-web.html.textile.liquid
- install/install-keep-balance.html.textile.liquid
- User interface:
+ - install/setup-login.html.textile.liquid
- install/install-sso.html.textile.liquid
- install/install-workbench-app.html.textile.liquid
- install/install-workbench2-app.html.textile.liquid
- install/install-composer.html.textile.liquid
- Additional services:
- install/install-ws.html.textile.liquid
- - install/install-shell-server.html.textile.liquid
- install/install-arv-git-httpd.html.textile.liquid
- - Containers API support on SLURM:
- - install/crunch2-slurm/install-prerequisites.html.textile.liquid
- - install/crunch2-slurm/install-slurm.html.textile.liquid
+ - install/install-shell-server.html.textile.liquid
+ - Containers API:
- install/crunch2-slurm/install-compute-node.html.textile.liquid
+ - install/install-jobs-image.html.textile.liquid
+ - install/install-dispatch-cloud.html.textile.liquid
- install/crunch2-slurm/install-dispatch.html.textile.liquid
- install/crunch2-slurm/install-test.html.textile.liquid
- - install/install-nodemanager.html.textile.liquid
- - install/install-compute-ping.html.textile.liquid
- - Containers API support on cloud (beta):
- - install/install-dispatch-cloud.html.textile.liquid
+ - External dependencies:
+ - install/install-postgresql.html.textile.liquid
+ - install/ruby.html.textile.liquid
+ - install/nginx.html.textile.liquid
+ - install/google-auth.html.textile.liquid
+ - install/install-docker.html.textile.liquid
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-Note that each volume has a UUID, like @zzzzz-nyw5e-0123456789abcde@. You assign these manually: replace @zzzzz@ with your cluster ID, and replace @0123456789abcde@ with an arbitrary string of 15 alphanumerics. Once assigned, UUIDs should not be changed.
+Note that each volume has a UUID, like @zzzzz-nyw5e-0123456789abcde@. You assign these manually: replace @zzzzz@ with your Cluster ID, and replace @0123456789abcde@ with an arbitrary unique string of 15 alphanumerics. Once assigned, UUIDs should not be changed.
+
+Essential configuration values are highlighted in <span class="userinput">red</span>. Remaining parameters are provided for documentation, with their default values.
\ No newline at end of file
import (
"fmt"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
"log"
)
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-h2. Install Docker
+h2(#cgroups). Configure Linux cgroups accounting
-Compute nodes must have Docker installed to run containers. This requires a relatively recent version of Linux (at least upstream version 3.10, or a distribution version with the appropriate patches backported). Follow the "Docker Engine installation documentation":https://docs.docker.com/ for your distribution.
-
-For Debian-based systems, the Arvados package repository includes a backported @docker.io@ package with a known-good version you can install.
-
-h2(#configure_docker_daemon). Configure the Docker daemon
-
-Crunch runs Docker containers with relatively little configuration. You may need to start the Docker daemon with specific options to make sure these jobs run smoothly in your environment. This section highlights options that are useful to most installations. Refer to the "Docker daemon reference":https://docs.docker.com/reference/commandline/daemon/ for complete information about all available options.
-
-The best way to configure these options varies by distribution.
-
-* If you're using our backported @docker.io@ package, you can list these options in the @DOCKER_OPTS@ setting in @/etc/default/docker.io@.
-* If you're using another Debian-based package, you can list these options in the @DOCKER_OPTS@ setting in @/etc/default/docker@.
-* On Red Hat-based distributions, you can list these options in the @other_args@ setting in @/etc/sysconfig/docker@.
-
-h3. Default ulimits
-
-Docker containers inherit ulimits from the Docker daemon. However, the ulimits for a single Unix daemon may not accommodate a long-running Crunch job. You may want to increase default limits for compute containers by passing @--default-ulimit@ options to the Docker daemon. For example, to allow containers to open 10,000 files, set @--default-ulimit nofile=10000:10000@.
+Linux can report what compute resources are used by processes in a specific cgroup or Docker container. Crunch can use these reports to share that information with users running compute work. This can help pipeline authors debug and optimize their workflows.
-h3. DNS
+To enable cgroups accounting, you must boot Linux with the command line parameters @cgroup_enable=memory swapaccount=1@.
-Your containers must be able to resolve the hostname of your API server and any hostnames returned in Keep service records. If these names are not in public DNS records, you may need to specify a DNS resolver for the containers by setting the @--dns@ address to an IP address of an appropriate nameserver. You may specify this option more than once to use multiple nameservers.
+After making these changes, reboot the system for them to take effect.
-h2. Configure Linux cgroups accounting
+h3. Red Hat and CentOS
-Linux can report what compute resources are used by processes in a specific cgroup or Docker container. Crunch can use these reports to share that information with users running compute work. This can help pipeline authors debug and optimize their workflows.
+<notextile>
+<pre><code>~$ <span class="userinput">sudo grubby --update-kernel=ALL --args='cgroup_enable=memory swapaccount=1'</span>
+</code></pre>
+</notextile>
-To enable cgroups accounting, you must boot Linux with the command line parameters @cgroup_enable=memory swapaccount=1@.
+h3. Debian and Ubuntu
-On Debian-based systems, open the file @/etc/default/grub@ in an editor. Find where the string @GRUB_CMDLINE_LINUX@ is set. Add @cgroup_enable=memory swapaccount=1@ to that string. Save the file and exit the editor. Then run:
+Open the file @/etc/default/grub@ in an editor. Find where the string @GRUB_CMDLINE_LINUX@ is set. Add @cgroup_enable=memory swapaccount=1@ to that string. Save the file and exit the editor. Then run:
<notextile>
<pre><code>~$ <span class="userinput">sudo update-grub</span>
</code></pre>
</notextile>
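After the reboot, the parameters can be verified against the running kernel's command line. A minimal sketch (the helper name is hypothetical):

```shell
# Succeed if a kernel command line string contains both parameters
# needed for cgroups memory and swap accounting.
has_cgroup_params() {
  echo "$1" | grep -qw 'cgroup_enable=memory' \
    && echo "$1" | grep -qw 'swapaccount=1'
}

# Typical usage after reboot:
#   has_cgroup_params "$(cat /proc/cmdline)" && echo "accounting enabled"
```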
-On Red Hat-based systems, run:
+h2(#install_docker). Install Docker
+
+Compute nodes must have Docker installed to run containers. This requires a relatively recent version of Linux (at least upstream version 3.10, or a distribution version with the appropriate patches backported). Follow the "Docker Engine installation documentation":https://docs.docker.com/install/ for your distribution.
+
+Make sure Docker is enabled to start on boot:
<notextile>
-<pre><code>~$ <span class="userinput">sudo grubby --update-kernel=ALL --args='cgroup_enable=memory swapaccount=1'</span>
+<pre><code># <span class="userinput">systemctl enable --now docker</span>
</code></pre>
</notextile>
-Finally, reboot the system to make these changes effective.
-
-h2. Create a project for Docker images
+h2(#configure_docker_daemon). Configure the Docker daemon
-Here we create a default project for the standard Arvados Docker images, and give all users read access to it. The project is owned by the system user.
+Depending on your anticipated workload or cluster configuration, you may need to tweak Docker options.
-<notextile>
-<pre><code>~$ <span class="userinput">uuid_prefix=`arv --format=uuid user current | cut -d- -f1`</span>
-~$ <span class="userinput">project_uuid=`arv --format=uuid group create --group "{\"owner_uuid\":\"$uuid_prefix-tpzed-000000000000000\", \"group_class\":\"project\", \"name\":\"Arvados Standard Docker Images\"}"`</span>
-~$ <span class="userinput">echo "Arvados project uuid is '$project_uuid'"</span>
-~$ <span class="userinput">read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"</span>
-<span class="userinput">{
- "tail_uuid":"$all_users_group_uuid",
- "head_uuid":"$project_uuid",
- "link_class":"permission",
- "name":"can_read"
-}
-EOF</span>
-</code></pre></notextile>
-
-h2. Download and tag the latest arvados/jobs docker image
-
-In order to start workflows from workbench, there needs to be Docker image tagged @arvados/jobs:latest@. The following command downloads the latest arvados/jobs image from Docker Hub, loads it into Keep, and tags it as 'latest'. In this example @$project_uuid@ should be the UUID of the "Arvados Standard Docker Images" project.
+For information about how to set configuration options for the Docker daemon, see https://docs.docker.com/config/daemon/systemd/
-<notextile>
-<pre><code>~$ <span class="userinput">arv-keepdocker --pull arvados/jobs latest --project-uuid $project_uuid</span>
-</code></pre></notextile>
+h3. Changing ulimits
-If the image needs to be downloaded from Docker Hub, the command can take a few minutes to complete, depending on available network bandwidth.
+Docker containers inherit ulimits from the Docker daemon. However, the ulimits for a single Unix daemon may not accommodate a long-running Crunch job. You may want to increase default limits for compute containers by passing @--default-ulimit@ options to the Docker daemon. For example, to allow containers to open 10,000 files, set @--default-ulimit nofile=10000:10000@.
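For example, on a systemd-managed Docker installation the same default can be set persistently in @/etc/docker/daemon.json@ (a sketch; adjust the limits to your workload and restart the Docker daemon afterwards):

```json
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 10000,
      "Soft": 10000
    }
  }
}
```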
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-h2. Configure FUSE
+h2(#fuse). Update fuse.conf
FUSE must be configured with the @user_allow_other@ option enabled for Crunch to set up Keep mounts that are readable by containers. Install this file as @/etc/fuse.conf@:
<notextile>
<pre>
-# Set the maximum number of FUSE mounts allowed to non-root users.
-# The default is 1000.
-#
-#mount_max = 1000
-
# Allow non-root users to specify the 'allow_other' or 'allow_root'
# mount options.
-#
user_allow_other
</pre>
</notextile>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
+
+
<notextile>
-<pre><code>~$ <span class="userinput">sudo /usr/bin/apt-key adv --keyserver pool.sks-keyservers.net --recv 1078ECD7</span>
+<pre><code># <span class="userinput">apt-get --no-install-recommends install gnupg</span>
+# <span class="userinput">/usr/bin/apt-key adv --keyserver pool.sks-keyservers.net --recv 1078ECD7</span>
</code></pre>
</notextile>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-h2. Configure the Docker cleaner
+h2(#docker-cleaner). Update docker-cleaner.json
-The arvados-docker-cleaner program removes least recently used Docker images as needed to keep disk usage below a configured limit.
-
-{% include 'notebox_begin' %}
-This also removes all containers as soon as they exit, as if they were run with @docker run --rm@. If you need to debug or inspect containers after they stop, temporarily stop arvados-docker-cleaner or configure it with @"RemoveStoppedContainers":"never"@.
-{% include 'notebox_end' %}
+The @arvados-docker-cleaner@ program removes least recently used Docker images as needed to keep disk usage below a configured limit.
Create a file @/etc/arvados/docker-cleaner/docker-cleaner.json@ in an editor, with the following contents.
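For example (a sketch; adjust the quota for your deployment), a configuration with a 10 gigabyte quota that removes containers as soon as they stop could look like:

<notextile>
<pre><code>{
    "Quota": "10G",
    "RemoveStoppedContainers": "always"
}
</code></pre>
</notextile>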
*Choosing a quota:* Most deployments will want a quota that's at least 10G. From there, a larger quota can help reduce compute overhead by preventing reloading the same Docker image repeatedly, but will leave less space for other files on the same storage (usually Docker volumes). Make sure the quota is less than the total space available for Docker images.
-Restart the service after updating the configuration file.
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart arvados-docker-cleaner</span>
-</code></pre>
-</notextile>
-
-*If you are using a different daemon supervisor,* or if you want to test the daemon in a terminal window, run @arvados-docker-cleaner@. Run @arvados-docker-cleaner --help@ for more configuration options.
+{% include 'notebox_begin' %}
+This also removes all containers as soon as they exit, as if they were run with @docker run --rm@. If you need to debug or inspect containers after they stop, temporarily stop arvados-docker-cleaner or configure it with @"RemoveStoppedContainers":"never"@.
+{% include 'notebox_end' %}
+++ /dev/null
-{% comment %}
-Copyright (C) The Arvados Authors. All rights reserved.
-
-SPDX-License-Identifier: CC-BY-SA-3.0
-{% endcomment %}
-
-On a Debian-based system, install the following packages:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install git curl</span>
-</code></pre>
-</notextile>
-
-On a Red Hat-based system, install the following packages:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install git curl</span>
-</code></pre>
-</notextile>
--- /dev/null
+{% comment %}
+packages_to_install should be a list
+fallback on arvados_component if not defined
+{% endcomment %}
+
+{% if packages_to_install == nil %}
+ {% assign packages_to_install = arvados_component | split: " " %}
+{% endif %}
+
+h2(#install-packages). Install {{packages_to_install | join: " and " }}
+
+h3. Red Hat and Centos
+
+<notextile>
+<pre><code># <span class="userinput">yum install {{packages_to_install | join: " "}}</span>
+</code></pre>
+</notextile>
+
+h3. Debian and Ubuntu
+
+<notextile>
+<pre><code># <span class="userinput">apt-get install {{packages_to_install | join: " "}}</span>
+</code></pre>
+</notextile>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-<ol>
+<ol>
<li>Start a shell for the postgres user:
-<notextile><pre>~$ <span class="userinput">sudo -u postgres bash</span></pre></notextile>
+<notextile><pre># <span class="userinput">su postgres</span></pre></notextile>
</li>
<li>Generate a new database password:
-<notextile><pre>$ <span class="userinput">ruby -e 'puts rand(2**128).to_s(36)'</span>
+<notextile><pre>postgres$ <span class="userinput">tr -dc 0-9a-zA-Z </dev/urandom | head -c25; echo</span>
yourgeneratedpassword
</pre></notextile> Record this. You'll need it when you set up the Rails server later.
</li>
<li>Create a database user with the password you generated:
- <notextile><pre><code>$ <span class="userinput">createuser --encrypted -R -S --pwprompt {{service_role}}</span>
+ <notextile><pre><code>postgres$ <span class="userinput">createuser --encrypted --no-createrole --no-superuser --pwprompt {{service_role}}</span>
Enter password for new role: <span class="userinput">yourgeneratedpassword</span>
Enter it again: <span class="userinput">yourgeneratedpassword</span></code></pre></notextile>
</li>
<li>Create a database owned by the new user:
- <notextile><pre><code>$ <span class="userinput">createdb {{service_database}} -T template0 -E UTF8 -O {{service_role}}</span></code></pre></notextile>
+ <notextile><pre><code>postgres$ <span class="userinput">createdb {{service_database}} -T template0 -E UTF8 -O {{service_role}}</span></code></pre></notextile>
</li>
{% if use_contrib %}
<li>Enable the pg_trgm extension
- <notextile><pre>$ <span class="userinput">psql {{service_database}} -c "CREATE EXTENSION IF NOT EXISTS pg_trgm"</span></pre></notextile>
+ <notextile><pre>postgres$ <span class="userinput">psql {{service_database}} -c "CREATE EXTENSION IF NOT EXISTS pg_trgm"</span></pre></notextile>
</li>
{% endif %}
<li>Exit the postgres user shell:
- <notextile><pre>$ <span class="userinput">exit</span></pre></notextile>
+ <notextile><pre>postgres$ <span class="userinput">exit</span></pre></notextile>
</li>
</ol>
{% assign railscmd = "bundle exec rails console" %}
{% endunless %}
-Using RVM:
-
-<notextile>
-<pre><code>{{railshost}}~$ <span class="userinput">cd {{railsdir}}</span>
-{{railshost}}{{railsdir}}$ <span class="userinput">sudo -u <b>webserver-user</b> RAILS_ENV=production `which rvm-exec` default {{railscmd}}</span>
-{% if railsout %}{{railsout}}
-{% endif %}</code></pre>
-</notextile>
-
-Not using RVM:
-
<notextile>
<pre><code>{{railshost}}~$ <span class="userinput">cd {{railsdir}}</span>
{{railshost}}{{railsdir}}$ <span class="userinput">sudo -u <b>webserver-user</b> RAILS_ENV=production {{railscmd}}</span>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-The Curoverse signing key fingerprint is
+The Arvados signing key fingerprint is
<notextile>
-<pre><code>
-pub 2048R/1078ECD7 2010-11-15 Curoverse, Inc Automatic Signing Key <sysadmin@curoverse.com>
- Key fingerprint = B2DA 2991 656E B4A5 0314 CA2B 5716 5911 1078 ECD7
-sub 2048R/5A8C5A93 2010-11-15
+<pre><code>pub rsa2048 2010-11-15 [SC]
+ B2DA 2991 656E B4A5 0314 CA2B 5716 5911 1078 ECD7
+uid [ unknown] Arvados Automatic Signing Key <sysadmin@arvados.org>
+uid [ unknown] Curoverse, Inc Automatic Signing Key <sysadmin@curoverse.com>
+sub rsa2048 2010-11-15 [E]
</code></pre>
</notextile>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-Ruby 2.5 is recommended; Ruby 2.3 is also known to work.
+Ruby 2.3 or newer is required. Ruby 2.5 is recommended.
-h4(#rvm). *Option 1: Install with RVM*
+* "Option 1: Install from packages":#packages
+* "Option 2: Install with RVM":#rvm
+* "Option 3: Install from source":#fromsource
+
+h2(#packages). Option 1: Install from packages
+
+{% include 'notebox_begin' %}
+Future versions of Arvados may require a newer version of Ruby than is packaged with your OS. Using OS packages simplifies initial install, but may complicate upgrades that rely on a newer Ruby. If this is a concern, we recommend using "RVM.":#rvm
+{% include 'notebox_end' %}
+
+h3. Centos 7
+
+The Ruby version shipped with Centos 7 is too old. Use "RVM.":#rvm
+
+h3. Debian and Ubuntu
+
+Debian 9 (stretch) and Ubuntu 16.04 (xenial) ship Ruby 2.3, which is sufficient to run Arvados. Later releases have newer versions of Ruby that can also run Arvados.
+
+<notextile>
+<pre><code># <span class="userinput">apt-get --no-install-recommends install ruby ruby-dev bundler</span></code></pre>
+</notextile>
+
+h2(#rvm). Option 2: Install with RVM
+
+h3. Install gpg and curl
+
+h4. Centos 7
+
+<notextile>
+<pre><code># <span class="userinput">yum install gpg curl which</span>
+</code></pre>
+</notextile>
+
+h4. Debian and Ubuntu
+
+<notextile>
+<pre><code># <span class="userinput">apt-get --no-install-recommends install gpg curl</span>
+</code></pre>
+</notextile>
+
+h3. Install RVM
+
+<notextile>
+<pre><code># <span class="userinput">gpg --keyserver pool.sks-keyservers.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
+\curl -sSL https://get.rvm.io | bash -s stable --ruby=2.5
+</span></code></pre></notextile>
+
+To use Ruby installed from RVM, load it in an open shell like this:
<notextile>
-<pre><code><span class="userinput">sudo gpg --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
-\curl -sSL https://get.rvm.io | sudo bash -s stable --ruby=2.5
+<pre><code><span class="userinput">. /usr/local/rvm/scripts/rvm
</span></code></pre></notextile>
-Either log out and log back in to activate RVM, or explicitly load it in all open shells like this:
+Alternately, you can use @rvm-exec@ (the first parameter is the Ruby version to use, or "default"), for example:
<notextile>
-<pre><code><span class="userinput">source /usr/local/rvm/scripts/rvm
+<pre><code><span class="userinput">rvm-exec default rails console
</span></code></pre></notextile>
-Once RVM is activated in your shell, install Bundler:
+Finally, install Bundler:
<notextile>
<pre><code>~$ <span class="userinput">gem install bundler</span>
</code></pre></notextile>
-h4(#fromsource). *Option 2: Install from source*
+h2(#fromsource). Option 3: Install from source
Install prerequisites for Debian 8:
}
</code></pre>|
|Temporary directory|@tmp@|@"capacity"@: capacity (in bytes) of the storage device.
-@"device_type"@ (optional, default "network"): one of @{"ram", "ssd", "disk", "network"}@ indicating the acceptable level of performance.
+@"device_type"@ (optional, default "network"): one of @{"ram", "ssd", "disk", "network"}@ indicating the acceptable level of performance. (*note: not yet implemented as of v1.5*)
At container startup, the target path will be empty. When the container finishes, the content will be discarded. This will be backed by a storage mechanism no slower than the specified type.|<pre><code>{
"kind":"tmp",
"capacity":100000000000
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
+{% assign highlighturl = "" %}
+{% for section in site.navbar[page.navsection] %}
+ {% for entry in section %}
+ {% comment %}
+ Want to highlight the current page on the left nav.
+ But some pages have been renamed with a symlink from the old page to the new one.
+ Then the URL won't match.
+ So if the URL doesn't match, as a fallback look for a page with a matching title.
+ {% endcomment %}
+
+ {% for item in entry[1] %}
+ {% if site.pages[item].url == page.url %}
+ {% assign highlighturl = site.pages[item].url %}
+ {% endif %}
+ {% endfor %}
+
+ {% if highlighturl == "" %}
+ {% for item in entry[1] %}
+ {% if site.pages[item].title == page.title %}
+ {% assign highlighturl = site.pages[item].url %}
+ {% endif %}
+ {% endfor %}
+ {% endif %}
+ {% endfor %}
+{% endfor %}
+
<div class="col-sm-3">
<div class="well">
<ol class="nav nav-list">
{% for entry in section %}
<li><span class="nav-header">{{ entry[0] }}</span>
<ol class="nav nav-list">
- {% for item in entry[1] %}
- {% assign p = site.pages[item] %}
- <li {% if p.url == page.url %} class="active activesubnav" {% elsif p.title == page.subnavsection %} class="activesubnav" {% endif %}>
- <a href="{{ site.baseurl }}{{ p.url }}">{{ p.title }}</a></li>
+ {% for item in entry[1] %}
+ {% assign p = site.pages[item] %}
+ <li {% if p.url == highlighturl %} class="active activesubnav" {% elsif p.title == page.subnavsection %} class="activesubnav" {% endif %}>
+ <a href="{{ site.baseurl }}{{ p.url }}">{{ p.title }}</a></li>
{% endfor %}
</ol>
{% endfor %}
<li><a href="https://arvados.org" style="padding-left: 2em">arvados.org »</a></li>
</ul>
- <div class="pull-right" style="padding-top: 6px">
+ <div class="pull-right" style="padding-top: 6px; padding-right: 25px">
<form method="get" action="https://www.google.com/search">
<div class="input-group" style="width: 220px">
<input type="text" class="form-control" name="q" placeholder="search">
</form>
</div>
</div>
+
+ <div class="alert alert-block alert-info" style="display: none;" id="annotate-notify">
+ <div style="margin-top: -26px; font-size: 12pt">Hey! You can use the annotation sidebar from <a href="https://hypothes.is">hypothes.is</a> to make public comments and private notes
+ <span style="font-size: 32pt">→</span></div>
+ <button type="button" class="close" onclick="dismissAnnotateNotify()">Got it</button>
+ </div>
+
+ <script>
+ function dismissAnnotateNotify() {
+ window.localStorage.setItem("dismiss-annotate-notify", "true");
+ $('#annotate-notify').attr('style', "display: none;");
+ }
+ if (window.localStorage.getItem("dismiss-annotate-notify") === "true") {
+ dismissAnnotateNotify();
+ } else {
+ $('#annotate-notify').attr('style', "display: inline-block;");
+ }
+ </script>
+
</div>
</div>
--- /dev/null
+h2(#restart-api). Restart the API server and controller
+
+*Make sure the cluster config file is up to date on the API server host* then restart the API server and controller processes to ensure the configuration changes are visible to the whole cluster.
+
+<notextile>
+<pre><code># <span class="userinput">systemctl restart nginx arvados-controller</span>
+</code></pre>
+</notextile>
--- /dev/null
+h2(#start-service). Start the service
+
+<notextile>
+<pre><code># <span class="userinput">systemctl enable --now {{arvados_component}}</span>
+# <span class="userinput">systemctl status {{arvados_component}}</span>
+[...]
+</code></pre>
+</notextile>
+
+If @systemctl status@ indicates it is not running, use @journalctl@ to check logs for errors:
+
+<notextile>
+<pre><code># <span class="userinput">journalctl -n12 --unit {{arvados_component}}</span>
+</code></pre>
+</notextile>
--- /dev/null
+../../tools/vocabulary-migrate/vocabulary-migrate.py
\ No newline at end of file
--- /dev/null
+{
+ "strict_tags": false,
+ "tags": {
+ "IDTAGANIMALS": {
+ "strict": false,
+ "labels": [{"label": "Animal" }, {"label": "Creature"}, {"label": "Species"}],
+ "values": {
+ "IDVALANIMALS1": { "labels": [{"label": "Human"}, {"label": "Homo sapiens"}] },
+ "IDVALANIMALS2": { "labels": [{"label": "Dog"}, {"label": "Canis lupus familiaris"}] },
+ "IDVALANIMALS3": { "labels": [{"label": "Elephant"}, {"label": "Loxodonta"}] },
+ "IDVALANIMALS4": { "labels": [{"label": "Eagle"}, {"label": "Haliaeetus leucocephalus"}] }
+ }
+ },
+ "IDTAGCOMMENT": {
+ "labels": [{"label": "Comment"}, {"label": "Suggestion"}]
+ },
+ "IDTAGIMPORTANCES": {
+ "strict": true,
+ "labels": [{"label": "Importance"}, {"label": "Priority"}],
+ "values": {
+ "IDVALIMPORTANCES1": { "labels": [{"label": "Critical"}, {"label": "Urgent"}, {"label": "High"}] },
+ "IDVALIMPORTANCES2": { "labels": [{"label": "Normal"}, {"label": "Moderate"}] },
+ "IDVALIMPORTANCES3": { "labels": [{"label": "Low"}] }
+ }
+ }
+ }
+}
\ No newline at end of file
<link href="{{ site.baseurl }}/css/carousel-override.css" rel="stylesheet">
<link href="{{ site.baseurl }}/css/button-override.css" rel="stylesheet">
<link href="{{ site.baseurl }}/css/images.css" rel="stylesheet">
+ <script src="{{ site.baseurl }}/js/jquery.min.js"></script>
+ <script src="{{ site.baseurl }}/js/bootstrap.min.js"></script>
+ <script src="https://hypothes.is/embed.js" async></script>
<style>
html {
height:100%;
padding-top: 61px;
margin-top: -61px;
}
+
+ #annotate-notify { position: fixed; right: 40px; top: 3px; }
</style>
<!-- HTML5 shim, for IE6-8 support of HTML5 elements -->
{{ content }}
{% else %}
- <div class="container-fluid">
+ <div class="container-fluid" style="padding-right: 30px">
+
<div class="row">
{% include 'navbar_left' %}
<div class="col-sm-9">
</div>
{% endif %}
- <script src="{{ site.baseurl }}/js/jquery.min.js"></script>
- <script src="{{ site.baseurl }}/js/bootstrap.min.js"></script>
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
</p>
{% endif %}
-
</body>
</html>
+++ /dev/null
----
-layout: default
-navsection: admin
-title: User activation
-...
-
-{% comment %}
-Copyright (C) The Arvados Authors. All rights reserved.
-
-SPDX-License-Identifier: CC-BY-SA-3.0
-{% endcomment %}
-
-This page describes how new users are created and activated.
-
-"Browser login and management of API tokens is described here.":{{site.baseurl}}/api/tokens.html
-
-h3. Authentication
-
-After completing the authentication process, a callback is made from the SSO server to the API server, providing a user record and @identity_url@ (despite the name, this is actually an Arvados user uuid).
-
-The API server searches for a user record with the @identity_url@ supplied by the SSO. If found, that user account will be used, unless the account has @redirect_to_user_uuid@ set, in which case it will use the user in @redirect_to_user_uuid@ instead (this is used for the "link account":{{site.baseurl}}/user/topics/link-accounts.html feature).
-
-Next, it searches by email address for a "pre-activated account.":#pre-activated
-
-If no existing user record is found, a new user object will be created.
-
-A federated user follows a slightly different flow, whereby a special token is presented and the API server verifies user's identity with the home cluster, however it also results in a user object (representing the remote user) being created.
-
-h3. User setup
-
-If @auto_setup_new_users@ is true, as part of creating the new user object, the user is immediately set up with:
-
-* @can_login@ @permission@ link going (email address → user uuid) which records @identity_url_prefix@
-* Membership in the "All users" group (can read all users, all users can see new user)
-* A new git repo and @can_manage@ permission if @auto_setup_new_users_with_repository@ is true
-* @can_login@ permission to a shell node if @auto_setup_new_users_with_vm_uuid@ is set to the uuid of a vm
-
-Otherwise, an admin must explicitly invoke "setup" on the user via workbench or the API.
-
-h3. User activation
-
-A newly created user is inactive (@is_active@ is false) by default unless @new_users_are_active@.
-
-An inactive user cannot create or update any object, but can read Arvados objects that the user account has permission to read. This implies that if @auto_setup_new_users@ is true, an "inactive" user who has been set up may still be able to do things, such as read things shared with "All users", clone and push to the git repository, or login to a VM.
-
-{% comment %}
-Maybe these services should check is_active.
-
-I believe that when this was originally designed, being able to access git and VM required an ssh key, and an inactive user could not register an ssh key because that required creating a record. However, it is now possible to authenticate to shell VMs and http+git with just an API token.
-{% endcomment %}
-
-At this point, there are two ways a user can be activated.
-
-# An admin can set the @is_active@ field directly. This runs @setup_on_activate@ which sets up oid_login_perm and group membership, but does not set repo or vm (even if if @auto_setup_new_users_with_repository@ and/or @auto_setup_new_users_with_vm_uuid@ are set).
-# Self-activation using the @activate@ method of the users controller.
-
-h3. User agreements
-
-The @activate@ method of the users controller checks if the user @is_invited@ and whether the user has "signed" all the user agreements.
-
-@is_invited@ is true if any of these are true:
-* @is_active@ is true
-* @new_users_are_active@ is true
-* the user account has a permission link to read the system "all users" group.
-
-User agreements are accessed by getting a listing on the @user_agreements@ endpoint. This returns a list of collection uuids. This is executed as a system user, so it bypasses normal read permission checks.
-
-The available user agreements are represented in the Links table as
-
-<pre>
-{
- "link_class": "signature",
- "name": "require",
- "tail_uuid": "*system user uuid*",
- "head_uuid: "*collection uuid*"
-}
-</pre>
-
-The collection contains the user agreement text file.
-
-On workbench, it checks @is_invited@. If true, it displays the clickthrough agreements which the user can "sign". If @is_invited@ is false, the user ends up at the "inactive user" page.
-
-The @user_agreements/sign@ endpoint creates a Link object:
-
-<pre>
-{
- "link_class": "signature"
- "name": "click",
- "tail_uuid": "*user uuid*",
- "head_uuid: "*collection uuid*"
-}
-</pre>
-
-This is executed as a system user, so it bypasses the restriction that inactive users cannot create objects.
-
-The @user_agreements/signatures@ endpoint returns the list of Link objects that represent signatures by the current user (created by @sign@).
-
-h3. User profile
-
-The user profile is checked by workbench after checking if user agreements need to be signed. The requirement to fill out the user profile is not enforced by the API server.
-
-h3(#pre-activated). Pre-activate user by email address
-
-You may create a user account for a user that has not yet logged in, and identify the user by email address.
-
-1. As an admin, create a user object:
-
-<pre>
-{
- "email": "foo@example.com",
- "username": "barney",
- "is_active": true
-}
-</pre>
-
-2. Create a link object, where @tail_uuid@ is the user's email address, @head_uuid@ is the user object created in the previous step, and @xxxxx@ is the value of @uuid_prefix@ of the SSO server.
-
-<pre>
-{
- "link_class": "permission",
- "name": "can_login",
- "tail_uuid": "email address",
- "head_uuid: "user uuid",
- "properties": {
- "identity_url_prefix": "xxxxx-tpzed-"
- }
-}
-</pre>
-
-3. When the user logs in the first time, the email address will be recognized and the user will be associated with the linked user object.
-
-h3. Pre-activate federated user
-
-1. As admin, create a user object with the @uuid@ of the federated user (this is the user's uuid on their home cluster):
-
-<pre>
-{
- "uuid": "home1-tpzed-000000000000000",
- "email": "foo@example.com",
- "username": "barney",
- "is_active": true
-}
-</pre>
-
-2. When the user logs in, they will be associated with the existing user object.
-
-h3. Auto-activate federated users from trusted clusters
-
-In the API server config, configure @auto_activate_users_from@ with a list of one or more five-character cluster ids. A federated user from one of the listed clusters which @is_active@ on the home cluster will be automatically set up and activated on this cluster.
-
-h3(#deactivating_users). Deactivating users
-
-Setting @is_active@ is not sufficient to lock out a user. The user can call @activate@ to become active again. Instead, use @unsetup@:
-
-* Delete oid_login_perms
-* Delete git repository permission links
-* Delete VM login permission links
-* Remove from "All users" group
-* Delete any "signatures"
-* Clear preferences / profile
-* Mark as inactive
-
-{% comment %}
-Does not revoke @is_admin@, so you can't unsetup an admin unless you turn admin off first.
-
-"inactive" does not prevent user from reading things they previously had access to.
-
-Does not revoke API tokens.
-{% endcomment %}
-
-h3. Activation flows
-
-h4. Private instance
-
-Policy: users must be manually approved.
-
-<pre>
-auto_setup_new_users: false
-new_users_are_active: false
-</pre>
-
-# User is created. Not set up. @is_active@ is false.
-# Workbench checks @is_invited@ and finds it is false. User gets "inactive user" page.
-# Admin goes to user page and clicks either "setup user" or manually @is_active@ to true.
-# Clicking "setup user" sets up the user. This includes adding the user to "All users" which qualifies the user as @is_invited@.
-# On refreshing workbench, the user is still inactive, but is able to self-activate after signing clickthrough agreements (if any).
-# Alternately, directly setting @is_active@ to true also sets up the user, but workbench won't display clickthrough agreements (because the user is already active).
-
-h4. Federated instance
-
-Policy: users from other clusters in the federation are activated, users from outside the federation must be manually approved
-
-<pre>
-auto_setup_new_users: false
-new_users_are_active: false
-auto_activate_users_from: [home1]
-</pre>
-
-# Federated user arrives claiming to be from cluster 'home1'
-# API server authenticates user as being from cluster 'home1'
-# Because 'home1' is in @auto_activate_users_from@ the user is set up and activated.
-# User can immediately start using workbench.
-
-h4. Open instance
-
-Policy: anybody who shows up and signs the agreements is activated.
-
-<pre>
-auto_setup_new_users: true
-new_users_are_active: false
-</pre>
-
-# User is created and auto-setup. At this point, @is_active@ is false, but user has been added to "All users" group.
-# Workbench checks @is_invited@ and finds it is true, because the user is a member of "All users" group.
-# Workbench presents user with list of user agreements, user reads and clicks "sign" for each one.
-# Workbench tries to activate user.
-# User is activated.
-
-h4. Developer instance
-
-Policy: avoid wasting developer's time during development/testing
-
-<pre>
-auto_setup_new_users: true
-new_users_are_active: true
-</pre>
-
-# User is created, immediately auto-setup, and auto-activated.
-# User can immediately start using workbench.
--- /dev/null
+user-management.html.textile.liquid
\ No newline at end of file
This page describes how to enable and configure the collection versioning feature on the API server.
-h3. API Server configuration
+h3. Configuration
-There are 2 configuration settings that control this feature, both go on the @application.yml@ file.
+There are two configuration settings in the @Collections@ section of @config.yml@ that control this feature.
-h4. Setting: @collection_versioning@ (Boolean. Default: false)
+<pre>
+ Collections:
+ # If true, enable collection versioning.
+ # When a collection's preserve_version field is true or the current version
+ # is older than the amount of seconds defined on PreserveVersionIfIdle,
+ # a snapshot of the collection's previous state is created and linked to
+ # the current collection.
+ CollectionVersioning: false
-If @true@, collection versioning is enabled, meaning that new version records can be created. Note that if you set @collection_versioning@ to @false@ after being enabled, old versions will still be accessible, but further changes will not be versioned.
+ # This setting controls the auto-save aspect of collection versioning, and can be set to:
+ # 0s = auto-create a new version on every update.
+ # -1s = never auto-create new versions.
+ # > 0s = auto-create a new version when older than the specified number of seconds.
+ PreserveVersionIfIdle: -1s
+</pre>
-h4. Setting: @preserve_version_if_idle@ (Numeric. Default: -1)
-
-This setting control the auto-save aspect of collection versioning, and can be set to:
-* @-1@: Never auto-save versions. Only save versions when the client ask for it by setting @preserve_version@ to @true@ on any given collection.
-* @0@: Preserve all versions every time a collection gets a versionable update.
-* @N@ (being N > 0): Preserve version when a collection gets a versionable update after a period of at least N seconds since the last time it was modified.
+Note that if you set @CollectionVersioning@ to @false@ after it has been enabled, old versions will still be accessible, but further changes will not be versioned.
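+
+For example, a client can pin the current state of a collection, regardless of the idle timer, by setting @preserve_version@ (a sketch using the @arv@ CLI; substitute a real collection UUID):
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv collection update --uuid zzzzz-4zz18-xxxxxxxxxxxxxxx --collection '{"preserve_version":true}'</span>
+</code></pre>
+</notextile>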
h3. Using collection versioning
-"Discussed in the user guide":{{site.baseurl}}/user/topics/collection-versioning.html
\ No newline at end of file
+"Discussed in the user guide":{{site.baseurl}}/user/topics/collection-versioning.html
---
layout: default
-navsection: admin
-title: Migrating Configuration
+navsection: installguide
+title: Migrating Configuration from v1.4 to v2.0
...
{% comment %}
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-Arvados is migrating to a centralized configuration file for all components. The centralized Arvados configuration is @/etc/arvados/config.yml@. Components that support the new centralized configuration are listed below. Components not listed here do not yet support centralized configuration. During the migration period, legacy configuration files will continue to be loaded and take precedence over the centralized configuration file.
+{% include 'notebox_begin_warning' %}
+_New installations of Arvados 2.0+ can skip this section_
+{% include 'notebox_end' %}
+
+Arvados 2.0 migrates to a centralized configuration file for all components. The centralized Arvados configuration is @/etc/arvados/config.yml@. Components that support the new centralized configuration are listed below. During the migration period, legacy configuration files are still loaded and take precedence over the centralized configuration file.
h2. API server
This command will also report if no migrations are required.
-h2. crunch-dispatch-slurm
-
-Currently only reads @InstanceTypes@ from centralized configuration. Still requires component-specific configuration file.
-
-h2(#keepstore). keepstore
-
-The legacy keepstore config (loaded from @/etc/arvados/keepstore/keepstore.yml@ or a different location specified via -legacy-keepstore-config command line argument) takes precedence over the centralized config. After you migrate everything from the legacy config to the centralized config, you should delete @/etc/arvados/keepstore/keepstore.yml@ and stop using the -legacy-keepstore-config argument.
+h2. keepstore, keep-web, crunch-dispatch-slurm, arvados-ws, keepproxy, arv-git-httpd, keep-balance
-To migrate a keepstore node's configuration, first install @arvados-server@. Run @arvados-server config-check@, review and apply the recommended changes to @/etc/arvados/config.yml@, and run @arvados-server config-check@ again to check for additional warnings and recommendations. When you are satisfied, delete the legacy config file, restart keepstore, and check its startup logs. Copy the updated centralized config file to your next keepstore server, and repeat the process there.
+The legacy config for each component (loaded from @/etc/arvados/component/component.yml@ or a different location specified via the -legacy-component-config command line argument) takes precedence over the centralized config. After you migrate everything from the legacy config to the centralized config, you should delete @/etc/arvados/component/component.yml@ and/or stop using the corresponding -legacy-component-config argument.
-After migrating and removing all legacy keepstore config files, make sure the @/etc/arvados/config.yml@ file is identical across all system nodes -- API server, keepstore, etc. -- and restart all services to make sure they are using the latest configuration.
+To migrate a component configuration, do this on each node that runs an Arvados service:
-h2(#keepproxy). keepproxy
+# Ensure that the latest @config.yml@ is installed on the current node.
+# Install @arvados-server@ using @apt-get@ or @yum@.
+# Run @arvados-server config-check@, then review and apply the recommended changes to @/etc/arvados/config.yml@.
+# After applying changes, re-run @arvados-server config-check@ to check for additional warnings and recommendations.
+# When you are satisfied, delete the legacy config file, restart the service, and check its startup logs.
+# Copy the updated @config.yml@ file to your next node, and repeat the process there.
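+
+On a Debian-based node, the install and check steps might look like this (a sketch; output varies by site):
+
+<notextile>
+<pre><code># <span class="userinput">apt-get install arvados-server</span>
+# <span class="userinput">arvados-server config-check</span>
+</code></pre>
+</notextile>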
-The legacy keepproxy config (loaded from @/etc/arvados/keepproxy/keepproxy.yml@ or a different location specified via -legacy-keepproxy-config command line argument) takes precedence over the centralized config. After you migrate everything from the legacy config to the centralized config, you should delete @/etc/arvados/keepproxy/keepproxy.yml@ and stop using the -legacy-keepproxy-config argument.
+After migrating and removing all legacy config files, make sure the @/etc/arvados/config.yml@ file is identical across all system nodes -- API server, keepstore, etc. -- and restart all services to make sure they are using the latest configuration.
-h2(#arv-git-httpd). arv-git-httpd
+h2. Cloud installations only: node manager
-The legacy arv-git-httpd config (loaded from @/etc/arvados/git-httpd/git-httpd.yml@ or a different location specified via -legacy-git-httpd-config command line argument) takes precedence over the centralized config. After you migrate everything from the legacy config to the centralized config, you should delete @/etc/arvados/git-httpd/git-httpd.yml@ and stop using the -legacy-git-httpd-config argument.
+Node manager is deprecated and replaced by @arvados-dispatch-cloud@. No automated config migration is available. Follow the instructions to "install the cloud dispatcher":../install/install-dispatch-cloud.html .
-h2(#keepbalance). keep-balance
+*Only one dispatch process should be running at a time.* If you are migrating a system that currently runs Node manager and @crunch-dispatch-slurm@, it is safest to remove the @crunch-dispatch-slurm@ service entirely before installing @arvados-dispatch-cloud@.
-The legacy keep-balance config (loaded from @/etc/arvados/keep-balance/keep-balance.yml@ or a different location specified via -legacy-keepbalance-config command line argument) takes precedence over the centralized config. After you migrate everything from the legacy config to the centralized config, you should delete @/etc/arvados/keep-balance/keep-balance.yml@ and stop using the -legacy-keepbalance-config argument.
-
-h2. arvados-controller
-
-Already uses centralized config exclusively. No migration needed.
+<notextile>
+<pre><code>~$ <span class="userinput">sudo systemctl --now disable crunch-dispatch-slurm</span>
+~$ <span class="userinput">sudo apt-get remove crunch-dispatch-slurm</span>
+</code></pre>
+</notextile>
-h2. arvados-dispatch-cloud
+h2. arvados-controller, arvados-dispatch-cloud
Already uses centralized config exclusively. No migration needed.
---
layout: default
-navsection: admin
+navsection: installguide
title: Configuration reference
...
---
layout: default
navsection: admin
-title: Controlling container reuse
+title: Preventing container reuse
...
{% comment %}
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-This page describes how an admin can control container reuse using the @arv@ command. This can be utilized to avoid reusing a completed container without disabling reuse for the corresponding steps in affected workflows. For example, if a container exited successfully but produced bad output, it may not be feasible to update the workflow immediately. Meanwhile, changing the state of the container from @Complete@ to @Cancelled@ will prevent it from being used in subsequent workflows.
+Sometimes a container exits successfully but produces bad output, and re-running the workflow will cause it to reuse the bad container instead of running a new one. One way to deal with this is to re-run the entire workflow with reuse disabled. Another way is for the workflow author to tweak the input data or workflow so that on re-run it produces a distinct container request. However, for large or complex workflows both of these options may be impractical.
-If a container is in the @Complete@ state, the following @arv@ command will change its state to @Cancelled@, where @xxxxx-xxxxx-xxxxxxxxxxxxxxx@ is the @UUID@ of the container:
+To prevent an individual container from being reused in later workflows, an admin can manually change the state of the bad container record from @Complete@ to @Cancelled@. The following @arv@ command demonstrates how to change a container's state to @Cancelled@, where @xxxxx-xxxxx-xxxxxxxxxxxxxxx@ is the @UUID@ of the container:
<pre>arv container update -u xxxxx-xxxxx-xxxxxxxxxxxxxxx -c '{"state":"Cancelled"}'</pre>
-
-Use the following command to list all containers that exited with 0 and were then cancelled:
-
-<pre>arv container list --filters='[["state", "=", "Cancelled"], ["exit_code", "=", 0]]'</pre>See the "arv CLI tool overview":{{site.baseurl}}/sdk/cli/index.html for more details about using the @arv@ command.
--- /dev/null
+---
+layout: default
+navsection: admin
+title: Group management
+...
+
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+This page describes how to manage groups at the command line. You should be familiar with the "permission system":{{site.baseurl}}/api/permission-model.html .
+
+h2. Create a group
+
+User groups are entries in the "groups" table with @"group_class": "role"@.
+
+<pre>
+arv group create --group '{"name": "My new group", "group_class": "role"}'
+</pre>
+
+h2(#add). Add a user to a group
+
+There are two separate permissions associated with group membership. The first link grants the user @can_manage@ permission to manage things that the group can manage. The second link grants permission for other users of the group to see that this user is part of the group.
+
+<pre>
+arv link create --link '{
+ "link_class": "permission",
+ "name": "can_manage",
+ "tail_uuid": "the_user_uuid",
+ "head_uuid": "the_group_uuid"}'
+
+arv link create --link '{
+ "link_class": "permission",
+ "name": "can_read",
+ "tail_uuid": "the_group_uuid",
+ "head_uuid": "the_user_uuid"}'
+</pre>
+
+A user can also be given read-only access to a group. In that case, the first link should be created with @can_read@ instead of @can_manage@.
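
The two link bodies above can also be generated programmatically. Below is a minimal Python sketch (the helper name and the idea of returning plain dicts are illustrative, not part of the Arvados SDK); each dict is the payload you would pass as @--link@ to @arv link create@:

```python
def membership_links(user_uuid, group_uuid, read_only=False):
    """Build the two permission link bodies for adding a user to a group.

    The first link grants the user can_manage (or can_read, for
    read-only membership) on the group; the second lets other group
    members see that this user belongs to the group.
    """
    return [
        {
            "link_class": "permission",
            "name": "can_read" if read_only else "can_manage",
            "tail_uuid": user_uuid,
            "head_uuid": group_uuid,
        },
        {
            "link_class": "permission",
            "name": "can_read",
            "tail_uuid": group_uuid,
            "head_uuid": user_uuid,
        },
    ]
```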
+
+h2. List groups
+
+<pre>
+arv group list --filters '[["group_class", "=", "role"]]'
+</pre>
+
+h2. List members of a group
+
+Use the "jq":https://stedolan.github.io/jq/ command to extract the @tail_uuid@ (the member's user uuid) from each permission link that points at the group.
+
+<pre>
+arv link list --filters '[["link_class", "=", "permission"],
+ ["head_uuid", "=", "the_group_uuid"]]' | jq .items[].tail_uuid
+</pre>
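
If you are post-processing the API response in Python instead of piping through @jq@, the same extraction is a one-line list comprehension. This sketch uses an abbreviated, hypothetical response in the shape returned by @arv link list@:

```python
def member_uuids(link_list):
    """Return the tail_uuid (the member) of each permission link;
    equivalent to: jq .items[].tail_uuid"""
    return [link["tail_uuid"] for link in link_list["items"]]

# Abbreviated example response from `arv link list` (uuids are made up):
response = {
    "items": [
        {"link_class": "permission", "name": "can_manage",
         "tail_uuid": "zzzzz-tpzed-000000000000001",
         "head_uuid": "zzzzz-j7d0g-000000000000000"},
        {"link_class": "permission", "name": "can_read",
         "tail_uuid": "zzzzz-tpzed-000000000000002",
         "head_uuid": "zzzzz-j7d0g-000000000000000"},
    ]
}
```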
+
+h2. Share a project with a group
+
+This will give all members of the group @can_manage@ access.
+
+<pre>
+arv link create --link '{
+ "link_class": "permission",
+ "name": "can_manage",
+ "tail_uuid": "the_group_uuid",
+ "head_uuid": "the_project_uuid"}'
+</pre>
+
+A project can also be shared read-only. In that case, the first link should be created with @can_read@ instead of @can_manage@.
+
+h2. List things shared with the group
+
+Use the "jq":https://stedolan.github.io/jq/ command to extract the @head_uuid@ (the shared object's uuid) from each permission link whose tail is the group.
+
+<pre>
+arv link list --filters '[["link_class", "=", "permission"],
+ ["tail_uuid", "=", "the_group_uuid"]]' | jq .items[].head_uuid
+</pre>
+
+h2. Stop sharing a project with a group
+
+This will remove access for members of the group.
+
+The first step is to find the permission link objects. The second step is to delete them.
+
+<pre>
+arv --format=uuid link list --filters '[["link_class", "=", "permission"],
+ ["tail_uuid", "=", "the_group_uuid"], ["head_uuid", "=", "the_project_uuid"]]'
+
+arv link delete --uuid each_link_uuid
+</pre>
+
+h2. Remove user from a group
+
+The first step is to find the permission link objects. The second step is to delete them.
+
+<pre>
+arv --format=uuid link list --filters '[["link_class", "=", "permission"],
+ ["tail_uuid", "in", ["the_user_uuid", "the_group_uuid"]],
+ ["head_uuid", "in", ["the_user_uuid", "the_group_uuid"]]]'
+
+arv link delete --uuid each_link_uuid
+</pre>
The service @arvados-health@ performs health checks on all configured services and returns a single value of @OK@ or @ERROR@ for the entire cluster. It exposes the endpoint @/_health/all@ .
-The healthcheck aggregator uses the @NodeProfile@ section of the cluster-wide @arvados.yml@ configuration file. Here is an example.
-
-<pre>
-Cluster:
- # The cluster uuid prefix
- zzzzz:
- ManagementToken: xyzzy
- NodeProfile:
- # For each node, the profile name corresponds to a
- # locally-resolvable hostname, and describes which Arvados
- # services are available on that machine.
- api:
- arvados-controller:
- Listen: :8000
- arvados-api-server:
- Listen: :8001
- manage:
- arvados-node-manager:
- Listen: :8002
- workbench:
- arvados-workbench:
- Listen: :8003
- arvados-ws:
- Listen: :8004
- keep:
- keep-web:
- Listen: :8005
- keepproxy:
- Listen: :8006
- keep-balance:
- Listen: :9005
- keep0:
- keepstore:
- Listen: :25107
- keep1:
- keepstore:
- Listen: :25107
-</pre>
+The healthcheck aggregator uses the @Services@ section of the cluster-wide @config.yml@ configuration file.
--- /dev/null
+---
+layout: default
+navsection: admin
+title: Logging
+...
+
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+Most Arvados services write JSON-format structured logs to stderr, which can be parsed by any operational tools that support JSON.
+
+h2. Request ids
+
+In a distributed system, a single client request typically fans out into several requests across more than one service, which can make it difficult to find the root cause of an error.
+
+To deal with this, Arvados assigns each request an ID that is carried across services as the request is handled. The ID consists of the prefix "@req-@" followed by 20 random alphanumeric characters:
+
+<pre>req-frdyrcgdh4rau1ajiq5q</pre>
+
+This ID gets propagated via an HTTP @X-Request-Id@ header, and gets logged on every service.
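
For reference, an ID in this format can be generated and validated with a few lines of Python. This is only a sketch: it assumes the random characters are lowercase letters and digits, as in the example above.

```python
import random
import re
import string

# "req-" followed by 20 alphanumeric characters (assumed lowercase+digits)
REQUEST_ID_RE = re.compile(r"req-[0-9a-z]{20}")

def new_request_id():
    """Generate an ID in the same format as an Arvados request ID."""
    alphabet = string.ascii_lowercase + string.digits
    return "req-" + "".join(random.choice(alphabet) for _ in range(20))
```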
+
+h3. API Server error reporting and logging
+
+In addition to providing the request ID on every HTTP response, the API Server adds it to every error message so that all clients show enough information to the user to be able to track a particular issue. As an example, let's suppose that we get the following error when trying to create a collection using the CLI tools:
+
+<pre>
+$ arv collection create --collection '{}'
+Error: #<RuntimeError: Whoops, something bad happened> (req-ku5ct9ehw0y71f1c5p79)
+</pre>
+
+The API Server logs every request in JSON format to the @production.log@ file (usually under @/var/www/arvados-api/current/log/@ when installing from packages), so we can retrieve more information about this request with the @grep@ and @jq@ tools:
+
+<pre>
+# grep req-ku5ct9ehw0y71f1c5p79 /var/www/arvados-api/current/log/production.log | jq .
+{
+ "method": "POST",
+ "path": "/arvados/v1/collections",
+ "format": "json",
+ "controller": "Arvados::V1::CollectionsController",
+ "action": "create",
+ "status": 422,
+ "duration": 1.52,
+ "view": 0.25,
+ "db": 0,
+ "request_id": "req-ku5ct9ehw0y71f1c5p79",
+ "client_ipaddr": "127.0.0.1",
+ "client_auth": "zzzzz-gj3su-jllemyj9v3s5emu",
+ "exception": "#<RuntimeError: Whoops, something bad happened>",
+ "exception_backtrace": "/var/www/arvados-api/current/app/controllers/arvados/v1/collections_controller.rb:43:in `create'\n/var/lib/gems/ruby/2.3.0/gems/actionpack-5.0.7.2/lib/action_controller/metal/basic_implicit_render.rb:4:in `send_action'\n ...[snipped]",
+ "params": {
+ "collection": "{}",
+ "_profile": "true",
+ "cluster_id": "",
+ "collection_given": "true",
+ "ensure_unique_name": "false",
+ "help": "false"
+ },
+ "@timestamp": "2019-07-15T16:40:41.726634182Z",
+ "@version": "1",
+ "message": "[422] POST /arvados/v1/collections (Arvados::V1::CollectionsController#create)"
+}
+</pre>
+
+When logging a request that produced an error, the API Server adds @exception@ and @exception_backtrace@ keys to the JSON log. The latter includes the complete error stack trace as a string, and can be displayed in a more readable form like so:
+
+<pre>
+# grep req-ku5ct9ehw0y71f1c5p79 /var/www/arvados-api/current/log/production.log | jq -r .exception_backtrace
+/var/www/arvados-api/current/app/controllers/arvados/v1/collections_controller.rb:43:in `create'
+/var/lib/gems/ruby/2.3.0/gems/actionpack-5.0.7.2/lib/action_controller/metal/basic_implicit_render.rb:4:in `send_action'
+/var/lib/gems/ruby/2.3.0/gems/actionpack-5.0.7.2/lib/abstract_controller/base.rb:188:in `process_action'
+/var/lib/gems/ruby/2.3.0/gems/actionpack-5.0.7.2/lib/action_controller/metal/rendering.rb:30:in `process_action'
+/var/lib/gems/ruby/2.3.0/gems/actionpack-5.0.7.2/lib/abstract_controller/callbacks.rb:20:in `block in process_action'
+/var/lib/gems/ruby/2.3.0/gems/activesupport-5.0.7.2/lib/active_support/callbacks.rb:126:in `call'
+...
+</pre>
As a result, this table grows indefinitely, even on sites where policy does not require an audit log; making backups, migrations, and upgrades unnecessarily slow and painful.
-h3. API Server configuration
+h3. Configuration
-To solve the problem mentioned above, the API server offers the possibility to limit the amount of log information stored on the table:
+To solve the problem mentioned above, the @AuditLogs@ section of @config.yml@ offers several options to limit the amount of log information stored in the table:
<pre>
-# Attributes to suppress in events and audit logs. Notably,
-# specifying ["manifest_text"] here typically makes the database
-# smaller and faster.
-#
-# Warning: Using any non-empty value here can have undesirable side
-# effects for any client or component that relies on event logs.
-# Use at your own risk.
-unlogged_attributes: []
+ AuditLogs:
+ # Time to keep audit logs. (An audit log is a row added
+ # to the "logs" table in the PostgreSQL database each time an
+ # Arvados object is created, modified, or deleted.)
+ #
+ # Currently, websocket event notifications rely on audit logs, so
+ # this should not be set lower than 5 minutes.
+ MaxAge: 336h
+
+  # Maximum number of log rows to delete in a single SQL transaction.
+  # This prevents surprises and bad database behavior, especially the
+  # first time the cleanup job runs on an existing cluster with a huge
+  # backlog.
+ #
+ # If MaxDeleteBatch is 0, log entries will never be
+ # deleted by Arvados. Cleanup can be done by an external process
+ # without affecting any Arvados system processes, as long as very
+ # recent (<5 minutes old) logs are not deleted.
+ #
+ # 100000 is a reasonable batch size for most sites.
+ MaxDeleteBatch: 0
+
+ # Attributes to suppress in events and audit logs. Notably,
+ # specifying {"manifest_text": {}} here typically makes the database
+ # smaller and faster.
+ #
+ # Warning: Using any non-empty value here can have undesirable side
+ # effects for any client or component that relies on event logs.
+ # Use at your own risk.
+ UnloggedAttributes: {}
</pre>
-The above setting affects all events being logged, independently of how much time they will be kept on the database.
-
-<pre>
-# Time to keep audit logs (a row in the log table added each time an
-# Arvados object is created, modified, or deleted) in the PostgreSQL
-# database. Currently, websocket event notifications rely on audit
-# logs, so this should not be set lower than 300 (5 minutes).
-max_audit_log_age: 1209600
-</pre>
-
-...and to prevent surprises and avoid bad database behavior (especially the first time the cleanup job runs on an existing cluster with a huge backlog) a maximum number of rows to delete in a single transaction.
-
-<pre>
-# Maximum number of log rows to delete in a single SQL transaction.
-#
-# If max_audit_log_delete_batch is 0, log entries will never be
-# deleted by Arvados. Cleanup can be done by an external process
-# without affecting any Arvados system processes, as long as very
-# recent (<5 minutes old) logs are not deleted.
-#
-# 100000 is a reasonable batch size for most sites.
-max_audit_log_delete_batch: 0
-</pre>
-
-This feature works when both settings are non-zero, periodically dispatching a background task that deletes all log rows older than @max_audit_log_age@.
-The events being cleaned up by this process don't include job/container stderr logs (they're handled by the existing @delete job/container logs@ rake tasks)
h3. Additional consideration
To access a monitoring endpoint, the requester must provide the HTTP header @Authorization: Bearer (ManagementToken)@.
-h2. API server
-
-Set @ManagementToken@ in the appropriate section of @application.yml@
-
-<pre>
-production:
- # Token to be included in all healthcheck requests. Disabled by default.
- # Server expects request header of the format "Authorization: Bearer xxx"
- ManagementToken: xxx
-</pre>
-
h2. Node Manager
Set @port@ (the listen port) and @ManagementToken@ in the @Manage@ section of @node-manager.ini@.
ManagementToken = xxx
</pre>
-h2. Other services
+h2. API server and other services
-The following services also support monitoring. Set @ManagementToken@ in the respective yaml config file for each service.
+The following services also support monitoring.
+* API server
+* arv-git-httpd
+* controller
+* keep-balance
+* keepproxy
* keepstore
* keep-web
-* keepproxy
-* arv-git-httpd
* websockets
+
+Set @ManagementToken@ in the appropriate section of @/etc/arvados/config.yml@.
+
+<notextile>
+<pre><code>Clusters:
+ <span class="userinput">ClusterID</span>:
+ # Token to be included in all healthcheck requests. Disabled by default.
+ # Server expects request header of the format "Authorization: Bearer xxx"
+ ManagementToken: xxx
+</code></pre>
+</notextile>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-Some Arvados services publish Prometheus/OpenMetrics-compatible metrics at @/metrics@, and some provide additional runtime status at @/status.json@. Metrics can help you understand how components perform under load, find performance bottlenecks, and detect and diagnose problems.
+Some Arvados services publish Prometheus/OpenMetrics-compatible metrics at @/metrics@. Metrics can help you understand how components perform under load, find performance bottlenecks, and detect and diagnose problems.
To access metrics endpoints, services must be configured with a "management token":management-token.html. When accessing a metrics endpoint, prefix the management token with @"Bearer "@ and supply it in the @Authorization@ request header.
-<pre>curl -sfH "Authorization: Bearer your_management_token_goes_here" "https://0.0.0.0:25107/status.json"
+<pre>curl -sfH "Authorization: Bearer your_management_token_goes_here" "https://0.0.0.0:25107/metrics"
</pre>
-h2. Keep-web
+The plain text export format includes "help" messages with a description of each reported metric.
-Keep-web exports metrics at @/metrics@ -- e.g., @https://collections.zzzzz.arvadosapi.com/metrics@.
+When configuring Prometheus, use a @bearer_token@ or @bearer_token_file@ option to authenticate requests.
-table(table table-bordered table-condensed).
-|_. Name|_. Type|_. Description|
-|request_duration_seconds|summary|elapsed time between receiving a request and sending the last byte of the response body (segmented by HTTP request method and response status code)|
-|time_to_status_seconds|summary|elapsed time between receiving a request and sending the HTTP response status code (segmented by HTTP request method and response status code)|
-
-Metrics in the @arvados_keepweb_collectioncache@ namespace report keep-web's internal cache of Arvados collection metadata.
-
-table(table table-bordered table-condensed).
-|_. Name|_. Type|_. Description|
-|arvados_keepweb_collectioncache_requests|counter|cache lookups|
-|arvados_keepweb_collectioncache_api_calls|counter|outgoing API calls|
-|arvados_keepweb_collectioncache_permission_hits|counter|collection-to-permission cache hits|
-|arvados_keepweb_collectioncache_pdh_hits|counter|UUID-to-PDH cache hits|
-|arvados_keepweb_collectioncache_hits|counter|PDH-to-manifest cache hits|
-|arvados_keepweb_collectioncache_cached_manifests|gauge|number of collections in the cache|
-|arvados_keepweb_collectioncache_cached_manifest_bytes|gauge|memory consumed by cached collection manifests|
-
-h2. Keepstore
-
-Keepstore exports metrics at @/status.json@ -- e.g., @http://keep0.zzzzz.arvadosapi.com:25107/status.json@.
-
-h3. Root
-
-table(table table-bordered table-condensed).
-|_. Attribute|_. Type|_. Description|
-|Volumes| array of "volumeStatusEnt":#volumeStatusEnt ||
-|BufferPool| "PoolStatus":#PoolStatus ||
-|PullQueue| "WorkQueueStatus":#WorkQueueStatus ||
-|TrashQueue| "WorkQueueStatus":#WorkQueueStatus ||
-|RequestsCurrent| int ||
-|RequestsMax| int ||
-|Version| string ||
-
-h3(#volumeStatusEnt). volumeStatusEnt
-
-table(table table-bordered table-condensed).
-|_. Attribute|_. Type|_. Description|
-|Label| string||
-|Status| "VolumeStatus":#VolumeStatus ||
-|VolumeStats| "ioStats":#ioStats ||
-
-h3(#VolumeStatus). VolumeStatus
-
-table(table table-bordered table-condensed).
-|_. Attribute|_. Type|_. Description|
-|MountPoint| string||
-|DeviceNum| uint64||
-|BytesFree| uint64||
-|BytesUsed| uint64||
-
-h3(#ioStats). ioStats
-
-table(table table-bordered table-condensed).
-|_. Attribute|_. Type|_. Description|
-|Errors| uint64||
-|Ops| uint64||
-|CompareOps| uint64||
-|GetOps| uint64||
-|PutOps| uint64||
-|TouchOps| uint64||
-|InBytes| uint64||
-|OutBytes| uint64||
-
-h3(#PoolStatus). PoolStatus
-
-table(table table-bordered table-condensed).
-|_. Attribute|_. Type|_. Description|
-|BytesAllocatedCumulative| uint64||
-|BuffersMax| int||
-|BuffersInUse| int||
-
-h3(#WorkQueueStatus). WorkQueueStatus
-
-table(table table-bordered table-condensed).
-|_. Attribute|_. Type|_. Description|
-|InProgress| int||
-|Queued| int||
-
-h3. Example response
-
-<pre>
-{
- "Volumes": [
- {
- "Label": "[UnixVolume /var/lib/arvados/keep0]",
- "Status": {
- "MountPoint": "/var/lib/arvados/keep0",
- "DeviceNum": 65029,
- "BytesFree": 222532972544,
- "BytesUsed": 435456679936
- },
- "InternalStats": {
- "Errors": 0,
- "InBytes": 1111,
- "OutBytes": 0,
- "OpenOps": 1,
- "StatOps": 4,
- "FlockOps": 0,
- "UtimesOps": 0,
- "CreateOps": 0,
- "RenameOps": 0,
- "UnlinkOps": 0,
- "ReaddirOps": 0
- }
- }
- ],
- "BufferPool": {
- "BytesAllocatedCumulative": 67108864,
- "BuffersMax": 20,
- "BuffersInUse": 0
- },
- "PullQueue": {
- "InProgress": 0,
- "Queued": 0
- },
- "TrashQueue": {
- "InProgress": 0,
- "Queued": 0
- },
- "RequestsCurrent": 1,
- "RequestsMax": 40,
- "Version": "dev"
-}
+<pre>scrape_configs:
+ - job_name: keepstore
+ bearer_token: your_management_token_goes_here
+ static_configs:
+ - targets:
+ - "keep0.ClusterID.example.com:25107"
</pre>
-h2. Keep-balance
-
-Keep-balance exports metrics at @/metrics@ -- e.g., @http://keep.zzzzz.arvadosapi.com:9005/metrics@.
-
-table(table table-bordered table-condensed).
-|_. Name|_. Type|_. Description|
-|arvados_keep_total_{replicas,blocks,bytes}|gauge|stored data (stored in backend volumes, whether referenced or not)|
-|arvados_keep_garbage_{replicas,blocks,bytes}|gauge|garbage data (unreferenced, and old enough to trash)|
-|arvados_keep_transient_{replicas,blocks,bytes}|gauge|transient data (unreferenced, but too new to trash)|
-|arvados_keep_overreplicated_{replicas,blocks,bytes}|gauge|overreplicated data (more replicas exist than are needed)|
-|arvados_keep_underreplicated_{replicas,blocks,bytes}|gauge|underreplicated data (fewer replicas exist than are needed)|
-|arvados_keep_lost_{replicas,blocks,bytes}|gauge|lost data (referenced by collections, but not found on any backend volume)|
-|arvados_keep_dedup_block_ratio|gauge|deduplication ratio (block references in collections ÷ distinct blocks referenced)|
-|arvados_keep_dedup_byte_ratio|gauge|deduplication ratio (block references in collections ÷ distinct blocks referenced, weighted by block size)|
-|arvados_keepbalance_get_state_seconds|summary|time to get all collections and keepstore volume indexes for one iteration|
-|arvados_keepbalance_changeset_compute_seconds|summary|time to compute changesets for one iteration|
-|arvados_keepbalance_send_pull_list_seconds|summary|time to send pull lists to all keepstore servers for one iteration|
-|arvados_keepbalance_send_trash_list_seconds|summary|time to send trash lists to all keepstore servers for one iteration|
-|arvados_keepbalance_sweep_seconds|summary|time to complete one iteration|
-
-Each @arvados_keep_@ storage state statistic above is presented as a set of three metrics:
-
-table(table table-bordered table-condensed).
-|*_blocks|distinct block hashes|
-|*_bytes|bytes stored on backend volumes|
-|*_replicas|objects/files stored on backend volumes|
+table(table table-bordered table-condensed table-hover).
+|_. Component|_. Metrics endpoint|
+|arvados-api-server||
+|arvados-controller|✓|
+|arvados-dispatch-cloud|✓|
+|arvados-git-httpd||
+|arvados-node-manager||
+|arvados-ws||
+|composer||
+|keepproxy||
+|keepstore|✓|
+|keep-balance|✓|
+|keep-web|✓|
+|sso-provider||
+|workbench1||
+|workbench2||
h2. Node manager
-The node manager status end point provides a snapshot of internal status at the time of the most recent wishlist update.
+The node manager does not export Prometheus-style metrics, but its @/status.json@ endpoint provides a snapshot of internal status at the time of the most recent wishlist update.
+
+<pre>curl -sfH "Authorization: Bearer your_management_token_goes_here" "http://0.0.0.0:8989/status.json"
+</pre>
table(table table-bordered table-condensed).
|_. Attribute|_. Type|_. Description|
---
layout: default
navsection: admin
-title: "Migrating account providers"
+title: Changing upstream login providers
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-This page describes how to enable users to use more than one provider to log into the same Arvados account. This can be used to migrate account providers, for example, from LDAP to Google. In order to do this, users must be able to log into both the "old" and "new" providers.
+This page describes how to enable users to use more than one upstream identity provider to log into the same Arvados account. This can be used to migrate account providers, for example, from LDAP to Google. In order to do this, users must be able to log into both the "old" and "new" providers.
-h2. Configure multiple providers in SSO
+h2. Configure multiple or alternate providers in SSO
-In @application.yml@ for the SSO server, enable both @google_oauth2@ and @ldap@ providers:
+In @application.yml@ for the SSO server, you can enable both @google_oauth2@ and @ldap@ providers:
<pre>
production:
Restart the SSO server after changing the configuration.
+h2. Matching on email address
+
+If the new account provider supplies an email address (primary or alternate) that matches an existing user account, the user will be logged into that account. No further migration is necessary, and the old provider can be removed from the SSO configuration.
+
h2. Link accounts
-Instruct users to go through the process of "linking accounts":{{site.baseurl}}/user/topics/link-accounts.html
+If the new provider cannot provide matching email addresses, users will have to migrate manually by "linking accounts":{{site.baseurl}}/user/topics/link-accounts.html .
After linking accounts, users can use the new provider to access their existing Arvados account.
--- /dev/null
+---
+layout: default
+navsection: admin
+title: "Reassign user data ownership"
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+If a user leaves an organization and stops using their Arvados account, it may be desirable to reassign the data owned by that user to another user to maintain easy access.
+
+This is currently a command-line-based, admin-only feature.
+
+h3. Step 1: Determine user uuids
+
+User uuids can be determined by browsing workbench or using @arv user list@ at the command line.
+
+The "old user" is the user that is leaving the organization.
+
+The "new user" is the user that will gain ownership of the old user's data. This includes collections, projects, container requests, workflows, and git repositories owned by the old user. It also transfers any permissions granted to the old user, to the new user.
+
+In the example below, @x1u39-tpzed-3kz0nwtjehhl0u4@ is the old user and @x1u39-tpzed-fr97h9t4m5jffxs@ is the new user.
+
+h3. Step 2: Create a project
+
+Create a project owned by the new user that will hold the data from the old user.
+
+<pre>
+$ arv --format=uuid group create --group '{"group_class": "project", "name": "Data from old user", "owner_uuid": "x1u39-tpzed-fr97h9t4m5jffxs"}'
+x1u39-j7d0g-mczqiguhil13083
+</pre>
+
+h3. Step 3: Reassign data from the old user to the new user and project
+
+The @user merge@ method reassigns data from the old user to the new user.
+
+<pre>
+$ arv user merge --old-user-uuid=x1u39-tpzed-3kz0nwtjehhl0u4 \
+ --new-user-uuid=x1u39-tpzed-fr97h9t4m5jffxs \
+ --new-owner-uuid=x1u39-j7d0g-mczqiguhil13083
+</pre>
+
+After reassigning data, use @unsetup@ to deactivate the old user's account.
+
+<pre>
+$ arv user unsetup --uuid=x1u39-tpzed-3kz0nwtjehhl0u4
+</pre>
+
+Note that authorization credentials (API tokens, ssh keys) are *not* transferred to the new user, as this would potentially give the old user access to the new user's account.
--- /dev/null
+---
+layout: default
+navsection: admin
+title: Securing API access with scoped tokens
+...
+
+By default, Arvados API tokens grant unlimited access to a user account, and admin account tokens have unlimited access to the whole system. If you want to grant restricted access to a user account, you can create a "scoped token", which is an Arvados API token limited to accessing specific APIs.
+
+One use of token scopes is to grant access to data, such as a collection, to users who do not have an Arvados account on your cluster. This is done by creating a scoped token that only allows getting a specific record. An example of this is "creating a collection sharing link.":{{site.baseurl}}/sdk/python/cookbook.html#sharing_link
+
+Another example is situations where admin access is required but there is risk of the token being compromised. Setting a scope prevents the token from being used for any action other than the specific action the token is intended for. For example, "synchronizing user accounts on a shell node.":{{site.baseurl}}/install/install-shell-server.html#scoped-token
+
+h2. Defining scopes
+
+A "scope" consists of an HTTP method and an API path. A token can have multiple scopes. Token scopes act as a whitelist: the API server checks the HTTP method and the API path of every request against the scopes of the request token. Scopes are also described on the "API Authorization":{{site.baseurl}}/api/tokens.html#scopes page of the "API documentation":{{site.baseurl}}/api .
+
+These examples use @/arvados/v1/collections@, but can be applied to any endpoint. Consult the "API documentation":{{site.baseurl}}/api to determine the endpoints for specific methods.
+
+The scope @["GET", "/arvados/v1/collections"]@ will allow only GET or HEAD requests for the list of collections. Any other HTTP method or path (including a request for a specific collection record, e.g. a request with path @/arvados/v1/collections/zzzzz-4zz18-0123456789abcde@) will return a permission error.
+
+A trailing slash in a scope is significant. The scope @["GET", "/arvados/v1/collections/"]@ will allow only GET or HEAD requests *starting with* @/arvados/v1/collections/@. A request for an individual record path (@/arvados/v1/collections/zzzzz-4zz18-0123456789abcde@) is allowed, but a request to list collections (@/arvados/v1/collections@) will be denied because it does not end with @/@. (API requests with a trailing @/@ have the slash stripped before the scope is checked.)
+
+The scope can include an object uuid. The scope @["GET", "/arvados/v1/collections/zzzzz-4zz18-0123456789abcde"]@ only permits requests to read the record @zzzzz-4zz18-0123456789abcde@.
+
+Since a token can have multiple scopes, use @[["GET", "/arvados/v1/collections"], ["GET", "/arvados/v1/collections/"]]@ to allow both listing collections and fetching individual collection records. This will reject requests to create or change collections, or access any other API method.
+
+Object create calls use the @POST@ method. A scope of @["POST", "/arvados/v1/collections"]@ will allow creating collections, but not reading, listing or updating them (or accessing anything else).
+
+Object update calls use the @PATCH@ method. A scope of @["PATCH", "/arvados/v1/collections/"]@ will allow updating collections, but not listing or creating them. (Note: while GET requests are denied, an object can still be read indirectly by sending an empty PATCH, which returns the unmodified object as the result.)
+
+Similarly, you can use a scope of @["PATCH", "/arvados/v1/collections/zzzzz-4zz18-0123456789abcde"]@ to restrict updates to a single collection.
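
The matching rules above can be summarized in a small Python model. This is an illustrative simplification, not the server's actual implementation (it omits details such as the default @all@ scope):

```python
def scope_allows(scopes, method, path):
    """Return True if any [method, path] scope in the token's scope
    list permits this request, per the rules described above."""
    # HEAD requests are checked against GET scopes
    if method == "HEAD":
        method = "GET"
    # a trailing slash on the request path is stripped before checking
    if path.endswith("/"):
        path = path.rstrip("/")
    for scope_method, scope_path in scopes:
        if scope_method != method:
            continue
        if scope_path.endswith("/"):
            # scope with trailing slash: prefix match
            if path.startswith(scope_path):
                return True
        elif path == scope_path:
            # scope without trailing slash: exact match only
            return True
    return False
```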
+
+h2. Creating a scoped token
+
+A scoped token can be created at the command line:
+
+<pre>
+$ arv api_client_authorization create --api-client-authorization '{"scopes": [["GET", "/arvados/v1/collections"], ["GET", "/arvados/v1/collections/"]]}'
+{
+ "href":"/api_client_authorizations/x1u39-gj3su-bizbsw0mx5pju3w",
+ "kind":"arvados#apiClientAuthorization",
+ "etag":"9yk144t0v6cvyp0342exoh2vq",
+ "uuid":"x1u39-gj3su-bizbsw0mx5pju3w",
+ "owner_uuid":"x1u39-tpzed-fr97h9t4m5jffxs",
+ "created_at":"2020-03-12T20:36:12.517375422Z",
+ "modified_by_client_uuid":null,
+ "modified_by_user_uuid":null,
+ "modified_at":null,
+ "user_id":3,
+ "api_client_id":7,
+ "api_token":"5a74htnoqwkhtfo2upekpfbsg04hv7cy5v4nowf7dtpxer086m",
+ "created_by_ip_address":null,
+ "default_owner_uuid":null,
+ "expires_at":null,
+ "last_used_at":null,
+ "last_used_by_ip_address":null,
+ "scopes":[
+ [
+ "GET",
+ "/arvados/v1/collections"
+ ],
+ [
+ "GET",
+ "/arvados/v1/collections/"
+ ]
+ ]
+}
+</pre>
+
+The response will include an @api_token@ field, which is the newly issued secret token. It can be passed directly to the API server that issued it, or can be used to construct a @v2@ token. A @v2@ format token is required if the token will be used to access other clusters in an Arvados federation. An Arvados @v2@ format token consists of three fields separated by slashes: the prefix @v2@, followed by the token uuid, followed by the token secret. For example: @v2/x1u39-gj3su-bizbsw0mx5pju3w/5a74htnoqwkhtfo2upekpfbsg04hv7cy5v4nowf7dtpxer086m@.
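A @v2@ token can be assembled from the @uuid@ and @api_token@ fields of the response above, for example:

```python
# Values taken from the api_client_authorization record shown above.
uuid = "x1u39-gj3su-bizbsw0mx5pju3w"
secret = "5a74htnoqwkhtfo2upekpfbsg04hv7cy5v4nowf7dtpxer086m"

# v2 format: the literal prefix "v2", the token uuid, and the token
# secret, joined by slashes.
v2_token = "/".join(["v2", uuid, secret])
print(v2_token)
# v2/x1u39-gj3su-bizbsw0mx5pju3w/5a74htnoqwkhtfo2upekpfbsg04hv7cy5v4nowf7dtpxer086m
```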
<pre>
Clusters:
- uuid_prefix:
+ ClusterID:
Containers:
UsePreemptibleInstances: true
InstanceTypes:
When @UsePreemptibleInstances@ is enabled, child containers (workflow steps) will automatically be made preemptible. Note that because preempting the workflow runner would cancel the entire workflow, the workflow runner runs in a reserved (non-preemptible) instance.
-If you are using "crunch-dispatch-cloud":{{site.baseurl}}/install/install-dispatch-cloud.html no additional configuration is required.
+If you are using "arvados-dispatch-cloud":{{site.baseurl}}/install/install-dispatch-cloud.html no additional configuration is required.
If you are using the legacy Nodemanager, "see below":#nodemanager .
The storage classes for each volume are set in the per-volume "keepstore configuration":{{site.baseurl}}/install/install-keepstore.html
<pre>
-Volumes:
- - ... Volume configuration ...
- #
- # If no storage classes are specified, will use [default]
- #
- StorageClasses: null
-
- - ... Volume configuration ...
- #
- # Specify this volume is in the "archival" storage class.
- #
- StorageClasses: [archival]
-
+ Volumes:
+ ClusterID-nyw5e-000000000000000:
+ # This volume is in the "default" storage class.
+ StorageClasses:
+ default: true
+ ClusterID-nyw5e-000000000000001:
+ # Specify this volume is in the "archival" storage class.
+ StorageClasses:
+ archival: true
</pre>
Names of storage classes are internal to the cluster and decided by the administrator. Aside from "default", Arvados currently does not define any standard storage class names.
+++ /dev/null
----
-layout: default
-navsection: admin
-title: Troubleshooting
-...
-
-{% comment %}
-Copyright (C) The Arvados Authors. All rights reserved.
-
-SPDX-License-Identifier: CC-BY-SA-3.0
-{% endcomment %}
-
-Using a distributed system with several services working together sometimes makes it difficult to find the root cause of errors, as one single client request usually means several different requests to more than one service.
-
-To deal with this difficulty, Arvados creates a request ID that gets carried over different services as the requests take place. This ID has a specific format and it's comprised of the prefix "@req-@" followed by 20 random alphanumeric characters:
-
-<pre>req-frdyrcgdh4rau1ajiq5q</pre>
-
-This ID gets propagated via an HTTP @X-Request-Id@ header, and gets logged on every service.
-
-h3. API Server error reporting and logging
-
-In addition to providing the request ID on every HTTP response, the API Server adds it to every error message so that all clients show enough information to the user to be able to track a particular issue. As an example, let's suppose that we get the following error when trying to create a collection using the CLI tools:
-
-<pre>
-$ arv collection create --collection '{}'
-Error: #<RuntimeError: Whoops, something bad happened> (req-ku5ct9ehw0y71f1c5p79)
-</pre>
-
-The API Server logs every request in JSON format on the @production.log@ (usually under @/var/www/arvados-api/current/log/@ when installing from packages) file, so we can retrieve more information about this by using @grep@ and @jq@ tools:
-
-<pre>
-# grep req-ku5ct9ehw0y71f1c5p79 /var/www/arvados-api/current/log/production.log | jq .
-{
- "method": "POST",
- "path": "/arvados/v1/collections",
- "format": "json",
- "controller": "Arvados::V1::CollectionsController",
- "action": "create",
- "status": 422,
- "duration": 1.52,
- "view": 0.25,
- "db": 0,
- "request_id": "req-ku5ct9ehw0y71f1c5p79",
- "client_ipaddr": "127.0.0.1",
- "client_auth": "zzzzz-gj3su-jllemyj9v3s5emu",
- "exception": "#<RuntimeError: Whoops, something bad happened>",
- "exception_backtrace": "/var/www/arvados-api/current/app/controllers/arvados/v1/collections_controller.rb:43:in `create'\n/var/lib/gems/ruby/2.3.0/gems/actionpack-5.0.7.2/lib/action_controller/metal/basic_implicit_render.rb:4:in `send_action'\n ...[snipped]",
- "params": {
- "collection": "{}",
- "_profile": "true",
- "cluster_id": "",
- "collection_given": "true",
- "ensure_unique_name": "false",
- "help": "false"
- },
- "@timestamp": "2019-07-15T16:40:41.726634182Z",
- "@version": "1",
- "message": "[422] POST /arvados/v1/collections (Arvados::V1::CollectionsController#create)"
-}
-</pre>
-
-When logging a request that produced an error, the API Server adds @exception@ and @exception_backtrace@ keys to the JSON log. The latter includes the complete error stack trace as a string, and can be displayed in a more readable form like so:
-
-<pre>
-# grep req-ku5ct9ehw0y71f1c5p79 /var/www/arvados-api/current/log/production.log | jq -r .exception_backtrace
-/var/www/arvados-api/current/app/controllers/arvados/v1/collections_controller.rb:43:in `create'
-/var/lib/gems/ruby/2.3.0/gems/actionpack-5.0.7.2/lib/action_controller/metal/basic_implicit_render.rb:4:in `send_action'
-/var/lib/gems/ruby/2.3.0/gems/actionpack-5.0.7.2/lib/abstract_controller/base.rb:188:in `process_action'
-/var/lib/gems/ruby/2.3.0/gems/actionpack-5.0.7.2/lib/action_controller/metal/rendering.rb:30:in `process_action'
-/var/lib/gems/ruby/2.3.0/gems/actionpack-5.0.7.2/lib/abstract_controller/callbacks.rb:20:in `block in process_action'
-/var/lib/gems/ruby/2.3.0/gems/activesupport-5.0.7.2/lib/active_support/callbacks.rb:126:in `call'
-...
-</pre>
\ No newline at end of file
--- /dev/null
+logging.html.textile.liquid
\ No newline at end of file
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-The "containers" API is the recommended way to submit compute work to Arvados. It supersedes the "jobs" API, which is end-of-life as of Arvados 1.5.
+The "containers" API is the recommended way to submit compute work to Arvados. It supersedes the "jobs" API, which is end-of-life in Arvados 2.0.
h2. Benefits over the "jobs" API
---
layout: default
-navsection: admin
+navsection: installguide
title: "Upgrading Arvados and Release notes"
...
TODO: extract this information based on git commit messages and generate changelogs / release notes automatically.
{% endcomment %}
-h3(#master). development master (as of 2019-08-12)
+<notextile>
+<div class="releasenotes">
+</notextile>
-h4. Delete "keep_services" records
+h2(#master). development master (as of 2020-02-07)
-After all keepproxy and keepstore configurations have been migrated to the centralized configuration file (see below), all keep_services records you added manually during installation should be removed. System logs from keepstore and keepproxy at startup, as well as the output of @arvados-server config-check@, will remind you to do this.
+"Upgrading from 2.0.0":#v2_0_0
-<notextile><pre><code>$ export ARVADOS_API_HOST=...
-$ export ARVADOS_API_TOKEN=...
-$ arv --format=uuid keep_service list | xargs -n1 arv keep_service delete --uuid
-</code></pre></notextile>
+None in current development master.
-Once these old records are removed, @arv keep_service list@ will instead return the services listed under Services/Keepstore/InternalURLs and Services/Keepproxy/ExternalURL in your centralized configuration file.
+h2(#v2_0_0). v2.0.0 (2020-02-07)
+
+"Upgrading from 1.4":#v1_4_1
-h4. Keep-balance configuration migration
+Arvados 2.0 is a major upgrade, with many changes. Please read these upgrade notes carefully before you begin.
+
+h3. Migrating to centralized config.yml
+
+See "Migrating Configuration":config-migration.html for notes on migrating legacy per-component configuration files to the new centralized @/etc/arvados/config.yml@.
+
+To ensure a smooth transition, the per-component config files continue to be read, and take precedence over the centralized configuration. Your cluster should continue to function after upgrade but before doing the full configuration migration. However, several services (keepstore, keep-web, keepproxy) require a minimal @/etc/arvados/config.yml@ in order to start:
+
+<pre>
+Clusters:
+ zzzzz:
+ Services:
+ Controller:
+ ExternalURL: "https://zzzzz.example.com"
+</pre>
+
+h3. Keep-balance configuration migration
(feature "#14714":https://dev.arvados.org/issues/14714 ) The keep-balance service can now be configured using the centralized configuration file at @/etc/arvados/config.yml@. The following command line and configuration options have changed.
Please see the "config migration guide":{{site.baseurl}}/admin/config-migration.html and "keep-balance install guide":{{site.baseurl}}/install/install-keep-balance.html for more details.
-h4. Arv-git-httpd configuration migration
+h3. Arv-git-httpd configuration migration
(feature "#14712":https://dev.arvados.org/issues/14712 ) The arv-git-httpd package can now be configured using the centralized configuration file at @/etc/arvados/config.yml@. Configuration via individual command line arguments is no longer available. Please see "arv-git-httpd's config migration guide":{{site.baseurl}}/admin/config-migration.html#arv-git-httpd for more details.
-h4. Keepstore and keep-web configuration migration
+h3. Keepstore and keep-web configuration migration
keepstore and keep-web no longer support configuration via (previously deprecated) command line configuration flags and environment variables.
keepstore now supports the legacy @keepstore.yml@ config format (used by Arvados 1.4) and the new cluster config file format. Please check the "keepstore config migration notes":{{site.baseurl}}/admin/config-migration.html#keepstore and "keepstore install guide":{{site.baseurl}}/install/install-keepstore.html for more details.
-h4. Jobs API is read-only
-
-(task "#15133":https://dev.arvados.org/issues/15133 ) The legacy 'jobs' API is now read-only. It has long been superceded by containers / container_requests (aka crunch v2). Arvados installations since the end of 2017 (v1.1.0) have probably only used containers, and are unaffected by this change.
-
-So that older Arvados sites don't lose access to legacy records, the API has been converted to read-only. Creating and updating jobs (and related types job_task, pipeline_template and pipeline_instance) is disabled and much of the business logic related has been removed, along with various other code specific to the jobs API. Specifically, the following programs associated with the jobs API have been removed: @crunch-dispatch.rb@, @crunch-job@, @crunchrunner@, @arv-run-pipeline-instance@, @arv-run@.
-
-h4. Keepproxy configuration migration
+h3. Keepproxy configuration migration
(feature "#14715":https://dev.arvados.org/issues/14715 ) Keepproxy can now be configured using the centralized config at @/etc/arvados/config.yml@. Configuration via individual command line arguments is no longer available and the @DisableGet@, @DisablePut@, and @PIDFile@ configuration options are no longer supported. If you are still using the legacy config and @DisableGet@ or @DisablePut@ are set to true or @PIDFile@ has a value, keepproxy will produce an error and fail to start. Please see "keepproxy's config migration guide":{{site.baseurl}}/admin/config-migration.html#keepproxy for more details.
-h4. No longer stripping ':' from strings in serialized database columns
+h3. Delete "keep_services" records
-(bug "#15311":https://dev.arvados.org/issues/15311 ) Strings read from serialized columns in the database with a leading ':' would have the ':' stripped after loading the record. This behavior existed due to legacy serialization behavior which stored Ruby symbols with a leading ':'. Unfortunately this corrupted fields where the leading ":" was intentional. This behavior has been removed.
+After all keepproxy and keepstore configurations have been migrated to the centralized configuration file, all keep_services records you added manually during installation should be removed. System logs from keepstore and keepproxy at startup, as well as the output of @arvados-server config-check@, will remind you to do this.
-You can test if any records in your database are affected by going to the API server directory and running @bundle exec rake symbols:check@. This will report which records contain fields with a leading ':' that would previously have been stripped. If there are records to be updated, you can update the database using @bundle exec rake symbols:stringify@.
+<notextile><pre><code>$ export ARVADOS_API_HOST=...
+$ export ARVADOS_API_TOKEN=...
+$ arv --format=uuid keep_service list | xargs -n1 arv keep_service delete --uuid
+</code></pre></notextile>
+
+Once these old records are removed, @arv keep_service list@ will instead return the services listed under Services/Keepstore/InternalURLs and Services/Keepproxy/ExternalURL in your centralized configuration file.
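For reference, those entries live under the cluster's @Services@ section of the centralized configuration, along these lines (the cluster ID, hostnames, and ports are placeholders for illustration):

```yaml
Clusters:
  zzzzz:
    Services:
      Keepstore:
        InternalURLs:
          "http://keep0.zzzzz.example.com:25107": {}
          "http://keep1.zzzzz.example.com:25107": {}
      Keepproxy:
        ExternalURL: "https://keep.zzzzz.example.com"
```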
-h4. Enabling Postgres trigram indexes
+h3. Enabling Postgres trigram indexes
Feature "#15106":https://dev.arvados.org/issues/15106 improves the speed and functionality of full text search by introducing trigram indexes on text searchable database columns via a migration. Prior to updating, you must first install the postgresql-contrib package on your system and subsequently run the <code class="userinput">CREATE EXTENSION pg_trgm</code> SQL command on the arvados_production database as a postgres superuser.
Subsequently, the <code class="userinput">psql -d 'arvados_production' -c '\dx'</code> command will display the installed extensions for the arvados_production database. This list should now contain @pg_trgm@.
-h4. Migrating to centralized config.yml
+h3. New Workbench 2
+
+Workbench 2 is now ready for regular use. Follow the instructions to "install workbench 2":../install/install-workbench2-app.html
-See "Migrating Configuration":config-migration.html for notes on migrating legacy per-component configuration files to the new centralized @/etc/arvados/config.yml@. To ensure a smooth transition, the per-component config files continue to be read, and take precedence over the centralized configuration.
+h3. New property vocabulary format for Workbench2
-h3(#v1_4_1). v1.4.1 (2019-09-20)
+(feature "#14151":https://dev.arvados.org/issues/14151) Workbench2 supports a new vocabulary format which isn't compatible with the previous one. Please read the "workbench2 vocabulary format admin page":{{site.baseurl}}/admin/workbench2-vocabulary.html for more information.
-h4. Centos7 Python 3 dependency upgraded to rh-python36
+h3. Cloud installations only: node manager replaced by arvados-dispatch-cloud
+
+Node manager is deprecated and replaced by @arvados-dispatch-cloud@. No automated config migration is available. Follow the instructions to "install the cloud dispatcher":../install/install-dispatch-cloud.html
+
+*Only one dispatch process should be running at a time.* If you are migrating a system that currently runs Node manager and @crunch-dispatch-slurm@, it is safest to remove the @crunch-dispatch-slurm@ service entirely before installing @arvados-dispatch-cloud@.
+
+<notextile>
+<pre><code>~$ <span class="userinput">sudo systemctl --now disable crunch-dispatch-slurm</span>
+~$ <span class="userinput">sudo apt-get remove crunch-dispatch-slurm</span>
+</code></pre>
+</notextile>
+
+h3. Jobs API is read-only
+
+(task "#15133":https://dev.arvados.org/issues/15133 ) The legacy 'jobs' API is now read-only. It has been superseded since Arvados 1.1 by containers / container_requests (aka crunch v2). Arvados installations since the end of 2017 (v1.1.0) have probably only used containers, and are unaffected by this change.
+
+So that older Arvados sites don't lose access to legacy records, the API has been converted to read-only. Creating and updating jobs (and related types job_task, pipeline_template and pipeline_instance) is disabled, and much of the related business logic has been removed, along with various other code specific to the jobs API. Specifically, the following programs associated with the jobs API have been removed: @crunch-dispatch.rb@, @crunch-job@, @crunchrunner@, @arv-run-pipeline-instance@, @arv-run@.
+
+h3. "/" prohibited in collection and project names
+
+(issue "#15836":https://dev.arvados.org/issues/15836) By default, Arvados now rejects new names containing the @/@ character when creating or renaming collections and projects. Previously, these names were permitted, but the resulting objects were invisible in the WebDAV "home" tree. If you prefer, you can restore the previous behavior, and optionally configure a substitution string to make the affected objects accessible via WebDAV. See @ForwardSlashNameSubstitution@ in the "configuration reference":config.html.
+
+h3. No longer stripping ':' from strings in serialized database columns
+
+(bug "#15311":https://dev.arvados.org/issues/15311 ) Strings read from serialized columns in the database with a leading ':' would have the ':' stripped after loading the record. This behavior existed due to legacy serialization behavior which stored Ruby symbols with a leading ':'. Unfortunately this corrupted fields where the leading ":" was intentional. This behavior has been removed.
+
+You can test if any records in your database are affected by going to the API server directory and running @bundle exec rake symbols:check@. This will report which records contain fields with a leading ':' that would previously have been stripped. If there are records to be updated, you can update the database using @bundle exec rake symbols:stringify@.
+
+h3. Scoped tokens should use PATCH for updates
+
+The API server accepts both PUT and PATCH for updates, but PUT requests are normalized to PATCH by arvados-controller. Scoped tokens that permit updates should therefore specify @PATCH@ rather than @PUT@.
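For example, a scope list intended to permit collection updates should name @PATCH@ rather than @PUT@, following the scope format used in the scoped-token examples earlier in this document:

```json
{"scopes": [["PATCH", "/arvados/v1/collections/"]]}
```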
+
+h2(#v1_4_1). v1.4.1 (2019-09-20)
+
+"Upgrading from 1.4.0":#v1_4_0
+
+h3. Centos7 Python 3 dependency upgraded to rh-python36
The Python 3 dependency for Centos7 Arvados packages was upgraded from rh-python35 to rh-python36.
-h3(#v1_4_0). v1.4.0 (2019-06-05)
+h2(#v1_4_0). v1.4.0 (2019-06-05)
-h4. Populating the new file_count and file_size_total columns on the collections table
+"Upgrading from 1.3.3":#v1_3_3
+
+h3. Populating the new file_count and file_size_total columns on the collections table
As part of story "#14484":https://dev.arvados.org/issues/14484, two new columns were added to the collections table in a database migration. If your installation has a large collections table, this migration may take some time. We've seen it take ~5 minutes on an installation with 250k collections, but your mileage may vary.
The new columns are initialized with a zero value. In order to populate them, it is necessary to run a script called <code class="userinput">populate-file-info-columns-in-collections.rb</code> from the scripts directory of the API server. This can be done out of band, ideally directly after the API server has been upgraded to v1.4.0.
-h4. Stricter collection manifest validation on the API server
+h3. Stricter collection manifest validation on the API server
As a consequence of "#14482":https://dev.arvados.org/issues/14482, the Ruby SDK performs more rigorous collection manifest validation. Collections created after 2015-05 are unlikely to be invalid; however, you may check for invalid manifests using the script below.
The script will return a final report enumerating any invalid collection by UUID, with its creation date and error message so you can take the proper correction measures, if needed.
-h4. Python packaging change
+h3. Python packaging change
As part of story "#9945":https://dev.arvados.org/issues/9945, the distribution packaging (deb/rpm) of our Python packages has changed. These packages now include a built-in virtualenv to reduce dependencies on system packages. We have also stopped packaging and publishing backports for all the Python dependencies of our packages, as they are no longer needed.
</pre>
</notextile>
-h4. python-arvados-cwl-runner deb/rpm package now conflicts with python-cwltool deb/rpm package
+h3. python-arvados-cwl-runner deb/rpm package now conflicts with python-cwltool deb/rpm package
As part of story "#9945":https://dev.arvados.org/issues/9945, the distribution packaging (deb/rpm) of our Python packages has changed. The python-arvados-cwl-runner package now includes a version of cwltool. If present, the python-cwltool and cwltool distribution packages will need to be uninstalled before the python-arvados-cwl-runner deb or rpm package can be installed.
-h4. Centos7 Python 3 dependency upgraded to rh-python35
+h3. Centos7 Python 3 dependency upgraded to rh-python35
As part of story "#9945":https://dev.arvados.org/issues/9945, the Python 3 dependency for Centos7 Arvados packages was upgraded from SCL python33 to rh-python35.
-h4. Centos7 package for libpam-arvados depends on the python-pam package, which is available from EPEL
+h3. Centos7 package for libpam-arvados depends on the python-pam package, which is available from EPEL
As part of story "#9945":https://dev.arvados.org/issues/9945, it was discovered that the Centos7 package for libpam-arvados was missing a dependency on the python-pam package, which is available from the EPEL repository. The dependency has been added to the libpam-arvados package. This means that going forward, the EPEL repository will need to be enabled to install libpam-arvados on Centos7.
-h4. New configuration
+h3. New configuration
Arvados is migrating to a centralized configuration file for all components. During the migration, legacy configuration files will continue to be loaded. See "Migrating Configuration":config-migration.html for details.
-h3(#v1_3_3). v1.3.3 (2019-05-14)
+h2(#v1_3_3). v1.3.3 (2019-05-14)
+
+"Upgrading from 1.3.0":#v1_3_0
This release corrects a potential data loss issue. If you are running Arvados 1.3.0 or 1.3.1, we strongly recommend disabling @keep-balance@ until you can upgrade to 1.3.3 or 1.4.0. With keep-balance disabled, there is no chance of data loss.
-We've put together a "wiki page":https://dev.arvados.org/projects/arvados/wiki/Recovering_lost_data which outlines how to recover blocks which have been put in the trash, but not yet deleted, as well as how to identify any collections which have missing blocks so that they can be regenerated. The keep-balance component has been enhanced to provide a list of missing blocks and affected collections and we've provided a "utility script":https://github.com/curoverse/arvados/blob/master/tools/keep-xref/keep-xref.py which can be used to identify the workflows that generated those collections and who ran those workflows, so that they can be rerun.
+We've put together a "wiki page":https://dev.arvados.org/projects/arvados/wiki/Recovering_lost_data which outlines how to recover blocks which have been put in the trash but not yet deleted, as well as how to identify any collections which have missing blocks so that they can be regenerated. The keep-balance component has been enhanced to provide a list of missing blocks and affected collections. We've also provided a "utility script":https://github.com/arvados/arvados/blob/master/tools/keep-xref/keep-xref.py which can be used to identify the workflows that generated those collections and who ran those workflows, so that they can be rerun.
-h3(#v1_3_0). v1.3.0 (2018-12-05)
+h2(#v1_3_0). v1.3.0 (2018-12-05)
+
+"Upgrading from 1.2":#v1_2_0
This release includes several database migrations, which will be executed automatically as part of the API server upgrade. On large Arvados installations, these migrations will take a while. We've seen the upgrade take 30 minutes or more on installations with a lot of collections.
-The @arvados-controller@ component now requires the /etc/arvados/config.yml file to be present. See <a href="{{ site.baseurl }}/install/install-controller.html#configuration">the @arvados-controller@ installation instructions</a>.
+The @arvados-controller@ component now requires the /etc/arvados/config.yml file to be present.
Support for the deprecated "jobs" API is broken in this release. Users who rely on it should not upgrade. This will be fixed in an upcoming 1.3.1 patch release; however, users are "encouraged to migrate":upgrade-crunch2.html as support for the "jobs" API will be dropped in an upcoming release. Users who are already using the "containers" API are not affected.
-h3(#v1_2_1). v1.2.1 (2018-11-26)
+h2(#v1_2_1). v1.2.1 (2018-11-26)
There are no special upgrade notes for this release.
-h3(#v1_2_0). v1.2.0 (2018-09-05)
+h2(#v1_2_0). v1.2.0 (2018-09-05)
+
+"Upgrading from 1.1.2 or 1.1.3":#v1_1_2
-h4. Regenerate Postgres table statistics
+h3. Regenerate Postgres table statistics
It is recommended to regenerate the table statistics for Postgres after upgrading to v1.2.0. If autovacuum is enabled on your installation, this script would do the trick:
If you also need to do the vacuum, you could adapt the script to run 'vacuum analyze' instead of 'analyze'.
-h4. New component: arvados-controller
+h3. New component: arvados-controller
Commit "db5107dca":https://dev.arvados.org/projects/arvados/repository/revisions/db5107dca adds a new system service, arvados-controller. More detail is available in story "#13496":https://dev.arvados.org/issues/13497.
-To add the Arvados Controller to your system please refer to the "installation instructions":../install/install-controller.html after upgrading your system to 1.2.0.
+To add the Arvados Controller to your system, please refer to the "installation instructions":../install/install-api-server.html after upgrading your system to 1.2.0.
Verify your setup by confirming that API calls appear in the controller's logs (_e.g._, @journalctl -fu arvados-controller@) while loading a workbench page.
-h3(#v1_1_4). v1.1.4 (2018-04-10)
+h2(#v1_1_4). v1.1.4 (2018-04-10)
-h4. arvados-cwl-runner regressions (2018-04-05)
+"Upgrading from 1.1.3":#v1_1_3
+
+h3. arvados-cwl-runner regressions (2018-04-05)
<strong>Secondary files missing from toplevel workflow inputs</strong>
This bug has been fixed in Arvados release v1.2.0.
-h3(#v1_1_3). v1.1.3 (2018-02-08)
+h2(#v1_1_3). v1.1.3 (2018-02-08)
There are no special upgrade notes for this release.
-h3(#v1_1_2). v1.1.2 (2017-12-22)
+h2(#v1_1_2). v1.1.2 (2017-12-22)
+
+"Upgrading from 1.1.0 or 1.1.1":#v1_1_0
-h4. The minimum version for Postgres is now 9.4 (2017-12-08)
+h3. The minimum version for Postgres is now 9.4 (2017-12-08)
As part of story "#11908":https://dev.arvados.org/issues/11908, commit "8f987a9271":https://dev.arvados.org/projects/arvados/repository/revisions/8f987a9271 introduces a dependency on Postgres 9.4. Previously, Arvados required Postgres 9.3.
*# Install the @rh-postgresql94@ backport package from either Software Collections: http://doc.arvados.org/install/install-postgresql.html or the Postgres developers: https://www.postgresql.org/download/linux/redhat/
*# Restore from the backup using @psql@
-h3(#v1_1_1). v1.1.1 (2017-11-30)
+h2(#v1_1_1). v1.1.1 (2017-11-30)
There are no special upgrade notes for this release.
-h3(#v1_1_0). v1.1.0 (2017-10-24)
+h2(#v1_1_0). v1.1.0 (2017-10-24)
-h4. The minimum version for Postgres is now 9.3 (2017-09-25)
+h3. The minimum version for Postgres is now 9.3 (2017-09-25)
As part of story "#12032":https://dev.arvados.org/issues/12032, commit "68bdf4cbb1":https://dev.arvados.org/projects/arvados/repository/revisions/68bdf4cbb1 introduces a dependency on Postgres 9.3. Previously, Arvados required Postgres 9.1.
*# Install the @rh-postgresql94@ backport package from either Software Collections: http://doc.arvados.org/install/install-postgresql.html or the Postgres developers: https://www.postgresql.org/download/linux/redhat/
*# Restore from the backup using @psql@
-h3(#older). Older versions
+h2(#older). Older versions
-h4. Upgrade slower than usual (2017-06-30)
+h3. Upgrade slower than usual (2017-06-30)
As part of story "#11807":https://dev.arvados.org/issues/11807, commit "55aafbb":https://dev.arvados.org/projects/arvados/repository/revisions/55aafbb converts old "jobs" database records from YAML to JSON, making the upgrade process slower than usual.
* The conversion runs as a database migration, i.e., during the deb/rpm package upgrade process, while your API server is unavailable.
* Expect it to take about 1 minute per 20K jobs that have ever been created/run.
-h4. Service discovery overhead change in keep-web (2017-06-05)
+h3. Service discovery overhead change in keep-web (2017-06-05)
As part of story "#9005":https://dev.arvados.org/issues/9005, commit "cb230b0":https://dev.arvados.org/projects/arvados/repository/revisions/cb230b0 reduces service discovery overhead in keep-web requests.
* When upgrading keep-web _or keepproxy_ to/past this version, make sure to update API server as well. Otherwise, a bad token in a request can cause keep-web to fail future requests until either keep-web restarts or API server gets upgraded.
-h4. Node manager now has an http endpoint for management (2017-04-12)
+h3. Node manager now has an http endpoint for management (2017-04-12)
As part of story "#11349":https://dev.arvados.org/issues/11349, commit "2c094e2":https://dev.arvados.org/projects/arvados/repository/revisions/2c094e2 adds a "management" http server to nodemanager.
port = 8989</pre> (see example configuration files in source:services/nodemanager/doc or https://doc.arvados.org/install/install-nodemanager.html for more info)
* The server responds to @http://{address}:{port}/status.json@ with a summary of how many nodes are in each state (booting, busy, shutdown, etc.)
-h4. New websockets component (2017-03-23)
+h3. New websockets component (2017-03-23)
As part of story "#10766":https://dev.arvados.org/issues/10766, commit "e8cc0d7":https://dev.arvados.org/projects/arvados/repository/revisions/e8cc0d7 replaces puma with arvados-ws as the recommended websocket server.
* See http://doc.arvados.org/install/install-ws.html for install/upgrade instructions.
$ systemctl stop puma
</pre>
-h4. Change of database encoding for hashes and arrays (2017-03-06)
+h3. Change of database encoding for hashes and arrays (2017-03-06)
As part of story "#11168":https://dev.arvados.org/issues/11168, commit "660a614":https://dev.arvados.org/projects/arvados/repository/revisions/660a614 uses JSON instead of YAML to encode hashes and arrays in the database.
* Downgrading past this version is not supported, and is likely to cause errors. If this happens, the solution is to upgrade past this version.
* After upgrading, make sure to restart puma and crunch-dispatch-* processes.
-h4. Docker image format compatibility check (2017-02-03)
+h3. Docker image format compatibility check (2017-02-03)
As part of story "#10969":https://dev.arvados.org/issues/10969, commit "74a9dec":https://dev.arvados.org/projects/arvados/repository/revisions/74a9dec introduces a Docker image format compatibility check: the @arv keep docker@ command prevents users from inadvertently saving docker images that compute nodes won't be able to run.
* If your compute nodes run a version of *docker older than 1.10* you must override the default by adding to your API server configuration (@/etc/arvados/api/application.yml@): <pre><code class="yaml">docker_image_formats: ["v1"]</code></pre>
* Refer to the comments above @docker_image_formats@ in @/var/www/arvados-api/current/config/application.default.yml@ or source:services/api/config/application.default.yml or issue "#10969":https://dev.arvados.org/issues/10969 for more detail.
* *NOTE:* This does *not* include any support for migrating existing Docker images from v1 to v2 format. This will come later: for now, sites running Docker 1.9 or earlier should still *avoid upgrading Docker further than 1.9.*
-h4. Debian and RPM packages now have systemd unit files (2016-09-27)
+h3. Debian and RPM packages now have systemd unit files (2016-09-27)
Several Debian and RPM packages -- keep-balance ("d9eec0b":https://dev.arvados.org/projects/arvados/repository/revisions/d9eec0b), keep-web ("3399e63":https://dev.arvados.org/projects/arvados/repository/revisions/3399e63), keepproxy ("6de67b6":https://dev.arvados.org/projects/arvados/repository/revisions/6de67b6), and arvados-git-httpd ("9e27ddf":https://dev.arvados.org/projects/arvados/repository/revisions/9e27ddf) -- now enable their respective components using systemd. These components prefer YAML configuration files over command line flags ("3bbe1cd":https://dev.arvados.org/projects/arvados/repository/revisions/3bbe1cd).
** keepproxy - /etc/arvados/keepproxy/keepproxy.yml
** arvados-git-httpd - /etc/arvados/arv-git-httpd/arv-git-httpd.yml
-h4. Installation paths for Python modules and script changed (2016-05-31)
+h3. Installation paths for Python modules and script changed (2016-05-31)
Commits "ae72b172c8":https://dev.arvados.org/projects/arvados/repository/revisions/ae72b172c8 and "3aae316c25":https://dev.arvados.org/projects/arvados/repository/revisions/3aae316c25 change the filesystem location where Python modules and scripts are installed.
* Previous packages installed these files to the distribution's preferred path under @/usr/local@ (or the equivalent location in a Software Collection). Now they get installed to a path under @/usr@. This improves compatibility with other Python packages provided by the distribution. See "#9242":https://dev.arvados.org/issues/9242 for more background.
* If you simply import Python modules from scripts, or call Python tools relying on $PATH, you don't need to make any changes. If you have hardcoded full paths to some of these files (e.g., in symbolic links or configuration files), you will need to update those paths after this upgrade.
-h4. Crunchrunner package is required on compute and shell nodes (2016-04-25)
+h3. Crunchrunner package is required on compute and shell nodes (2016-04-25)
Commit "eebcb5e":https://dev.arvados.org/projects/arvados/repository/revisions/eebcb5e requires the crunchrunner package to be installed on compute nodes and shell nodes in order to run CWL workflows.
* On each Debian-based compute node and shell node, run: @sudo apt-get install crunchrunner@
* On each Red Hat-based compute node and shell node, run: @sudo yum install crunchrunner@
-h4. Keep permission signature algorithm change (2016-04-21)
+h3. Keep permission signature algorithm change (2016-04-21)
Commit "3c88abd":https://dev.arvados.org/projects/arvados/repository/revisions/3c88abd changes the Keep permission signature algorithm.
* All software components that generate signatures must be upgraded together. These are: keepstore, API server, keep-block-check, and keep-rsync. For example, if keepstore < 0.1.20160421183420 but API server >= 0.1.20160421183420, clients will not be able to read or write data in Keep.
* Jobs and client operations that are in progress during the upgrade (including arv-put's "resume cache") will fail.
-h4. Workbench's "Getting Started" popup disabled by default (2015-01-05)
+h3. Workbench's "Getting Started" popup disabled by default (2015-01-05)
Commit "e1276d6e":https://dev.arvados.org/projects/arvados/repository/revisions/e1276d6e disables Workbench's "Getting Started" popup by default.
* If you want new users to continue seeing this popup, set @enable_getting_started_popup: true@ in Workbench's @application.yml@ configuration.
-h4. Crunch jobs now have access to Keep-backed writable scratch storage (2015-12-03)
+h3. Crunch jobs now have access to Keep-backed writable scratch storage (2015-12-03)
Commit "5590c9ac":https://dev.arvados.org/projects/arvados/repository/revisions/5590c9ac makes a Keep-backed writable scratch directory available in crunch jobs (see "#7751":https://dev.arvados.org/issues/7751)
* All compute nodes must be upgraded to arvados-fuse >= 0.1.2015112518060 because crunch-job uses some new arv-mount flags (--mount-tmp, --mount-by-pdh) introduced in merge "346a558":https://dev.arvados.org/projects/arvados/repository/revisions/346a558
* Jobs will fail if the API server (in particular crunch-job from the arvados-cli gem) is upgraded without upgrading arvados-fuse on compute nodes.
-h4. Recommended configuration change for keep-web (2015-11-11)
+h3. Recommended configuration change for keep-web (2015-11-11)
Commit "1e2ace5":https://dev.arvados.org/projects/arvados/repository/revisions/1e2ace5 changes recommended config for keep-web (see "#5824":https://dev.arvados.org/issues/5824)
-* proxy/dns/ssl config should be updated to route "https://download.uuid_prefix.arvadosapi.com/" requests to keep-web (alongside the existing "collections" routing)
-* keep-web command line adds @-attachment-only-host download.uuid_prefix.arvadosapi.com@
+* proxy/dns/ssl config should be updated to route "https://download.ClusterID.example.com/" requests to keep-web (alongside the existing "collections" routing)
+* keep-web command line adds @-attachment-only-host download.ClusterID.example.com@
* Workbench config adds @keep_web_download_url@
* More info on the (still beta/non-TOC-linked) "keep-web doc page":http://doc.arvados.org/install/install-keep-web.html
-h4. Stopped containers are now automatically removed on compute nodes (2015-11-04)
+h3. Stopped containers are now automatically removed on compute nodes (2015-11-04)
Commit "1d1c6de":https://dev.arvados.org/projects/arvados/repository/revisions/1d1c6de removes stopped containers (see "#7444":https://dev.arvados.org/issues/7444)
* arvados-docker-cleaner removes _all_ docker containers as soon as they exit, effectively making @docker run@ default to @--rm@. If you run arvados-docker-cleaner on a host that does anything other than run crunch-jobs, and you still want to be able to use @docker start@, read the "new doc page":http://doc.arvados.org/install/install-compute-node.html to learn how to turn this off before upgrading.
-h4. New keep-web service (2015-11-04)
+h3. New keep-web service (2015-11-04)
Commit "21006cf":https://dev.arvados.org/projects/arvados/repository/revisions/21006cf adds a new keep-web service (see "#5824":https://dev.arvados.org/issues/5824).
* Nothing relies on keep-web yet, but early adopters can install it now by following http://doc.arvados.org/install/install-keep-web.html (it is not yet linked in the TOC).
+
+<notextile>
+</div>
+</notextile>
--- /dev/null
+---
+layout: default
+navsection: admin
+title: User management at the CLI
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+Initial setup
+
+<pre>
+ARVADOS_API_HOST={{ site.arvados_api_host }}
+ARVADOS_API_TOKEN=1234567890qwertyuiopasdfghjklzxcvbnm1234567890zzzz
+</pre>
+
+In these examples, @x1u39-tpzed-3kz0nwtjehhl0u4@ is the sample user account. Replace with the uuid of the user you wish to manipulate.
+
+See "user management":{{site.baseurl}}/admin/activation.html for an overview of how to use these commands.
+
+h3. Setup a user
+
+This creates a default git repository and VM login, and enables the user to self-activate using Workbench.
+
+<pre>
+arv user setup --uuid x1u39-tpzed-3kz0nwtjehhl0u4
+</pre>
+
+h3. Deactivate user
+
+<pre>
+arv user unsetup --uuid x1u39-tpzed-3kz0nwtjehhl0u4
+</pre>
+
+When deactivating a user, you may also want to "reassign ownership of their data":{{site.baseurl}}/admin/reassign-ownership.html .
+
+h3. Directly activate user
+
+<pre>
+arv user update --uuid "x1u39-tpzed-3kz0nwtjehhl0u4" --user '{"is_active":true}'
+</pre>
+
+Note that this bypasses user agreement checks, and does not set up the user with a default git repository or VM login.
+
+
+h2. Permissions
+
+h3. VM login
+
+Give @$user_uuid@ permission to log in to @$vm_uuid@ as @$target_username@
+
+<pre>
+user_uuid=xxxxxxxchangeme
+vm_uuid=xxxxxxxchangeme
+target_username=xxxxxxxchangeme
+
+read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"
+{
+"tail_uuid":"$user_uuid",
+"head_uuid":"$vm_uuid",
+"link_class":"permission",
+"name":"can_login",
+"properties":{"username":"$target_username"}
+}
+EOF
+</pre>
+
+h3. Git repository
+
+Give @$user_uuid@ permission to commit to @$repo_uuid@ as @$repo_username@
+
+<pre>
+user_uuid=xxxxxxxchangeme
+repo_uuid=xxxxxxxchangeme
+repo_username=xxxxxxxchangeme
+
+read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"
+{
+"tail_uuid":"$user_uuid",
+"head_uuid":"$repo_uuid",
+"link_class":"permission",
+"name":"can_write",
+"properties":{"username":"$repo_username"}
+}
+EOF
+</pre>
--- /dev/null
+---
+layout: default
+navsection: admin
+title: User management
+...
+
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+{% comment %}
+TODO: Link to relevant workbench documentation when it gets written
+{% endcomment %}
+
+This page describes how user accounts are created, set up and activated.
+
+h2. Authentication
+
+"Browser login and management of API tokens is described here.":{{site.baseurl}}/api/tokens.html
+
+After completing the log in and authentication process, the API server receives a user record from the upstream identity provider (Google, LDAP, etc) consisting of the user's name, primary email address, alternate email addresses, and optional unique provider identifier (@identity_url@).
+
+If a provider identifier is given, the API server searches for a matching user record.
+
+If a provider identifier is not given, or if no match is found, it next searches by primary email address and then by alternate email addresses. This enables "provider migration":migrating-providers.html and "pre-activated accounts":#pre-activated .
+
+If no user account is found, a new user account is created with the information from the identity provider.
+
+If a user account has been "linked":{{site.baseurl}}/user/topics/link-accounts.html or "migrated":merge-remote-account.html , the API server may follow internal redirects (@redirect_to_user_uuid@) to select the linked or migrated user account.
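The matching order described above can be sketched as follows. This is a simplified illustration, not the actual API server code (which is written in Ruby):

```python
# Simplified sketch of the account-matching order: provider identifier
# first, then primary email address, then alternate email addresses.
def find_user(users, identity_url, primary_email, alternate_emails):
    if identity_url:
        for u in users:
            if u.get("identity_url") == identity_url:
                return u
    if primary_email:
        for u in users:
            if u.get("email") == primary_email:
                return u
    for u in users:
        if set(alternate_emails) & set(u.get("alternate_emails", [])):
            return u
    return None  # no match: the API server creates a new account
```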
+
+h3. Federated Authentication
+
+A federated user follows a slightly different flow. The client presents a token issued by the remote cluster. The local API server contacts the remote cluster to verify the user's identity. This results in a user object (representing the remote user) being created on the local cluster. If the user cannot be verified, the token will be rejected. If the user is inactive on the remote cluster, a user record will be created, but it will also be inactive.
+
+h2. User activation
+
+This section describes the different user account states.
+
+!(side){{site.baseurl}}/images/user-account-states.svg!
+
+notextile. <div class="spaced-out">
+
+# A new user record is not set up, and not active. An inactive user cannot create or update any object, but can read Arvados objects that the user account has permission to read (such as publicly available items readable by the "anonymous" user).
+# Using Workbench or the "command line":{{site.baseurl}}/install/cheat_sheet.html , the admin invokes @setup@ on the user. The setup method adds the user to the "All users" group.
+- If "Users.AutoSetupNewUsers":config.html is true, this happens automatically during user creation, so in that case new users start at step (3).
+- If "Users.AutoSetupNewUsersWithRepository":config.html is true, a new git repo is created for the user.
+- If "Users.AutoSetupNewUsersWithVmUUID":config.html is set, the user is given login permission to the specified shell node
+# User is set up, but still not yet active. The browser presents "user agreements":#user_agreements (if any) and then invokes the user @activate@ method on the user's behalf.
+# The user @activate@ method checks that all "user agreements":#user_agreements are signed. If so, or there are no user agreements, the user is activated.
+# The user is active. User has normal access to the system.
+# From steps (1) and (3), an admin user can directly update the @is_active@ flag. This bypasses enforcement that user agreements are signed.
+If the user was not yet set up (still in step (1)), this adds the user to the "All users" group, but bypasses creating a default git repository and assigning default VM access.
+# An existing user can have their access revoked using @unsetup@ and "ownership reassigned":reassign-ownership.html .
+Unsetup removes the user from the "All users" group and makes them inactive, preventing them from re-activating themselves.
+"Ownership reassignment":reassign-ownership.html moves any objects or permission from the old user to a new user and deletes any credentials for the old user.
+
+notextile. </div>
+
+User management can be performed through the web using Workbench or the command line. See "user management at the CLI":{{site.baseurl}}/install/cheat_sheet.html for specific examples.
+
+h2(#user_agreements). User agreements and self-activation
+
+The @activate@ method of the users controller checks if the user account is part of the "All users" group and whether the user has "signed" all the user agreements.
+
+User agreements are accessed through the "user_agreements API":{{site.baseurl}}/api/methods/user_agreements.html . This returns a list of collection records.
+
+The user agreements that users are required to sign should be added to the @links@ table this way:
+
+<pre>
+$ arv link create --link '{
+ "link_class": "signature",
+ "name": "require",
+ "tail_uuid": "*system user uuid*",
+ "head_uuid: "*collection uuid*"
+}'
+</pre>
+
+The collection should contain a single HTML file with the user agreement text.
+
+Workbench displays the clickthrough agreements which the user can "sign".
+
+The @user_agreements/sign@ endpoint creates a Link object:
+
+<pre>
+{
+ "link_class": "signature"
+ "name": "click",
+ "tail_uuid": "*user uuid*",
+ "head_uuid: "*collection uuid*"
+}
+</pre>
+
+The @user_agreements/signatures@ endpoint returns the list of Link objects that represent signatures by the current user (created by @sign@).
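Putting the two link classes together, a client could compute which agreements remain unsigned as sketched below. This is a minimal illustration; the authoritative check is performed server-side by the @activate@ method:

```python
# "require" links point at the required agreement collections;
# "click" links record the current user's signatures. Any required
# collection without a matching signature is still unsigned.
def unsigned_agreements(required_links, signature_links):
    required = {link["head_uuid"] for link in required_links}
    signed = {link["head_uuid"] for link in signature_links}
    return sorted(required - signed)
```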
+
+h2. User profile
+
+The fields making up the user profile are described in @Workbench.UserProfileFormFields@ . See "Configuration reference":config.html .
+
+The user profile is checked by Workbench after checking if user agreements need to be signed. The values entered are stored in the @properties@ field on the user object. Unlike user agreements, the requirement to fill out the user profile is not enforced by the API server.
+
+h2. User visibility
+
+Initially, a user is not part of any groups and will not be able to interact with other users on the system. The admin should determine who the user is permitted to interact with and use Workbench or the "command line":group-management.html#add to create and add the user to the appropriate group(s).
+
+h2(#pre-activated). Pre-setup user by email address
+
+You may create a user account for a user that has not yet logged in, and identify the user by email address.
+
+1. As an admin, create a user object:
+
+<pre>
+$ arv --format=uuid user create --user '{"email": "foo@example.com", "username": "foo"}'
+clsr1-tpzed-1234567890abcdf
+$ arv user setup --uuid clsr1-tpzed-1234567890abcdf
+</pre>
+
+2. When the user logs in the first time, the email address will be recognized and the user will be associated with the existing user object.
+
+h2. Pre-activate federated user
+
+1. As admin, create a user object with the @uuid@ of the federated user (this is the user's uuid on their home cluster, called @clsr2@ in this example):
+
+<pre>
+$ arv user create --user '{"uuid": "clsr2-tpzed-1234567890abcdf", "email": "foo@example.com", "username": "foo", "is_active": true}'
+</pre>
+
+2. When the user logs in, they will be associated with the existing user object.
+
+h2. Auto-setup federated users from trusted clusters
+
+If @ActivateUsers: true@ is set for a federated cluster in @RemoteClusters@, federated users from that cluster will be automatically set up and activated on this cluster. See the configuration example in "Federated instance":#federated .
+
+h2. Activation flows
+
+h3. Private instance
+
+Policy: users must be manually set up by the admin.
+
+Here is the configuration for this policy. This is also the default if not provided.
+(However, be aware that developer/demo builds such as "arvbox":{{site.baseurl}}/install/arvbox.html are configured with the "Open instance" policy described below.)
+
+<pre>
+Users:
+ AutoSetupNewUsers: false
+</pre>
+
+# User is created. Not set up. @is_active@ is false.
+# Workbench checks @is_invited@ and finds it is false. User gets "inactive user" page.
+# Admin goes to user page and clicks "setup user" or sets @is_active@ to true.
+# On refreshing workbench, the user is able to self-activate after signing clickthrough agreements (if any).
+# Alternately, directly setting @is_active@ to true also sets up the user, but skips clickthrough agreements (because the user is already active).
+
+h3(#federated). Federated instance
+
+Policy: users from other clusters in the federation are activated, users from outside the federation must be manually approved.
+
+Here is the configuration for this policy and an example remote cluster @clsr2@.
+
+<pre>
+Users:
+ AutoSetupNewUsers: false
+RemoteClusters:
+ clsr2:
+ ActivateUsers: true
+</pre>
+
+# Federated user arrives claiming to be from cluster 'clsr2'
+# API server authenticates user as being from cluster 'clsr2'
+# Because 'clsr2' has @ActivateUsers@, the user is set up and activated.
+# User can immediately start using Workbench.
+
+h3. Open instance
+
+Policy: anybody who shows up and signs the agreements is activated.
+
+<pre>
+Users:
+ AutoSetupNewUsers: true
+</pre>
+
+"Set up user agreements":#user_agreements by creating "signature" "require" links as described earlier.
+
+# User is created and auto-setup. At this point, @is_active@ is false, but user has been added to "All users" group.
+# Workbench checks @is_invited@ and finds it is true, because the user is a member of "All users" group.
+# Workbench presents user with list of user agreements, user reads and clicks "sign" for each one.
+# Workbench tries to activate user.
+# User is activated.
--- /dev/null
+---
+layout: default
+navsection: admin
+title: User properties vocabulary
+...
+
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+Many Arvados objects (like collections and projects) can store metadata as properties, which in turn can be used in searches, providing a flexible way of organizing data inside the system.
+
+The Workbench2 user interface enables the site administrator to set up a formal properties vocabulary definition, so that users can select from predefined key/value pairs of properties, with the possibility of having different terms for the same concept.
+
+h2. Workbench2 configuration
+
+Workbench2 retrieves the vocabulary file URL from the cluster config as shown:
+
+<notextile>
+<pre><code>Cluster:
+ zzzzz:
+ Workbench:
+ VocabularyURL: <span class="userinput">https://site.example.com/vocabulary.json</span>
+</code></pre>
+</notextile>
+
+h2. Vocabulary definition format
+
+The JSON file describes the available keys and values, and whether the user is allowed to enter free text not defined by the vocabulary.
+
+Keys and values are indexed by identifiers so that the concept of a term is preserved even if vocabulary labels are changed.
+
+The following is an example of a vocabulary definition:
+
+{% codeblock as json %}
+{% include 'wb2_vocabulary_example' %}
+{% endcodeblock %}
+
+If the @strict_tags@ flag at the root level is @true@, it restricts users from saving property keys other than the ones defined in the vocabulary. Note that this restriction applies at the client level on Workbench2; it doesn't limit the user's ability to set arbitrary properties via other means (e.g., the Python SDK or CLI commands).
+
+Inside the @tags@ member, IDs are defined (@IDTAGANIMALS@, @IDTAGCOMMENT@, @IDTAGIMPORTANCES@) and can have any format that the current application requires. Every key will declare at least a @labels@ list with zero or more label objects.
+
+The @strict@ flag inside a tag definition operates the same as the @strict_tags@ root member, but at the individual tag level. When @strict@ is @true@, a tag’s value options are limited to those defined by the vocabulary.
+
+The @values@ member is optional and is used to define valid key/label pairs when applicable. In the example above, @IDTAGCOMMENT@ allows open-ended text by only defining the tag's ID and labels and leaving out @values@.
+
+When any key or value has more than one label option, Workbench2's user interface will allow the user to select any of the options. But because only the IDs are saved in the system, when the property is displayed in the user interface, the label shown will be the first of each group defined in the vocabulary file. For example, the user could select the property key @Species@ and @Homo sapiens@ as its value, but the user interface will display it as @Animal: Human@ because those labels are the first in the vocabulary definition.
+
+Internally, Workbench2 uses the IDs to do property based searches, so if the user searches by @Animal: Human@ or @Species: Homo sapiens@, both will return the same results.
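For illustration only, a minimal vocabulary fragment consistent with the description above might look like this (this is a hand-written sketch, not the shipped example file):

```json
{
  "strict_tags": false,
  "tags": {
    "IDTAGANIMALS": {
      "strict": true,
      "labels": [{"label": "Animal"}, {"label": "Creature"}],
      "values": {
        "IDVALHUMAN": {"labels": [{"label": "Human"}, {"label": "Homo sapiens"}]}
      }
    },
    "IDTAGCOMMENT": {
      "labels": [{"label": "Comment"}]
    }
  }
}
```

Here @IDTAGCOMMENT@ omits @values@, so it accepts open-ended text, while @IDTAGANIMALS@ is strict and only accepts the listed values.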
+
+h2. Properties migration
+
+After installing the new vocabulary definition, it may be necessary to migrate preexisting properties that were set up using literal strings. This can be a big task depending on the number of properties in the vocabulary and the number of collections and projects on the cluster.
+
+To help with this task, we provide below an example migration script that accepts the new vocabulary definition file as input, and uses the @ARVADOS_API_TOKEN@ and @ARVADOS_API_HOST@ environment variables to connect to the cluster, search for every collection and group that has properties with labels defined in the vocabulary file, and migrate them to the corresponding identifiers.
+
+This script will not run if the vocabulary file has duplicated labels for different keys or for different values inside a key; this is a failsafe mechanism to avoid migration errors.
+
+Please take into account that this script requires admin credentials. It also offers a @--dry-run@ flag that will report what changes are required without applying them, so it can be reviewed by an administrator.
+
+Also, take into consideration that this example script does case-sensitive matching on labels.
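The duplicate-label failsafe can be sketched as below. This is an illustration of the rule only, not code taken from the actual migration script:

```python
# Sketch of the duplicate-label check: key labels must be unique across
# the whole vocabulary, and value labels unique within each key.
# Matching is case-sensitive, like the example script's.
def duplicate_labels(vocab):
    dupes = []
    key_labels = set()
    for tag in vocab.get("tags", {}).values():
        for entry in tag.get("labels", []):
            label = entry["label"]
            if label in key_labels:
                dupes.append(label)
            key_labels.add(label)
        value_labels = set()
        for value in tag.get("values", {}).values():
            for entry in value.get("labels", []):
                label = entry["label"]
                if label in value_labels:
                    dupes.append(label)
                value_labels.add(label)
    return dupes
```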
+
+{% codeblock as python %}
+{% include 'vocabulary_migrate_py' %}
+{% endcodeblock %}
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-p=. *Legacy. The job APIs are read-only and disabled by default in new installations. Use "container requests":container_requests.html.textile.liquid .*
+p=. *Legacy. The job APIs are read-only and disabled by default in new installations. Use "container requests":methods/container_requests.html .*
h2. Crunch scripts
--- /dev/null
+---
+layout: default
+navsection: api
+navmenu: API Methods
+title: "cloud dispatcher"
+
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+The cloud dispatcher provides several management/diagnostic APIs, intended to be used by a system administrator.
+
+These APIs are not normally exposed to external clients. To use them, connect directly to the dispatcher's internal URL (see @Services.DispatchCloud.InternalURLs@ in the cluster config file). All requests must include the cluster's management token (@ManagementToken@ in the cluster config file).
+
+Example:
+
+<notextile><pre><code>curl -H "Authorization: Bearer $management_token" http://localhost:9006/arvados/v1/dispatch/containers</code></pre></notextile>
+
+These APIs are not available via the @arv@ CLI tool.
+
+Note: the term "instance" here refers to a virtual machine provided by a cloud computing service. The alternate terms "cloud VM", "compute node", and "worker node" are sometimes used as well in config files, documentation, and log messages.
+
+h3. List containers
+
+@GET /arvados/v1/dispatch/containers@
+
+Return a list of containers that are either ready to dispatch, or being started/monitored by the dispatcher.
+
+Each entry in the returned list of @items@ includes:
+* an @instance_type@ entry with the name and attributes of the instance type that will be used to schedule the container (chosen from the @InstanceTypes@ section of your cluster config file); and
+* a @container@ entry with selected attributes of the container itself, including @uuid@, @priority@, @runtime_constraints@, and @state@. Other fields of the container records are not loaded by the dispatcher, and will have empty/zero values here (e.g., @{...,"created_at":"0001-01-01T00:00:00Z","command":[],...}@).
+
+Example response:
+
+<notextile><pre>{
+ "items": [
+ {
+ "container": {
+ "uuid": "zzzzz-dz642-xz68ptr62m49au7",
+ ...
+ "priority": 562948375092493200,
+ ...
+ "state": "Locked",
+ ...
+ },
+ "instance_type": {
+ "Name": "Standard_E2s_v3",
+ "ProviderType": "Standard_E2s_v3",
+ "VCPUs": 2,
+ "RAM": 17179869184,
+ "Scratch": 32000000000,
+ "IncludedScratch": 32000000000,
+ "AddedScratch": 0,
+ "Price": 0.146,
+ "Preemptible": false
+ }
+ },
+ ...
+ ]
+}</pre></notextile>
+
+h3. Terminate a container
+
+@POST /arvados/v1/dispatch/containers/kill?container_uuid={uuid}&reason={string}@
+
+Make a single attempt to terminate the indicated container on the relevant instance. (The caller can implement a delay-and-retry loop if needed.)
+
+A container terminated this way will end with state @Cancelled@ if its docker container had already started, or @Queued@ if it was terminated while setting up the runtime environment.
+
+The provided @reason@ string will appear in the dispatcher's log, but not in the user-visible container log.
+
+If the provided @container_uuid@ is not scheduled/running on an instance, the response status will be 404.
+
+h3. List instances
+
+@GET /arvados/v1/dispatch/instances@
+
+Return a list of cloud instances.
+
+Example response:
+
+<notextile><pre>{
+ "items": [
+ {
+ "instance": "/subscriptions/abcdefab-abcd-abcd-abcd-abcdefabcdef/resourceGroups/zzzzz/providers/Microsoft.Compute/virtualMachines/compute-abcdef0123456789abcdef0123456789-abcdefghijklmno",
+ "address": "10.23.45.67",
+ "price": 0.073,
+ "arvados_instance_type": "Standard_DS1_v2",
+ "provider_instance_type": "Standard_DS1_v2",
+ "last_container_uuid": "zzzzz-dz642-vp7scm21telkadq",
+ "last_busy": "2020-01-13T15:20:21.775019617Z",
+ "worker_state": "running",
+ "idle_behavior": "run"
+ },
+ ...
+}</pre></notextile>
+
+The @instance@ value is the instance's identifier, assigned by the cloud provider. It can be used with the instance APIs below.
+
+The @worker_state@ value indicates the instance's capability to run containers.
+* @unknown@: instance was not created by this dispatcher, and a boot probe has not yet succeeded (this state typically appears briefly after the dispatcher restarts).
+* @booting@: cloud provider says the instance exists, but a boot probe has not yet succeeded.
+* @idle@: instance is idle and ready to run a container.
+* @running@: instance is running a container.
+* @shutdown@: cloud provider has been instructed to terminate the instance.
+
+The @idle_behavior@ value determines what the dispatcher will do with the instance when it is idle; see hold/drain/run APIs below.
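For example, a script deciding which instances are candidates to drain or shut down could filter the response on these two fields. The sketch below runs against a fabricated response body, trimmed to the fields it uses:

```python
import json

# Fabricated /arvados/v1/dispatch/instances response, trimmed to the
# fields examined below.
response = '''{"items": [
  {"instance": "i-aaa", "worker_state": "idle",    "idle_behavior": "run"},
  {"instance": "i-bbb", "worker_state": "running", "idle_behavior": "run"},
  {"instance": "i-ccc", "worker_state": "idle",    "idle_behavior": "hold"}
]}'''

# Keep only instances that are idle and not already held or draining.
idle = [item["instance"] for item in json.loads(response)["items"]
        if item["worker_state"] == "idle" and item["idle_behavior"] == "run"]
print(idle)
```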
+
+h3. Hold an instance
+
+@POST /arvados/v1/dispatch/instances/hold?instance_id={instance}@
+
+Set the indicated instance's idle behavior to @hold@. The instance will not be shut down automatically. If a container is currently running, it will be allowed to continue, but no new containers will be scheduled.
+
+h3. Drain an instance
+
+@POST /arvados/v1/dispatch/instances/drain?instance_id={instance}@
+
+Set the indicated instance's idle behavior to @drain@. If a container is currently running, it will be allowed to continue, but when the instance becomes idle, it will be shut down.
+
+h3. Resume an instance
+
+@POST /arvados/v1/dispatch/instances/run?instance_id={instance}@
+
+Set the indicated instance's idle behavior to @run@ (the normal behavior). When it becomes idle, it will be eligible to run new containers. It will be shut down automatically when the configured idle threshold is reached.
+
+h3. Shut down an instance
+
+@POST /arvados/v1/dispatch/instances/kill?instance_id={instance}&reason={string}@
+
+Terminate the indicated instance.
+
+If a container is running on the instance, it will be killed too; no effort is made to wait for it to end gracefully.
+
+The provided @reason@ string will appear in the dispatcher's log.
The API server publishes a machine-readable description of its endpoints and some additional site configuration values via a JSON-formatted discovery document. This is available at @/discovery/v1/apis/arvados/v1/rest@, for example @https://{{ site.arvados_api_host }}/discovery/v1/apis/arvados/v1/rest@. Some Arvados SDKs use the discovery document to generate language bindings.
+h2. Exported configuration
+
+The Controller exposes a subset of the cluster's configuration and makes it available to clients in JSON format. This public config includes useful information such as service URLs, timeout settings, etc., and is available at @/arvados/v1/config@, for example @https://{{ site.arvados_api_host }}/arvados/v1/config@. The new Workbench is one example of a client using this information, as it is a client-side application and does not have access to the cluster's config file.
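A client-side application could read a service URL out of the exported config as sketched below. The excerpt is illustrative and heavily trimmed; fetch the real document from the @/arvados/v1/config@ endpoint:

```python
import json

# Trimmed, illustrative excerpt of the exported configuration document.
config_json = '''{
  "ClusterID": "zzzzz",
  "Services": {
    "Workbench2": {"ExternalURL": "https://workbench2.zzzzz.example.com"}
  }
}'''

config = json.loads(config_json)
print(config["Services"]["Workbench2"]["ExternalURL"])
```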
+
h2. Workbench examples
Many Arvados Workbench pages, under the *Advanced* tab, provide examples of API and SDK use for accessing the current resource.
|1|operator|string|Comparison operator|@>@, @>=@, @like@, @not in@|
|2|operand|string, array, or null|Value to compare with the resource attribute|@"d00220fb%"@, @"1234"@, @["foo","bar"]@, @nil@|
-The following operators are available.
+The following operators are available.[1]
table(table table-bordered table-condensed).
|_. Operator|_. Operand type|_. Description|_. Example|
|@is_a@|string|Arvados object type|@["head_uuid","is_a","arvados#collection"]@|
|@exists@|string|Test if a subproperty is present.|@["properties","exists","my_subproperty"]@|
+Note:
h4(#substringsearchfilter). Filtering using substring search
|@like@, @ilike@|string|SQL pattern match, single character match is @_@ and wildcard is @%@, ilike is case-insensitive|@["properties.my_subproperty", "like", "d00220fb%"]@|
|@in@, @not in@|array of strings|Set membership|@["properties.my_subproperty", "in", ["fizz", "buzz"]]@|
|@exists@|boolean|Test if a subproperty is present or not (determined by operand).|@["properties.my_subproperty", "exists", true]@|
+|@contains@|string, number|Filter where subproperty has a value either by exact match or value is element of subproperty list.|@["foo", "contains", "bar"]@ will find both @{"foo": "bar"}@ and @{"foo": ["bar", "baz"]}@.|
Note that exclusion filters @!=@ and @not in@ will return records for which the property is not defined at all. To restrict filtering to records on which the subproperty is defined, combine with an @exists@ filter.
* Have filters only matching @[["uuid", "in", [...]]]@ or @["uuid", "=", "..."]@
* Specify @count=none@
-* If @select@ is specified, it must include @uuid@
* Not specify @limit@, @offset@ or @order@
* Not request more items than the maximum response size
|_. Argument |_. Type |_. Description |_. Location |
{background:#ccffcc}.|uuid|string|The UUID of the resource in question.|path||
|{resource_type}|object||query||
+
+fn1^. NOTE: The filter operator for full-text search (@@) which previously worked (but was undocumented) is deprecated and will be removed in a future release.
h3. list
-List container_requests.
+List container requests.
See "common resource list method.":{{site.baseurl}}/api/methods.html#index
-See the create method documentation for more information about container request-specific filters.
+The @filters@ argument can also filter on attributes of the container referenced by @container_uuid@. For example, @[["container.state", "=", "Running"]]@ will match any container request whose container is currently running.
h3. update
See "common resource list method.":{{site.baseurl}}/api/methods.html#index
-See the create method documentation for more information about Container-specific filters.
-
h3. update
Update attributes of an existing Container.
|repository|string|Git repository name or URL.|Source of the repository where the given script_version is to be found. This can be given as the name of a locally hosted repository, or as a publicly accessible URL starting with @git://@, @http://@, or @https://@.
Examples:
@yourusername/yourrepo@
-@https://github.com/curoverse/arvados.git@|
+@https://github.com/arvados/arvados.git@|
|script_version|string|Git commit|During a **create** transaction, this is the Git branch, tag, or hash supplied by the client. Before the job starts, Arvados updates it to the full 40-character SHA-1 hash of the commit used by the job.
See "Specifying Git versions":#script_version below for more detail about acceptable ways to specify a commit.|
|cancelled_by_client_uuid|string|API client ID|Is null if job has not been cancelled|
See "permission links":{{site.baseurl}}/api/permission-model.html#links section of the permission model.
+h3. star
+
+A **star** link is a shortcut to a project that is displayed in the user interface (Workbench) as "favorites". Users can mark their own favorites (implemented by creating or deleting **star** links).
+
+An admin can also create **star** links owned by the "All Users" group; these will be displayed to all users that have permission to read the project that has been favorited.
+
+The schema for a star link is:
+
+table(table table-bordered table-condensed).
+|_. Field|_. Value|_. Description|
+|owner_uuid|user or group uuid|Either the user that owns the favorite, or the "All Users" group for public favorites.|
+|head_uuid|project uuid|The project being favorited|
+|link_class|string of value "star"|Indicates this represents a link to a user favorite|
+
+h4. Creating a favorite
+
+@owner_uuid@ is either an individual user, or the "All Users" group. The @head_uuid@ is the project being favorited.
+
+<pre>
+$ arv link create --link '{
+ "owner_uuid": "zzzzz-j7d0g-fffffffffffffff",
+ "head_uuid": "zzzzz-j7d0g-theprojectuuid",
+ "link_class": "star"}'
+</pre>
+
+h4. Deleting a favorite
+
+<pre>
+$ arv link delete --uuid zzzzz-o0j2j-thestarlinkuuid
+</pre>
+
+h4. Listing favorites
+
+To list all 'star' links that will be displayed for a user:
+
+<pre>
+$ arv link list --filters '[
+ ["link_class", "=", "star"],
+ ["owner_uuid", "in", ["zzzzz-j7d0g-fffffffffffffff", "zzzzz-tpzed-currentuseruuid"]]]'
+</pre>
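The same link object can be built programmatically. The sketch below only constructs the request body as a plain dictionary (the UUIDs are placeholders); the commented-out Python SDK call at the end assumes a configured cluster and credentials, so it is illustrative rather than runnable here.

```python
# Build the request body for a 'star' link (UUIDs below are placeholders).
def star_link_body(owner_uuid, project_uuid):
    return {
        "owner_uuid": owner_uuid,   # user uuid, or the "All Users" group
        "head_uuid": project_uuid,  # the project being favorited
        "link_class": "star",
    }

body = star_link_body("zzzzz-j7d0g-fffffffffffffff",
                      "zzzzz-j7d0g-theprojectuuid")

# With the Arvados Python SDK (requires a live cluster and credentials):
# import arvados
# arvados.api().links().create(body=body).execute()
```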
+
h3. tag
-A **tag** link describes an object using an unparsed plain text string. Tags can be used to annotate objects that are not editable, like collections and objects shared as read-only.
+A **tag** link describes an object using an unparsed plain text string. Tags can be used to annotate objects that are not directly editable by the user, like collections and objects shared as read-only.
table(table table-bordered table-condensed).
|_. tail_type→head_type|_. name→head_uuid {properties}|
--- /dev/null
+---
+layout: default
+navsection: api
+navmenu: API Methods
+title: "user_agreements"
+
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+API endpoint base: @https://{{ site.arvados_api_host }}/arvados/v1/user_agreements@
+
+h2. Resource
+
+This provides an API for inactive users to sign clickthrough agreements prior to being activated.
+
+h2. Methods
+
+Required arguments are displayed in %{background:#ccffcc}green%.
+
+h3. list
+
+List user agreements. This is a list of collections which contain HTML files with the text of the clickthrough agreement(s) which can be rendered by Workbench.
+
+table(table table-bordered table-condensed).
+|_. Argument |_. Type |_. Description |_. Location |_. Example |
+
+h3. signatures
+
+List user agreements that have already been signed. These are recorded as link objects of @{"link_class": "signature", "name": "click"}@.
+
+table(table table-bordered table-condensed).
+|_. Argument |_. Type |_. Description |_. Location |_. Example |
+
+h3. sign
+
+Sign a user agreement.
+
+table(table table-bordered table-condensed).
+|_. Argument |_. Type |_. Description |_. Location |_. Example |
+{background:#ccffcc}.|uuid|string|The UUID of the user agreement collection.|path||
table(table table-bordered table-condensed).
|_. Argument |_. Type |_. Description |_. Location |_. Example |
{background:#ccffcc}.|uuid|string|The UUID of the User in question.|path||
-|user|object||query||
+|user|object|The new attributes.|query||
h3(#update_uuid). update_uuid
|_. Argument |_. Type |_. Description |_. Location |_. Example |
{background:#ccffcc}.|uuid|string|The current UUID of the user in question.|path|@zzzzz-tpzed-12345abcde12345@|
{background:#ccffcc}.|new_uuid|string|The desired new UUID. It is an error to use a UUID belonging to an existing user.|query|@zzzzz-tpzed-abcde12345abcde@|
+
+h3. setup
+
+Set up a user. Adds the user to the "All users" group. Enables the user to invoke @activate@. See "user management":{{site.baseurl}}/admin/activation.html for details.
+
+Arguments:
+
+table(table table-bordered table-condensed).
+|_. Argument |_. Type |_. Description |_. Location |_. Example |
+{background:#ccffcc}.|uuid|string|The UUID of the User in question.|query||
+
+h3. activate
+
+Check that a user is set up and has signed all the user agreements. If so, activate the user. Users can invoke this for themselves. See "user agreements":{{site.baseurl}}/admin/activation.html#user_agreements for details.
+
+Arguments:
+
+table(table table-bordered table-condensed).
+|_. Argument |_. Type |_. Description |_. Location |_. Example |
+{background:#ccffcc}.|uuid|string|The UUID of the User in question.|query||
+
+h3. unsetup
+
+Remove the user from the "All users" group and deactivate the user. See "user management":{{site.baseurl}}/admin/activation.html for details.
+
+Arguments:
+
+table(table table-bordered table-condensed).
+|_. Argument |_. Type |_. Description |_. Location |_. Example |
+{background:#ccffcc}.|uuid|string|The UUID of the User in question.|path||
Scopes can restrict a token so it may only access certain resources. This is in addition to normal permission checks for the user associated with the token.
-Each entry in scopes consists of a @request_method@ and @request_path@, where the @request_method@ is a HTTP method (one of @GET@, @POST@, @PUT@ or @DELETE@) and @request_path@ is the request URI. A given request is permitted if it matches a scopes exactly, or the scope ends with @/@ and the request string is a prefix of the scope.
+Each entry in scopes consists of a @request_method@ and @request_path@. The @request_method@ is an HTTP method (one of @GET@, @POST@, @PATCH@ or @DELETE@) and @request_path@ is the request URI. A given request is permitted if it matches a scope exactly, or if the scope ends with @/@ and is a prefix of the request string.
-As a special case, a scope of ["all"] allows all resources.
+As a special case, a scope of @["all"]@ allows all resources. This is the default if no scope is given.
+
+Using scopes is also described on the "Securing API access with scoped tokens":{{site.baseurl}}/admin/scoped-tokens.html page of the admin documentation.
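The matching rule can be sketched in Python. This is an illustration of the documented behavior, not the server's actual implementation; it assumes each scope string takes the form @"METHOD /path"@, and that a scope ending in @/@ matches any request string it prefixes.

```python
def request_allowed(scopes, method, path):
    """Check a request against token scopes: ["all"] permits everything;
    otherwise a scope must match "METHOD /path" exactly, or end with "/"
    and be a prefix of the request string."""
    if scopes == ["all"]:
        return True
    request = f"{method} {path}"
    for scope in scopes:
        if scope == request:
            return True
        if scope.endswith("/") and request.startswith(scope):
            return True
    return False

print(request_allowed(["all"], "GET", "/arvados/v1/collections"))    # True
print(request_allowed(["GET /arvados/v1/collections/"], "GET",
                      "/arvados/v1/collections/zzzzz-4zz18-xyzzy"))  # True
print(request_allowed(["GET /arvados/v1/collections/"], "POST",
                      "/arvados/v1/collections/zzzzz-4zz18-xyzzy"))  # False
```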
h3. Scope examples
* For automated test purposes, use "z****"
* For experimental/local-only/private clusters that won't ever be visible on the public Internet, use "x****"
-* For long-lived clusters, we recommend reserving a cluster id. Contact "mailto:support@curoverse.com":support@curoverse.com
+* For long-lived clusters, we recommend reserving a cluster id. Contact "info@curii.com":mailto:info@curii.com
Cluster identifiers are mapped to API server hosts in one of two ways:
-* Through DNS resolution, under the @arvadosapi.com@ domain. For example, the API server for the cluster @qr1hi@ can be found at @qr1hi.arvadosapi.com@. To register a cluster id for free under @arvadosapi.com@, contact "mailto:support@curoverse.com":support@curoverse.com
+* Through DNS resolution, under the @arvadosapi.com@ domain. For example, the API server for the cluster @qr1hi@ can be found at @qr1hi.arvadosapi.com@. To register a cluster id for free under @arvadosapi.com@, contact "info@curii.com":mailto:info@curii.com
* Through explicit configuration:
The @RemoteClusters@ section of @/etc/arvados/config.yml@ (for arvados-controller)
--- /dev/null
+body {
+ background: white;
+ color: black;
+}
+
+a:link {
+ background: white;
+ color: blue;
+}
+
+a:visited {
+ background: white;
+ color: rgb(50%, 0%, 50%);
+}
+
+h1 {
+ background: white;
+ color: rgb(55%, 55%, 55%);
+ font-family: monospace;
+ font-size: x-large;
+ text-align: center;
+}
+
+h2 {
+ background: white;
+ color: rgb(40%, 40%, 40%);
+ font-family: monospace;
+ font-size: large;
+ text-align: center;
+}
+
+h3 {
+ background: white;
+ color: rgb(40%, 40%, 40%);
+ font-family: monospace;
+ font-size: large;
+}
+
+h4 {
+ background: white;
+ color: rgb(40%, 40%, 40%);
+ font-family: monospace;
+ font-style: italic;
+ font-size: large;
+}
+
+h5 {
+ background: white;
+ color: rgb(40%, 40%, 40%);
+ font-family: monospace;
+}
+
+h6 {
+ background: white;
+ color: rgb(40%, 40%, 40%);
+ font-family: monospace;
+ font-style: italic;
+}
+
+img.toplogo {
+ width: 4em;
+ vertical-align: middle;
+}
+
+img.arrow {
+ width: 30px;
+ height: 30px;
+ border: 0;
+}
+
+span.acronym {
+ font-size: small;
+}
+
+span.env {
+ font-family: monospace;
+}
+
+span.file {
+ font-family: monospace;
+}
+
+span.option{
+ font-family: monospace;
+}
+
+span.pkg {
+ font-weight: bold;
+}
+
+span.samp{
+ font-family: monospace;
+}
+
+div.vignettes a:hover {
+ background: rgb(85%, 85%, 85%);
+}
td.line-numbers {
width: 2em;
}
+
+.releasenotes h2 { margin-top: 1.5em; text-decoration: underline; }
margin-left: 2em;
margin-bottom: 2em;
}
+
+img.side {
+ float: left;
+ width: 50%;
+}
--- /dev/null
+Clusters:
+ zzzzz:
+ ManagementToken: e687950a23c3a9bceec28c6223a06c79
+ SystemRootToken: systemusertesttoken1234567890aoeuidhtnsqjkxbmwvzpy
+ API:
+ RequestTimeout: 30s
+ TLS:
+ Insecure: true
+ Collections:
+ BlobSigningKey: zfhgfenhffzltr9dixws36j1yhksjoll2grmku38mi7yxd66h5j4q9w4jzanezacp8s6q0ro3hxakfye02152hncy6zml2ed0uc
+ TrustAllContent: true
+ ForwardSlashNameSubstitution: /
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<svg version="1.2" width="279.4mm" height="63.5mm" viewBox="0 0 27940 6350" preserveAspectRatio="xMidYMid" fill-rule="evenodd" stroke-width="28.222" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg" xmlns:ooo="http://xml.openoffice.org/svg/export" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:presentation="http://sun.com/xmlns/staroffice/presentation" xmlns:smil="http://www.w3.org/2001/SMIL20/" xmlns:anim="urn:oasis:names:tc:opendocument:xmlns:animation:1.0" xml:space="preserve">
+ <defs class="ClipPathGroup">
+ <clipPath id="presentation_clip_path" clipPathUnits="userSpaceOnUse">
+ <rect x="0" y="0" width="27940" height="6350"/>
+ </clipPath>
+ <clipPath id="presentation_clip_path_shrink" clipPathUnits="userSpaceOnUse">
+ <rect x="27" y="6" width="27885" height="6338"/>
+ </clipPath>
+ </defs>
+ <defs>
+ <font id="EmbeddedFont_1" horiz-adv-x="2048">
+ <font-face font-family="Liberation Sans embedded" units-per-em="2048" font-weight="normal" font-style="normal" ascent="1852" descent="423"/>
+ <missing-glyph horiz-adv-x="2048" d="M 0,0 L 2047,0 2047,2047 0,2047 0,0 Z"/>
+ <glyph unicode="x" horiz-adv-x="1006" d="M 801,0 L 510,444 217,0 23,0 408,556 41,1082 240,1082 510,661 778,1082 979,1082 612,558 1002,0 801,0 Z"/>
+ <glyph unicode="v" horiz-adv-x="1033" d="M 613,0 L 400,0 7,1082 199,1082 437,378 C 442,363 447,346 454,325 460,304 466,282 473,259 480,236 486,215 492,194 497,173 502,155 506,141 510,155 515,173 522,194 528,215 534,236 541,258 548,280 555,302 562,323 569,344 575,361 580,376 L 826,1082 1017,1082 613,0 Z"/>
+ <glyph unicode="t" horiz-adv-x="531" d="M 554,8 C 527,1 499,-5 471,-10 442,-14 409,-16 372,-16 228,-16 156,66 156,229 L 156,951 31,951 31,1082 163,1082 216,1324 336,1324 336,1082 536,1082 536,951 336,951 336,268 C 336,216 345,180 362,159 379,138 408,127 450,127 467,127 484,128 501,131 517,134 535,137 554,141 L 554,8 Z"/>
+ <glyph unicode="s" horiz-adv-x="901" d="M 950,299 C 950,248 940,203 921,164 901,124 872,91 835,64 798,37 752,16 698,2 643,-13 581,-20 511,-20 448,-20 392,-15 342,-6 291,4 247,20 209,41 171,62 139,91 114,126 88,161 69,203 57,254 L 216,285 C 231,227 263,185 311,158 359,131 426,117 511,117 550,117 585,120 618,125 650,130 678,140 701,153 724,166 743,183 756,205 769,226 775,253 775,285 775,318 767,345 752,366 737,387 715,404 688,418 661,432 628,444 589,455 550,465 507,476 460,489 417,500 374,513 331,527 288,541 250,560 216,583 181,606 153,634 132,668 111,702 100,745 100,796 100,895 135,970 206,1022 276,1073 378,1099 513,1099 632,1099 727,1078 798,1036 868,994 912,927 931,834 L 769,814 C 763,842 752,866 736,885 720,904 701,919 678,931 655,942 630,951 602,956 573,961 544,963 513,963 432,963 372,951 333,926 294,901 275,864 275,814 275,785 282,761 297,742 311,723 331,707 357,694 382,681 413,669 449,660 485,650 525,640 568,629 597,622 626,614 656,606 686,597 715,587 744,576 772,564 799,550 824,535 849,519 870,500 889,478 908,456 923,430 934,401 945,372 950,338 950,299 Z"/>
+ <glyph unicode="r" horiz-adv-x="530" d="M 142,0 L 142,830 C 142,853 142,876 142,900 141,923 141,946 140,968 139,990 139,1011 138,1030 137,1049 137,1067 136,1082 L 306,1082 C 307,1067 308,1049 309,1030 310,1010 311,990 312,969 313,948 313,929 314,910 314,891 314,874 314,861 L 318,861 C 331,902 344,938 359,969 373,999 390,1024 409,1044 428,1063 451,1078 478,1088 505,1097 537,1102 575,1102 590,1102 604,1101 617,1099 630,1096 641,1094 648,1092 L 648,927 C 636,930 622,933 606,935 590,936 572,937 552,937 511,937 476,928 447,909 418,890 394,865 376,832 357,799 344,759 335,714 326,668 322,618 322,564 L 322,0 142,0 Z"/>
+ <glyph unicode="o" horiz-adv-x="980" d="M 1053,542 C 1053,353 1011,212 928,119 845,26 724,-20 565,-20 490,-20 422,-9 363,14 304,37 254,71 213,118 172,165 140,223 119,294 97,364 86,447 86,542 86,915 248,1102 571,1102 655,1102 728,1090 789,1067 850,1044 900,1009 939,962 978,915 1006,857 1025,787 1044,717 1053,635 1053,542 Z M 864,542 C 864,626 858,695 845,750 832,805 813,848 788,881 763,914 732,937 696,950 660,963 619,969 574,969 528,969 487,962 450,949 413,935 381,912 355,879 329,846 309,802 296,747 282,692 275,624 275,542 275,458 282,389 297,334 312,279 332,235 358,202 383,169 414,146 449,133 484,120 522,113 563,113 609,113 651,120 688,133 725,146 757,168 783,201 809,234 829,278 843,333 857,388 864,458 864,542 Z"/>
+ <glyph unicode="n" horiz-adv-x="874" d="M 825,0 L 825,686 C 825,739 821,783 814,818 806,853 793,882 776,904 759,925 736,941 708,950 679,959 644,963 602,963 559,963 521,956 487,941 452,926 423,904 399,876 374,847 355,812 342,771 329,729 322,681 322,627 L 322,0 142,0 142,851 C 142,874 142,898 142,923 141,948 141,971 140,994 139,1016 139,1035 138,1051 137,1067 137,1077 136,1082 L 306,1082 C 307,1079 307,1070 308,1055 309,1040 310,1024 311,1005 312,986 312,966 313,947 314,927 314,910 314,897 L 317,897 C 334,928 353,957 374,982 395,1007 419,1029 446,1047 473,1064 505,1078 540,1088 575,1097 616,1102 663,1102 723,1102 775,1095 818,1080 861,1065 897,1043 925,1012 953,981 974,942 987,894 1000,845 1006,788 1006,721 L 1006,0 825,0 Z"/>
+ <glyph unicode="l" horiz-adv-x="187" d="M 138,0 L 138,1484 318,1484 318,0 138,0 Z"/>
+ <glyph unicode="i" horiz-adv-x="187" d="M 137,1312 L 137,1484 317,1484 317,1312 137,1312 Z M 137,0 L 137,1082 317,1082 317,0 137,0 Z"/>
+ <glyph unicode="g" horiz-adv-x="927" d="M 548,-425 C 486,-425 431,-419 383,-406 335,-393 294,-375 260,-352 226,-328 198,-300 177,-267 156,-234 140,-198 131,-158 L 312,-132 C 324,-182 351,-220 392,-248 433,-274 486,-288 553,-288 594,-288 631,-282 664,-271 697,-260 726,-241 749,-217 772,-191 790,-159 803,-119 816,-79 822,-30 822,27 L 822,201 820,201 C 807,174 790,148 771,123 751,98 727,75 699,56 670,37 637,21 600,10 563,-2 520,-8 472,-8 403,-8 345,4 296,27 247,50 207,84 176,130 145,176 122,233 108,302 93,370 86,449 86,539 86,626 93,704 108,773 122,842 145,901 178,950 210,998 252,1035 304,1061 355,1086 418,1099 492,1099 569,1099 635,1082 692,1047 748,1012 791,962 822,897 L 824,897 C 824,914 825,933 826,953 827,974 828,994 829,1012 830,1031 831,1046 832,1060 833,1073 835,1080 836,1080 L 1007,1080 C 1006,1074 1006,1064 1005,1050 1004,1035 1004,1018 1003,998 1002,978 1002,956 1002,932 1001,907 1001,882 1001,856 L 1001,30 C 1001,-121 964,-234 890,-311 815,-387 701,-425 548,-425 Z M 822,541 C 822,616 814,681 798,735 781,788 760,832 733,866 706,900 676,925 642,941 607,957 572,965 536,965 490,965 451,957 418,941 385,925 357,900 336,866 314,831 298,787 288,734 277,680 272,616 272,541 272,463 277,398 288,345 298,292 314,249 335,216 356,183 383,160 416,146 449,132 488,125 533,125 569,125 604,133 639,148 673,163 704,188 731,221 758,254 780,297 797,350 814,403 822,466 822,541 Z"/>
+ <glyph unicode="e" horiz-adv-x="980" d="M 276,503 C 276,446 282,394 294,347 305,299 323,258 348,224 372,189 403,163 441,144 479,125 525,115 578,115 656,115 719,131 766,162 813,193 844,233 861,281 L 1019,236 C 1008,206 992,176 972,146 951,115 924,88 890,64 856,39 814,19 763,4 712,-12 650,-20 578,-20 418,-20 296,28 213,123 129,218 87,360 87,548 87,649 100,735 125,806 150,876 185,933 229,977 273,1021 324,1053 383,1073 442,1092 504,1102 571,1102 662,1102 738,1087 799,1058 860,1029 909,988 946,937 983,885 1009,824 1025,754 1040,684 1048,608 1048,527 L 1048,503 276,503 Z M 862,641 C 852,755 823,838 775,891 727,943 658,969 568,969 538,969 507,964 474,955 441,945 410,928 382,903 354,878 330,845 311,803 292,760 281,706 278,641 L 862,641 Z"/>
+ <glyph unicode="d" horiz-adv-x="927" d="M 821,174 C 788,105 744,55 689,25 634,-5 565,-20 484,-20 347,-20 247,26 183,118 118,210 86,349 86,536 86,913 219,1102 484,1102 566,1102 634,1087 689,1057 744,1027 788,979 821,914 L 823,914 C 823,921 823,931 823,946 822,960 822,975 822,991 821,1006 821,1021 821,1035 821,1049 821,1059 821,1065 L 821,1484 1001,1484 1001,223 C 1001,197 1001,172 1002,148 1002,124 1002,102 1003,82 1004,62 1004,45 1005,31 1006,16 1006,6 1007,0 L 835,0 C 834,7 833,16 832,29 831,41 830,55 829,71 828,87 827,104 826,122 825,139 825,157 825,174 L 821,174 Z M 275,542 C 275,467 280,403 289,350 298,297 313,253 334,219 355,184 381,159 413,143 445,127 484,119 530,119 577,119 619,127 656,142 692,157 722,182 747,217 771,251 789,296 802,351 815,406 821,474 821,554 821,631 815,696 802,749 789,802 771,844 746,877 721,910 691,933 656,948 620,962 579,969 532,969 488,969 450,961 418,946 386,931 359,906 338,872 317,838 301,794 291,740 280,685 275,619 275,542 Z"/>
+ <glyph unicode="c" horiz-adv-x="901" d="M 275,546 C 275,484 280,427 289,375 298,323 313,278 334,241 355,203 384,174 419,153 454,132 497,122 548,122 612,122 666,139 709,174 752,209 778,262 788,334 L 970,322 C 964,277 951,234 931,193 911,152 884,115 850,84 815,53 773,28 724,9 675,-10 618,-20 553,-20 468,-20 396,-6 337,23 278,52 230,91 193,142 156,192 129,251 112,320 95,388 87,462 87,542 87,615 93,679 105,735 117,790 134,839 156,881 177,922 203,957 232,986 261,1014 293,1037 328,1054 362,1071 398,1083 436,1091 474,1098 512,1102 551,1102 612,1102 666,1094 713,1077 760,1060 801,1038 836,1009 870,980 898,945 919,906 940,867 955,824 964,779 L 779,765 C 770,825 746,873 708,908 670,943 616,961 546,961 495,961 452,953 418,936 383,919 355,893 334,859 313,824 298,781 289,729 280,677 275,616 275,546 Z"/>
+ <glyph unicode="a" horiz-adv-x="1060" d="M 414,-20 C 305,-20 224,9 169,66 114,123 87,202 87,302 87,373 101,432 128,478 155,523 190,559 234,585 277,611 327,629 383,639 439,649 496,655 554,656 L 797,660 797,719 C 797,764 792,802 783,833 774,864 759,890 740,909 721,928 697,943 668,952 639,961 604,965 565,965 530,965 499,963 471,958 443,953 419,944 398,931 377,918 361,900 348,878 335,855 327,827 323,793 L 135,810 C 142,853 154,892 173,928 192,963 218,994 253,1020 287,1046 330,1066 382,1081 433,1095 496,1102 569,1102 705,1102 807,1071 876,1009 945,946 979,856 979,738 L 979,272 C 979,219 986,179 1000,152 1014,125 1041,111 1080,111 1090,111 1100,112 1110,113 1120,114 1130,116 1139,118 L 1139,6 C 1116,1 1094,-3 1072,-6 1049,-9 1025,-10 1000,-10 966,-10 937,-5 913,4 888,13 868,26 853,45 838,63 826,86 818,113 810,140 805,171 803,207 L 797,207 C 778,172 757,141 734,113 711,85 684,61 653,42 622,22 588,7 549,-4 510,-15 465,-20 414,-20 Z M 455,115 C 512,115 563,126 606,147 649,168 684,194 713,227 741,260 762,295 776,334 790,373 797,410 797,445 L 797,534 600,530 C 556,529 514,526 475,521 435,515 400,504 370,487 340,470 316,447 299,417 281,387 272,348 272,299 272,240 288,195 320,163 351,131 396,115 455,115 Z"/>
+ <glyph unicode="S" horiz-adv-x="1192" d="M 1272,389 C 1272,330 1261,275 1238,225 1215,175 1179,132 1131,96 1083,59 1023,31 950,11 877,-10 790,-20 690,-20 515,-20 378,11 280,72 182,133 120,222 93,338 L 278,375 C 287,338 302,305 321,275 340,245 367,219 400,198 433,176 473,159 522,147 571,135 629,129 697,129 754,129 806,134 853,144 900,153 941,168 975,188 1009,208 1036,234 1055,266 1074,297 1083,335 1083,379 1083,425 1073,462 1052,491 1031,520 1001,543 963,562 925,581 880,596 827,609 774,622 716,635 652,650 613,659 573,668 534,679 494,689 456,701 420,716 383,730 349,747 317,766 285,785 257,809 234,836 211,863 192,894 179,930 166,965 159,1006 159,1053 159,1120 173,1177 200,1225 227,1272 264,1311 312,1342 360,1373 417,1395 482,1409 547,1423 618,1430 694,1430 781,1430 856,1423 918,1410 980,1396 1032,1375 1075,1348 1118,1321 1152,1287 1178,1247 1203,1206 1224,1159 1239,1106 L 1051,1073 C 1042,1107 1028,1137 1011,1164 993,1191 970,1213 941,1231 912,1249 878,1263 837,1272 796,1281 747,1286 692,1286 627,1286 572,1280 528,1269 483,1257 448,1241 421,1221 394,1201 374,1178 363,1151 351,1124 345,1094 345,1063 345,1021 356,987 377,960 398,933 426,910 462,892 498,874 540,859 587,847 634,835 685,823 738,811 781,801 825,791 868,781 911,770 952,758 991,744 1030,729 1067,712 1102,693 1136,674 1166,650 1191,622 1216,594 1236,561 1251,523 1265,485 1272,440 1272,389 Z"/>
+ <glyph unicode="Q" horiz-adv-x="1430" d="M 1495,711 C 1495,612 1482,521 1457,439 1431,356 1394,284 1346,222 1297,160 1238,110 1168,71 1097,32 1017,6 928,-6 942,-49 958,-85 976,-115 993,-145 1013,-169 1036,-189 1059,-207 1084,-221 1112,-231 1139,-239 1170,-244 1204,-244 1223,-244 1243,-243 1264,-240 1285,-237 1304,-234 1319,-231 L 1319,-365 C 1294,-371 1266,-376 1236,-381 1205,-385 1174,-387 1141,-387 1084,-387 1034,-378 991,-362 948,-344 911,-320 879,-289 846,-257 818,-218 795,-172 772,-126 751,-74 733,-16 628,-11 535,11 456,50 376,88 310,139 257,204 204,268 164,343 137,430 110,516 97,610 97,711 97,821 112,920 143,1009 174,1098 219,1173 278,1236 337,1298 411,1346 498,1380 585,1413 684,1430 797,1430 909,1430 1009,1413 1096,1379 1183,1345 1256,1297 1315,1234 1374,1171 1418,1096 1449,1007 1480,918 1495,820 1495,711 Z M 1300,711 C 1300,796 1289,873 1268,942 1246,1011 1214,1071 1172,1120 1129,1169 1077,1207 1014,1234 951,1261 879,1274 797,1274 713,1274 639,1261 576,1234 513,1207 460,1169 418,1120 375,1071 344,1011 323,942 302,873 291,796 291,711 291,626 302,549 324,479 345,408 377,348 420,297 462,246 515,206 578,178 641,149 713,135 795,135 883,135 959,149 1023,178 1086,207 1139,247 1180,298 1221,349 1251,409 1271,480 1290,551 1300,628 1300,711 Z"/>
+ <glyph unicode="P" horiz-adv-x="1112" d="M 1258,985 C 1258,924 1248,867 1228,814 1207,761 1177,715 1137,676 1096,637 1046,606 985,583 924,560 854,549 773,549 L 359,549 359,0 168,0 168,1409 761,1409 C 844,1409 917,1399 979,1379 1041,1358 1093,1330 1134,1293 1175,1256 1206,1211 1227,1159 1248,1106 1258,1048 1258,985 Z M 1066,983 C 1066,1072 1039,1140 984,1187 929,1233 847,1256 738,1256 L 359,1256 359,700 746,700 C 856,700 937,724 989,773 1040,822 1066,892 1066,983 Z"/>
+ <glyph unicode="N" horiz-adv-x="1165" d="M 1082,0 L 328,1200 C 329,1167 331,1135 333,1103 334,1076 336,1047 337,1017 338,986 338,959 338,936 L 338,0 168,0 168,1409 390,1409 1152,201 C 1150,234 1148,266 1146,299 1145,327 1143,358 1142,391 1141,424 1140,455 1140,485 L 1140,1409 1312,1409 1312,0 1082,0 Z"/>
+ <glyph unicode="L" horiz-adv-x="927" d="M 168,0 L 168,1409 359,1409 359,156 1071,156 1071,0 168,0 Z"/>
+ <glyph unicode="I" horiz-adv-x="213" d="M 189,0 L 189,1409 380,1409 380,0 189,0 Z"/>
+ <glyph unicode="C" horiz-adv-x="1324" d="M 792,1274 C 712,1274 641,1261 580,1234 518,1207 466,1169 425,1120 383,1071 351,1011 330,942 309,873 298,796 298,711 298,626 310,549 333,479 356,408 389,348 432,297 475,246 527,207 590,179 652,151 722,137 800,137 855,137 905,144 950,159 995,173 1035,193 1072,219 1108,245 1140,276 1169,312 1198,347 1223,387 1245,430 L 1401,352 C 1376,299 1344,250 1307,205 1270,160 1226,120 1176,87 1125,54 1068,28 1005,9 941,-10 870,-20 791,-20 677,-20 577,-2 492,35 406,71 334,122 277,187 219,252 176,329 147,418 118,507 104,605 104,711 104,821 119,920 150,1009 180,1098 224,1173 283,1236 341,1298 413,1346 498,1380 583,1413 681,1430 790,1430 940,1430 1065,1401 1166,1342 1267,1283 1341,1196 1388,1081 L 1207,1021 C 1194,1054 1176,1086 1153,1117 1130,1147 1102,1174 1068,1197 1034,1220 994,1239 949,1253 903,1267 851,1274 792,1274 Z"/>
+ <glyph unicode="A" horiz-adv-x="1377" d="M 1167,0 L 1006,412 364,412 202,0 4,0 579,1409 796,1409 1362,0 1167,0 Z M 768,1026 C 757,1053 747,1080 738,1107 728,1134 719,1159 712,1182 705,1204 699,1223 694,1238 689,1253 686,1262 685,1265 684,1262 681,1252 676,1237 671,1222 665,1203 658,1180 650,1157 641,1132 632,1105 622,1078 612,1051 602,1024 L 422,561 949,561 768,1026 Z"/>
+ <glyph unicode=" " horiz-adv-x="556"/>
+ </font>
+ </defs>
+ <defs class="TextShapeIndex">
+ <g ooo:slide="id1" ooo:id-list="id3 id4 id5 id6 id7 id8 id9 id10 id11 id12"/>
+ </defs>
+ <defs class="EmbeddedBulletChars">
+ <g id="bullet-char-template-57356" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 580,1141 L 1163,571 580,0 -4,571 580,1141 Z"/>
+ </g>
+ <g id="bullet-char-template-57354" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 8,1128 L 1137,1128 1137,0 8,0 8,1128 Z"/>
+ </g>
+ <g id="bullet-char-template-10146" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 174,0 L 602,739 174,1481 1456,739 174,0 Z M 1358,739 L 309,1346 659,739 1358,739 Z"/>
+ </g>
+ <g id="bullet-char-template-10132" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 2015,739 L 1276,0 717,0 1260,543 174,543 174,936 1260,936 717,1481 1274,1481 2015,739 Z"/>
+ </g>
+ <g id="bullet-char-template-10007" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 0,-2 C -7,14 -16,27 -25,37 L 356,567 C 262,823 215,952 215,954 215,979 228,992 255,992 264,992 276,990 289,987 310,991 331,999 354,1012 L 381,999 492,748 772,1049 836,1024 860,1049 C 881,1039 901,1025 922,1006 886,937 835,863 770,784 769,783 710,716 594,584 L 774,223 C 774,196 753,168 711,139 L 727,119 C 717,90 699,76 672,76 641,76 570,178 457,381 L 164,-76 C 142,-110 111,-127 72,-127 30,-127 9,-110 8,-76 1,-67 -2,-52 -2,-32 -2,-23 -1,-13 0,-2 Z"/>
+ </g>
+ <g id="bullet-char-template-10004" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 285,-33 C 182,-33 111,30 74,156 52,228 41,333 41,471 41,549 55,616 82,672 116,743 169,778 240,778 293,778 328,747 346,684 L 369,508 C 377,444 397,411 428,410 L 1163,1116 C 1174,1127 1196,1133 1229,1133 1271,1133 1292,1118 1292,1087 L 1292,965 C 1292,929 1282,901 1262,881 L 442,47 C 390,-6 338,-33 285,-33 Z"/>
+ </g>
+ <g id="bullet-char-template-9679" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 813,0 C 632,0 489,54 383,161 276,268 223,411 223,592 223,773 276,916 383,1023 489,1130 632,1184 813,1184 992,1184 1136,1130 1245,1023 1353,916 1407,772 1407,592 1407,412 1353,268 1245,161 1136,54 992,0 813,0 Z"/>
+ </g>
+ <g id="bullet-char-template-8226" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 346,457 C 273,457 209,483 155,535 101,586 74,649 74,723 74,796 101,859 155,911 209,963 273,989 346,989 419,989 480,963 531,910 582,859 608,796 608,723 608,648 583,586 532,535 482,483 420,457 346,457 Z"/>
+ </g>
+ <g id="bullet-char-template-8211" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M -4,459 L 1135,459 1135,606 -4,606 -4,459 Z"/>
+ </g>
+ <g id="bullet-char-template-61548" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 173,740 C 173,903 231,1043 346,1159 462,1274 601,1332 765,1332 928,1332 1067,1274 1183,1159 1299,1043 1357,903 1357,740 1357,577 1299,437 1183,322 1067,206 928,148 765,148 601,148 462,206 346,322 231,437 173,577 173,740 Z"/>
+ </g>
+ </defs>
+ <defs class="TextEmbeddedBitmaps"/>
+ <g>
+ <g id="id2" class="Master_Slide">
+ <g id="bg-id2" class="Background"/>
+ <g id="bo-id2" class="BackgroundObjects"/>
+ </g>
+ </g>
+ <g class="SlideGroup">
+ <g>
+ <g id="container-id1">
+ <g id="id1" class="Slide" clip-path="url(#presentation_clip_path)">
+ <g class="Page">
+ <g class="com.sun.star.drawing.CustomShape">
+ <g id="id3">
+ <rect class="BoundingBox" stroke="none" fill="none" x="1760" y="1380" width="3433" height="2584"/>
+ <path fill="rgb(114,159,207)" stroke="none" d="M 2067,2236 C 2003,1917 2300,1616 2597,1616 2691,1614 2788,1645 2867,1691 2944,1547 3085,1458 3244,1458 3349,1463 3461,1506 3541,1584 3598,1456 3718,1381 3849,1381 3958,1381 4058,1435 4122,1519 4195,1433 4304,1381 4419,1381 4605,1381 4762,1516 4795,1704 4975,1757 5105,1928 5105,2124 5105,2183 5095,2241 5068,2296 5144,2391 5191,2510 5191,2630 5191,2904 4986,3135 4722,3174 4722,3436 4519,3641 4265,3641 4177,3641 4095,3616 4022,3568 3955,3799 3744,3962 3507,3962 3331,3962 3164,3865 3064,3712 2971,3770 3020,3805 2751,3805 2531,3805 2327,3684 2221,3488 1967,3484 1837,3328 1837,3132 1837,3041 1870,2959 1930,2891 1821,2834 1761,2720 1761,2590 1761,2407 1894,2256 2067,2236 L 2067,2236 Z"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 2067,2236 C 2003,1917 2300,1616 2597,1616 2691,1614 2788,1645 2867,1691 2944,1547 3085,1458 3244,1458 3349,1463 3461,1506 3541,1584 3598,1456 3718,1381 3849,1381 3958,1381 4058,1435 4122,1519 4195,1433 4304,1381 4419,1381 4605,1381 4762,1516 4795,1704 4975,1757 5105,1928 5105,2124 5105,2183 5095,2241 5068,2296 5144,2391 5191,2510 5191,2630 5191,2904 4986,3135 4722,3174 4722,3436 4519,3641 4265,3641 4177,3641 4095,3616 4022,3568 3955,3799 3744,3962 3507,3962 3331,3962 3164,3865 3064,3712 2971,3770 3020,3805 2751,3805 2531,3805 2327,3684 2221,3488 1967,3484 1837,3328 1837,3132 1837,3041 1870,2959 1930,2891 1821,2834 1761,2720 1761,2590 1761,2407 1894,2256 2067,2236 Z"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 2067,2236 C 2070,2266 2084,2299 2092,2327"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 2867,1691 C 2904,1714 2948,1745 2978,1776"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 3541,1584 C 3528,1609 3520,1639 3512,1667"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 4122,1519 C 4098,1548 4085,1586 4069,1621"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 4795,1704 C 4798,1726 4814,1774 4808,1784"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 5068,2296 C 5041,2357 5005,2411 4954,2455"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 4724,3174 C 4736,3077 4663,2838 4460,2749"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 4022,3568 C 4034,3529 4039,3493 4042,3455"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 3066,3712 C 3040,3681 3025,3645 3009,3608"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 2221,3488 C 2251,3484 2281,3476 2310,3466"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 1930,2891 C 1983,2922 2043,2949 2130,2939"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="2549" y="2835"><tspan fill="rgb(0,0,0)" stroke="none">Client</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.CustomShape">
+ <g id="id4">
+ <rect class="BoundingBox" stroke="none" fill="none" x="6939" y="1674" width="3178" height="2035"/>
+ <path fill="rgb(114,159,207)" stroke="none" d="M 8528,3707 L 6940,3707 6940,1675 10115,1675 10115,3707 8528,3707 Z"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 8528,3707 L 6940,3707 6940,1675 10115,1675 10115,3707 8528,3707 Z"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="7719" y="2912"><tspan fill="rgb(0,0,0)" stroke="none">Nginx</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.CustomShape">
+ <g id="id5">
+ <rect class="BoundingBox" stroke="none" fill="none" x="11766" y="1674" width="3305" height="2035"/>
+ <path fill="rgb(114,159,207)" stroke="none" d="M 13418,3707 L 11767,3707 11767,1675 15069,1675 15069,3707 13418,3707 Z"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 13418,3707 L 11767,3707 11767,1675 15069,1675 15069,3707 13418,3707 Z"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="12256" y="2556"><tspan fill="rgb(0,0,0)" stroke="none">Arvados</tspan></tspan></tspan><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="12116" y="3267"><tspan fill="rgb(0,0,0)" stroke="none">controller</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.CustomShape">
+ <g id="id6">
+ <rect class="BoundingBox" stroke="none" fill="none" x="16719" y="1674" width="3305" height="2035"/>
+ <path fill="rgb(114,159,207)" stroke="none" d="M 18371,3707 L 16720,3707 16720,1675 20022,1675 20022,3707 18371,3707 Z"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 18371,3707 L 16720,3707 16720,1675 20022,1675 20022,3707 18371,3707 Z"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="17120" y="2556"><tspan fill="rgb(0,0,0)" stroke="none">Arvados </tspan></tspan></tspan><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="16889" y="3267"><tspan fill="rgb(0,0,0)" stroke="none">API server</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.CustomShape">
+ <g id="id7">
+ <rect class="BoundingBox" stroke="none" fill="none" x="21545" y="1674" width="4067" height="2035"/>
+ <path fill="rgb(114,159,207)" stroke="none" d="M 23578,3707 L 21546,3707 21546,1675 25610,1675 25610,3707 23578,3707 Z"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 23578,3707 L 21546,3707 21546,1675 25610,1675 25610,3707 23578,3707 Z"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="21851" y="2912"><tspan fill="rgb(0,0,0)" stroke="none">PostgreSQL</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.ConnectorShape">
+ <g id="id8">
+ <rect class="BoundingBox" stroke="none" fill="none" x="5190" y="2535" width="1752" height="302"/>
+ <path fill="none" stroke="rgb(0,0,0)" d="M 5191,2671 L 6511,2686"/>
+ <path fill="rgb(0,0,0)" stroke="none" d="M 6941,2691 L 6493,2536 6489,2836 6941,2691 Z"/>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.ConnectorShape">
+ <g id="id9">
+ <rect class="BoundingBox" stroke="none" fill="none" x="10107" y="2527" width="1661" height="329"/>
+ <path fill="none" stroke="rgb(0,0,0)" stroke-width="18" stroke-linejoin="round" d="M 10116,2691 L 11424,2691"/>
+ <path fill="rgb(0,0,0)" stroke="none" d="M 11428,2853 L 11753,2708 11763,2701 11767,2692 11763,2681 11755,2674 11428,2529 11424,2528 11421,2528 11414,2529 11408,2533 11404,2538 11402,2545 11402,2837 11404,2844 11408,2850 11414,2853 11421,2855 11424,2855 11428,2853 Z"/>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.ConnectorShape">
+ <g id="id10">
+ <rect class="BoundingBox" stroke="none" fill="none" x="15068" y="2541" width="1653" height="301"/>
+ <path fill="none" stroke="rgb(0,0,0)" d="M 15069,2691 L 16290,2691"/>
+ <path fill="rgb(0,0,0)" stroke="none" d="M 16720,2691 L 16270,2541 16270,2841 16720,2691 Z"/>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.ConnectorShape">
+ <g id="id11">
+ <rect class="BoundingBox" stroke="none" fill="none" x="20021" y="2541" width="1526" height="301"/>
+ <path fill="none" stroke="rgb(0,0,0)" d="M 20022,2691 L 21116,2691"/>
+ <path fill="rgb(0,0,0)" stroke="none" d="M 21546,2691 L 21096,2541 21096,2841 21546,2691 Z"/>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.ConnectorShape">
+ <g id="id12">
+ <rect class="BoundingBox" stroke="none" fill="none" x="13417" y="3706" width="10312" height="1273"/>
+ <path fill="none" stroke="rgb(0,0,0)" d="M 13418,3707 L 13418,4977 23578,4977 23578,4137"/>
+ <path fill="rgb(0,0,0)" stroke="none" d="M 23578,3707 L 23428,4157 23728,4157 23578,3707 Z"/>
+ </g>
+ </g>
+ </g>
+ </g>
+ </g>
+ </g>
+ </g>
+</svg>
\ No newline at end of file
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
+<svg version="1.2" width="228.6mm" height="152.4mm" viewBox="0 0 22860 15240" preserveAspectRatio="xMidYMid" fill-rule="evenodd" stroke-width="28.222" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg" xmlns:ooo="http://xml.openoffice.org/svg/export" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:presentation="http://sun.com/xmlns/staroffice/presentation" xmlns:smil="http://www.w3.org/2001/SMIL20/" xmlns:anim="urn:oasis:names:tc:opendocument:xmlns:animation:1.0" xml:space="preserve">
+ <defs class="ClipPathGroup">
+ <clipPath id="presentation_clip_path" clipPathUnits="userSpaceOnUse">
+ <rect x="0" y="0" width="22860" height="15240"/>
+ </clipPath>
+ <clipPath id="presentation_clip_path_shrink" clipPathUnits="userSpaceOnUse">
+ <rect x="22" y="15" width="22815" height="15210"/>
+ </clipPath>
+ </defs>
+ <defs>
+ <font id="EmbeddedFont_1" horiz-adv-x="2048">
+ <font-face font-family="Liberation Sans embedded" units-per-em="2048" font-weight="normal" font-style="normal" ascent="1852" descent="423"/>
+ <missing-glyph horiz-adv-x="2048" d="M 0,0 L 2047,0 2047,2047 0,2047 0,0 Z"/>
+ <glyph unicode="”" horiz-adv-x="557" d="M 607,1264 C 607,1229 605,1197 602,1168 599,1139 594,1113 588,1088 581,1063 573,1039 563,1017 553,995 541,973 528,952 L 407,952 C 437,995 460,1039 477,1083 493,1127 501,1171 501,1214 L 413,1214 413,1409 607,1409 607,1264 Z M 276,1264 C 276,1229 275,1197 272,1168 269,1139 264,1113 257,1088 250,1063 242,1039 233,1017 223,995 211,973 198,952 L 75,952 C 105,995 128,1039 145,1083 161,1127 169,1171 169,1214 L 81,1214 81,1409 276,1409 276,1264 Z"/>
+ <glyph unicode="“" horiz-adv-x="557" d="M 407,952 L 407,1098 C 407,1133 409,1164 412,1193 415,1222 420,1249 427,1274 433,1299 441,1322 451,1344 460,1366 472,1388 485,1409 L 607,1409 C 577,1366 554,1322 538,1278 521,1234 513,1190 513,1147 L 601,1147 601,952 407,952 Z M 75,952 L 75,1098 C 75,1133 77,1164 80,1193 83,1222 88,1249 95,1274 102,1299 110,1322 120,1344 130,1366 142,1388 155,1409 L 276,1409 C 246,1366 223,1322 206,1278 189,1234 181,1190 181,1147 L 270,1147 270,952 75,952 Z"/>
+ <glyph unicode="v" horiz-adv-x="1033" d="M 613,0 L 400,0 7,1082 199,1082 437,378 C 442,363 447,346 454,325 460,304 466,282 473,259 480,236 486,215 492,194 497,173 502,155 506,141 510,155 515,173 522,194 528,215 534,236 541,258 548,280 555,302 562,323 569,344 575,361 580,376 L 826,1082 1017,1082 613,0 Z"/>
+ <glyph unicode="u" horiz-adv-x="874" d="M 314,1082 L 314,396 C 314,343 318,299 326,264 333,229 346,200 363,179 380,157 403,142 432,133 460,124 495,119 537,119 580,119 618,127 653,142 687,157 716,178 741,207 765,235 784,270 797,312 810,353 817,401 817,455 L 817,1082 997,1082 997,231 C 997,208 997,185 998,160 998,135 998,111 999,89 1000,66 1000,47 1001,31 1002,15 1002,5 1003,0 L 833,0 C 832,3 832,12 831,27 830,42 830,59 829,78 828,97 827,116 826,136 825,155 825,172 825,185 L 822,185 C 805,154 786,125 765,100 744,75 720,53 693,36 666,18 634,4 599,-6 564,-15 523,-20 476,-20 416,-20 364,-13 321,2 278,17 242,39 214,70 186,101 166,140 153,188 140,236 133,294 133,361 L 133,1082 314,1082 Z"/>
+ <glyph unicode="t" horiz-adv-x="531" d="M 554,8 C 527,1 499,-5 471,-10 442,-14 409,-16 372,-16 228,-16 156,66 156,229 L 156,951 31,951 31,1082 163,1082 216,1324 336,1324 336,1082 536,1082 536,951 336,951 336,268 C 336,216 345,180 362,159 379,138 408,127 450,127 467,127 484,128 501,131 517,134 535,137 554,141 L 554,8 Z"/>
+ <glyph unicode="s" horiz-adv-x="901" d="M 950,299 C 950,248 940,203 921,164 901,124 872,91 835,64 798,37 752,16 698,2 643,-13 581,-20 511,-20 448,-20 392,-15 342,-6 291,4 247,20 209,41 171,62 139,91 114,126 88,161 69,203 57,254 L 216,285 C 231,227 263,185 311,158 359,131 426,117 511,117 550,117 585,120 618,125 650,130 678,140 701,153 724,166 743,183 756,205 769,226 775,253 775,285 775,318 767,345 752,366 737,387 715,404 688,418 661,432 628,444 589,455 550,465 507,476 460,489 417,500 374,513 331,527 288,541 250,560 216,583 181,606 153,634 132,668 111,702 100,745 100,796 100,895 135,970 206,1022 276,1073 378,1099 513,1099 632,1099 727,1078 798,1036 868,994 912,927 931,834 L 769,814 C 763,842 752,866 736,885 720,904 701,919 678,931 655,942 630,951 602,956 573,961 544,963 513,963 432,963 372,951 333,926 294,901 275,864 275,814 275,785 282,761 297,742 311,723 331,707 357,694 382,681 413,669 449,660 485,650 525,640 568,629 597,622 626,614 656,606 686,597 715,587 744,576 772,564 799,550 824,535 849,519 870,500 889,478 908,456 923,430 934,401 945,372 950,338 950,299 Z"/>
+ <glyph unicode="r" horiz-adv-x="530" d="M 142,0 L 142,830 C 142,853 142,876 142,900 141,923 141,946 140,968 139,990 139,1011 138,1030 137,1049 137,1067 136,1082 L 306,1082 C 307,1067 308,1049 309,1030 310,1010 311,990 312,969 313,948 313,929 314,910 314,891 314,874 314,861 L 318,861 C 331,902 344,938 359,969 373,999 390,1024 409,1044 428,1063 451,1078 478,1088 505,1097 537,1102 575,1102 590,1102 604,1101 617,1099 630,1096 641,1094 648,1092 L 648,927 C 636,930 622,933 606,935 590,936 572,937 552,937 511,937 476,928 447,909 418,890 394,865 376,832 357,799 344,759 335,714 326,668 322,618 322,564 L 322,0 142,0 Z"/>
+ <glyph unicode="p" horiz-adv-x="953" d="M 1053,546 C 1053,464 1046,388 1033,319 1020,250 998,190 967,140 936,90 895,51 844,23 793,-6 730,-20 655,-20 578,-20 510,-5 452,24 394,53 350,101 319,168 L 314,168 C 315,167 315,161 316,150 316,139 316,126 317,110 317,94 317,76 318,57 318,37 318,17 318,-2 L 318,-425 138,-425 138,861 C 138,887 138,912 138,936 137,960 137,982 136,1002 135,1021 135,1038 134,1052 133,1066 133,1076 132,1082 L 306,1082 C 307,1080 308,1073 309,1061 310,1049 311,1035 312,1018 313,1001 314,982 315,963 316,944 316,925 316,908 L 320,908 C 337,943 356,972 377,997 398,1021 423,1041 450,1057 477,1072 508,1084 542,1091 575,1098 613,1101 655,1101 730,1101 793,1088 844,1061 895,1034 936,997 967,949 998,900 1020,842 1033,774 1046,705 1053,629 1053,546 Z M 864,542 C 864,609 860,668 852,720 844,772 830,816 811,852 791,888 765,915 732,934 699,953 658,962 609,962 569,962 531,956 496,945 461,934 430,912 404,880 377,848 356,804 341,748 326,691 318,618 318,528 318,451 324,387 337,334 350,281 368,238 393,205 417,172 447,149 483,135 519,120 560,113 607,113 657,113 699,123 732,142 765,161 791,189 811,226 830,263 844,308 852,361 860,414 864,474 864,542 Z"/>
+ <glyph unicode="o" horiz-adv-x="980" d="M 1053,542 C 1053,353 1011,212 928,119 845,26 724,-20 565,-20 490,-20 422,-9 363,14 304,37 254,71 213,118 172,165 140,223 119,294 97,364 86,447 86,542 86,915 248,1102 571,1102 655,1102 728,1090 789,1067 850,1044 900,1009 939,962 978,915 1006,857 1025,787 1044,717 1053,635 1053,542 Z M 864,542 C 864,626 858,695 845,750 832,805 813,848 788,881 763,914 732,937 696,950 660,963 619,969 574,969 528,969 487,962 450,949 413,935 381,912 355,879 329,846 309,802 296,747 282,692 275,624 275,542 275,458 282,389 297,334 312,279 332,235 358,202 383,169 414,146 449,133 484,120 522,113 563,113 609,113 651,120 688,133 725,146 757,168 783,201 809,234 829,278 843,333 857,388 864,458 864,542 Z"/>
+ <glyph unicode="n" horiz-adv-x="874" d="M 825,0 L 825,686 C 825,739 821,783 814,818 806,853 793,882 776,904 759,925 736,941 708,950 679,959 644,963 602,963 559,963 521,956 487,941 452,926 423,904 399,876 374,847 355,812 342,771 329,729 322,681 322,627 L 322,0 142,0 142,851 C 142,874 142,898 142,923 141,948 141,971 140,994 139,1016 139,1035 138,1051 137,1067 137,1077 136,1082 L 306,1082 C 307,1079 307,1070 308,1055 309,1040 310,1024 311,1005 312,986 312,966 313,947 314,927 314,910 314,897 L 317,897 C 334,928 353,957 374,982 395,1007 419,1029 446,1047 473,1064 505,1078 540,1088 575,1097 616,1102 663,1102 723,1102 775,1095 818,1080 861,1065 897,1043 925,1012 953,981 974,942 987,894 1000,845 1006,788 1006,721 L 1006,0 825,0 Z"/>
+ <glyph unicode="l" horiz-adv-x="187" d="M 138,0 L 138,1484 318,1484 318,0 138,0 Z"/>
+ <glyph unicode="i" horiz-adv-x="187" d="M 137,1312 L 137,1484 317,1484 317,1312 137,1312 Z M 137,0 L 137,1082 317,1082 317,0 137,0 Z"/>
+ <glyph unicode="g" horiz-adv-x="927" d="M 548,-425 C 486,-425 431,-419 383,-406 335,-393 294,-375 260,-352 226,-328 198,-300 177,-267 156,-234 140,-198 131,-158 L 312,-132 C 324,-182 351,-220 392,-248 433,-274 486,-288 553,-288 594,-288 631,-282 664,-271 697,-260 726,-241 749,-217 772,-191 790,-159 803,-119 816,-79 822,-30 822,27 L 822,201 820,201 C 807,174 790,148 771,123 751,98 727,75 699,56 670,37 637,21 600,10 563,-2 520,-8 472,-8 403,-8 345,4 296,27 247,50 207,84 176,130 145,176 122,233 108,302 93,370 86,449 86,539 86,626 93,704 108,773 122,842 145,901 178,950 210,998 252,1035 304,1061 355,1086 418,1099 492,1099 569,1099 635,1082 692,1047 748,1012 791,962 822,897 L 824,897 C 824,914 825,933 826,953 827,974 828,994 829,1012 830,1031 831,1046 832,1060 833,1073 835,1080 836,1080 L 1007,1080 C 1006,1074 1006,1064 1005,1050 1004,1035 1004,1018 1003,998 1002,978 1002,956 1002,932 1001,907 1001,882 1001,856 L 1001,30 C 1001,-121 964,-234 890,-311 815,-387 701,-425 548,-425 Z M 822,541 C 822,616 814,681 798,735 781,788 760,832 733,866 706,900 676,925 642,941 607,957 572,965 536,965 490,965 451,957 418,941 385,925 357,900 336,866 314,831 298,787 288,734 277,680 272,616 272,541 272,463 277,398 288,345 298,292 314,249 335,216 356,183 383,160 416,146 449,132 488,125 533,125 569,125 604,133 639,148 673,163 704,188 731,221 758,254 780,297 797,350 814,403 822,466 822,541 Z"/>
+ <glyph unicode="e" horiz-adv-x="980" d="M 276,503 C 276,446 282,394 294,347 305,299 323,258 348,224 372,189 403,163 441,144 479,125 525,115 578,115 656,115 719,131 766,162 813,193 844,233 861,281 L 1019,236 C 1008,206 992,176 972,146 951,115 924,88 890,64 856,39 814,19 763,4 712,-12 650,-20 578,-20 418,-20 296,28 213,123 129,218 87,360 87,548 87,649 100,735 125,806 150,876 185,933 229,977 273,1021 324,1053 383,1073 442,1092 504,1102 571,1102 662,1102 738,1087 799,1058 860,1029 909,988 946,937 983,885 1009,824 1025,754 1040,684 1048,608 1048,527 L 1048,503 276,503 Z M 862,641 C 852,755 823,838 775,891 727,943 658,969 568,969 538,969 507,964 474,955 441,945 410,928 382,903 354,878 330,845 311,803 292,760 281,706 278,641 L 862,641 Z"/>
+ <glyph unicode="d" horiz-adv-x="927" d="M 821,174 C 788,105 744,55 689,25 634,-5 565,-20 484,-20 347,-20 247,26 183,118 118,210 86,349 86,536 86,913 219,1102 484,1102 566,1102 634,1087 689,1057 744,1027 788,979 821,914 L 823,914 C 823,921 823,931 823,946 822,960 822,975 822,991 821,1006 821,1021 821,1035 821,1049 821,1059 821,1065 L 821,1484 1001,1484 1001,223 C 1001,197 1001,172 1002,148 1002,124 1002,102 1003,82 1004,62 1004,45 1005,31 1006,16 1006,6 1007,0 L 835,0 C 834,7 833,16 832,29 831,41 830,55 829,71 828,87 827,104 826,122 825,139 825,157 825,174 L 821,174 Z M 275,542 C 275,467 280,403 289,350 298,297 313,253 334,219 355,184 381,159 413,143 445,127 484,119 530,119 577,119 619,127 656,142 692,157 722,182 747,217 771,251 789,296 802,351 815,406 821,474 821,554 821,631 815,696 802,749 789,802 771,844 746,877 721,910 691,933 656,948 620,962 579,969 532,969 488,969 450,961 418,946 386,931 359,906 338,872 317,838 301,794 291,740 280,685 275,619 275,542 Z"/>
+ <glyph unicode="c" horiz-adv-x="901" d="M 275,546 C 275,484 280,427 289,375 298,323 313,278 334,241 355,203 384,174 419,153 454,132 497,122 548,122 612,122 666,139 709,174 752,209 778,262 788,334 L 970,322 C 964,277 951,234 931,193 911,152 884,115 850,84 815,53 773,28 724,9 675,-10 618,-20 553,-20 468,-20 396,-6 337,23 278,52 230,91 193,142 156,192 129,251 112,320 95,388 87,462 87,542 87,615 93,679 105,735 117,790 134,839 156,881 177,922 203,957 232,986 261,1014 293,1037 328,1054 362,1071 398,1083 436,1091 474,1098 512,1102 551,1102 612,1102 666,1094 713,1077 760,1060 801,1038 836,1009 870,980 898,945 919,906 940,867 955,824 964,779 L 779,765 C 770,825 746,873 708,908 670,943 616,961 546,961 495,961 452,953 418,936 383,919 355,893 334,859 313,824 298,781 289,729 280,677 275,616 275,546 Z"/>
+ <glyph unicode="a" horiz-adv-x="1060" d="M 414,-20 C 305,-20 224,9 169,66 114,123 87,202 87,302 87,373 101,432 128,478 155,523 190,559 234,585 277,611 327,629 383,639 439,649 496,655 554,656 L 797,660 797,719 C 797,764 792,802 783,833 774,864 759,890 740,909 721,928 697,943 668,952 639,961 604,965 565,965 530,965 499,963 471,958 443,953 419,944 398,931 377,918 361,900 348,878 335,855 327,827 323,793 L 135,810 C 142,853 154,892 173,928 192,963 218,994 253,1020 287,1046 330,1066 382,1081 433,1095 496,1102 569,1102 705,1102 807,1071 876,1009 945,946 979,856 979,738 L 979,272 C 979,219 986,179 1000,152 1014,125 1041,111 1080,111 1090,111 1100,112 1110,113 1120,114 1130,116 1139,118 L 1139,6 C 1116,1 1094,-3 1072,-6 1049,-9 1025,-10 1000,-10 966,-10 937,-5 913,4 888,13 868,26 853,45 838,63 826,86 818,113 810,140 805,171 803,207 L 797,207 C 778,172 757,141 734,113 711,85 684,61 653,42 622,22 588,7 549,-4 510,-15 465,-20 414,-20 Z M 455,115 C 512,115 563,126 606,147 649,168 684,194 713,227 741,260 762,295 776,334 790,373 797,410 797,445 L 797,534 600,530 C 556,529 514,526 475,521 435,515 400,504 370,487 340,470 316,447 299,417 281,387 272,348 272,299 272,240 288,195 320,163 351,131 396,115 455,115 Z"/>
+ <glyph unicode="_" horiz-adv-x="1218" d="M -31,-407 L -31,-277 1162,-277 1162,-407 -31,-407 Z"/>
+ <glyph unicode="U" horiz-adv-x="1192" d="M 731,-20 C 654,-20 580,-10 511,11 442,32 381,64 329,108 276,151 235,207 204,274 173,341 158,420 158,512 L 158,1409 349,1409 349,528 C 349,457 359,396 378,347 397,297 423,256 457,225 491,194 531,171 578,157 624,142 675,135 730,135 785,135 836,142 885,157 934,172 976,195 1013,227 1050,259 1079,301 1100,353 1121,404 1131,467 1131,541 L 1131,1409 1321,1409 1321,530 C 1321,436 1306,355 1275,286 1244,217 1201,159 1148,114 1095,69 1032,35 961,13 889,-9 812,-20 731,-20 Z"/>
+ <glyph unicode="S" horiz-adv-x="1192" d="M 1272,389 C 1272,330 1261,275 1238,225 1215,175 1179,132 1131,96 1083,59 1023,31 950,11 877,-10 790,-20 690,-20 515,-20 378,11 280,72 182,133 120,222 93,338 L 278,375 C 287,338 302,305 321,275 340,245 367,219 400,198 433,176 473,159 522,147 571,135 629,129 697,129 754,129 806,134 853,144 900,153 941,168 975,188 1009,208 1036,234 1055,266 1074,297 1083,335 1083,379 1083,425 1073,462 1052,491 1031,520 1001,543 963,562 925,581 880,596 827,609 774,622 716,635 652,650 613,659 573,668 534,679 494,689 456,701 420,716 383,730 349,747 317,766 285,785 257,809 234,836 211,863 192,894 179,930 166,965 159,1006 159,1053 159,1120 173,1177 200,1225 227,1272 264,1311 312,1342 360,1373 417,1395 482,1409 547,1423 618,1430 694,1430 781,1430 856,1423 918,1410 980,1396 1032,1375 1075,1348 1118,1321 1152,1287 1178,1247 1203,1206 1224,1159 1239,1106 L 1051,1073 C 1042,1107 1028,1137 1011,1164 993,1191 970,1213 941,1231 912,1249 878,1263 837,1272 796,1281 747,1286 692,1286 627,1286 572,1280 528,1269 483,1257 448,1241 421,1221 394,1201 374,1178 363,1151 351,1124 345,1094 345,1063 345,1021 356,987 377,960 398,933 426,910 462,892 498,874 540,859 587,847 634,835 685,823 738,811 781,801 825,791 868,781 911,770 952,758 991,744 1030,729 1067,712 1102,693 1136,674 1166,650 1191,622 1216,594 1236,561 1251,523 1265,485 1272,440 1272,389 Z"/>
+ <glyph unicode="P" horiz-adv-x="1112" d="M 1258,985 C 1258,924 1248,867 1228,814 1207,761 1177,715 1137,676 1096,637 1046,606 985,583 924,560 854,549 773,549 L 359,549 359,0 168,0 168,1409 761,1409 C 844,1409 917,1399 979,1379 1041,1358 1093,1330 1134,1293 1175,1256 1206,1211 1227,1159 1248,1106 1258,1048 1258,985 Z M 1066,983 C 1066,1072 1039,1140 984,1187 929,1233 847,1256 738,1256 L 359,1256 359,700 746,700 C 856,700 937,724 989,773 1040,822 1066,892 1066,983 Z"/>
+ <glyph unicode="L" horiz-adv-x="927" d="M 168,0 L 168,1409 359,1409 359,156 1071,156 1071,0 168,0 Z"/>
+ <glyph unicode="G" horiz-adv-x="1377" d="M 103,711 C 103,821 118,920 148,1009 177,1098 222,1173 281,1236 340,1298 413,1346 500,1380 587,1413 689,1430 804,1430 891,1430 967,1422 1032,1407 1097,1392 1154,1370 1202,1341 1250,1312 1291,1278 1324,1237 1357,1196 1386,1149 1409,1098 L 1227,1044 C 1210,1079 1189,1110 1165,1139 1140,1167 1111,1191 1076,1211 1041,1231 1001,1247 956,1258 910,1269 858,1274 799,1274 714,1274 640,1261 577,1234 514,1207 461,1169 420,1120 379,1071 348,1011 328,942 307,873 297,796 297,711 297,626 308,549 330,479 352,408 385,348 428,297 471,246 525,206 590,178 654,149 728,135 813,135 868,135 919,140 966,149 1013,158 1055,171 1093,186 1130,201 1163,217 1192,236 1221,254 1245,272 1264,291 L 1264,545 843,545 843,705 1440,705 1440,219 C 1409,187 1372,157 1330,128 1287,99 1240,73 1187,51 1134,29 1077,12 1014,-1 951,-14 884,-20 813,-20 694,-20 591,-2 502,35 413,71 340,122 281,187 222,252 177,329 148,418 118,507 103,605 103,711 Z"/>
+ <glyph unicode="D" horiz-adv-x="1218" d="M 1381,719 C 1381,602 1363,498 1328,409 1293,319 1244,244 1183,184 1122,123 1049,78 966,47 882,16 792,0 695,0 L 168,0 168,1409 634,1409 C 743,1409 843,1396 935,1369 1026,1342 1105,1300 1171,1244 1237,1187 1289,1116 1326,1029 1363,942 1381,839 1381,719 Z M 1189,719 C 1189,814 1175,896 1148,964 1121,1031 1082,1087 1033,1130 984,1173 925,1205 856,1226 787,1246 712,1256 630,1256 L 359,1256 359,153 673,153 C 747,153 816,165 879,189 942,213 996,249 1042,296 1088,343 1124,402 1150,473 1176,544 1189,626 1189,719 Z"/>
+ <glyph unicode="C" horiz-adv-x="1324" d="M 792,1274 C 712,1274 641,1261 580,1234 518,1207 466,1169 425,1120 383,1071 351,1011 330,942 309,873 298,796 298,711 298,626 310,549 333,479 356,408 389,348 432,297 475,246 527,207 590,179 652,151 722,137 800,137 855,137 905,144 950,159 995,173 1035,193 1072,219 1108,245 1140,276 1169,312 1198,347 1223,387 1245,430 L 1401,352 C 1376,299 1344,250 1307,205 1270,160 1226,120 1176,87 1125,54 1068,28 1005,9 941,-10 870,-20 791,-20 677,-20 577,-2 492,35 406,71 334,122 277,187 219,252 176,329 147,418 118,507 104,605 104,711 104,821 119,920 150,1009 180,1098 224,1173 283,1236 341,1298 413,1346 498,1380 583,1413 681,1430 790,1430 940,1430 1065,1401 1166,1342 1267,1283 1341,1196 1388,1081 L 1207,1021 C 1194,1054 1176,1086 1153,1117 1130,1147 1102,1174 1068,1197 1034,1220 994,1239 949,1253 903,1267 851,1274 792,1274 Z"/>
+ <glyph unicode="A" horiz-adv-x="1377" d="M 1167,0 L 1006,412 364,412 202,0 4,0 579,1409 796,1409 1362,0 1167,0 Z M 768,1026 C 757,1053 747,1080 738,1107 728,1134 719,1159 712,1182 705,1204 699,1223 694,1238 689,1253 686,1262 685,1265 684,1262 681,1252 676,1237 671,1222 665,1203 658,1180 650,1157 641,1132 632,1105 622,1078 612,1051 602,1024 L 422,561 949,561 768,1026 Z"/>
+ <glyph unicode="7" horiz-adv-x="954" d="M 1036,1263 C 965,1155 900,1051 841,952 782,852 731,752 688,651 645,550 612,446 589,340 565,233 553,120 553,0 L 365,0 C 365,113 378,223 405,332 432,440 468,546 513,651 558,755 611,857 671,958 731,1059 795,1158 862,1256 L 105,1256 105,1409 1036,1409 1036,1263 Z"/>
+ <glyph unicode="6" horiz-adv-x="980" d="M 1049,461 C 1049,390 1039,326 1020,267 1000,208 971,157 933,115 894,72 847,39 790,16 733,-8 668,-20 594,-20 512,-20 440,-4 379,27 318,58 267,104 226,163 185,222 155,294 135,380 114,465 104,563 104,672 104,797 116,907 139,1002 162,1097 195,1176 238,1239 281,1302 334,1350 397,1382 459,1414 529,1430 608,1430 656,1430 701,1425 743,1415 785,1405 823,1389 858,1367 892,1344 922,1315 948,1278 974,1241 995,1196 1010,1143 L 838,1112 C 819,1173 790,1217 749,1244 708,1271 660,1284 606,1284 557,1284 512,1272 472,1249 432,1226 398,1191 370,1145 342,1098 321,1040 306,970 291,900 283,818 283,725 316,786 362,832 421,864 480,895 548,911 625,911 689,911 747,901 799,880 851,859 896,830 933,791 970,752 998,704 1019,649 1039,593 1049,530 1049,461 Z M 866,453 C 866,502 860,546 848,585 836,624 818,658 794,686 770,713 740,735 705,750 670,765 629,772 582,772 549,772 516,767 483,758 450,748 420,732 393,711 366,689 344,660 327,625 310,590 301,547 301,496 301,444 308,396 321,351 334,306 354,266 379,233 404,200 434,173 469,154 504,135 544,125 588,125 631,125 670,133 705,148 739,163 768,184 792,213 816,241 834,275 847,316 860,357 866,402 866,453 Z"/>
+ <glyph unicode="5" horiz-adv-x="980" d="M 1053,459 C 1053,388 1042,324 1021,265 1000,206 968,156 926,114 884,71 832,38 770,15 707,-8 635,-20 553,-20 479,-20 415,-11 360,6 305,23 258,47 220,78 182,108 152,143 130,184 107,225 91,268 82,315 L 264,336 C 271,309 282,284 295,259 308,234 327,211 350,192 373,172 401,156 435,145 468,133 509,127 557,127 604,127 646,134 684,149 722,163 755,184 782,212 809,240 829,274 844,315 859,356 866,402 866,455 866,498 859,538 845,575 831,611 811,642 785,669 759,695 727,715 690,730 652,745 609,752 561,752 531,752 503,749 478,744 453,739 429,731 408,722 386,713 366,702 349,690 331,677 314,664 299,651 L 123,651 170,1409 971,1409 971,1256 334,1256 307,809 C 339,834 379,855 427,873 475,890 532,899 598,899 668,899 731,888 787,867 843,846 891,816 930,777 969,738 1000,691 1021,637 1042,583 1053,524 1053,459 Z"/>
+ <glyph unicode="4" horiz-adv-x="1060" d="M 881,319 L 881,0 711,0 711,319 47,319 47,459 692,1409 881,1409 881,461 1079,461 1079,319 881,319 Z M 711,1206 C 710,1203 706,1196 701,1187 696,1177 690,1166 683,1154 676,1142 670,1130 663,1118 656,1105 649,1095 644,1087 L 283,555 C 280,550 275,543 269,534 262,525 256,517 249,508 242,499 236,490 229,481 222,472 217,466 213,461 L 711,461 711,1206 Z"/>
+ <glyph unicode="3" horiz-adv-x="1006" d="M 1049,389 C 1049,324 1039,267 1018,216 997,165 966,123 926,88 885,53 835,26 776,8 716,-11 648,-20 571,-20 484,-20 410,-9 351,13 291,34 242,63 203,99 164,134 135,175 116,221 97,266 84,313 78,362 L 264,379 C 269,342 279,308 294,277 308,246 327,220 352,198 377,176 407,159 443,147 479,135 522,129 571,129 662,129 733,151 785,196 836,241 862,307 862,395 862,447 851,489 828,521 805,552 776,577 742,595 707,612 670,624 630,630 589,636 552,639 518,639 L 416,639 416,795 514,795 C 548,795 583,799 620,806 657,813 690,825 721,844 751,862 776,887 796,918 815,949 825,989 825,1038 825,1113 803,1173 759,1217 714,1260 648,1282 561,1282 482,1282 418,1262 369,1221 320,1180 291,1123 283,1049 L 102,1063 C 109,1125 126,1179 153,1225 180,1271 214,1309 255,1340 296,1370 342,1393 395,1408 448,1423 504,1430 563,1430 642,1430 709,1420 766,1401 823,1381 869,1354 905,1321 941,1287 968,1247 985,1202 1002,1157 1010,1108 1010,1057 1010,1016 1004,977 993,941 982,905 964,873 940,844 916,815 886,791 849,770 812,749 767,734 715,723 L 715,719 C 772,713 821,700 863,681 905,661 940,636 967,607 994,578 1015,544 1029,507 1042,470 1049,430 1049,389 Z"/>
+ <glyph unicode="2" horiz-adv-x="954" d="M 103,0 L 103,127 C 137,205 179,274 228,334 277,393 328,447 382,496 436,544 490,589 543,630 596,671 643,713 686,754 729,795 763,839 790,884 816,929 829,981 829,1038 829,1078 823,1113 811,1144 799,1174 782,1199 759,1220 736,1241 709,1256 678,1267 646,1277 611,1282 572,1282 536,1282 502,1277 471,1267 439,1257 411,1242 386,1222 361,1202 341,1177 326,1148 310,1118 300,1083 295,1044 L 111,1061 C 117,1112 131,1159 153,1204 175,1249 205,1288 244,1322 283,1355 329,1382 384,1401 438,1420 501,1430 572,1430 642,1430 704,1422 759,1405 814,1388 860,1364 898,1331 935,1298 964,1258 984,1210 1004,1162 1014,1107 1014,1044 1014,997 1006,952 989,909 972,866 949,826 921,787 892,748 859,711 822,675 785,639 746,604 705,570 664,535 623,501 582,468 541,434 502,400 466,366 429,332 397,298 368,263 339,228 317,191 301,153 L 1036,153 1036,0 103,0 Z"/>
+ <glyph unicode="1" horiz-adv-x="927" d="M 156,0 L 156,153 515,153 515,1237 197,1010 197,1180 530,1409 696,1409 696,153 1039,153 1039,0 156,0 Z"/>
+ <glyph unicode="/" horiz-adv-x="583" d="M 0,-20 L 411,1484 569,1484 162,-20 0,-20 Z"/>
+ <glyph unicode="." horiz-adv-x="213" d="M 187,0 L 187,219 382,219 382,0 187,0 Z"/>
+ <glyph unicode=")" horiz-adv-x="557" d="M 555,528 C 555,435 548,346 534,262 520,177 498,96 468,18 438,-60 400,-136 353,-209 306,-282 251,-354 186,-424 L 12,-424 C 75,-354 129,-282 175,-209 220,-136 258,-60 287,19 316,98 338,179 353,264 367,349 374,437 374,530 374,623 367,711 353,796 338,881 316,962 287,1041 258,1119 220,1195 175,1269 129,1342 75,1414 12,1484 L 186,1484 C 251,1414 306,1342 353,1269 400,1196 438,1120 468,1042 498,964 520,883 534,798 548,713 555,625 555,532 L 555,528 Z"/>
+ <glyph unicode="(" horiz-adv-x="583" d="M 127,532 C 127,625 134,713 148,798 162,883 184,964 214,1042 244,1120 282,1196 329,1269 376,1342 431,1414 496,1484 L 670,1484 C 607,1414 553,1342 508,1269 462,1195 424,1119 395,1041 366,962 344,881 330,796 315,711 308,623 308,530 308,437 315,349 330,264 344,179 366,98 395,19 424,-60 462,-136 508,-209 553,-282 607,-354 670,-424 L 496,-424 C 431,-354 376,-282 329,-209 282,-136 244,-60 214,18 184,96 162,177 148,262 134,346 127,435 127,528 L 127,532 Z"/>
+ <glyph unicode=" " horiz-adv-x="556"/>
+ </font>
+ </defs>
+ <defs class="TextShapeIndex">
+ <g ooo:slide="id1" ooo:id-list="id3 id4 id5 id6 id7 id8 id9 id10 id11 id12 id13 id14 id15 id16"/>
+ </defs>
+ <defs class="EmbeddedBulletChars">
+ <g id="bullet-char-template-57356" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 580,1141 L 1163,571 580,0 -4,571 580,1141 Z"/>
+ </g>
+ <g id="bullet-char-template-57354" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 8,1128 L 1137,1128 1137,0 8,0 8,1128 Z"/>
+ </g>
+ <g id="bullet-char-template-10146" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 174,0 L 602,739 174,1481 1456,739 174,0 Z M 1358,739 L 309,1346 659,739 1358,739 Z"/>
+ </g>
+ <g id="bullet-char-template-10132" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 2015,739 L 1276,0 717,0 1260,543 174,543 174,936 1260,936 717,1481 1274,1481 2015,739 Z"/>
+ </g>
+ <g id="bullet-char-template-10007" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 0,-2 C -7,14 -16,27 -25,37 L 356,567 C 262,823 215,952 215,954 215,979 228,992 255,992 264,992 276,990 289,987 310,991 331,999 354,1012 L 381,999 492,748 772,1049 836,1024 860,1049 C 881,1039 901,1025 922,1006 886,937 835,863 770,784 769,783 710,716 594,584 L 774,223 C 774,196 753,168 711,139 L 727,119 C 717,90 699,76 672,76 641,76 570,178 457,381 L 164,-76 C 142,-110 111,-127 72,-127 30,-127 9,-110 8,-76 1,-67 -2,-52 -2,-32 -2,-23 -1,-13 0,-2 Z"/>
+ </g>
+ <g id="bullet-char-template-10004" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 285,-33 C 182,-33 111,30 74,156 52,228 41,333 41,471 41,549 55,616 82,672 116,743 169,778 240,778 293,778 328,747 346,684 L 369,508 C 377,444 397,411 428,410 L 1163,1116 C 1174,1127 1196,1133 1229,1133 1271,1133 1292,1118 1292,1087 L 1292,965 C 1292,929 1282,901 1262,881 L 442,47 C 390,-6 338,-33 285,-33 Z"/>
+ </g>
+ <g id="bullet-char-template-9679" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 813,0 C 632,0 489,54 383,161 276,268 223,411 223,592 223,773 276,916 383,1023 489,1130 632,1184 813,1184 992,1184 1136,1130 1245,1023 1353,916 1407,772 1407,592 1407,412 1353,268 1245,161 1136,54 992,0 813,0 Z"/>
+ </g>
+ <g id="bullet-char-template-8226" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 346,457 C 273,457 209,483 155,535 101,586 74,649 74,723 74,796 101,859 155,911 209,963 273,989 346,989 419,989 480,963 531,910 582,859 608,796 608,723 608,648 583,586 532,535 482,483 420,457 346,457 Z"/>
+ </g>
+ <g id="bullet-char-template-8211" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M -4,459 L 1135,459 1135,606 -4,606 -4,459 Z"/>
+ </g>
+ <g id="bullet-char-template-61548" transform="scale(0.00048828125,-0.00048828125)">
+ <path d="M 173,740 C 173,903 231,1043 346,1159 462,1274 601,1332 765,1332 928,1332 1067,1274 1183,1159 1299,1043 1357,903 1357,740 1357,577 1299,437 1183,322 1067,206 928,148 765,148 601,148 462,206 346,322 231,437 173,577 173,740 Z"/>
+ </g>
+ </defs>
+ <defs class="TextEmbeddedBitmaps"/>
+ <g>
+ <g id="id2" class="Master_Slide">
+ <g id="bg-id2" class="Background"/>
+ <g id="bo-id2" class="BackgroundObjects"/>
+ </g>
+ </g>
+ <g class="SlideGroup">
+ <g>
+ <g id="container-id1">
+ <g id="id1" class="Slide" clip-path="url(#presentation_clip_path)">
+ <g class="Page">
+ <g class="com.sun.star.drawing.CustomShape">
+ <g id="id3">
+ <rect class="BoundingBox" stroke="none" fill="none" x="8202" y="4046" width="3556" height="1781"/>
+ <path fill="rgb(114,159,207)" stroke="none" d="M 9980,5825 L 8203,5825 8203,4047 11756,4047 11756,5825 9980,5825 Z"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 9980,5825 L 8203,5825 8203,4047 11756,4047 11756,5825 9980,5825 Z"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="8500" y="4801"><tspan fill="rgb(0,0,0)" stroke="none">1. Created</tspan></tspan></tspan><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="8695" y="5512"><tspan fill="rgb(0,0,0)" stroke="none">(inactive)</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.CustomShape">
+ <g id="id4">
+ <rect class="BoundingBox" stroke="none" fill="none" x="8072" y="7983" width="3812" height="1781"/>
+ <path fill="rgb(114,159,207)" stroke="none" d="M 9978,9762 L 8073,9762 8073,7984 11882,7984 11882,9762 9978,9762 Z"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 9978,9762 L 8073,9762 8073,7984 11882,7984 11882,9762 9978,9762 Z"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="8708" y="8738"><tspan fill="rgb(0,0,0)" stroke="none">3. Set up</tspan></tspan></tspan><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="8693" y="9449"><tspan fill="rgb(0,0,0)" stroke="none">(inactive)</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.CustomShape">
+ <g id="id5">
+ <rect class="BoundingBox" stroke="none" fill="none" x="8234" y="11665" width="3430" height="1908"/>
+ <path fill="rgb(114,159,207)" stroke="none" d="M 9949,13571 L 8235,13571 8235,11666 11662,11666 11662,13571 9949,13571 Z"/>
+ <path fill="none" stroke="rgb(52,101,164)" d="M 9949,13571 L 8235,13571 8235,11666 11662,11666 11662,13571 9949,13571 Z"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="8751" y="12839"><tspan fill="rgb(0,0,0)" stroke="none">5. Active</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.ConnectorShape">
+ <g id="id6">
+ <rect class="BoundingBox" stroke="none" fill="none" x="9828" y="5824" width="302" height="2161"/>
+ <path fill="none" stroke="rgb(0,0,0)" d="M 9980,5825 L 9978,7554"/>
+ <path fill="rgb(0,0,0)" stroke="none" d="M 9978,7984 L 10128,7534 9828,7534 9978,7984 Z"/>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.ConnectorShape">
+ <g id="id7">
+ <rect class="BoundingBox" stroke="none" fill="none" x="9805" y="9761" width="302" height="1907"/>
+ <path fill="none" stroke="rgb(0,0,0)" d="M 9978,9762 L 9956,11237"/>
+ <path fill="rgb(0,0,0)" stroke="none" d="M 9949,11667 L 10106,11219 9806,11215 9949,11667 Z"/>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.ConnectorShape">
+ <g id="id8">
+ <rect class="BoundingBox" stroke="none" fill="none" x="11662" y="4807" width="3803" height="7814"/>
+ <path fill="none" stroke="rgb(0,0,0)" d="M 11663,12619 C 16668,12619 16620,5368 12161,4954"/>
+ <path fill="rgb(0,0,0)" stroke="none" d="M 11757,4936 L 12199,5107 12214,4808 11757,4936 Z"/>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.TextShape">
+ <g id="id9">
+ <rect class="BoundingBox" stroke="none" fill="none" x="10234" y="1155" width="5843" height="4440"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="10484" y="1856"><tspan fill="rgb(0,0,0)" stroke="none">User logs in via </tspan></tspan><tspan class="TextPosition" x="10484" y="2567"><tspan fill="rgb(0,0,0)" stroke="none">Google/LDAP etc</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.ConnectorShape">
+ <g id="id10">
+ <rect class="BoundingBox" stroke="none" fill="none" x="9833" y="1241" width="302" height="2807"/>
+ <path fill="none" stroke="rgb(0,0,0)" d="M 10001,1242 L 9983,3617"/>
+ <path fill="rgb(0,0,0)" stroke="none" d="M 9980,4047 L 10133,3598 9833,3596 9980,4047 Z"/>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.TextShape">
+ <g id="id11">
+ <rect class="BoundingBox" stroke="none" fill="none" x="10361" y="6460" width="2921" height="3023"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="10611" y="7161"><tspan fill="rgb(0,0,0)" stroke="none">2. setup</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.TextShape">
+ <g id="id12">
+ <rect class="BoundingBox" stroke="none" fill="none" x="10266" y="10270" width="3393" height="3023"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="10516" y="10971"><tspan fill="rgb(0,0,0)" stroke="none">4. activate</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.ConnectorShape">
+ <g id="id13">
+ <rect class="BoundingBox" stroke="none" fill="none" x="5111" y="4935" width="3126" height="7805"/>
+ <path fill="none" stroke="rgb(0,0,0)" d="M 8204,4936 C 4151,4936 4141,12106 7833,12593"/>
+ <path fill="rgb(0,0,0)" stroke="none" d="M 8236,12619 L 7797,12439 7777,12738 8236,12619 Z"/>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.TextShape">
+ <g id="id14">
+ <rect class="BoundingBox" stroke="none" fill="none" x="1543" y="7708" width="3390" height="2564"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="1793" y="8409"><tspan fill="rgb(0,0,0)" stroke="none">6. update </tspan></tspan></tspan><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="1793" y="9120"><tspan fill="rgb(0,0,0)" stroke="none">“</tspan><tspan fill="rgb(0,0,0)" stroke="none">is_active” </tspan></tspan><tspan class="TextPosition" x="1793" y="9831"><tspan fill="rgb(0,0,0)" stroke="none">to “true”</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.TextShape">
+ <g id="id15">
+ <rect class="BoundingBox" stroke="none" fill="none" x="15727" y="8165" width="3464" height="3023"/>
+ <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="15977" y="8866"><tspan fill="rgb(0,0,0)" stroke="none">7. unsetup</tspan></tspan></tspan></text>
+ </g>
+ </g>
+ <g class="com.sun.star.drawing.ConnectorShape">
+ <g id="id16">
+ <rect class="BoundingBox" stroke="none" fill="none" x="5916" y="8872" width="2321" height="3869"/>
+ <path fill="none" stroke="rgb(0,0,0)" d="M 8074,8873 C 5278,8873 5211,12261 7812,12593"/>
+ <path fill="rgb(0,0,0)" stroke="none" d="M 8236,12619 L 7797,12440 7777,12740 8236,12619 Z"/>
+ </g>
+ </g>
+ </g>
+ </g>
+ </g>
+ </g>
+ </g>
+</svg>
\ No newline at end of file
<a name="Support"></a>
<p><strong>Support and Community</strong></p>
-<p>The recommended place to ask a question about Arvados is on Biostars. After you have <a href="//www.biostars.org/t/arvados/">read previous questions and answers</a> you can <a href="https://www.biostars.org/p/new/post/?tag_val=arvados">post your question using the 'arvados' tag</a>.</p>
-
- <p>There is a <a href="http://lists.arvados.org/mailman/listinfo/arvados">mailing list</a>. The <a href="https://gitter.im/curoverse/arvados">#arvados channel</a> at gitter.im is available for live discussion and community support.
+ <p>The <a href="https://gitter.im/arvados/community">arvados community channel</a> at gitter.im is available for live discussion and community support. There is also a <a href="http://lists.arvados.org/mailman/listinfo/arvados">mailing list</a>.
</p>
- <p>Curoverse, a Veritas Genetics company, provides managed Arvados installations as well as commercial support for Arvados. Please visit <a href="https://curoverse.com">curoverse.com</a> or contact <a href="mailto:researchsales@veritasgenetics.com">researchsales@veritasgenetics.com</a> for more information.</p>
+ <p>Curii Corporation provides managed Arvados installations as well as commercial support for Arvados. Please contact <a href="mailto:info@curii.com">info@curii.com</a> for more information.</p>
<p><strong>Contributing</strong></p>
- <p>Please visit the <a href="https://dev.arvados.org/projects/arvados/wiki/Wiki#Contributing-and-hacking">developer site</a>. Arvados is 100% free and open source software, check out the code on <a href="https://github.com/curoverse/arvados">github</a>.
+ <p>Please visit the <a href="https://dev.arvados.org/projects/arvados/wiki/Wiki#Contributing-and-hacking">developer site</a>. Arvados is 100% free and open source software, check out the code on <a href="https://github.com/arvados/arvados">github</a>.
<p>Arvados is under active development, see the <a href="https://dev.arvados.org/projects/arvados/activity">recent developer activity</a>.
</p>
<p><strong>License</strong></p>
- <p>Most of Arvados is licensed under the <a href="{{ site.baseurl }}/user/copying/agpl-3.0.html">GNU AGPL v3</a>. The SDKs are licensed under the <a href="{{ site.baseurl }}/user/copying/LICENSE-2.0.html">Apache License 2.0</a> so that they can be incorporated into proprietary code. See the <a href="https://github.com/curoverse/arvados/blob/master/COPYING">COPYING file</a> for more information.
+ <p>Most of Arvados is licensed under the <a href="{{ site.baseurl }}/user/copying/agpl-3.0.html">GNU AGPL v3</a>. The SDKs are licensed under the <a href="{{ site.baseurl }}/user/copying/LICENSE-2.0.html">Apache License 2.0</a> so that they can be incorporated into proprietary code. See the <a href="https://github.com/arvados/arvados/blob/master/COPYING">COPYING file</a> for more information.
</p>
</div>
Clone the repository and navigate to the @arvados-kubernetes/charts/arvados@ directory:
<pre>
-$ git clone https://github.com/curoverse/arvados-kubernetes.git
+$ git clone https://github.com/arvados/arvados-kubernetes.git
$ cd arvados-kubernetes/charts/arvados
</pre>
h2. Quick start
<pre>
-$ git clone https://github.com/curoverse/arvados.git
+$ git clone https://github.com/arvados/arvados.git
$ cd arvados/tools/arvbox/bin
$ ./arvbox start localdemo
</pre>
+++ /dev/null
----
-layout: default
-navsection: admin
-title: User management at the CLI
-...
-{% comment %}
-Copyright (C) The Arvados Authors. All rights reserved.
-
-SPDX-License-Identifier: CC-BY-SA-3.0
-{% endcomment %}
-
-h3. Workbench: user management
-
-As an Admin user, use the gear icon on the top right to visit the Users page. From there, use the 'Add new user' button to create a new user. Alternatively, visit an existing user with the 'Show' button next to the user's name. Then use the 'Admin' tab and click the 'Setup' button to activate the user, and create a virtual machine login as well as git repository for them.
-
-h3. CLI setup
-
-<pre>
-ARVADOS_API_HOST={{ site.arvados_api_host }}
-ARVADOS_API_TOKEN=1234567890qwertyuiopasdfghjklzxcvbnm1234567890zzzz
-</pre>
-
-h3. CLI: Create VM
-
-<pre>
-arv virtual_machine create --virtual-machine '{"hostname":"xxxxxxxchangeme.example.com"}'
-</pre>
-
-h3. CLI: Activate user
-
-<pre>
-user_uuid=xxxxxxxchangeme
-
-arv user update --uuid "$user_uuid" --user '{"is_active":true}'
-</pre>
-
-h3. User → VM
-
-Give @$user_uuid@ permission to log in to @$vm_uuid@ as @$target_username@
-
-<pre>
-user_uuid=xxxxxxxchangeme
-vm_uuid=xxxxxxxchangeme
-target_username=xxxxxxxchangeme
-
-read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"
-{
-"tail_uuid":"$user_uuid",
-"head_uuid":"$vm_uuid",
-"link_class":"permission",
-"name":"can_login",
-"properties":{"username":"$target_username"}
-}
-EOF
-</pre>
-
-h3. CLI: User → repo
-
-Give @$user_uuid@ permission to commit to @$repo_uuid@ as @$repo_username@
-
-<pre>
-user_uuid=xxxxxxxchangeme
-repo_uuid=xxxxxxxchangeme
-repo_username=xxxxxxxchangeme
-
-read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"
-{
-"tail_uuid":"$user_uuid",
-"head_uuid":"$repo_uuid",
-"link_class":"permission",
-"name":"can_write",
-"properties":{"username":"$repo_username"}
-}
-EOF
-</pre>
--- /dev/null
+../admin/user-management-cli.html.textile.liquid
\ No newline at end of file
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Configuration files
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+h2. Arvados /etc/arvados/config.yml
+
+The configuration file is normally found at @/etc/arvados/config.yml@ and will be referred to as just @config.yml@ in this guide. It must be kept in sync across every service node in the cluster. Shell and compute nodes do not require @config.yml@.
+
+h3. Syntax
+
+The configuration file is in "YAML":https://yaml.org/ format. This is a block syntax where indentation is significant (similar to Python). By convention, we use a two-space indent. The first line of the file is always "Clusters:". Underneath it, at the first indent level, is the Cluster ID. All the actual cluster configuration follows under the Cluster ID, which means every configuration parameter is indented by at least two levels (four spaces). Comments start with @#@.
+
+We recommend a YAML-syntax plugin for your favorite text editor, such as @yaml-mode@ (Emacs) or @yaml-vim@.
+
+Example file:
+
+<pre>
+Clusters: # Clusters block, everything else is listed under this
+ abcde: # Cluster ID, everything under it is configuration for this cluster
+ ExampleConfigKey: "fghijk" # An example configuration key
+ ExampleConfigGroup: # A group of keys
+ ExampleDurationConfig: 12s # Example duration
+ ExampleSizeConfig: 99KiB # Example with a size suffix
+</pre>
+
+Each configuration group may appear only once. When a configuration key is inside a config group, it is written with the group name first, for example @ExampleConfigGroup.ExampleSizeConfig@.
+
+Duration suffixes are s (seconds), m (minutes), or h (hours).
+
+Size suffixes are K=10 ^3^, Ki=2 ^10^, M=10 ^6^, Mi=2 ^20^, G=10 ^9^, Gi=2 ^30^, T=10 ^12^, Ti=2 ^40^, P=10 ^15^, Pi=2 ^50^, E=10 ^18^, Ei=2 ^60^. You can optionally append a "B" (e.g. "MB" or "MiB") for readability; it does not affect the units.
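The suffix arithmetic can be sketched in Python. This is an illustration only, not Arvados code (Arvados parses these values internally), and the @parse_size@ helper name is hypothetical:

```python
import re

# Multipliers for the size suffixes described above: decimal (K, M, ...)
# versus binary (Ki, Mi, ...).
MULTIPLIERS = {
    "": 1,
    "K": 10**3, "Ki": 2**10,
    "M": 10**6, "Mi": 2**20,
    "G": 10**9, "Gi": 2**30,
    "T": 10**12, "Ti": 2**40,
    "P": 10**15, "Pi": 2**50,
    "E": 10**18, "Ei": 2**60,
}

def parse_size(value):
    """Convert a config size string like '99KiB' or '1M' to bytes.

    The trailing 'B' is optional and does not affect the units.
    """
    m = re.fullmatch(r"(\d+)([KMGTPE]i?)?B?", value)
    if not m:
        raise ValueError("not a valid size: %r" % value)
    number, suffix = m.groups()
    return int(number) * MULTIPLIERS[suffix or ""]

print(parse_size("99KiB"))  # 99 * 1024 = 101376
print(parse_size("1M"))     # 1000000
```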
+
+h3(#empty). Create empty configuration file
+
+Change @webserver-user@ to the user that runs your web server process. This is @www-data@ on Debian-based systems, and @nginx@ on Red Hat-based systems.
+
+<notextile>
+<pre><code># <span class="userinput">export ClusterID=xxxxx</span>
+# <span class="userinput">umask 027</span>
+# <span class="userinput">mkdir -p /etc/arvados</span>
+# <span class="userinput">cat > /etc/arvados/config.yml <<EOF
+Clusters:
+ ${ClusterID}:
+EOF</span>
+# <span class="userinput">chgrp webserver-user /etc/arvados /etc/arvados/config.yml</span>
+</code></pre>
+</notextile>
+
+h2. Nginx configuration
+
+This guide will also cover setting up "Nginx":https://www.nginx.com/ as a reverse proxy for Arvados services. Nginx performs two main functions: TLS termination and virtual host routing. The virtual host configuration for each component will go in its own file in @/etc/nginx/conf.d/@.
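A per-component virtual host file in @/etc/nginx/conf.d/@ generally takes the following shape. This is a sketch only; the server name, upstream port, and certificate paths here are placeholders, and the real values appear in the per-component sections of this guide:

```nginx
# Hypothetical example of the virtual-host-per-file layout. Nginx
# terminates TLS here and proxies plain HTTP to the Arvados service
# listening on localhost.
server {
  listen 443 ssl;
  server_name example-service.ClusterID.example.com;

  ssl_certificate     /etc/ssl/certs/example-service.pem;
  ssl_certificate_key /etc/ssl/private/example-service.key;

  location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
```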
+
+h2. Synchronizing config file
+
+The Arvados configuration file must be kept in sync across every service node in the cluster. We strongly recommend using a configuration management tool such as "Puppet":https://puppet.com/open-source/ to synchronize it. Alternatively, a script like the following can securely copy the configuration file to each node. Replace the @ssh@ targets with your nodes.
+
+<notextile>
+<pre><code>#!/bin/sh
+sudo cat /etc/arvados/config.yml | ssh <span class="userinput">10.0.0.2</span> sudo sh -c "'cat > /etc/arvados/config.yml'"
+sudo cat /etc/arvados/config.yml | ssh <span class="userinput">10.0.0.3</span> sudo sh -c "'cat > /etc/arvados/config.yml'"
+</code></pre>
+</notextile>
<notextile>
<pre><code>~$ <span class="userinput">azure config mode arm</span>
-~$ <span class="userinput">azure login</span>
-~$ <span class="userinput">azure group create exampleGroupName eastus</span>
-~$ <span class="userinput">azure storage account create --type LRS --location eastus --resource-group exampleGroupName exampleStorageAccountName</span>
-~$ <span class="userinput">azure storage account keys list --resource-group exampleGroupName exampleStorageAccountName</span>
-info: Executing command storage account keys list
-+ Getting storage account keys
-data: Primary: zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz==
-data: Secondary: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==
-info: storage account keys list command OK
+~$ <span class="userinput">az login</span>
+~$ <span class="userinput">az group create --name exampleGroupName --location eastus2</span>
+~$ <span class="userinput">az storage account create --sku Standard_LRS --kind BlobStorage --encryption-services blob --access-tier Hot --https-only true --location eastus2 --resource-group exampleGroupName --name exampleStorageAccountName</span>
+~$ <span class="userinput">az storage account keys list --resource-group exampleGroupName --account-name exampleStorageAccountName</span>
+[
+  {
+    "keyName": "key1",
+    "permissions": "Full",
+    "value": "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz=="
+  },
+  {
+    "keyName": "key2",
+    "permissions": "Full",
+    "value": "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy=="
+  }
+]
~$ <span class="userinput">AZURE_STORAGE_ACCOUNT="exampleStorageAccountName" \
AZURE_STORAGE_ACCESS_KEY="zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz==" \
-azure storage container create exampleContainerName</span>
+az storage container create --name exampleContainerName</span>
</code></pre>
</notextile>
{% include 'assign_volume_uuid' %}
-<notextile><pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Volumes:
- <span class="userinput">uuid_prefix</span>-nyw5e-<span class="userinput">000000000000000</span>:
+<notextile><pre><code> Volumes:
+ <span class="userinput">ClusterID</span>-nyw5e-<span class="userinput">000000000000000</span>:
AccessViaHosts:
# This section determines which keepstore servers access the
# volume. In this example, keep0 has read/write access, and
# If the AccessViaHosts section is empty or omitted, all
# keepstore servers will have read/write access to the
# volume.
- "http://<span class="userinput">keep0.uuid_prefix.example.com</span>:25107/": {}
- "http://<span class="userinput">keep1.uuid_prefix.example.com</span>:25107/": {ReadOnly: true}
+ "http://<span class="userinput">keep0.ClusterID.example.com</span>:25107/": {}
+ "http://<span class="userinput">keep1.ClusterID.example.com</span>:25107/": {ReadOnly: true}
- Driver: Azure
+ Driver: <span class="userinput">Azure</span>
DriverParameters:
# Storage account name and secret key, used for
# authentication.
- StorageAccountName: exampleStorageAccountName
- StorageAccountKey: zzzzzzzzzzzzzzzzzzzzzzzzzz
+ StorageAccountName: <span class="userinput">exampleStorageAccountName</span>
+ StorageAccountKey: <span class="userinput">zzzzzzzzzzzzzzzzzzzzzzzzzz</span>
+
+ # Storage container name.
+ ContainerName: <span class="userinput">exampleContainerName</span>
# The cloud environment to use,
# e.g. "core.chinacloudapi.cn". Defaults to
# "core.windows.net" if blank or omitted.
StorageBaseURL: ""
- # Storage container name.
- ContainerName: exampleContainerName
-
# Time to wait for an upstream response before failing the
# request.
RequestTimeout: 10m
{% include 'assign_volume_uuid' %}
-Note that each volume has an AccessViaHosts section indicating that (for example) keep0's /mnt/local-disk directory is volume 0, while keep1's /mnt/local-disk directory is volume 1.
+Note that each volume entry has an @AccessViaHosts@ section indicating which Keepstore instance(s) will serve that volume. In this example, keep0 and keep1 each have their own data disk. The @/mnt/local-disk@ directory on keep0 is volume @ClusterID-nyw5e-000000000000000@, and the @/mnt/local-disk@ directory on keep1 is volume @ClusterID-nyw5e-000000000000001@.
<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Volumes:
- <span class="userinput">uuid_prefix</span>-nyw5e-<span class="userinput">000000000000000</span>:
+<pre><code> Volumes:
+ <span class="userinput">ClusterID</span>-nyw5e-<span class="userinput">000000000000000</span>:
AccessViaHosts:
- "http://<span class="userinput">keep0.uuid_prefix.example.com</span>:25107": {}
- Driver: Directory
+ "http://<span class="userinput">keep0.ClusterID.example.com</span>:25107": {}
+ Driver: <span class="userinput">Directory</span>
DriverParameters:
# The directory that will be used as the backing store.
- Root: /mnt/local-disk
-
- # When true, read and write operations (for whole 64MiB
- # blocks) on an individual volume will queued and issued
- # serially. When false, read and write operations will be
- # issued concurrently.
- #
- # May improve throughput if you experience contention when
- # there are multiple requests to the same volume.
- #
- # When using SSDs, RAID, or a shared network filesystem, you
- # probably don't want this.
- Serialize: false
+ Root: <span class="userinput">/mnt/local-disk</span>
# How much replication is performed by the underlying
# filesystem. (for example, a network filesystem may provide
# reads.
ReadOnly: false
- # Storage classes to associate with this volume. See "Storage
- # classes" in the "Admin" section of doc.arvados.org.
+ # <a href="{{site.baseurl}}/admin/storage-classes.html">Storage classes</a> to associate with this volume.
StorageClasses: null
- <span class="userinput">uuid_prefix</span>-nyw5e-<span class="userinput">000000000000001</span>:
+ <span class="userinput">ClusterID</span>-nyw5e-<span class="userinput">000000000000001</span>:
AccessViaHosts:
- "http://keep1.<span class="userinput">uuid_prefix</span>.example.com:25107": {}
- Driver: Directory
+ "http://<span class="userinput">keep1.ClusterID.example.com</span>:25107": {}
+ Driver: <span class="userinput">Directory</span>
DriverParameters:
- Root: /mnt/local-disk
+ Root: <span class="userinput">/mnt/local-disk</span>
</code></pre></notextile>
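Volume UUIDs follow the general Arvados UUID shape seen above: a five-character cluster ID, the @nyw5e@ type infix used for volumes, and a fifteen-character identifier. A quick sanity check for hand-written volume entries can be sketched like this (illustrative only, not an Arvados tool):

```python
import re

# Sketch of a validity check for keepstore volume UUIDs as they appear
# in config.yml: <5-char cluster ID>-nyw5e-<15-char identifier>.
VOLUME_UUID = re.compile(r"^[a-z0-9]{5}-nyw5e-[a-z0-9]{15}$")

def is_volume_uuid(uuid):
    return bool(VOLUME_UUID.match(uuid))

print(is_volume_uuid("abcde-nyw5e-000000000000000"))  # True
print(is_volume_uuid("abcde-nyw5e-0000"))             # False: identifier too short
```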
-In the case of a network-attached filesystem, the AccessViaHosts section can have multiple entries. If the filesystem is accessible by all keepstore servers, the AccessViaHosts section can be empty, or omitted entirely.
+In the case of a network-attached filesystem, the @AccessViaHosts@ section can have multiple entries. If the filesystem is accessible by all keepstore servers, the AccessViaHosts section can be empty, or omitted entirely. In this example, the underlying storage system performs replication, so specifying @Replication: 2@ means a block is considered to be stored twice for the purposes of data integrity, while only stored on a single volume from the perspective of Keep.
<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Volumes:
- <span class="userinput">uuid_prefix</span>-nyw5e-<span class="userinput">000000000000002</span>:
+<pre><code> Volumes:
+ <span class="userinput">ClusterID</span>-nyw5e-<span class="userinput">000000000000002</span>:
AccessViaHosts:
# This section determines which keepstore servers access the
# volume. In this example, keep0 has read/write access, and
# If the AccessViaHosts section is empty or omitted, all
# keepstore servers will have read/write access to the
# volume.
- "http://<span class="userinput">keep0.uuid_prefix.example.com</span>:25107/": {}
- "http://<span class="userinput">keep1.uuid_prefix.example.com</span>:25107/": {ReadOnly: true}
- Driver: Directory
+ "http://<span class="userinput">keep0.ClusterID.example.com</span>:25107/": {}
+ "http://<span class="userinput">keep1.ClusterID.example.com</span>:25107/": {ReadOnly: true}
+ Driver: <span class="userinput">Directory</span>
DriverParameters:
- Root: /mnt/network-attached-filesystem
+ Root: <span class="userinput">/mnt/network-attached-filesystem</span>
Replication: 2
</code></pre></notextile>
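The replication accounting described above can be illustrated with a toy calculation. This is a sketch of the bookkeeping idea, not how Keep actually tracks replicas:

```python
# Toy model: each volume reports how many underlying copies it keeps
# via its Replication setting, and a block's effective replication is
# the sum over the volumes that hold it. Not Arvados code.
def effective_replication(replication_per_volume):
    """replication_per_volume: Replication setting of each volume holding the block."""
    return sum(replication_per_volume)

def meets_desired(replication_per_volume, desired=2):
    return effective_replication(replication_per_volume) >= desired

# One network-attached volume with Replication: 2 already satisfies a
# desired replication of 2, even though Keep sees a single volume:
print(meets_desired([2]))     # True
# Two plain local-disk volumes (Replication: 1 each) also satisfy it:
print(meets_desired([1, 1]))  # True
```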
{% include 'assign_volume_uuid' %}
-<notextile><pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Volumes:
- <span class="userinput">uuid_prefix</span>-nyw5e-<span class="userinput">000000000000000</span>:
+<notextile><pre><code> Volumes:
+ <span class="userinput">ClusterID</span>-nyw5e-<span class="userinput">000000000000000</span>:
AccessViaHosts:
# This section determines which keepstore servers access the
# volume. In this example, keep0 has read/write access, and
# If the AccessViaHosts section is empty or omitted, all
# keepstore servers will have read/write access to the
# volume.
- "http://<span class="userinput">keep0.uuid_prefix.example.com</span>:25107/": {}
- "http://<span class="userinput">keep1.uuid_prefix.example.com</span>:25107/": {ReadOnly: true}
+ "http://<span class="userinput">keep0.ClusterID.example.com</span>:25107/": {}
+ "http://<span class="userinput">keep1.ClusterID.example.com</span>:25107/": {ReadOnly: true}
- Driver: S3
+ Driver: <span class="userinput">S3</span>
DriverParameters:
+ # Bucket name.
+ Bucket: <span class="userinput">example-bucket-name</span>
+
# IAM role name to use when retrieving credentials from
# instance metadata. It can be omitted, in which case the
# role name itself will be retrieved from instance metadata
# -- but setting it explicitly may protect you from using
# the wrong credentials in the event of an
# installation/configuration error.
- IAMRole: ""
+ IAMRole: <span class="userinput">""</span>
# If you are not using an IAM role for authentication,
# specify access credentials here instead.
- AccessKey: ""
- SecretKey: ""
+ AccessKey: <span class="userinput">""</span>
+ SecretKey: <span class="userinput">""</span>
+
+ # Storage provider region. For Google Cloud Storage, use ""
+ # or omit.
+ Region: <span class="userinput">us-east-1</span>
# Storage provider endpoint. For Amazon S3, use "" or
# omit. For Google Cloud Storage, use
# "https://storage.googleapis.com".
Endpoint: ""
- # Storage provider region. For Google Cloud Storage, use ""
- # or omit.
- Region: us-east-1a
-
# Change to true if the region requires a LocationConstraint
# declaration.
LocationConstraint: false
- # Bucket name.
- Bucket: example-bucket-name
-
# Requested page size for "list bucket contents" requests.
IndexPageSize: 1000
# Maximum eventual consistency latency
RaceWindow: 24h
- # Enable deletion (garbage collection) even when the
- # configured BlobTrashLifetime is zero. WARNING: eventual
- # consistency may result in race conditions that can cause
- # data loss. Do not enable this unless you understand and
- # accept the risk.
- UnsafeDelete: false
-
# How much replication is provided by the underlying bucket.
# This is used to inform replication decisions at the Keep
# layer.
</code></pre>
</notextile>
-You can now copy the pipeline template from *qr1hi* to *your cluster*. Replace *dst_cluster* with the *uuid_prefix* of your cluster.
+You can now copy the pipeline template from *qr1hi* to *your cluster*. Replace *dst_cluster* with the *ClusterID* of your cluster.
<notextile>
<pre><code>~$ <span class="userinput"> arv-copy --no-recursive --src qr1hi --dst dst_cluster qr1hi-p5p6p-9pkaxt6qjnkxhhu</span>
If you already have an account in the Arvados Playground, you can follow the instructions in the "*Using arv-copy*":http://doc.arvados.org/user/topics/arv-copy.html user guide to get your *Current token* for source and destination clusters, and use them to create the source *qr1hi.conf* and destination *dst_cluster.conf* configuration files.
-You can now copy the pipeline template from *qr1hi* to *your cluster* with or without recursion. Replace *dst_cluster* with the *uuid_prefix* of your cluster.
+You can now copy the pipeline template from *qr1hi* to *your cluster* with or without recursion. Replace *dst_cluster* with the *ClusterID* of your cluster.
*Non-recursive copy:*
<notextile>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-h2. Install dependencies
+# "Introduction":#introduction
+# "Set up Docker":#docker
+# "Update fuse.conf":#fuse
+# "Update docker-cleaner.json":#docker-cleaner
+# "Configure Linux cgroups accounting":#cgroups
+# "Install Docker":#install_docker
+# "Configure the Docker daemon":#configure_docker_daemon
+# "Install python-arvados-fuse, crunch-run, and arvados-docker-cleaner":#install-packages
-First, "add the appropriate package repository for your distribution":{{ site.baseurl }}/install/install-manual-prerequisites.html#repos.
+h2(#introduction). Introduction
-{% include 'note_python_sc' %}
+This page describes how to configure a compute node so that it can be used to run containers dispatched by Arvados.
-On Red Hat-based systems:
+* If you are using the cloud dispatcher, apply these steps and then save a compute node virtual machine image. The virtual machine image ID goes in @config.yml@.
+* If you are using SLURM on a static cluster, these steps must be duplicated on every compute node, preferably using a devops tool such as Puppet.
-<notextile>
-<pre><code>~$ <span class="userinput">echo 'exclude=python2-llfuse' | sudo tee -a /etc/yum.conf</span>
-~$ <span class="userinput">sudo yum install python-arvados-fuse crunch-run arvados-docker-cleaner</span>
-</code></pre>
-</notextile>
+h2(#docker). Set up Docker
-On Debian-based systems:
+See "Set up Docker":../install-docker.html
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install python-arvados-fuse crunch-run arvados-docker-cleaner</span>
-</code></pre>
-</notextile>
-
-{% include 'install_compute_docker' %}
+{% assign arvados_component = 'python-arvados-fuse crunch-run arvados-docker-cleaner' %}
{% include 'install_compute_fuse' %}
{% include 'install_docker_cleaner' %}
-h2. Set up SLURM
+{% include 'install_packages' %}
-Install SLURM on the compute node using the same process you used on the API server in the "previous step":install-slurm.html.
+{% assign arvados_component = 'arvados-docker-cleaner' %}
-The @slurm.conf@ and @/etc/munge/munge.key@ files must be identical on all SLURM nodes. Copy the files you created on the API server in the "previous step":install-slurm.html to each compute node.
+{% include 'start_service' %}
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-The SLURM dispatcher can run on any node that can submit requests to both the Arvados API server and the SLURM controller. It is not resource-intensive, so you can run it on the API server node.
-
-h2. Install the dispatcher
+{% include 'notebox_begin_warning' %}
+crunch-dispatch-slurm is only relevant for on-premises clusters that will spool jobs to Slurm. Skip this section if you are installing a cloud cluster.
+{% include 'notebox_end' %}
-First, "add the appropriate package repository for your distribution":{{ site.baseurl }}/install/install-manual-prerequisites.html#repos.
+# "Introduction":#introduction
+# "Update config.yml":#update-config
+# "Install crunch-dispatch-slurm":#install-packages
+# "Start the service":#start-service
+# "Restart the API server and controller":#restart-api
-On Red Hat-based systems:
+h2(#introduction). Introduction
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install crunch-dispatch-slurm</span>
-~$ <span class="userinput">sudo systemctl enable crunch-dispatch-slurm</span>
-</code></pre>
-</notextile>
+This assumes you already have a SLURM cluster, and have "set up all of your compute nodes":install-compute-node.html . For information on installing SLURM, see "this install guide":https://slurm.schedmd.com/quickstart_admin.html
-On Debian-based systems:
+The Arvados SLURM dispatcher can run on any node that can submit requests to both the Arvados API server and the SLURM controller (via @sbatch@). It is not resource-intensive, so you can run it on the API server node.
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install crunch-dispatch-slurm</span>
-</code></pre>
-</notextile>
+h2(#update-config). Update config.yml (optional)
-h2. Configure the dispatcher (optional)
+Crunch-dispatch-slurm reads the common configuration file at @/etc/arvados/config.yml@.
-Crunch-dispatch-slurm reads the common configuration file at @/etc/arvados/config.yml@. The essential configuration parameters will already be set by previous install steps, so no additional configuration is required. The following sections describe optional configuration parameters.
+The following configuration parameters are optional.
h3(#PollPeriod). Containers.PollInterval
crunch-dispatch-slurm polls the API server periodically for new containers to run. The @PollInterval@ option controls how often this poll happens. Set this to a string of numbers suffixed with one of the time units @ns@, @us@, @ms@, @s@, @m@, or @h@. For example:
<notextile>
-<pre>
-Clusters:
- zzzzz:
- Containers:
+<pre> Containers:
<code class="userinput">PollInterval: <b>3m30s</b>
</code></pre>
</notextile>
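The @PollInterval@ value uses Go-style duration strings (crunch-dispatch-slurm is written in Go). As an illustrative sketch, not Arvados code, a minimal parser shows how the number/suffix pairs combine into a single duration:

```python
# Illustrative sketch (not Arvados code): how Go-style duration strings
# such as "3m30s" combine number/suffix pairs into one duration.
import re

UNITS = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_duration(s):
    """Return the total number of seconds for a duration like '3m30s'."""
    total = 0.0
    for num, unit in re.findall(r"(\d+(?:\.\d+)?)(ns|us|ms|s|m|h)", s):
        total += float(num) * UNITS[unit]
    return total

print(parse_duration("3m30s"))  # 210.0
```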
Supports suffixes @KB@, @KiB@, @MB@, @MiB@, @GB@, @GiB@, @TB@, @TiB@, @PB@, @PiB@, @EB@, @EiB@ (where @KB@ is 10[^3^], @KiB@ is 2[^10^], @MB@ is 10[^6^], @MiB@ is 2[^20^] and so forth).
<notextile>
-<pre>
-Clusters:
- zzzzz:
- Containers:
+<pre> Containers:
<code class="userinput">ReserveExtraRAM: <b>256MiB</b></code>
</pre>
</notextile>
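The decimal (@KB@, @MB@) versus binary (@KiB@, @MiB@) distinction matters at larger sizes. A quick sketch (not Arvados code) of the two conventions:

```python
# Illustrative sketch (not Arvados code): decimal vs binary size suffixes.
DECIMAL = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
BINARY = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def to_bytes(value, suffix):
    """Convert a value with a size suffix to a byte count."""
    return value * {**DECIMAL, **BINARY}[suffix]

print(to_bytes(256, "MiB"))  # 268435456
print(to_bytes(256, "MB"))   # 256000000
```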
If SLURM is unable to run a container, the dispatcher will submit it again after the next PollInterval. If PollInterval is very short, this can be excessive. If MinRetryPeriod is set, the dispatcher will avoid submitting the same container to SLURM more than once in the given time span.
<notextile>
-<pre>
-Clusters:
- zzzzz:
- Containers:
+<pre> Containers:
<code class="userinput">MinRetryPeriod: <b>30s</b></code>
</pre>
</notextile>
Some Arvados installations run a local keepstore on each compute node to handle all Keep traffic. To override Keep service discovery and access the local keep server instead of the global servers, set ARVADOS_KEEP_SERVICES in SbatchEnvironmentVariables:
<notextile>
-<pre>
-Clusters:
- zzzzz:
- Containers:
+<pre> Containers:
SLURM:
<span class="userinput">SbatchEnvironmentVariables:
ARVADOS_KEEP_SERVICES: "http://127.0.0.1:25107"</span>
The smallest usable value is @1@. The default value of @10@ is used if this option is zero or negative. Example:
<notextile>
-<pre>
-Clusters:
- zzzzz:
- Containers:
+<pre> Containers:
SLURM:
<code class="userinput">PrioritySpread: <b>1000</b></code></pre>
</notextile>
When crunch-dispatch-slurm invokes @sbatch@, you can add arguments to the command by specifying @SbatchArguments@. You can use this to send the jobs to specific cluster partitions or add resource requests. Set @SbatchArguments@ to an array of strings. For example:
<notextile>
-<pre>
-Clusters:
- zzzzz:
- Containers:
+<pre> Containers:
SLURM:
<code class="userinput">SbatchArgumentsList:
- <b>"--partition=PartitionName"</b></code>
If your SLURM cluster uses the @task/cgroup@ TaskPlugin, you can configure Crunch's Docker containers to be dispatched inside SLURM's cgroups. This provides consistent enforcement of resource constraints. To do this, use a crunch-dispatch-slurm configuration like the following:
<notextile>
-<pre>
-Clusters:
- zzzzz:
- Containers:
+<pre> Containers:
<code class="userinput">CrunchRunArgumentsList:
- <b>"-cgroup-parent-subsystem=memory"</b></code>
</pre>
Older Linux kernels (prior to 3.18) have bugs in network namespace handling which can lead to compute node lockups. This is indicated by blocked kernel tasks in "Workqueue: netns cleanup_net". If you are experiencing this problem, as a workaround you can disable use of network namespaces by Docker across the cluster. Be aware this reduces container isolation, which may be a security risk.
<notextile>
-<pre>
-Clusters:
- zzzzz:
- Containers:
+<pre> Containers:
<code class="userinput">CrunchRunArgumentsList:
- <b>"-container-enable-networking=always"</b>
- <b>"-container-network-mode=host"</b></code>
</pre>
</notextile>
-h2. Restart the dispatcher
+{% assign arvados_component = 'crunch-dispatch-slurm' %}
-{% include 'notebox_begin' %}
-
-The crunch-dispatch-slurm package includes configuration files for systemd. If you're using a different init system, you'll need to configure a service to start and stop a @crunch-dispatch-slurm@ process as desired. The process should run from a directory where the @crunch@ user has write permission on all compute nodes, such as its home directory or @/tmp@. You do not need to specify any additional switches or environment variables.
-
-{% include 'notebox_end' %}
+{% include 'install_packages' %}
-Restart the dispatcher to run with your new configuration:
+{% include 'start_service' %}
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart crunch-dispatch-slurm</span>
-</code></pre>
-</notextile>
+{% include 'restart_api' %}
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-
-Containers can be dispatched to a SLURM cluster. The dispatcher sends work to the cluster using SLURM's @sbatch@ command, so it works in a variety of SLURM configurations.
-
-In order to run containers, you must run the dispatcher as a user that has permission to set up FUSE mounts and run Docker containers on each compute node. This install guide refers to this user as the @crunch@ user. We recommend you create this user on each compute node with the same UID and GID, and add it to the @fuse@ and @docker@ system groups to grant it the necessary permissions. However, you can run the dispatcher under any account with sufficient permissions across the cluster.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
+Containers can be dispatched to a SLURM cluster. The dispatcher sends work to the cluster using SLURM's @sbatch@ command, so it works in a variety of SLURM configurations.
+
+In order to run containers, you must run the dispatcher as a user that has permission to set up FUSE mounts and run Docker containers on each compute node. This install guide refers to this user as the @crunch@ user. We recommend you create this user on each compute node with the same UID and GID, and add it to the @fuse@ and @docker@ system groups to grant it the necessary permissions. However, you can run the dispatcher under any account with sufficient permissions across the cluster.
+
+
On the API server, install SLURM and munge, and generate a munge key.
On Debian-based systems:
Now we need to give SLURM a configuration file. On Debian-based systems, this is installed at @/etc/slurm-llnl/slurm.conf@. On Red Hat-based systems, this is installed at @/etc/slurm/slurm.conf@. Here's an example @slurm.conf@:
<notextile>
-<pre>
-ControlMachine=uuid_prefix.your.domain
+<pre><code>
+ControlMachine=<span class="userinput">ClusterID.example.com</span>
SlurmctldPort=6817
SlurmdPort=6818
AuthType=auth/munge
NodeName=compute[0-255]
PartitionName=compute Nodes=compute[0-255] Default=YES Shared=YES
-</pre>
+</code></pre>
</notextile>
h3. SLURM configuration essentials
Each hostname in @slurm.conf@ must also resolve correctly on all SLURM worker nodes as well as the controller itself. Furthermore, the hostnames used in the configuration file must match the hostnames reported by @hostname@ or @hostname -s@ on the nodes themselves. This applies to the ControlMachine as well as the worker nodes.
For example:
-* In @slurm.conf@ on control and worker nodes: @ControlMachine=uuid_prefix.your.domain@
+* In @slurm.conf@ on control and worker nodes: @ControlMachine=ClusterID.example.com@
* In @slurm.conf@ on control and worker nodes: @NodeName=compute[0-255]@
-* In @/etc/resolv.conf@ on control and worker nodes: @search uuid_prefix.your.domain@
-* On the control node: @hostname@ reports @uuid_prefix.your.domain@
-* On worker node 123: @hostname@ reports @compute123.uuid_prefix.your.domain@
+* In @/etc/resolv.conf@ on control and worker nodes: @search ClusterID.example.com@
+* On the control node: @hostname@ reports @ClusterID.example.com@
+* On worker node 123: @hostname@ reports @compute123.ClusterID.example.com@
h3. Automatic hostname assignment
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
+{% include 'notebox_begin_warning' %}
+crunch-dispatch-slurm is only relevant for on-premises clusters that will spool jobs to Slurm. Skip this section if you are installing a cloud cluster.
+{% include 'notebox_end' %}
+
h2. Test compute node setup
You should now be able to submit SLURM jobs that run in Docker containers. On the node where you're running the dispatcher, you can test this by running:
</code></pre>
</notextile>
-*On your shell server*, submit a simple container request:
+Submit a simple container request:
<notextile>
<pre><code>shell:~$ <span class="userinput">arv container_request create --container-request '{
</code></pre>
</notextile>
-This command should return a record with a @container_uuid@ field. Once crunch-dispatch-slurm polls the API server for new containers to run, you should see it dispatch that same container. It will log messages like:
+This command should return a record with a @container_uuid@ field. Once @crunch-dispatch-slurm@ polls the API server for new containers to run, you should see it dispatch that same container. It will log messages like:
<notextile>
<pre><code>2016/08/05 13:52:54 Monitoring container zzzzz-dz642-hdp2vpu9nq14tx0 started
</code></pre>
</notextile>
-If you do not see crunch-dispatch-slurm try to dispatch the container, double-check that it is running and that the API hostname and token in @/etc/arvados/crunch-dispatch-slurm/crunch-dispatch-slurm.yml@ are correct.
-
Before the container finishes, SLURM's @squeue@ command will show the new job in the list of queued and running jobs. For example, you might see:
<notextile>
</code></pre>
</notextile>
-If the container does not dispatch successfully, refer to the crunch-dispatch-slurm logs for information about why it failed.
+If the container does not dispatch successfully, refer to the @crunch-dispatch-slurm@ logs for information about why it failed.
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Setting up Google auth
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+In order to use Google for authentication, you must use the <a href="https://console.developers.google.com" target="_blank">Google Developers Console</a> to create a set of client credentials.
+
+# Go to the <a href="https://console.developers.google.com" target="_blank">Google Developers Console</a> and select or create a project; this will take you to the project page.
+# Click on *+ Enable APIs and Services*
+## Search for *People API* and click on *Enable API*.
+# Navigate back to the main "APIs & Services" page
+# On the sidebar, click on *OAuth consent screen*
+## On consent screen settings, enter your identifying details
+## Under *Authorized domains* add @example.com@
+## Click on *Save*.
+# On the sidebar, click on *Credentials*; then click on *Create credentials*→*OAuth Client ID*
+# Under *Application type* select *Web application*.
+# You must set the authorization origins. Replace @auth.example.com@ with the appropriate hostname that you will use to access the SSO service:
+## JavaScript origin should be @https://ClusterID.example.com/@ (using Arvados-controller based login) or @https://auth.example.com/@ (for the SSO server)
+## Redirect URI should be @https://ClusterID.example.com/login@ (using Arvados-controller based login) or @https://auth.example.com/users/auth/google_oauth2/callback@ (for the SSO server)
+# Copy the values of *Client ID* and *Client secret* from the Google Developers Console and add them to the appropriate configuration.
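For Arvados-controller based login, the client ID and secret typically land in the @Login@ section of @config.yml@. The fragment below is a sketch only: the exact key names may vary between Arvados versions, so check your configuration reference before copying it.

```yaml
# Sketch only: key names may differ between Arvados versions.
Clusters:
  ClusterID:
    Login:
      GoogleClientID: "0000000000000-xxxxxxxxxxxxxxx.apps.googleusercontent.com"
      GoogleClientSecret: "zzzzzzzzzzzzzzzzzzzzzzzz"
```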
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-Arvados components run on GNU/Linux systems, and supports multiple cloud operating stacks. Arvados supports Debian and derivatives such as Ubuntu, as well as Red Hat and derivatives such as CentOS. Although Arvados development is sponsored by Veritas Genetics which offers commercial support, "Arvados is Free Software":{{site.baseurl}}/copying/copying.html and we encourage self supported/community supported installations.
+{% include 'notebox_begin' %}
+This section is about installing an Arvados cluster. If you are just looking to install Arvados client tools and libraries, "go to the SDK section.":{{site.baseurl}}/sdk
+{% include 'notebox_end' %}
+
+Arvados components run on GNU/Linux systems and support AWS, GCP and Azure cloud platforms as well as on-premises installs. Arvados supports Debian and derivatives such as Ubuntu, as well as Red Hat and derivatives such as CentOS. "Arvados is Free Software":{{site.baseurl}}/user/copying/copying.html and self-hosted installations are not limited in any way. Commercial support and development are also available from "Curii Corporation.":mailto:info@curii.com
Arvados components can be installed and configured in a number of different ways.
<div class="offset1">
table(table table-bordered table-condensed).
|||\5=. Appropriate for|
-||_. Ease of setup|_. Multiuser/networked access|_. Workflow Development and Testing|_. Large Scale Production|_. Development of Arvados|_. Arvados System Testing|
+||_. Ease of setup|_. Multiuser/networked access|_. Workflow Development and Testing|_. Large Scale Production|_. Development of Arvados|_. Arvados Evaluation|
|"Arvados-in-a-box":arvbox.html (arvbox)|Easy|no|yes|no|yes|yes|
|"Arvados on Kubernetes":arvados-on-kubernetes.html|Easy ^1^|yes|yes ^2^|no ^2^|no|yes|
|"Manual installation":install-manual-prerequisites.html|Complicated|yes|yes|yes|no|no|
-|"Arvados Playground":https://playground.arvados.org hosted by Veritas Genetics|N/A ^3^|yes|yes|no|no|no|
-|"Cluster Operation Subscription":https://curoverse.com/products supported by Veritas Genetics|N/A ^3^|yes|yes|yes|yes|yes|
+|"Cluster Operation Subscription supported by Curii":mailto:info@curii.com|N/A ^3^|yes|yes|yes|yes|yes|
</div>
* ^1^ Assumes a Kubernetes cluster is available
* ^2^ Arvados on Kubernetes is under development and not yet ready for production use
-* ^3^ No installation necessary, Veritas Genetics run and managed
+* ^3^ No user installation necessary; run and managed by Curii
---
layout: default
navsection: installguide
-title: Install the API server
+title: Install API server and Controller
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-h2. Install prerequisites
+# "Introduction":#introduction
+# "Install dependencies":#dependencies
+# "Set up database":#database-setup
+# "Update config.yml":#update-config
+# "Update nginx configuration":#update-nginx
+# "Install arvados-api-server and arvados-controller":#install-packages
+# "Confirm working installation":#confirm-working
-The Arvados package repository includes an API server package that can help automate much of the deployment.
+h2(#introduction). Introduction
-h3(#install_ruby_and_bundler). Install Ruby and Bundler
+The Arvados core API server consists of four services: PostgreSQL, Arvados Rails API, Arvados Controller, and Nginx.
-{% include 'install_ruby_and_bundler' %}
+Here is a simplified diagram showing the relationship between the core services. Client requests arrive at the public-facing Nginx reverse proxy. The request is forwarded to Arvados controller. The controller is able to handle some requests itself; the rest are forwarded to the Arvados Rails API. The Rails API server implements the majority of the business logic, communicating with the PostgreSQL database to fetch data and make transactional updates. All services are stateless, except the PostgreSQL database. This guide assumes all of these services will be installed on the same node, but it is possible to install these services across multiple nodes.
-h2(#install_apiserver). Install API server and dependencies
+!(full-width){{site.baseurl}}/images/proxy-chain.svg!
-On a Debian-based system, install the following packages:
+h2(#dependencies). Install dependencies
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install bison build-essential libcurl4-openssl-dev git arvados-api-server</span>
-</code></pre>
-</notextile>
-
-On a Red Hat-based system, install the following packages:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install bison make automake gcc gcc-c++ libcurl-devel git arvados-api-server</span>
-</code></pre>
-</notextile>
+# "Install PostgreSQL":install-postgresql.html
+# "Install Ruby and Bundler":ruby.html
+# "Install nginx":nginx.html
+# "Install Phusion Passenger":https://www.phusionpassenger.com/library/walkthroughs/deploy/ruby/ownserver/nginx/oss/install_passenger_main.html
-{% include 'install_git' %}
+h2(#database-setup). Set up database
-h2(#configure_application). Configure the API server
+{% assign service_role = "arvados" %}
+{% assign service_database = "arvados_production" %}
+{% assign use_contrib = true %}
+{% include 'install_postgres_database' %}
-Edit @/etc/arvados/config.yml@ to set the keys below. Only the most important configuration options are listed here. The example configuration fragments given below should be merged into a single configuration structure. Correct indentation is important. The full set of configuration options are listed in "config.yml":{{site.baseurl}}/admin/config.html
+h2(#update-config). Update config.yml
-h3(#uuid_prefix). ClusterID
+Starting from an "empty config.yml file,":config.html#empty add the following configuration keys.
-The @ClusterID@ is used for all database identifiers to identify the record as originating from this site. It is the first key under @Clusters@ in @config.yml@. It must be exactly 5 lowercase ASCII letters and digits. All configuration items go under the cluster id key (replace @zzzzz@ with your cluster id in the examples below).
+h3. Tokens
<notextile>
-<pre><code>Clusters:
- <span class="userinput">zzzzz</span>:
- ...</code></pre>
-</notextile>
-
-h3(#configure). PostgreSQL.Connection
-
-Replace the @xxxxxxxx@ database password placeholder with the "password you generated during database setup":install-postgresql.html#api.
-
-<notextile>
-<pre><code>Clusters:
- zzzzz:
- PostgreSQL:
- Connection:
- host: <span class="userinput">localhost</span>
- user: <span class="userinput">arvados</span>
- password: <span class="userinput">xxxxxxxx</span>
- dbname: <span class="userinput">arvados_production</span>
- </code></pre>
-</notextile>
-
-h3. API.RailsSessionSecretToken
-
-The @API.RailsSessionSecretToken@ is used for for signing cookies. IMPORTANT: This is a site secret. It should be at least 50 characters. Generate a random value and set it in @config.yml@:
-
-<notextile>
-<pre><code>~$ <span class="userinput">ruby -e 'puts rand(2**400).to_s(36)'</span>
-yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
-</code></pre></notextile>
-
-Example @config.yml@:
-
-<notextile>
-<pre><code>Clusters:
- zzzzz:
+<pre><code> SystemRootToken: <span class="userinput">"$system_root_token"</span>
+ ManagementToken: <span class="userinput">"$management_token"</span>
API:
- RailsSessionSecretToken: <span class="userinput">yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy</span></code></pre>
-</notextile>
-
-h3(#blob_signing_key). Collections.BlobSigningKey
-
-The @Collections.BlobSigningKey@ is used to enforce access control to Keep blocks. This same key must be provided to the Keepstore daemons when "installing Keepstore servers.":install-keepstore.html IMPORTANT: This is a site secret. It should be at least 50 characters. Generate a random value and set it in @config.yml@:
-
-<notextile>
-<pre><code>~$ <span class="userinput">ruby -e 'puts rand(2**400).to_s(36)'</span>
-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-</code></pre></notextile>
-
-Example @config.yml@:
-
-<notextile>
-<pre><code>Clusters:
- zzzzz:
+ RailsSessionSecretToken: <span class="userinput">"$rails_secret_token"</span>
Collections:
- BlobSigningKey: <span class="userinput">xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx</span></code></pre>
+ BlobSigningKey: <span class="userinput">"blob_signing_key"</span>
+</code></pre>
</notextile>
-h3(#omniauth). Login.ProviderAppID, Login.ProviderAppSecret, Services.SSO.ExternalURL
+@SystemRootToken@ is used by Arvados system services to authenticate as the system (root) user when communicating with the API server.
-The following settings enable the API server to communicate with the "Single Sign On (SSO) server":install-sso.html to authenticate user log in.
+@ManagementToken@ is used to authenticate access to system metrics.
-Set @Services.SSO.ExternalURL@ to the base URL where your SSO server is installed. This should be a URL consisting of the scheme and host (and optionally, port), without a trailing slash.
+@API.RailsSessionSecretToken@ is used by the API server to sign session cookies.
-Set @Login.ProviderAppID@ and @Login.ProviderAppSecret@ to the corresponding values for @app_id@ and @app_secret@ used in the "Create arvados-server client for Single Sign On (SSO)":install-sso.html#client step.
+@Collections.BlobSigningKey@ is used to control access to Keep blocks.
-Example @config.yml@:
+You can generate a random token for each of these items at the command line like this:
<notextile>
-<pre><code>Clusters:
- zzzzz:
- Services:
- SSO:
- ExternalURL: <span class="userinput">https://sso.example.com</span>
- Login:
- ProviderAppID: <span class="userinput">arvados-server</span>
- ProviderAppSecret: <span class="userinput">wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww</span></code></pre>
-</notextile>
-
-h3. Services.Workbench1.ExternalURL
-
-Set @Services.Workbench1.ExternalURL@ to the URL of your workbench application after following "Install Workbench.":install-workbench-app.html
-
-Example @config.yml@:
-
-<notextile>
-<pre><code>Clusters:
- zzzzz:
- Services:
- Workbench1:
- ExternalURL: <span class="userinput">https://workbench.zzzzz.example.com</span></code></pre>
+<pre><code>~$ <span class="userinput">tr -dc 0-9a-zA-Z </dev/urandom | head -c50; echo</span>
+</code></pre>
</notextile>
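An equivalent way to generate such a token, sketched with Python's @secrets@ module (an alternative to the command above, not part of the documented procedure):

```python
# Generate a 50-character alphanumeric secret, equivalent in spirit to
# `tr -dc 0-9a-zA-Z </dev/urandom | head -c50`.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def random_token(length=50):
    """Return a cryptographically random alphanumeric token."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

token = random_token()
print(len(token))  # 50
```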
-h3. Services.Websocket.ExternalURL
-
-Set @Services.Websocket.ExternalURL@ to the @wss://@ URL of the API server websocket endpoint after following "Install the websocket server":install-ws.html .
-
-Example @config.yml@:
+h3. PostgreSQL.Connection
<notextile>
-<pre><code>Clusters:
- zzzzz:
- Services:
- Websocket:
- ExternalURL: <span class="userinput">wss://ws.zzzzz.example.com</span></code></pre>
+<pre><code> PostgreSQL:
+ Connection:
+ host: <span class="userinput">localhost</span>
+ user: <span class="userinput">arvados</span>
+ password: <span class="userinput">$postgres_password</span>
+ dbname: <span class="userinput">arvados_production</span>
+</code></pre>
</notextile>
-h3(#git_repositories_dir). Git.Repositories
-
-The @Git.Repositories@ setting specifies the directory where user git repositories will be stored.
+Replace the @$postgres_password@ placeholder with the password you generated during "database setup":#database-setup .
-The git server setup process is covered on "its own page":install-arv-git-httpd.html. For now, create an empty directory in the default location:
+h3. Services
<notextile>
-<pre><code>~$ <span class="userinput">sudo mkdir -p /var/lib/arvados/git/repositories</span>
-</code></pre></notextile>
-
-If you intend to store your git repositories in a different location, specify that location in @config.yml@. Example:
-
-<notextile>
-<pre><code>Clusters:
- zzzzz:
- Git:
- Repositories: <span class="userinput">/var/lib/arvados/git/repositories</span></code></pre>
+<pre><code> Services:
+ Controller:
+ ExternalURL: <span class="userinput">"https://ClusterID.example.com"</span>
+ InternalURLs:
+ <span class="userinput">"http://localhost:8003": {}</span>
+ RailsAPI:
+ # Does not have an ExternalURL
+ InternalURLs:
+ <span class="userinput">"http://localhost:8004": {}</span>
+</code></pre>
</notextile>
-h3(#enable_legacy_jobs_api). Containers.JobsAPI.Enable
+Replace @ClusterID.example.com@ with the hostname that you previously selected for the API server.
-Enable the legacy "Jobs API":install-crunch-dispatch.html . Note: new installations should use the "Containers API":crunch2-slurm/install-prerequisites.html
+The @Services@ section of the configuration helps Arvados components contact one another (service discovery). Each service has one or more @InternalURLs@ and an @ExternalURL@. The @InternalURLs@ describe where the service runs, and how the Nginx reverse proxy will connect to it. The @ExternalURL@ is how external clients contact the service.
-Disabling the jobs API means methods involving @jobs@, @job_tasks@, @pipeline_templates@ and @pipeline_instances@ are disabled. This functionality is superceded by the containers API which consists of @container_requests@, @containers@ and @workflows@. Arvados clients (such as @arvados-cwl-runner@) detect which APIs are available and adjust behavior accordingly. Note the configuration value must be a quoted string.
+h2(#update-nginx). Update nginx configuration
-* 'auto' -- (default) enable the Jobs API only if it has been used before (i.e., there are job records in the database), otherwise disable jobs API .
-* 'true' -- enable the Jobs API even if there are no existing job records.
-* 'false' -- disable the Jobs API even in the presence of existing job records.
+Use a text editor to create a new file @/etc/nginx/conf.d/arvados-api-and-controller.conf@ with the following configuration. Options that need attention are marked in <span class="userinput">red</span>.
<notextile>
-<pre><code>Clusters:
- zzzzz:
- Containers:
- JobsAPI:
- Enable: <span class="userinput">'auto'</span></code></pre>
-</notextile>
-
-h4(#git_internal_dir). Containers.JobsAPI.GitInternalDir
+<pre><code>proxy_http_version 1.1;
-Only required if the legacy "Jobs API" is enabled, otherwise you should skip this.
+# When Keep clients request a list of Keep services from the API
+# server, use the origin IP address to determine if the request came
+# from the internal subnet or it is an external client. This sets the
+# $external_client variable which in turn is used to set the
+# X-External-Client header.
+#
+# The API server uses this header to choose whether to respond to a
+# "available keep services" request with either a list of internal keep
+# servers (0) or with the keepproxy (1).
+#
+# <span class="userinput">Following the example here, update the 10.20.30.0/24 netmask</span>
+# <span class="userinput">to match your private subnet.</span>
+# <span class="userinput">Update 1.2.3.4 and add lines as necessary with the public IP</span>
+# <span class="userinput">address of all servers that can also access the private network to</span>
+# <span class="userinput">ensure they are not considered 'external'.</span>
-The @Containers.JobsAPI.GitInternalDir@ setting specifies the location of Arvados' internal git repository. By default this is @/var/lib/arvados/internal.git@. This repository stores git commits that have been used to run Crunch jobs. It should _not_ be a subdirectory of the directory in @Git.Repositories@.
+geo $external_client {
+ default 1;
+ 127.0.0.0/24 0;
+ <span class="userinput">10.20.30.0/24</span> 0;
+ <span class="userinput">1.2.3.4/32</span> 0;
+}
-Example @config.yml@:
+# This is the port where nginx expects to contact arvados-controller.
+upstream controller {
+ server localhost:8003 fail_timeout=10s;
+}
-<notextile>
-<pre><code>Clusters:
- zzzzz:
- Containers:
- JobsAPI:
- GitInternalDir: <span class="userinput">/var/lib/arvados/internal.git</span></code></pre>
-</notextile>
+server {
+ # This configures the public https port that clients will actually connect to,
+ # the request is reverse proxied to the upstream 'controller'
-h2(#set_up). Set up Nginx and Passenger
+ listen *:443 ssl;
+ server_name <span class="userinput">ClusterID.example.com</span>;
-The Nginx server will serve API requests using Passenger. It will also be used to proxy SSL requests to other services which are covered later in this guide.
+ ssl on;
+ ssl_certificate <span class="userinput">/YOUR/PATH/TO/cert.pem</span>;
+ ssl_certificate_key <span class="userinput">/YOUR/PATH/TO/cert.key</span>;
-First, "Install Nginx and Phusion Passenger":https://www.phusionpassenger.com/library/walkthroughs/deploy/ruby/ownserver/nginx/oss/install_passenger_main.html.
+ # Refer to the comment about this setting in the passenger (arvados
+ # api server) section of your Nginx configuration.
+ client_max_body_size 128m;
-Edit the http section of your Nginx configuration to run the Passenger server. Add a block like the following, adding SSL and logging parameters to taste:
+ location / {
+ proxy_pass http://controller;
+ proxy_redirect off;
+ proxy_connect_timeout 90s;
+ proxy_read_timeout 300s;
+
+ proxy_set_header X-Forwarded-Proto https;
+ proxy_set_header Host $http_host;
+ proxy_set_header X-External-Client $external_client;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ }
+}
-<notextile>
-<pre><code>
server {
- listen 127.0.0.1:8000;
+ # This configures the Arvados API server. It is written using Ruby
+ # on Rails and uses the Passenger application server.
+
+ listen <span class="userinput">localhost:8004</span>;
server_name localhost-api;
root /var/www/arvados-api/current/public;
index index.html index.htm index.php;
passenger_enabled on;
- # If you're using RVM, uncomment the line below.
+
+ # <span class="userinput">If you are using RVM, uncomment the line below.</span>
+ # <span class="userinput">If you're using system ruby, leave it commented out.</span>
#passenger_ruby /usr/local/rvm/wrappers/default/ruby;
# This value effectively limits the size of API objects users can
# create, especially collections. If you change this, you should
# also ensure the following settings match it:
- # * `client_max_body_size` in the server section below
- # * `client_max_body_size` in the Workbench Nginx configuration (twice)
+ # * `client_max_body_size` in the previous server section
# * `API.MaxRequestSize` in config.yml
client_max_body_size 128m;
}
+</code></pre>
+</notextile>
-upstream api {
- server 127.0.0.1:8000 fail_timeout=10s;
-}
+{% assign arvados_component = 'arvados-api-server arvados-controller' %}
-proxy_http_version 1.1;
+{% include 'install_packages' %}
-# When Keep clients request a list of Keep services from the API server, the
-# server will automatically return the list of available proxies if
-# the request headers include X-External-Client: 1. Following the example
-# here, at the end of this section, add a line for each netmask that has
-# direct access to Keep storage daemons to set this header value to 0.
-geo $external_client {
- default 1;
- <span class="userinput">10.20.30.0/24</span> 0;
-}
-</code></pre>
-</notextile>
+{% assign arvados_component = 'arvados-controller' %}
-Restart Nginx to apply the new configuration.
+{% include 'start_service' %}
-<notextile>
-<pre><code>~$ <span class="userinput">sudo nginx -s reload</span>
-</code></pre>
-</notextile>
+h2(#confirm-working). Confirm working installation
-h2. Prepare the API server deployment
+Confirm working controller:
-{% assign railspkg = "arvados-api-server" %}
-{% include 'install_rails_reconfigure' %}
+<notextile><pre><code>$ curl https://<span class="userinput">ClusterID.example.com</span>/arvados/v1/config
+</code></pre></notextile>
-{% include 'notebox_begin' %}
-You can safely ignore the following messages if they appear while this command runs:
+Confirm working Rails API server:
+
+<notextile><pre><code>$ curl https://<span class="userinput">ClusterID.example.com</span>/discovery/v1/apis/arvados/v1/rest
+</code></pre></notextile>
+
+Confirm that you can use the system root token to act as the system root user:
+
+<notextile><pre><code>
+$ curl -H "Authorization: Bearer $system_root_token" https://<span class="userinput">ClusterID.example.com</span>/arvados/v1/users/current
+</code></pre></notextile>
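The response is a JSON user record. As a quick sanity check, the UUID in the response should end in @-tpzed-000000000000000@, the fixed suffix of the system root user. The snippet below is only an illustration: the @response@ variable is a hand-written sample, not real API output, and real responses contain many more fields.

```shell
# Illustrative check of the JSON returned by /arvados/v1/users/current.
# "response" here is a hand-written sample, not real API output.
response='{"uuid": "zzzzz-tpzed-000000000000000", "is_admin": true}'

# Extract the uuid field with sed (a jq one-liner would also work).
uuid=$(printf '%s' "$response" | sed -n 's/.*"uuid": *"\([^"]*\)".*/\1/p')

case "$uuid" in
  *-tpzed-000000000000000) echo "system root user confirmed" ;;
  *) echo "unexpected user: $uuid" ;;
esac
```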
-<notextile><pre>Don't run Bundler as root. Bundler can ask for sudo if it is needed, and installing your bundle as root will
-break this application for all non-root users on this machine.</pre></notextile>
+h3. Troubleshooting
-<notextile><pre>fatal: Not a git repository (or any of the parent directories): .git</pre></notextile>
-{% include 'notebox_end' %}
+If you are getting TLS errors, make sure the @ssl_certificate@ directive in your nginx configuration points to a file containing the "full certificate chain":http://nginx.org/en/docs/http/configuring_https_servers.html#chains .
-h2. Troubleshooting
+Logs for the Rails API server can be found in @/var/www/arvados-api/current/log/production.log@; logs for the controller can be viewed with @journalctl -u arvados-controller@.
-Once you have the API Server up and running you may need to check it back if dealing with client related issues. Please read our "admin troubleshooting notes":{{site.baseurl}}/admin/troubleshooting.html on how requests can be tracked down between services.
\ No newline at end of file
+See also the admin page on "Logging":{{site.baseurl}}/admin/logging.html .
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-Arvados allows users to create their own private and public git repositories, and clone/push them using SSH and HTTPS.
+# "Introduction":#introduction
+# "Install dependencies":#dependencies
+# "Create 'git' user and storage directory":#create
+# "Install gitolite":#gitolite
+# "Configure gitolite":#config-gitolite
+# "Configure git synchronization":#sync
+# "Update config.yml":#update-config
+# "Update nginx configuration":#update-nginx
+# "Install arvados-git-httpd package":#install-packages
+# "Restart the API server and controller":#restart-api
+# "Confirm working installation":#confirm-working
+
+h2(#introduction). Introduction
+
+Arvados support for git repository management enables using Arvados permissions to control access to git repositories. Users can create their own private and public git repositories and share them with others.
The git hosting setup involves three components.
* The "arvados-git-sync.rb" script polls the API server for the current list of repositories, creates bare repositories, and updates the local permission cache used by gitolite.
-* Gitolite provides SSH access.
-* arvados-git-http provides HTTPS access.
+* Gitolite provides SSH access. Users authenticate with SSH keys.
+* arvados-git-httpd provides HTTPS access. Users authenticate with Arvados tokens.
-It is not strictly necessary to deploy _both_ SSH and HTTPS access, but we recommend deploying both:
-* SSH is a more appropriate way to authenticate from a user's workstation because it does not require managing tokens on the client side;
-* HTTPS is a more appropriate way to authenticate from a shell VM because it does not depend on SSH agent forwarding (SSH clients' agent forwarding features tend to behave as if the remote machine is fully trusted).
-* HTTPS is also used by Arvados Composer to access git repositories from the browser.
+Git services must be installed on the same host as the Arvados Rails API server.
-The HTTPS instructions given below will not work if you skip the SSH setup steps.
+h2(#dependencies). Install dependencies
-h2. Set up DNS
-
-By convention, we use the following hostname for the git service:
+h3. CentOS 7
<notextile>
-<pre><code>git.<span class="userinput">uuid_prefix</span>.your.domain
+<pre><code># <span class="userinput">yum install git perl-Data-Dumper openssh-server</span>
</code></pre>
</notextile>
-{% include 'notebox_begin' %}
-Here, we show how to install the git hosting services *on the same host as your API server.* Using a different host is not yet fully supported. On this page we will refer to it as your git server.
-{% include 'notebox_end' %}
-
-DNS and network configuration should be set up so port 443 reaches your HTTPS proxy, and port 22 reaches the OpenSSH service on your git server.
-
-h2. Generate an API token
-
-{% assign railshost = "gitserver" %}
-{% assign railscmd = "bundle exec ./script/create_superuser_token.rb" %}
-{% assign railsout = "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" %}
-Use the following command to generate an API token. {% include 'install_rails_command' %}
-
-Copy that token; you'll need it in a minute.
-
-h2. Install git and other dependencies
-
-On Debian-based systems:
+h3. Debian and Ubuntu
<notextile>
-<pre><code>gitserver:~$ <span class="userinput">sudo apt-get install git openssh-server</span>
+<pre><code># <span class="userinput">apt-get --no-install-recommends install git openssh-server</span>
</code></pre>
</notextile>
-On Red Hat-based systems:
-
-<notextile>
-<pre><code>gitserver:~$ <span class="userinput">sudo yum install git perl-Data-Dumper openssh-server</span>
-</code></pre>
-</notextile>
-
-{% include 'install_git' %}
-
-h2. Create a "git" user and a storage directory
+h2(#create). Create "git" user and storage directory
Gitolite and some additional scripts will be installed in @/var/lib/arvados/git@, which means hosted repository data will be stored in @/var/lib/arvados/git/repositories@. If you choose to install gitolite in a different location, make sure to update the @git_repositories_dir@ entry in your API server's @application.yml@ file accordingly: for example, if you install gitolite at @/data/gitolite@ then your @git_repositories_dir@ will be @/data/gitolite/repositories@.
</code></pre>
</notextile>
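For example, if gitolite were installed at the hypothetical location @/data/gitolite@, the corresponding @application.yml@ entry would look like this (a sketch of the setting described above, not a complete configuration file):

```yaml
# application.yml fragment (hypothetical non-default install location)
production:
  git_repositories_dir: /data/gitolite/repositories
```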
-h2. Install gitolite
+h2(#gitolite). Install gitolite
-Check "https://github.com/sitaramc/gitolite/tags":https://github.com/sitaramc/gitolite/tags for the latest stable version. This guide was tested with @v3.6.4@. _Versions below 3.0 are missing some features needed by Arvados, and should not be used._
+Check "https://github.com/sitaramc/gitolite/tags":https://github.com/sitaramc/gitolite/tags for the latest stable version. This guide was tested with @v3.6.11@. _Versions below 3.0 are missing some features needed by Arvados, and should not be used._
Download and install the version you selected.
<notextile>
-<pre><code>git@gitserver:~$ <span class="userinput">echo 'PATH=$HOME/bin:$PATH' >.profile</span>
-git@gitserver:~$ <span class="userinput">source .profile</span>
-git@gitserver:~$ <span class="userinput">git clone --branch <b>v3.6.4</b> https://github.com/sitaramc/gitolite</span>
+<pre><code>$ <span class="userinput">sudo -u git -i bash</span>
+git@gitserver:~$ <span class="userinput">echo 'PATH=$HOME/bin:$PATH' >.profile</span>
+git@gitserver:~$ <span class="userinput">. .profile</span>
+git@gitserver:~$ <span class="userinput">git clone --branch <b>v3.6.11</b> https://github.com/sitaramc/gitolite</span>
...
Note: checking out '5d24ae666bfd2fa9093d67c840eb8d686992083f'.
...
</code></pre>
</notextile>
-h3. Configure gitolite
+h2(#config-gitolite). Configure gitolite
Configure gitolite to look up a repository name like @username/reponame.git@ and find the appropriate bare repository storage directory.
</span></code></pre>
</notextile>
-h2. Configure git synchronization
+h2(#sync). Configure git synchronization
Create a configuration file @/var/www/arvados-api/current/config/arvados-clients.yml@ using the following template, filling in the appropriate values for your system.
-* For @arvados_api_token@, use the token you generated above.
+* For @arvados_api_token@, use the value of @SystemRootToken@ from @config.yml@.
* For @gitolite_arvados_git_user_key@, provide the public key you generated above, i.e., the contents of @~git/.ssh/id_rsa.pub@.
<notextile>
<pre><code>production:
gitolite_url: /var/lib/arvados/git/repositories/gitolite-admin.git
gitolite_tmp: /var/lib/arvados/git
- arvados_api_host: <span class="userinput">uuid_prefix.example.com</span>
+ arvados_api_host: <span class="userinput">ClusterID.example.com</span>
arvados_api_token: "<span class="userinput">zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz</span>"
arvados_api_host_insecure: <span class="userinput">false</span>
gitolite_arvados_git_user_key: "<span class="userinput">ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7aBIDAAgMQN16Pg6eHmvc+D+6TljwCGr4YGUBphSdVb25UyBCeAEgzqRiqy0IjQR2BLtSirXr+1SJAcQfBgI/jwR7FG+YIzJ4ND9JFEfcpq20FvWnMMQ6XD3y3xrZ1/h/RdBNwy4QCqjiXuxDpDB7VNP9/oeAzoATPZGhqjPfNS+RRVEQpC6BzZdsR+S838E53URguBOf9yrPwdHvosZn7VC0akeWQerHqaBIpSfDMtaM4+9s1Gdsz0iP85rtj/6U/K/XOuv2CZsuVZZ52nu3soHnEX2nx2IaXMS3L8Z+lfOXB2T6EaJgXF7Z9ME5K1tx9TSNTRcYCiKztXLNLSbp git@gitserver</span>"
</code></pre>
</notextile>
-h3. Enable the synchronization script
+This file contains a secret token, so make sure it is readable only by the @git@ user:
+
+<notextile>
+<pre><code>$ <span class="userinput">sudo chown git:git /var/www/arvados-api/current/config/arvados-clients.yml</span>
+$ <span class="userinput">sudo chmod og-rwx /var/www/arvados-api/current/config/arvados-clients.yml</span>
+</code></pre>
+</notextile>
-The API server package includes a script that retrieves the current set of repository names and permissions from the API, writes them to @arvadosaliases.pl@ in a format usable by gitolite, and triggers gitolite hooks which create new empty repositories if needed. This script should run every 2 to 5 minutes.
+h3. Test configuration
-If you are using RVM, create @/etc/cron.d/arvados-git-sync@ with the following content:
+notextile. <pre><code>$ <span class="userinput">sudo -u git -i bash -c 'cd /var/www/arvados-api/current && bundle exec script/arvados-git-sync.rb production'</span></code></pre>
-<notextile>
-<pre><code><span class="userinput">*/5 * * * * git cd /var/www/arvados-api/current && /usr/local/rvm/bin/rvm-exec default bundle exec script/arvados-git-sync.rb production</span>
-</code></pre>
-</notextile>
+h3. Enable the synchronization script
-Otherwise, create @/etc/cron.d/arvados-git-sync@ with the following content:
+The API server package includes a script that retrieves the current set of repository names and permissions from the API, writes them to @arvadosaliases.pl@ in a format usable by gitolite, and triggers gitolite hooks which create new empty repositories if needed. This script should run every 2 to 5 minutes.
+
+Create @/etc/cron.d/arvados-git-sync@ with the following content:
<notextile>
<pre><code><span class="userinput">*/5 * * * * git cd /var/www/arvados-api/current && bundle exec script/arvados-git-sync.rb production</span>
</code></pre>
</notextile>
-h3. Configure the API server to advertise the correct SSH URLs
+h2(#update-config). Update config.yml
-Edit the cluster config at @/etc/arvados/config.yml@ and set @Services.GitSSH.ExternalURL@. Replace @uuid_prefix@ with your cluster id.
+Edit the cluster config at @config.yml@ and add the following configuration:
<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Services:
+<pre><code> Services:
GitSSH:
- ExternalURL: <span class="userinput">git@git.uuid_prefix.your.domain:</span>
-</code></pre>
-</notextile>
-
-Make sure to include the trailing colon.
-
-h2. Install the arvados-git-httpd package
-
-This is needed only for HTTPS access.
-
-The arvados-git-httpd package provides HTTP access, using Arvados authentication tokens instead of passwords. It is intended to be installed on the system where your git repositories are stored, and accessed through a web proxy that provides SSL support.
-
-On Debian-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install git arvados-git-httpd</span>
-</code></pre>
-</notextile>
-
-On Red Hat-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install git arvados-git-httpd</span>
-~$ <span class="userinput">sudo systemctl enable arvados-git-httpd</span>
-</code></pre>
-</notextile>
-
-Verify that @arvados-git-httpd@ and @git-http-backend@ can be run:
-
-<notextile>
-<pre><code>~$ <span class="userinput">arvados-git-httpd -h</span>
-[...]
-Usage: arvados-git-httpd [-config path/to/arvados/git-httpd.yml]
-[...]
-~$ <span class="userinput">git http-backend</span>
-Status: 500 Internal Server Error
-Expires: Fri, 01 Jan 1980 00:00:00 GMT
-Pragma: no-cache
-Cache-Control: no-cache, max-age=0, must-revalidate
-
-fatal: No REQUEST_METHOD from server
-</code></pre>
-</notextile>
-
-h3. Enable arvados-git-httpd
-
-{% include 'notebox_begin' %}
-
-The arvados-git-httpd package includes configuration files for systemd. If you're using a different init system, you'll need to configure a service to start and stop an @arvados-git-httpd@ process as desired.
-
-{% include 'notebox_end' %}
-
-Edit the cluster config at @/etc/arvados/config.yml@ and set the following values. Replace @uuid_prefix@ with your cluster id.
-
-<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Services:
+ ExternalURL: "<span class="userinput">ssh://git@git.ClusterID.example.com</span>"
GitHTTP:
- ExternalURL: <span class="userinput">https://git.uuid_prefix.your.domain/</span>
+ ExternalURL: <span class="userinput">https://git.ClusterID.example.com/</span>
InternalURLs:
- <span class="userinput">"http://localhost:9001": {}</span>
+ "http://localhost:9001": {}
Git:
GitCommand: <span class="userinput">/var/lib/arvados/git/gitolite/src/gitolite-shell</span>
GitoliteHome: <span class="userinput">/var/lib/arvados/git</span>
</code></pre>
</notextile>
-Make sure to include the trailing slash for @Services.GitHTTP.ExternalURL@.
-
-Restart the systemd service to ensure the new configuration is used.
-
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart arvados-git-httpd</span>
-</code></pre>
-</notextile>
-
-h3. Set up a reverse proxy to provide SSL service
-
-The arvados-git-httpd service will be accessible from anywhere on the internet, so we recommend using SSL.
-
-This is best achieved by putting a reverse proxy with SSL support in front of arvados-git-httpd, running on port 443 and passing requests to @arvados-git-httpd@ on port 9001 (or whichever port you used in your run script).
+h2(#update-nginx). Update nginx configuration
-Add the following configuration to the @http@ section of your Nginx configuration:
+Use a text editor to create a new file @/etc/nginx/conf.d/arvados-git.conf@ with the following configuration. Options that need attention are marked in <span class="userinput">red</span>.
<notextile>
-<pre><code>
-upstream arvados-git-httpd {
+<pre><code>upstream arvados-git-httpd {
server 127.0.0.1:<span class="userinput">9001</span>;
}
server {
- listen <span class="userinput">[your public IP address]</span>:443 ssl;
- server_name git.<span class="userinput">uuid_prefix.your.domain</span>;
+ listen *:443 ssl;
+ server_name git.<span class="userinput">ClusterID.example.com</span>;
proxy_connect_timeout 90s;
proxy_read_timeout 300s;
ssl_certificate_key <span class="userinput">/YOUR/PATH/TO/cert.key</span>;
# The server needs to accept potentially large refpacks from push clients.
- client_max_body_size 50m;
+ client_max_body_size 128m;
location / {
proxy_pass http://arvados-git-httpd;
</code></pre>
</notextile>
-h2. Restart Nginx
+h2(#install-packages). Install the arvados-git-httpd package
+
+The arvados-git-httpd package provides HTTP access, using Arvados authentication tokens instead of passwords. It must be installed on the system where your git repositories are stored.
+
+h3. CentOS 7
+
+<notextile>
+<pre><code># <span class="userinput">yum install arvados-git-httpd</span>
+</code></pre>
+</notextile>
-Restart Nginx to make the Nginx and API server configuration changes take effect.
+h3. Debian and Ubuntu
<notextile>
-<pre><code>gitserver:~$ <span class="userinput">sudo nginx -s reload</span>
+<pre><code># <span class="userinput">apt-get --no-install-recommends install arvados-git-httpd</span>
</code></pre>
</notextile>
-h2. Clone Arvados repository
+h2(#restart-api). Restart the API server and controller
-Here we create a repository object which will be used to set up a hosted clone of the arvados repository on this cluster.
+After updating the @Services@ section, make sure the cluster config file is up to date on the API server host, and restart the API server and controller processes to ensure the changes are applied.
<notextile>
-<pre><code>~$ <span class="userinput">uuid_prefix=`arv --format=uuid user current | cut -d- -f1`</span>
-~$ <span class="userinput">echo "Site prefix is '$uuid_prefix'"</span>
-~$ <span class="userinput">all_users_group_uuid="$uuid_prefix-j7d0g-fffffffffffffff"</span>
-~$ <span class="userinput">repo_uuid=`arv --format=uuid repository create --repository "{\"owner_uuid\":\"$uuid_prefix-tpzed-000000000000000\", \"name\":\"arvados\"}"`</span>
-~$ <span class="userinput">echo "Arvados repository uuid is '$repo_uuid'"</span>
-</code></pre></notextile>
+<pre><code># <span class="userinput">systemctl restart nginx arvados-controller</span>
+</code></pre>
+</notextile>
-Create a link object to make the repository object readable by the "All users" group, and therefore by every active user. This makes it possible for users to run the bundled Crunch scripts by specifying @"script_version":"master","repository":"arvados"@ rather than pulling the Arvados source tree into their own repositories.
+h2(#confirm-working). Confirm working installation
+
+Create a repository record named 'myusername/testrepo' in the Arvados database, replacing @myusername@ with your Arvados username.
<notextile>
-<pre><code>~$ <span class="userinput">read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"</span>
-<span class="userinput">{
- "tail_uuid":"$all_users_group_uuid",
- "head_uuid":"$repo_uuid",
- "link_class":"permission",
- "name":"can_read"
-}
-EOF</span>
+<pre><code>~$ <span class="userinput">arv --format=uuid repository create --repository '{"name":"myusername/testrepo"}'</span>
</code></pre></notextile>
-In a couple of minutes, your arvados-git-sync cron job will create an empty repository on your git server. Seed it with the real arvados repository. If your git credential helpers were configured correctly when you "set up your shell server":install-shell-server.html, the "git push" command will use your API token instead of prompting you for a username and password.
+The arvados-git-sync cron job will notice the new repository record and create a repository on disk. Because it runs on a timer (every 5 minutes by default), you may have to wait a few minutes for the repository to appear.
+
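If you prefer to script the wait rather than retry manually, a small polling loop works. This is only a sketch: @wait_for_repo@ is a hypothetical helper, not part of Arvados, and the path assumes the default @/var/lib/arvados/git@ layout used in this guide.

```shell
# Hypothetical helper: poll a gitolite storage directory until a bare
# repository (*.git) appears, checking every 5 seconds up to "tries" times.
wait_for_repo() {
  dir="$1"; tries="${2:-60}"
  while [ "$tries" -gt 0 ]; do
    # succeed as soon as any *.git directory shows up
    ls "$dir" 2>/dev/null | grep -q '\.git$' && return 0
    tries=$((tries - 1))
    sleep 5
  done
  return 1
}

# usage: wait_for_repo /var/lib/arvados/git/repositories 60 && echo "repository created"
```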
+h3. SSH
+
+Before cloning over SSH, go to Workbench, choose *SSH Keys* from the menu, and upload your public key. Arvados uses the public key to identify you when you access the git repo.
<notextile>
-<pre><code>~$ <span class="userinput">cd /tmp</span>
-/tmp$ <span class="userinput">git clone --bare https://github.com/curoverse/arvados.git</span>
-/tmp <span class="userinput">git --git-dir arvados.git push https://git.<b>uuid_prefix.your.domain</b>/arvados.git '*:*'</span>
+<pre><code>~$ <span class="userinput">git clone git@git.ClusterID.example.com:username/testrepo.git</span>
</code></pre>
</notextile>
-If you did not set up a HTTPS service, you can push to <code>git@git.uuid_prefix.your.domain:arvados.git</code> using your SSH key, or by logging in to your git server and using sudo.
+h3. HTTP
+
+Set up the git credential helpers as described in "install shell server":install-shell-server.html#config-git so that the git command uses your API token instead of prompting you for a username and password.
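As a sketch of what that configuration looks like (the shell server page is authoritative; the hostname and helper shown here are illustrative), the credential helper emits your API token from the @ARVADOS_API_TOKEN@ environment variable:

```shell
# Illustrative git credential-helper setup. Assumes ARVADOS_API_TOKEN is
# set in your environment; replace ClusterID.example.com with your cluster.
git config --global 'credential.https://git.ClusterID.example.com/.username' none
git config --global 'credential.https://git.ClusterID.example.com/.helper' \
  '!cred(){ cat >/dev/null; if [ "$1" = get ]; then echo password=$ARVADOS_API_TOKEN; fi; }; cred'
```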
<notextile>
-<pre><code>gitserver:~$ <span class="userinput">sudo -u git -i bash</span>
-git@gitserver:~$ <span class="userinput">git clone --bare https://github.com/curoverse/arvados.git /tmp/arvados.git</span>
-git@gitserver:~$ <span class="userinput">cd /tmp/arvados.git</span>
-git@gitserver:/tmp/arvados.git$ <span class="userinput">gitolite push /var/lib/arvados/git/repositories/<b>your_arvados_repo_uuid</b>.git '*:*'</span>
+<pre><code>~$ <span class="userinput">git clone https://git.ClusterID.example.com/username/testrepo.git</span>
</code></pre>
</notextile>
Arvados Composer is a web-based JavaScript application for building Common Workflow Language (CWL) workflows.
-h2. Prerequisites
+# "Install dependencies":#dependencies
+# "Update config.yml":#update-config
+# "Update Nginx configuration":#update-nginx
+# "Install arvados-composer":#install-packages
+# "Restart the API server and controller":#restart-api
+# "Confirm working installation":#confirm-working
-In addition to Arvados core services, Composer requires "Arvados hosted git repositories":install-arv-git-httpd.html which are used for storing workflow files.
+h2(#dependencies). Install dependencies
-h2. Install
+In addition to Arvados core services, Composer requires "Arvados hosted git repositories":install-arv-git-httpd.html which are used for storing workflow files.
-Composer may be installed on the same host as Workbench, or on a different host. Composer communicates directly with the Arvados API server. It does not require its own backend and should be served as a static file.
+h2(#update-config). Update config.yml
-On a Debian-based system, install the following package:
+Edit @config.yml@ and set @Services.Composer.ExternalURL@ to the location from which it is served:
<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install arvados-composer</span>
-</code></pre>
+<pre><code> Services:
+ Composer:
+      ExternalURL: <span class="userinput">https://workbench.ClusterID.example.com/composer</span></code></pre>
</notextile>
-On a Red Hat-based system, install the following package:
+h2(#update-nginx). Update nginx configuration
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install arvados-composer</span>
-</code></pre>
-</notextile>
+Composer may be served from the same host as Workbench. Composer communicates directly with the Arvados API server. It does not require its own backend and should be served as a static file.
-h2. Configure
+Add the following @location@ sections to @/etc/nginx/conf.d/arvados-workbench.conf@ .
-h3. Nginx
+<notextile>
+<pre><code>server {
+ [...]
-Add Composer to your Nginx configuration. This example will host Composer at @/composer@.
+ location /composer {
+ root /var/www/arvados-composer;
+ index index.html;
+ }
-<pre>
-location /composer {
- root /var/www/arvados-composer
- index index.html
+ location /composer/composer.yml {
+ return 200 '{ "API_HOST": "<span class="userinput">ClusterID.example.com</span>" }';
+ }
}
-</pre>
-
-h3. composer.yml
+</code></pre>
+</notextile>
-Create @/var/www/arvados-composer/composer.yml@ and set @API_HOST@ to your API server:
+{% assign arvados_component = 'arvados-composer' %}
-<pre>
-API_HOST: zzzzz.arvadosapi.com
-</pre>
+{% include 'install_packages' %}
-h3. Workbench link to composer
+{% include 'restart_api' %}
-Edit @config.yml@ and set @Services.Composer.ExternalURL@ to the location from which it is served:
+h2(#confirm-working). Confirm working installation
-<notextile>
-<pre><code>Clusters:
- zzzzz:
- Services:
- Composer:
- ExternalURL: <span class="userinput">https://workbench.zzzzz.arvadosapi.com/composer</span></code></pre>
-</notextile>
+Visit @https://workbench.ClusterID.example.com/composer@ in a browser. You should be able to log in using the login method you configured previously.
+++ /dev/null
----
-layout: default
-navsection: installguide
-title: Install the controller
-...
-{% comment %}
-Copyright (C) The Arvados Authors. All rights reserved.
-
-SPDX-License-Identifier: CC-BY-SA-3.0
-{% endcomment %}
-
-The arvados-controller service must be installed on your API server node.
-
-On Debian-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install arvados-controller</span>
-</code></pre>
-</notextile>
-
-On Red Hat-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install arvados-controller</span>
-</code></pre>
-</notextile>
-
-Verify the @arvados-controller@ program is functional:
-
-<notextile>
-<pre><code>~$ <span class="userinput">arvados-controller -h</span>
-Usage:
- -config file
-[...]
-</code></pre>
-</notextile>
-
-h3. Configure Nginx to route requests to the controller
-
-Add @upstream@ and @server@ definitions inside the @http@ section of your Nginx configuration using the following template.
-
-{% include 'notebox_begin' %}
-
-If you are adding arvados-controller to an existing system as part of the upgrade procedure, do not add a new "server" part here. Instead, add only the "upstream" part as shown here, and update your existing "server" section by changing its @proxy_pass@ directive from @http://api@ to @http://controller@.
-
-{% include 'notebox_end' %}
-
-<notextile>
-<pre><code>upstream controller {
- server 127.0.0.1:9004 fail_timeout=10s;
-}
-
-server {
- listen <span class="userinput">[your public IP address]</span>:443 ssl;
- server_name <span class="userinput">uuid_prefix.your.domain</span>;
-
- ssl on;
- ssl_certificate <span class="userinput">/YOUR/PATH/TO/cert.pem</span>;
- ssl_certificate_key <span class="userinput">/YOUR/PATH/TO/cert.key</span>;
-
- # Refer to the comment about this setting in the passenger (arvados
- # api server) section of your Nginx configuration.
- client_max_body_size 128m;
-
- location / {
- proxy_pass http://controller;
- proxy_redirect off;
- proxy_connect_timeout 90s;
- proxy_read_timeout 300s;
-
- proxy_set_header X-Forwarded-Proto https;
- proxy_set_header Host $http_host;
- proxy_set_header X-External-Client $external_client;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- }
-}
-</code></pre>
-</notextile>
-
-Restart Nginx to apply the new configuration.
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo nginx -s reload</span>
-</code></pre>
-</notextile>
-
-h3(#configuration). Configure arvados-controller
-
-Create the cluster configuration file @/etc/arvados/config.yml@ using the following template.
-
-<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Services:
- Controller:
- InternalURLs:
- "http://localhost:<span class="userinput">9004</span>": {} # must match the "upstream controller" section of your Nginx config
- RailsAPI:
- arvados-api-server:
- "http://localhost:<span class="userinput">8000</span>": {} # must match the "upstream api" section of your Nginx config
- PostgreSQL:
- ConnectionPool: 128
- Connection:
- host: localhost
- dbname: arvados_production
- user: arvados
- password: <span class="userinput">xxxxxxxx</span>
- sslmode: require
-</code></pre>
-</notextile>
-
-Create the host configuration file @/etc/arvados/environment@.
-
-<notextile>
-<pre><code>ARVADOS_NODE_PROFILE=apiserver
-</code></pre>
-</notextile>
-
-h3. Start the service (option 1: systemd)
-
-If your system does not use systemd, skip this section and follow the "runit instructions":#runit instead.
-
-If your system uses systemd, the arvados-controller service should already be set up. Restart it to load the new configuration file, and check its status:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart arvados-controller</span>
-~$ <span class="userinput">sudo systemctl status arvados-controller</span>
-● arvados-controller.service - Arvados controller
- Loaded: loaded (/lib/systemd/system/arvados-controller.service; enabled; vendor preset: enabled)
- Active: active (running) since Tue 2018-07-31 13:17:44 UTC; 3s ago
- Docs: https://doc.arvados.org/
- Main PID: 25066 (arvados-control)
- CGroup: /system.slice/arvados-controller.service
- └─25066 /usr/bin/arvados-controller
-
-Jul 31 13:17:44 zzzzz systemd[1]: Starting Arvados controller...
-Jul 31 13:17:44 zzzzz arvados-controller[25191]: {"Listen":"[::]:9004","Service":"arvados-controller","level":"info","msg":"listening","time":"2018-07-31T13:17:44.521694195Z"}
-Jul 31 13:17:44 zzzzz systemd[1]: Started Arvados controller.
-</code></pre>
-</notextile>
-
-Skip ahead to "confirm the service is working":#confirm.
-
-h3(#runit). Start the service (option 2: runit)
-
-Install runit to supervise the arvados-controller daemon. {% include 'install_runit' %}
-
-Create a supervised service.
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo mkdir /etc/service/arvados-controller</span>
-~$ <span class="userinput">cd /etc/service/arvados-controller</span>
-~$ <span class="userinput">sudo mkdir log log/main</span>
-~$ <span class="userinput">printf '#!/bin/sh\nset -a\n. /etc/arvados/environment\nexec arvados-controller 2>&1\n' | sudo tee run</span>
-~$ <span class="userinput">printf '#!/bin/sh\nexec svlogd main\n' | sudo tee log/run</span>
-~$ <span class="userinput">sudo chmod +x run log/run</span>
-~$ <span class="userinput">sudo sv exit .</span>
-~$ <span class="userinput">cd -</span>
-</code></pre>
-</notextile>
-
-Use @sv stat@ and check the log file to verify the service is running.
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo sv stat /etc/service/arvados-controller</span>
-run: /etc/service/arvados-controller: (pid 12520) 2s; run: log: (pid 12519) 2s
-~$ <span class="userinput">tail /etc/service/arvados-controller/log/main/current</span>
-{"Listen":"[::]:9004","Service":"arvados-controller","level":"info","msg":"listening","time":"2018-07-31T13:17:44.521694195Z"}
-</code></pre>
-</notextile>
-
-h3(#confirm). Confirm the service is working
-
-Confirm the service is listening on its assigned port and responding to requests.
-
-<notextile>
-<pre><code>~$ <span class="userinput">curl -X OPTIONS http://0.0.0.0:<b>9004</b>/login</span>
-{"errors":["Forbidden"],"error_token":"1533044555+684b532c"}
-</code></pre>
-</notextile>
-
-h3(#confirm-config). Confirm the public configuration is OK
-
-Confirm the publicly accessible configuration endpoint does not reveal any sensitive information (e.g., a secret that was mistakenly entered under the wrong configuration key). Use the jq program, if you have installed it, to make the JSON document easier to read.
-
-<notextile>
-<pre><code>~$ <span class="userinput">curl http://0.0.0.0:<b>9004</b>/arvados/v1/config | jq .</span>
-{
- "API": {
- "MaxItemsPerResponse": 1000,
- "MaxRequestAmplification": 4,
- "RequestTimeout": "5m"
- },
- ...
-</code></pre>
-</notextile>
layout: default
navsection: installguide
title: Install the cloud dispatcher
-
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-The cloud dispatch service is an *experimental* service for running containers on cloud VMs. It eliminates the need for SLURM, Node Manager, and SLURM dispatcher. It works with Microsoft Azure and Amazon EC2; future versions will also support Google Compute Engine.
+{% include 'notebox_begin_warning' %}
+arvados-dispatch-cloud is only relevant for cloud installations. Skip this section if you are installing an on-premises cluster that will spool jobs to Slurm.
+{% include 'notebox_end' %}
+
+# "Introduction":#introduction
+# "Create compute node VM image":#create-image
+# "Update config.yml":#update-config
+# "Install arvados-dispatch-cloud":#install-packages
+# "Start the service":#start-service
+# "Restart the API server and controller":#restart-api
+# "Confirm working installation":#confirm-working
+
+h2(#introduction). Introduction
+
+The cloud dispatch service is for running containers on cloud VMs. It works with Microsoft Azure and Amazon EC2; future versions will also support Google Compute Engine.
The cloud dispatch service can run on any node that can connect to the Arvados API service, the cloud provider's API, and the SSH service on cloud VMs. It is not resource-intensive, so you can run it on the API server node.
-*Only one dispatch process should be running at a time.* If you are migrating a system that currently runs @crunch-dispatch-slurm@, it is safest to remove the @crunch-dispatch-slurm@ service entirely before installing @arvados-dispatch-cloud@.
+h2(#create-image). Create compute node VM image and configure resolver
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl --now disable crunch-dispatch-slurm</span>
-~$ <span class="userinput">sudo apt-get remove crunch-dispatch-slurm</span>
-</code></pre>
-</notextile>
+Set up a VM following the steps "to set up a compute node":crunch2-slurm/install-compute-node.html.
+
+Compute nodes must be able to resolve the hostnames of the API server and any keepstore servers to your internal IP addresses. You can do this by running an internal DNS resolver and configuring the compute VMs to use that resolver, or by hardcoding an entry for each service in the @/etc/hosts@ file. For example:
-h2. Create a dispatcher token
+<notextile><pre><code>10.20.30.40 <span class="userinput">ClusterID.example.com</span>
+10.20.30.41 <span class="userinput">keep1.ClusterID.example.com</span>
+10.20.30.42 <span class="userinput">keep2.ClusterID.example.com</span>
+</code></pre></notextile>
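A quick way to verify the resolver setup is to check each hostname from a compute node. This is a sketch using the example hostnames from this guide (substitute your own); @getent@ consults the same resolver order as other programs on the node:

```shell
# Check that each Arvados service hostname resolves from this node.
# The hostnames below are the examples from this guide, not real services.
for h in ClusterID.example.com keep1.ClusterID.example.com; do
  if getent hosts "$h" >/dev/null; then
    echo "ok: $h"
  else
    echo "unresolved: $h"
  fi
done
```
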
-If you haven't already done so, create an Arvados superuser token to use as SystemRootToken in your cluster config file.
+Once the VM is fully configured, create a reusable VM image from it and make note of the image id.
-{% include 'create_superuser_token' %}
+h2(#update-config). Update config.yml
-h2. Create a private key
+h3. Create a private key
Generate an SSH private key with no passphrase. Save it in the cluster configuration file (see @PrivateKey@ in the example below).
</code></pre>
</notextile>
-h2. Configure the dispatcher
+h3. Configure CloudVMs
-Add or update the following portions of your cluster configuration file, @/etc/arvados/config.yml@. Refer to "config.defaults.yml":{{site.baseurl}}/admin/config.html for information about additional configuration options.
+Add or update the following portions of your cluster configuration file, @config.yml@. Refer to "config.defaults.yml":{{site.baseurl}}/admin/config.html for information about additional configuration options.
<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- ManagementToken: xyzzy
- SystemRootToken: <span class="userinput">zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz</span>
- Services:
- Controller:
- ExternalURL: "https://<span class="userinput">uuid_prefix.arvadosapi.com</span>"
+<pre><code> Services:
DispatchCloud:
InternalURLs:
"http://localhost:9006": {}
</code></pre>
</notextile>
-Minimal configuration example for Amazon EC2:
+h4. Minimal configuration example for Amazon EC2
<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Containers:
+<pre><code> Containers:
CloudVMs:
ImageID: ami-01234567890abcdef
Driver: ec2
DriverParameters:
- AccessKeyID: EALMF21BJC7MKNF9FVVR
- SecretAccessKey: yKJAPmoCQOMtYWzEUQ1tKTyrocTcbH60CRvGP3pM
+ AccessKeyID: XXXXXXXXXXXXXXXXXXXX
+ SecretAccessKey: YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
SecurityGroupIDs:
- sg-0123abcd
SubnetID: subnet-0123abcd
Region: us-east-1
EBSVolumeType: gp2
- AdminUsername: debian
+ AdminUsername: arvados
</code></pre>
</notextile>
-Minimal configuration example for Azure:
+h4. Minimal configuration example for Azure
<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Containers:
+<pre><code> Containers:
CloudVMs:
ImageID: "https://zzzzzzzz.blob.core.windows.net/system/Microsoft.Compute/Images/images/zzzzz-compute-osDisk.55555555-5555-5555-5555-555555555555.vhd"
Driver: azure
DriverParameters:
SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
- ClientSecret: 2WyXt0XFbEtutnf2hp528t6Wk9S5bOHWkRaaWwavKQo=
+ ClientSecret: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
TenantID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
CloudEnvironment: AzurePublicCloud
ResourceGroup: zzzzz
</code></pre>
</notextile>
-h2. Test your configuration
+Get the @SubscriptionID@ and @TenantID@:
-First, "add the appropriate package repository for your distribution":{{ site.baseurl }}/install/install-manual-prerequisites.html#repos.
+<pre>
+$ az account list
+[
+ {
+ "cloudName": "AzureCloud",
+ "id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX",
+ "isDefault": true,
+ "name": "Your Subscription",
+ "state": "Enabled",
+ "tenantId": "YYYYYYYY-YYYY-YYYY-YYYYYYYY",
+ "user": {
+ "name": "you@example.com",
+ "type": "user"
+ }
+ }
+]
+</pre>
-Next, install the arvados-server package.
+You will need to create a "service principal" to use as a delegated authority for API access.
-On Red Hat-based systems:
+<notextile><pre><code>$ az ad app create --display-name "Arvados Dispatch Cloud (<span class="userinput">ClusterID</span>)" --homepage "https://arvados.org" --identifier-uris "https://<span class="userinput">ClusterID.example.com</span>" --end-date 2299-12-31 --password <span class="userinput">Your_Password</span>
+$ az ad sp create "<span class="userinput">appId</span>"
+(appId is part of the response of the previous command)
+$ az role assignment create --assignee "<span class="userinput">objectId</span>" --role Owner --scope /subscriptions/{subscriptionId}/
+(objectId is part of the response of the previous command)
+</code></pre></notextile>
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install arvados-server</span>
-</code></pre>
-</notextile>
+Now update your @config.yml@ file:
-On Debian-based systems:
+@ClientID@ is the 'appId' value.
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install arvados-server</span>
-</code></pre>
-</notextile>
+@ClientSecret@ is what was provided as <span class="userinput">Your_Password</span>.
+
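Putting the pieces together, the @az@ output maps into @DriverParameters@ as follows (the IDs shown are placeholders, not working values):

```yaml
Containers:
  CloudVMs:
    DriverParameters:
      # appId returned by "az ad app create"
      ClientID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
      # the --password value you supplied to "az ad app create"
      ClientSecret: Your_Password
      # tenantId from "az account list"
      TenantID: YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY
      # id from "az account list"
      SubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
```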
+h3. Test your configuration
Run the @cloudtest@ tool to verify that your configuration works. This creates a new cloud VM, confirms that it boots correctly and accepts your configured SSH private key, and shuts it down.
Refer to the "cloudtest tool documentation":../admin/cloudtest.html for more information.
-h2. Install the dispatcher
+{% assign arvados_component = 'arvados-dispatch-cloud' %}
+
+{% include 'install_packages' %}
+
+{% include 'start_service' %}
+
+{% include 'restart_api' %}
+
+h2(#confirm-working). Confirm working installation
-On Red Hat-based systems:
+On the dispatch node, start monitoring the arvados-dispatch-cloud logs:
<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install arvados-dispatch-cloud</span>
-~$ <span class="userinput">sudo systemctl enable arvados-dispatch-cloud</span>
+<pre><code>~$ <span class="userinput">sudo journalctl -o cat -fu arvados-dispatch-cloud.service</span>
</code></pre>
</notextile>
-On Debian-based systems:
+"Make sure to install the arvados/jobs image.":install-jobs-image.html
+
+Submit a simple container request:
<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install arvados-dispatch-cloud</span>
+<pre><code>shell:~$ <span class="userinput">arv container_request create --container-request '{
+ "name": "test",
+ "state": "Committed",
+ "priority": 1,
+ "container_image": "arvados/jobs:latest",
+ "command": ["echo", "Hello, Crunch!"],
+ "output_path": "/out",
+ "mounts": {
+ "/out": {
+ "kind": "tmp",
+ "capacity": 1000
+ }
+ },
+ "runtime_constraints": {
+ "vcpus": 1,
+ "ram": 1048576
+ }
+}'</span>
</code></pre>
</notextile>
-{% include 'notebox_begin' %}
+This command should return a record with a @container_uuid@ field. Once @arvados-dispatch-cloud@ polls the API server for new containers to run, you should see it dispatch that same container.
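If you are scripting this step, the new container's UUID can be pulled out of the JSON response with @jq@. This sketch runs against a canned response (the UUIDs are placeholders); in practice you would pipe the output of @arv container_request create@ instead:

```shell
# Sample container request record, standing in for the output of
# "arv container_request create ..." (UUIDs are placeholders).
cr_json='{"uuid":"zzzzz-xvhdp-000000000000000","container_uuid":"zzzzz-dz642-000000000000000"}'
container_uuid=$(echo "$cr_json" | jq -r .container_uuid)
echo "$container_uuid"
```
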
-The arvados-dispatch-cloud package includes configuration files for systemd. If you're using a different init system, configure a service to start and stop an @arvados-dispatch-cloud@ process as desired.
+The @arvados-dispatch-cloud@ API provides a list of queued and running jobs. For example:
-{% include 'notebox_end' %}
+<notextile>
+<pre><code>~$ <span class="userinput">curl ...</span>
+</code></pre>
+</notextile>
-h2. Verify the dispatcher is running
+When the container finishes, the dispatcher will log it.
-Use your ManagementToken to test the dispatcher's metrics endpoint.
+After the container finishes, you can get the container record by UUID *from a shell server* to see its results:
<notextile>
-<pre><code>~$ <span class="userinput">token="xyzzy"</span>
-~$ <span class="userinput">curl -H "Authorization: Bearer $token" http://localhost:9006/metrics</span>
-# HELP arvados_dispatchcloud_containers_running Number of containers reported running by cloud VMs.
-# TYPE arvados_dispatchcloud_containers_running gauge
-arvados_dispatchcloud_containers_running 0
-[...]
+<pre><code>shell:~$ <span class="userinput">arv get <b>zzzzz-dz642-hdp2vpu9nq14tx0</b></span>
+{
+ ...
+ "exit_code":0,
+ "log":"a01df2f7e5bc1c2ad59c60a837e90dc6+166",
+ "output":"d41d8cd98f00b204e9800998ecf8427e+0",
+ "state":"Complete",
+ ...
+}
+</code></pre>
+</notextile>
+
+You can use standard Keep tools to view the container's output and logs from their corresponding fields. For example, to see the logs from the collection referenced in the @log@ field:
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv keep ls <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b></span>
+./crunch-run.txt
+./stderr.txt
+./stdout.txt
+~$ <span class="userinput">arv-get <b>a01df2f7e5bc1c2ad59c60a837e90dc6+166</b>/stdout.txt</span>
+2016-08-05T13:53:06.201011Z Hello, Crunch!
</code></pre>
</notextile>
+
+If the container does not dispatch successfully, refer to the @arvados-dispatch-cloud@ logs for information about why it failed.
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Set up Docker
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+{% include 'install_compute_docker' %}
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Install arvados/jobs image
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+h2. Create a project for Docker images
+
+Here we create a default project for the standard Arvados Docker images, and give all users read access to it. The project is owned by the system user.
+
+<notextile>
+<pre><code>~$ <span class="userinput">uuid_prefix=$(arv --format=uuid user current | cut -d- -f1)</span>
+~$ <span class="userinput">project_uuid=$(arv --format=uuid group create --group '{"owner_uuid":"'$uuid_prefix'-tpzed-000000000000000", "group_class":"project", "name":"Arvados Standard Docker Images"}')</span>
+~$ <span class="userinput">echo "Arvados project uuid is '$project_uuid'"</span>
+~$ <span class="userinput">read -rd $'\000' newlink <<EOF; arv link create --link "$newlink"</span>
+<span class="userinput">{
+ "tail_uuid":"${uuid_prefix}-j7d0g-fffffffffffffff",
+ "head_uuid":"$project_uuid",
+ "link_class":"permission",
+ "name":"can_read"
+}
+EOF</span>
+</code></pre></notextile>
+
+h2. Import the arvados/jobs docker image
+
+In order to start workflows from Workbench, there needs to be a Docker image @arvados/jobs@ tagged with the version of Arvados you are installing. The following command downloads the latest arvados/jobs image from Docker Hub and loads it into Keep. In this example @$project_uuid@ should be the UUID of the "Arvados Standard Docker Images" project.
+
+<notextile>
+<pre><code>~$ <span class="userinput">arv-keepdocker --pull arvados/jobs latest --project-uuid $project_uuid</span>
+</code></pre></notextile>
+
+If the image needs to be downloaded from Docker Hub, the command can take a few minutes to complete, depending on available network bandwidth.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-Keep-balance deletes unreferenced and overreplicated blocks from Keep servers, makes additional copies of underreplicated blocks, and moves blocks into optimal locations as needed (e.g., after adding new servers). See "Balancing Keep servers":{{site.baseurl}}/admin/keep-balance.html for usage details.
-
-{% include 'notebox_begin' %}
+# "Introduction":#introduction
+# "Update config.yml":#update-config
+# "Install keep-balance package":#install-packages
+# "Start the service":#start-service
-If you are installing keep-balance on an existing system with valuable data, you can run keep-balance in "dry run" mode first and review its logs as a precaution. To do this, edit your keep-balance startup script to use the flags @-commit-pulls=false -commit-trash=false@.
-
-{% include 'notebox_end' %}
+h2(#introduction). Introduction
-h2. Install keep-balance
+Keep-balance deletes unreferenced and overreplicated blocks from Keep servers, makes additional copies of underreplicated blocks, and moves blocks into optimal locations as needed (e.g., after adding new servers). See "Balancing Keep servers":{{site.baseurl}}/admin/keep-balance.html for usage details.
Keep-balance can be installed anywhere with network access to Keep services. Typically it runs on the same host as keepproxy.
-*A cluster should have only one keep-balance process running at a time.*
-
-On Debian-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install keep-balance</span>
-</code></pre>
-</notextile>
-
-On Red Hat-based systems:
+*A cluster should have only one instance of keep-balance running at a time.*
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install keep-balance</span>
-</code></pre>
-</notextile>
+{% include 'notebox_begin' %}
-Verify that @keep-balance@ is functional:
+If you are installing keep-balance on an existing system with valuable data, you can run keep-balance in "dry run" mode first and review its logs as a precaution. To do this, edit your keep-balance startup script to use the flags @-commit-pulls=false -commit-trash=false@.
-<notextile>
-<pre><code>~$ <span class="userinput">keep-balance -h</span>
-...
-Usage of ./keep-balance:
- -commit-pulls
- send pull requests (make more replicas of blocks that are underreplicated or are not in optimal rendezvous probe order)
- -commit-trash
- send trash requests (delete unreferenced old blocks, and excess replicas of overreplicated blocks)
-...
-</code></pre>
-</notextile>
+{% include 'notebox_end' %}
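With systemd, one way to apply the dry-run flags without editing the packaged unit file is a drop-in override. This is a sketch; the unit name assumes the packaged @keep-balance.service@:

```ini
# /etc/systemd/system/keep-balance.service.d/dry-run.conf
[Service]
# Clear the packaged ExecStart, then run keep-balance in dry-run mode.
ExecStart=
ExecStart=/usr/bin/keep-balance -commit-pulls=false -commit-trash=false
```

Remove the drop-in and run @systemctl daemon-reload@ once you are satisfied with the logs.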
-h3. Update the cluster config
+h2(#update-config). Update the cluster config
-Edit the cluster config at @/etc/arvados/config.yml@ and set @Services.Keepbalance.InternalURLs@. Replace @uuid_prefix@ with your cluster id.
+Edit the cluster config at @config.yml@ and set @Services.Keepbalance.InternalURLs@. This port is only used to publish metrics.
<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Services:
+<pre><code> Services:
Keepbalance:
InternalURLs:
- "http://localhost:9005/": {}
- TLS:
- Insecure: false
+ "http://<span class="userinput">keep.ClusterID.example.com</span>:9005/": {}
</code></pre>
</notextile>
-Set @TLS.Insecure: true@ if your API server’s TLS certificate is not signed by a recognized CA.
-
-h3. Start the service (option 1: systemd)
-
-If your system does not use systemd, skip this section and follow the "runit instructions":#runit instead.
-
-If your system uses systemd, the keep-balance service should already be set up. Start it and check its status:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart keep-balance</span>
-~$ <span class="userinput">sudo systemctl status keep-balance</span>
-● keep-balance.service - Arvados Keep Balance
- Loaded: loaded (/lib/systemd/system/keep-balance.service; enabled)
- Active: active (running) since Sat 2017-02-14 18:46:01 UTC; 3 days ago
- Docs: https://doc.arvados.org/
- Main PID: 541 (keep-balance)
- CGroup: /system.slice/keep-balance.service
- └─541 /usr/bin/keep-balance -commit-pulls -commit-trash
-
-Feb 14 18:46:01 zzzzz.arvadosapi.com keep-balance[541]: 2017/02/14 18:46:01 starting up: will scan every 10m0s and on SIGUSR1
-Feb 14 18:56:01 zzzzz.arvadosapi.com keep-balance[541]: 2017/02/14 18:56:01 Run: start
-Feb 14 18:56:01 zzzzz.arvadosapi.com keep-balance[541]: 2017/02/14 18:56:01 skipping zzzzz-bi6l4-rbtrws2jxul6i4t with service type "proxy"
-Feb 14 18:56:01 zzzzz.arvadosapi.com keep-balance[541]: 2017/02/14 18:56:01 clearing existing trash lists, in case the new rendezvous order differs from previous run
-</code></pre>
-</notextile>
-
-h3(#runit). Start the service (option 2: runit)
-
-Install runit to supervise the keep-balance daemon. {% include 'install_runit' %}
-
-Create a supervised service.
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo mkdir /etc/service/keep-balance</span>
-~$ <span class="userinput">cd /etc/service/keep-balance</span>
-~$ <span class="userinput">sudo mkdir log log/main</span>
-~$ <span class="userinput">printf '#!/bin/sh\nexec keep-balance -commit-pulls -commit-trash 2>&1\n' | sudo tee run</span>
-~$ <span class="userinput">printf '#!/bin/sh\nexec svlogd main\n' | sudo tee log/run</span>
-~$ <span class="userinput">sudo chmod +x run log/run</span>
-~$ <span class="userinput">sudo sv exit .</span>
-~$ <span class="userinput">cd -</span>
-</code></pre>
-</notextile>
-
-Use @sv stat@ and check the log file to verify the service is running.
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo sv stat /etc/service/keep-balance</span>
-run: /etc/service/keep-balance: (pid 12520) 2s; run: log: (pid 12519) 2s
-~$ <span class="userinput">tail /etc/service/keep-balance/log/main/current</span>
-2017/02/14 18:46:01 starting up: will scan every 10m0s and on SIGUSR1
-2017/02/14 18:56:01 Run: start
-2017/02/14 18:56:01 skipping zzzzz-bi6l4-rbtrws2jxul6i4t with service type "proxy"
-2017/02/14 18:56:01 clearing existing trash lists, in case the new rendezvous order differs from previous run
-</code></pre>
-</notextile>
-
-h2. Enable garbage collection
-
Ensure your cluster configuration has @Collections.BlobTrash: true@ (this is the default).
<notextile>
-<pre><code>~$ arvados-server config-dump | grep BlobTrash:
+<pre><code># arvados-server config-dump | grep BlobTrash:
BlobTrash: true
</code></pre>
</notextile>
If BlobTrash is false, unneeded blocks will be counted and logged by keep-balance, but they will not be deleted.
+
+{% assign arvados_component = 'keep-balance' %}
+
+{% include 'install_packages' %}
+
+{% include 'start_service' %}
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-The Keep-web server provides read/write HTTP (WebDAV) access to files stored in Keep. It serves public data to unauthenticated clients, and serves private data to clients that supply Arvados API tokens. It can be installed anywhere with access to Keep services, typically behind a web proxy that provides TLS support. See the "godoc page":http://godoc.org/github.com/curoverse/arvados/services/keep-web for more detail.
+# "Introduction":#introduction
+# "Configure DNS":#dns
+# "Configure anonymous user token":#update-config
+# "Update nginx configuration":#update-nginx
+# "Install keep-web package":#install-packages
+# "Start the service":#start-service
+# "Restart the API server and controller":#restart-api
+# "Confirm working installation":#confirm-working
-By convention, we use the following hostnames for the Keep-web service:
+h2(#introduction). Introduction
+
+The Keep-web server provides read/write HTTP (WebDAV) access to files stored in Keep. This makes it easy to access files in Keep from a browser, or mount Keep as a network folder using WebDAV support in various operating systems. It serves public data to unauthenticated clients, and serves private data to clients that supply Arvados API tokens. It can be installed anywhere with access to Keep services, typically behind a web proxy that provides TLS support. See the "godoc page":http://godoc.org/github.com/curoverse/arvados/services/keep-web for more detail.
+
+h2(#dns). Configure DNS
+
+It is important to configure the keep-web service properly so it does not open up cross-site scripting (XSS) attacks. An HTML file can be stored in a collection. If an attacker causes a victim to visit that HTML file through Workbench, it will be rendered by the browser. If all collections are served at the same domain, the browser will consider collections as coming from the same origin and thus have access to the same browsing data (such as the API token), enabling malicious Javascript in the HTML file to access Arvados as the victim.
+
+There are two approaches to mitigate this.
+
+# The service can tell the browser that all files should go to download instead of in-browser preview, except in situations where an attacker is unlikely to be able to gain access to anything they didn't already have access to.
+# Each collection served by @keep-web@ is served on its own virtual host. This allows files with executable content to be displayed in-browser securely. The virtual host embeds the collection uuid or portable data hash in the hostname. For example, a collection with uuid @xxxxx-4zz18-tci4vn4fa95w0zx@ could be served as @xxxxx-4zz18-tci4vn4fa95w0zx.collections.ClusterID.example.com@ . The portable data hash @dd755dbc8d49a67f4fe7dc843e4f10a6+54@ could be served at @dd755dbc8d49a67f4fe7dc843e4f10a6-54.collections.ClusterID.example.com@ . This requires a "wildcard DNS record":https://en.wikipedia.org/wiki/Wildcard_DNS_record and a "wildcard TLS certificate.":https://en.wikipedia.org/wiki/Wildcard_certificate
+
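The virtual host name for a portable data hash follows a simple rule: @+@ is not a legal character in DNS names, so it is replaced with @-@. A sketch of the mapping, using the example domain above:

```shell
# Map a portable data hash to its keep-web virtual host name:
# "+" is not allowed in DNS labels, so it becomes "-".
pdh="dd755dbc8d49a67f4fe7dc843e4f10a6+54"
host="$(echo "$pdh" | sed -e 's/+/-/g').collections.ClusterID.example.com"
echo "$host"
```
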
+h3. Collections download URL
+
+Download links will be served from the URL in @Services.WebDAVDownload.ExternalURL@ . The collection uuid or portable data hash is put in the URL path.
+
+If @WebDAVDownload@ is blank, download links are served via @WebDAV@ with the @disposition=attachment@ query parameter. Unlike preview links, browsers do not render attachments, so there is no risk of XSS.
+
+If @WebDAVDownload@ is blank, and @WebDAV@ has a single origin (not wildcard, see below), then Workbench will show an error page.
<notextile>
-<pre><code>download.<span class="userinput">uuid_prefix</span>.your.domain
-collections.<span class="userinput">uuid_prefix</span>.your.domain
-*.collections.<span class="userinput">uuid_prefix</span>.your.domain
+<pre><code> Services:
+ WebDAVDownload:
+ ExternalURL: https://<span class="userinput">download.ClusterID.example.com</span>
</code></pre>
</notextile>
-The above hostnames should resolve from anywhere on the internet.
+h3. Collections preview URL
+
+Collections will be served using the URL pattern in @Services.WebDAV.ExternalURL@ . If blank, @Services.WebDAVDownload.ExternalURL@ is used instead, and inline preview is disabled. If both are empty, downloading collections from Workbench will be impossible. When wildcard domains are configured, credentials are still required to access non-public data.
+
+h4. In their own subdomain
+
+Collections can be served from their own subdomain:
-h2. Install Keep-web
+<notextile>
+<pre><code> Services:
+ WebDAV:
+ ExternalURL: https://<span class="userinput">*.collections.ClusterID.example.com/</span>
+</code></pre>
+</notextile>
-Typically Keep-web runs on the same host as Keepproxy.
+h4. Under the main domain
-On Debian-based systems:
+Alternatively, they can be served under the main domain by including @--@:
<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install keep-web</span>
+<pre><code> Services:
+ WebDAV:
+ ExternalURL: https://<span class="userinput">*--collections.ClusterID.example.com/</span>
</code></pre>
</notextile>
-On Red Hat-based systems:
+h4. From a single domain
+
+Serve preview links from a single domain, with the collection uuid or portable data hash in the URL path (similar to downloads). This configuration only allows previews of public data (data accessible by the anonymous user) and collection-sharing links (where the token is already embedded in the URL); it will ignore authorization headers, so a request for non-public data may return "404 Not Found" even if normally valid credentials were provided.
<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install keep-web</span>
+<pre><code> Services:
+ WebDAV:
+ ExternalURL: https://<span class="userinput">collections.ClusterID.example.com/</span>
</code></pre>
</notextile>
-Verify that @Keep-web@ is functional:
+Note the trailing slash.
+
+h2. Set InternalURLs
<notextile>
-<pre><code>~$ <span class="userinput">keep-web -h</span>
-Usage of keep-web:
- -config file
- Site configuration file (default may be overridden by setting an ARVADOS_CONFIG environment variable) (default "/etc/arvados/config.yml")
- -dump-config
- write current configuration to stdout and exit
-[...]
- -version
- print version information and exit.
+<pre><code> Services:
+ WebDAV:
+ InternalURLs:
+      "http://<span class="userinput">localhost:9002</span>": {}
</code></pre>
</notextile>
-h3. Set up a reverse proxy with TLS support
+h2(#update-config). Configure anonymous user token
-The Keep-web service will be accessible from anywhere on the internet, so we recommend using TLS for transport encryption.
+{% assign railscmd = "bundle exec ./script/get_anonymous_user_token.rb --get" %}
+{% assign railsout = "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" %}
+If you intend to use Keep-web to serve public data to anonymous clients, configure it with an anonymous token. Use the following command on the <strong>API server</strong> to create an anonymous user token. {% include 'install_rails_command' %}
-This is best achieved by putting a reverse proxy with TLS support in front of Keep-web, running on port 443 and passing requests to Keep-web on port 9002 (or whatever port you chose in your run script).
+<notextile>
+<pre><code> Users:
+ AnonymousUserToken: <span class="userinput">"{{railsout}}"</span>
+</code></pre>
+</notextile>
+
+Set @Users.AnonymousUserToken: ""@ (empty string) or leave it out if you do not want to serve public data.
+
+h2(#update-nginx). Update nginx configuration
-Note: A wildcard TLS certificate is required in order to support a full-featured secure Keep-web service. Without it, Keep-web can offer file downloads for all Keep data; however, in order to avoid cross-site scripting vulnerabilities, Keep-web refuses to serve private data as web content except when it is accessed using a "secret link" share. With a wildcard TLS certificate and DNS configured appropriately, all data can be served as web content.
+Put a reverse proxy with TLS support in front of keep-web. Keep-web itself runs on port 9002 (or whatever is specified in @Services.WebDAV.InternalURLs@); the reverse proxy runs on port 443 and forwards requests to keep-web.
-For example, using Nginx:
+Use a text editor to create a new file @/etc/nginx/conf.d/keep-web.conf@ with the following configuration. Options that need attention are marked in <span class="userinput">red</span>.
<notextile><pre>
upstream keep-web {
}
server {
- listen <span class="userinput">[your public IP address]</span>:443 ssl;
- server_name download.<span class="userinput">uuid_prefix</span>.your.domain
- collections.<span class="userinput">uuid_prefix</span>.your.domain
- *.collections.<span class="userinput">uuid_prefix</span>.your.domain
- ~.*--collections.<span class="userinput">uuid_prefix</span>.your.domain;
+ listen *:443 ssl;
+ server_name <span class="userinput">download.ClusterID.example.com</span>
+ <span class="userinput">collections.ClusterID.example.com</span>
+ <span class="userinput">*.collections.ClusterID.example.com</span>
+ <span class="userinput">~.*--collections.ClusterID.example.com</span>;
proxy_connect_timeout 90s;
proxy_read_timeout 300s;
ssl on;
- ssl_certificate <span class="userinput"/>YOUR/PATH/TO/cert.pem</span>;
- ssl_certificate_key <span class="userinput"/>YOUR/PATH/TO/cert.key</span>;
+ ssl_certificate <span class="userinput">/YOUR/PATH/TO/cert.pem</span>;
+ ssl_certificate_key <span class="userinput">/YOUR/PATH/TO/cert.key</span>;
location / {
proxy_pass http://keep-web;
</pre></notextile>
{% include 'notebox_begin' %}
-If you restrict access to your Arvados services based on network topology -- for example, your proxy server is not reachable from the public internet -- additional proxy configuration might be needed to thwart cross-site scripting attacks that would circumvent your restrictions. Read the "'Intranet mode' section of the Keep-web documentation":https://godoc.org/github.com/curoverse/arvados/services/keep-web#hdr-Intranet_mode now.
-{% include 'notebox_end' %}
-
-h3(#dns). Configure DNS
-
-Configure your DNS servers so the following names resolve to your Nginx proxy's public IP address.
-* @download.uuid_prefix.your.domain@
-* @collections.uuid_prefix.your.domain@
-* @*--collections.uuid_prefix.your.domain@, if you have a wildcard TLS certificate valid for @*.uuid_prefix.your.domain@ and your DNS server allows this without interfering with other DNS names.
-* @*.collections.uuid_prefix.your.domain@, if you have a wildcard TLS certificate valid for these names.
-
-If neither of the above wildcard options is feasible, you have two choices:
-# Serve web content at @collections.uuid_prefix.your.domain@, but only for unauthenticated requests (public data and collection sharing links). Authenticated requests will always result in file downloads, using the @download@ name. For example, the Workbench "preview" button and the "view entire log file" link will invoke file downloads instead of displaying content in the browser window.
-# In the special case where you know you are immune to XSS exploits, you can enable the "trust all content" mode in Keep-web and Workbench (setting @Collections.TrustAllContent: true@ on the config file). With this enabled, inline web content can be served from a single @collections@ host name; no wildcard DNS or certificate is needed. Do not do this without understanding the security implications described in the "Keep-web documentation":http://godoc.org/github.com/curoverse/arvados/services/keep-web.
+If you restrict access to your Arvados services based on network topology -- for example, your proxy server is not reachable from the public internet -- additional proxy configuration might be needed to thwart cross-site scripting attacks that would circumvent your restrictions.
-h2. Configure Keep-web
+Normally, Keep-web accepts requests for multiple collections using the same host name, provided the client's credentials are not being used. This provides insufficient XSS protection in an installation where the "anonymously accessible" data is not truly public, but merely protected by network topology.
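As a sketch of why this matters: browsers scope the same-origin policy to the (scheme, host, port) triple, so serving each collection from its own wildcard host name isolates collections from one another, while a single shared host name does not. The host names below are placeholders, not real cluster addresses:

```python
from urllib.parse import urlsplit

def origin(url: str):
    """Return the (scheme, host, port) triple a browser treats as an origin."""
    p = urlsplit(url)
    port = p.port or {"http": 80, "https": 443}[p.scheme]
    return (p.scheme, p.hostname, port)

# With wildcard DNS, each collection is served from its own origin, so a
# malicious page in one collection cannot read data from another:
a = origin("https://aaaa--collections.xxxxx.example.com/index.html")
b = origin("https://bbbb--collections.xxxxx.example.com/index.html")
print(a != b)  # True: distinct origins

# With a single shared host name, every collection shares one origin:
c = origin("https://collections.xxxxx.example.com/c=aaaa/_/index.html")
d = origin("https://collections.xxxxx.example.com/c=bbbb/_/index.html")
print(c == d)  # True: same origin
```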
-{% assign railscmd = "bundle exec ./script/get_anonymous_user_token.rb --get" %}
-{% assign railsout = "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz" %}
-If you intend to use Keep-web to serve public data to anonymous clients, configure it with an anonymous token. You can use the same one you used when you set up your Keepproxy server, or use the following command on the <strong>API server</strong> to create another. {% include 'install_rails_command' %}
-
-Set the cluster config file like the following:
-
-<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Services:
- Controller:
- ExternalURL: "https://<span class="userinput">uuid_prefix</span>.your.domain"
- WebDAV:
- InternalURLs:
- "http://keep_web_hostname_goes_here:9002/": {}
- ExternalURL: "https://collections.<span class="userinput">uuid_prefix</span>.your.domain"
- WebDAVDownload:
- InternalURLs:
- "http://keep_web_hostname_goes_here:9002/": {}
- ExternalURL: "https://download.<span class="userinput">uuid_prefix</span>.your.domain"
- Users:
- AnonymousUserToken: "{{railsout}}"
- Collections:
- TrustAllContent: false
- TLS:
- Insecure: false
-</code></pre>
-</notextile>
-
-Set @Users.AnonymousUserToken: ""@ (empty string) if you do not want to serve public data.
-
-Set @TLS.Insecure: true@ if your API server's TLS certificate is not signed by a recognized CA.
-
-Workbench has features like "download file from collection" and "show image" which work better if the content is served by Keep-web rather than Workbench itself. We recommend using the two different hostnames ("download" and "collections" above) for file downloads and inline content respectively.
-
-The following entry on your cluster configuration file (@/etc/arvados/config.yml@) details the URL that will be used for file downloads.
-
-<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Services:
- WebDAVDownload:
- ExternalURL: "https://download.<span class="userinput">uuid_prefix</span>.your.domain"
-</code></pre>
-</notextile>
-
-Additionally, one of the following entries on your cluster configuration file (depending on your DNS setup) tells Workbench which URL will be used to serve user content that can be displayed in the browser, like image previews and static HTML pages.
-
-<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Services:
- WebDAV:
- ExternalURL: "https://*--collections.<span class="userinput">uuid_prefix</span>.your.domain"
- ExternalURL: "https://*.collections.<span class="userinput">uuid_prefix</span>.your.domain"
- ExternalURL: "https://collections.<span class="userinput">uuid_prefix</span>.your.domain"
-</code></pre>
-</notextile>
+In such cases -- for example, a site which is not reachable from the internet, where some data is world-readable from Arvados's perspective but is intended to be available only to users within the local network -- the downstream proxy should be configured to return 401 for all paths beginning with "/c=".
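A minimal nginx sketch of such a rule (illustrative only; adapt the location block to your own proxy configuration):

```nginx
# Return 401 for single-host collection paths, so "anonymous" data that is
# protected only by network topology cannot be fetched cross-site.
location ~ ^/c= {
    return 401;
}
```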
+{% include 'notebox_end' %}
-h2. Run Keep-web
+{% assign arvados_component = 'keep-web' %}
-h3. Start the service (option 1: systemd)
+{% include 'install_packages' %}
-If your system does not use systemd, skip this section and follow the "runit instructions":#runit instead.
+{% include 'start_service' %}
-If your system uses systemd, the keep-web service should already be set up. Start it and check its status:
+{% include 'restart_api' %}
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart keep-web</span>
-~$ <span class="userinput">sudo systemctl status keep-web</span>
-● keep-web.service - Arvados Keep web gateway
- Loaded: loaded (/lib/systemd/system/keep-web.service; enabled)
- Active: active (running) since Sat 2019-08-10 10:33:21 UTC; 3 days ago
- Docs: https://doc.arvados.org/
- Main PID: 4242 (keep-web)
- CGroup: /system.slice/keep-web.service
- └─4242 /usr/bin/keep-web
-[...]
-</code></pre>
-</notextile>
+h2(#confirm-working). Confirm working installation
-h3(#runit). Start the service (option 2: runit)
+<notextile><pre><code>
+$ curl -H "Authorization: Bearer $system_root_token" https://<span class="userinput">download.ClusterID.example.com</span>/c=59389a8f9ee9d399be35462a0f92541c-53/_/hello.txt
+</code></pre></notextile>
-Install runit to supervise the Keep-web daemon. {% include 'install_runit' %}
+If wildcard collections domains are configured:
-The basic command to start Keep-web in the service run script is:
+<notextile><pre><code>
+$ curl -H "Authorization: Bearer $system_root_token" https://<span class="userinput">59389a8f9ee9d399be35462a0f92541c-53.collections.ClusterID.example.com</span>/hello.txt
+</code></pre></notextile>
-<notextile>
-<pre><code>exec keep-web
-</code></pre>
-</notextile>
+If using a single collections preview domain:
+<notextile><pre><code>
+$ curl https://<span class="userinput">collections.ClusterID.example.com</span>/c=59389a8f9ee9d399be35462a0f92541c-53/t=$system_root_token/_/hello.txt
+</code></pre></notextile>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
+# "Introduction":#introduction
+# "Update config.yml":#update-config
+# "Update nginx configuration":#update-nginx
+# "Install keepproxy package":#install-packages
+# "Start the service":#start-service
+# "Restart the API server and controller":#restart-api
+# "Confirm working installation":#confirm-working
+
+h2(#introduction). Introduction
+
The Keepproxy server is a gateway into your Keep storage. Unlike the Keepstore servers, which are only accessible on the local LAN, Keepproxy is suitable for clients located elsewhere on the internet. Specifically, in contrast to Keepstore:
-* A client writing through Keepproxy generates less network traffic: the client sends a single copy of a data block, and Keepproxy sends copies to the appropriate Keepstore servers.
+* A client writing through Keepproxy sends a single copy of a data block, and Keepproxy distributes copies to the appropriate Keepstore servers.
* A client can write through Keepproxy without precomputing content hashes. Notably, the browser-based upload feature in Workbench requires Keepproxy.
* Keepproxy checks API token validity before processing requests. (Clients that can connect directly to Keepstore can use it as scratch space even without a valid API token.)
<div class="offset1">
table(table table-bordered table-condensed).
-|_Hostname_|
-|keep.@uuid_prefix@.your.domain|
+|_. Hostname|
+|@keep.ClusterID.example.com@|
</div>
This hostname should resolve from anywhere on the internet.
-h2. Install Keepproxy
-
-On Debian-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install keepproxy</span>
-</code></pre>
-</notextile>
-
-On Red Hat-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install keepproxy</span>
-</code></pre>
-</notextile>
-
-Verify that Keepproxy is functional:
-
-<notextile>
-<pre><code>~$ <span class="userinput">keepproxy -h</span>
-Usage of keepproxy:
- -config file
- Site configuration file (default may be overridden by setting an ARVADOS_CONFIG environment variable) (default "/etc/arvados/config.yml")
- -dump-config
- write current configuration to stdout and exit
-[...]
- -version
- print version information and exit.
-</code></pre>
-</notextile>
+h2(#update-config). Update config.yml
-h3. Update the cluster config
-
-Edit the cluster config at @/etc/arvados/config.yml@ and set @Services.Keepproxy.ExternalURL@ and @Services.Keepproxy.InternalURLs@. Replace @uuid_prefix@ with your cluster id.
+Edit the cluster config at @/etc/arvados/config.yml@ and set @Services.Keepproxy.ExternalURL@ and @Services.Keepproxy.InternalURLs@.
<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- Services:
+<pre><code> Services:
Keepproxy:
- ExternalURL: <span class="userinput">https://keep.uuid_prefix.your.domain</span>
+ ExternalURL: <span class="userinput">https://keep.ClusterID.example.com</span>
InternalURLs:
- <span class="userinput">"http://localhost:25107": {}</span>
+ <span class="userinput">"http://localhost:25107": {}</span>
</code></pre>
</notextile>
-h3. Set up a reverse proxy with SSL support
+h2(#update-nginx). Update Nginx configuration
-Because the Keepproxy is intended for access from anywhere on the internet, it is recommended to use SSL for transport encryption.
+Put a reverse proxy with SSL support in front of Keepproxy. Keepproxy itself runs on port 25107 (or whatever is specified in @Services.Keepproxy.InternalURLs@); the reverse proxy runs on port 443 and forwards requests to Keepproxy.
-This is best achieved by putting a reverse proxy with SSL support in front of Keepproxy. Keepproxy itself runs on port 25107 by default; your reverse proxy can run on port 443 and pass requests to Keepproxy on port 25107.
+Use a text editor to create a new file @/etc/nginx/conf.d/keepproxy.conf@ with the following configuration. Options that need attention are marked in <span class="userinput">red</span>.
-<notextile><pre>
-upstream keepproxy {
+<notextile><pre><code>upstream keepproxy {
server 127.0.0.1:<span class="userinput">25107</span>;
}
server {
- listen <span class="userinput">[your public IP address]</span>:443 ssl;
- server_name keep.<span class="userinput">uuid_prefix</span>.your.domain;
+ listen *:443 ssl;
+ server_name <span class="userinput">keep.ClusterID.example.com</span>;
proxy_connect_timeout 90s;
proxy_read_timeout 300s;
proxy_http_version 1.1;
proxy_request_buffering off;
- ssl on;
- ssl_certificate /etc/nginx/keep.<span class="userinput">uuid_prefix</span>.your.domain-ssl.crt;
- ssl_certificate_key /etc/nginx/keep.<span class="userinput">uuid_prefix</span>.your.domain-ssl.key;
+ ssl on;
+ ssl_certificate <span class="userinput">/YOUR/PATH/TO/cert.pem</span>;
+ ssl_certificate_key <span class="userinput">/YOUR/PATH/TO/cert.key</span>;
# Clients need to be able to upload blocks of data up to 64MiB in size.
client_max_body_size 64m;
proxy_pass http://keepproxy;
}
}
-</pre></notextile>
+</code></pre></notextile>
Note: if the Web uploader is failing to upload data and there are no logs from keepproxy, be sure to check the nginx proxy logs. In addition to "GET" and "PUT", the nginx proxy must pass "OPTIONS" requests to keepproxy, which should respond with appropriate Cross-origin resource sharing headers. If the CORS headers are not present, browser security policy will cause the upload request to silently fail. The CORS headers are generated by keepproxy and should not be set in nginx.
-h3. Tell the API server about the Keepproxy server
+{% assign arvados_component = 'keepproxy' %}
-The API server needs to be informed about the presence of your Keepproxy server.
+{% include 'install_packages' %}
-First, if you don't already have an admin token, create a superuser token.
+{% include 'start_service' %}
-{% include 'create_superuser_token' %}
+{% include 'restart_api' %}
-Configure your environment to run @arv@ using the output of create_superuser_token.rb:
+h2(#confirm-working). Confirm working installation
-<pre>
-export ARVADOS_API_HOST=zzzzz.example.com
-export ARVADOS_API_TOKEN=zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
-</pre>
+Log into a host that is on a network external to your private Arvados network. The host should be able to contact your keepproxy server (eg @keep.ClusterID.example.com@), but not your keepstore servers (eg @keep[0-9].ClusterID.example.com@).
-<notextile>
-<pre><code>~$ <span class="userinput">uuid_prefix=`arv --format=uuid user current | cut -d- -f1`</span>
-~$ <span class="userinput">echo "Site prefix is '$uuid_prefix'"</span>
-~$ <span class="userinput">read -rd $'\000' keepservice <<EOF; arv keep_service create --keep-service "$keepservice"</span>
-<span class="userinput">{
- "service_host":"<strong>keep.$uuid_prefix.your.domain</strong>",
- "service_port":443,
- "service_ssl_flag":true,
- "service_type":"proxy"
-}
-EOF</span>
-</code></pre></notextile>
+@ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ must be set in the environment.
-h2. Run Keepproxy
+@ARVADOS_API_HOST@ should be the hostname of the API server.
-h3. Start the service (option 1: systemd)
+@ARVADOS_API_TOKEN@ should be the system root token.
-If your system does not use systemd, skip this section and follow the "runit instructions":#runit instead.
+Install the "Command line SDK":{{site.baseurl}}/sdk/cli/install.html
-If your system uses systemd, the keepproxy service should already be set up. Start it and check its status:
+Check that the keepproxy server is in the @keep_service@ "accessible" list:
<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart keepproxy</span>
-~$ <span class="userinput">sudo systemctl status keepproxy</span>
-● keepproxy.service - Arvados Keep Proxy
- Loaded: loaded (/lib/systemd/system/keepproxy.service; enabled)
- Active: active (running) since Tue 2019-07-23 09:33:47 EDT; 3 weeks 1 days ago
- Docs: https://doc.arvados.org/
- Main PID: 1150 (Keepproxy)
- CGroup: /system.slice/keepproxy.service
- └─1150 /usr/bin/keepproxy
+<pre><code>
+$ <span class="userinput">arv keep_service accessible</span>
[...]
</code></pre>
</notextile>
-h3(#runit). Start the service (option 2: runit)
-
-Install runit to supervise the Keep-web daemon. {% include 'install_runit' %}
-
-h3. Testing keepproxy
-
-Log into a host that is on an external network from your private Arvados network. The host should be able to contact your keepproxy server (eg keep.$uuid_prefix.arvadosapi.com), but not your keepstore servers (eg keep[0-9].$uuid_prefix.arvadosapi.com).
+If keepproxy does not show up in the "accessible" list, and you are accessing it from within the private network, check that you have "properly configured the @geo@ block for the API server":install-api-server.html#update-nginx .
Install the "Python SDK":{{site.baseurl}}/sdk/python/sdk-python.html
-@ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ must be set in the environment.
+You should now be able to use @arv-put@ to upload collections and @arv-get@ to fetch collections. Be sure to execute this from _outside_ the cluster's private network.
-You should now be able to use @arv-put@ to upload collections and @arv-get@ to fetch collections, for an example see "Testing keep.":install-keepstore.html#testing on the keepstore install page.
+{% include 'arv_put_example' %}
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
+# "Introduction":#introduction
+# "Update config.yml":#update-config
+# "Install keepstore package":#install-packages
+# "Restart the API server and controller":#restart-api
+# "Confirm working installation":#confirm-working
+# "Note on storage management":#note
+
+h2(#introduction). Introduction
+
Keepstore provides access to underlying storage for reading and writing content-addressed blocks, with enforcement of Arvados permissions. Keepstore supports a variety of cloud object storage and POSIX filesystems for its backing store.
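Content addressing means a block's identity is derived from its data. As a simplified sketch (not Arvados client code), an unsigned Keep block locator is the MD5 hex digest of the block contents plus its size in bytes; real locators handed out by the API additionally carry signed permission hints:

```python
import hashlib

def block_locator(data: bytes) -> str:
    """Compute the unsigned Keep-style locator for a data block:
    MD5 hex digest of the content, plus "+", plus the size in bytes."""
    return f"{hashlib.md5(data).hexdigest()}+{len(data)}"

print(block_locator(b"foo"))  # acbd18db4cc2f85cedef654fccc4a4d8+3
```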
h3. Plan your storage layout
<div class="offset1">
table(table table-bordered table-condensed).
-|_Hostname_|
-|keep0.@uuid_prefix@.your.domain|
-|keep1.@uuid_prefix@.your.domain|
+|_. Hostname|
+|@keep0.ClusterID.example.com@|
+|@keep1.ClusterID.example.com@|
</div>
Keepstore servers should not be directly accessible from the Internet (they are accessed via "keepproxy":install-keepproxy.html), so the hostnames only need to resolve on the private network.
-h2. Install Keepstore
-
-On Debian-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install keepstore</span>
-</code></pre>
-</notextile>
-
-On Red Hat-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install keepstore</span>
-</code></pre>
-</notextile>
-
-Verify that Keepstore is functional:
-
-<notextile>
-<pre><code>~$ <span class="userinput">keepstore --version</span>
-</code></pre>
-</notextile>
+h2(#update-config). Update cluster config
-h3. Create a superuser token
+h3. Configure storage volumes
-If you haven't already done so, create a superuser token.
+Fill in the @Volumes@ section of @config.yml@ for each storage volume. Available storage volume types include POSIX filesystems and cloud object storage. It is possible to have different volume types in the same cluster.
-{% include 'create_superuser_token' %}
+* To use a POSIX filesystem, including both local filesystems (ext4, xfs) and network file systems such as GPFS or Lustre, follow the setup instructions on "Filesystem storage":configure-fs-storage.html
+* If you are using S3-compatible object storage (including Amazon S3, Google Cloud Storage, and Ceph RADOS), follow the setup instructions on "S3 Object Storage":configure-s3-object-storage.html
+* If you are using Azure Blob Storage, follow the setup instructions on "Azure Blob Storage":configure-azure-blob-storage.html
-h3. Update cluster config file
+h3. List services
-Add or update the following sections of @/etc/arvados/config.yml@ as needed. Refer to the examples and comments in the "default config.yml file":{{site.baseurl}}/admin/config.html for more information.
+Add each keepstore server to the @Services.Keepstore@ section of @/etc/arvados/config.yml@ .
<notextile>
-<pre><code>Clusters:
- <span class="userinput">uuid_prefix</span>:
- SystemRootToken: zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
- Services:
+<pre><code> Services:
Keepstore:
+ # No ExternalURL because they are only accessed by the internal subnet.
InternalURLs:
- "http://<span class="userinput">keep0.uuid_prefix.example.com</span>:25107/": {}
- API:
- MaxKeepBlobBuffers: 128
+ "http://<span class="userinput">keep0.ClusterID.example.com</span>:25107/": {}
+ "http://<span class="userinput">keep1.ClusterID.example.com</span>:25107/": {}
+ # and so forth
</code></pre>
</notextile>
-h3. Note on storage management
-
-On its own, a keepstore server never deletes data. Instead, the keep-balance service determines which blocks are candidates for deletion and instructs the keepstore to move those blocks to the trash. Please see the "Balancing Keep servers":{{site.baseurl}}/admin/keep-balance.html for more details.
-
-h3. Configure storage volumes
+{% assign arvados_component = 'keepstore' %}
-Available storage volume types include POSIX filesystems and cloud object storage.
+{% include 'install_packages' %}
-* To use a POSIX filesystem, including both local filesystems (ext4, xfs) and network file system such as GPFS or Lustre, follow the setup instructions on "Filesystem storage":configure-fs-storage.html
-* If you are using S3-compatible object storage (including Amazon S3, Google Cloud Storage, and Ceph RADOS), follow the setup instructions on "S3 Object Storage":configure-s3-object-storage.html
-* If you are using Azure Blob Storage, follow the setup instructions on "Azure Blob Storage":configure-azure-blob-storage.html
+{% include 'start_service' %}
-h2. Run keepstore as a supervised service
+{% include 'restart_api' %}
-h3. Start the service (option 1: systemd)
+h2(#confirm-working). Confirm working installation
-If your system does not use systemd, skip this section and follow the "runit instructions":#runit instead.
+Log into a host that is on your private Arvados network. The host should be able to contact your keepstore servers (eg @keep[0-9].ClusterID.example.com@).
-If your system uses systemd, the keepstore service should already be set up. Restart it to read the updated configuration, and check its status:
+@ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ must be set in the environment.
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart keepstore</span>
-~$ <span class="userinput">sudo systemctl status keepstore</span>
-● keepstore.service - Arvados Keep Storage Daemon
- Loaded: loaded (/etc/systemd/system/keepstore.service; enabled; vendor preset: enabled)
- Active: active (running) since Tue 2019-09-10 14:16:29 UTC; 1s ago
- Docs: https://doc.arvados.org/
- Main PID: 25465 (keepstore)
- Tasks: 9 (limit: 4915)
- CGroup: /system.slice/keepstore.service
- └─25465 /usr/bin/keepstore
-[...]
-</code></pre>
-</notextile>
+@ARVADOS_API_HOST@ should be the hostname of the API server.
-h3(#runit). Start the service (option 2: runit)
+@ARVADOS_API_TOKEN@ should be the system root token.
-Install runit to supervise the keepstore daemon. {% include 'install_runit' %}
+Install the "Command line SDK":{{site.baseurl}}/sdk/cli/install.html
-Install this script as the run script @/etc/sv/keepstore/run@ for the keepstore service:
+Check that the keepstore server is in the @keep_service@ "accessible" list:
<notextile>
-<pre><code>#!/bin/sh
-
-exec 2>&1
-GOGC=10 exec keepstore
+<pre><code>
+$ <span class="userinput">arv keep_service accessible</span>
+[...]
</code></pre>
</notextile>
-h2. Set up additional servers
-
-Repeat the above sections to prepare volumes and bring up supervised services on each Keepstore server you are setting up.
+If keepstore does not show up in the "accessible" list, and you are accessing it from within the private network, check that you have "properly configured the @geo@ block for the API server":install-api-server.html#update-nginx .
-h2. Restart the API server and controller
+Next, install the "Python SDK":{{site.baseurl}}/sdk/python/sdk-python.html
-After adding all of your keepstore servers to the Services section, make sure the cluster config file is up to date on the API server host, and restart the API server and controller processes to ensure the changes are applied.
+You should now be able to use @arv-put@ to upload collections and @arv-get@ to fetch collections. Be sure to execute this from _inside_ the cluster's private network. You will be able to access keep from _outside_ the private network after setting up "keepproxy":install-keepproxy.html .
-<pre>
-sudo systemctl restart nginx arvados-controller
-</pre>
+{% include 'arv_put_example' %}
-h2(#testing). Testing keep
-
-Install the "Python SDK":{{site.baseurl}}/sdk/python/sdk-python.html
-
-@ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ must be set in the environment.
+h2(#note). Note on storage management
-You should now be able to use @arv-put@ to upload collections and @arv-get@ to fetch collections:
-
-<pre>
-$ echo "hello world!" > hello.txt
-
-$ arv-put --portable-data-hash hello.txt
-2018-07-12 13:35:25 arvados.arv_put[28702] INFO: Creating new cache file at /home/example/.cache/arvados/arv-put/1571ec0adb397c6a18d5c74cc95b3a2a
-0M / 0M 100.0% 2018-07-12 13:35:27 arvados.arv_put[28702] INFO:
-
-2018-07-12 13:35:27 arvados.arv_put[28702] INFO: Collection saved as 'Saved at 2018-07-12 17:35:25 UTC by example@example'
-59389a8f9ee9d399be35462a0f92541c+53
-
-$ arv-get 59389a8f9ee9d399be35462a0f92541c+53/hello.txt
-hello world!
-</pre>
+On its own, a keepstore server never deletes data. Instead, the keep-balance service determines which blocks are candidates for deletion and instructs the keepstore to move those blocks to the trash. Please see the "Balancing Keep servers":{{site.baseurl}}/admin/keep-balance.html for more details.
---
layout: default
navsection: installguide
-title: Prerequisites
+title: Planning and prerequisites
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-h2. Supported Cloud and HPC platforms
+Before attempting installation, you should begin by reviewing supported platforms, choosing backends for identity, storage, and scheduling, and deciding how you will distribute Arvados services onto machines. You should also choose an Arvados Cluster ID, choose your hostnames, and acquire TLS certificates. It may be helpful to make notes as you go along using one of these worksheets: "New cluster checklist for AWS":new_cluster_checklist_AWS.xlsx - "New cluster checklist for Azure":new_cluster_checklist_Azure.xlsx - "New cluster checklist for on premise SLURM":new_cluster_checklist_slurm.xlsx
-Arvados can run in a variety of configurations. For compute scheduling, Arvados supports HPC clusters using @slurm@, and supports elastic cloud computing on AWS, Google and Azure. For storage, Arvados can store blocks on regular file systems such as ext4 or xfs, on network file systems such as GPFS, or object storage such as Azure blob storage, Amazon S3, and other object storage that supports the S3 API including Google Cloud Storage and Ceph.
+The Arvados storage subsystem is called "keep". The compute subsystem is called "crunch".
-h2. Hardware (or virtual machines)
+# "Supported GNU/Linux distributions":#supportedlinux
+# "Choosing which components to install":#components
+# "Identity provider":#identity
+# "Storage backend (Keep)":#storage
+# "Container compute scheduler (Crunch)":#scheduler
+# "Hardware or virtual machines":#machines
+# "Arvados Cluster ID":#clusterid
+# "DNS and TLS":#dnstls
-This guide assumes you have seven systems available in the same network subnet:
-
-<div class="offset1">
-table(table table-bordered table-condensed).
-|_. Function|_. Number of nodes|
-|Arvados API, Crunch dispatcher, Git, Websockets and Workbench|1|
-|Arvados Compute node|1|
-|Arvados Keepproxy and Keep-web server|1|
-|Arvados Keepstore servers|2|
-|Arvados Shell server|1|
-|Arvados SSO server|1|
-</div>
-
-The number of Keepstore, shell and compute nodes listed above is a minimum. In a real production installation, you will likely run many more of each of those types of nodes. In such a scenario, you would probably also want to dedicate a node to the Workbench server and Crunch dispatcher, respectively. For performance reasons, you may want to run the database server on a separate node as well.
-
-h2. Supported GNU/Linux distributions
+h2(#supportedlinux). Supported GNU/Linux distributions
table(table table-bordered table-condensed).
|_. Distribution|_. State|_. Last supported version|
|CentOS 7|Supported|Latest|
+|Debian 10 ("buster")|Supported|Latest|
|Debian 9 ("stretch")|Supported|Latest|
-|Ubuntu 16.04 ("xenial")|Supported|Latest|
|Ubuntu 18.04 ("bionic")|Supported|Latest|
-|Ubuntu 14.04 ("trusty")|EOL|5f943cd451acfbdcddd84e791738c3aa5926bfed (2019-07-10)|
-|Debian 8 ("jessie")|EOL|5f943cd451acfbdcddd84e791738c3aa5926bfed (2019-07-10)|
+|Ubuntu 16.04 ("xenial")|Supported|Latest|
+|Ubuntu 14.04 ("trusty")|EOL|1.4.3|
+|Debian 8 ("jessie")|EOL|1.4.3|
|Ubuntu 12.04 ("precise")|EOL|8ed7b6dd5d4df93a3f37096afe6d6f81c2a7ef6e (2017-05-03)|
|Debian 7 ("wheezy")|EOL|997479d1408139e96ecdb42a60b4f727f814f6c9 (2016-12-28)|
|CentOS 6 |EOL|997479d1408139e96ecdb42a60b4f727f814f6c9 (2016-12-28)|
Arvados packages are published for current Debian releases (until the EOL date), current Ubuntu LTS releases (until the end of standard support), and the latest version of CentOS.
-h2(#repos). Arvados package repositories
-
-On any host where you install Arvados software, you'll need to set up an Arvados package repository. They're available for several popular distributions.
-
-h3. CentOS
+h2(#components). Choosing which components to install
-Packages are available for CentOS 7. To install them with yum, save this configuration block in @/etc/yum.repos.d/arvados.repo@:
+Arvados consists of many components, some of which may be omitted (at the cost of reduced functionality). It may also be helpful to review the "Arvados Architecture":{{site.baseurl}}/architecture to understand how these components interact.
-<notextile>
-<pre><code>[arvados]
-name=Arvados
-baseurl=http://rpm.arvados.org/CentOS/$releasever/os/$basearch/
-gpgcheck=1
-gpgkey=http://rpm.arvados.org/CentOS/RPM-GPG-KEY-curoverse
-</code></pre>
-</notextile>
+table(table table-bordered table-condensed).
+|\3=. *Core*|
+|"Postgres database":install-postgresql.html |Stores data for the API server.|Required.|
+|"API server":install-api-server.html |Core Arvados logic for managing users, groups, collections, containers, and enforcing permissions.|Required.|
+|\3=. *Keep (storage)*|
+|"Keepstore":install-keepstore.html |Stores content-addressed blocks in a variety of backends (local filesystem, cloud object storage).|Required.|
+|"Keepproxy":install-keepproxy.html |Gateway service to access keep servers from external networks.|Required to be able to use arv-put, arv-get, or arv-mount outside the private Arvados network.|
+|"Keep-web":install-keep-web.html |Gateway service providing read/write HTTP and WebDAV support on top of Keep.|Required to access files from Workbench.|
+|"Keep-balance":install-keep-balance.html |Storage cluster maintenance daemon responsible for moving blocks to their optimal server location, adjusting block replication levels, and trashing unreferenced blocks.|Required to free deleted data from underlying storage, and to ensure proper replication and block distribution (including support for storage classes).|
+|\3=. *User interface*|
+|"Single Sign On server":install-sso.html |Web based login to Workbench.|Depends on identity provider. Not required for Google. Required for LDAP or standalone database.|
+|"Workbench":install-workbench-app.html, "Workbench2":install-workbench2-app.html |Primary graphical user interface for working with file collections and running containers.|Optional. Depends on API server, SSO server, keep-web, websockets server.|
+|"Workflow Composer":install-composer.html |Graphical user interface for editing Common Workflow Language workflows.|Optional. Depends on git server (arv-git-httpd).|
+|\3=. *Additional services*|
+|"Websockets server":install-ws.html |Event distribution server.|Required to view streaming container logs in Workbench.|
+|"Shell server":install-shell-server.html |Synchronize (create/delete/configure) Unix shell accounts with Arvados users.|Optional.|
+|"Git server":install-arv-git-httpd.html |Arvados-hosted git repositories, with Arvados-token based authentication.|Optional, but required by Workflow Composer.|
+|\3=. *Crunch (running containers)*|
+|"crunch-dispatch-slurm":crunch2-slurm/install-prerequisites.html |Run analysis workflows using Docker containers distributed across a SLURM cluster.|Optional if you wish to use Arvados for data management only.|
+|"Node Manager":install-nodemanager.html, "arvados-dispatch-cloud":install-dispatch-cloud.html |Allocate and free cloud VM instances on demand based on workload.|Optional, not needed for a static SLURM cluster (such as on-premise HPC).|
-{% include 'install_redhat_key' %}
+h2(#identity). Identity provider
-h3. Debian and Ubuntu
+Choose which backend you will use to authenticate users.
-Packages are available for Debian 9 ("stretch"), Ubuntu 16.04 ("xenial") and Ubuntu 18.04 ("bionic").
+* Google login to authenticate users with a Google account. Note: if you only use this identity provider, login can be handled by @arvados-controller@ (recommended), and you do not need to install the Arvados Single Sign-On server (SSO).
+* LDAP login to authenticate users using the LDAP protocol, supported by many services such as OpenLDAP and Active Directory. Supports username/password authentication.
+* Standalone SSO server user database. Supports username/password authentication. Supports new user sign-up.
-First, register the Curoverse signing key in apt's database:
+h2(#storage). Storage backend
-{% include 'install_debian_key' %}
+Choose which backend you will use for storing and retrieving content-addressed Keep blocks.
-Configure apt to retrieve packages from the Arvados package repository. This command depends on your OS vendor and version:
+* File system storage, such as ext4 or XFS, or a network file system such as GPFS or Lustre
+* Amazon S3, or other object storage that supports the S3 API, including Google Cloud Storage and Ceph.
+* Azure blob storage
-table(table table-bordered table-condensed).
-|_. OS version|_. Command|
-|Debian 9 ("stretch")|<notextile><code><span class="userinput">echo "deb http://apt.arvados.org/ stretch main" | sudo tee /etc/apt/sources.list.d/arvados.list</span></code></notextile>|
-|Ubuntu 16.04 ("xenial")[1]|<notextile><code><span class="userinput">echo "deb http://apt.arvados.org/ xenial main" | sudo tee /etc/apt/sources.list.d/arvados.list</span></code></notextile>|
-|Ubuntu 18.04 ("bionic")[1]|<notextile><code><span class="userinput">echo "deb http://apt.arvados.org/ bionic main" | sudo tee /etc/apt/sources.list.d/arvados.list</span></code></notextile>|
+You should also determine the desired replication factor for your data. A replication factor of 1 means only a single copy of a given data block is kept. With a conventional file system backend and a replication factor of 1, a hard drive failure is likely to lose data. For this reason the default replication factor is 2 (two copies are kept).
-{% include 'notebox_begin' %}
+A backend may have its own replication factor (such as the durability guarantees of cloud storage buckets), and Arvados will take this into account when writing a new data block.
-fn1. Arvados packages for Ubuntu may depend on third-party packages in Ubuntu's "universe" repository. If you're installing on Ubuntu, make sure you have the universe sources uncommented in @/etc/apt/sources.list@.
+h2(#scheduler). Container compute scheduler
-{% include 'notebox_end' %}
+Choose which backend you will use to schedule computation.
-Retrieve the package list:
+* On AWS EC2 and Azure, you probably want to use @arvados-dispatch-cloud@ to manage the full lifecycle of cloud compute nodes: starting up nodes sized to the container request, executing containers on those nodes, and shutting nodes down when no longer needed.
+* For on-premise HPC clusters using "slurm":https://slurm.schedmd.com/ use @crunch-dispatch-slurm@ to execute containers with slurm job submissions.
+* For single node demos, use @crunch-dispatch-local@ to execute containers directly.
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get update</span>
-</code></pre>
-</notextile>
+h2(#machines). Hardware (or virtual machines)
-h2. A unique identifier
+Choose how to allocate Arvados services to machines. We recommend that each machine start with a clean installation of a supported GNU/Linux distribution.
-Each Arvados installation should have a globally unique identifier, which is a unique 5-character lowercase alphanumeric string. For testing purposes, here is one way to make a random 5-character string:
+For a production installation, this is a reasonable starting point:
-<notextile>
-<pre><code>~$ <span class="userinput">tr -dc 0-9a-z </dev/urandom | head -c5; echo</span>
-</code></pre>
-</notextile>
-
-You may also use a different method to pick the unique identifier. The unique identifier will be part of the hostname of the services in your Arvados cluster. The rest of this documentation will refer to it as your @uuid_prefix@.
-
-
-h2. SSL certificates
+<div class="offset1">
+table(table table-bordered table-condensed).
+|_. Function|_. Number of nodes|_. Recommended specs|
+|Postgres database, Arvados API server, Arvados controller, Git, Websockets, Container dispatcher|1|16+ GiB RAM, 4+ cores, fast disk for database|
+|Single Sign-On (SSO) server ^1^|1|2 GiB RAM|
+|Workbench, Keepproxy, Keep-web, Keep-balance|1|8 GiB RAM, 2+ cores|
+|Keepstore servers ^2^|2+|4 GiB RAM|
+|Compute worker nodes ^2^|0+ |Depends on workload; scaled dynamically in the cloud|
+|User shell nodes ^3^|0+|Depends on workload|
+</div>
-There are six public-facing services that require an SSL certificate. If you do not have official SSL certificates, you can use self-signed certificates.
+^1^ May be omitted when using Google login support in @arvados-controller@
+^2^ Should be scaled up as needed
+^3^ Refers to shell nodes, managed by Arvados, that provide SSH access for users to interact with Arvados at the command line. Optional.
{% include 'notebox_begin' %}
+For a small demo installation, it is possible to run all the Arvados services on a single node. Special considerations for single-node installs will be noted in boxes like this.
+{% include 'notebox_end' %}
-Most Arvados clients and services will accept self-signed certificates when the @ARVADOS_API_HOST_INSECURE@ environment variable is set to @true@. However, web browsers generally do not make it easy for users to accept self-signed certificates from Web sites.
+h2(#clusterid). Arvados Cluster ID
-Users who log in through Workbench will visit at least three sites: the SSO server, the API server, and Workbench itself. When a browser visits each of these sites, it will warn the user if the site uses a self-signed certificate, and the user must accept it before continuing. This procedure usually only needs to be done once in a browser.
+Each Arvados installation should have a cluster identifier, which is a unique 5-character lowercase alphanumeric string. Here is one way to make a random 5-character string:
-After that's done, Workbench includes JavaScript clients for other Arvados services. Users are usually not warned if these client connections are refused because the server uses a self-signed certificate, and it is especially difficult to accept those cerficiates:
+<notextile>
+<pre><code>~$ <span class="userinput">tr -dc 0-9a-z </dev/urandom | head -c5; echo</span>
+</code></pre>
+</notextile>
-* JavaScript connects to the Websockets server to provide incremental page updates and view logs from running jobs.
-* JavaScript connects to the API and Keepproxy servers to upload local files to collections.
-* JavaScript connects to the Keep-web server to download log files.
+You may also use a different method to pick the cluster identifier. The cluster identifier will be part of the hostname of the services in your Arvados cluster. The rest of this documentation will refer to it as your @ClusterID@. Whenever @ClusterID@ appears in a configuration example, replace it with your five-character cluster identifier.
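If you generate or choose an identifier by hand, a quick sanity check of the format can save trouble later, since the identifier ends up embedded in hostnames and configuration. This is a sketch only; the @ClusterID@ variable name is illustrative:

```shell
# Generate a candidate cluster identifier: 5 lowercase alphanumeric characters.
ClusterID="$(tr -dc 0-9a-z </dev/urandom | head -c5)"

# Validate the format before using it in DNS names and configuration.
case "$ClusterID" in
  [0-9a-z][0-9a-z][0-9a-z][0-9a-z][0-9a-z]) echo "ok: $ClusterID" ;;
  *) echo "invalid: $ClusterID" >&2; exit 1 ;;
esac
```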
-In sum, Workbench will be much less pleasant to use in a cluster that uses self-signed certificates. You should avoid using self-signed certificates unless you plan to deploy a cluster without Workbench; you are deploying only to evaluate Arvados as an individual system administrator; or you can push configuration to users' browsers to trust your self-signed certificates.
+h2(#dnstls). DNS entries and TLS certificates
-{% include 'notebox_end' %}
+The following services are normally public-facing and require DNS entries and corresponding TLS certificates. Get certificates from your preferred TLS certificate provider. We recommend using "Let's Encrypt":https://letsencrypt.org/. You can run several services on the same node, but each distinct hostname requires its own TLS certificate.
-By convention, we use the following hostname pattern:
+This guide uses the following hostname conventions. A later part of this guide will describe how to set up Nginx virtual hosts.
<div class="offset1">
table(table table-bordered table-condensed).
|_. Function|_. Hostname|
-|Arvados API|@uuid_prefix@.your.domain|
-|Arvados Git server|git.@uuid_prefix@.your.domain|
-|Arvados Keepproxy server|keep.@uuid_prefix@.your.domain|
-|Arvados Keep-web server|download.@uuid_prefix@.your.domain
+|Arvados API|@ClusterID.example.com@|
+|Arvados Git server|git.@ClusterID.example.com@|
+|Arvados Websockets endpoint|ws.@ClusterID.example.com@|
+|Arvados SSO Server|@auth.example.com@|
+|Arvados Workbench|workbench.@ClusterID.example.com@|
+|Arvados Workbench 2|workbench2.@ClusterID.example.com@|
+|Arvados Keepproxy server|keep.@ClusterID.example.com@|
+|Arvados Keep-web server|download.@ClusterID.example.com@
_and_
-*.collections.@uuid_prefix@.your.domain or
-*<notextile>--</notextile>collections.@uuid_prefix@.your.domain or
-collections.@uuid_prefix@.your.domain (see the "keep-web install docs":install-keep-web.html)|
-|Arvados SSO Server|auth.your.domain|
-|Arvados Websockets endpoint|ws.@uuid_prefix@.your.domain|
-|Arvados Workbench|workbench.@uuid_prefix@.your.domain|
+*.collections.@ClusterID.example.com@ or
+*<notextile>--</notextile>collections.@ClusterID.example.com@ or
+collections.@ClusterID.example.com@ (see the "keep-web install docs":install-keep-web.html)|
</div>
+
+{% include 'notebox_begin' %}
+It is also possible to create your own certificate authority, issue server certificates, and install a custom root certificate in the browser. This is out of scope for this guide.
+{% include 'notebox_end' %}
---
layout: default
navsection: installguide
-title: Set up PostgreSQL databases
+title: Install PostgreSQL 9.4+
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-Two Arvados Rails servers store data in a PostgreSQL database: the SSO server, and the API server. The API server requires at least version *9.4* of PostgreSQL. Beyond that, you have the flexibility to deploy PostgreSQL any way that the Rails servers will be able to connect to it. Our recommended deployment strategy is:
+Arvados requires at least version *9.4* of PostgreSQL.
-* Install PostgreSQL on the same host as the SSO server, and dedicate that install to hosting the SSO database. This provides the best security for the SSO server, because the database does not have to accept any client connections over the network. Typical load on the SSO server is light enough that deploying both it and its database on the same host does not compromise performance.
-* If you want to provide the most scalability for your Arvados cluster, install PostgreSQL for the API server on a dedicated host. This gives you the most flexibility to avoid resource contention, and tune performance separately for the API server and its database. If performance is less of a concern for your installation, you can install PostgreSQL on the API server host directly, as with the SSO server.
-
-Find the section for your distribution below, and follow it to install PostgreSQL on each host where you will deploy it. Then follow the steps in the later section(s) to set up PostgreSQL for the Arvados service(s) that need it.
-
-It is important to make sure that autovacuum is enabled for the PostgreSQL database that backs the API server. Autovacuum is enabled by default since PostgreSQL 8.3.
-
-h2. Install PostgreSQL 9.4+
-
-The API server requires at least version *9.4* of PostgreSQL.
+* "CentOS 7":#centos7
+* "Debian or Ubuntu":#debian
h3(#centos7). CentOS 7
{% assign rh_version = "7" %}
{% include 'note_python_sc' %}
-# Install PostgreSQL:
- <notextile><pre>~$ <span class="userinput">sudo yum install rh-postgresql95 rh-postgresql95-postgresql-contrib</span>
+# Install PostgreSQL
+ <notextile><pre># <span class="userinput">yum install rh-postgresql95 rh-postgresql95-postgresql-contrib</span>
~$ <span class="userinput">scl enable rh-postgresql95 bash</span></pre></notextile>
-# Initialize the database:
- <notextile><pre>~$ <span class="userinput">sudo postgresql-setup initdb</span></pre></notextile>
-# Configure the database to accept password connections:
- <notextile><pre><code>~$ <span class="userinput">sudo sed -ri -e 's/^(host +all +all +(127\.0\.0\.1\/32|::1\/128) +)ident$/\1md5/' /var/lib/pgsql/data/pg_hba.conf</span></code></pre></notextile>
-# Configure the database to launch at boot:
- <notextile><pre>~$ <span class="userinput">sudo systemctl enable rh-postgresql95-postgresql</span></pre></notextile>
-# Start the database:
- <notextile><pre>~$ <span class="userinput">sudo systemctl start rh-postgresql95-postgresql</span></pre></notextile>
-# "Set up Arvados credentials and databases":#rails_setup for the services that will use this PostgreSQL install.
+# Initialize the database
+ <notextile><pre># <span class="userinput">postgresql-setup initdb</span></pre></notextile>
+# Configure the database to accept password connections
+ <notextile><pre><code># <span class="userinput">sed -ri -e 's/^(host +all +all +(127\.0\.0\.1\/32|::1\/128) +)ident$/\1md5/' /var/lib/pgsql/data/pg_hba.conf</span></code></pre></notextile>
+# Configure the database to launch at boot and start now
+ <notextile><pre># <span class="userinput">systemctl enable --now rh-postgresql95-postgresql</span></pre></notextile>
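The @sed@ command above switches the loopback entries in @pg_hba.conf@ from @ident@ to @md5@ (password) authentication. If you want to preview its effect before touching the real file, you can run the same expression against a sample line; the @/tmp@ path here is purely for illustration:

```shell
# Create a sample line matching a stock pg_hba.conf loopback entry.
printf 'host    all    all    127.0.0.1/32    ident\n' > /tmp/pg_hba.sample

# Same substitution as above: change the auth method from ident to md5.
sed -ri -e 's/^(host +all +all +(127\.0\.0\.1\/32|::1\/128) +)ident$/\1md5/' /tmp/pg_hba.sample

cat /tmp/pg_hba.sample
# prints: host    all    all    127.0.0.1/32    md5
```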
h3(#debian). Debian or Ubuntu
Ubuntu 14.04 (Trusty) requires an updated PostgreSQL version, see "the PostgreSQL ubuntu repository":https://www.postgresql.org/download/linux/ubuntu/
-# Install PostgreSQL:
- <notextile><pre>~$ <span class="userinput">sudo apt-get install postgresql postgresql-contrib</span></pre></notextile>
-# "Set up Arvados credentials and databases":#rails_setup for the services that will use this PostgreSQL install.
-
-<a name="rails_setup"></a>
-
-h2(#sso). Set up SSO server credentials and database
-
-{% assign service_role = "arvados_sso" %}
-{% assign service_database = "arvados_sso_production" %}
-{% assign use_contrib = false %}
-{% include 'install_postgres_database' %}
-
-h2(#api). Set up API server credentials and database
-
-{% assign service_role = "arvados" %}
-{% assign service_database = "arvados_production" %}
-{% assign use_contrib = true %}
-{% include 'install_postgres_database' %}
+# Install PostgreSQL
+ <notextile><pre># <span class="userinput">apt-get --no-install-recommends install postgresql postgresql-contrib</span></pre></notextile>
+# Configure the database to launch at boot and start now
+ <notextile><pre># <span class="userinput">systemctl enable --now postgresql</span></pre></notextile>
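Because Arvados requires PostgreSQL 9.4 or newer and distribution packages vary, it is worth confirming the installed version. A rough sketch of the comparison logic; the @version@ string below is a stand-in for real output from @psql --version@:

```shell
# Illustrative version string; on a real host use:
#   version="$(psql --version | awk '{print $3}')"
version="9.5.21"

major="${version%%.*}"
minor_rest="${version#*.}"
minor="${minor_rest%%.*}"

# PostgreSQL 9.4 is the minimum supported release.
if [ "$major" -gt 9 ] || { [ "$major" -eq 9 ] && [ "$minor" -ge 4 ]; }; then
  echo "PostgreSQL $version is new enough"
else
  echo "PostgreSQL $version is too old for Arvados" >&2
fi
```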
---
layout: default
navsection: installguide
-title: Install a shell server
+title: Set up a shell node
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-There is nothing inherently special about an Arvados shell server. It is just a GNU/Linux machine with Arvados utilites and SDKs installed. For optimal performance, the Arvados shell server should be on the same LAN as the Arvados cluster, but that is not required.
+# "Introduction":#introduction
+# "Install Dependencies and SDKs":#dependencies
+# "Install git and curl":#install-packages
+# "Update Git Config":#config-git
+# "Create record for VM":#vm-record
+# "Create scoped token":#scoped-token
+# "Install arvados-login-sync":#arvados-login-sync
+# "Confirm working installation":#confirm-working
-h2. Install API tokens
+h2(#introduction). Introduction
-Please follow the "API token guide":../user/reference/api-tokens.html to get API tokens for your Arvados account and install them on your shell server. We will use those tokens to test the SDKs as we install them.
+Arvados support for shell nodes allows you to use Arvados permissions to grant Linux shell accounts to users.
-h2. Install the Ruby SDK and utilities
+A shell node runs the @arvados-login-sync@ service, and has some additional configuration to make it convenient for users to use Arvados utilities and SDKs. Users are allowed to log in and run arbitrary programs. For optimal performance, the Arvados shell server should be on the same LAN as the Arvados cluster.
-First, install the curl development libraries necessary to build the Arvados Ruby SDK. On Debian-based systems:
+Because @config.yml@ _contains secrets_, shell nodes should *not* have a copy of the complete @config.yml@. Also note that if users have access to the @docker@ daemon, it is trivial to gain *root* access to any file on the system. Users sharing a shell node should be implicitly trusted, or not given access to Docker. In more secure environments, the admin should allocate a separate VM for each user.
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install libcurl4-openssl-dev</span>
-</code></pre>
-</notextile>
-
-On Red Hat-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install libcurl-devel</span>
-</code></pre>
-</notextile>
-
-Next, install the arvados-cli Ruby gem. If you're using RVM:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo /usr/local/rvm/bin/rvm-exec default gem install arvados-cli</span>
-</code></pre>
-</notextile>
-
-If you're not using RVM:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo -i gem install arvados-cli</span>
-</code></pre>
-</notextile>
-
-h2. Install the Python SDK and utilities
+h2(#dependencies). Install Dependencies and SDKs
-{% assign rh_version = "7" %}
-{% include 'note_python_sc' %}
+# "Install Ruby and Bundler":ruby.html
+# "Install the Python SDK":{{site.baseurl}}/sdk/python/sdk-python.html
+# "Install the FUSE driver":{{site.baseurl}}/sdk/python/arvados-fuse.html
+# "Install the CLI":{{site.baseurl}}/sdk/cli/install.html
+# "Install the R SDK":{{site.baseurl}}/sdk/R/index.html (optional)
+# "Install Docker":install-docker.html (optional)
-On Red Hat-based systems:
+{% assign arvados_component = 'git curl' %}
-<notextile>
-<pre><code>~$ <span class="userinput">echo 'exclude=python2-llfuse' | sudo tee -a /etc/yum.conf</span>
-~$ <span class="userinput">sudo yum install python-arvados-python-client python-arvados-fuse crunchrunner</span>
-</code></pre>
-</notextile>
+{% include 'install_packages' %}
-On Debian-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install python-arvados-python-client python-arvados-fuse crunchrunner</span>
-</code></pre>
-</notextile>
-
-h2. Install Git and curl
-
-{% include 'install_git_curl' %}
-
-h2. Update Git Config
+h2(#config-git). Update Git Config
Configure git to use the ARVADOS_API_TOKEN environment variable to authenticate to arv-git-httpd. We use the @--system@ flag so it takes effect for all current and future user accounts. It does not affect git's behavior when connecting to other git servers.
<notextile>
<pre>
-<code>~$ <span class="userinput">sudo git config --system 'credential.https://git.<b>uuid_prefix.your.domain</b>/.username' none</span></code>
-<code>~$ <span class="userinput">sudo git config --system 'credential.https://git.<b>uuid_prefix.your.domain</b>/.helper' '!cred(){ cat >/dev/null; if [ "$1" = get ]; then echo password=$ARVADOS_API_TOKEN; fi; };cred'</span></code>
+<code># <span class="userinput">git config --system 'credential.https://git.<b>ClusterID.example.com</b>/.username' none</span></code>
+<code># <span class="userinput">git config --system 'credential.https://git.<b>ClusterID.example.com</b>/.helper' '!cred(){ cat >/dev/null; if [ "$1" = get ]; then echo password=$ARVADOS_API_TOKEN; fi; };cred'</span></code>
</pre>
</notextile>
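To understand what the helper configured above does: git speaks a simple key=value protocol on the helper's stdin and expects credentials on stdout. The following sketch reproduces the helper function and invokes it the way git would; @exampletoken@ and the hostname are placeholders, not real values:

```shell
# Placeholder token for demonstration only; never hard-code a real Arvados token.
export ARVADOS_API_TOKEN=exampletoken

# The same helper function as in the git config above: discard git's request
# body, and on a "get" operation emit the token as the password.
cred(){ cat >/dev/null; if [ "$1" = get ]; then echo "password=$ARVADOS_API_TOKEN"; fi; }

# Simulate git asking for credentials.
printf 'protocol=https\nhost=git.ClusterID.example.com\n' | cred get
# prints: password=exampletoken
```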
-h2. Install arvados-login-sync
+h2(#vm-record). Create record for VM
This program makes it possible for Arvados users to log in to the shell server -- subject to permissions assigned by the Arvados administrator -- using the SSH keys they upload to Workbench. It sets up login accounts, updates group membership, and adds users' public keys to the appropriate @authorized_keys@ files.
</pre>
</notextile>
-Create a token that is allowed to read login information for this VM.
+h2(#scoped-token). Create scoped token
+
+As an Arvados admin user (such as the system root user), create a "scoped token":{{site.baseurl}}/admin/scoped-tokens.html that permits only reading login information for this VM. Setting a scope on the token means that even though a user with root access on the shell node can read the token, the token cannot be used for admin actions on Arvados.
<notextile>
<pre>
Note the UUID and the API token output by the above commands: you will need them in a minute.
-Install the arvados-login-sync program.
-
-If you're using RVM:
-
-<notextile>
-<pre>
-<code>shellserver:~$ <span class="userinput">sudo -i `which rvm-exec` default gem install arvados-login-sync</span></code>
-</pre>
-</notextile>
+h2(#arvados-login-sync). Install arvados-login-sync
-If you're not using RVM:
+Install the arvados-login-sync program from RubyGems.
<notextile>
<pre>
-<code>shellserver:~$ <span class="userinput">sudo -i gem install arvados-login-sync</span></code>
+<code>shellserver:# <span class="userinput">gem install arvados-login-sync</span></code>
</pre>
</notextile>
-Install cron.
-
-On Red Hat-based distributions:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install cronie</span>
-~$ <span class="userinput">sudo systemctl enable crond</span>
-~$ <span class="userinput">sudo systemctl start crond</span>
-</code></pre>
-</notextile>
-
-On Debian-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install cron</span>
-</code></pre>
-</notextile>
-
Configure cron to run the @arvados-login-sync@ program every 2 minutes.
-If you're using RVM:
-
<notextile>
<pre>
-<code>shellserver:~$ <span class="userinput">sudo bash -c 'umask 077; tee /etc/cron.d/arvados-login-sync' <<'EOF'
-ARVADOS_API_HOST="<strong>uuid_prefix.your.domain</strong>"
-ARVADOS_API_TOKEN="<strong>the_token_you_created_above</strong>"
-ARVADOS_VIRTUAL_MACHINE_UUID="<strong>zzzzz-2x53u-zzzzzzzzzzzzzzz</strong>"
-*/2 * * * * root /usr/local/rvm/bin/rvm-exec default arvados-login-sync
-EOF</span></code>
-</pre>
-</notextile>
-
-If you're not using RVM:
-
-<notextile>
-<pre>
-<code>shellserver:~$ <span class="userinput">sudo bash -c 'umask 077; tee /etc/cron.d/arvados-login-sync' <<'EOF'
-ARVADOS_API_HOST="<strong>uuid_prefix.your.domain</strong>"
+<code>shellserver:# <span class="userinput">umask 077; tee /etc/cron.d/arvados-login-sync <<EOF
+ARVADOS_API_HOST="<strong>ClusterID.example.com</strong>"
ARVADOS_API_TOKEN="<strong>the_token_you_created_above</strong>"
ARVADOS_VIRTUAL_MACHINE_UUID="<strong>zzzzz-2x53u-zzzzzzzzzzzzzzz</strong>"
*/2 * * * * root arvados-login-sync
EOF</span></code>
</pre>
</notextile>
+h2(#confirm-working). Confirm working installation
+
A user should be able to log in to the shell server when the following conditions are satisfied:
-* The user has uploaded an SSH public key: Workbench → Account menu → "SSH keys" item → "Add new SSH key" button.
-* As an admin user, you have given the user permission to log in: Workbench → Admin menu → "Users" item → "Show" button → "Admin" tab → "Setup shell account" button.
-* Two minutes have elapsed since the above conditions were satisfied, and the cron job has had a chance to run.
+
+# The user has uploaded an SSH public key: Workbench → Account menu → "SSH keys" item → "Add new SSH key" button.
+# As an admin user, you have given the user permission to log in using the Workbench → Admin menu → "Users" item → "Show" button → "Admin" tab → "Setup account" button.
+# The cron job has run.
+
+See also "how to add a VM login permission link at the command line":../admin/user-management-cli.html.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-h2(#dependencies). Install prerequisites
+{% include 'notebox_begin_warning' %}
+Skip this section if you are using Google login via @arvados-controller@.
+{% include 'notebox_end' %}
-The Arvados package repository includes an SSO server package that can help automate much of the deployment.
+# "Install dependencies":#dependencies
+# "Set up database":#database-setup
+# "Update config.yml":#update-config
+# "Configure the SSO server":#create-application-yml
+# "Update Nginx configuration":#update-nginx
+# "Install arvados-sso-server":#install-packages
+# "Create arvados-server client record":#client
+# "Restart the API server and controller":#restart-api
-h3(#install_ruby_and_bundler). Install Ruby and Bundler
+h2(#dependencies). Install dependencies
-{% include 'install_ruby_and_bundler_sso' %}
+# "Install PostgreSQL":install-postgresql.html
+# "Install Ruby and Bundler":ruby.html Important: the Single Sign-On server only supports Ruby 2.3. To avoid version conflicts, we recommend installing it on a different server from the API server. When installing Ruby, ensure that you get the right version by installing the "ruby2.3" package, or by using RVM with @--ruby=2.3@.
+# "Install nginx":nginx.html
+# "Install Phusion Passenger":https://www.phusionpassenger.com/library/walkthroughs/deploy/ruby/ownserver/nginx/oss/install_passenger_main.html
-h3(#install_web_server). Set up a Web server
+h2(#database-setup). Set up the database
-For best performance, we recommend you use Nginx as your Web server frontend with a Passenger backend to serve the SSO server. The Passenger team provides "Nginx + Passenger installation instructions":https://www.phusionpassenger.com/library/walkthroughs/deploy/ruby/ownserver/nginx/oss/install_passenger_main.html.
+{% assign service_role = "arvados_sso" %}
+{% assign service_database = "arvados_sso_production" %}
+{% assign use_contrib = false %}
+{% include 'install_postgres_database' %}
-Follow the instructions until you see the section that says you are ready to deploy your Ruby application on the production server.
+Now create @/etc/arvados/sso/database.yml@:
-h2(#install). Install the SSO server
+<pre>
+production:
+ adapter: postgresql
+ encoding: utf8
+ database: arvados_sso_production
+ username: arvados_sso
+ password: $password
+ host: localhost
+ template: template0
+</pre>
-On a Debian-based system, install the following package:
+h2(#update-config). Update config.yml
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install arvados-sso-server</span>
-</code></pre>
-</notextile>
+<pre>
+ Services:
+ SSO:
+ ExternalURL: auth.ClusterID.example.com
+ Login:
+ ProviderAppID: "arvados-server"
+ ProviderAppSecret: $app_secret
+</pre>
-On a Red Hat-based system, install the following package:
+Generate @ProviderAppSecret@:
<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install arvados-sso-server</span>
-</code></pre>
-</notextile>
-
-h2(#configure). Configure the SSO server
+<pre><code>~$ <span class="userinput">ruby -e 'puts rand(2**400).to_s(36)'</span>
+zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
+</code></pre></notextile>
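If Ruby is not installed at this point, any comparably long random string works as the secret. As a sketch, assuming @openssl@ is available:

```shell
# 32 random bytes rendered as 64 hex characters; comparable entropy to the
# Ruby one-liner above.
app_secret="$(openssl rand -hex 32)"
echo "$app_secret"
```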
-The package has installed three configuration files in @/etc/arvados/sso@:
+h2(#create-application-yml). Configure the SSO server
-<notextile>
-<pre><code>/etc/arvados/sso/application.yml
-/etc/arvados/sso/database.yml
-/etc/arvados/sso/production.rb
-</code></pre>
-</notextile>
-
-The SSO server runs from the @/var/www/arvados-sso/current/@ directory. The files @/var/www/arvados-sso/current/config/application.yml@, @/var/www/arvados-sso/current/config/database.yml@ and @/var/www/arvados-sso/current/config/environments/production.rb@ are symlinked to the configuration files in @/etc/arvados/sso/@.
+The SSO server runs from the @/var/www/arvados-sso/current/@ directory. The files @/var/www/arvados-sso/current/config/application.yml@ and @/var/www/arvados-sso/current/config/database.yml@ will be symlinked to the configuration files in @/etc/arvados/sso/@.
The SSO server reads the @config/application.yml@ file, as well as the @config/application.defaults.yml@ file. Values in @config/application.yml@ take precedence over the defaults that are defined in @config/application.defaults.yml@. The @config/application.yml.example@ file is not read by the SSO server and is provided for installation convenience only.
Consult @config/application.default.yml@ for a full list of configuration options. Local configuration goes in @/etc/arvados/sso/application.yml@, do not edit @config/application.default.yml@.
-h3(#uuid_prefix). uuid_prefix
+Create @/etc/arvados/sso/application.yml@ and add these keys:
-Generate a uuid prefix for the single sign on service. This prefix is used to identify user records as originating from this site. It must be exactly 5 lowercase ASCII letters and/or digits. You may use the following snippet to generate a uuid prefix:
+<pre>
+production:
+ uuid_prefix: xxxxx
+ secret_token: zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
+</pre>
-<notextile>
-<pre><code>~$ <span class="userinput">ruby -e 'puts "#{rand(2**64).to_s(36)[0,5]}"'</span>
-abcde
-</code></pre></notextile>
+h3(#uuid_prefix). uuid_prefix
-Edit @/etc/arvados/sso/application.yml@ and set @uuid_prefix@ in the "common" section.
+Most of the time, you want this to be the same as your @ClusterID@. If not, generate a new 5-character lowercase alphanumeric string, for example with @tr -dc 0-9a-z &lt;/dev/urandom | head -c5@.
h3(#secret_token). secret_token
zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
</code></pre></notextile>
-Edit @/etc/arvados/sso/application.yml@ and set @secret_token@ in the "common" section.
-
-There are other configuration options in @/etc/arvados/sso/application.yml@. See the "Authentication methods":install-sso.html#authentication_methods section below for more details.
-
-h2(#database). Set up the database
-
-Configure the SSO server to connect to your database by updating @/etc/arvados/sso/database.yml@. Replace the @xxxxxxxx@ database password placeholder with the "password you generated during database setup":install-postgresql.html#sso. Be sure to update the @production@ section.
-
-<notextile>
-<pre><code>~$ <span class="userinput">editor /etc/arvados/sso/database.yml</span>
-</code></pre></notextile>
-
-h2(#reconfigure_package). Reconfigure the package
-
-{% assign railspkg = "arvados-sso-server" %}
-{% include 'install_rails_reconfigure' %}
+h3(#authentication_methods). Authentication methods
-h2(#client). Create arvados-server client
+Authentication methods are configured in @application.yml@. Currently three authentication methods are supported: local accounts, LDAP, and Google. If neither Google nor LDAP are enabled, the SSO server defaults to local user accounts. Only one authentication mechanism should be in use at a time. Choose your authentication method and add the listed configuration items to the @production@ section.
-{% assign railshost = "" %}
-{% assign railsdir = "/var/www/arvados-sso/current" %}
-Use @rails console@ to create a @Client@ record that will be used by the Arvados API server. {% include 'install_rails_command' %}
-
-Enter the following commands at the console. The values that appear after you assign @app_id@ and @app_secret@ correspond to the values for @sso_app_id@ and @sso_app_secret@, respectively, in the "API server's SSO settings":install-api-server.html#omniauth.
-
-<notextile>
-<pre><code>:001 > <span class="userinput">c = Client.new</span>
-:002 > <span class="userinput">c.name = "joshid"</span>
-:003 > <span class="userinput">c.app_id = "arvados-server"</span>
-:004 > <span class="userinput">c.app_secret = rand(2**400).to_s(36)</span>
-=> "<strong>save this string for your API server's sso_app_secret</strong>"
-:005 > <span class="userinput">c.save!</span>
-:006 > <span class="userinput">quit</span>
-</code></pre>
-</notextile>
-
-h2(#configure_web_server). Configure your web server
-
-Edit the http section of your Nginx configuration to run the Passenger server and act as a frontend for it. You might add a block like the following, adding SSL and logging parameters to taste:
-
-<notextile>
-<pre><code>server {
- listen 127.0.0.1:8900;
- server_name localhost-sso;
-
- root /var/www/arvados-sso/current/public;
- index index.html;
-
- passenger_enabled on;
- # If you're not using RVM, comment out the line below.
- passenger_ruby /usr/local/rvm/wrappers/default/ruby;
-}
-
-upstream sso {
- server 127.0.0.1:8900 fail_timeout=10s;
-}
-
-proxy_http_version 1.1;
-
-server {
- listen <span class="userinput">[your public IP address]</span>:443 ssl;
- server_name auth.<span class="userinput">your.domain</span>;
-
- ssl on;
- ssl_certificate <span class="userinput">/YOUR/PATH/TO/cert.pem</span>;
- ssl_certificate_key <span class="userinput">/YOUR/PATH/TO/cert.key</span>;
-
- index index.html;
-
- location / {
- proxy_pass http://sso;
- proxy_redirect off;
- proxy_connect_timeout 90s;
- proxy_read_timeout 300s;
-
- proxy_set_header X-Forwarded-Proto https;
- proxy_set_header Host $http_host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- }
-}
-</code></pre>
-</notextile>
-
-Finally, restart Nginx and your Arvados SSO server should be up and running. You can verify that by visiting the URL you configured your Nginx web server to listen on in the server section above (port 443). Read on if you want to configure your Arvados SSO server to use a different authentication backend.
-
-h2(#authentication_methods). Authentication methods
-
-Authentication methods are configured in @application.yml@. Currently three authentication methods are supported: local accounts, LDAP, and Google+. If neither Google+ nor LDAP are enabled, the SSO server defaults to local user accounts. Only one authentication mechanism should be in use at a time.
-
-h3(#local_accounts). Local account authentication
+h4(#local_accounts). Local account authentication
There are two configuration options for local accounts:
</code></pre>
</notextile>
-h3(#ldap). LDAP authentication
+h4(#ldap). LDAP authentication
The following options are available to configure LDAP authentication. Note that you must preserve the indentation of the fields listed under @use_ldap@.
|bind_dn|If required by server, username to log in with before performing directory lookup|
|password|If required by server, password to log in with before performing directory lookup|
-h3(#google). Google+ authentication
+h4(#google). Google authentication
-In order to use Google+ authentication, you must use the <a href="https://console.developers.google.com" target="_blank">Google Developers Console</a> to create a set of client credentials.
+First, visit "Setting up Google auth":google-auth.html.
-# Go to the <a href="https://console.developers.google.com" target="_blank">Google Developers Console</a> and select or create a project; this will take you to the project page.
-# On the sidebar, click on *APIs & auth* then select *APIs*.
-## Search for *Contacts API* and click on *Enable API*.
-## Search for *Google+ API* and click on *Enable API*.
-# On the sidebar, click on *Credentials*; under *OAuth* click on *Create new Client ID* to bring up the *Create Client ID* dialog box.
-# Under *Application type* select *Web application*.
-# If the authorization origins are not displayed, clicking on *Create Client ID* will take you to *Consent screen* settings.
-## On consent screen settings, enter the appropriate details and click on *Save*.
-## This will return you to the *Create Client ID* dialog box.
-# You must set the authorization origins. Edit @auth.your.domain@ to the appropriate hostname that you will use to access the SSO service:
-## JavaScript origin should be @https://auth.your.domain/@
-## Redirect URI should be @https://auth.your.domain/users/auth/google_oauth2/callback@
-# Copy the values of *Client ID* and *Client secret* from the Google Developers Console into the Google section of @config/application.yml@, like this:
+Next, copy the values of *Client ID* and *Client secret* from the Google Developers Console into the Google section of @config/application.yml@, like this:
<notextile>
<pre><code> # Google API tokens required for OAuth2 login.
 google_oauth2_client_id: <span class="userinput">"---YOUR---CLIENT---ID---HERE---"</span>
 google_oauth2_client_secret: <span class="userinput">"---YOUR---CLIENT---SECRET---HERE---"</span></code></pre></notextile>
+
+h2(#update-nginx). Update nginx configuration
+
+Use a text editor to create a new file @/etc/nginx/conf.d/arvados-sso.conf@ with the following configuration. Options that need attention are marked in <span class="userinput">red</span>.
+
+<notextile>
+<pre><code>server {
+ listen <span class="userinput">auth.ClusterID.example.com</span>:443 ssl;
+ server_name <span class="userinput">auth.ClusterID.example.com</span>;
+
+ ssl on;
+ ssl_certificate <span class="userinput">/YOUR/PATH/TO/cert.pem</span>;
+ ssl_certificate_key <span class="userinput">/YOUR/PATH/TO/cert.key</span>;
+
+ root /var/www/arvados-sso/current/public;
+ index index.html;
+
+ passenger_enabled on;
+
+ # <span class="userinput">If you are using RVM, uncomment the line below.</span>
+ # <span class="userinput">If you're using system ruby, leave it commented out.</span>
+ #passenger_ruby /usr/local/rvm/wrappers/default/ruby;
+}
+</code></pre>
+</notextile>
+
+h2(#install-packages). Install arvados-sso-server package
+
+h3. CentOS 7
+
+<notextile>
+<pre><code># <span class="userinput">yum install arvados-sso-server</span>
+</code></pre>
+</notextile>
+
+h3. Debian and Ubuntu
+
+<notextile>
+<pre><code># <span class="userinput">apt-get --no-install-recommends install arvados-sso-server</span>
+</code></pre>
+</notextile>
+
+h2(#client). Create arvados-server client record
+
+{% assign railshost = "" %}
+{% assign railsdir = "/var/www/arvados-sso/current" %}
+Use @rails console@ to create a @Client@ record that will be used by the Arvados API server. {% include 'install_rails_command' %}
+
+Enter the following commands at the console. The values that appear after you assign @app_id@ and @app_secret@ will be copied to @Login.ProviderAppID@ and @Login.ProviderAppSecret@ in @config.yml@.
+
+<notextile>
+<pre><code>:001 > <span class="userinput">c = Client.new</span>
+:002 > <span class="userinput">c.name = "joshid"</span>
+:003 > <span class="userinput">c.app_id = "arvados-server"</span>
+:004 > <span class="userinput">c.app_secret = "the value of Login.ProviderAppSecret"</span>
+:005 > <span class="userinput">c.save!</span>
+:006 > <span class="userinput">quit</span>
+</code></pre>
+</notextile>
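If you have not yet chosen a value for @Login.ProviderAppSecret@, any sufficiently long random string will do. As a sketch (the exact command below is a suggestion, not part of the Arvados tooling), you could generate one with OpenSSL:

```shell
# Illustrative only: generate a random 64-character hex string that can
# serve as Login.ProviderAppSecret (and therefore as app_secret above).
SECRET=$(openssl rand -hex 32)
echo "$SECRET"
```

Use the same string for @Login.ProviderAppSecret@ in @config.yml@ and for the @c.app_secret@ assignment in the console session above.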
+
+h2(#restart-api). Restart the API server and controller
+
+After adding the SSO server to the Services section, make sure the cluster config file is up to date on the API server host, and restart the API server and controller processes to ensure the changes are applied.
+
+<notextile>
+<pre><code># <span class="userinput">systemctl restart nginx arvados-controller</span>
+</code></pre>
+</notextile>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-h2. Install prerequisites
+# "Install dependencies":#dependencies
+# "Update config.yml":#update-config
+# "Update Nginx configuration":#update-nginx
+# "Trusted client flag":#trusted_client
+# "Install arvados-workbench":#install-packages
+# "Restart the API server and controller":#restart-api
+# "Confirm working installation":#confirm-working
-The Arvados package repository includes a Workbench server package that can help automate much of the deployment.
+h2(#dependencies). Install dependencies
-h3(#install_ruby_and_bundler). Install Ruby and Bundler
+# "Install Ruby and Bundler":ruby.html
+# "Install nginx":nginx.html
+# "Install Phusion Passenger":https://www.phusionpassenger.com/library/walkthroughs/deploy/ruby/ownserver/nginx/oss/install_passenger_main.html
-{% include 'install_ruby_and_bundler' %}
+h2(#update-config). Update config.yml
-h2(#install_workbench). Install Workbench and dependencies
-
-Workbench doesn't need its own database, so it does not need to have PostgreSQL installed.
-
-{% assign rh_version = "7" %}
-{% include 'note_python_sc' %}
-
-On a Debian-based system, install the following packages:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install bison build-essential graphviz git python-arvados-python-client arvados-workbench</span>
-</code></pre>
-</notextile>
-
-On a Red Hat-based system, install the following packages:
+Edit @config.yml@ to set the keys below. The full set of configuration options is in the "Workbench section of config.yml":{{site.baseurl}}/admin/config.html
<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install bison make automake gcc gcc-c++ graphviz git python-arvados-python-client arvados-workbench</span>
+<pre><code> Services:
+ Workbench1:
+ ExternalURL: <span class="userinput">"https://workbench.ClusterID.example.com"</span>
+ Workbench:
+ SecretKeyBase: <span class="userinput">aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa</span>
+ Users:
+ AutoAdminFirstUser: true
</code></pre>
</notextile>
-h2(#configure). Configure Workbench
-
-Edit @/etc/arvados/config.yml@ to set the keys below. Only the most important configuration options are listed here. The full set of configuration options are in the "Workbench section of config.yml":{{site.baseurl}}/admin/config.html
-
-h3. Workbench.SecretKeyBase
-
This application needs a secret token. Generate a new secret:
<notextile>
Then put that value in the @Workbench.SecretKeyBase@ field.
-<notextile>
-<pre><code>Cluster:
- zzzzz:
- Workbench:
- SecretKeyBase: <span class="userinput">aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa</span>
-</code></pre>
-</notextile>
+You probably want to enable @Users.AutoAdminFirstUser@. The first user to log in when no other admin user exists will automatically be made an admin.
-h3. Services.Controller.ExternalURL
+h2(#update-nginx). Update nginx configuration
-Ensure that @Services.Controller.ExternalURL@ is configured for "Arvados Controller":install-controller.html . For example like this:
+Use a text editor to create a new file @/etc/nginx/conf.d/arvados-workbench.conf@ with the following configuration. Options that need attention are marked in <span class="userinput">red</span>.
<notextile>
-<pre><code>Cluster:
- zzzzz:
- Services:
- Controller:
- ExternalURL: <span class="userinput">https://prefix_uuid.your.domain</span>
-</code></pre>
-</notextile>
-
-h3. Workbench.SiteName
-
-@Workbench.SiteName@ can be set to any arbitrary string. It is used to identify this Workbench to people visiting it.
-
-
-<notextile>
-<pre><code>Cluster:
- zzzzz:
- Workbench:
- SiteName: <span class="userinput">My Arvados</span>
-</code></pre>
-</notextile>
-
-h3. TLS.Insecure
-
-For testing only. Allows use of self-signed certificates. If true, workbench will not verify the TLS certificate of Arvados Controller.
-
-<notextile>
-<pre><code>Cluster:
- zzzzz:
- TLS:
- Insecure: <span class="userinput">false</span>
-</code></pre>
-</notextile>
-
-h2. Configure Piwik (optional)
-
-Piwik can be used to gather usage analytics. In @/var/www/arvados-workbench/current/config@, copy @piwik.yml.example@ to @piwik.yml@ and edit to suit.
-
-h2. Set up Web server
-
-For best performance, we recommend you use Nginx as your Web server front-end, with a Passenger backend to serve Workbench. To do that:
-
-<notextile>
-<ol>
-<li><a href="https://www.phusionpassenger.com/library/walkthroughs/deploy/ruby/ownserver/nginx/oss/install_passenger_main.html">Install Nginx and Phusion Passenger</a>.</li>
-
-<li><p>Edit the http section of your Nginx configuration to run the Passenger server, and act as a front-end for it. You might add a block like the following, adding SSL and logging parameters to taste:</p>
-
<pre><code>server {
- listen 127.0.0.1:9000;
- server_name localhost-workbench;
-
- root /var/www/arvados-workbench/current/public;
- index index.html index.htm index.php;
-
- passenger_enabled on;
- # If you're using RVM, uncomment the line below.
- #passenger_ruby /usr/local/rvm/wrappers/default/ruby;
-
- # `client_max_body_size` should match the corresponding setting in
- # the API.MaxRequestSize and Controller's server's Nginx configuration.
- client_max_body_size 128m;
-}
-
-upstream workbench {
- server 127.0.0.1:9000 fail_timeout=10s;
+ listen 80;
+ server_name workbench.<span class="userinput">ClusterID.example.com</span>;
+ return 301 https://workbench.<span class="userinput">ClusterID.example.com</span>$request_uri;
}
-proxy_http_version 1.1;
-
server {
- listen <span class="userinput">[your public IP address]</span>:443 ssl;
- server_name workbench.<span class="userinput">uuid-prefix.your.domain</span>;
+ listen *:443 ssl;
+ server_name workbench.<span class="userinput">ClusterID.example.com</span>;
ssl on;
ssl_certificate <span class="userinput">/YOUR/PATH/TO/cert.pem</span>;
ssl_certificate_key <span class="userinput">/YOUR/PATH/TO/cert.key</span>;
- index index.html index.htm index.php;
+ root /var/www/arvados-workbench/current/public;
+ index index.html;
+
+ passenger_enabled on;
+ # If you're using RVM, uncomment the line below.
+ #passenger_ruby /usr/local/rvm/wrappers/default/ruby;
+
# `client_max_body_size` should match the corresponding setting in
  # the API.MaxRequestSize setting and the Controller server's Nginx configuration.
client_max_body_size 128m;
-
- location / {
- proxy_pass http://workbench;
- proxy_redirect off;
- proxy_connect_timeout 90s;
- proxy_read_timeout 300s;
-
- proxy_set_header X-Forwarded-Proto https;
- proxy_set_header Host $http_host;
- proxy_set_header X-Real-IP $remote_addr;
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- }
}
</code></pre>
-</li>
-
-<li>Restart Nginx.</li>
-
-</ol>
-</notextile>
-
-h2. Prepare the Workbench deployment
-
-{% assign railspkg = "arvados-workbench" %}
-{% include 'install_rails_reconfigure' %}
-
-{% include 'notebox_begin' %}
-You can safely ignore the following error message you may see when Ruby Gems are installed:
-<notextile>
-<pre><code>themes_for_rails at /usr/local/rvm/gems/ruby-2.1.1/bundler/gems/themes_for_rails-1fd2d7897d75 did not have a valid gemspec.
-This prevents bundler from installing bins or native extensions, but that may not affect its functionality.
-The validation message from Rubygems was:
- duplicate dependency on rails (= 3.0.11, development), (>= 3.0.0) use:
- add_runtime_dependency 'rails', '= 3.0.11', '>= 3.0.0'
-Using themes_for_rails (0.5.1) from https://github.com/holtkampw/themes_for_rails (at 1fd2d78)
-</code></pre>
</notextile>
-{% include 'notebox_end' %}
-
-h2. Trusted client setting
-Log in to Workbench once to ensure that the Arvados API server has a record of the Workbench client. (It's OK if Workbench says your account hasn't been activated yet. We'll deal with that next.)
+h2(#trusted_client). Trusted client flag
In the <strong>API server</strong> project root, start the Rails console. {% include 'install_rails_command' %}
-At the console, enter the following commands to locate the ApiClient record for your Workbench installation (typically, while you're setting this up, the @last@ one in the database is the one you want), then set the @is_trusted@ flag for the appropriate client record:
+Create an ApiClient record for your Workbench installation with the @is_trusted@ flag set.
-<notextile><pre><code>irb(main):001:0> <span class="userinput">wb = ApiClient.all.last; [wb.url_prefix, wb.created_at]</span>
-=> ["https://workbench.example.com/", Sat, 19 Apr 2014 03:35:12 UTC +00:00]
-irb(main):002:0> <span class="userinput">include CurrentApiClient</span>
-=> true
-irb(main):003:0> <span class="userinput">act_as_system_user do wb.update_attributes!(is_trusted: true) end</span>
+<notextile><pre><code>irb(main):001:0> <span class="userinput">include CurrentApiClient</span>
=> true
+irb(main):002:0> <span class="userinput">act_as_system_user do ApiClient.create!(url_prefix: "https://workbench.ClusterID.example.com/", is_trusted: true) end</span>
+=> #<ApiClient id: 2, uuid: "...", owner_uuid: "...", modified_by_client_uuid: nil, modified_by_user_uuid: "...", modified_at: "2019-12-16 14:19:10", name: nil, url_prefix: "https://workbench.ClusterID.example.com/", created_at: "2019-12-16 14:19:10", updated_at: "2019-12-16 14:19:10", is_trusted: true>
</code></pre>
</notextile>
-h2(#admin-user). Add an admin user
+{% assign arvados_component = 'arvados-workbench' %}
-Next, we're going to use the Rails console on the <strong>API server</strong> to activate your account and give yourself admin privileges. {% include 'install_rails_command' %}
+{% include 'install_packages' %}
-Enter the following commands at the console:
+{% include 'restart_api' %}
-<notextile>
-<pre><code>irb(main):001:0> <span class="userinput">Thread.current[:user] = User.all.select(&:identity_url).last</span>
-irb(main):002:0> <span class="userinput">Thread.current[:user].update_attributes is_admin: true, is_active: true</span>
-irb(main):003:0> <span class="userinput">User.where(is_admin: true).collect &:email</span>
-=> ["root", "<b>your_address@example.com</b>"]
-</code></pre></notextile>
+h2(#confirm-working). Confirm working installation
-At this point, you should have a working Workbench login with administrator privileges. Revisit your Workbench URL in a browser and reload the page to access it.
+Visit @https://workbench.ClusterID.example.com@ in a browser. You should be able to log in using the login method you configured in the previous step. If @Users.AutoAdminFirstUser@ is true, you will be an admin user.
---
layout: default
navsection: installguide
-title: Install Workbench2 (beta)
+title: Install Workbench 2
...
{% comment %}
Copyright (C) The Arvados Authors. All rights reserved.
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
+# "Update config.yml":#update-config
+# "Update Nginx configuration":#update-nginx
+# "Install arvados-workbench2":#install-packages
+# "Restart the API server and controller":#restart-api
+# "Confirm working installation":#confirm-working
+# "Trusted client flag":#trusted_client
+
Workbench2 is the web-based user interface for Arvados.
{% include 'notebox_begin' %}
-Workbench2 is the replacement for Arvados Workbench. Workbench2 is currently in <i>beta</i>, it is not yet feature complete.
+Workbench2 is the replacement for Arvados Workbench. Workbench2 is suitable for day-to-day use, but does not yet implement every feature of the traditional Workbench.
{% include 'notebox_end' %}
-h2(#install_workbench). Install Workbench2 and dependencies
-
-Workbench2 does not require its own database. It is a set of html, javascript and css files that are served as static files from a web server like Nginx or Apache2.
+h2(#update-config). Update config.yml
-On a Debian-based system, install the following package:
+Edit @config.yml@ to set the keys below. The full set of configuration options is in the "Workbench section of config.yml":{{site.baseurl}}/admin/config.html
<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install arvados-workbench2</span>
+<pre><code> Services:
+ Workbench2:
+ ExternalURL: <span class="userinput">"https://workbench2.ClusterID.example.com"</span>
</code></pre>
</notextile>
-On a Red Hat-based system, install the following package:
+h2(#update-nginx). Update Nginx configuration
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install arvados-workbench2</span>
-</code></pre>
-</notextile>
+Workbench2 does not require its own database. It is a set of HTML, JavaScript, and CSS files that are served as static files from Nginx.
-h2. Set up Web server
-
-For best performance, we recommend you use Nginx as your Web server to serve Workbench2. Workbench2 consists entirely of static files. To do that:
+Use a text editor to create a new file @/etc/nginx/conf.d/arvados-workbench2.conf@ with the following configuration. Options that need attention are marked in <span class="userinput">red</span>.
<notextile>
-<ol>
-<li>Install Nginx</li>
-
-<li><p>Edit the http section of your Nginx configuration to serve Workbench2's files. You might add a block like the following, adding SSL and logging parameters to taste:</p>
-
<pre><code>server {
- listen <span class="userinput">[your public IP address]</span>:443 ssl;
- server_name workbench2.<span class="userinput">uuid-prefix.your.domain</span>;
+ listen 80;
+ server_name workbench2.<span class="userinput">ClusterID.example.com</span>;
+ return 301 https://workbench2.<span class="userinput">ClusterID.example.com</span>$request_uri;
+}
+
+server {
+ listen *:443 ssl;
+ server_name workbench2.<span class="userinput">ClusterID.example.com</span>;
ssl on;
ssl_certificate <span class="userinput">/YOUR/PATH/TO/cert.pem</span>;
index index.html;
- # Workbench2 uses a call to /config.json to bootstrap itself and talk to the desired API server
+ # <span class="userinput">Workbench2 uses a call to /config.json to bootstrap itself</span>
+ # <span class="userinput">and find out where to contact the API server.</span>
location /config.json {
- return 200 '{ "API_HOST": "<span class="userinput">uuid-prefix.your.domain</span>" }';
+ return 200 '{ "API_HOST": "<span class="userinput">ClusterID.example.com</span>" }';
}
location / {
}
}
</code></pre>
-</li>
+</notextile>
-<li>Restart Nginx.</li>
+h2. Vocabulary configuration (optional)
-</ol>
-</notextile>
+Workbench2 can load a vocabulary file which lists available metadata properties for groups and collections. To configure the property vocabulary definition, please visit the "Workbench2 Vocabulary Format":{{site.baseurl}}/admin/workbench2-vocabulary.html page in the Admin section.
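The vocabulary file itself is JSON. The fragment below is a hypothetical illustration of its general shape (tag keys mapping to human-readable labels and allowed values); consult the linked page for the authoritative schema and key names:

```json
{
  "strict_tags": false,
  "tags": {
    "IDTAGANIMALS": {
      "strict": false,
      "labels": [ {"label": "Animal"}, {"label": "Creature"} ],
      "values": {
        "IDVALANIMAL1": { "labels": [ {"label": "Human"} ] },
        "IDVALANIMAL2": { "labels": [ {"label": "Dog"} ] }
      }
    }
  }
}
```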
+
+{% assign arvados_component = 'arvados-workbench2' %}
+
+{% include 'install_packages' %}
+
+{% include 'restart_api' %}
+
+h2(#confirm-working). Confirm working installation
+
+Visit @https://workbench2.ClusterID.example.com@ in a browser. You should be able to log in using the login method you configured in the previous step. If @Users.AutoAdminFirstUser@ is true, you will be an admin user.
-h2. Trusted client setting
+h2(#trusted_client). Trusted client flag
-Log in to Workbench2 once to ensure that the Arvados API server has a record of the Workbench2 client.
+Log in to Workbench2 once to ensure that the Arvados API server has a record of the Workbench2 client. (It's OK if Workbench2 says your account hasn't been activated yet.)
In the <strong>API server</strong> project root, start the Rails console. {% include 'install_rails_command' %}
-At the console, enter the following commands to locate the ApiClient record for your Workbench2 installation (typically, while you're setting this up, the @last@ one in the database is the one you want), then set the @is_trusted@ flag for the appropriate client record:
+At the console, enter the following commands to locate the ApiClient record for your Workbench2 installation (typically, while you're setting this up, the @last@ one in the database is the one you want), then set the @is_trusted@ flag for the appropriate client record:
<notextile><pre><code>irb(main):001:0> <span class="userinput">wb = ApiClient.all.last; [wb.url_prefix, wb.created_at]</span>
-=> ["https://workbench2.<span class="userinput">uuid_prefix.your.domain</span>/", Sat, 20 Apr 2019 01:23:45 UTC +00:00]
+=> ["https://workbench2.<span class="userinput">ClusterID.example.com</span>/", Sat, 20 Apr 2019 01:23:45 UTC +00:00]
irb(main):002:0> <span class="userinput">include CurrentApiClient</span>
=> true
irb(main):003:0> <span class="userinput">act_as_system_user do wb.update_attributes!(is_trusted: true) end</span>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-The arvados-ws server provides event notifications to websocket clients. It can be installed anywhere with access to Postgres database and the Arvados API server, typically behind a web proxy that provides SSL support. See the "godoc page":http://godoc.org/github.com/curoverse/arvados/services/ws for additional information.
+The arvados-ws server provides event notifications to websocket clients. It can be installed anywhere with access to Postgres database and the Arvados API server, typically behind a web proxy that provides SSL support. See the "godoc page":http://godoc.org/github.com/arvados/arvados/services/ws for additional information.
-By convention, we use the following hostname for the websocket service.
+# "Update config.yml":#update-config
+# "Update nginx configuration":#update-nginx
+# "Install arvados-ws package":#install-packages
+# "Start the service":#start-service
+# "Restart the API server and controller":#restart-api
+# "Confirm working installation":#confirm-working
-<notextile>
-<pre><code>ws.<span class="userinput">uuid_prefix.your.domain</span></code></pre>
-</notextile>
-
-The above hostname should resolve from anywhere on the internet.
-
-h2. Install arvados-ws
-
-Typically arvados-ws runs on the same host as the API server.
-
-On Debian-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install arvados-ws</span>
-</code></pre>
-</notextile>
-
-On Red Hat-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install arvados-ws</span>
-</code></pre>
-</notextile>
+h2(#update-config). Update config.yml
-Verify that @arvados-ws@ is functional:
+Edit the cluster config at @config.yml@ and set @Services.Websocket.ExternalURL@ and @Services.Websocket.InternalURLs@. Replace @ClusterID@ with your cluster id.
<notextile>
-<pre><code>~$ <span class="userinput">arvados-ws -h</span>
-Usage of arvados-ws:
- -config path
- path to config file (default "/etc/arvados/config.yml")
- -dump-config
- show current configuration and exit
-</code></pre>
-</notextile>
-
-h3. Update cluster config
-
-Edit the cluster config at @/etc/arvados/config.yml@ and set @Services.Websocket.ExternalURL@ and @Services.Websocket.InternalURLs@. Replace @zzzzz@ with your cluster id.
-
-<notextile>
-<pre><code>Clusters:
- zzzzz:
- Services:
- <span class="userinput">Websocket:
- ExternalURL: wss://ws.uuid_prefix.your.domain/websocket
+<pre><code> Services:
+ Websocket:
InternalURLs:
- "http://localhost:9003": {}
+      <span class="userinput">"http://localhost:8005"</span>: {}
+ ExternalURL: <span class="userinput">wss://ws.ClusterID.example.com/websocket</span>
</code></pre>
</notextile>
-h3. Start the service (option 1: systemd)
-
-If your system does not use systemd, skip this section and follow the "runit instructions":#runit instead.
-
-If your system uses systemd, the arvados-ws service should already be set up. Start it and check its status:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo systemctl restart arvados-ws</span>
-~$ <span class="userinput">sudo systemctl status arvados-ws</span>
-● arvados-ws.service - Arvados websocket server
- Loaded: loaded (/lib/systemd/system/arvados-ws.service; enabled)
- Active: active (running) since Tue 2016-12-06 11:20:48 EST; 10s ago
- Docs: https://doc.arvados.org/
- Main PID: 9421 (arvados-ws)
- CGroup: /system.slice/arvados-ws.service
- └─9421 /usr/bin/arvados-ws
-
-Dec 06 11:20:48 zzzzz arvados-ws[9421]: {"level":"info","msg":"started","time":"2016-12-06T11:20:48.207617188-05:00"}
-Dec 06 11:20:48 zzzzz arvados-ws[9421]: {"Listen":":9003","level":"info","msg":"listening","time":"2016-12-06T11:20:48.244956506-05:00"}
-Dec 06 11:20:48 zzzzz systemd[1]: Started Arvados websocket server.
-</code></pre>
-</notextile>
-
-If it is not running, use @journalctl@ to check logs for errors:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo journalctl -n10 -u arvados-ws</span>
-...
-Dec 06 11:12:48 zzzzz systemd[1]: Starting Arvados websocket server...
-Dec 06 11:12:48 zzzzz arvados-ws[8918]: {"level":"info","msg":"started","time":"2016-12-06T11:12:48.030496636-05:00"}
-Dec 06 11:12:48 zzzzz arvados-ws[8918]: {"error":"pq: password authentication failed for user \"arvados\"","level":"fatal","msg":"db.Ping failed","time":"2016-12-06T11:12:48.058206400-05:00"}
-</code></pre>
-</notextile>
-
-Skip ahead to "confirm the service is working":#confirm.
-
-h3(#runit). Start the service (option 2: runit)
-
-Install runit to supervise the arvados-ws daemon. {% include 'install_runit' %}
-
-Create a supervised service.
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo mkdir /etc/service/arvados-ws</span>
-~$ <span class="userinput">cd /etc/service/arvados-ws</span>
-~$ <span class="userinput">sudo mkdir log log/main</span>
-~$ <span class="userinput">printf '#!/bin/sh\nexec arvados-ws 2>&1\n' | sudo tee run</span>
-~$ <span class="userinput">printf '#!/bin/sh\nexec svlogd main\n' | sudo tee log/run</span>
-~$ <span class="userinput">sudo chmod +x run log/run</span>
-~$ <span class="userinput">sudo sv exit .</span>
-~$ <span class="userinput">cd -</span>
-</code></pre>
-</notextile>
-
-Use @sv stat@ and check the log file to verify the service is running.
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo sv stat /etc/service/arvados-ws</span>
-run: /etc/service/arvados-ws: (pid 12520) 2s; run: log: (pid 12519) 2s
-~$ <span class="userinput">tail /etc/service/arvados-ws/log/main/current</span>
-{"level":"info","msg":"started","time":"2016-12-06T11:56:20.669171449-05:00"}
-{"Listen":":9003","level":"info","msg":"listening","time":"2016-12-06T11:56:20.708847627-05:00"}
-</code></pre>
-</notextile>
-
-h3(#confirm). Confirm the service is working
-
-Confirm the service is listening on its assigned port and responding to requests.
-
-<notextile>
-<pre><code>~$ <span class="userinput">curl http://0.0.0.0:<b>9003</b>/status.json</span>
-{"Clients":1}
-</code></pre>
-</notextile>
-
-h3. Set up a reverse proxy with SSL support
+h2(#update-nginx). Update Nginx configuration
The arvados-ws service will be accessible from anywhere on the internet, so we recommend using SSL for transport encryption.
-This is best achieved by putting a reverse proxy with SSL support in front of arvados-ws, running on port 443 and passing requests to arvados-ws on port 9003 (or whatever port you chose in your configuration file).
-
-For example, using Nginx:
+Use a text editor to create a new file @/etc/nginx/conf.d/arvados-ws.conf@ with the following configuration. Options that need attention are marked in <span class="userinput">red</span>.
<notextile><pre>
upstream arvados-ws {
- server 127.0.0.1:<span class="userinput">9003</span>;
+ server 127.0.0.1:<span class="userinput">8005</span>;
}
server {
- listen <span class="userinput">[your public IP address]</span>:443 ssl;
- server_name ws.<span class="userinput">uuid_prefix.your.domain</span>;
+ listen *:443 ssl;
+ server_name ws.<span class="userinput">ClusterID.example.com</span>;
proxy_connect_timeout 90s;
proxy_read_timeout 300s;
ssl on;
- ssl_certificate <span class="userinput"/>YOUR/PATH/TO/cert.pem</span>;
- ssl_certificate_key <span class="userinput"/>YOUR/PATH/TO/cert.key</span>;
+ ssl_certificate <span class="userinput">/YOUR/PATH/TO/cert.pem</span>;
+ ssl_certificate_key <span class="userinput">/YOUR/PATH/TO/cert.key</span>;
location / {
proxy_pass http://arvados-ws;
  }
}
</pre></notextile>
-{% include 'notebox_begin' %}
-If you are upgrading a cluster where Nginx is configured to proxy @ws@ requests to puma, change the @server_name@ value in the old configuration block so it doesn't conflict. When the new configuration is working, delete the old Nginx configuration sections (i.e., the "upstream websockets" block, and the "server" block that references @http://websockets@), and disable/remove the runit or systemd files for the puma server.
-{% include 'notebox_end' %}
+{% assign arvados_component = 'arvados-ws' %}
+
+{% include 'install_packages' %}
+
+{% include 'start_service' %}
-h3. Update API server configuration
-Restart Nginx to reload the API server configuration.
+h2(#restart-api). Restart the API server and controller
+
+After adding the websocket server to the Services section, make sure the cluster config file is up to date on the API server host, and restart the API server and controller processes to ensure the changes are applied.
<notextile>
-<pre><code>$ sudo nginx -s reload</span>
+<pre><code># <span class="userinput">systemctl restart nginx arvados-controller</span>
</code></pre>
</notextile>
-h3. Verify DNS and proxy setup
+h2(#confirm-working). Confirm working installation
-Use a host elsewhere on the Internet to confirm that your DNS, proxy, and SSL are configured correctly. For @Authorization: Bearer xxxx@ replace @xxxx@ with the value from @ManagementToken@ in @config.yml@.
+Confirm the service is listening on its assigned port and responding to requests.
<notextile>
-<pre><code>$ <span class="userinput">curl -H "Authorization: Bearer xxxx" https://ws.<b>uuid_prefix.your.domain</b>/_health/ping</span>
-{"health":"OK"}
+<pre><code>~$ <span class="userinput">curl https://ws.ClusterID.example.com/status.json</span>
+{"Clients":1}
</code></pre>
</notextile>
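As a rough illustration of the check above, this hypothetical Python helper parses the JSON body that a healthy @arvados-ws@ server returns; the @fetch@ callable stands in for a real HTTP request (it is not part of any Arvados SDK):

```python
import json

def websocket_client_count(fetch, url="https://ws.ClusterID.example.com/status.json"):
    """Fetch the websocket server's status page and return the number of
    connected clients reported in its JSON body."""
    body = fetch(url)  # e.g. urllib.request.urlopen(url).read() against a live server
    return json.loads(body)["Clients"]

# Simulated response, matching the expected output shown above:
print(websocket_client_count(lambda url: '{"Clients":1}'))  # 1
```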
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Install Nginx
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+h3. CentOS 7
+
+<notextile>
+<pre><code># <span class="userinput">yum install epel-release</span></code>
+<code># <span class="userinput">yum install nginx</span></code></pre>
+</notextile>
+
+h3. Debian and Ubuntu
+
+<notextile>
+<pre><code># <span class="userinput">apt-get --no-install-recommends install nginx</span></code></pre>
+</notextile>
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Arvados package repositories
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+On any host where you install Arvados software, you'll need to add the Arvados package repository. Repositories are available for several popular distributions.
+
+* "CentOS 7":#centos7
+* "Debian and Ubuntu":#debian
+
+h3(#centos7). CentOS 7
+
+Packages are available for CentOS 7. To install them with yum, save this configuration block in @/etc/yum.repos.d/arvados.repo@:
+
+<notextile>
+<pre><code>[arvados]
+name=Arvados
+baseurl=http://rpm.arvados.org/CentOS/$releasever/os/$basearch/
+gpgcheck=1
+gpgkey=http://rpm.arvados.org/CentOS/RPM-GPG-KEY-curoverse
+</code></pre>
+</notextile>
+
+{% include 'install_redhat_key' %}
+
+h3(#debian). Debian and Ubuntu
+
+Packages are available for recent versions of Debian and Ubuntu.
+
+First, register the Arvados signing key in apt's database:
+
+{% include 'install_debian_key' %}
+
+As root, add the Arvados package repository to your sources. This command depends on your OS vendor and version:
+
+table(table table-bordered table-condensed).
+|_. OS version|_. Command|
+|Debian 10 ("buster")|<notextile><code><span class="userinput">echo "deb http://apt.arvados.org/ buster main" | tee /etc/apt/sources.list.d/arvados.list</span></code></notextile>|
+|Debian 9 ("stretch")|<notextile><code><span class="userinput">echo "deb http://apt.arvados.org/ stretch main" | tee /etc/apt/sources.list.d/arvados.list</span></code></notextile>|
+|Ubuntu 18.04 ("bionic")[1]|<notextile><code><span class="userinput">echo "deb http://apt.arvados.org/ bionic main" | tee /etc/apt/sources.list.d/arvados.list</span></code></notextile>|
+|Ubuntu 16.04 ("xenial")[1]|<notextile><code><span class="userinput">echo "deb http://apt.arvados.org/ xenial main" | tee /etc/apt/sources.list.d/arvados.list</span></code></notextile>|
+
+
+{% include 'notebox_begin' %}
+
+fn1. Arvados packages for Ubuntu may depend on third-party packages in Ubuntu's "universe" repository. If you're installing on Ubuntu, make sure you have the universe sources uncommented in @/etc/apt/sources.list@.
+
+{% include 'notebox_end' %}
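The commands in the table above all follow one pattern. As an illustrative sketch (not an Arvados tool), this Python helper builds the sources line for a supported codename:

```python
def arvados_apt_source(codename):
    """Return the apt sources.list line for a supported distribution
    codename, following the pattern in the table above."""
    supported = {"buster", "stretch", "bionic", "xenial"}
    if codename not in supported:
        raise ValueError("no Arvados packages for %r" % codename)
    return "deb http://apt.arvados.org/ %s main" % codename

print(arvados_apt_source("buster"))
# deb http://apt.arvados.org/ buster main
```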
+
+Retrieve the package list:
+
+<notextile>
+<pre><code># <span class="userinput">apt-get update</span>
+</code></pre>
+</notextile>
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Install Ruby and Bundler
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+{% include 'install_ruby_and_bundler' %}
--- /dev/null
+---
+layout: default
+navsection: installguide
+title: Set up web based login
+...
+{% comment %}
+Copyright (C) The Arvados Authors. All rights reserved.
+
+SPDX-License-Identifier: CC-BY-SA-3.0
+{% endcomment %}
+
+# "Option 1: Google login through Arvados controller":#controller
+# "Option 2: Separate single-sign-on (SSO) server (Google, LDAP, local database)":#sso
+
+h2(#controller). Option 1: Google login through Arvados controller
+
+First, visit "Setting up Google auth.":google-auth.html
+
+Next, copy the values of *Client ID* and *Client secret* from the Google Developers Console into @Login.GoogleClientID@ and @Login.GoogleClientSecret@ of @config.yml@:
+
+<pre>
+ Login:
+ GoogleClientID: ""
+ GoogleClientSecret: ""
+</pre>
+
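As an illustrative sketch only (the controller does its own validation), a quick Python check that both values have been filled in before enabling this login option:

```python
def google_login_configured(login):
    """Return True when both Google OAuth2 values in the Login section
    are non-empty (hypothetical helper, not part of Arvados)."""
    return bool(login.get("GoogleClientID")) and bool(login.get("GoogleClientSecret"))

# The empty defaults from config.yml are not a working configuration:
print(google_login_configured({"GoogleClientID": "", "GoogleClientSecret": ""}))  # False
print(google_login_configured({"GoogleClientID": "xyz.apps.googleusercontent.com",
                               "GoogleClientSecret": "abc123"}))                  # True
```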
+h2(#sso). Option 2: Separate single-sign-on (SSO) server (supports Google, LDAP, local database)
+
+See "Install the Single Sign On (SSO) server":install-sso.html
Arvados CLI tools are written in Ruby and Python. To use the @arv@ command, you can either install the @arvados-cli@ gem via RubyGems or build and install the package from source. The @arv@ command also relies on other Arvados tools. To get those, install the @arvados-python-client@ and @arvados-cwl-runner@ packages, either from PyPI or source.
-h3. Prerequisites: Ruby, Bundler, and curl libraries
+h2. Prerequisites
-{% include 'install_ruby_and_bundler' %}
+# "Install Ruby":../../install/ruby.html
+# "Install the Python SDK":../python/sdk-python.html
-Install curl libraries with your system's package manager. For example, on Debian or Ubuntu:
+The SDK uses @curl@, which depends on the @libcurl@ C library. To build the module you may have to install additional packages. On Debian 9 this is:
-<notextile>
-<pre>
-~$ <code class="userinput">sudo apt-get install libcurl3 libcurl3-gnutls libcurl4-openssl-dev</code>
-</pre>
-</notextile>
-
-h3. Option 1: Install from RubyGems and PyPI
-
-<notextile>
<pre>
-~$ <code class="userinput">sudo -i gem install arvados-cli</code>
+$ apt-get install build-essential libcurl4-openssl-dev
</pre>
-</notextile>
-
-<notextile>
-<pre>
-~$ <code class="userinput">pip install arvados-python-client arvados-cwl-runner</code>
-</pre>
-</notextile>
-h3. Option 2: Build and install from source
+h2. Install from RubyGems
<notextile>
<pre>
-~$ <code class="userinput">git clone https://github.com/curoverse/arvados.git</code>
-~$ <code class="userinput">cd arvados/sdk/cli</code>
-~/arvados/sdk/cli$ <code class="userinput">gem build arvados-cli.gemspec</code>
-~/arvados/sdk/cli$ <code class="userinput">sudo -i gem install arvados-cli-*.gem</code>
-~/arvados/sdk/cli$ <code class="userinput">cd ../python</code>
-~/arvados/sdk/python$ <code class="userinput">python setup.py install</code>
-~/arvados/sdk/python$ <code class="userinput">cd ../cwl</code>
-~/arvados/sdk/cwl$ <code class="userinput">python setup.py install</code>
+# <code class="userinput">gem install arvados-cli</code>
</pre>
</notextile>
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-See "Arvados GoDoc":https://godoc.org/git.curoverse.com/arvados.git/sdk/go for detailed documentation.
+See "Arvados GoDoc":https://godoc.org/git.arvados.org/arvados.git/sdk/go for detailed documentation.
In these examples, the site prefix is @aaaaa@.
{% codeblock as go %}
import (
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
}
func main() {
The Go ("Golang":http://golang.org) SDK provides a generic set of wrappers so you can make API calls easily.
-See "Arvados GoDoc":https://godoc.org/git.curoverse.com/arvados.git/sdk/go for detailed documentation.
+See "Arvados GoDoc":https://godoc.org/git.arvados.org/arvados.git/sdk/go for detailed documentation.
h3. Installation
-Use @go get git.curoverse.com/arvados.git/sdk/go/arvadosclient@. The go tools will fetch the relevant code and dependencies for you.
+Use @go get git.arvados.org/arvados.git/sdk/go/arvadosclient@. The go tools will fetch the relevant code and dependencies for you.
{% codeblock as go %}
import (
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
)
{% endcodeblock %}
This section documents language bindings for the "Arvados API":{{site.baseurl}}/api and Keep that are available for various programming languages. Not all features are available in every SDK. The most complete SDK is the Python SDK. Note that this section only gives a high level overview of each SDK. Consult the "Arvados API":{{site.baseurl}}/api section for detailed documentation about Arvados API calls available on each resource.
-* "Python SDK":{{site.baseurl}}/sdk/python/sdk-python.html
+* "Python SDK":{{site.baseurl}}/sdk/python/sdk-python.html (also includes essential command line tools such as "arv-put" and "arv-get")
* "Command line SDK":{{site.baseurl}}/sdk/cli/install.html ("arv")
* "Go SDK":{{site.baseurl}}/sdk/go/index.html
* "R SDK":{{site.baseurl}}/sdk/R/index.html
-* "Perl SDK":{{site.baseurl}}/sdk/perl/index.html
* "Ruby SDK":{{site.baseurl}}/sdk/ruby/index.html
* "Java SDK v2":{{site.baseurl}}/sdk/java-v2/index.html
* "Java SDK v1":{{site.baseurl}}/sdk/java/index.html
+* "Perl SDK":{{site.baseurl}}/sdk/perl/index.html
Many Arvados Workbench pages, under the *Advanced* tab, provide examples of API and SDK use for accessing the current resource.
<notextile>
<pre>
-$ <code class="userinput">git clone https://github.com/curoverse/arvados.git</code>
+$ <code class="userinput">git clone https://github.com/arvados/arvados.git</code>
$ <code class="userinput">cd arvados/sdk/java-v2</code>
$ <code class="userinput">gradle test</code>
$ <code class="userinput">gradle jar</code>
The Java SDK v1 provides a low level API to call Arvados from Java.
+This is a legacy SDK. It is no longer used or maintained regularly. The "Arvados Java SDK v2":../java-v2/index.html should be used.
+
h3. Introduction
* The Java SDK requires Java 6 or later
The Perl SDK provides a generic set of wrappers so you can make API calls easily.
-It should be treated as alpha/experimental. Currently, limitations include:
-* Verbose syntax.
-* No native Keep client.
-* No CPAN package.
+This is a legacy SDK. It is no longer used or maintained regularly.
h3. Installation
Then run the following:
<notextile>
-<pre><code>~$ <span class="userinput">git clone https://github.com/curoverse/arvados.git</span>
+<pre><code>~$ <span class="userinput">git clone https://github.com/arvados/arvados.git</span>
~$ <span class="userinput">cd arvados/sdk/perl</span>
~$ <span class="userinput">perl Makefile.PL</span>
~$ <span class="userinput">sudo make install</span>
The Arvados FUSE driver is a Python utility that allows you to see the Keep service as a normal filesystem, so that data can be accessed using standard tools. This driver requires the Python SDK installed in order to access Arvados services.
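To illustrate what "a normal filesystem" means in practice, this Python sketch simulates the @by_id@ layout that @arv-mount@ exposes, using a temporary directory in place of a real mount (a live mount requires @arv-mount@ and a running cluster); the portable data hash and filename are just examples:

```python
import os
import tempfile

# Stand-in for the arv-mount mount point:
mount = tempfile.mkdtemp()
pdh = "2463fa9efeb75e099685528b3b9071e0+438"
os.makedirs(os.path.join(mount, "by_id", pdh))
with open(os.path.join(mount, "by_id", pdh, "19.fasta.bwt"), "w") as f:
    f.write("BWT data")

# Once mounted, collection files are read with ordinary file I/O:
with open(os.path.join(mount, "by_id", pdh, "19.fasta.bwt")) as f:
    print(f.read())  # BWT data
```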
-h3. Installation
+h2. Installation
-If you are logged in to an Arvados VM, the @arv-mount@ utility should already be installed.
+If you are logged in to a managed Arvados VM, the @arv-mount@ utility should already be installed.
-To use the FUSE driver elsewhere, you can install from a distribution package, PyPI, or source.
+To use the FUSE driver elsewhere, you can install from a distribution package or PyPI.
-{% include 'notebox_begin' %}
-The Arvados FUSE driver requires Python 2.7
-{% include 'notebox_end' %}
+h2. Option 1: Install from distribution packages
-h4. Option 1: Install from distribution packages
+First, "add the appropriate package repository for your distribution":{{ site.baseurl }}/install/packages.html.
-First, "add the appropriate package repository for your distribution":{{ site.baseurl }}/install/install-manual-prerequisites.html#repos.
+{% assign arvados_component = 'python-arvados-fuse' %}
-{% assign rh_version = "6" %}
-{% include 'note_python_sc' %}
+{% include 'install_packages' %}
-On Red Hat-based systems:
+h2. Option 2: Install with pip
-<notextile>
-<pre><code>~$ <span class="userinput">echo 'exclude=python2-llfuse' | sudo tee -a /etc/yum.conf</span>
-~$ <span class="userinput">sudo yum install python-arvados-fuse</code>
-</code></pre>
-</notextile>
-
-On Debian-based systems:
-
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install python-arvados-fuse</code>
-</code></pre>
-</notextile>
+Run @pip install arvados_fuse@ in an appropriate installation environment, such as a virtualenv.
-h4. Option 2: Install with pip
+Note:
-Run @pip install arvados_fuse@ in an appropriate installation environment, such as a virtualenv.
+The SDK uses @pycurl@, which depends on the @libcurl@ C library. To build the module you may first have to install additional packages. On Debian 9 this is:
-h4. Option 3: Install from source
+<pre>
+$ apt-get install git build-essential python-dev libcurl4-openssl-dev libssl1.0-dev python-llfuse
+</pre>
-Install the @python-setuptools@ package from your distribution. Then run the following:
+For Python 3 this is:
-<notextile>
-<pre><code>~$ <span class="userinput">git clone https://github.com/curoverse/arvados.git</span>
-~$ <span class="userinput">cd arvados/services/fuse</span>
-~/arvados/services/fuse$ <span class="userinput">python setup.py install</span>
-</code></pre>
-</notextile>
+<pre>
+$ apt-get install git build-essential python3-dev libcurl4-openssl-dev libssl1.0-dev python3-llfuse
+</pre>
h3. Usage
print(collection.open(c).read())
{% endcodeblock %}
-h2. Create a collection sharing link
+h2(#sharing_link). Create a collection sharing link
{% codeblock as python %}
import arvados
content = reader.read(128*1024)
print("Finished downloading %s" % filename)
{% endcodeblock %}
+
+h2. Copy files from a collection to a new collection
+
+{% codeblock as python %}
+import arvados.collection
+
+source_collection = "x1u39-4zz18-krzg64ufvehgitl"
+target_project = "x1u39-j7d0g-67q94einb8ptznm"
+target_name = "Files copied from source_collection"
+files_to_copy = ["folder1/sample1/sample1_R1.fastq",
+ "folder1/sample2/sample2_R1.fastq"]
+
+source = arvados.collection.CollectionReader(source_collection)
+target = arvados.collection.Collection()
+
+for f in files_to_copy:
+ target.copy(f, "", source_collection=source)
+
+target.save_new(name=target_name, owner_uuid=target_project)
+print("Created collection %s" % target.manifest_locator())
+{% endcodeblock %}
In these examples, the site prefix is @aaaaa@.
+See also the "cookbook":cookbook.html for more complex examples.
+
h2. Initialize SDK
{% codeblock as python %}
{% codeblock as python %}
result = api.users().current().execute()
{% endcodeblock %}
+
+h2. Get the User object for the current user
+
+{% codeblock as python %}
+current_user = arvados.api('v1').users().current().execute()
+{% endcodeblock %}
+
+h2. Get the UUID of an object that was retrieved using the SDK
+
+{% codeblock as python %}
+my_uuid = current_user['uuid']
+{% endcodeblock %}
SPDX-License-Identifier: CC-BY-SA-3.0
{% endcomment %}
-The Python SDK provides access from Python to the Arvados API and Keep. It also includes a number of command line tools for using and administering Arvados and Keep, and some conveniences for use in Crunch scripts; see "Crunch utility libraries":crunch-utility-libraries.html for details.
+The Python SDK provides access from Python to the Arvados API and Keep, along with a number of command line tools for using and administering Arvados and Keep.
h2. Installation
The Python SDK supports Python 2.7 and 3.4+
-h3. Option 1: Install with pip
+h2. Option 1: Install from a distribution package
-This installation method is recommended to make the SDK available for use in your own Python programs. It can coexist with the system-wide installation method from a distribution package (option 2, below).
+This installation method is recommended to make the CLI tools available system-wide. It can coexist with the installation method described in option 2, below.
-Run @pip install arvados-python-client@ in an appropriate installation environment, such as a @virtualenv@.
+First, configure the "Arvados package repositories":../../install/packages.html.
-The SDK uses @pycurl@ which depends on the @libcurl@ C library. To build the module you may have to install additional packages. On Debian 9 this is:
+{% assign arvados_component = 'python-arvados-python-client' %}
-<pre>
-$ apt-get install git build-essential python3-dev libcurl4-openssl-dev libssl1.0-dev
-</pre>
+{% include 'install_packages' %}
-If your version of @pip@ is 1.4 or newer, the @pip install@ command might give an error: "Could not find a version that satisfies the requirement arvados-python-client". If this happens, try @pip install --pre arvados-python-client@.
+h2. Option 2: Install with pip
-h3. Option 2: Install from a distribution package
+This installation method is recommended for using the SDK in your own Python programs. If installed into a @virtualenv@, it can coexist with the system-wide installation method from a distribution package.
-This installation method is recommended to make the CLI tools available system-wide. It can coexist with the installation method described in option 1, above.
+Run @pip install arvados-python-client@ in an appropriate installation environment, such as a @virtualenv@.
-First, "add the appropriate package repository for your distribution":{{ site.baseurl }}/install/install-manual-prerequisites.html#repos.
+Note:
-On Red Hat-based systems:
+The SDK uses @pycurl@, which depends on the @libcurl@ C library. To build the module you may first have to install additional packages. On Debian 9 this is:
-<notextile>
-<pre><code>~$ <span class="userinput">sudo yum install python-arvados-python-client</code>
-</code></pre>
-</notextile>
+<pre>
+$ apt-get install git build-essential python-dev libcurl4-openssl-dev libssl1.0-dev
+</pre>
-On Debian-based systems:
+For Python 3 this is:
-<notextile>
-<pre><code>~$ <span class="userinput">sudo apt-get install python-arvados-python-client</code>
-</code></pre>
-</notextile>
+<pre>
+$ apt-get install git build-essential python3-dev libcurl4-openssl-dev libssl1.0-dev
+</pre>
+
+If your version of @pip@ is 1.4 or newer, the @pip install@ command might give an error: "Could not find a version that satisfies the requirement arvados-python-client". If this happens, try @pip install --pre arvados-python-client@.
-h3. Test installation
+h2. Test installation
If the SDK is installed and your @ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ environment variables are set up correctly (see "api-tokens":{{site.baseurl}}/user/reference/api-tokens.html for details), @import arvados@ should produce no errors.
</pre>
</notextile>
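As a convenience sketch (hypothetical, not part of the SDK), a quick Python check for those environment variables; the host and token values below are placeholders, not real credentials:

```python
import os

def missing_arvados_env():
    """Return the names of required environment variables that are not set
    (see the api-tokens page for how to obtain real values)."""
    required = ("ARVADOS_API_HOST", "ARVADOS_API_TOKEN")
    return [name for name in required if not os.environ.get(name)]

# With placeholder values exported, nothing is reported missing:
os.environ.setdefault("ARVADOS_API_HOST", "ClusterID.example.com")
os.environ.setdefault("ARVADOS_API_TOKEN", "xxxxxxxx")
print(missing_arvados_env())  # []
```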
-h3. Examples
-
-Get the User object for the current user:
-
-<notextile>
-<pre><code class="userinput">current_user = arvados.api('v1').users().current().execute()
-</code></pre>
-</notextile>
-
-Get the UUID of an object that was retrieved using the SDK:
+h2. Usage
-<notextile>
-<pre><code class="userinput">my_uuid = current_user['uuid']
-</code></pre>
-</notextile>
-
-Retrieve an object by ID:
-
-<notextile>
-<pre><code class="userinput">some_user = arvados.api('v1').users().get(uuid=my_uuid).execute()
-</code></pre>
-</notextile>
-
-Create an object:
-
-<notextile>
-<pre><code class="userinput">test_link = arvados.api('v1').links().create(
- body={'link_class':'test','name':'test'}).execute()
-</code></pre>
-</notextile>
-
-Update an object:
-
-<notextile>
-<pre><code class="userinput">arvados.api('v1').links().update(
- uuid=test_link['uuid'],
- body={'properties':{'foo':'bar'}}).execute()
-</code></pre>
-</notextile>
-
-Get a list of objects:
-
-<notextile>
-<pre><code class="userinput">repos = arvados.api('v1').repositories().list().execute()
-len(repos['items'])</code>
-2
-<code class="userinput">repos['items'][0]['uuid']</code>
-u'qr1hi-s0uqq-kg8cawglrf74bmw'
-</code></pre>
-</notextile>
+Check out the "examples":example.html and "cookbook":cookbook.html.
h3. Notes
The Ruby SDK provides a generic set of wrappers so you can make API calls easily.
-h3. Installation
+h2. Installation
If you are logged in to an Arvados VM, the Ruby SDK should be installed.
To use it elsewhere, you can either install the @arvados@ gem via RubyGems or build and install the package using the arvados source tree.
-h4. Prerequisites: Ruby >= 2.0.0
+h3. Prerequisites
-You can use "RVM":http://rvm.io/rvm/install to install and manage Ruby versions.
+# "Install Ruby":../../install/ruby.html
-h4. Option 1: install with RubyGems
+The SDK uses @curl@, which depends on the @libcurl@ C library. To build the module you may have to install additional packages. On Debian 9 this is:
-<notextile>
<pre>
-$ <code class="userinput">sudo -i gem install arvados</code>
+$ apt-get install build-essential libcurl4-openssl-dev
</pre>
-</notextile>
-h4. Option 2: build and install from source
+h3. Install with RubyGems
<notextile>
<pre>
-$ <code class="userinput">git clone https://github.com/curoverse/arvados.git</code>
-$ <code class="userinput">cd arvados/sdk/ruby</code>
-$ <code class="userinput">gem build arvados.gemspec</code>
-$ <code class="userinput">sudo -i gem install arvados-*.gem</code>
+# <code class="userinput">gem install arvados</code>
</pre>
</notextile>
-h4. Test installation
+h3. Test installation
If the SDK is installed, @ruby -r arvados -e 'puts "OK!"'@ should produce no errors.
Indicate that one or more input parameters are "secret". Must be applied at the top level Workflow. Secret parameters are not stored in keep, are hidden from logs and API responses, and are wiped from the database after the workflow completes.
+*Note: currently, workflows with secrets must be submitted on the command line using @arvados-cwl-runner@. Workflows with secrets submitted through Workbench will not properly obscure the secret inputs.*
+
table(table table-bordered table-condensed).
|_. Field |_. Type |_. Description |
|secrets|array<string>|Input parameters which are considered "secret". Must be strings.|
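To illustrate the intended behaviour, this hypothetical Python sketch shows the kind of redaction applied to secret inputs before they could reach a log or API response (illustrative only, not the actual Arvados implementation):

```python
def redact_secrets(params, secrets):
    """Return a copy of the input parameters with secret values replaced,
    the way they should never appear in logs (illustrative helper)."""
    return {k: ("[secret]" if k in secrets else v) for k, v in params.items()}

params = {"db_password": "hunter2",
          "input_file": "keep:zzzzz-4zz18-zzzzzzzzzzzzzzz/x"}
print(redact_secrets(params, secrets=["db_password"]))
# {'db_password': '[secret]', 'input_file': 'keep:zzzzz-4zz18-zzzzzzzzzzzzzzz/x'}
```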
h3. Get the example files
-The tutorial files are located in the "documentation section of the Arvados source repository:":https://github.com/curoverse/arvados/tree/master/doc/user/cwl/bwa-mem
+The tutorial files are located in the "documentation section of the Arvados source repository:":https://github.com/arvados/arvados/tree/master/doc/user/cwl/bwa-mem
<notextile>
-<pre><code>~$ <span class="userinput">git clone https://github.com/curoverse/arvados</span>
+<pre><code>~$ <span class="userinput">git clone https://github.com/arvados/arvados</span>
~$ <span class="userinput">cd arvados/doc/user/cwl/bwa-mem</span>
</code></pre>
</notextile>
A URI reference to Keep uses the @keep:@ scheme followed by either the portable data hash or UUID of the collection and then the location of the file inside the collection. For example, @keep:2463fa9efeb75e099685528b3b9071e0+438/19.fasta.bwt@ or @keep:zzzzz-4zz18-zzzzzzzzzzzzzzz/19.fasta.bwt@.
-If you reference a file in "arv-mount":{{site.baseurl}}/user/tutorials/tutorial-keep-mount.html, such as @/home/example/keep/by_id/2463fa9efeb75e099685528b3b9071e0+438/19.fasta.bwt@, then @arvados-cwl-runner@ will automatically determine the appropriate Keep URI reference.
+If you reference a file in "arv-mount":{{site.baseurl}}/user/tutorials/tutorial-keep-mount-gnu-linux.html, such as @/home/example/keep/by_id/2463fa9efeb75e099685528b3b9071e0+438/19.fasta.bwt@, then @arvados-cwl-runner@ will automatically determine the appropriate Keep URI reference.
If you reference a local file which is not in @arv-mount@, then @arvados-cwl-runner@ will upload the file to Keep and use the Keep URI reference from the upload.
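As an illustration of the reference format described above, this hypothetical Python helper splits a @keep:@ URI into its collection identifier (portable data hash or collection UUID) and the file path inside the collection:

```python
import re

def parse_keep_uri(uri):
    """Split a keep: URI into (collection id, path inside collection).
    The id is either a portable data hash or a collection UUID."""
    m = re.match(r"keep:([0-9a-f]{32}\+\d+|[a-z0-9]{5}-4zz18-[a-z0-9]{15})/(.*)$", uri)
    if not m:
        raise ValueError("not a keep: URI: %r" % uri)
    return m.group(1), m.group(2)

print(parse_keep_uri("keep:2463fa9efeb75e099685528b3b9071e0+438/19.fasta.bwt"))
# ('2463fa9efeb75e099685528b3b9071e0+438', '19.fasta.bwt')
```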
h2. Get the example files
-The tutorial files are located in the "documentation section of the Arvados source repository:":https://github.com/curoverse/arvados/tree/master/doc/user/cwl/federated or "see below":#fed-example
+The tutorial files are located in the "documentation section of the Arvados source repository:":https://github.com/arvados/arvados/tree/master/doc/user/cwl/federated or "see below":#fed-example
<notextile>
-<pre><code>~$ <span class="userinput">git clone https://github.com/curoverse/arvados</span>
+<pre><code>~$ <span class="userinput">git clone https://github.com/arvados/arvados</span>
~$ <span class="userinput">cd arvados/doc/user/cwl/federated</span>
</code></pre>
</notextile>
h2. Chat
-The "curoverse/arvados channel":https://gitter.im/curoverse/arvados channel at "gitter.im":https://gitter.im is available for live discussion and support.
+The "arvados community":https://gitter.im/arvados/community channel at "gitter.im":https://gitter.im is available for live discussion and support.
h2. Bug tracking
For example, let's copy from the <a href="https://playground.arvados.org/">Arvados playground</a>, also known as *qr1hi*, to *dst_cluster*. The names *qr1hi* and *dst_cluster* are interchangeable with any cluster name. You can find the cluster name from the prefix of the uuid of the object you want to copy. For example, in *qr1hi*-4zz18-tci4vn4fa95w0zx, the cluster name is qr1hi.
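Since the cluster name is simply the five-character prefix of an object uuid, it can be extracted mechanically; a minimal Python sketch (illustrative only):

```python
def cluster_id(uuid):
    """Return the cluster name: the five-character prefix of an
    Arvados object UUID."""
    prefix = uuid.split("-")[0]
    if len(prefix) != 5:
        raise ValueError("not an Arvados UUID: %r" % uuid)
    return prefix

print(cluster_id("qr1hi-4zz18-tci4vn4fa95w0zx"))  # qr1hi
```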
-In order to communicate with both clusters, you must create custom configuration files for each cluster. In the Arvados Workbench, click on the dropdown menu icon <span class="fa fa-lg fa-user"></span> <span class="caret"></span> in the upper right corner of the top navigation menu to access the user settings menu, and click on the menu item *Current token*. Copy the @ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@ in both of your clusters. Then, create two configuration files, one for each cluster. The names of the files must have the format of *uuid_prefix.conf*. In our example, let's make two files, one for *qr1hi* and one for *dst_cluster*. From your *Current token* page in *qr1hi* and *dst_cluster*, copy the @ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@.
+In order to communicate with both clusters, you must create a custom configuration file for each cluster. In the Arvados Workbench, click on the dropdown menu icon <span class="fa fa-lg fa-user"></span> <span class="caret"></span> in the upper right corner of the top navigation menu to access the user settings menu, and click on the menu item *Current token*. Then, create two configuration files, one for each cluster; the file names must have the format *ClusterID.conf*. In our example, let's make two files, one for *qr1hi* and one for *dst_cluster*. From your *Current token* page in *qr1hi* and *dst_cluster*, copy the @ARVADOS_API_HOST@ and @ARVADOS_API_TOKEN@.
!{display: block;margin-left: 25px;margin-right: auto;}{{ site.baseurl }}/images/api-token-host.png!
h3. Browsing Keep (read-only)
-In Finder, use "Connect to Server..." under the "Go" menu and enter @https://collections.uuid_prefix.your.domain/@ in popup dialog. When prompted for credentials, put a valid Arvados token in the @Password@ field and anything in the Name field (it will be ignored by Arvados).
+In Finder, use "Connect to Server..." under the "Go" menu and enter @https://collections.ClusterID.example.com/@ in the popup dialog. When prompted for credentials, put a valid Arvados token in the @Password@ field and anything in the @Name@ field (it will be ignored by Arvados).
This mount is read-only. Write support for the @/users/@ directory is planned for a future release.
h3. Accessing a specific collection in Keep (read-write)
-In Finder, use "Connect to Server..." under the "Go" menu and enter @https://collections.uuid_prefix.your.domain/@ in popup dialog. When prompted for credentials, put a valid Arvados token in the @Password@ field and anything in the Name field (it will be ignored by Arvados).
+In Finder, use "Connect to Server..." under the "Go" menu and enter @https://collections.ClusterID.example.com/@ in the popup dialog. When prompted for credentials, put a valid Arvados token in the @Password@ field and anything in the @Name@ field (it will be ignored by Arvados).
This collection is now accessible read/write.
h3. Browsing Keep (read-only)
-Use the 'Map network drive' functionality, and enter @https://collections.uuid_prefix.your.domain/@ in the Folder field. When prompted for credentials, you can fill in an arbitrary string for @Username@, it is ignored by Arvados. Windows will not accept an empty @Username@. Put a valid Arvados token in the @Password@ field.
+Use the 'Map network drive' functionality, and enter @https://collections.ClusterID.example.com/@ in the Folder field. When prompted for credentials, you can fill in an arbitrary string for @Username@, it is ignored by Arvados. Windows will not accept an empty @Username@. Put a valid Arvados token in the @Password@ field.
This mount is read-only. Write support for the @/users/@ directory is planned for a future release.
h3. Accessing a specific collection in Keep (read-write)
-Use the 'Map network drive' functionality, and enter @https://collections.uuid_prefix.your.domain/c=your-collection-uuid@ in the Folder field. When prompted for credentials, you can fill in an arbitrary string for @Username@, it is ignored by Arvados. Windows will not accept an empty @Username@. Put a valid token in the @Password@ field.
+Use the 'Map network drive' functionality, and enter @https://collections.ClusterID.example.com/c=your-collection-uuid@ in the Folder field. When prompted for credentials, you can fill in an arbitrary string for @Username@, it is ignored by Arvados. Windows will not accept an empty @Username@. Put a valid token in the @Password@ field.
This collection is now accessible read/write.
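The URLs in both cases above follow one pattern; as an illustrative sketch, this hypothetical Python helper builds them (@ClusterID.example.com@ is a placeholder host, not a real endpoint):

```python
def collections_webdav_url(cluster_host, collection_uuid=None):
    """Build the URL entered in "Connect to Server..." / "Map network drive":
    the bare host browses all of Keep read-only, while adding c=UUID opens
    one specific collection read-write."""
    base = "https://collections.%s/" % cluster_host
    if collection_uuid:
        return base + "c=" + collection_uuid
    return base

print(collections_webdav_url("ClusterID.example.com"))
# https://collections.ClusterID.example.com/
print(collections_webdav_url("ClusterID.example.com", "zzzzz-4zz18-zzzzzzzzzzzzzzz"))
# https://collections.ClusterID.example.com/c=zzzzz-4zz18-zzzzzzzzzzzzzzz
```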
--- /dev/null
+require 'zenweb'
+
+module ZenwebTextile
+ VERSION = '0.0.1'
+end
+
+module Zenweb
+ class Page
+ alias_method :old_body, :body
+ def body
+ # Don't try to parse binary files as text
+ if /\.(?:#{Site.binary_files.join("|")})$/ =~ path
+ @body ||= File.binread path
+ else
+ @body ||= begin
+ _, body = Zenweb::Config.split path
+ body.strip
+ end
+ end
+ end
+ end
+end
# Based on Debian Stretch
FROM debian:stretch-slim
-MAINTAINER Ward Vandewege <wvandewege@veritasgenetics.com>
+MAINTAINER Peter Amstutz <peter.amstutz@curii.com>
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update -q
RUN apt-get install -yq --no-install-recommends nodejs \
python-arvados-python-client=$python_sdk_version \
- python-arvados-cwl-runner=$cwl_runner_version
+ python3-arvados-cwl-runner=$cwl_runner_version
# use the Python executables from the Arvados client packages
-RUN rm -f /usr/bin/python && ln -s /usr/share/python2.7/dist/python-arvados-cwl-runner/bin/python /usr/bin/python
+RUN rm -f /usr/bin/python && ln -s /usr/share/python2.7/dist/python-arvados-python-client/bin/python /usr/bin/python
+RUN rm -f /usr/bin/python3 && ln -s /usr/share/python3/dist/python3-arvados-cwl-runner/bin/python /usr/bin/python3
# Install dependencies and set up system.
RUN /usr/sbin/adduser --disabled-password \
--- /dev/null
+module git.arvados.org/arvados.git
+
+go 1.13
+
+require (
+ github.com/AdRoll/goamz v0.0.0-20170825154802-2731d20f46f4
+ github.com/Azure/azure-sdk-for-go v19.1.0+incompatible
+ github.com/Azure/go-autorest v10.15.2+incompatible
+ github.com/Microsoft/go-winio v0.4.5 // indirect
+ github.com/alcortesm/tgz v0.0.0-20161220082320-9c5fe88206d7 // indirect
+ github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239 // indirect
+ github.com/arvados/cgofuse v1.2.0-arvados1
+ github.com/aws/aws-sdk-go v1.25.30
+ github.com/coreos/go-oidc v2.1.0+incompatible
+ github.com/coreos/go-systemd v0.0.0-20180108085132-cc4f39464dc7
+ github.com/dgrijalva/jwt-go v3.1.0+incompatible // indirect
+ github.com/dimchansky/utfbom v1.0.0 // indirect
+ github.com/dnaeon/go-vcr v1.0.1 // indirect
+ github.com/docker/distribution v2.6.0-rc.1.0.20180105232752-277ed486c948+incompatible // indirect
+ github.com/docker/docker v1.4.2-0.20180109013817-94b8a116fbf1
+ github.com/docker/go-connections v0.3.0 // indirect
+ github.com/docker/go-units v0.3.3-0.20171221200356-d59758554a3d // indirect
+ github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568 // indirect
+ github.com/ghodss/yaml v1.0.0
+ github.com/gliderlabs/ssh v0.2.2 // indirect
+ github.com/gogo/protobuf v1.1.1
+ github.com/gorilla/context v1.1.1 // indirect
+ github.com/gorilla/mux v1.6.1-0.20180107155708-5bbbb5b2b572
+ github.com/hashicorp/golang-lru v0.5.1
+ github.com/imdario/mergo v0.3.8-0.20190415133143-5ef87b449ca7
+ github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
+ github.com/jmcvetta/randutil v0.0.0-20150817122601-2bb1b664bcff
+ github.com/julienschmidt/httprouter v1.2.0
+ github.com/karalabe/xgo v0.0.0-20191115072854-c5ccff8648a7 // indirect
+ github.com/kevinburke/ssh_config v0.0.0-20171013211458-802051befeb5 // indirect
+ github.com/lib/pq v1.3.0
+ github.com/marstr/guid v1.1.1-0.20170427235115-8bdf7d1a087c // indirect
+ github.com/mitchellh/go-homedir v0.0.0-20161203194507-b8bc1bf76747 // indirect
+ github.com/opencontainers/go-digest v1.0.0-rc1 // indirect
+ github.com/opencontainers/image-spec v1.0.1-0.20171125024018-577479e4dc27 // indirect
+ github.com/pelletier/go-buffruneio v0.2.0 // indirect
+ github.com/pquerna/cachecontrol v0.0.0-20180517163645-1555304b9b35 // indirect
+ github.com/prometheus/client_golang v1.2.1
+ github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4
+ github.com/prometheus/common v0.7.0
+ github.com/satori/go.uuid v1.2.1-0.20180103174451-36e9d2ebbde5 // indirect
+ github.com/sergi/go-diff v1.0.0 // indirect
+ github.com/sirupsen/logrus v1.4.2
+ github.com/src-d/gcfg v1.3.0 // indirect
+ github.com/stretchr/testify v1.4.0 // indirect
+ github.com/xanzy/ssh-agent v0.1.0 // indirect
+ golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550
+ golang.org/x/net v0.0.0-20190620200207-3b0461eec859
+ golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
+ golang.org/x/sys v0.0.0-20191105231009-c1f44814a5cd // indirect
+ google.golang.org/api v0.13.0
+ gopkg.in/check.v1 v1.0.0-20161208181325-20d25e280405
+ gopkg.in/square/go-jose.v2 v2.3.1
+ gopkg.in/src-d/go-billy.v4 v4.0.1
+ gopkg.in/src-d/go-git-fixtures.v3 v3.5.0 // indirect
+ gopkg.in/src-d/go-git.v4 v4.0.0
+ gopkg.in/warnings.v0 v0.1.2 // indirect
+ gopkg.in/yaml.v2 v2.2.4 // indirect
+ rsc.io/getopt v0.0.0-20170811000552-20be20937449
+)
+
+replace github.com/AdRoll/goamz => github.com/arvados/goamz v0.0.0-20190905141525-1bba09f407ef
--- /dev/null
+cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.38.0 h1:ROfEUZz+Gh5pa62DJWXSaonyu3StP6EA6lPEXPI6mCo=
+cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
+github.com/Azure/azure-sdk-for-go v19.1.0+incompatible h1:ysqLW+tqZjJWOTE74heH/pDRbr4vlN3yV+dqQYgpyxw=
+github.com/Azure/azure-sdk-for-go v19.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/go-autorest v10.15.2+incompatible h1:oZpnRzZie83xGV5txbT1aa/7zpCPvURGhV6ThJij2bs=
+github.com/Azure/go-autorest v10.15.2+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
+github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
+github.com/Microsoft/go-winio v0.4.5 h1:U2XsGR5dBg1yzwSEJoP2dE2/aAXpmad+CNG2hE9Pd5k=
+github.com/Microsoft/go-winio v0.4.5/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
+github.com/alcortesm/tgz v0.0.0-20161220082320-9c5fe88206d7 h1:uSoVVbwJiQipAclBbw+8quDsfcvFjOpI5iCf4p/cqCs=
+github.com/alcortesm/tgz v0.0.0-20161220082320-9c5fe88206d7/go.mod h1:6zEj6s6u/ghQa61ZWa/C2Aw3RkjiTBOix7dkqa1VLIs=
+github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
+github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
+github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
+github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
+github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239 h1:kFOfPq6dUM1hTo4JG6LR5AXSUEsOjtdm0kw0FtQtMJA=
+github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
+github.com/arvados/cgofuse v1.2.0-arvados1 h1:4Q4vRJ4hbTCcI4gGEaa6hqwj3rqlUuzeFQkfoEA2HqE=
+github.com/arvados/cgofuse v1.2.0-arvados1/go.mod h1:79WFV98hrkRHK9XPhh2IGGOwpFSjocsWubgxAs2KhRc=
+github.com/arvados/goamz v0.0.0-20190905141525-1bba09f407ef h1:cl7DIRbiAYNqaVxg3CZY8qfZoBOKrj06H/x9SPGaxas=
+github.com/arvados/goamz v0.0.0-20190905141525-1bba09f407ef/go.mod h1:rCtgyMmBGEbjTm37fCuBYbNL0IhztiALzo3OB9HyiOM=
+github.com/aws/aws-sdk-go v1.25.30 h1:I9qj6zW3mMfsg91e+GMSN/INcaX9tTFvr/l/BAHKaIY=
+github.com/aws/aws-sdk-go v1.25.30/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
+github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
+github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
+github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
+github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
+github.com/cespare/xxhash/v2 v2.1.0 h1:yTUvW7Vhb89inJ+8irsUqiWjh8iT6sQPZiQzI6ReGkA=
+github.com/cespare/xxhash/v2 v2.1.0/go.mod h1:dgIUBU3pDso/gPgZ1osOZ0iQf77oPR28Tjxl5dIMyVM=
+github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/coreos/go-oidc v2.1.0+incompatible h1:sdJrfw8akMnCuUlaZU3tE/uYXFgfqom8DBE9so9EBsM=
+github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
+github.com/coreos/go-systemd v0.0.0-20180108085132-cc4f39464dc7 h1:e3u8KWFMR3irlDo1Z/tL8Hsz1MJmCLkSoX5AZRMKZkg=
+github.com/coreos/go-systemd v0.0.0-20180108085132-cc4f39464dc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/dgrijalva/jwt-go v3.1.0+incompatible h1:FFziAwDQQ2dz1XClWMkwvukur3evtZx7x/wMHKM1i20=
+github.com/dgrijalva/jwt-go v3.1.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
+github.com/dimchansky/utfbom v1.0.0 h1:fGC2kkf4qOoKqZ4q7iIh+Vef4ubC1c38UDsEyZynZPc=
+github.com/dimchansky/utfbom v1.0.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
+github.com/dnaeon/go-vcr v1.0.1 h1:r8L/HqC0Hje5AXMu1ooW8oyQyOFv4GxqpL0nRP7SLLY=
+github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
+github.com/docker/distribution v2.6.0-rc.1.0.20180105232752-277ed486c948+incompatible h1:PVtvnmmxSMUcT5AY6vG7sCCzRg3eyoW6vQvXtITC60c=
+github.com/docker/distribution v2.6.0-rc.1.0.20180105232752-277ed486c948+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
+github.com/docker/docker v1.4.2-0.20180109013817-94b8a116fbf1 h1:0NaIDWeMBQIQACbThhJaL8lts6EMPSTCMLeDstJ6gU8=
+github.com/docker/docker v1.4.2-0.20180109013817-94b8a116fbf1/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/go-connections v0.3.0 h1:3lOnM9cSzgGwx8VfK/NGOW5fLQ0GjIlCkaktF+n1M6o=
+github.com/docker/go-connections v0.3.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
+github.com/docker/go-units v0.3.3-0.20171221200356-d59758554a3d h1:dVaNRYvaGV23AdNdsm+4y1mPN0tj3/1v6taqKMmM6Ko=
+github.com/docker/go-units v0.3.3-0.20171221200356-d59758554a3d/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568 h1:BHsljHzVlRcyQhjrss6TZTdY2VfCqZPbv5k3iBFa2ZQ=
+github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
+github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
+github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
+github.com/gliderlabs/ssh v0.2.2 h1:6zsha5zo/TWhRhwqCD3+EarCAgZ2yN28ipRnGPnwkI0=
+github.com/gliderlabs/ssh v0.2.2/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
+github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
+github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
+github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
+github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
+github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
+github.com/gogo/protobuf v1.1.1 h1:72R+M5VuhED/KujmZVcIquuo8mBgX4oVda//DQb3PXo=
+github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
+github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
+github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
+github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
+github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
+github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
+github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
+github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
+github.com/gorilla/context v1.1.1 h1:AWwleXJkX/nhcU9bZSnZoi3h/qGYqQAGhq6zZe/aQW8=
+github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
+github.com/gorilla/mux v1.6.1-0.20180107155708-5bbbb5b2b572 h1:eWMpQtfzS3D63EI50baSfP/zjyqFM9tDfvVyAlCIMic=
+github.com/gorilla/mux v1.6.1-0.20180107155708-5bbbb5b2b572/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
+github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
+github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/imdario/mergo v0.3.8-0.20190415133143-5ef87b449ca7 h1:kUGMXUVH7IU1rKA3TZu9ROUE61dVv2SSgSsdeYKm0mg=
+github.com/imdario/mergo v0.3.8-0.20190415133143-5ef87b449ca7/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
+github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
+github.com/jmcvetta/randutil v0.0.0-20150817122601-2bb1b664bcff h1:6NvhExg4omUC9NfA+l4Oq3ibNNeJUdiAF3iBVB0PlDk=
+github.com/jmcvetta/randutil v0.0.0-20150817122601-2bb1b664bcff/go.mod h1:ddfPX8Z28YMjiqoaJhNBzWHapTHXejnB5cDCUWDwriw=
+github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM=
+github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
+github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
+github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
+github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
+github.com/julienschmidt/httprouter v1.2.0 h1:TDTW5Yz1mjftljbcKqRcrYhd4XeOoI98t+9HbQbYf7g=
+github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
+github.com/karalabe/xgo v0.0.0-20191115072854-c5ccff8648a7 h1:AYzjK/SHz6m6mg5iuFwkrAhCc14jvCpW9d6frC9iDPE=
+github.com/karalabe/xgo v0.0.0-20191115072854-c5ccff8648a7/go.mod h1:iYGcTYIPUvEWhFo6aKUuLchs+AV4ssYdyuBbQJZGcBk=
+github.com/kevinburke/ssh_config v0.0.0-20171013211458-802051befeb5 h1:xXn0nBttYwok7DhU4RxqaADEpQn7fEMt5kKc3yoj/n0=
+github.com/kevinburke/ssh_config v0.0.0-20171013211458-802051befeb5/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
+github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
+github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
+github.com/lib/pq v1.3.0 h1:/qkRGz8zljWiDcFvgpwUpwIAPu3r07TDvs3Rws+o/pU=
+github.com/lib/pq v1.3.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
+github.com/marstr/guid v1.1.1-0.20170427235115-8bdf7d1a087c h1:ouxemItv3B/Zh008HJkEXDYCN3BIRyNHxtUN7ThJ5Js=
+github.com/marstr/guid v1.1.1-0.20170427235115-8bdf7d1a087c/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
+github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
+github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
+github.com/mitchellh/go-homedir v0.0.0-20161203194507-b8bc1bf76747 h1:eQox4Rh4ewJF+mqYPxCkmBAirRnPaHEB26UkNuPyjlk=
+github.com/mitchellh/go-homedir v0.0.0-20161203194507-b8bc1bf76747/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
+github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
+github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
+github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
+github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
+github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
+github.com/opencontainers/go-digest v1.0.0-rc1 h1:WzifXhOVOEOuFYOJAW6aQqW0TooG2iki3E3Ii+WN7gQ=
+github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/image-spec v1.0.1-0.20171125024018-577479e4dc27 h1:8Q+VFspwMHwvVvpSS8xpuFQR7RpGX8G8ECXwgc/05sg=
+github.com/opencontainers/image-spec v1.0.1-0.20171125024018-577479e4dc27/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/pelletier/go-buffruneio v0.2.0 h1:U4t4R6YkofJ5xHm3dJzuRpPZ0mr5MMCoAWooScCR7aA=
+github.com/pelletier/go-buffruneio v0.2.0/go.mod h1:JkE26KsDizTr40EUHkXVtNPvgGtbSNq5BcowyYOWdKo=
+github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
+github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/pquerna/cachecontrol v0.0.0-20180517163645-1555304b9b35 h1:J9b7z+QKAmPf4YLrFg6oQUotqHQeUNWwkvo7jZp1GLU=
+github.com/pquerna/cachecontrol v0.0.0-20180517163645-1555304b9b35/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
+github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
+github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
+github.com/prometheus/client_golang v1.2.1 h1:JnMpQc6ppsNgw9QPAGF6Dod479itz7lvlsMzzNayLOI=
+github.com/prometheus/client_golang v1.2.1/go.mod h1:XMU6Z2MjaRKVu/dC1qupJI9SiNkDYzz3xecMgSW/F+U=
+github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
+github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM=
+github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
+github.com/prometheus/common v0.7.0 h1:L+1lyG48J1zAQXA3RBX/nG/B3gjlHq0zTt2tlbJLyCY=
+github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
+github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
+github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
+github.com/prometheus/procfs v0.0.5 h1:3+auTFlqw+ZaQYJARz6ArODtkaIwtvBTx3N2NehQlL8=
+github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
+github.com/satori/go.uuid v1.2.1-0.20180103174451-36e9d2ebbde5 h1:Jw7W4WMfQDxsXvfeFSaS2cHlY7bAF4MGrgnbd0+Uo78=
+github.com/satori/go.uuid v1.2.1-0.20180103174451-36e9d2ebbde5/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
+github.com/sergi/go-diff v1.0.0 h1:Kpca3qRNrduNnOQeazBd0ysaKrUJiIuISHxogkT9RPQ=
+github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
+github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
+github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
+github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
+github.com/src-d/gcfg v1.3.0 h1:2BEDr8r0I0b8h/fOqwtxCEiq2HJu8n2JGZJQFGXWLjg=
+github.com/src-d/gcfg v1.3.0/go.mod h1:p/UMsR43ujA89BJY9duynAwIpvqEujIH/jFlfL7jWoI=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
+github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
+github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
+github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
+github.com/xanzy/ssh-agent v0.1.0 h1:lOhdXLxtmYjaHc76ZtNmJWPg948y/RnT+3N3cvKWFzY=
+github.com/xanzy/ssh-agent v0.1.0/go.mod h1:0NyE30eGUDliuLEHJgYte/zncp2zdTStcOnWhgSqHD8=
+go.opencensus.io v0.21.0 h1:mU6zScU4U1YAFPHEHYk+3JC4SY7JxgkqS10ZOSyksNg=
+go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
+golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550 h1:ObdrDkeb4kJdCP557AjRjq69pTHfNouLtWZG7j9rPN8=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
+golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c h1:uOCk1iQW6Vc18bnC13MfzScl+wdKBmM9Y9kU7Z83/lw=
+golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190613194153-d28f0bde5980 h1:dfGZHvZk057jK2MCeWus/TowKpJ8y4AmooUzdBSR9GU=
+golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
+golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
+golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191105231009-c1f44814a5cd h1:3x5uuvBgE6oaXJjCOvpCC1IpgJogqQ+PqGGU3ZxAgII=
+golang.org/x/sys v0.0.0-20191105231009-c1f44814a5cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
+golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c h1:97SnQk1GYRXJgvwZ8fadnxDOWfKvkNQHH3CtZntPSrM=
+golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
+google.golang.org/api v0.13.0 h1:Q3Ui3V3/CVinFWFiW39Iw0kMuVrRzYX0wN6OPFp0lTA=
+google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
+google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.5.0 h1:KxkO13IPW4Lslp2bz+KHP2E3gtFlrIGNThxkZQ3g+4c=
+google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873 h1:nfPFGzJkUDX6uBmpN/pSw7MbOAWegH5QDQuoXFHedLg=
+google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
+google.golang.org/grpc v1.20.1 h1:Hz2g2wirWK7H0qIIhGIqRGTuMwTE8HEKFnDZZ7lm9NU=
+google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20161208181325-20d25e280405 h1:829vOVxxusYHC+IqBtkX5mbKtsY9fheQiQn0MZRVLfQ=
+gopkg.in/check.v1 v1.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/square/go-jose.v2 v2.3.1 h1:SK5KegNXmKmqE342YYN2qPHEnUYeoMiXXl1poUlI+o4=
+gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
+gopkg.in/src-d/go-billy.v4 v4.0.1 h1:iMxwQPj2cuKRyaIZ985zxClkcdTtT5VpXYf4PTJc0Ek=
+gopkg.in/src-d/go-billy.v4 v4.0.1/go.mod h1:ZHSF0JP+7oD97194otDUCD7Ofbk63+xFcfWP5bT6h+Q=
+gopkg.in/src-d/go-git-fixtures.v3 v3.5.0 h1:ivZFOIltbce2Mo8IjzUHAFoq/IylO9WHhNOAJK+LsJg=
+gopkg.in/src-d/go-git-fixtures.v3 v3.5.0/go.mod h1:dLBcvytrw/TYZsNTWCnkNF2DSIlzWYqTe3rJR56Ac7g=
+gopkg.in/src-d/go-git.v4 v4.0.0 h1:9ZRNKHuhaTaJRGcGaH6Qg7uUORO2X0MNB5WL/CDdqto=
+gopkg.in/src-d/go-git.v4 v4.0.0/go.mod h1:CzbUWqMn4pvmvndg3gnh5iZFmSsbhyhUWdI0IQ60AQo=
+gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME=
+gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
+gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
+gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I=
+gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+rsc.io/getopt v0.0.0-20170811000552-20be20937449 h1:UukjJOsjQH0DIuyyrcod6CXHS6cdaMMuJmrt+SN1j4A=
+rsc.io/getopt v0.0.0-20170811000552-20be20937449/go.mod h1:dhCdeqAxkyt5u3/sKRkUXuHaMXUu1Pt13GTQAM2xnig=
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package boot
+
+import (
+ "context"
+ "io/ioutil"
+ "path/filepath"
+)
+
+// Create a root CA key and use it to make a new server
+// certificate+key pair.
+//
+// In future we'll make one root CA key per host instead of one per
+// cluster, so it only needs to be imported to a browser once for
+// ongoing dev/test usage.
+type createCertificates struct{}
+
+func (createCertificates) String() string {
+ return "certificates"
+}
+
+func (createCertificates) Run(ctx context.Context, fail func(error), super *Supervisor) error {
+ // Generate root key
+ err := super.RunProgram(ctx, super.tempdir, nil, nil, "openssl", "genrsa", "-out", "rootCA.key", "4096")
+ if err != nil {
+ return err
+ }
+ // Generate a self-signed root certificate
+ err = super.RunProgram(ctx, super.tempdir, nil, nil, "openssl", "req", "-x509", "-new", "-nodes", "-key", "rootCA.key", "-sha256", "-days", "3650", "-out", "rootCA.crt", "-subj", "/C=US/ST=MA/O=Example Org/CN=localhost")
+ if err != nil {
+ return err
+ }
+ // Generate server key
+ err = super.RunProgram(ctx, super.tempdir, nil, nil, "openssl", "genrsa", "-out", "server.key", "2048")
+ if err != nil {
+ return err
+ }
+ // Build config file for signing request
+ defaultconf, err := ioutil.ReadFile("/etc/ssl/openssl.cnf")
+ if err != nil {
+ return err
+ }
+ err = ioutil.WriteFile(filepath.Join(super.tempdir, "server.cfg"), append(defaultconf, []byte(`
+[SAN]
+subjectAltName=DNS:localhost,DNS:localhost.localdomain
+`)...), 0644)
+ if err != nil {
+ return err
+ }
+ // Generate signing request
+ err = super.RunProgram(ctx, super.tempdir, nil, nil, "openssl", "req", "-new", "-sha256", "-key", "server.key", "-subj", "/C=US/ST=MA/O=Example Org/CN=localhost", "-reqexts", "SAN", "-config", "server.cfg", "-out", "server.csr")
+ if err != nil {
+ return err
+ }
+ // Sign certificate
+ err = super.RunProgram(ctx, super.tempdir, nil, nil, "openssl", "x509", "-req", "-in", "server.csr", "-CA", "rootCA.crt", "-CAkey", "rootCA.key", "-CAcreateserial", "-out", "server.crt", "-days", "3650", "-sha256")
+ if err != nil {
+ return err
+ }
+ return nil
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package boot
+
+import (
+ "context"
+ "flag"
+ "fmt"
+ "io"
+
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+)
+
+var Command cmd.Handler = bootCommand{}
+
+type supervisedTask interface {
+ // Execute the task. Run should return nil when the task is
+ // done enough to satisfy a dependency relationship (e.g., the
+ // service is running and ready). If the task starts a
+ // goroutine that fails after Run returns (e.g., the service
+ // shuts down), it should call fail().
+ Run(ctx context.Context, fail func(error), super *Supervisor) error
+ String() string
+}
+
+type bootCommand struct{}
+
+func (bootCommand) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
+ super := &Supervisor{
+ Stderr: stderr,
+ logger: ctxlog.New(stderr, "json", "info"),
+ }
+
+ ctx := ctxlog.Context(context.Background(), super.logger)
+ ctx, cancel := context.WithCancel(ctx)
+ defer cancel()
+
+ var err error
+ defer func() {
+ if err != nil {
+ super.logger.WithError(err).Info("exiting")
+ }
+ }()
+
+ flags := flag.NewFlagSet(prog, flag.ContinueOnError)
+ flags.SetOutput(stderr)
+ loader := config.NewLoader(stdin, super.logger)
+ loader.SetupFlags(flags)
+ versionFlag := flags.Bool("version", false, "Write version information to stdout and exit 0")
+ flags.StringVar(&super.SourcePath, "source", ".", "arvados source tree `directory`")
+ flags.StringVar(&super.ClusterType, "type", "production", "cluster `type`: development, test, or production")
+ flags.StringVar(&super.ListenHost, "listen-host", "localhost", "host name or interface address for service listeners")
+ flags.StringVar(&super.ControllerAddr, "controller-address", ":0", "desired controller address, `host:port` or `:port`")
+ flags.BoolVar(&super.OwnTemporaryDatabase, "own-temporary-database", false, "bring up a postgres server and create a temporary database")
+ err = flags.Parse(args)
+ if err == flag.ErrHelp {
+ err = nil
+ return 0
+ } else if err != nil {
+ return 2
+ } else if *versionFlag {
+ return cmd.Version.RunCommand(prog, args, stdin, stdout, stderr)
+ } else if super.ClusterType != "development" && super.ClusterType != "test" && super.ClusterType != "production" {
+ err = fmt.Errorf("cluster type must be 'development', 'test', or 'production'")
+ return 2
+ }
+
+ loader.SkipAPICalls = true
+ cfg, err := loader.Load()
+ if err != nil {
+ return 1
+ }
+
+ super.Start(ctx, cfg)
+ defer super.Stop()
+ url, ok := super.WaitReady()
+ if !ok {
+ return 1
+ }
+ // Write controller URL to stdout. Nothing else goes to
+ // stdout, so this provides an easy way for a calling script
+ // to discover the controller URL when everything is ready.
+ fmt.Fprintln(stdout, url)
+ // Wait for signal/crash + orderly shutdown
+ <-super.done
+ return 0
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package boot
+
+import (
+ "context"
+ "fmt"
+ "io/ioutil"
+ "net"
+ "os"
+ "os/exec"
+ "path/filepath"
+ "regexp"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+)
+
+// Run an Nginx process that proxies the supervisor's configured
+// ExternalURLs to the appropriate InternalURLs.
+type runNginx struct{}
+
+func (runNginx) String() string {
+ return "nginx"
+}
+
+func (runNginx) Run(ctx context.Context, fail func(error), super *Supervisor) error {
+ vars := map[string]string{
+ "LISTENHOST": super.ListenHost,
+ "SSLCERT": filepath.Join(super.SourcePath, "services", "api", "tmp", "self-signed.pem"), // TODO: root ca
+ "SSLKEY": filepath.Join(super.SourcePath, "services", "api", "tmp", "self-signed.key"), // TODO: root ca
+ "ACCESSLOG": filepath.Join(super.tempdir, "nginx_access.log"),
+ "ERRORLOG": filepath.Join(super.tempdir, "nginx_error.log"),
+ "TMPDIR": super.tempdir,
+ }
+ var err error
+ for _, cmpt := range []struct {
+ varname string
+ svc arvados.Service
+ }{
+ {"CONTROLLER", super.cluster.Services.Controller},
+ {"KEEPWEB", super.cluster.Services.WebDAV},
+ {"KEEPWEBDL", super.cluster.Services.WebDAVDownload},
+ {"KEEPPROXY", super.cluster.Services.Keepproxy},
+ {"GIT", super.cluster.Services.GitHTTP},
+ {"WORKBENCH1", super.cluster.Services.Workbench1},
+ {"WS", super.cluster.Services.Websocket},
+ } {
+ port, err := internalPort(cmpt.svc)
+ if err != nil {
+ return fmt.Errorf("%s internal port: %s (%v)", cmpt.varname, err, cmpt.svc)
+ }
+ if ok, err := addrIsLocal(net.JoinHostPort(super.ListenHost, port)); !ok || err != nil {
+ return fmt.Errorf("urlIsLocal() failed for host %q port %q: %v", super.ListenHost, port, err)
+ }
+ vars[cmpt.varname+"PORT"] = port
+
+ port, err = externalPort(cmpt.svc)
+ if err != nil {
+ return fmt.Errorf("%s external port: %s (%v)", cmpt.varname, err, cmpt.svc)
+ }
+ if ok, err := addrIsLocal(net.JoinHostPort(super.ListenHost, port)); !ok || err != nil {
+ return fmt.Errorf("urlIsLocal() failed for host %q port %q: %v", super.ListenHost, port, err)
+ }
+ vars[cmpt.varname+"SSLPORT"] = port
+ }
+ tmpl, err := ioutil.ReadFile(filepath.Join(super.SourcePath, "sdk", "python", "tests", "nginx.conf"))
+ if err != nil {
+ return err
+ }
+ conf := regexp.MustCompile(`{{.*?}}`).ReplaceAllStringFunc(string(tmpl), func(src string) string {
+ if len(src) < 4 {
+ return src
+ }
+ return vars[src[2:len(src)-2]]
+ })
+ conffile := filepath.Join(super.tempdir, "nginx.conf")
+ err = ioutil.WriteFile(conffile, []byte(conf), 0644)
+ if err != nil {
+ return err
+ }
+ nginx := "nginx"
+ if _, err := exec.LookPath(nginx); err != nil {
+ for _, dir := range []string{"/sbin", "/usr/sbin", "/usr/local/sbin"} {
+ if _, err = os.Stat(dir + "/nginx"); err == nil {
+ nginx = dir + "/nginx"
+ break
+ }
+ }
+ }
+ super.waitShutdown.Add(1)
+ go func() {
+ defer super.waitShutdown.Done()
+ fail(super.RunProgram(ctx, ".", nil, nil, nginx,
+ "-g", "error_log stderr info;",
+ "-g", "pid "+filepath.Join(super.tempdir, "nginx.pid")+";",
+ "-c", conffile))
+ }()
+ return waitForConnect(ctx, super.cluster.Services.Controller.ExternalURL.Host)
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package boot
+
+import (
+ "bytes"
+ "context"
+ "fmt"
+ "os"
+ "path/filepath"
+ "strings"
+ "sync"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+)
+
+// Don't trust "passenger-config" (or "bundle install") to handle
+// concurrent installs.
+var passengerInstallMutex sync.Mutex
+
+var railsEnv = []string{
+ "ARVADOS_RAILS_LOG_TO_STDOUT=1",
+ "ARVADOS_CONFIG_NOLEGACY=1", // don't load database.yml from source tree
+}
+
+// Install a Rails application's dependencies, including phusion
+// passenger.
+type installPassenger struct {
+ src string
+ depends []supervisedTask
+}
+
+func (runner installPassenger) String() string {
+ return "installPassenger:" + runner.src
+}
+
+func (runner installPassenger) Run(ctx context.Context, fail func(error), super *Supervisor) error {
+ err := super.wait(ctx, runner.depends...)
+ if err != nil {
+ return err
+ }
+
+ passengerInstallMutex.Lock()
+ defer passengerInstallMutex.Unlock()
+
+ var buf bytes.Buffer
+ err = super.RunProgram(ctx, runner.src, &buf, nil, "gem", "list", "--details", "bundler")
+ if err != nil {
+ return err
+ }
+ for _, version := range []string{"1.11.0", "1.17.3", "2.0.2"} {
+ if !strings.Contains(buf.String(), "("+version+")") {
+ err = super.RunProgram(ctx, runner.src, nil, nil, "gem", "install", "--user", "bundler:1.11", "bundler:1.17.3", "bundler:2.0.2")
+ if err != nil {
+ return err
+ }
+ break
+ }
+ }
+ err = super.RunProgram(ctx, runner.src, nil, nil, "bundle", "install", "--jobs", "4", "--path", filepath.Join(os.Getenv("HOME"), ".gem"))
+ if err != nil {
+ return err
+ }
+ err = super.RunProgram(ctx, runner.src, nil, nil, "bundle", "exec", "passenger-config", "build-native-support")
+ if err != nil {
+ return err
+ }
+ err = super.RunProgram(ctx, runner.src, nil, nil, "bundle", "exec", "passenger-config", "install-standalone-runtime")
+ if err != nil {
+ return err
+ }
+ err = super.RunProgram(ctx, runner.src, nil, nil, "bundle", "exec", "passenger-config", "validate-install")
+ if err != nil && !strings.Contains(err.Error(), "exit status 2") {
+ // Exit code 2 indicates there were warnings (like
+ // "other passenger installations have been detected",
+ // which we can't expect to avoid) but no errors.
+ // Other non-zero exit codes (1, 9) indicate errors.
+ return err
+ }
+ return nil
+}
+
+type runPassenger struct {
+ src string
+ svc arvados.Service
+ depends []supervisedTask
+}
+
+func (runner runPassenger) String() string {
+ return "runPassenger:" + runner.src
+}
+
+func (runner runPassenger) Run(ctx context.Context, fail func(error), super *Supervisor) error {
+ err := super.wait(ctx, runner.depends...)
+ if err != nil {
+ return err
+ }
+ port, err := internalPort(runner.svc)
+ if err != nil {
+ return fmt.Errorf("bug: no internalPort for %q: %v (%#v)", runner, err, runner.svc)
+ }
+ loglevel := "4"
+ if lvl, ok := map[string]string{
+ "debug": "5",
+ "info": "4",
+ "warn": "2",
+ "warning": "2",
+ "error": "1",
+ "fatal": "0",
+ "panic": "0",
+ }[super.cluster.SystemLogs.LogLevel]; ok {
+ loglevel = lvl
+ }
+ super.waitShutdown.Add(1)
+ go func() {
+ defer super.waitShutdown.Done()
+ err = super.RunProgram(ctx, runner.src, nil, railsEnv, "bundle", "exec",
+ "passenger", "start",
+ "-p", port,
+ "--log-file", "/dev/stderr",
+ "--log-level", loglevel,
+ "--no-friendly-error-pages",
+ "--pid-file", filepath.Join(super.tempdir, "passenger."+strings.Replace(runner.src, "/", "_", -1)+".pid"))
+ fail(err)
+ }()
+ return nil
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package boot
+
+import (
+ "bytes"
+ "context"
+ "database/sql"
+ "fmt"
+ "os"
+ "os/exec"
+ "path/filepath"
+ "strings"
+ "time"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "github.com/lib/pq"
+)
+
+// Run a postgresql server in a private data directory. Set up a db
+// user, database, and TCP listener that match the supervisor's
+// configured database connection info.
+type runPostgreSQL struct{}
+
+func (runPostgreSQL) String() string {
+ return "postgresql"
+}
+
+func (runPostgreSQL) Run(ctx context.Context, fail func(error), super *Supervisor) error {
+ err := super.wait(ctx, createCertificates{})
+ if err != nil {
+ return err
+ }
+
+ buf := bytes.NewBuffer(nil)
+ err = super.RunProgram(ctx, super.tempdir, buf, nil, "pg_config", "--bindir")
+ if err != nil {
+ return err
+ }
+ bindir := strings.TrimSpace(buf.String())
+
+ datadir := filepath.Join(super.tempdir, "pgdata")
+ err = os.Mkdir(datadir, 0755)
+ if err != nil {
+ return err
+ }
+ err = super.RunProgram(ctx, super.tempdir, nil, nil, filepath.Join(bindir, "initdb"), "-D", datadir)
+ if err != nil {
+ return err
+ }
+
+ err = super.RunProgram(ctx, super.tempdir, nil, nil, "cp", "server.crt", "server.key", datadir)
+ if err != nil {
+ return err
+ }
+
+ port := super.cluster.PostgreSQL.Connection["port"]
+
+ super.waitShutdown.Add(1)
+ go func() {
+ defer super.waitShutdown.Done()
+ fail(super.RunProgram(ctx, super.tempdir, nil, nil, filepath.Join(bindir, "postgres"),
+ "-l", // enable ssl
+ "-D", datadir, // data dir
+ "-k", datadir, // socket dir
+ "-p", super.cluster.PostgreSQL.Connection["port"],
+ ))
+ }()
+
+ for {
+ if ctx.Err() != nil {
+ return ctx.Err()
+ }
+ if exec.CommandContext(ctx, "pg_isready", "--timeout=10", "--host="+super.cluster.PostgreSQL.Connection["host"], "--port="+port).Run() == nil {
+ break
+ }
+ time.Sleep(time.Second / 2)
+ }
+ db, err := sql.Open("postgres", arvados.PostgreSQLConnection{
+ "host": datadir,
+ "port": port,
+ "dbname": "postgres",
+ }.String())
+ if err != nil {
+ return fmt.Errorf("db open failed: %s", err)
+ }
+ defer db.Close()
+ conn, err := db.Conn(ctx)
+ if err != nil {
+ return fmt.Errorf("db conn failed: %s", err)
+ }
+ defer conn.Close()
+ _, err = conn.ExecContext(ctx, `CREATE USER `+pq.QuoteIdentifier(super.cluster.PostgreSQL.Connection["user"])+` WITH SUPERUSER ENCRYPTED PASSWORD `+pq.QuoteLiteral(super.cluster.PostgreSQL.Connection["password"]))
+ if err != nil {
+ return fmt.Errorf("createuser failed: %s", err)
+ }
+ _, err = conn.ExecContext(ctx, `CREATE DATABASE `+pq.QuoteIdentifier(super.cluster.PostgreSQL.Connection["dbname"]))
+ if err != nil {
+ return fmt.Errorf("createdb failed: %s", err)
+ }
+ return nil
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package boot
+
+import (
+ "context"
+)
+
+// Populate a blank database with arvados tables and seed rows.
+type seedDatabase struct{}
+
+func (seedDatabase) String() string {
+ return "seedDatabase"
+}
+
+func (seedDatabase) Run(ctx context.Context, fail func(error), super *Supervisor) error {
+ err := super.wait(ctx, runPostgreSQL{}, installPassenger{src: "services/api"})
+ if err != nil {
+ return err
+ }
+ err = super.RunProgram(ctx, "services/api", nil, railsEnv, "bundle", "exec", "rake", "db:setup")
+ if err != nil {
+ return err
+ }
+ return nil
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package boot
+
+import (
+ "context"
+ "errors"
+ "path/filepath"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+)
+
+// Run a service using the arvados-server binary.
+//
+// In future this will bring up the service in the current process,
+// but for now (at least until the subcommand handlers get a shutdown
+// mechanism) it starts a child process using the arvados-server
+// binary, which the supervisor is assumed to have installed in
+// {super.tempdir}/bin/.
+type runServiceCommand struct {
+ name string // arvados-server subcommand, e.g., "controller"
+ svc arvados.Service // cluster.Services.* entry with the desired InternalURLs
+ depends []supervisedTask // wait for these tasks before starting
+}
+
+func (runner runServiceCommand) String() string {
+ return runner.name
+}
+
+func (runner runServiceCommand) Run(ctx context.Context, fail func(error), super *Supervisor) error {
+ binfile := filepath.Join(super.tempdir, "bin", "arvados-server")
+ err := super.RunProgram(ctx, super.tempdir, nil, nil, binfile, "-version")
+ if err != nil {
+ return err
+ }
+ err = super.wait(ctx, runner.depends...)
+ if err != nil {
+ return err
+ }
+ for u := range runner.svc.InternalURLs {
+ u := u
+ if islocal, err := addrIsLocal(u.Host); err != nil {
+ return err
+ } else if !islocal {
+ continue
+ }
+ super.waitShutdown.Add(1)
+ go func() {
+ defer super.waitShutdown.Done()
+ fail(super.RunProgram(ctx, super.tempdir, nil, []string{"ARVADOS_SERVICE_INTERNAL_URL=" + u.String()}, binfile, runner.name, "-config", super.configfile))
+ }()
+ }
+ return nil
+}
+
+// Run a Go service that isn't bundled in arvados-server.
+type runGoProgram struct {
+ src string // source dir, e.g., "services/keepproxy"
+ svc arvados.Service // cluster.Services.* entry with the desired InternalURLs
+ depends []supervisedTask // wait for these tasks before starting
+}
+
+func (runner runGoProgram) String() string {
+ _, basename := filepath.Split(runner.src)
+ return basename
+}
+
+func (runner runGoProgram) Run(ctx context.Context, fail func(error), super *Supervisor) error {
+ if len(runner.svc.InternalURLs) == 0 {
+ return errors.New("bug: runGoProgram needs non-empty svc.InternalURLs")
+ }
+
+ binfile, err := super.installGoProgram(ctx, runner.src)
+ if err != nil {
+ return err
+ }
+ if ctx.Err() != nil {
+ return ctx.Err()
+ }
+
+ err = super.RunProgram(ctx, super.tempdir, nil, nil, binfile, "-version")
+ if err != nil {
+ return err
+ }
+
+ err = super.wait(ctx, runner.depends...)
+ if err != nil {
+ return err
+ }
+ for u := range runner.svc.InternalURLs {
+ u := u
+ if islocal, err := addrIsLocal(u.Host); err != nil {
+ return err
+ } else if !islocal {
+ continue
+ }
+ super.waitShutdown.Add(1)
+ go func() {
+ defer super.waitShutdown.Done()
+ fail(super.RunProgram(ctx, super.tempdir, nil, []string{"ARVADOS_SERVICE_INTERNAL_URL=" + u.String()}, binfile))
+ }()
+ }
+ return nil
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package boot
+
+import (
+ "bytes"
+ "context"
+ "crypto/rand"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "net"
+ "os"
+ "os/exec"
+ "os/signal"
+ "os/user"
+ "path/filepath"
+ "strings"
+ "sync"
+ "syscall"
+ "time"
+
+ "git.arvados.org/arvados.git/lib/service"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/health"
+ "github.com/sirupsen/logrus"
+)
+
+type Supervisor struct {
+ SourcePath string // e.g., /home/username/src/arvados
+ SourceVersion string // e.g., acbd1324...
+ ClusterType string // e.g., production
+ ListenHost string // e.g., localhost
+ ControllerAddr string // e.g., 127.0.0.1:8000
+ OwnTemporaryDatabase bool
+ Stderr io.Writer
+
+ logger logrus.FieldLogger
+ cluster *arvados.Cluster
+
+ ctx context.Context
+ cancel context.CancelFunc
+ done chan struct{}
+ healthChecker *health.Aggregator
+ tasksReady map[string]chan bool
+ waitShutdown sync.WaitGroup
+
+ tempdir string
+ configfile string
+ environ []string // for child processes
+}
+
+func (super *Supervisor) Start(ctx context.Context, cfg *arvados.Config) {
+ super.ctx, super.cancel = context.WithCancel(ctx)
+ super.done = make(chan struct{})
+
+ go func() {
+ sigch := make(chan os.Signal, 1)
+ signal.Notify(sigch, syscall.SIGINT, syscall.SIGTERM)
+ defer signal.Stop(sigch)
+ go func() {
+ for sig := range sigch {
+ super.logger.WithField("signal", sig).Info("caught signal")
+ super.cancel()
+ }
+ }()
+
+ err := super.run(cfg)
+ if err != nil {
+ super.logger.WithError(err).Warn("supervisor shut down")
+ }
+ close(super.done)
+ }()
+}
+
+func (super *Supervisor) run(cfg *arvados.Config) error {
+ cwd, err := os.Getwd()
+ if err != nil {
+ return err
+ }
+ if !strings.HasPrefix(super.SourcePath, "/") {
+ super.SourcePath = filepath.Join(cwd, super.SourcePath)
+ }
+ super.SourcePath, err = filepath.EvalSymlinks(super.SourcePath)
+ if err != nil {
+ return err
+ }
+
+ super.tempdir, err = ioutil.TempDir("", "arvados-server-boot-")
+ if err != nil {
+ return err
+ }
+ defer os.RemoveAll(super.tempdir)
+ if err := os.Mkdir(filepath.Join(super.tempdir, "bin"), 0755); err != nil {
+ return err
+ }
+
+ // Fill in any missing config keys, and write the resulting
+ // config in the temp dir for child services to use.
+ err = super.autofillConfig(cfg)
+ if err != nil {
+ return err
+ }
+ conffile, err := os.OpenFile(filepath.Join(super.tempdir, "config.yml"), os.O_CREATE|os.O_WRONLY, 0644)
+ if err != nil {
+ return err
+ }
+ defer conffile.Close()
+ err = json.NewEncoder(conffile).Encode(cfg)
+ if err != nil {
+ return err
+ }
+ err = conffile.Close()
+ if err != nil {
+ return err
+ }
+ super.configfile = conffile.Name()
+
+ super.environ = os.Environ()
+ super.cleanEnv([]string{"ARVADOS_"})
+ super.setEnv("ARVADOS_CONFIG", super.configfile)
+ super.setEnv("RAILS_ENV", super.ClusterType)
+ super.setEnv("TMPDIR", super.tempdir)
+ super.prependEnv("PATH", filepath.Join(super.tempdir, "bin")+":")
+
+ super.cluster, err = cfg.GetCluster("")
+ if err != nil {
+ return err
+ }
+ // Now that we have the config, replace the bootstrap logger
+ // with a new one according to the logging config.
+ loglevel := super.cluster.SystemLogs.LogLevel
+ if s := os.Getenv("ARVADOS_DEBUG"); s != "" && s != "0" {
+ loglevel = "debug"
+ }
+ super.logger = ctxlog.New(super.Stderr, super.cluster.SystemLogs.Format, loglevel).WithFields(logrus.Fields{
+ "PID": os.Getpid(),
+ })
+
+ if super.SourceVersion == "" {
+ // Find current source tree version.
+ var buf bytes.Buffer
+ err = super.RunProgram(super.ctx, ".", &buf, nil, "git", "diff", "--shortstat")
+ if err != nil {
+ return err
+ }
+ dirty := buf.Len() > 0
+ buf.Reset()
+ err = super.RunProgram(super.ctx, ".", &buf, nil, "git", "log", "-n1", "--format=%H")
+ if err != nil {
+ return err
+ }
+ super.SourceVersion = strings.TrimSpace(buf.String())
+ if dirty {
+ super.SourceVersion += "+uncommitted"
+ }
+ } else {
+ return errors.New("specifying a version to run is not yet supported")
+ }
+
+ _, err = super.installGoProgram(super.ctx, "cmd/arvados-server")
+ if err != nil {
+ return err
+ }
+ err = super.setupRubyEnv()
+ if err != nil {
+ return err
+ }
+
+ tasks := []supervisedTask{
+ createCertificates{},
+ runPostgreSQL{},
+ runNginx{},
+ runServiceCommand{name: "controller", svc: super.cluster.Services.Controller, depends: []supervisedTask{runPostgreSQL{}}},
+ runGoProgram{src: "services/arv-git-httpd", svc: super.cluster.Services.GitHTTP},
+ runGoProgram{src: "services/health", svc: super.cluster.Services.Health},
+ runGoProgram{src: "services/keepproxy", svc: super.cluster.Services.Keepproxy, depends: []supervisedTask{runPassenger{src: "services/api"}}},
+ runGoProgram{src: "services/keepstore", svc: super.cluster.Services.Keepstore},
+ runGoProgram{src: "services/keep-web", svc: super.cluster.Services.WebDAV},
+ runGoProgram{src: "services/ws", svc: super.cluster.Services.Websocket, depends: []supervisedTask{runPostgreSQL{}}},
+ installPassenger{src: "services/api"},
+ runPassenger{src: "services/api", svc: super.cluster.Services.RailsAPI, depends: []supervisedTask{createCertificates{}, runPostgreSQL{}, installPassenger{src: "services/api"}}},
+ installPassenger{src: "apps/workbench", depends: []supervisedTask{installPassenger{src: "services/api"}}}, // dependency ensures workbench doesn't delay api startup
+ runPassenger{src: "apps/workbench", svc: super.cluster.Services.Workbench1, depends: []supervisedTask{installPassenger{src: "apps/workbench"}}},
+ seedDatabase{},
+ }
+ if super.ClusterType != "test" {
+ tasks = append(tasks,
+ runServiceCommand{name: "dispatch-cloud", svc: super.cluster.Services.Controller},
+ runGoProgram{src: "services/keep-balance"},
+ )
+ }
+ super.tasksReady = map[string]chan bool{}
+ for _, task := range tasks {
+ super.tasksReady[task.String()] = make(chan bool)
+ }
+ for _, task := range tasks {
+ task := task
+ fail := func(err error) {
+ if super.ctx.Err() != nil {
+ return
+ }
+ super.cancel()
+ super.logger.WithField("task", task.String()).WithError(err).Error("task failed")
+ }
+ go func() {
+ super.logger.WithField("task", task.String()).Info("starting")
+ err := task.Run(super.ctx, fail, super)
+ if err != nil {
+ fail(err)
+ return
+ }
+ close(super.tasksReady[task.String()])
+ }()
+ }
+ err = super.wait(super.ctx, tasks...)
+ if err != nil {
+ return err
+ }
+ super.logger.Info("all startup tasks are complete; starting health checks")
+ super.healthChecker = &health.Aggregator{Cluster: super.cluster}
+ <-super.ctx.Done()
+ super.logger.Info("shutting down")
+ super.waitShutdown.Wait()
+ return super.ctx.Err()
+}
+
+func (super *Supervisor) wait(ctx context.Context, tasks ...supervisedTask) error {
+ for _, task := range tasks {
+ ch, ok := super.tasksReady[task.String()]
+ if !ok {
+ return fmt.Errorf("no such task: %s", task)
+ }
+ super.logger.WithField("task", task.String()).Info("waiting")
+ select {
+ case <-ch:
+ super.logger.WithField("task", task.String()).Info("ready")
+ case <-ctx.Done():
+ super.logger.WithField("task", task.String()).Info("task was never ready")
+ return ctx.Err()
+ }
+ }
+ return nil
+}
+
+func (super *Supervisor) Stop() {
+ super.cancel()
+ <-super.done
+}
+
+func (super *Supervisor) WaitReady() (*arvados.URL, bool) {
+ ticker := time.NewTicker(time.Second)
+ defer ticker.Stop()
+ for waiting := "all"; waiting != ""; {
+ select {
+ case <-ticker.C:
+ case <-super.ctx.Done():
+ return nil, false
+ }
+ if super.healthChecker == nil {
+ // not set up yet
+ continue
+ }
+ resp := super.healthChecker.ClusterHealth()
+ // The overall health check (resp.Health=="OK") might
+ // never pass due to missing components (like
+ // arvados-dispatch-cloud in a test cluster), so
+ // instead we wait for all configured components to
+ // pass.
+ waiting = ""
+ for target, check := range resp.Checks {
+ if check.Health != "OK" {
+ waiting += " " + target
+ }
+ }
+ if waiting != "" {
+ super.logger.WithField("targets", waiting[1:]).Info("waiting")
+ }
+ }
+ u := super.cluster.Services.Controller.ExternalURL
+ return &u, true
+}
+
+func (super *Supervisor) prependEnv(key, prepend string) {
+ for i, s := range super.environ {
+ if strings.HasPrefix(s, key+"=") {
+ super.environ[i] = key + "=" + prepend + s[len(key)+1:]
+ return
+ }
+ }
+ super.environ = append(super.environ, key+"="+prepend)
+}
+
+func (super *Supervisor) cleanEnv(prefixes []string) {
+ var cleaned []string
+ for _, s := range super.environ {
+ drop := false
+ for _, p := range prefixes {
+ if strings.HasPrefix(s, p) {
+ drop = true
+ break
+ }
+ }
+ if !drop {
+ cleaned = append(cleaned, s)
+ }
+ }
+ super.environ = cleaned
+}
+
+func (super *Supervisor) setEnv(key, val string) {
+ for i, s := range super.environ {
+ if strings.HasPrefix(s, key+"=") {
+ super.environ[i] = key + "=" + val
+ return
+ }
+ }
+ super.environ = append(super.environ, key+"="+val)
+}
+
+// Remove all but the first occurrence of each env var.
+func dedupEnv(in []string) []string {
+ saw := map[string]bool{}
+ var out []string
+ for _, kv := range in {
+ if split := strings.Index(kv, "="); split < 1 {
+ panic("invalid environment var: " + kv)
+ } else if saw[kv[:split]] {
+ continue
+ } else {
+ saw[kv[:split]] = true
+ out = append(out, kv)
+ }
+ }
+ return out
+}
+
+func (super *Supervisor) installGoProgram(ctx context.Context, srcpath string) (string, error) {
+ _, basename := filepath.Split(srcpath)
+ bindir := filepath.Join(super.tempdir, "bin")
+ binfile := filepath.Join(bindir, basename)
+ err := super.RunProgram(ctx, filepath.Join(super.SourcePath, srcpath), nil, []string{"GOBIN=" + bindir}, "go", "install", "-ldflags", "-X git.arvados.org/arvados.git/lib/cmd.version="+super.SourceVersion+" -X main.version="+super.SourceVersion)
+ return binfile, err
+}
+
+func (super *Supervisor) usingRVM() bool {
+ return os.Getenv("rvm_path") != ""
+}
+
+func (super *Supervisor) setupRubyEnv() error {
+ if !super.usingRVM() {
+ // (If rvm is in use, assume the caller has everything
+ // set up as desired)
+ super.cleanEnv([]string{
+ "GEM_HOME=",
+ "GEM_PATH=",
+ })
+ cmd := exec.Command("gem", "env", "gempath")
+ cmd.Env = super.environ
+ buf, err := cmd.Output() // /var/lib/arvados/.gem/ruby/2.5.0/bin:...
+ if err != nil || len(buf) == 0 {
+ return fmt.Errorf("gem env gempath: %v", err)
+ }
+ gempath := string(bytes.Split(buf, []byte{':'})[0])
+ super.prependEnv("PATH", gempath+"/bin:")
+ super.setEnv("GEM_HOME", gempath)
+ super.setEnv("GEM_PATH", gempath)
+ }
+ // Passenger install doesn't work unless $HOME is ~user
+ u, err := user.Current()
+ if err != nil {
+ return err
+ }
+ super.setEnv("HOME", u.HomeDir)
+ return nil
+}
+
+func (super *Supervisor) lookPath(prog string) string {
+ for _, val := range super.environ {
+ if strings.HasPrefix(val, "PATH=") {
+ for _, dir := range filepath.SplitList(val[5:]) {
+ path := filepath.Join(dir, prog)
+ if fi, err := os.Stat(path); err == nil && fi.Mode()&0111 != 0 {
+ return path
+ }
+ }
+ }
+ }
+ return prog
+}
+
+// Run prog with args, using dir as working directory. If ctx is
+// cancelled while the child is running, RunProgram terminates the
+// child, waits for it to exit, then returns.
+//
+// Child's environment will have our env vars, plus any given in env.
+//
+// Child's stdout will be written to output if non-nil, otherwise the
+// boot command's stderr.
+func (super *Supervisor) RunProgram(ctx context.Context, dir string, output io.Writer, env []string, prog string, args ...string) error {
+ cmdline := fmt.Sprintf("%s", append([]string{prog}, args...))
+ super.logger.WithField("command", cmdline).WithField("dir", dir).Info("executing")
+
+ logprefix := strings.TrimPrefix(prog, super.tempdir+"/bin/")
+ if logprefix == "bundle" && len(args) > 2 && args[0] == "exec" {
+ logprefix = args[1]
+ } else if logprefix == "arvados-server" && len(args) > 1 {
+ logprefix = args[0]
+ }
+ if !strings.HasPrefix(dir, "/") {
+ logprefix = dir + ": " + logprefix
+ }
+
+ cmd := exec.Command(super.lookPath(prog), args...)
+ stdout, err := cmd.StdoutPipe()
+ if err != nil {
+ return err
+ }
+ stderr, err := cmd.StderrPipe()
+ if err != nil {
+ return err
+ }
+ logwriter := &service.LogPrefixer{Writer: super.Stderr, Prefix: []byte("[" + logprefix + "] ")}
+ var copiers sync.WaitGroup
+ copiers.Add(1)
+ go func() {
+ io.Copy(logwriter, stderr)
+ copiers.Done()
+ }()
+ copiers.Add(1)
+ go func() {
+ if output == nil {
+ io.Copy(logwriter, stdout)
+ } else {
+ io.Copy(output, stdout)
+ }
+ copiers.Done()
+ }()
+
+ if strings.HasPrefix(dir, "/") {
+ cmd.Dir = dir
+ } else {
+ cmd.Dir = filepath.Join(super.SourcePath, dir)
+ }
+ env = append([]string(nil), env...)
+ env = append(env, super.environ...)
+ cmd.Env = dedupEnv(env)
+
+ exited := false
+ defer func() { exited = true }()
+ go func() {
+ <-ctx.Done()
+ log := ctxlog.FromContext(ctx).WithFields(logrus.Fields{"dir": dir, "cmdline": cmdline})
+ for !exited {
+ if cmd.Process == nil {
+ log.Debug("waiting for child process to start")
+ time.Sleep(time.Second / 2)
+ } else {
+ log.WithField("PID", cmd.Process.Pid).Debug("sending SIGTERM")
+ cmd.Process.Signal(syscall.SIGTERM)
+ time.Sleep(5 * time.Second)
+ if !exited {
+ stdout.Close()
+ stderr.Close()
+ log.WithField("PID", cmd.Process.Pid).Warn("still waiting for child process to exit 5s after SIGTERM")
+ }
+ }
+ }
+ }()
+
+ err = cmd.Start()
+ if err != nil {
+ return err
+ }
+ copiers.Wait()
+ err = cmd.Wait()
+ if ctx.Err() != nil {
+ // Return "context canceled", instead of the "killed"
+ // error that was probably caused by the context being
+ // canceled.
+ return ctx.Err()
+ } else if err != nil {
+ return fmt.Errorf("%s: error: %v", cmdline, err)
+ }
+ return nil
+}
+
+func (super *Supervisor) autofillConfig(cfg *arvados.Config) error {
+ cluster, err := cfg.GetCluster("")
+ if err != nil {
+ return err
+ }
+ usedPort := map[string]bool{}
+ nextPort := func(host string) string {
+ for {
+ port, err := availablePort(host)
+ if err != nil {
+ panic(err)
+ }
+ if usedPort[port] {
+ continue
+ }
+ usedPort[port] = true
+ return port
+ }
+ }
+ if cluster.Services.Controller.ExternalURL.Host == "" {
+ h, p, err := net.SplitHostPort(super.ControllerAddr)
+ if err != nil {
+ return err
+ }
+ if h == "" {
+ h = super.ListenHost
+ }
+ if p == "0" {
+ p = nextPort(h)
+ }
+ cluster.Services.Controller.ExternalURL = arvados.URL{Scheme: "https", Host: net.JoinHostPort(h, p)}
+ }
+ for _, svc := range []*arvados.Service{
+ &cluster.Services.Controller,
+ &cluster.Services.DispatchCloud,
+ &cluster.Services.GitHTTP,
+ &cluster.Services.Health,
+ &cluster.Services.Keepproxy,
+ &cluster.Services.Keepstore,
+ &cluster.Services.RailsAPI,
+ &cluster.Services.WebDAV,
+ &cluster.Services.WebDAVDownload,
+ &cluster.Services.Websocket,
+ &cluster.Services.Workbench1,
+ } {
+ if svc == &cluster.Services.DispatchCloud && super.ClusterType == "test" {
+ continue
+ }
+ if svc.ExternalURL.Host == "" {
+ if svc == &cluster.Services.Controller ||
+ svc == &cluster.Services.GitHTTP ||
+ svc == &cluster.Services.Keepproxy ||
+ svc == &cluster.Services.WebDAV ||
+ svc == &cluster.Services.WebDAVDownload ||
+ svc == &cluster.Services.Workbench1 {
+ svc.ExternalURL = arvados.URL{Scheme: "https", Host: fmt.Sprintf("%s:%s", super.ListenHost, nextPort(super.ListenHost))}
+ } else if svc == &cluster.Services.Websocket {
+ svc.ExternalURL = arvados.URL{Scheme: "wss", Host: fmt.Sprintf("%s:%s", super.ListenHost, nextPort(super.ListenHost))}
+ }
+ }
+ if len(svc.InternalURLs) == 0 {
+ svc.InternalURLs = map[arvados.URL]arvados.ServiceInstance{
+ arvados.URL{Scheme: "http", Host: fmt.Sprintf("%s:%s", super.ListenHost, nextPort(super.ListenHost))}: arvados.ServiceInstance{},
+ }
+ }
+ }
+ if cluster.SystemRootToken == "" {
+ cluster.SystemRootToken = randomHexString(64)
+ }
+ if cluster.ManagementToken == "" {
+ cluster.ManagementToken = randomHexString(64)
+ }
+ if cluster.API.RailsSessionSecretToken == "" {
+ cluster.API.RailsSessionSecretToken = randomHexString(64)
+ }
+ if cluster.Collections.BlobSigningKey == "" {
+ cluster.Collections.BlobSigningKey = randomHexString(64)
+ }
+ if super.ClusterType != "production" && cluster.Containers.DispatchPrivateKey == "" {
+ buf, err := ioutil.ReadFile(filepath.Join(super.SourcePath, "lib", "dispatchcloud", "test", "sshkey_dispatch"))
+ if err != nil {
+ return err
+ }
+ cluster.Containers.DispatchPrivateKey = string(buf)
+ }
+ if super.ClusterType != "production" {
+ cluster.TLS.Insecure = true
+ }
+ if super.ClusterType == "test" {
+ // Add a second keepstore process.
+ cluster.Services.Keepstore.InternalURLs[arvados.URL{Scheme: "http", Host: fmt.Sprintf("%s:%s", super.ListenHost, nextPort(super.ListenHost))}] = arvados.ServiceInstance{}
+
+ // Create a directory-backed volume for each keepstore
+ // process.
+ cluster.Volumes = map[string]arvados.Volume{}
+ for url := range cluster.Services.Keepstore.InternalURLs {
+ volnum := len(cluster.Volumes)
+ datadir := fmt.Sprintf("%s/keep%d.data", super.tempdir, volnum)
+ if _, err = os.Stat(datadir + "/."); err == nil {
+ } else if !os.IsNotExist(err) {
+ return err
+ } else if err = os.Mkdir(datadir, 0755); err != nil {
+ return err
+ }
+ cluster.Volumes[fmt.Sprintf(cluster.ClusterID+"-nyw5e-%015d", volnum)] = arvados.Volume{
+ Driver: "Directory",
+ DriverParameters: json.RawMessage(fmt.Sprintf(`{"Root":%q}`, datadir)),
+ AccessViaHosts: map[arvados.URL]arvados.VolumeAccess{
+ url: {},
+ },
+ }
+ }
+ }
+ if super.OwnTemporaryDatabase {
+ cluster.PostgreSQL.Connection = arvados.PostgreSQLConnection{
+ "client_encoding": "utf8",
+ "host": "localhost",
+ "port": nextPort(super.ListenHost),
+ "dbname": "arvados_test",
+ "user": "arvados",
+ "password": "insecure_arvados_test",
+ }
+ }
+
+ cfg.Clusters[cluster.ClusterID] = *cluster
+ return nil
+}
+
+func addrIsLocal(addr string) (bool, error) {
+ listener, err := net.Listen("tcp", addr)
+ if err == nil {
+ listener.Close()
+ return true, nil
+ } else if strings.Contains(err.Error(), "cannot assign requested address") {
+ return false, nil
+ } else {
+ return false, err
+ }
+}
+
+func randomHexString(chars int) string {
+ b := make([]byte, chars/2)
+ _, err := rand.Read(b)
+ if err != nil {
+ panic(err)
+ }
+ return fmt.Sprintf("%x", b)
+}
+
+func internalPort(svc arvados.Service) (string, error) {
+ if len(svc.InternalURLs) > 1 {
+ return "", errors.New("internalPort() doesn't work with multiple InternalURLs")
+ }
+ for u := range svc.InternalURLs {
+ if _, p, err := net.SplitHostPort(u.Host); err != nil {
+ return "", err
+ } else if p != "" {
+ return p, nil
+ } else if u.Scheme == "https" {
+ return "443", nil
+ } else {
+ return "80", nil
+ }
+ }
+ return "", errors.New("service has no InternalURLs")
+}
+
+func externalPort(svc arvados.Service) (string, error) {
+ if _, p, err := net.SplitHostPort(svc.ExternalURL.Host); err != nil {
+ return "", err
+ } else if p != "" {
+ return p, nil
+ } else if svc.ExternalURL.Scheme == "https" {
+ return "443", nil
+ } else {
+ return "80", nil
+ }
+}
+
+func availablePort(host string) (string, error) {
+ ln, err := net.Listen("tcp", net.JoinHostPort(host, "0"))
+ if err != nil {
+ return "", err
+ }
+ defer ln.Close()
+ _, port, err := net.SplitHostPort(ln.Addr().String())
+ if err != nil {
+ return "", err
+ }
+ return port, nil
+}
+
+// Try to connect to addr until it works, then return nil. Give up
+// if ctx cancels.
+func waitForConnect(ctx context.Context, addr string) error {
+ dialer := net.Dialer{Timeout: time.Second}
+ for ctx.Err() == nil {
+ conn, err := dialer.DialContext(ctx, "tcp", addr)
+ if err != nil {
+ time.Sleep(time.Second / 10)
+ continue
+ }
+ conn.Close()
+ return nil
+ }
+ return ctx.Err()
+}
"strings"
"syscall"
- "git.curoverse.com/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/cmd"
)
var (
import (
"flag"
- "git.curoverse.com/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/cmd"
"rsc.io/getopt"
)
"fmt"
"io"
- "git.curoverse.com/arvados.git/lib/cmd"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/ghodss/yaml"
)
"regexp"
"testing"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
check "gopkg.in/check.v1"
)
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-06-01/compute"
"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-06-01/network"
storageacct "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2018-02-01/storage"
//
// How to manually run individual tests against the real cloud:
//
-// $ go test -v git.curoverse.com/arvados.git/lib/cloud/azure -live-azure-cfg azconfig.yml -check.f=TestCreate
+// $ go test -v git.arvados.org/arvados.git/lib/cloud/azure -live-azure-cfg azconfig.yml -check.f=TestCreate
//
// Tests should be run individually and in the order they are listed in the file:
//
"testing"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/test"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/config"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/test"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/config"
"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-06-01/compute"
"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-06-01/network"
"github.com/Azure/azure-sdk-for-go/storage"
"io"
"os"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/lib/dispatchcloud"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/dispatchcloud"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"golang.org/x/crypto/ssh"
)
"fmt"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/ssh_executor"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/worker"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/ssh_executor"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/worker"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
"golang.org/x/crypto/ssh"
)
"testing"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/test"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/test"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"golang.org/x/crypto/ssh"
check "gopkg.in/check.v1"
)
"math/big"
"sync"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session"
//
// How to manually run individual tests against the real cloud:
//
-// $ go test -v git.curoverse.com/arvados.git/lib/cloud/ec2 -live-ec2-cfg ec2config.yml -check.f=TestCreate
+// $ go test -v git.arvados.org/arvados.git/lib/cloud/ec2 -live-ec2-cfg ec2config.yml -check.f=TestCreate
//
// Tests should be run individually and in the order they are listed in the file:
//
"flag"
"testing"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/test"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/config"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/test"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/config"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/service/ec2"
"github.com/sirupsen/logrus"
"io"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
"golang.org/x/crypto/ssh"
)
type versionCommand struct{}
+func (versionCommand) String() string {
+ return fmt.Sprintf("%s (%s)", version, runtime.Version())
+}
+
func (versionCommand) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
prog = regexp.MustCompile(` -*version$`).ReplaceAllLiteralString(prog, "")
fmt.Fprintf(stdout, "%s %s (%s)\n", prog, version, runtime.Version())
func (m Multi) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
_, basename := filepath.Split(prog)
- basename = strings.TrimPrefix(basename, "arvados-")
- basename = strings.TrimPrefix(basename, "crunch-")
- if cmd, ok := m[basename]; ok {
+ if i := strings.Index(basename, "~"); i >= 0 {
+ // drop "~anything" suffix (arvados-dispatch-cloud's
+ // DeployRunnerBinary feature relies on this)
+ basename = basename[:i]
+ }
+ cmd, ok := m[basename]
+ if !ok {
+ // "controller" command exists, and binary is named "arvados-controller"
+ cmd, ok = m[strings.TrimPrefix(basename, "arvados-")]
+ }
+ if !ok {
+ // "dispatch-slurm" command exists, and binary is named "crunch-dispatch-slurm"
+ cmd, ok = m[strings.TrimPrefix(basename, "crunch-")]
+ }
+ if ok {
return cmd.RunCommand(prog, args, stdin, stdout, stderr)
} else if len(args) < 1 {
fmt.Fprintf(stderr, "usage: %s command [args]\n", prog)
"strings"
"testing"
- "git.curoverse.com/arvados.git/lib/cmdtest"
+ "git.arvados.org/arvados.git/lib/cmdtest"
check "gopkg.in/check.v1"
)
"os"
"os/exec"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/ghodss/yaml"
"github.com/sirupsen/logrus"
)
if err != nil {
return 1
}
+ problems := false
+ if warnAboutProblems(logger, withDepr) {
+ problems = true
+ }
cmd := exec.Command("diff", "-u", "--label", "without-deprecated-configs", "--label", "relying-on-deprecated-configs", "/dev/fd/3", "/dev/fd/4")
for _, obj := range []interface{}{withoutDepr, withDepr} {
y, _ := yaml.Marshal(obj)
if logbuf.Len() > 0 {
return 1
}
- return 0
+
+ if problems {
+ return 1
+ } else {
+ return 0
+ }
+}
+
+func warnAboutProblems(logger logrus.FieldLogger, cfg *arvados.Config) bool {
+ warned := false
+ for id, cc := range cfg.Clusters {
+ if cc.SystemRootToken == "" {
+ logger.Warnf("Clusters.%s.SystemRootToken is empty; see https://doc.arvados.org/master/install/install-keepstore.html", id)
+ warned = true
+ }
+ if cc.ManagementToken == "" {
+ logger.Warnf("Clusters.%s.ManagementToken is empty; see https://doc.arvados.org/admin/management-token.html", id)
+ warned = true
+ }
+ }
+ return warned
}
var DumpDefaultsCommand defaultsCommand
"io/ioutil"
"os"
- "git.curoverse.com/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/cmd"
check "gopkg.in/check.v1"
)
os.Unsetenv("ARVADOS_API_TOKEN")
}
-func (s *CommandSuite) TestBadArg(c *check.C) {
+func (s *CommandSuite) TestDump_BadArg(c *check.C) {
var stderr bytes.Buffer
code := DumpCommand.RunCommand("arvados config-dump", []string{"-badarg"}, bytes.NewBuffer(nil), bytes.NewBuffer(nil), &stderr)
c.Check(code, check.Equals, 2)
c.Check(stderr.String(), check.Matches, `(?ms)flag provided but not defined: -badarg\nUsage:\n.*`)
}
-func (s *CommandSuite) TestEmptyInput(c *check.C) {
+func (s *CommandSuite) TestDump_EmptyInput(c *check.C) {
var stdout, stderr bytes.Buffer
code := DumpCommand.RunCommand("arvados config-dump", []string{"-config", "-"}, &bytes.Buffer{}, &stdout, &stderr)
c.Check(code, check.Equals, 1)
c.Check(stderr.String(), check.Matches, `config does not define any clusters\n`)
}
-func (s *CommandSuite) TestCheckNoDeprecatedKeys(c *check.C) {
+func (s *CommandSuite) TestCheck_NoWarnings(c *check.C) {
var stdout, stderr bytes.Buffer
in := `
Clusters:
z1234:
+ ManagementToken: xyzzy
+ SystemRootToken: xyzzy
API:
MaxItemsPerResponse: 1234
PostgreSQL:
c.Check(stderr.String(), check.Equals, "")
}
-func (s *CommandSuite) TestCheckDeprecatedKeys(c *check.C) {
+func (s *CommandSuite) TestCheck_DeprecatedKeys(c *check.C) {
var stdout, stderr bytes.Buffer
in := `
Clusters:
c.Check(stdout.String(), check.Matches, `(?ms).*\n\- +.*MaxItemsPerResponse: 1000\n\+ +MaxItemsPerResponse: 1234\n.*`)
}
-func (s *CommandSuite) TestCheckOldKeepstoreConfigFile(c *check.C) {
+func (s *CommandSuite) TestCheck_OldKeepstoreConfigFile(c *check.C) {
f, err := ioutil.TempFile("", "")
c.Assert(err, check.IsNil)
defer os.Remove(f.Name())
c.Check(stderr.String(), check.Matches, `(?ms).*you should remove the legacy keepstore config file.*\n`)
}
-func (s *CommandSuite) TestCheckUnknownKey(c *check.C) {
+func (s *CommandSuite) TestCheck_UnknownKey(c *check.C) {
var stdout, stderr bytes.Buffer
in := `
Clusters:
c.Check(stderr.String(), check.Matches, `(?ms).*unexpected object in config entry: Clusters.z1234.PostgreSQL.ConnectionPool"\n.*`)
}
-func (s *CommandSuite) TestDumpFormatting(c *check.C) {
+func (s *CommandSuite) TestDump_Formatting(c *check.C) {
var stdout, stderr bytes.Buffer
in := `
Clusters:
c.Check(stdout.String(), check.Matches, `(?ms).*http://localhost:12345: {}\n.*`)
}
-func (s *CommandSuite) TestDumpUnknownKey(c *check.C) {
+func (s *CommandSuite) TestDump_UnknownKey(c *check.C) {
var stdout, stderr bytes.Buffer
in := `
Clusters:
# in the directory where your API server is running.
AnonymousUserToken: ""
+ # If a new user has an alternate email address (local@domain)
+ # with the domain given here, its local part becomes the new
+ # user's default username. Otherwise, the user's primary email
+ # address is used.
+ PreferDomainForUsername: ""
+
AuditLogs:
# Time to keep audit logs, in seconds. (An audit log is a row added
# to the "logs" table in the PostgreSQL database each time an
# > 0s = auto-create a new version when older than the specified number of seconds.
PreserveVersionIfIdle: -1s
+ # If non-empty, allow project and collection names to contain
+ # the "/" character (slash/stroke/solidus), and replace "/" with
+ # the given string in the filesystem hierarchy presented by
+ # WebDAV. Example values are "%2f" and "{slash}". Names that
+ # contain the substitution string itself may result in confusing
+ # behavior, so a value like "_" is not recommended.
+ #
+ # If the default empty value is used, the server will reject
+ # requests to create or rename a collection when the new name
+ # contains "/".
+ #
+ # If the value "/" is used, project and collection names
+ # containing "/" will be allowed, but they will not be
+ # accessible via WebDAV.
+ #
+ # Use of this feature is not recommended if it can be avoided.
+ ForwardSlashNameSubstitution: ""
+
# Managed collection properties. At creation time, if the client didn't
# provide the listed keys, they will be automatically populated following
# one of the following behaviors:
# (Experimental) Authenticate with Google, bypassing the
# SSO-provider gateway service. Use the Google Cloud console to
- # generate the Client ID and secret (APIs and Services >
- # Credentials > Create credentials > OAuth client ID > Web
- # application) and add your controller's /login URL (e.g.,
+ # enable the People API (APIs and Services > Enable APIs and
+ # services > Google People API > Enable), generate a Client ID
+ # and secret (APIs and Services > Credentials > Create
+ # credentials > OAuth client ID > Web application) and add your
+ # controller's /login URL (e.g.,
# "https://zzzzz.example.com/login") as an authorized redirect
# URL.
#
- # Requires EnableBetaController14287. ProviderAppID must be
+ # Incompatible with ForceLegacyAPI14. ProviderAppID must be
# blank.
GoogleClientID: ""
GoogleClientSecret: ""
+ # Allow users to log in to existing accounts using any verified
+ # email address listed by their Google account. If true, the
+ # Google People API must be enabled in order for Google login to
+ # work. If false, only the primary email address will be used.
+ GoogleAlternateEmailAddresses: true
+
# The cluster ID to delegate the user database. When set,
# logins on this cluster will be redirected to the login cluster
- # (login cluster must appear in RemoteHosts with Proxy: true)
+ # (login cluster must appear in RemoteClusters with Proxy: true)
LoginCluster: ""
# How long a cached token belonging to a remote cluster will
# (experimental) cloud dispatcher for executing containers on
# worker VMs. Begins with "-----BEGIN RSA PRIVATE KEY-----\n"
# and ends with "\n-----END RSA PRIVATE KEY-----\n".
- DispatchPrivateKey: none
+ DispatchPrivateKey: ""
# Maximum time to wait for workers to come up before abandoning
# stale locks from a previous dispatch process.
# has been reached or crunch_log_seconds_between_events has elapsed since
# the last flush.
LogBytesPerEvent: 4096
- LogSecondsBetweenEvents: 1
+ LogSecondsBetweenEvents: 5s
# The sample period for throttling logs.
LogThrottlePeriod: 60s
# Worker VM image ID.
ImageID: ""
+ # An executable file (located on the dispatcher host) to be
+ # copied to cloud instances at runtime and used as the
+ # container runner/supervisor. The default value is the
+ # dispatcher program itself.
+ #
+ # Use the empty string to disable this step: nothing will be
+ # copied, and cloud instances are assumed to have a suitable
+ # version of crunch-run installed.
+ DeployRunnerBinary: "/proc/self/exe"
+
# Tags to add on all resources (VMs, NICs, disks) created by
# the container dispatcher. (Arvados's own tags --
# InstanceType, IdleBehavior, and InstanceSecret -- will also
SAMPLE: true
Driver: s3
DriverParameters:
-
# for s3 driver -- see
# https://doc.arvados.org/install/configure-s3-object-storage.html
IAMRole: aaaaa
ConnectTimeout: 1m
ReadTimeout: 10m
RaceWindow: 24h
+
+ # For S3 driver, potentially unsafe tuning parameter,
+ # intentionally excluded from main documentation.
+ #
+ # Enable deletion (garbage collection) even when the
+ # configured BlobTrashLifetime is zero. WARNING: eventual
+ # consistency may result in race conditions that can cause
+ # data loss. Do not enable this unless you understand and
+ # accept the risk.
UnsafeDelete: false
# for azure driver -- see
# for local directory driver -- see
# https://doc.arvados.org/install/configure-fs-storage.html
Root: /var/lib/arvados/keep-data
+
+ # For local directory driver, potentially confusing tuning
+ # parameter, intentionally excluded from main documentation.
+ #
+ # When true, read and write operations (for whole 64MiB
+ # blocks) on an individual volume will be queued and issued
+ # serially. When false, read and write operations will be
+ # issued concurrently.
+ #
+ # May improve throughput if you have physical spinning disks
+ # and experience contention when there are multiple requests
+ # to the same volume.
+ #
+ # Otherwise, when using SSDs, RAID, or a shared network filesystem, you
+ # should leave this alone.
Serialize: false
Mail:
identification, and does not retrieve any other personal
information.</i>
+ # Workbench screen displayed to inactive users. This is HTML
+ # text that will be incorporated directly onto the page.
InactivePageHTML: |
<img src="/arvados-logo-big.png" style="width: 20%; float: right; padding: 1em;" />
<h3>Hi! You're logged in, but...</h3>
<p>An administrator must activate your account before you can get
any further.</p>
- # Use experimental controller code (see https://dev.arvados.org/issues/14287)
- EnableBetaController14287: false
+ # Connecting to Arvados shell VMs tends to be site-specific.
+ # Put any special instructions here. This is HTML text that will
+ # be incorporated directly onto the Workbench page.
+ SSHHelpPageHTML: |
+ <a href="https://doc.arvados.org/user/getting_started/ssh-access-unix.html">Accessing an Arvados VM with SSH</a> (generic instructions).
+ Site configurations vary. Contact your local cluster administrator if you have difficulty accessing an Arvados shell node.
+
+ # Sample text if you are using a "switchyard" ssh proxy.
+ # Replace "zzzzz" with your Cluster ID.
+ #SSHHelpPageHTML: |
+ # <p>Add a section like this to your SSH configuration file (<i>~/.ssh/config</i>):</p>
+ # <pre>Host *.zzzzz
+ # TCPKeepAlive yes
+ # ServerAliveInterval 60
+ # ProxyCommand ssh -p2222 turnout@switchyard.zzzzz.arvadosapi.com -x -a $SSH_PROXY_FLAGS %h
+ # </pre>
+
+ # If you are using a switchyard ssh proxy, shell node hostnames
+ # may require a special hostname suffix. In the sample ssh
+ # configuration above, this would be ".zzzzz".
+ # This is added to the hostname in the "command line" column
+ # of the Workbench "shell VMs" page.
+ #
+ # If your shell nodes are directly accessible by users without a
+ # proxy and have fully qualified host names, you should leave
+ # this blank.
+ SSHHelpHostSuffix: ""
+
+ # Bypass new (Arvados 1.5) API implementations, and hand off
+ # requests directly to Rails instead. This can provide a temporary
+ # workaround for clients that are incompatible with the new API
+ # implementation. Note that it also disables some new federation
+ # features and will be removed in a future release.
+ ForceLegacyAPI14: false
"os"
"strings"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/ghodss/yaml"
)
type oldKeepWebConfig struct {
Client *arvados.Client
- Listen string
+ Listen *string
- AnonymousTokens []string
- AttachmentOnlyHost string
- TrustAllContent bool
+ AnonymousTokens *[]string
+ AttachmentOnlyHost *string
+ TrustAllContent *bool
Cache struct {
- TTL arvados.Duration
- UUIDTTL arvados.Duration
- MaxCollectionEntries int
- MaxCollectionBytes int64
- MaxPermissionEntries int
- MaxUUIDEntries int
+ TTL *arvados.Duration
+ UUIDTTL *arvados.Duration
+ MaxCollectionEntries *int
+ MaxCollectionBytes *int64
+ MaxPermissionEntries *int
+ MaxUUIDEntries *int
}
// Hack to support old command line flag, which is a bool
// meaning "get actual token from environment".
- deprecatedAllowAnonymous bool
+ deprecatedAllowAnonymous *bool
// Authorization token to be included in all health check requests.
- ManagementToken string
+ ManagementToken *string
}
func (ldr *Loader) loadOldKeepWebConfig(cfg *arvados.Config) error {
loadOldClientConfig(cluster, oc.Client)
- cluster.Services.WebDAV.InternalURLs[arvados.URL{Host: oc.Listen}] = arvados.ServiceInstance{}
- cluster.Services.WebDAVDownload.InternalURLs[arvados.URL{Host: oc.Listen}] = arvados.ServiceInstance{}
- cluster.Services.WebDAVDownload.ExternalURL = arvados.URL{Host: oc.AttachmentOnlyHost}
- cluster.TLS.Insecure = oc.Client.Insecure
- cluster.ManagementToken = oc.ManagementToken
- cluster.Collections.TrustAllContent = oc.TrustAllContent
- cluster.Collections.WebDAVCache.TTL = oc.Cache.TTL
- cluster.Collections.WebDAVCache.UUIDTTL = oc.Cache.UUIDTTL
- cluster.Collections.WebDAVCache.MaxCollectionEntries = oc.Cache.MaxCollectionEntries
- cluster.Collections.WebDAVCache.MaxCollectionBytes = oc.Cache.MaxCollectionBytes
- cluster.Collections.WebDAVCache.MaxPermissionEntries = oc.Cache.MaxPermissionEntries
- cluster.Collections.WebDAVCache.MaxUUIDEntries = oc.Cache.MaxUUIDEntries
- if len(oc.AnonymousTokens) > 0 {
- cluster.Users.AnonymousUserToken = oc.AnonymousTokens[0]
- if len(oc.AnonymousTokens) > 1 {
- ldr.Logger.Warn("More than 1 anonymous tokens configured, using only the first and discarding the rest.")
+ if oc.Listen != nil {
+ cluster.Services.WebDAV.InternalURLs[arvados.URL{Host: *oc.Listen}] = arvados.ServiceInstance{}
+ cluster.Services.WebDAVDownload.InternalURLs[arvados.URL{Host: *oc.Listen}] = arvados.ServiceInstance{}
+ }
+ if oc.AttachmentOnlyHost != nil {
+ cluster.Services.WebDAVDownload.ExternalURL = arvados.URL{Host: *oc.AttachmentOnlyHost}
+ }
+ if oc.ManagementToken != nil {
+ cluster.ManagementToken = *oc.ManagementToken
+ }
+ if oc.TrustAllContent != nil {
+ cluster.Collections.TrustAllContent = *oc.TrustAllContent
+ }
+ if oc.Cache.TTL != nil {
+ cluster.Collections.WebDAVCache.TTL = *oc.Cache.TTL
+ }
+ if oc.Cache.UUIDTTL != nil {
+ cluster.Collections.WebDAVCache.UUIDTTL = *oc.Cache.UUIDTTL
+ }
+ if oc.Cache.MaxCollectionEntries != nil {
+ cluster.Collections.WebDAVCache.MaxCollectionEntries = *oc.Cache.MaxCollectionEntries
+ }
+ if oc.Cache.MaxCollectionBytes != nil {
+ cluster.Collections.WebDAVCache.MaxCollectionBytes = *oc.Cache.MaxCollectionBytes
+ }
+ if oc.Cache.MaxPermissionEntries != nil {
+ cluster.Collections.WebDAVCache.MaxPermissionEntries = *oc.Cache.MaxPermissionEntries
+ }
+ if oc.Cache.MaxUUIDEntries != nil {
+ cluster.Collections.WebDAVCache.MaxUUIDEntries = *oc.Cache.MaxUUIDEntries
+ }
+ if oc.AnonymousTokens != nil {
+ if len(*oc.AnonymousTokens) > 0 {
+ cluster.Users.AnonymousUserToken = (*oc.AnonymousTokens)[0]
+ if len(*oc.AnonymousTokens) > 1 {
+ ldr.Logger.Warn("More than one anonymous token configured, using only the first and discarding the rest.")
+ }
}
}
type oldGitHttpdConfig struct {
Client *arvados.Client
- Listen string
- GitCommand string
- GitoliteHome string
- RepoRoot string
- ManagementToken string
+ Listen *string
+ GitCommand *string
+ GitoliteHome *string
+ RepoRoot *string
+ ManagementToken *string
}
func (ldr *Loader) loadOldGitHttpdConfig(cfg *arvados.Config) error {
loadOldClientConfig(cluster, oc.Client)
- cluster.Services.GitHTTP.InternalURLs[arvados.URL{Host: oc.Listen}] = arvados.ServiceInstance{}
- cluster.TLS.Insecure = oc.Client.Insecure
- cluster.ManagementToken = oc.ManagementToken
- cluster.Git.GitCommand = oc.GitCommand
- cluster.Git.GitoliteHome = oc.GitoliteHome
- cluster.Git.Repositories = oc.RepoRoot
+ if oc.Listen != nil {
+ cluster.Services.GitHTTP.InternalURLs[arvados.URL{Host: *oc.Listen}] = arvados.ServiceInstance{}
+ }
+ if oc.ManagementToken != nil {
+ cluster.ManagementToken = *oc.ManagementToken
+ }
+ if oc.GitCommand != nil {
+ cluster.Git.GitCommand = *oc.GitCommand
+ }
+ if oc.GitoliteHome != nil {
+ cluster.Git.GitoliteHome = *oc.GitoliteHome
+ }
+ if oc.RepoRoot != nil {
+ cluster.Git.Repositories = *oc.RepoRoot
+ }
cfg.Clusters[cluster.ClusterID] = *cluster
return nil
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
)
"text/template"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
check "gopkg.in/check.v1"
)
"os"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
check "gopkg.in/check.v1"
)
+// Configured at: sdk/python/tests/run_test_server.py
+const TestServerManagementToken = "e687950a23c3a9bceec28c6223a06c79"
+
func testLoadLegacyConfig(content []byte, mungeFlag string, c *check.C) (*arvados.Cluster, error) {
tmpfile, err := ioutil.TempFile("", "example")
if err != nil {
c.Check(cluster.ManagementToken, check.Equals, "xyzzy")
}
+// Tests fix for https://dev.arvados.org/issues/15642
+func (s *LoadSuite) TestLegacyKeepWebConfigDoesntDisableMissingItems(c *check.C) {
+ content := []byte(`
+{
+ "Client": {
+ "Scheme": "",
+ "APIHost": "example.com",
+ "AuthToken": "abcdefg",
+ }
+}
+`)
+ cluster, err := testLoadLegacyConfig(content, "-legacy-keepweb-config", c)
+ c.Check(err, check.IsNil)
+ // The resulting ManagementToken should be the one set up on the test server.
+ c.Check(cluster.ManagementToken, check.Equals, TestServerManagementToken)
+}
+
func (s *LoadSuite) TestLegacyKeepproxyConfig(c *check.C) {
f := "-legacy-keepproxy-config"
content := []byte(fmtKeepproxyConfig("", true))
c.Check(cluster.Services.Keepproxy.InternalURLs[arvados.URL{Host: ":9000"}], check.Equals, arvados.ServiceInstance{})
}
+// Tests fix for https://dev.arvados.org/issues/15642
+func (s *LoadSuite) TestLegacyArvGitHttpdConfigDoesntDisableMissingItems(c *check.C) {
+ content := []byte(`
+{
+ "Client": {
+ "Scheme": "",
+ "APIHost": "example.com",
+ "AuthToken": "abcdefg",
+ }
+}
+`)
+ cluster, err := testLoadLegacyConfig(content, "-legacy-git-httpd-config", c)
+ c.Check(err, check.IsNil)
+ // The resulting ManagementToken should be the one set up on the test server.
+ c.Check(cluster.ManagementToken, check.Equals, TestServerManagementToken)
+}
+
func (s *LoadSuite) TestLegacyKeepBalanceConfig(c *check.C) {
f := "-legacy-keepbalance-config"
content := []byte(fmtKeepBalanceConfig(""))
"io"
"strings"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// ExportJSON writes a JSON object with the safe (non-secret) portions
"Collections.CollectionVersioning": false,
"Collections.DefaultReplication": true,
"Collections.DefaultTrashLifetime": true,
+ "Collections.ForwardSlashNameSubstitution": true,
"Collections.ManagedProperties": true,
"Collections.ManagedProperties.*": true,
"Collections.ManagedProperties.*.*": true,
"Containers.SupportedDockerImageFormats": true,
"Containers.SupportedDockerImageFormats.*": true,
"Containers.UsePreemptibleInstances": true,
- "EnableBetaController14287": false,
+ "ForceLegacyAPI14": false,
"Git": false,
"InstanceTypes": true,
"InstanceTypes.*": true,
"Login": true,
"Login.GoogleClientID": false,
"Login.GoogleClientSecret": false,
+ "Login.GoogleAlternateEmailAddresses": false,
"Login.ProviderAppID": false,
"Login.ProviderAppSecret": false,
"Login.LoginCluster": true,
"Login.RemoteTokenRefresh": true,
- "Mail": false,
+ "Mail": true,
+ "Mail.MailchimpAPIKey": false,
+ "Mail.MailchimpListID": false,
+ "Mail.SendUserSetupNotificationEmail": false,
+ "Mail.IssueReporterEmailFrom": false,
+ "Mail.IssueReporterEmailTo": false,
+ "Mail.SupportEmailAddress": true,
+ "Mail.EmailFrom": false,
"ManagementToken": false,
"PostgreSQL": false,
"RemoteClusters": true,
"Users.NewInactiveUserNotificationRecipients": false,
"Users.NewUserNotificationRecipients": false,
"Users.NewUsersAreActive": false,
+ "Users.PreferDomainForUsername": false,
"Users.UserNotifierEmailFrom": false,
"Users.UserProfileNotificationAddress": false,
"Volumes": true,
"Workbench.VocabularyURL": true,
"Workbench.WelcomePageHTML": true,
"Workbench.InactivePageHTML": true,
+ "Workbench.SSHHelpPageHTML": true,
+ "Workbench.SSHHelpHostSuffix": true,
}
func redactUnsafe(m map[string]interface{}, mPrefix, lookupPrefix string) error {
# in the directory where your API server is running.
AnonymousUserToken: ""
+ # If a new user has an alternate email address (local@domain)
+ # with the domain given here, its local part becomes the new
+ # user's default username. Otherwise, the user's primary email
+ # address is used.
+ PreferDomainForUsername: ""
+
AuditLogs:
# Time to keep audit logs, in seconds. (An audit log is a row added
# to the "logs" table in the PostgreSQL database each time an
# > 0s = auto-create a new version when older than the specified number of seconds.
PreserveVersionIfIdle: -1s
+ # If non-empty, allow project and collection names to contain
+ # the "/" character (slash/stroke/solidus), and replace "/" with
+ # the given string in the filesystem hierarchy presented by
+ # WebDAV. Example values are "%2f" and "{slash}". Names that
+ # contain the substitution string itself may result in confusing
+ # behavior, so a value like "_" is not recommended.
+ #
+ # If the default empty value is used, the server will reject
+ # requests to create or rename a collection when the new name
+ # contains "/".
+ #
+ # If the value "/" is used, project and collection names
+ # containing "/" will be allowed, but they will not be
+ # accessible via WebDAV.
+ #
+ # Use of this feature is not recommended if it can be avoided.
+ ForwardSlashNameSubstitution: ""
+
# Managed collection properties. At creation time, if the client didn't
# provide the listed keys, they will be automatically populated following
# one of the following behaviors:
# (Experimental) Authenticate with Google, bypassing the
# SSO-provider gateway service. Use the Google Cloud console to
- # generate the Client ID and secret (APIs and Services >
- # Credentials > Create credentials > OAuth client ID > Web
- # application) and add your controller's /login URL (e.g.,
+ # enable the People API (APIs and Services > Enable APIs and
+ # services > Google People API > Enable), generate a Client ID
+ # and secret (APIs and Services > Credentials > Create
+ # credentials > OAuth client ID > Web application) and add your
+ # controller's /login URL (e.g.,
# "https://zzzzz.example.com/login") as an authorized redirect
# URL.
#
- # Requires EnableBetaController14287. ProviderAppID must be
+ # Incompatible with ForceLegacyAPI14. ProviderAppID must be
# blank.
GoogleClientID: ""
GoogleClientSecret: ""
+ # Allow users to log in to existing accounts using any verified
+ # email address listed by their Google account. If true, the
+ # Google People API must be enabled in order for Google login to
+ # work. If false, only the primary email address will be used.
+ GoogleAlternateEmailAddresses: true
+
# The cluster ID to delegate the user database. When set,
# logins on this cluster will be redirected to the login cluster
- # (login cluster must appear in RemoteHosts with Proxy: true)
+ # (login cluster must appear in RemoteClusters with Proxy: true)
LoginCluster: ""
# How long a cached token belonging to a remote cluster will
# (experimental) cloud dispatcher for executing containers on
# worker VMs. Begins with "-----BEGIN RSA PRIVATE KEY-----\n"
# and ends with "\n-----END RSA PRIVATE KEY-----\n".
- DispatchPrivateKey: none
+ DispatchPrivateKey: ""
# Maximum time to wait for workers to come up before abandoning
# stale locks from a previous dispatch process.
# has been reached or crunch_log_seconds_between_events has elapsed since
# the last flush.
LogBytesPerEvent: 4096
- LogSecondsBetweenEvents: 1
+ LogSecondsBetweenEvents: 5s
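+ # The flush rule these two settings describe can be sketched as
+ # follows. This is an illustrative sketch, not the actual crunch-run
+ # implementation; shouldFlush is a hypothetical helper name.
+ #
+ # ```go
+ # package main
+ #
+ # import (
+ # 	"fmt"
+ # 	"time"
+ # )
+ #
+ # // shouldFlush: emit a log event once the buffer reaches
+ # // LogBytesPerEvent, or once LogSecondsBetweenEvents has elapsed
+ # // since the previous flush, whichever comes first.
+ # func shouldFlush(buffered int, lastFlush time.Time, bytesPerEvent int, between time.Duration) bool {
+ # 	return buffered >= bytesPerEvent || time.Since(lastFlush) >= between
+ # }
+ #
+ # func main() {
+ # 	fmt.Println(shouldFlush(4096, time.Now(), 4096, 5*time.Second))                   // true (byte threshold)
+ # 	fmt.Println(shouldFlush(10, time.Now().Add(-6*time.Second), 4096, 5*time.Second)) // true (interval elapsed)
+ # 	fmt.Println(shouldFlush(10, time.Now(), 4096, 5*time.Second))                     // false
+ # }
+ # ```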
# The sample period for throttling logs.
LogThrottlePeriod: 60s
# Worker VM image ID.
ImageID: ""
+ # An executable file (located on the dispatcher host) to be
+ # copied to cloud instances at runtime and used as the
+ # container runner/supervisor. The default value is the
+ # dispatcher program itself.
+ #
+ # Use the empty string to disable this step: nothing will be
+ # copied, and cloud instances are assumed to have a suitable
+ # version of crunch-run installed.
+ DeployRunnerBinary: "/proc/self/exe"
+
# Tags to add on all resources (VMs, NICs, disks) created by
# the container dispatcher. (Arvados's own tags --
# InstanceType, IdleBehavior, and InstanceSecret -- will also
SAMPLE: true
Driver: s3
DriverParameters:
-
# for s3 driver -- see
# https://doc.arvados.org/install/configure-s3-object-storage.html
IAMRole: aaaaa
ConnectTimeout: 1m
ReadTimeout: 10m
RaceWindow: 24h
+
+ # For S3 driver, potentially unsafe tuning parameter,
+ # intentionally excluded from main documentation.
+ #
+ # Enable deletion (garbage collection) even when the
+ # configured BlobTrashLifetime is zero. WARNING: eventual
+ # consistency may result in race conditions that can cause
+ # data loss. Do not enable this unless you understand and
+ # accept the risk.
UnsafeDelete: false
# for azure driver -- see
# for local directory driver -- see
# https://doc.arvados.org/install/configure-fs-storage.html
Root: /var/lib/arvados/keep-data
+
+ # For local directory driver, potentially confusing tuning
+ # parameter, intentionally excluded from main documentation.
+ #
+ # When true, read and write operations (for whole 64MiB
+ # blocks) on an individual volume will be queued and issued
+ # serially. When false, read and write operations will be
+ # issued concurrently.
+ #
+ # May improve throughput if you have physical spinning disks
+ # and experience contention when there are multiple requests
+ # to the same volume.
+ #
+ # When using SSDs, RAID, or a shared network filesystem, you
+ # should leave this set to false.
Serialize: false
Mail:
identification, and does not retrieve any other personal
information.</i>
+ # Workbench screen displayed to inactive users. This is HTML
+ # text that will be incorporated directly onto the page.
InactivePageHTML: |
<img src="/arvados-logo-big.png" style="width: 20%; float: right; padding: 1em;" />
<h3>Hi! You're logged in, but...</h3>
<p>An administrator must activate your account before you can get
any further.</p>
- # Use experimental controller code (see https://dev.arvados.org/issues/14287)
- EnableBetaController14287: false
+ # Connecting to Arvados shell VMs tends to be site-specific.
+ # Put any special instructions here. This is HTML text that will
+ # be incorporated directly onto the Workbench page.
+ SSHHelpPageHTML: |
+ <a href="https://doc.arvados.org/user/getting_started/ssh-access-unix.html">Accessing an Arvados VM with SSH</a> (generic instructions).
+ Site configurations vary. Contact your local cluster administrator if you have difficulty accessing an Arvados shell node.
+
+ # Sample text if you are using a "switchyard" ssh proxy.
+ # Replace "zzzzz" with your Cluster ID.
+ #SSHHelpPageHTML: |
+ # <p>Add a section like this to your SSH configuration file ( <i>~/.ssh/config</i>):</p>
+ # <pre>Host *.zzzzz
+ # TCPKeepAlive yes
+ # ServerAliveInterval 60
+ # ProxyCommand ssh -p2222 turnout@switchyard.zzzzz.arvadosapi.com -x -a $SSH_PROXY_FLAGS %h
+ # </pre>
+
+ # If you are using a switchyard ssh proxy, shell node hostnames
+ # may require a special hostname suffix. In the sample ssh
+ # configuration above, this would be ".zzzzz".
+ # This is added to the hostname in the "command line" column
+ # of the Workbench "shell VMs" page.
+ #
+ # If your shell nodes are directly accessible by users without a
+ # proxy and have fully qualified host names, you should leave
+ # this blank.
+ SSHHelpHostSuffix: ""
+
+ # Bypass new (Arvados 1.5) API implementations, and hand off
+ # requests directly to Rails instead. This can provide a temporary
+ # workaround for clients that are incompatible with the new API
+ # implementation. Note that it also disables some new federation
+ # features and will be removed in a future release.
+ ForceLegacyAPI14: false
`)
"os"
"strings"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/ghodss/yaml"
"github.com/imdario/mergo"
"github.com/sirupsen/logrus"
"strings"
"testing"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/ghodss/yaml"
"github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
import (
"context"
- "git.curoverse.com/arvados.git/lib/cmd"
- "git.curoverse.com/arvados.git/lib/service"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/service"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/prometheus/client_golang/prometheus"
)
"strings"
"sync"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
)
func rewriteSignatures(clusterID string, expectHash string,
"net/http"
"strings"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
)
func remoteContainerRequestCreate(
creds := auth.NewCredentials()
creds.LoadTokensFromHTTPRequest(req)
- currentUser, err := h.handler.validateAPItoken(req, creds.Tokens[0])
+ currentUser, ok, err := h.handler.validateAPItoken(req, creds.Tokens[0])
if err != nil {
- httpserver.Error(w, err.Error(), http.StatusForbidden)
+ httpserver.Error(w, err.Error(), http.StatusInternalServerError)
+ return true
+ } else if !ok {
+ httpserver.Error(w, "invalid API token", http.StatusForbidden)
return true
}
"regexp"
"sync"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
)
type federatedRequestDelegate func(
"regexp"
"strings"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/auth"
"github.com/jmcvetta/randutil"
)
// checks it again api_client_authorizations table in the database,
// and fills in the token scope and user UUID. Does not handle remote
// tokens unless they are already in the database and not expired.
-func (h *Handler) validateAPItoken(req *http.Request, token string) (*CurrentUser, error) {
+//
+// Return values are:
+//
+// nil, false, non-nil -- if there was an internal error
+//
+// nil, false, nil -- if the token is invalid
+//
+// non-nil, true, nil -- if the token is valid
+func (h *Handler) validateAPItoken(req *http.Request, token string) (*CurrentUser, bool, error) {
user := CurrentUser{Authorization: arvados.APIClientAuthorization{APIToken: token}}
db, err := h.db(req)
if err != nil {
- return nil, err
+ return nil, false, err
}
var uuid string
user.Authorization.APIToken = token
var scopes string
err = db.QueryRowContext(req.Context(), `SELECT api_client_authorizations.uuid, api_client_authorizations.scopes, users.uuid FROM api_client_authorizations JOIN users on api_client_authorizations.user_id=users.id WHERE api_token=$1 AND (expires_at IS NULL OR expires_at > current_timestamp) LIMIT 1`, token).Scan(&user.Authorization.UUID, &scopes, &user.UUID)
- if err != nil {
- return nil, err
+ if err == sql.ErrNoRows {
+ return nil, false, nil
+ } else if err != nil {
+ return nil, false, err
}
if uuid != "" && user.Authorization.UUID != uuid {
- return nil, fmt.Errorf("UUID embedded in v2 token did not match record")
+ // secret part matches, but UUID doesn't -- somewhat surprising
+ return nil, false, nil
}
err = json.Unmarshal([]byte(scopes), &user.Authorization.Scopes)
if err != nil {
- return nil, err
+ return nil, false, err
}
- return &user, nil
+ return &user, true, nil
}
func (h *Handler) createAPItoken(req *http.Request, userUUID string, scopes []string) (*arvados.APIClientAuthorization, error) {
// If the token exists in our own database, salt it
// for the remote. Otherwise, assume it was issued by
// the remote, and pass it through unmodified.
- currentUser, err := h.validateAPItoken(req, creds.Tokens[0])
- if err == sql.ErrNoRows {
+ currentUser, ok, err := h.validateAPItoken(req, creds.Tokens[0])
+ if err != nil {
+ return nil, err
+ } else if !ok {
// Not ours; pass through unmodified.
token = creds.Tokens[0]
- } else if err != nil {
- return nil, err
} else {
// Found; make V2 version and salt it.
token, err = auth.SaltToken(currentUser.Authorization.TokenV2(), remote)
import (
"bytes"
"context"
- "crypto/md5"
"encoding/json"
"errors"
"fmt"
"net/url"
"regexp"
"strings"
-
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/lib/controller/localdb"
- "git.curoverse.com/arvados.git/lib/controller/rpc"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "time"
+
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/controller/localdb"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
)
type Conn struct {
local := localdb.NewConn(cluster)
remotes := map[string]backend{}
for id, remote := range cluster.RemoteClusters {
- if !remote.Proxy {
+ if !remote.Proxy || id == cluster.ClusterID {
continue
}
- remotes[id] = rpc.NewConn(id, &url.URL{Scheme: remote.Scheme, Host: remote.Host}, remote.Insecure, saltedTokenProvider(local, id))
+ conn := rpc.NewConn(id, &url.URL{Scheme: remote.Scheme, Host: remote.Host}, remote.Insecure, saltedTokenProvider(local, id))
+ // Older versions of controller rely on the Via header
+ // to detect loops.
+ conn.SendHeader = http.Header{"Via": {"HTTP/1.1 arvados-controller"}}
+ remotes[id] = conn
}
return &Conn{
// or "" for the local backend.
//
// A non-nil error means all backends failed.
-func (conn *Conn) tryLocalThenRemotes(ctx context.Context, fn func(context.Context, string, backend) error) error {
- if err := fn(ctx, "", conn.local); err == nil || errStatus(err) != http.StatusNotFound {
+func (conn *Conn) tryLocalThenRemotes(ctx context.Context, forwardedFor string, fn func(context.Context, string, backend) error) error {
+ if err := fn(ctx, "", conn.local); err == nil || errStatus(err) != http.StatusNotFound || forwardedFor != "" {
+ // Note: forwardedFor != "" means this request came
+ // from a remote cluster, so we don't take a second
+ // hop. This avoids cycles, redundant calls to a
+ // mutually reachable remote, and use of double-salted
+ // tokens.
return err
}
})
}
-// this could be in sdk/go/arvados
-func portableDataHash(mt string) string {
- h := md5.New()
- blkRe := regexp.MustCompile(`^ [0-9a-f]{32}\+\d+`)
- size := 0
- _ = regexp.MustCompile(` ?[^ ]*`).ReplaceAllFunc([]byte(mt), func(tok []byte) []byte {
- if m := blkRe.Find(tok); m != nil {
- // write hash+size, ignore remaining block hints
- tok = m
- }
- n, err := h.Write(tok)
- if err != nil {
- panic(err)
- }
- size += n
- return nil
- })
- return fmt.Sprintf("%x+%d", h.Sum(nil), size)
-}
-
func (conn *Conn) ConfigGet(ctx context.Context) (json.RawMessage, error) {
var buf bytes.Buffer
err := config.ExportJSON(&buf, conn.cluster)
if err != nil {
return arvados.LoginResponse{}, fmt.Errorf("internal error getting redirect target: %s", err)
}
- target.RawQuery = url.Values{
+ params := url.Values{
"return_to": []string{options.ReturnTo},
- "remote": []string{options.Remote},
- }.Encode()
+ }
+ if options.Remote != "" {
+ params.Set("remote", options.Remote)
+ }
+ target.RawQuery = params.Encode()
return arvados.LoginResponse{
RedirectLocation: target.String(),
}, nil
}
}
+func (conn *Conn) Logout(ctx context.Context, options arvados.LogoutOptions) (arvados.LogoutResponse, error) {
+ // If the logout request comes with an API token from a known
+ // remote cluster, redirect to that cluster's logout handler
+ // so it has an opportunity to clear sessions, expire tokens,
+ // etc. Otherwise use the local endpoint.
+ reqauth, ok := auth.FromContext(ctx)
+ if !ok || len(reqauth.Tokens) == 0 || len(reqauth.Tokens[0]) < 8 || !strings.HasPrefix(reqauth.Tokens[0], "v2/") {
+ return conn.local.Logout(ctx, options)
+ }
+ id := reqauth.Tokens[0][3:8]
+ if id == conn.cluster.ClusterID {
+ return conn.local.Logout(ctx, options)
+ }
+ remote, ok := conn.remotes[id]
+ if !ok {
+ return conn.local.Logout(ctx, options)
+ }
+ baseURL := remote.BaseURL()
+ target, err := baseURL.Parse(arvados.EndpointLogout.Path)
+ if err != nil {
+ return arvados.LogoutResponse{}, fmt.Errorf("internal error getting redirect target: %s", err)
+ }
+ target.RawQuery = url.Values{"return_to": {options.ReturnTo}}.Encode()
+ return arvados.LogoutResponse{RedirectLocation: target.String()}, nil
+}
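+
+// The dispatch above keys on the cluster ID embedded in a v2-format
+// token ("v2/<uuid>/<secret>", where the first five characters of the
+// UUID name the issuing cluster). A minimal sketch of that extraction,
+// with an illustrative helper name:
+//
+// ```go
+// package main
+//
+// import (
+// 	"fmt"
+// 	"strings"
+// )
+//
+// // clusterIDFromToken mirrors the check in Logout: a v2 token looks
+// // like "v2/zzzzz-gj3su-xxxxxxxxxxxxxxx/secretpart", and characters
+// // 3..8 (the UUID prefix) identify the issuing cluster.
+// func clusterIDFromToken(tok string) (string, bool) {
+// 	if !strings.HasPrefix(tok, "v2/") || len(tok) < 8 {
+// 		return "", false
+// 	}
+// 	return tok[3:8], true
+// }
+//
+// func main() {
+// 	id, ok := clusterIDFromToken("v2/zzzzz-gj3su-077z32aeyh5ih8k/1234567890abcdef")
+// 	fmt.Println(id, ok) // zzzzz true
+// 	_, ok = clusterIDFromToken("plain-old-v1-token")
+// 	fmt.Println(ok) // false
+// }
+// ```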
+
func (conn *Conn) CollectionGet(ctx context.Context, options arvados.GetOptions) (arvados.Collection, error) {
if len(options.UUID) == 27 {
// UUID is really a UUID
} else {
// UUID is a PDH
first := make(chan arvados.Collection, 1)
- err := conn.tryLocalThenRemotes(ctx, func(ctx context.Context, remoteID string, be backend) error {
- c, err := be.CollectionGet(ctx, options)
+ err := conn.tryLocalThenRemotes(ctx, options.ForwardedFor, func(ctx context.Context, remoteID string, be backend) error {
+ remoteOpts := options
+ remoteOpts.ForwardedFor = conn.cluster.ClusterID + "-" + options.ForwardedFor
+ c, err := be.CollectionGet(ctx, remoteOpts)
if err != nil {
return err
}
// options.UUID is either hash+size or
// hash+size+hints; only hash+size needs to
// match the computed PDH.
- if pdh := portableDataHash(c.ManifestText); pdh != options.UUID && !strings.HasPrefix(options.UUID, pdh+"+") {
+ if pdh := arvados.PortableDataHash(c.ManifestText); pdh != options.UUID && !strings.HasPrefix(options.UUID, pdh+"+") {
err = httpErrorf(http.StatusBadGateway, "bad portable data hash %q received from remote %q (expected %q)", pdh, remoteID, options.UUID)
ctxlog.FromContext(ctx).Warn(err)
return err
}
}
+func (conn *Conn) CollectionList(ctx context.Context, options arvados.ListOptions) (arvados.CollectionList, error) {
+ return conn.generated_CollectionList(ctx, options)
+}
+
func (conn *Conn) CollectionProvenance(ctx context.Context, options arvados.GetOptions) (map[string]interface{}, error) {
return conn.chooseBackend(options.UUID).CollectionProvenance(ctx, options)
}
return conn.chooseBackend(options.UUID).CollectionUntrash(ctx, options)
}
+func (conn *Conn) ContainerList(ctx context.Context, options arvados.ListOptions) (arvados.ContainerList, error) {
+ return conn.generated_ContainerList(ctx, options)
+}
+
func (conn *Conn) ContainerCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Container, error) {
return conn.chooseBackend(options.ClusterID).ContainerCreate(ctx, options)
}
return conn.chooseBackend(options.UUID).ContainerUnlock(ctx, options)
}
+func (conn *Conn) SpecimenList(ctx context.Context, options arvados.ListOptions) (arvados.SpecimenList, error) {
+ return conn.generated_SpecimenList(ctx, options)
+}
+
func (conn *Conn) SpecimenCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Specimen, error) {
return conn.chooseBackend(options.ClusterID).SpecimenCreate(ctx, options)
}
return conn.chooseBackend(options.UUID).SpecimenDelete(ctx, options)
}
+var userAttrsCachedFromLoginCluster = map[string]bool{
+ "created_at": true,
+ "email": true,
+ "first_name": true,
+ "is_active": true,
+ "is_admin": true,
+ "last_name": true,
+ "modified_at": true,
+ "modified_by_client_uuid": true,
+ "modified_by_user_uuid": true,
+ "prefs": true,
+ "username": true,
+
+ "etag": false,
+ "full_name": false,
+ "identity_url": false,
+ "is_invited": false,
+ "owner_uuid": false,
+ "uuid": false,
+ "writable_by": false,
+}
+
+func (conn *Conn) UserList(ctx context.Context, options arvados.ListOptions) (arvados.UserList, error) {
+ logger := ctxlog.FromContext(ctx)
+ if id := conn.cluster.Login.LoginCluster; id != "" && id != conn.cluster.ClusterID {
+ resp, err := conn.chooseBackend(id).UserList(ctx, options)
+ if err != nil {
+ return resp, err
+ }
+ batchOpts := arvados.UserBatchUpdateOptions{Updates: map[string]map[string]interface{}{}}
+ for _, user := range resp.Items {
+ if !strings.HasPrefix(user.UUID, id) {
+ continue
+ }
+ logger.Debugf("cache user info for uuid %q", user.UUID)
+
+ // If the remote cluster has null timestamps
+ // (e.g., test server with incomplete
+ // fixtures) use dummy timestamps (instead of
+ // the zero time, which causes a Rails API
+ // error "year too big to marshal: 1 UTC").
+ if user.ModifiedAt.IsZero() {
+ user.ModifiedAt = time.Now()
+ }
+ if user.CreatedAt.IsZero() {
+ user.CreatedAt = time.Now()
+ }
+
+ var allFields map[string]interface{}
+ buf, err := json.Marshal(user)
+ if err != nil {
+ return arvados.UserList{}, fmt.Errorf("error encoding user record from remote response: %s", err)
+ }
+ err = json.Unmarshal(buf, &allFields)
+ if err != nil {
+ return arvados.UserList{}, fmt.Errorf("error transcoding user record from remote response: %s", err)
+ }
+ updates := allFields
+ if len(options.Select) > 0 {
+ updates = map[string]interface{}{}
+ for _, k := range options.Select {
+ if v, ok := allFields[k]; ok && userAttrsCachedFromLoginCluster[k] {
+ updates[k] = v
+ }
+ }
+ } else {
+ for k := range updates {
+ if !userAttrsCachedFromLoginCluster[k] {
+ delete(updates, k)
+ }
+ }
+ }
+ batchOpts.Updates[user.UUID] = updates
+ }
+ if len(batchOpts.Updates) > 0 {
+ ctxRoot := auth.NewContext(ctx, &auth.Credentials{Tokens: []string{conn.cluster.SystemRootToken}})
+ _, err = conn.local.UserBatchUpdate(ctxRoot, batchOpts)
+ if err != nil {
+ return arvados.UserList{}, fmt.Errorf("error updating local user records: %s", err)
+ }
+ }
+ return resp, nil
+ } else {
+ return conn.generated_UserList(ctx, options)
+ }
+}
+
+func (conn *Conn) UserCreate(ctx context.Context, options arvados.CreateOptions) (arvados.User, error) {
+ return conn.chooseBackend(options.ClusterID).UserCreate(ctx, options)
+}
+
+func (conn *Conn) UserUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.User, error) {
+ return conn.chooseBackend(options.UUID).UserUpdate(ctx, options)
+}
+
+func (conn *Conn) UserUpdateUUID(ctx context.Context, options arvados.UpdateUUIDOptions) (arvados.User, error) {
+ return conn.chooseBackend(options.UUID).UserUpdateUUID(ctx, options)
+}
+
+func (conn *Conn) UserMerge(ctx context.Context, options arvados.UserMergeOptions) (arvados.User, error) {
+ return conn.chooseBackend(options.OldUserUUID).UserMerge(ctx, options)
+}
+
+func (conn *Conn) UserActivate(ctx context.Context, options arvados.UserActivateOptions) (arvados.User, error) {
+ return conn.chooseBackend(options.UUID).UserActivate(ctx, options)
+}
+
+func (conn *Conn) UserSetup(ctx context.Context, options arvados.UserSetupOptions) (map[string]interface{}, error) {
+ return conn.chooseBackend(options.UUID).UserSetup(ctx, options)
+}
+
+func (conn *Conn) UserUnsetup(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ return conn.chooseBackend(options.UUID).UserUnsetup(ctx, options)
+}
+
+func (conn *Conn) UserGet(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ return conn.chooseBackend(options.UUID).UserGet(ctx, options)
+}
+
+func (conn *Conn) UserGetCurrent(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ return conn.chooseBackend(options.UUID).UserGetCurrent(ctx, options)
+}
+
+func (conn *Conn) UserGetSystem(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ return conn.chooseBackend(options.UUID).UserGetSystem(ctx, options)
+}
+
+func (conn *Conn) UserDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.User, error) {
+ return conn.chooseBackend(options.UUID).UserDelete(ctx, options)
+}
+
+func (conn *Conn) UserBatchUpdate(ctx context.Context, options arvados.UserBatchUpdateOptions) (arvados.UserList, error) {
+ return conn.local.UserBatchUpdate(ctx, options)
+}
+
func (conn *Conn) APIClientAuthorizationCurrent(ctx context.Context, options arvados.GetOptions) (arvados.APIClientAuthorization, error) {
return conn.chooseBackend(options.UUID).APIClientAuthorizationCurrent(ctx, options)
}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package federation
+
+import (
+ "context"
+ "net/url"
+ "os"
+ "testing"
+
+ "git.arvados.org/arvados.git/lib/controller/router"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
+ check "gopkg.in/check.v1"
+)
+
+// Gocheck boilerplate
+func Test(t *testing.T) {
+ check.TestingT(t)
+}
+
+// FederationSuite does some generic setup/teardown. Don't add Test*
+// methods to FederationSuite itself.
+type FederationSuite struct {
+ cluster *arvados.Cluster
+ ctx context.Context
+ fed *Conn
+}
+
+func (s *FederationSuite) SetUpTest(c *check.C) {
+ s.cluster = &arvados.Cluster{
+ ClusterID: "aaaaa",
+ SystemRootToken: arvadostest.SystemRootToken,
+ RemoteClusters: map[string]arvados.RemoteCluster{
+ "aaaaa": arvados.RemoteCluster{
+ Host: os.Getenv("ARVADOS_API_HOST"),
+ },
+ },
+ }
+ arvadostest.SetServiceURL(&s.cluster.Services.RailsAPI, "https://"+os.Getenv("ARVADOS_TEST_API_HOST"))
+ s.cluster.TLS.Insecure = true
+ s.cluster.API.MaxItemsPerResponse = 3
+
+ ctx := context.Background()
+ ctx = ctxlog.Context(ctx, ctxlog.TestLogger(c))
+ ctx = auth.NewContext(ctx, &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
+ s.ctx = ctx
+
+ s.fed = New(s.cluster)
+}
+
+func (s *FederationSuite) addDirectRemote(c *check.C, id string, backend backend) {
+ s.cluster.RemoteClusters[id] = arvados.RemoteCluster{
+ Host: "in-process.local",
+ }
+ s.fed.remotes[id] = backend
+}
+
+func (s *FederationSuite) addHTTPRemote(c *check.C, id string, backend backend) {
+ srv := httpserver.Server{Addr: ":"}
+ srv.Handler = router.New(backend)
+ c.Check(srv.Start(), check.IsNil)
+ s.cluster.RemoteClusters[id] = arvados.RemoteCluster{
+ Scheme: "http",
+ Host: srv.Addr,
+ Proxy: true,
+ }
+ s.fed.remotes[id] = rpc.NewConn(id, &url.URL{Scheme: "http", Host: srv.Addr}, true, saltedTokenProvider(s.fed.local, id))
+}
if err != nil {
panic(err)
}
- orig := regexp.MustCompile(`(?ms)\nfunc [^\n]*CollectionList\(.*?\n}\n`).Find(buf)
+ orig := regexp.MustCompile(`(?ms)\nfunc [^\n]*generated_CollectionList\(.*?\n}\n`).Find(buf)
if len(orig) == 0 {
panic("can't find CollectionList func")
}
defer out.Close()
out.Write(regexp.MustCompile(`(?ms)^.*package .*?import.*?\n\)\n`).Find(buf))
io.WriteString(out, "//\n// -- this file is auto-generated -- do not edit -- edit list.go and run \"go generate\" instead --\n//\n\n")
- for _, t := range []string{"Container", "Specimen"} {
+ for _, t := range []string{"Container", "Specimen", "User"} {
_, err := out.Write(bytes.ReplaceAll(orig, []byte("Collection"), []byte(t)))
if err != nil {
panic(err)
"context"
"sort"
"sync"
+ "sync/atomic"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
//
// -- this file is auto-generated -- do not edit -- edit list.go and run "go generate" instead --
//
-func (conn *Conn) ContainerList(ctx context.Context, options arvados.ListOptions) (arvados.ContainerList, error) {
+func (conn *Conn) generated_ContainerList(ctx context.Context, options arvados.ListOptions) (arvados.ContainerList, error) {
var mtx sync.Mutex
var merged arvados.ContainerList
+ var needSort atomic.Value
+ needSort.Store(false)
err := conn.splitListRequest(ctx, options, func(ctx context.Context, _ string, backend arvados.API, options arvados.ListOptions) ([]string, error) {
cl, err := backend.ContainerList(ctx, options)
if err != nil {
defer mtx.Unlock()
if len(merged.Items) == 0 {
merged = cl
- } else {
+ } else if len(cl.Items) > 0 {
merged.Items = append(merged.Items, cl.Items...)
+ needSort.Store(true)
}
uuids := make([]string, 0, len(cl.Items))
for _, item := range cl.Items {
}
return uuids, nil
})
- sort.Slice(merged.Items, func(i, j int) bool { return merged.Items[i].UUID < merged.Items[j].UUID })
+ if needSort.Load().(bool) {
+ // Apply the default/implied order, "modified_at desc"
+ sort.Slice(merged.Items, func(i, j int) bool {
+ mi, mj := merged.Items[i].ModifiedAt, merged.Items[j].ModifiedAt
+ return mj.Before(mi)
+ })
+ }
+ if merged.Items == nil {
+ // Return empty results as [], not null
+ // (https://github.com/golang/go/issues/27589 might be
+ // a better solution in the future)
+ merged.Items = []arvados.Container{}
+ }
return merged, err
}
-func (conn *Conn) SpecimenList(ctx context.Context, options arvados.ListOptions) (arvados.SpecimenList, error) {
+func (conn *Conn) generated_SpecimenList(ctx context.Context, options arvados.ListOptions) (arvados.SpecimenList, error) {
var mtx sync.Mutex
var merged arvados.SpecimenList
+ var needSort atomic.Value
+ needSort.Store(false)
err := conn.splitListRequest(ctx, options, func(ctx context.Context, _ string, backend arvados.API, options arvados.ListOptions) ([]string, error) {
cl, err := backend.SpecimenList(ctx, options)
if err != nil {
defer mtx.Unlock()
if len(merged.Items) == 0 {
merged = cl
- } else {
+ } else if len(cl.Items) > 0 {
merged.Items = append(merged.Items, cl.Items...)
+ needSort.Store(true)
}
uuids := make([]string, 0, len(cl.Items))
for _, item := range cl.Items {
}
return uuids, nil
})
- sort.Slice(merged.Items, func(i, j int) bool { return merged.Items[i].UUID < merged.Items[j].UUID })
+ if needSort.Load().(bool) {
+ // Apply the default/implied order, "modified_at desc"
+ sort.Slice(merged.Items, func(i, j int) bool {
+ mi, mj := merged.Items[i].ModifiedAt, merged.Items[j].ModifiedAt
+ return mj.Before(mi)
+ })
+ }
+ if merged.Items == nil {
+ // Return empty results as [], not null
+ // (https://github.com/golang/go/issues/27589 might be
+ // a better solution in the future)
+ merged.Items = []arvados.Specimen{}
+ }
+ return merged, err
+}
+
+func (conn *Conn) generated_UserList(ctx context.Context, options arvados.ListOptions) (arvados.UserList, error) {
+ var mtx sync.Mutex
+ var merged arvados.UserList
+ var needSort atomic.Value
+ needSort.Store(false)
+ err := conn.splitListRequest(ctx, options, func(ctx context.Context, _ string, backend arvados.API, options arvados.ListOptions) ([]string, error) {
+ cl, err := backend.UserList(ctx, options)
+ if err != nil {
+ return nil, err
+ }
+ mtx.Lock()
+ defer mtx.Unlock()
+ if len(merged.Items) == 0 {
+ merged = cl
+ } else if len(cl.Items) > 0 {
+ merged.Items = append(merged.Items, cl.Items...)
+ needSort.Store(true)
+ }
+ uuids := make([]string, 0, len(cl.Items))
+ for _, item := range cl.Items {
+ uuids = append(uuids, item.UUID)
+ }
+ return uuids, nil
+ })
+ if needSort.Load().(bool) {
+ // Apply the default/implied order, "modified_at desc"
+ sort.Slice(merged.Items, func(i, j int) bool {
+ mi, mj := merged.Items[i].ModifiedAt, merged.Items[j].ModifiedAt
+ return mj.Before(mi)
+ })
+ }
+ if merged.Items == nil {
+ // Return empty results as [], not null
+ // (https://github.com/golang/go/issues/27589 might be
+ // a better solution in the future)
+ merged.Items = []arvados.User{}
+ }
return merged, err
}
"net/http"
"sort"
"sync"
+ "sync/atomic"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
)
//go:generate go run generate.go
// CollectionList is used as a template to auto-generate List()
// methods for other types; see generate.go.
-func (conn *Conn) CollectionList(ctx context.Context, options arvados.ListOptions) (arvados.CollectionList, error) {
+func (conn *Conn) generated_CollectionList(ctx context.Context, options arvados.ListOptions) (arvados.CollectionList, error) {
var mtx sync.Mutex
var merged arvados.CollectionList
+ var needSort atomic.Value
+ needSort.Store(false)
err := conn.splitListRequest(ctx, options, func(ctx context.Context, _ string, backend arvados.API, options arvados.ListOptions) ([]string, error) {
cl, err := backend.CollectionList(ctx, options)
if err != nil {
defer mtx.Unlock()
if len(merged.Items) == 0 {
merged = cl
- } else {
+ } else if len(cl.Items) > 0 {
merged.Items = append(merged.Items, cl.Items...)
+ needSort.Store(true)
}
uuids := make([]string, 0, len(cl.Items))
for _, item := range cl.Items {
}
return uuids, nil
})
- sort.Slice(merged.Items, func(i, j int) bool { return merged.Items[i].UUID < merged.Items[j].UUID })
+ if needSort.Load().(bool) {
+ // Apply the default/implied order, "modified_at desc"
+ sort.Slice(merged.Items, func(i, j int) bool {
+ mi, mj := merged.Items[i].ModifiedAt, merged.Items[j].ModifiedAt
+ return mj.Before(mi)
+ })
+ }
+ if merged.Items == nil {
+ // Return empty results as [], not null
+ // (https://github.com/golang/go/issues/27589 might be
+ // a better solution in the future)
+ merged.Items = []arvados.Collection{}
+ }
return merged, err
}
//
// * len(Order)==0
//
-// * Each filter must be either "uuid = ..." or "uuid in [...]".
+// * Each filter is either "uuid = ..." or "uuid in [...]".
//
// * The maximum possible response size (total number of objects that
// could potentially be matched by all of the specified filters)
}
}
- if len(todoByRemote) > 1 {
- if cannotSplit {
- return httpErrorf(http.StatusBadRequest, "cannot execute federated list query: each filter must be either 'uuid = ...' or 'uuid in [...]'")
- }
- if opts.Count != "none" {
- return httpErrorf(http.StatusBadRequest, "cannot execute federated list query unless count==\"none\"")
- }
- if opts.Limit >= 0 || opts.Offset != 0 || len(opts.Order) > 0 {
- return httpErrorf(http.StatusBadRequest, "cannot execute federated list query with limit, offset, or order parameter")
- }
- if max := conn.cluster.API.MaxItemsPerResponse; nUUIDs > max {
- return httpErrorf(http.StatusBadRequest, "cannot execute federated list query because number of UUIDs (%d) exceeds page size limit %d", nUUIDs, max)
- }
- selectingUUID := false
- for _, attr := range opts.Select {
- if attr == "uuid" {
- selectingUUID = true
- }
- }
- if opts.Select != nil && !selectingUUID {
- return httpErrorf(http.StatusBadRequest, "cannot execute federated list query with a select parameter that does not include uuid")
- }
+ if len(todoByRemote) == 0 {
+ return nil
+ }
+ if len(todoByRemote) == 1 && todoByRemote[conn.cluster.ClusterID] != nil {
+ // All UUIDs are local, so proxy a single request. The
+ // generic case has some limitations (see below) which
+ // we don't want to impose on local requests.
+ _, err := fn(ctx, conn.cluster.ClusterID, conn.local, opts)
+ return err
+ }
+ if cannotSplit {
+ return httpErrorf(http.StatusBadRequest, "cannot execute federated list query: each filter must be either 'uuid = ...' or 'uuid in [...]'")
+ }
+ if opts.Count != "none" {
+ return httpErrorf(http.StatusBadRequest, "cannot execute federated list query unless count==\"none\"")
+ }
+ if opts.Limit >= 0 || opts.Offset != 0 || len(opts.Order) > 0 {
+ return httpErrorf(http.StatusBadRequest, "cannot execute federated list query with limit, offset, or order parameter")
+ }
+ if max := conn.cluster.API.MaxItemsPerResponse; nUUIDs > max {
+ return httpErrorf(http.StatusBadRequest, "cannot execute federated list query because number of UUIDs (%d) exceeds page size limit %d", nUUIDs, max)
}
ctx, cancel := context.WithCancel(ctx)
return
}
remoteOpts := opts
+ if remoteOpts.Select != nil {
+ // We always need to select UUIDs to
+ // use the response, even if our
+ // caller doesn't.
+ remoteOpts.Select = append([]string{"uuid"}, remoteOpts.Select...)
+ }
for len(todo) > 0 {
if len(batch) > len(todo) {
// Reduce batch to just the todo's
done, err := fn(ctx, clusterID, backend, remoteOpts)
if err != nil {
- errs <- err
+ errs <- httpErrorf(http.StatusBadGateway, err.Error())
return
}
progress := false
for _, uuid := range done {
if _, ok := todo[uuid]; ok {
progress = true
delete(todo, uuid)
}
}
- if !progress {
- errs <- httpErrorf(http.StatusBadGateway, "cannot make progress in federated list query: cluster %q returned none of the requested UUIDs", clusterID)
+ if len(done) == 0 {
+ // Zero items == no more
+ // results exist, no need to
+ // get another page.
+ break
+ } else if !progress {
+ errs <- httpErrorf(http.StatusBadGateway, "cannot make progress in federated list query: cluster %q returned %d items but none had the requested UUIDs", clusterID, len(done))
return
}
}
"context"
"fmt"
"net/http"
- "net/url"
- "os"
- "testing"
-
- "git.curoverse.com/arvados.git/lib/controller/router"
- "git.curoverse.com/arvados.git/lib/controller/rpc"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
- check "gopkg.in/check.v1"
-)
-
-// Gocheck boilerplate
-func Test(t *testing.T) {
- check.TestingT(t)
-}
+ "reflect"
+ "sort"
-var (
- _ = check.Suite(&FederationSuite{})
- _ = check.Suite(&CollectionListSuite{})
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ check "gopkg.in/check.v1"
)
-type FederationSuite struct {
- cluster *arvados.Cluster
- ctx context.Context
- fed *Conn
-}
-
-func (s *FederationSuite) SetUpTest(c *check.C) {
- s.cluster = &arvados.Cluster{
- ClusterID: "aaaaa",
- RemoteClusters: map[string]arvados.RemoteCluster{
- "aaaaa": arvados.RemoteCluster{
- Host: os.Getenv("ARVADOS_API_HOST"),
- },
- },
- }
- arvadostest.SetServiceURL(&s.cluster.Services.RailsAPI, "https://"+os.Getenv("ARVADOS_TEST_API_HOST"))
- s.cluster.TLS.Insecure = true
- s.cluster.API.MaxItemsPerResponse = 3
-
- ctx := context.Background()
- ctx = ctxlog.Context(ctx, ctxlog.TestLogger(c))
- ctx = auth.NewContext(ctx, &auth.Credentials{Tokens: []string{arvadostest.ActiveTokenV2}})
- s.ctx = ctx
-
- s.fed = New(s.cluster)
-}
-
-func (s *FederationSuite) addDirectRemote(c *check.C, id string, backend backend) {
- s.cluster.RemoteClusters[id] = arvados.RemoteCluster{
- Host: "in-process.local",
- }
- s.fed.remotes[id] = backend
-}
-
-func (s *FederationSuite) addHTTPRemote(c *check.C, id string, backend backend) {
- srv := httpserver.Server{Addr: ":"}
- srv.Handler = router.New(backend)
- c.Check(srv.Start(), check.IsNil)
- s.cluster.RemoteClusters[id] = arvados.RemoteCluster{
- Host: srv.Addr,
- Proxy: true,
- }
- s.fed.remotes[id] = rpc.NewConn(id, &url.URL{Scheme: "http", Host: srv.Addr}, true, saltedTokenProvider(s.fed.local, id))
-}
+var _ = check.Suite(&CollectionListSuite{})
type collectionLister struct {
arvadostest.APIStub
break
}
if cl.matchFilters(c, options.Filters) {
+ if reflect.DeepEqual(options.Select, []string{"uuid", "name"}) {
+ c = arvados.Collection{UUID: c.UUID, Name: c.Name}
+ } else if reflect.DeepEqual(options.Select, []string{"name"}) {
+ c = arvados.Collection{Name: c.Name}
+ } else if len(options.Select) > 0 {
+ panic(fmt.Sprintf("not implemented: options=%#v", options))
+ }
resp.Items = append(resp.Items, c)
}
}
offset int
order []string
filters []arvados.Filter
+ selectfields []string
expectUUIDs []string
expectCalls []int // number of API calls to backends
expectStatus int
})
}
+func (s *CollectionListSuite) TestCollectionListOneLocalDeselectingUUID(c *check.C) {
+ s.test(c, listTrial{
+ count: "none",
+ limit: -1,
+ filters: []arvados.Filter{{"uuid", "=", s.uuids[0][0]}},
+ selectfields: []string{"name"},
+ expectUUIDs: []string{""}, // select=name is honored
+ expectCalls: []int{1, 0, 0},
+ })
+}
+
func (s *CollectionListSuite) TestCollectionListOneLocalUsingInOperator(c *check.C) {
s.test(c, listTrial{
count: "none",
})
}
+func (s *CollectionListSuite) TestCollectionListOneRemoteDeselectingUUID(c *check.C) {
+ s.test(c, listTrial{
+ count: "none",
+ limit: -1,
+ filters: []arvados.Filter{{"uuid", "=", s.uuids[1][0]}},
+ selectfields: []string{"name"},
+ expectUUIDs: []string{s.uuids[1][0]}, // uuid is returned, despite not being selected
+ expectCalls: []int{0, 1, 0},
+ })
+}
+
func (s *CollectionListSuite) TestCollectionListOneLocalOneRemote(c *check.C) {
s.test(c, listTrial{
count: "none",
})
}
+func (s *CollectionListSuite) TestCollectionListOneLocalOneRemoteDeselectingUUID(c *check.C) {
+ s.test(c, listTrial{
+ count: "none",
+ limit: -1,
+ filters: []arvados.Filter{{"uuid", "in", []string{s.uuids[0][0], s.uuids[1][0]}}},
+ selectfields: []string{"name"},
+ expectUUIDs: []string{s.uuids[0][0], s.uuids[1][0]}, // uuid is returned, despite not being selected
+ expectCalls: []int{1, 1, 0},
+ })
+}
+
func (s *CollectionListSuite) TestCollectionListTwoRemotes(c *check.C) {
s.test(c, listTrial{
count: "none",
}
func (s *CollectionListSuite) TestCollectionListRemoteError(c *check.C) {
- s.addDirectRemote(c, "bbbbb", &arvadostest.APIStub{})
+ s.addDirectRemote(c, "bbbbb", &arvadostest.APIStub{Error: fmt.Errorf("stub backend error")})
s.test(c, listTrial{
count: "none",
limit: -1,
Offset: trial.offset,
Order: trial.order,
Filters: trial.filters,
+ Select: trial.selectfields,
})
if trial.expectStatus != 0 {
c.Assert(err, check.NotNil)
- err, _ := err.(interface{ HTTPStatus() int })
- c.Assert(err, check.NotNil) // err must implement HTTPStatus()
+ err, ok := err.(interface{ HTTPStatus() int })
+ c.Assert(ok, check.Equals, true) // err must implement interface{ HTTPStatus() int }
c.Check(err.HTTPStatus(), check.Equals, trial.expectStatus)
c.Logf("returned error is %#v", err)
c.Logf("returned error string is %q", err)
} else {
c.Check(err, check.IsNil)
- var expectItems []arvados.Collection
+ expectItems := []arvados.Collection{}
for _, uuid := range trial.expectUUIDs {
expectItems = append(expectItems, arvados.Collection{UUID: uuid})
}
+ // expectItems is sorted by UUID, so sort resp.Items
+ // by UUID before checking DeepEquals.
+ sort.Slice(resp.Items, func(i, j int) bool { return resp.Items[i].UUID < resp.Items[j].UUID })
c.Check(resp, check.DeepEquals, arvados.CollectionList{
Items: expectItems,
})
"context"
"net/url"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/auth"
check "gopkg.in/check.v1"
)
-func (s *FederationSuite) TestDeferToLoginCluster(c *check.C) {
+var _ = check.Suite(&LoginSuite{})
+
+type LoginSuite struct {
+ FederationSuite
+}
+
+func (s *LoginSuite) TestDeferToLoginCluster(c *check.C) {
s.addHTTPRemote(c, "zhome", &arvadostest.APIStub{})
s.cluster.Login.LoginCluster = "zhome"
c.Check(err, check.IsNil)
c.Check(target.Host, check.Equals, s.cluster.RemoteClusters["zhome"].Host)
c.Check(target.Scheme, check.Equals, "http")
- c.Check(target.Query().Get("remote"), check.Equals, remote)
c.Check(target.Query().Get("return_to"), check.Equals, returnTo)
+ c.Check(target.Query().Get("remote"), check.Equals, remote)
+ _, remotePresent := target.Query()["remote"]
+ c.Check(remotePresent, check.Equals, remote != "")
+ }
+}
+
+func (s *LoginSuite) TestLogout(c *check.C) {
+ s.cluster.Services.Workbench1.ExternalURL = arvados.URL{Scheme: "https", Host: "workbench1.example.com"}
+ s.cluster.Services.Workbench2.ExternalURL = arvados.URL{Scheme: "https", Host: "workbench2.example.com"}
+ s.cluster.Login.GoogleClientID = "zzzzzzzzzzzzzz"
+ s.addHTTPRemote(c, "zhome", &arvadostest.APIStub{})
+ s.cluster.Login.LoginCluster = "zhome"
+
+ returnTo := "https://app.example.com/foo?bar"
+ for _, trial := range []struct {
+ token string
+ returnTo string
+ target string
+ }{
+ {token: "", returnTo: "", target: s.cluster.Services.Workbench2.ExternalURL.String()},
+ {token: "", returnTo: returnTo, target: returnTo},
+ {token: "zzzzzzzzzzzzzzzzzzzzz", returnTo: returnTo, target: returnTo},
+ {token: "v2/zzzzz-aaaaa-aaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", returnTo: returnTo, target: returnTo},
+ {token: "v2/zhome-aaaaa-aaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", returnTo: returnTo, target: "http://" + s.cluster.RemoteClusters["zhome"].Host + "/logout?" + url.Values{"return_to": {returnTo}}.Encode()},
+ } {
+ c.Logf("trial %#v", trial)
+ ctx := context.Background()
+ if trial.token != "" {
+ ctx = auth.NewContext(ctx, &auth.Credentials{Tokens: []string{trial.token}})
+ }
+ resp, err := s.fed.Logout(ctx, arvados.LogoutOptions{ReturnTo: trial.returnTo})
+ c.Assert(err, check.IsNil)
+ c.Logf(" RedirectLocation %q", resp.RedirectLocation)
+ target, err := url.Parse(resp.RedirectLocation)
+ c.Check(err, check.IsNil)
+ c.Check(target.String(), check.Equals, trial.target)
}
}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package federation
+
+import (
+ "encoding/json"
+ "errors"
+ "net/url"
+ "os"
+ "strings"
+
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&UserSuite{})
+
+type UserSuite struct {
+ FederationSuite
+}
+
+func (s *UserSuite) TestLoginClusterUserList(c *check.C) {
+ s.cluster.ClusterID = "local"
+ s.cluster.Login.LoginCluster = "zzzzz"
+ s.fed = New(s.cluster)
+ s.addDirectRemote(c, "zzzzz", rpc.NewConn("zzzzz", &url.URL{Scheme: "https", Host: os.Getenv("ARVADOS_API_HOST")}, true, rpc.PassthroughTokenProvider))
+
+ for _, updateFail := range []bool{false, true} {
+ for _, opts := range []arvados.ListOptions{
+ {Offset: 0, Limit: -1, Select: nil},
+ {Offset: 1, Limit: 1, Select: nil},
+ {Offset: 0, Limit: 2, Select: []string{"uuid"}},
+ {Offset: 0, Limit: 2, Select: []string{"uuid", "email"}},
+ } {
+ c.Logf("updateFail %v, opts %#v", updateFail, opts)
+ spy := arvadostest.NewProxy(c, s.cluster.Services.RailsAPI)
+ stub := &arvadostest.APIStub{Error: errors.New("local cluster failure")}
+ if updateFail {
+ s.fed.local = stub
+ } else {
+ s.fed.local = rpc.NewConn(s.cluster.ClusterID, spy.URL, true, rpc.PassthroughTokenProvider)
+ }
+ userlist, err := s.fed.UserList(s.ctx, opts)
+ if updateFail && err == nil {
+ // All local updates fail, so the only
+ // cases expected to succeed are the
+ // ones with 0 results.
+ c.Check(userlist.Items, check.HasLen, 0)
+ c.Check(stub.Calls(nil), check.HasLen, 0)
+ } else if updateFail {
+ c.Logf("... err %#v", err)
+ calls := stub.Calls(stub.UserBatchUpdate)
+ if c.Check(calls, check.HasLen, 1) {
+ c.Logf("... stub.UserUpdate called with options: %#v", calls[0].Options)
+ shouldUpdate := map[string]bool{
+ "uuid": false,
+ "email": true,
+ "first_name": true,
+ "last_name": true,
+ "is_admin": true,
+ "is_active": true,
+ "prefs": true,
+ // can't safely update locally
+ "owner_uuid": false,
+ "identity_url": false,
+ // virtual attrs
+ "full_name": false,
+ "is_invited": false,
+ }
+ if opts.Select != nil {
+ // Only the selected
+ // fields (minus uuid)
+ // should be updated.
+ for k := range shouldUpdate {
+ shouldUpdate[k] = false
+ }
+ for _, k := range opts.Select {
+ if k != "uuid" {
+ shouldUpdate[k] = true
+ }
+ }
+ }
+ var uuid string
+ for uuid = range calls[0].Options.(arvados.UserBatchUpdateOptions).Updates {
+ }
+ for k, shouldFind := range shouldUpdate {
+ _, found := calls[0].Options.(arvados.UserBatchUpdateOptions).Updates[uuid][k]
+ c.Check(found, check.Equals, shouldFind, check.Commentf("offending attr: %s", k))
+ }
+ }
+ } else {
+ updates := 0
+ for _, d := range spy.RequestDumps {
+ d := string(d)
+ if strings.Contains(d, "PATCH /arvados/v1/users/batch") {
+ c.Check(d, check.Matches, `(?ms).*Authorization: Bearer `+arvadostest.SystemRootToken+`.*`)
+ updates++
+ }
+ }
+ c.Check(err, check.IsNil)
+ c.Check(updates, check.Equals, 1)
+ c.Logf("... response items %#v", userlist.Items)
+ }
+ }
+ }
+}
+
+// userAttrsCachedFromLoginCluster must have an entry for every field
+// in the User struct.
+func (s *UserSuite) TestUserAttrsUpdateWhitelist(c *check.C) {
+ buf, err := json.Marshal(&arvados.User{})
+ c.Assert(err, check.IsNil)
+ var allFields map[string]interface{}
+ err = json.Unmarshal(buf, &allFields)
+ c.Assert(err, check.IsNil)
+ for k := range allFields {
+ _, ok := userAttrsCachedFromLoginCluster[k]
+ c.Check(ok, check.Equals, true, check.Commentf("field name %q missing from userAttrsCachedFromLoginCluster", k))
+ }
+}
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
)
c.Assert(s.remoteMock.Start(), check.IsNil)
cluster := &arvados.Cluster{
- ClusterID: "zhome",
- PostgreSQL: integrationTestCluster().PostgreSQL,
- EnableBetaController14287: enableBetaController14287,
+ ClusterID: "zhome",
+ PostgreSQL: integrationTestCluster().PostgreSQL,
+ ForceLegacyAPI14: forceLegacyAPI14,
}
cluster.TLS.Insecure = true
cluster.API.MaxItemsPerResponse = 1000
setPri(1) // Reset fixture so side effect doesn't break other tests.
}
+func (s *FederationSuite) TestCreateContainerRequestBadToken(c *check.C) {
+ defer s.localServiceReturns404(c).Close()
+ // pass cluster_id via query parameter; this allows arvados-controller
+ // to avoid parsing the body
+ req := httptest.NewRequest("POST", "/arvados/v1/container_requests?cluster_id=zzzzz",
+ strings.NewReader(`{"container_request":{}}`))
+ req.Header.Set("Authorization", "Bearer abcdefg")
+ req.Header.Set("Content-type", "application/json")
+ resp := s.testRequest(req).Result()
+ c.Check(resp.StatusCode, check.Equals, http.StatusForbidden)
+ var e map[string][]string
+ c.Check(json.NewDecoder(resp.Body).Decode(&e), check.IsNil)
+ c.Check(e["errors"], check.DeepEquals, []string{"invalid API token"})
+}
+
func (s *FederationSuite) TestCreateRemoteContainerRequest(c *check.C) {
defer s.localServiceReturns404(c).Close()
// pass cluster_id via query parameter; this allows arvados-controller
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/controller/federation"
- "git.curoverse.com/arvados.git/lib/controller/railsproxy"
- "git.curoverse.com/arvados.git/lib/controller/router"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/health"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/lib/controller/federation"
+ "git.arvados.org/arvados.git/lib/controller/railsproxy"
+ "git.arvados.org/arvados.git/lib/controller/router"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/health"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
_ "github.com/lib/pq"
)
rtr := router.New(federation.New(h.Cluster))
mux.Handle("/arvados/v1/config", rtr)
- if h.Cluster.EnableBetaController14287 {
+ if !h.Cluster.ForceLegacyAPI14 {
mux.Handle("/arvados/v1/collections", rtr)
mux.Handle("/arvados/v1/collections/", rtr)
+ mux.Handle("/arvados/v1/users", rtr)
+ mux.Handle("/arvados/v1/users/", rtr)
mux.Handle("/login", rtr)
+ mux.Handle("/logout", rtr)
}
hs := http.NotFoundHandler()
import (
"context"
+ "crypto/tls"
"encoding/json"
+ "io/ioutil"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
"github.com/prometheus/client_golang/prometheus"
check "gopkg.in/check.v1"
)
-var enableBetaController14287 bool
+var forceLegacyAPI14 bool
// Gocheck boilerplate
func Test(t *testing.T) {
- for _, enableBetaController14287 = range []bool{false, true} {
+ for _, forceLegacyAPI14 = range []bool{false, true} {
check.TestingT(t)
}
}
s.ctx, s.cancel = context.WithCancel(context.Background())
s.ctx = ctxlog.Context(s.ctx, ctxlog.New(os.Stderr, "json", "debug"))
s.cluster = &arvados.Cluster{
- ClusterID: "zzzzz",
- PostgreSQL: integrationTestCluster().PostgreSQL,
-
- EnableBetaController14287: enableBetaController14287,
+ ClusterID: "zzzzz",
+ PostgreSQL: integrationTestCluster().PostgreSQL,
+ ForceLegacyAPI14: forceLegacyAPI14,
}
s.cluster.TLS.Insecure = true
arvadostest.SetServiceURL(&s.cluster.Services.RailsAPI, "https://"+os.Getenv("ARVADOS_TEST_API_HOST"))
c.Check(resp.Header().Get("Location"), check.Matches, `(https://0.0.0.0:1)?/auth/joshid\?return_to=%2Cfoo&?`)
}
+func (s *HandlerSuite) TestLogoutSSO(c *check.C) {
+ s.cluster.Login.ProviderAppID = "test"
+ req := httptest.NewRequest("GET", "https://0.0.0.0:1/logout?return_to=https://example.com/foo", nil)
+ resp := httptest.NewRecorder()
+ s.handler.ServeHTTP(resp, req)
+ if !c.Check(resp.Code, check.Equals, http.StatusFound) {
+ c.Log(resp.Body.String())
+ }
+ c.Check(resp.Header().Get("Location"), check.Equals, "http://localhost:3002/users/sign_out?"+url.Values{"redirect_uri": {"https://example.com/foo"}}.Encode())
+}
+
+func (s *HandlerSuite) TestLogoutGoogle(c *check.C) {
+ if s.cluster.ForceLegacyAPI14 {
+ // Google login N/A
+ return
+ }
+ s.cluster.Login.GoogleClientID = "test"
+ req := httptest.NewRequest("GET", "https://0.0.0.0:1/logout?return_to=https://example.com/foo", nil)
+ resp := httptest.NewRecorder()
+ s.handler.ServeHTTP(resp, req)
+ if !c.Check(resp.Code, check.Equals, http.StatusFound) {
+ c.Log(resp.Body.String())
+ }
+ c.Check(resp.Header().Get("Location"), check.Equals, "https://example.com/foo")
+}
+
func (s *HandlerSuite) TestValidateV1APIToken(c *check.C) {
req := httptest.NewRequest("GET", "/arvados/v1/users/current", nil)
- user, err := s.handler.(*Handler).validateAPItoken(req, arvadostest.ActiveToken)
+ user, ok, err := s.handler.(*Handler).validateAPItoken(req, arvadostest.ActiveToken)
c.Assert(err, check.IsNil)
+ c.Check(ok, check.Equals, true)
c.Check(user.Authorization.UUID, check.Equals, arvadostest.ActiveTokenUUID)
c.Check(user.Authorization.APIToken, check.Equals, arvadostest.ActiveToken)
c.Check(user.Authorization.Scopes, check.DeepEquals, []string{"all"})
func (s *HandlerSuite) TestValidateV2APIToken(c *check.C) {
req := httptest.NewRequest("GET", "/arvados/v1/users/current", nil)
- user, err := s.handler.(*Handler).validateAPItoken(req, arvadostest.ActiveTokenV2)
+ user, ok, err := s.handler.(*Handler).validateAPItoken(req, arvadostest.ActiveTokenV2)
c.Assert(err, check.IsNil)
+ c.Check(ok, check.Equals, true)
c.Check(user.Authorization.UUID, check.Equals, arvadostest.ActiveTokenUUID)
c.Check(user.Authorization.APIToken, check.Equals, arvadostest.ActiveToken)
c.Check(user.Authorization.Scopes, check.DeepEquals, []string{"all"})
c.Check(user.Authorization.TokenV2(), check.Equals, arvadostest.ActiveTokenV2)
}
+func (s *HandlerSuite) TestValidateRemoteToken(c *check.C) {
+ saltedToken, err := auth.SaltToken(arvadostest.ActiveTokenV2, "abcde")
+ c.Assert(err, check.IsNil)
+ for _, trial := range []struct {
+ code int
+ token string
+ }{
+ {http.StatusOK, saltedToken},
+ {http.StatusUnauthorized, "bogus"},
+ } {
+ req := httptest.NewRequest("GET", "https://0.0.0.0:1/arvados/v1/users/current?remote=abcde", nil)
+ req.Header.Set("Authorization", "Bearer "+trial.token)
+ resp := httptest.NewRecorder()
+ s.handler.ServeHTTP(resp, req)
+ if !c.Check(resp.Code, check.Equals, trial.code) {
+ c.Logf("HTTP %d: %s", resp.Code, resp.Body.String())
+ }
+ }
+}
+
func (s *HandlerSuite) TestCreateAPIToken(c *check.C) {
req := httptest.NewRequest("GET", "/arvados/v1/users/current", nil)
auth, err := s.handler.(*Handler).createAPItoken(req, arvadostest.ActiveUserUUID, nil)
c.Assert(err, check.IsNil)
c.Check(auth.Scopes, check.DeepEquals, []string{"all"})
- user, err := s.handler.(*Handler).validateAPItoken(req, auth.TokenV2())
+ user, ok, err := s.handler.(*Handler).validateAPItoken(req, auth.TokenV2())
c.Assert(err, check.IsNil)
+ c.Check(ok, check.Equals, true)
c.Check(user.Authorization.UUID, check.Equals, auth.UUID)
c.Check(user.Authorization.APIToken, check.Equals, auth.APIToken)
c.Check(user.Authorization.Scopes, check.DeepEquals, []string{"all"})
c.Check(user.UUID, check.Equals, arvadostest.ActiveUserUUID)
c.Check(user.Authorization.TokenV2(), check.Equals, auth.TokenV2())
}
+
+func (s *HandlerSuite) CheckObjectType(c *check.C, url string, token string, skippedFields map[string]bool) {
+ var proxied, direct map[string]interface{}
+ var err error
+
+ // Get collection from controller
+ req := httptest.NewRequest("GET", url, nil)
+ req.Header.Set("Authorization", "Bearer "+token)
+ resp := httptest.NewRecorder()
+ s.handler.ServeHTTP(resp, req)
+ c.Assert(resp.Code, check.Equals, http.StatusOK,
+ check.Commentf("Wasn't able to get data from the controller at %q", url))
+ err = json.Unmarshal(resp.Body.Bytes(), &proxied)
+ c.Check(err, check.Equals, nil)
+
+ // Get collection directly from RailsAPI
+ client := &http.Client{
+ Transport: &http.Transport{
+ TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
+ },
+ }
+ resp2, err := client.Get(s.cluster.Services.RailsAPI.ExternalURL.String() + url + "/?api_token=" + token)
+ c.Check(err, check.Equals, nil)
+ defer resp2.Body.Close()
+ db, err := ioutil.ReadAll(resp2.Body)
+ c.Check(err, check.Equals, nil)
+ err = json.Unmarshal(db, &direct)
+ c.Check(err, check.Equals, nil)
+
+ // Check that all RailsAPI provided keys exist on the controller response.
+ for k := range direct {
+ if _, ok := skippedFields[k]; ok {
+ continue
+ } else if val, ok := proxied[k]; ok {
+ if direct["kind"] == "arvados#collection" && k == "manifest_text" {
+ // Tokens differ from request to request
+ c.Check(strings.Split(val.(string), "+A")[0], check.Equals, strings.Split(direct[k].(string), "+A")[0])
+ } else {
+ c.Check(val, check.DeepEquals, direct[k],
+ check.Commentf("RailsAPI %s key %q's value %q differs from controller's %q.", direct["kind"], k, direct[k], val))
+ }
+ } else {
+ c.Errorf("%s's key %q missing on controller's response.", direct["kind"], k)
+ }
+ }
+}
+
+func (s *HandlerSuite) TestGetObjects(c *check.C) {
+ // Get the 1st keep service's uuid from the running test server.
+ req := httptest.NewRequest("GET", "/arvados/v1/keep_services/", nil)
+ req.Header.Set("Authorization", "Bearer "+arvadostest.AdminToken)
+ resp := httptest.NewRecorder()
+ s.handler.ServeHTTP(resp, req)
+ c.Assert(resp.Code, check.Equals, http.StatusOK)
+ var ksList arvados.KeepServiceList
+ json.Unmarshal(resp.Body.Bytes(), &ksList)
+ c.Assert(len(ksList.Items), check.Not(check.Equals), 0)
+ ksUUID := ksList.Items[0].UUID
+
+ testCases := map[string]map[string]bool{
+ "api_clients/" + arvadostest.TrustedWorkbenchAPIClientUUID: nil,
+ "api_client_authorizations/" + arvadostest.AdminTokenUUID: nil,
+ "authorized_keys/" + arvadostest.AdminAuthorizedKeysUUID: nil,
+ "collections/" + arvadostest.CollectionWithUniqueWordsUUID: map[string]bool{"href": true},
+ "containers/" + arvadostest.RunningContainerUUID: nil,
+ "container_requests/" + arvadostest.QueuedContainerRequestUUID: nil,
+ "groups/" + arvadostest.AProjectUUID: nil,
+ "keep_services/" + ksUUID: nil,
+ "links/" + arvadostest.ActiveUserCanReadAllUsersLinkUUID: nil,
+ "logs/" + arvadostest.CrunchstatForRunningJobLogUUID: nil,
+ "nodes/" + arvadostest.IdleNodeUUID: nil,
+ "repositories/" + arvadostest.ArvadosRepoUUID: nil,
+ "users/" + arvadostest.ActiveUserUUID: map[string]bool{"href": true},
+ "virtual_machines/" + arvadostest.TestVMUUID: nil,
+ "workflows/" + arvadostest.WorkflowWithDefinitionYAMLUUID: nil,
+ }
+ for url, skippedFields := range testCases {
+ s.CheckObjectType(c, "/arvados/v1/"+url, arvadostest.AdminToken, skippedFields)
+ }
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package controller
+
+import (
+ "bytes"
+ "context"
+ "io"
+ "net"
+ "net/url"
+ "os"
+ "path/filepath"
+
+ "git.arvados.org/arvados.git/lib/boot"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/lib/service"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&IntegrationSuite{})
+
+type testCluster struct {
+ super boot.Supervisor
+ config arvados.Config
+ controllerURL *url.URL
+}
+
+type IntegrationSuite struct {
+ testClusters map[string]*testCluster
+}
+
+func (s *IntegrationSuite) SetUpSuite(c *check.C) {
+ if forceLegacyAPI14 {
+ c.Skip("heavy integration tests don't run with forceLegacyAPI14")
+ return
+ }
+
+ cwd, _ := os.Getwd()
+ s.testClusters = map[string]*testCluster{
+ "z1111": nil,
+ "z2222": nil,
+ "z3333": nil,
+ }
+ hostport := map[string]string{}
+ for id := range s.testClusters {
+ hostport[id] = func() string {
+ // TODO: Instead of expecting random ports on
+ // 127.0.0.11, .22, and .33 to be race-safe, try
+ // different 127.x.y.z until finding one that
+ // isn't in use.
+ ln, err := net.Listen("tcp", ":0")
+ c.Assert(err, check.IsNil)
+ ln.Close()
+ _, port, err := net.SplitHostPort(ln.Addr().String())
+ c.Assert(err, check.IsNil)
+ return "127.0.0." + id[3:] + ":" + port
+ }()
+ }
+ for id := range s.testClusters {
+ yaml := `Clusters:
+ ` + id + `:
+ Services:
+ Controller:
+ ExternalURL: https://` + hostport[id] + `
+ TLS:
+ Insecure: true
+ Login:
+ # LoginCluster: z1111
+ SystemLogs:
+ Format: text
+ RemoteClusters:
+ z1111:
+ Host: ` + hostport["z1111"] + `
+ Scheme: https
+ Insecure: true
+ Proxy: true
+ ActivateUsers: true
+ z2222:
+ Host: ` + hostport["z2222"] + `
+ Scheme: https
+ Insecure: true
+ Proxy: true
+ ActivateUsers: true
+ z3333:
+ Host: ` + hostport["z3333"] + `
+ Scheme: https
+ Insecure: true
+ Proxy: true
+ ActivateUsers: true
+`
+ loader := config.NewLoader(bytes.NewBufferString(yaml), ctxlog.TestLogger(c))
+ loader.Path = "-"
+ loader.SkipLegacy = true
+ loader.SkipAPICalls = true
+ cfg, err := loader.Load()
+ c.Assert(err, check.IsNil)
+ s.testClusters[id] = &testCluster{
+ super: boot.Supervisor{
+ SourcePath: filepath.Join(cwd, "..", ".."),
+ ClusterType: "test",
+ ListenHost: "127.0.0." + id[3:],
+ ControllerAddr: ":0",
+ OwnTemporaryDatabase: true,
+ Stderr: &service.LogPrefixer{Writer: ctxlog.LogWriter(c.Log), Prefix: []byte("[" + id + "] ")},
+ },
+ config: *cfg,
+ }
+ s.testClusters[id].super.Start(context.Background(), &s.testClusters[id].config)
+ }
+ for _, tc := range s.testClusters {
+ au, ok := tc.super.WaitReady()
+ c.Assert(ok, check.Equals, true)
+ u := url.URL(*au)
+ tc.controllerURL = &u
+ }
+}
+
+func (s *IntegrationSuite) TearDownSuite(c *check.C) {
+ for _, c := range s.testClusters {
+ c.super.Stop()
+ }
+}
+
+func (s *IntegrationSuite) conn(clusterID string) *rpc.Conn {
+ return rpc.NewConn(clusterID, s.testClusters[clusterID].controllerURL, true, rpc.PassthroughTokenProvider)
+}
+
+func (s *IntegrationSuite) clientsWithToken(clusterID string, token string) (context.Context, *arvados.Client, *keepclient.KeepClient) {
+ cl := s.testClusters[clusterID].config.Clusters[clusterID]
+ ctx := auth.NewContext(context.Background(), auth.NewCredentials(token))
+ ac, err := arvados.NewClientFromConfig(&cl)
+ if err != nil {
+ panic(err)
+ }
+ ac.AuthToken = token
+ arv, err := arvadosclient.New(ac)
+ if err != nil {
+ panic(err)
+ }
+ kc := keepclient.New(arv)
+ return ctx, ac, kc
+}
+
+func (s *IntegrationSuite) userClients(c *check.C, conn *rpc.Conn, rootctx context.Context, clusterID string, activate bool) (context.Context, *arvados.Client, *keepclient.KeepClient) {
+ login, err := conn.UserSessionCreate(rootctx, rpc.UserSessionCreateOptions{
+ ReturnTo: ",https://example.com",
+ AuthInfo: rpc.UserSessionAuthInfo{
+ Email: "user@example.com",
+ FirstName: "Example",
+ LastName: "User",
+ Username: "example",
+ },
+ })
+ c.Assert(err, check.IsNil)
+ redirURL, err := url.Parse(login.RedirectLocation)
+ c.Assert(err, check.IsNil)
+ userToken := redirURL.Query().Get("api_token")
+ c.Logf("user token: %q", userToken)
+ ctx, ac, kc := s.clientsWithToken(clusterID, userToken)
+ user, err := conn.UserGetCurrent(ctx, arvados.GetOptions{})
+ c.Assert(err, check.IsNil)
+ _, err = conn.UserSetup(rootctx, arvados.UserSetupOptions{UUID: user.UUID})
+ c.Assert(err, check.IsNil)
+ if activate {
+ _, err = conn.UserActivate(rootctx, arvados.UserActivateOptions{UUID: user.UUID})
+ c.Assert(err, check.IsNil)
+ user, err = conn.UserGetCurrent(ctx, arvados.GetOptions{})
+ c.Assert(err, check.IsNil)
+ c.Logf("user UUID: %q", user.UUID)
+ if !user.IsActive {
+ c.Fatalf("failed to activate user -- %#v", user)
+ }
+ }
+ return ctx, ac, kc
+}
+
+func (s *IntegrationSuite) rootClients(clusterID string) (context.Context, *arvados.Client, *keepclient.KeepClient) {
+ return s.clientsWithToken(clusterID, s.testClusters[clusterID].config.Clusters[clusterID].SystemRootToken)
+}
+
+func (s *IntegrationSuite) TestGetCollectionByPDH(c *check.C) {
+ conn1 := s.conn("z1111")
+ rootctx1, _, _ := s.rootClients("z1111")
+ conn3 := s.conn("z3333")
+ userctx1, ac1, kc1 := s.userClients(c, conn1, rootctx1, "z1111", true)
+
+ // Create the collection to find its PDH (but don't save it
+ // anywhere yet)
+ var coll1 arvados.Collection
+ fs1, err := coll1.FileSystem(ac1, kc1)
+ c.Assert(err, check.IsNil)
+ f, err := fs1.OpenFile("test.txt", os.O_CREATE|os.O_RDWR, 0777)
+ c.Assert(err, check.IsNil)
+ _, err = io.WriteString(f, "IntegrationSuite.TestGetCollectionByPDH")
+ c.Assert(err, check.IsNil)
+ err = f.Close()
+ c.Assert(err, check.IsNil)
+ mtxt, err := fs1.MarshalManifest(".")
+ c.Assert(err, check.IsNil)
+ pdh := arvados.PortableDataHash(mtxt)
+
+ // Looking up the PDH before saving returns 404 if cycle
+ // detection is working.
+ _, err = conn1.CollectionGet(userctx1, arvados.GetOptions{UUID: pdh})
+ c.Assert(err, check.ErrorMatches, `.*404 Not Found.*`)
+
+ // Save the collection on cluster z1111.
+ coll1, err = conn1.CollectionCreate(userctx1, arvados.CreateOptions{Attrs: map[string]interface{}{
+ "manifest_text": mtxt,
+ }})
+ c.Assert(err, check.IsNil)
+
+ // Retrieve the collection from cluster z3333.
+ coll, err := conn3.CollectionGet(userctx1, arvados.GetOptions{UUID: pdh})
+ c.Check(err, check.IsNil)
+ c.Check(coll.PortableDataHash, check.Equals, pdh)
+}
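For context on the `pdh` computed in the test above: an Arvados portable data hash is the hex MD5 of the collection's manifest text, plus `+` and the manifest's length in bytes. A minimal sketch assuming that documented format (`arvados.PortableDataHash` is the SDK's real implementation):

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// portableDataHash computes a portable data hash in the documented
// Arvados format: hex MD5 of the manifest text, "+", byte length.
func portableDataHash(mtxt string) string {
	return fmt.Sprintf("%x+%d", md5.Sum([]byte(mtxt)), len(mtxt))
}

func main() {
	mtxt := ". d41d8cd98f00b204e9800998ecf8427e+0 0:0:test.txt\n"
	fmt.Println(portableDataHash(mtxt))
}
```

Because the hash depends only on manifest content, the same PDH resolves on any cluster that has stored the collection, which is what `TestGetCollectionByPDH` exercises across z1111 and z3333.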
"context"
"errors"
- "git.curoverse.com/arvados.git/lib/controller/railsproxy"
- "git.curoverse.com/arvados.git/lib/controller/rpc"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/controller/railsproxy"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
type railsProxy = rpc.Conn
}
}
+func (conn *Conn) Logout(ctx context.Context, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
+ if conn.cluster.Login.ProviderAppID != "" {
+ // Proxy to RailsAPI, which hands off to sso-provider.
+ return conn.railsProxy.Logout(ctx, opts)
+ } else {
+ return conn.googleLoginController.Logout(ctx, conn.cluster, conn.railsProxy, opts)
+ }
+}
+
func (conn *Conn) Login(ctx context.Context, opts arvados.LoginOptions) (arvados.LoginResponse, error) {
wantGoogle := conn.cluster.Login.GoogleClientID != ""
wantSSO := conn.cluster.Login.ProviderAppID != ""
"text/template"
"time"
- "git.curoverse.com/arvados.git/lib/controller/rpc"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/coreos/go-oidc"
"golang.org/x/oauth2"
+ "google.golang.org/api/option"
+ "google.golang.org/api/people/v1"
)
type googleLoginController struct {
- issuer string // override OIDC issuer URL (normally https://accounts.google.com) for testing
- provider *oidc.Provider
- mu sync.Mutex
+ issuer string // override OIDC issuer URL (normally https://accounts.google.com) for testing
+ peopleAPIBasePath string // override Google People API base URL (normally set by google pkg to https://people.googleapis.com/)
+ provider *oidc.Provider
+ mu sync.Mutex
}
func (ctrl *googleLoginController) getProvider() (*oidc.Provider, error) {
return ctrl.provider, nil
}
+func (ctrl *googleLoginController) Logout(ctx context.Context, cluster *arvados.Cluster, railsproxy *railsProxy, opts arvados.LogoutOptions) (arvados.LogoutResponse, error) {
+ target := opts.ReturnTo
+ if target == "" {
+ if cluster.Services.Workbench2.ExternalURL.Host != "" {
+ target = cluster.Services.Workbench2.ExternalURL.String()
+ } else {
+ target = cluster.Services.Workbench1.ExternalURL.String()
+ }
+ }
+ return arvados.LogoutResponse{RedirectLocation: target}, nil
+}
+
func (ctrl *googleLoginController) Login(ctx context.Context, cluster *arvados.Cluster, railsproxy *railsProxy, opts arvados.LoginOptions) (arvados.LoginResponse, error) {
provider, err := ctrl.getProvider()
if err != nil {
if err != nil {
return ctrl.loginError(fmt.Errorf("error verifying ID token: %s", err))
}
- var claims struct {
- Name string `json:"name"`
- Email string `json:"email"`
- Verified bool `json:"email_verified"`
+ authinfo, err := ctrl.getAuthInfo(ctx, cluster, conf, oauth2Token, idToken)
+ if err != nil {
+ return ctrl.loginError(err)
+ }
+ ctxRoot := auth.NewContext(ctx, &auth.Credentials{Tokens: []string{cluster.SystemRootToken}})
+ return railsproxy.UserSessionCreate(ctxRoot, rpc.UserSessionCreateOptions{
+ ReturnTo: state.Remote + "," + state.ReturnTo,
+ AuthInfo: *authinfo,
+ })
+ }
+}
+
+// getAuthInfo returns a name and email address for the authenticated
+// user. The email address from the (verified) ID token claims is used
+// by default; if Login.GoogleAlternateEmailAddresses is enabled, the
+// Google People API is also queried for additional verified addresses,
+// and the one flagged "primary" (if any) takes precedence.
+func (ctrl *googleLoginController) getAuthInfo(ctx context.Context, cluster *arvados.Cluster, conf *oauth2.Config, token *oauth2.Token, idToken *oidc.IDToken) (*rpc.UserSessionAuthInfo, error) {
+ var ret rpc.UserSessionAuthInfo
+ defer ctxlog.FromContext(ctx).WithField("ret", &ret).Debug("getAuthInfo returned")
+
+ var claims struct {
+ Name string `json:"name"`
+ Email string `json:"email"`
+ Verified bool `json:"email_verified"`
+ }
+ if err := idToken.Claims(&claims); err != nil {
+ return nil, fmt.Errorf("error extracting claims from ID token: %s", err)
+ } else if claims.Verified {
+ // Fall back to this info if the People API call
+ // (below) doesn't return a primary && verified email.
+ if names := strings.Fields(strings.TrimSpace(claims.Name)); len(names) > 1 {
+ ret.FirstName = strings.Join(names[0:len(names)-1], " ")
+ ret.LastName = names[len(names)-1]
+ } else if len(names) == 1 {
+ ret.FirstName = names[0]
}
- if err := idToken.Claims(&claims); err != nil {
- return ctrl.loginError(fmt.Errorf("error extracting claims from ID token: %s", err))
+ ret.Email = claims.Email
+ }
+
+ if !cluster.Login.GoogleAlternateEmailAddresses {
+ if ret.Email == "" {
+ return nil, fmt.Errorf("cannot log in with unverified email address %q", claims.Email)
}
- if !claims.Verified {
- return ctrl.loginError(errors.New("cannot authenticate using an unverified email address"))
+ return &ret, nil
+ }
+
+ svc, err := people.NewService(ctx, option.WithTokenSource(conf.TokenSource(ctx, token)), option.WithScopes(people.UserEmailsReadScope))
+ if err != nil {
+ return nil, fmt.Errorf("error setting up People API: %s", err)
+ }
+ if p := ctrl.peopleAPIBasePath; p != "" {
+ // Override normal API endpoint (for testing)
+ svc.BasePath = p
+ }
+ person, err := people.NewPeopleService(svc).Get("people/me").PersonFields("emailAddresses,names").Do()
+ if err != nil {
+ if strings.Contains(err.Error(), "Error 403") && strings.Contains(err.Error(), "accessNotConfigured") {
+ // Log the original API error, but display
+ // only the "fix config" advice to the user.
+ ctxlog.FromContext(ctx).WithError(err).WithField("email", ret.Email).Error("People API is not enabled")
+ return nil, errors.New("configuration error: Login.GoogleAlternateEmailAddresses is true, but Google People API is not enabled")
+ } else {
+ return nil, fmt.Errorf("error getting profile info from People API: %s", err)
}
+ }
- firstname, lastname := strings.TrimSpace(claims.Name), ""
- if names := strings.Fields(firstname); len(names) > 1 {
- firstname = strings.Join(names[0:len(names)-1], " ")
- lastname = names[len(names)-1]
+ // The given/family names returned by the People API and
+ // flagged as "primary" (if any) take precedence over the
+ // split-by-whitespace result from above.
+ for _, name := range person.Names {
+ if name.Metadata != nil && name.Metadata.Primary {
+ ret.FirstName = name.GivenName
+ ret.LastName = name.FamilyName
+ break
}
+ }
- ctxRoot := auth.NewContext(ctx, &auth.Credentials{Tokens: []string{cluster.SystemRootToken}})
- return railsproxy.UserSessionCreate(ctxRoot, rpc.UserSessionCreateOptions{
- ReturnTo: state.Remote + "," + state.ReturnTo,
- AuthInfo: map[string]interface{}{
- "email": claims.Email,
- "first_name": firstname,
- "last_name": lastname,
- },
- })
+ altEmails := map[string]bool{}
+ if ret.Email != "" {
+ altEmails[ret.Email] = true
+ }
+ for _, ea := range person.EmailAddresses {
+ if ea.Metadata == nil || !ea.Metadata.Verified {
+ ctxlog.FromContext(ctx).WithField("address", ea.Value).Info("skipping unverified email address")
+ continue
+ }
+ altEmails[ea.Value] = true
+ if ea.Metadata.Primary || ret.Email == "" {
+ ret.Email = ea.Value
+ }
+ }
+ if len(altEmails) == 0 {
+ return nil, errors.New("cannot log in without a verified email address")
+ }
+ for ae := range altEmails {
+ if ae != ret.Email {
+ ret.AlternateEmails = append(ret.AlternateEmails, ae)
+ if i := strings.Index(ae, "@"); i > 0 && strings.EqualFold(ae[i+1:], cluster.Users.PreferDomainForUsername) {
+ ret.Username = strings.SplitN(ae[:i], "+", 2)[0]
+ }
+ }
}
+ return &ret, nil
}
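The email-selection rules in `getAuthInfo` can be summarized in isolation: unverified addresses are skipped, a People API address flagged primary displaces the OIDC claim, and every other verified address becomes an alternate. A standalone sketch (hypothetical helper, not part of the patch):

```go
package main

import (
	"fmt"
	"sort"
)

type email struct {
	Value    string
	Verified bool
	Primary  bool
}

// pickEmails mirrors the selection rules: oidcEmail is the default
// primary; a verified address flagged Primary displaces it; all
// other verified addresses (plus the displaced default) are
// returned as sorted alternates.
func pickEmails(oidcEmail string, addrs []email) (primary string, alternates []string) {
	primary = oidcEmail
	seen := map[string]bool{}
	if oidcEmail != "" {
		seen[oidcEmail] = true
	}
	for _, a := range addrs {
		if !a.Verified {
			continue // unverified addresses are never used
		}
		seen[a.Value] = true
		if a.Primary || primary == "" {
			primary = a.Value
		}
	}
	for v := range seen {
		if v != primary {
			alternates = append(alternates, v)
		}
	}
	sort.Strings(alternates)
	return
}

func main() {
	p, alts := pickEmails("me@alt.example", []email{
		{Value: "me@primary.example", Verified: true, Primary: true},
		{Value: "me@nope.example", Verified: false},
	})
	fmt.Println(p, alts)
}
```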
func (ctrl *googleLoginController) loginError(sendError error) (resp arvados.LoginResponse, err error) {
"net/http"
"net/http/httptest"
"net/url"
+ "sort"
"strings"
"testing"
"time"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/lib/controller/rpc"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
check "gopkg.in/check.v1"
jose "gopkg.in/square/go-jose.v2"
)
var _ = check.Suite(&LoginSuite{})
type LoginSuite struct {
- cluster *arvados.Cluster
- ctx context.Context
- localdb *Conn
- railsSpy *arvadostest.Proxy
- fakeIssuer *httptest.Server
- issuerKey *rsa.PrivateKey
+ cluster *arvados.Cluster
+ ctx context.Context
+ localdb *Conn
+ railsSpy *arvadostest.Proxy
+ fakeIssuer *httptest.Server
+ fakePeopleAPI *httptest.Server
+ fakePeopleAPIResponse map[string]interface{}
+ issuerKey *rsa.PrivateKey
// expected token request
validCode string
authName string
}
+func (s *LoginSuite) TearDownSuite(c *check.C) {
+ // Undo any changes/additions to the user database so they
+ // don't affect subsequent tests.
+ arvadostest.ResetEnv()
+ c.Check(arvados.NewClientFromEnv().RequestAndDecode(nil, "POST", "database/reset", nil, nil), check.IsNil)
+}
+
func (s *LoginSuite) SetUpTest(c *check.C) {
var err error
s.issuerKey, err = rsa.GenerateKey(rand.Reader, 2048)
w.WriteHeader(http.StatusNotFound)
}
}))
+ s.validCode = fmt.Sprintf("abcdefgh-%d", time.Now().Unix())
+
+ s.fakePeopleAPI = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ req.ParseForm()
+ c.Logf("fakePeopleAPI: got req: %s %s %s", req.Method, req.URL, req.Form)
+ w.Header().Set("Content-Type", "application/json")
+ switch req.URL.Path {
+ case "/v1/people/me":
+ if f := req.Form.Get("personFields"); f != "emailAddresses,names" {
+ w.WriteHeader(http.StatusBadRequest)
+ break
+ }
+ json.NewEncoder(w).Encode(s.fakePeopleAPIResponse)
+ default:
+ w.WriteHeader(http.StatusNotFound)
+ }
+ }))
+ s.fakePeopleAPIResponse = map[string]interface{}{}
cfg, err := config.NewLoader(nil, ctxlog.TestLogger(c)).Load()
s.cluster, err = cfg.GetCluster("")
+ s.cluster.Login.ProviderAppID = ""
+ s.cluster.Login.ProviderAppSecret = ""
s.cluster.Login.GoogleClientID = "test%client$id"
s.cluster.Login.GoogleClientSecret = "test#client/secret"
+ s.cluster.Users.PreferDomainForUsername = "PreferDomainForUsername.example.com"
c.Assert(err, check.IsNil)
s.localdb = NewConn(s.cluster)
s.localdb.googleLoginController.issuer = s.fakeIssuer.URL
+ s.localdb.googleLoginController.peopleAPIBasePath = s.fakePeopleAPI.URL
s.railsSpy = arvadostest.NewProxy(c, s.cluster.Services.RailsAPI)
s.localdb.railsProxy = rpc.NewConn(s.cluster.ClusterID, s.railsSpy.URL, true, rpc.PassthroughTokenProvider)
s.railsSpy.Close()
}
-func (s *LoginSuite) TestGoogleLoginStart_Bogus(c *check.C) {
+func (s *LoginSuite) TestGoogleLogout(c *check.C) {
+ resp, err := s.localdb.Logout(context.Background(), arvados.LogoutOptions{ReturnTo: "https://foo.example.com/bar"})
+ c.Check(err, check.IsNil)
+ c.Check(resp.RedirectLocation, check.Equals, "https://foo.example.com/bar")
+}
+
+func (s *LoginSuite) TestGoogleLogin_Start_Bogus(c *check.C) {
resp, err := s.localdb.Login(context.Background(), arvados.LoginOptions{})
c.Check(err, check.IsNil)
c.Check(resp.RedirectLocation, check.Equals, "")
c.Check(resp.HTML.String(), check.Matches, `.*missing return_to parameter.*`)
}
-func (s *LoginSuite) TestGoogleLoginStart(c *check.C) {
+func (s *LoginSuite) TestGoogleLogin_Start(c *check.C) {
for _, remote := range []string{"", "zzzzz"} {
resp, err := s.localdb.Login(context.Background(), arvados.LoginOptions{Remote: remote, ReturnTo: "https://app.example.com/foo?bar"})
c.Check(err, check.IsNil)
}
}
-func (s *LoginSuite) TestGoogleLoginSuccess(c *check.C) {
- // Initiate login, but instead of following the redirect to
- // the provider, just grab state from the redirect URL.
- resp, err := s.localdb.Login(context.Background(), arvados.LoginOptions{ReturnTo: "https://app.example.com/foo?bar"})
- c.Check(err, check.IsNil)
- target, err := url.Parse(resp.RedirectLocation)
- c.Check(err, check.IsNil)
- state := target.Query().Get("state")
- c.Check(state, check.Not(check.Equals), "")
-
- // Prime the fake issuer with a valid code.
- s.validCode = fmt.Sprintf("abcdefgh-%d", time.Now().Unix())
-
- // Callback with invalid code.
- resp, err = s.localdb.Login(context.Background(), arvados.LoginOptions{
+func (s *LoginSuite) TestGoogleLogin_InvalidCode(c *check.C) {
+ state := s.startLogin(c)
+ resp, err := s.localdb.Login(context.Background(), arvados.LoginOptions{
Code: "first-try-a-bogus-code",
State: state,
})
c.Check(err, check.IsNil)
c.Check(resp.RedirectLocation, check.Equals, "")
c.Check(resp.HTML.String(), check.Matches, `(?ms).*error in OAuth2 exchange.*cannot fetch token.*`)
+}
- // Callback with invalid state.
- resp, err = s.localdb.Login(context.Background(), arvados.LoginOptions{
+func (s *LoginSuite) TestGoogleLogin_InvalidState(c *check.C) {
+ s.startLogin(c)
+ resp, err := s.localdb.Login(context.Background(), arvados.LoginOptions{
Code: s.validCode,
State: "bogus-state",
})
c.Check(err, check.IsNil)
c.Check(resp.RedirectLocation, check.Equals, "")
c.Check(resp.HTML.String(), check.Matches, `(?ms).*invalid OAuth2 state.*`)
+}
- // Callback with valid code and state.
- resp, err = s.localdb.Login(context.Background(), arvados.LoginOptions{
+func (s *LoginSuite) setupPeopleAPIError(c *check.C) {
+ s.fakePeopleAPI = httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ w.WriteHeader(http.StatusForbidden)
+ fmt.Fprintln(w, `Error 403: accessNotConfigured`)
+ }))
+ s.localdb.googleLoginController.peopleAPIBasePath = s.fakePeopleAPI.URL
+}
+
+func (s *LoginSuite) TestGoogleLogin_PeopleAPIDisabled(c *check.C) {
+ s.cluster.Login.GoogleAlternateEmailAddresses = false
+ s.authEmail = "joe.smith@primary.example.com"
+ s.setupPeopleAPIError(c)
+ state := s.startLogin(c)
+ _, err := s.localdb.Login(context.Background(), arvados.LoginOptions{
+ Code: s.validCode,
+ State: state,
+ })
+ c.Check(err, check.IsNil)
+ authinfo := s.getCallbackAuthInfo(c)
+ c.Check(authinfo.Email, check.Equals, "joe.smith@primary.example.com")
+}
+
+func (s *LoginSuite) TestGoogleLogin_PeopleAPIError(c *check.C) {
+ s.setupPeopleAPIError(c)
+ state := s.startLogin(c)
+ resp, err := s.localdb.Login(context.Background(), arvados.LoginOptions{
+ Code: s.validCode,
+ State: state,
+ })
+ c.Check(err, check.IsNil)
+ c.Check(resp.RedirectLocation, check.Equals, "")
+}
+
+func (s *LoginSuite) TestGoogleLogin_Success(c *check.C) {
+ state := s.startLogin(c)
+ resp, err := s.localdb.Login(context.Background(), arvados.LoginOptions{
Code: s.validCode,
State: state,
})
c.Check(err, check.IsNil)
c.Check(resp.HTML.String(), check.Equals, "")
- c.Check(resp.RedirectLocation, check.Not(check.Equals), "")
- target, err = url.Parse(resp.RedirectLocation)
+ target, err := url.Parse(resp.RedirectLocation)
c.Check(err, check.IsNil)
c.Check(target.Host, check.Equals, "app.example.com")
c.Check(target.Path, check.Equals, "/foo")
token := target.Query().Get("api_token")
c.Check(token, check.Matches, `v2/zzzzz-gj3su-.{15}/.{32,50}`)
- foundCallback := false
- for _, dump := range s.railsSpy.RequestDumps {
- c.Logf("spied request: %q", dump)
- split := bytes.Split(dump, []byte("\r\n\r\n"))
- c.Assert(split, check.HasLen, 2)
- hdr, body := string(split[0]), string(split[1])
- if strings.Contains(hdr, "POST /auth/controller/callback") {
- vs, err := url.ParseQuery(body)
- var authinfo map[string]interface{}
- c.Check(json.Unmarshal([]byte(vs.Get("auth_info")), &authinfo), check.IsNil)
- c.Check(err, check.IsNil)
- c.Check(authinfo["first_name"], check.Equals, "Fake User")
- c.Check(authinfo["last_name"], check.Equals, "Name")
- c.Check(authinfo["email"], check.Equals, "active-user@arvados.local")
- foundCallback = true
- }
- }
- c.Check(foundCallback, check.Equals, true)
+ authinfo := s.getCallbackAuthInfo(c)
+ c.Check(authinfo.FirstName, check.Equals, "Fake User")
+ c.Check(authinfo.LastName, check.Equals, "Name")
+ c.Check(authinfo.Email, check.Equals, "active-user@arvados.local")
+ c.Check(authinfo.AlternateEmails, check.HasLen, 0)
// Try using the returned Arvados token.
c.Logf("trying an API call with new token %q", token)
c.Check(err, check.ErrorMatches, `.*401 Unauthorized: Not logged in.*`)
}
+func (s *LoginSuite) TestGoogleLogin_RealName(c *check.C) {
+ s.authEmail = "joe.smith@primary.example.com"
+ s.fakePeopleAPIResponse = map[string]interface{}{
+ "names": []map[string]interface{}{
+ {
+ "metadata": map[string]interface{}{"primary": false},
+ "givenName": "Joe",
+ "familyName": "Smith",
+ },
+ {
+ "metadata": map[string]interface{}{"primary": true},
+ "givenName": "Joseph",
+ "familyName": "Psmith",
+ },
+ },
+ }
+ state := s.startLogin(c)
+ s.localdb.Login(context.Background(), arvados.LoginOptions{
+ Code: s.validCode,
+ State: state,
+ })
+
+ authinfo := s.getCallbackAuthInfo(c)
+ c.Check(authinfo.FirstName, check.Equals, "Joseph")
+ c.Check(authinfo.LastName, check.Equals, "Psmith")
+}
+
+func (s *LoginSuite) TestGoogleLogin_OIDCRealName(c *check.C) {
+ s.authName = "Joe P. Smith"
+ s.authEmail = "joe.smith@primary.example.com"
+ state := s.startLogin(c)
+ s.localdb.Login(context.Background(), arvados.LoginOptions{
+ Code: s.validCode,
+ State: state,
+ })
+
+ authinfo := s.getCallbackAuthInfo(c)
+ c.Check(authinfo.FirstName, check.Equals, "Joe P.")
+ c.Check(authinfo.LastName, check.Equals, "Smith")
+}
+
+// People API returns some additional email addresses.
+func (s *LoginSuite) TestGoogleLogin_AlternateEmailAddresses(c *check.C) {
+ s.authEmail = "joe.smith@primary.example.com"
+ s.fakePeopleAPIResponse = map[string]interface{}{
+ "emailAddresses": []map[string]interface{}{
+ {
+ "metadata": map[string]interface{}{"verified": true},
+ "value": "joe.smith@work.example.com",
+ },
+ {
+ "value": "joe.smith@unverified.example.com", // unverified, so this one will be ignored
+ },
+ {
+ "metadata": map[string]interface{}{"verified": true},
+ "value": "joe.smith@home.example.com",
+ },
+ },
+ }
+ state := s.startLogin(c)
+ s.localdb.Login(context.Background(), arvados.LoginOptions{
+ Code: s.validCode,
+ State: state,
+ })
+
+ authinfo := s.getCallbackAuthInfo(c)
+ c.Check(authinfo.Email, check.Equals, "joe.smith@primary.example.com")
+ c.Check(authinfo.AlternateEmails, check.DeepEquals, []string{"joe.smith@home.example.com", "joe.smith@work.example.com"})
+}
+
+// Primary address is not the one initially returned by OIDC.
+func (s *LoginSuite) TestGoogleLogin_AlternateEmailAddresses_Primary(c *check.C) {
+ s.authEmail = "joe.smith@alternate.example.com"
+ s.fakePeopleAPIResponse = map[string]interface{}{
+ "emailAddresses": []map[string]interface{}{
+ {
+ "metadata": map[string]interface{}{"verified": true, "primary": true},
+ "value": "joe.smith@primary.example.com",
+ },
+ {
+ "metadata": map[string]interface{}{"verified": true},
+ "value": "joe.smith@alternate.example.com",
+ },
+ {
+ "metadata": map[string]interface{}{"verified": true},
+ "value": "jsmith+123@preferdomainforusername.example.com",
+ },
+ },
+ }
+ state := s.startLogin(c)
+ s.localdb.Login(context.Background(), arvados.LoginOptions{
+ Code: s.validCode,
+ State: state,
+ })
+ authinfo := s.getCallbackAuthInfo(c)
+ c.Check(authinfo.Email, check.Equals, "joe.smith@primary.example.com")
+ c.Check(authinfo.AlternateEmails, check.DeepEquals, []string{"joe.smith@alternate.example.com", "jsmith+123@preferdomainforusername.example.com"})
+ c.Check(authinfo.Username, check.Equals, "jsmith")
+}
+
+func (s *LoginSuite) TestGoogleLogin_NoPrimaryEmailAddress(c *check.C) {
+ s.authEmail = "joe.smith@unverified.example.com"
+ s.authEmailVerified = false
+ s.fakePeopleAPIResponse = map[string]interface{}{
+ "emailAddresses": []map[string]interface{}{
+ {
+ "metadata": map[string]interface{}{"verified": true},
+ "value": "joe.smith@work.example.com",
+ },
+ {
+ "metadata": map[string]interface{}{"verified": true},
+ "value": "joe.smith@home.example.com",
+ },
+ },
+ }
+ state := s.startLogin(c)
+ s.localdb.Login(context.Background(), arvados.LoginOptions{
+ Code: s.validCode,
+ State: state,
+ })
+
+ authinfo := s.getCallbackAuthInfo(c)
+ c.Check(authinfo.Email, check.Equals, "joe.smith@work.example.com") // first verified email in People response
+ c.Check(authinfo.AlternateEmails, check.DeepEquals, []string{"joe.smith@home.example.com"})
+ c.Check(authinfo.Username, check.Equals, "")
+}
+
+func (s *LoginSuite) getCallbackAuthInfo(c *check.C) (authinfo rpc.UserSessionAuthInfo) {
+ for _, dump := range s.railsSpy.RequestDumps {
+ c.Logf("spied request: %q", dump)
+ split := bytes.Split(dump, []byte("\r\n\r\n"))
+ c.Assert(split, check.HasLen, 2)
+ hdr, body := string(split[0]), string(split[1])
+ if strings.Contains(hdr, "POST /auth/controller/callback") {
+ vs, err := url.ParseQuery(body)
+ c.Check(json.Unmarshal([]byte(vs.Get("auth_info")), &authinfo), check.IsNil)
+ c.Check(err, check.IsNil)
+ sort.Strings(authinfo.AlternateEmails)
+ return
+ }
+ }
+ c.Error("callback not found")
+ return
+}
+
+func (s *LoginSuite) startLogin(c *check.C) (state string) {
+ // Initiate login, but instead of following the redirect to
+ // the provider, just grab state from the redirect URL.
+ resp, err := s.localdb.Login(context.Background(), arvados.LoginOptions{ReturnTo: "https://app.example.com/foo?bar"})
+ c.Check(err, check.IsNil)
+ target, err := url.Parse(resp.RedirectLocation)
+ c.Check(err, check.IsNil)
+ state = target.Query().Get("state")
+ c.Check(state, check.Not(check.Equals), "")
+ return
+}
+
func (s *LoginSuite) fakeToken(c *check.C, payload []byte) string {
signer, err := jose.NewSigner(jose.SigningKey{Algorithm: jose.RS256, Key: s.issuerKey}, nil)
if err != nil {
"net/http"
"net/url"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
)
type proxy struct {
import (
"fmt"
- "net/http"
"net/url"
"strings"
- "git.curoverse.com/arvados.git/lib/controller/rpc"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// For now, FindRailsAPI always uses the rails API running on this
if err != nil {
panic(err)
}
- conn := rpc.NewConn(cluster.ClusterID, url, insecure, rpc.PassthroughTokenProvider)
- // If Rails is running with force_ssl=true, this
- // "X-Forwarded-Proto: https" header prevents it from
- // redirecting our internal request to an invalid https URL.
- conn.SendHeader = http.Header{"X-Forwarded-Proto": []string{"https"}}
- return conn
+ return rpc.NewConn(cluster.ClusterID, url, insecure, rpc.PassthroughTokenProvider)
}
"strconv"
"strings"
- "github.com/julienschmidt/httprouter"
+ "github.com/gorilla/mux"
)
+func guessAndParse(k, v string) (interface{}, error) {
+ // All of these values arrive as strings, so we need
+ // some type-guessing to accept non-string inputs:
+ //
+ // Values for parameters that take ints (limit=1) or bools
+ // (include_trash=1) are parsed accordingly.
+ //
+ // "null" and "" are nil.
+ //
+ // Values that look like JSON objects, arrays, or strings are
+ // parsed as JSON.
+ //
+ // The rest are left as strings.
+ switch {
+ case intParams[k]:
+ return strconv.ParseInt(v, 10, 64)
+ case boolParams[k]:
+ return stringToBool(v), nil
+ case v == "null" || v == "":
+ return nil, nil
+ case strings.HasPrefix(v, "["):
+ var j []interface{}
+ err := json.Unmarshal([]byte(v), &j)
+ return j, err
+ case strings.HasPrefix(v, "{"):
+ var j map[string]interface{}
+ err := json.Unmarshal([]byte(v), &j)
+ return j, err
+ case strings.HasPrefix(v, "\""):
+ var j string
+ err := json.Unmarshal([]byte(v), &j)
+ return j, err
+ default:
+ return v, nil
+ }
+ // TODO: Need to accept "?foo[]=bar&foo[]=baz" as
+ // foo=["bar","baz"]?
+}
+
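The dispatch order in `guessAndParse` matters: typed parameters are checked before the null/JSON/string heuristics. A standalone sketch of those rules (simplified; the `intParams`/`boolParams` tables here are stand-ins for the router's own):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// Stand-ins for the router's parameter tables.
var intParams = map[string]bool{"limit": true}
var boolParams = map[string]bool{"include_trash": true}

// guess mirrors the dispatch order above: typed params first,
// then null/empty, then JSON-looking values, then plain strings.
func guess(k, v string) (interface{}, error) {
	switch {
	case intParams[k]:
		return strconv.ParseInt(v, 10, 64)
	case boolParams[k]:
		return v == "1" || strings.EqualFold(v, "true"), nil
	case v == "null" || v == "":
		return nil, nil
	case strings.HasPrefix(v, "[") || strings.HasPrefix(v, "{") || strings.HasPrefix(v, "\""):
		var j interface{}
		err := json.Unmarshal([]byte(v), &j)
		return j, err
	default:
		return v, nil
	}
}

func main() {
	for _, kv := range [][2]string{
		{"limit", "1"},
		{"include_trash", "1"},
		{"filters", `[["uuid","=","x"]]`},
		{"name", "plain text"},
	} {
		v, err := guess(kv[0], kv[1])
		fmt.Printf("%s => %#v (err=%v)\n", kv[0], v, err)
	}
}
```

Note that because typed params win, `limit=1` is an int64 even though `"1"` would otherwise fall through as a plain string.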
// Parse req as an Arvados V1 API request and return the request
// parameters.
//
// Content-Type is application/x-www-form-urlencoded -- the
// request body.
for k, values := range req.Form {
- // All of these form values arrive as strings, so we
- // need some type-guessing to accept non-string
- // inputs:
- //
- // Values for parameters that take ints (limit=1) or
- // bools (include_trash=1) are parsed accordingly.
- //
- // "null" and "" are nil.
- //
- // Values that look like JSON objects, arrays, or
- // strings are parsed as JSON.
- //
- // The rest are left as strings.
for _, v := range values {
- switch {
- case intParams[k]:
- params[k], err = strconv.ParseInt(v, 10, 64)
- if err != nil {
- return nil, err
- }
- case boolParams[k]:
- params[k] = stringToBool(v)
- case v == "null" || v == "":
- params[k] = nil
- case strings.HasPrefix(v, "["):
- var j []interface{}
- err := json.Unmarshal([]byte(v), &j)
- if err != nil {
- return nil, err
- }
- params[k] = j
- case strings.HasPrefix(v, "{"):
- var j map[string]interface{}
- err := json.Unmarshal([]byte(v), &j)
- if err != nil {
- return nil, err
- }
- params[k] = j
- case strings.HasPrefix(v, "\""):
- var j string
- err := json.Unmarshal([]byte(v), &j)
- if err != nil {
- return nil, err
- }
- params[k] = j
- default:
- params[k] = v
+ params[k], err = guessAndParse(k, v)
+ if err != nil {
+ return nil, err
}
- // TODO: Need to accept "?foo[]=bar&foo[]=baz"
- // as foo=["bar","baz"]?
}
}
return nil, httpError(http.StatusBadRequest, err)
}
for k, v := range jsonParams {
- params[k] = v
+ switch v := v.(type) {
+ case string:
+ // The Ruby "arv" cli tool sends a
+ // JSON-encoded params map with
+ // JSON-encoded values.
+ dec, err := guessAndParse(k, v)
+ if err != nil {
+ return nil, err
+ }
+ jsonParams[k] = dec
+ params[k] = dec
+ default:
+ params[k] = v
+ }
}
if attrsKey != "" && params[attrsKey] == nil {
// Copy top-level parameters from JSON request
}
}
- routeParams, _ := req.Context().Value(httprouter.ParamsKey).(httprouter.Params)
- for _, p := range routeParams {
- params[p.Key] = p.Value
+ for k, v := range mux.Vars(req) {
+ params[k] = v
}
if v, ok := params[attrsKey]; ok && attrsKey != "" {
}
var boolParams = map[string]bool{
- "distinct": true,
- "ensure_unique_name": true,
- "include_trash": true,
- "include_old_versions": true,
+ "distinct": true,
+ "ensure_unique_name": true,
+ "include_trash": true,
+ "include_old_versions": true,
+ "redirect_to_new_user": true,
+ "send_notification_email": true,
}
func stringToBool(s string) bool {
"net/http/httptest"
"net/url"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
check "gopkg.in/check.v1"
)
} else if tr.json {
if tr.jsonAttrsTop {
for k, v := range tr.attrs {
- param[k] = v
+ if tr.jsonStringParam {
+ j, err := json.Marshal(v)
+ if err != nil {
+ panic(err)
+ }
+ param[k] = string(j)
+ } else {
+ param[k] = v
+ }
}
} else if tr.attrs != nil {
- param[tr.attrsKey] = tr.attrs
+ if tr.jsonStringParam {
+ j, err := json.Marshal(tr.attrs)
+ if err != nil {
+ panic(err)
+ }
+ param[tr.attrsKey] = string(j)
+ } else {
+ param[tr.attrsKey] = tr.attrs
+ }
}
tr.body = bytes.NewBuffer(nil)
err := json.NewEncoder(tr.body).Encode(param)
for _, tr := range []testReq{
{attrsKey: "model_name", json: true, attrs: attrs},
{attrsKey: "model_name", json: true, attrs: attrs, jsonAttrsTop: true},
+ {attrsKey: "model_name", json: true, attrs: attrs, jsonAttrsTop: true, jsonStringParam: true},
+ {attrsKey: "model_name", json: true, attrs: attrs, jsonAttrsTop: false, jsonStringParam: true},
} {
c.Logf("tr: %#v", tr)
req := tr.Request()
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
)
const rfc3339NanoFixed = "2006-01-02T15:04:05.000000000Z07:00"
selected[attr] = v
}
}
- // Preserve "kind" even if not requested
- if v, ok := orig["kind"]; ok {
- selected["kind"] = v
+ // Some keys are always preserved, even if not requested
+ for _, k := range []string{"etag", "kind", "writable_by"} {
+ if v, ok := orig[k]; ok {
+ selected[k] = v
+ }
}
return selected
}
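The hunk above widens the set of always-preserved keys from just `kind` to `etag`/`kind`/`writable_by`. A minimal standalone sketch of the select behavior (hypothetical helper mirroring the hunk):

```go
package main

import "fmt"

// selectAttrs keeps only the requested keys, but always preserves
// a few bookkeeping keys even when the caller did not ask for them.
func selectAttrs(orig map[string]interface{}, want []string) map[string]interface{} {
	selected := map[string]interface{}{}
	for _, k := range want {
		if v, ok := orig[k]; ok {
			selected[k] = v
		}
	}
	for _, k := range []string{"etag", "kind", "writable_by"} {
		if v, ok := orig[k]; ok {
			selected[k] = v
		}
	}
	return selected
}

func main() {
	orig := map[string]interface{}{
		"uuid": "zzzzz-4zz18-xxxxxxxxxxxxxxx",
		"name": "example",
		"kind": "arvados#collection",
	}
	// "name" is dropped; "kind" survives without being requested.
	fmt.Println(selectAttrs(orig, []string{"uuid"}))
}
```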
"net/http"
"strings"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
- "github.com/julienschmidt/httprouter"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
+ "github.com/gorilla/mux"
"github.com/sirupsen/logrus"
)
type router struct {
- mux *httprouter.Router
+ mux *mux.Router
fed arvados.API
}
func New(fed arvados.API) *router {
rtr := &router{
- mux: httprouter.New(),
+ mux: mux.NewRouter(),
fed: fed,
}
rtr.addRoutes()
return rtr.fed.Login(ctx, *opts.(*arvados.LoginOptions))
},
},
+ {
+ arvados.EndpointLogout,
+ func() interface{} { return &arvados.LogoutOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.Logout(ctx, *opts.(*arvados.LogoutOptions))
+ },
+ },
{
arvados.EndpointCollectionCreate,
func() interface{} { return &arvados.CreateOptions{} },
return rtr.fed.SpecimenDelete(ctx, *opts.(*arvados.DeleteOptions))
},
},
+ {
+ arvados.EndpointUserCreate,
+ func() interface{} { return &arvados.CreateOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserCreate(ctx, *opts.(*arvados.CreateOptions))
+ },
+ },
+ {
+ arvados.EndpointUserMerge,
+ func() interface{} { return &arvados.UserMergeOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserMerge(ctx, *opts.(*arvados.UserMergeOptions))
+ },
+ },
+ {
+ arvados.EndpointUserActivate,
+ func() interface{} { return &arvados.UserActivateOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserActivate(ctx, *opts.(*arvados.UserActivateOptions))
+ },
+ },
+ {
+ arvados.EndpointUserSetup,
+ func() interface{} { return &arvados.UserSetupOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserSetup(ctx, *opts.(*arvados.UserSetupOptions))
+ },
+ },
+ {
+ arvados.EndpointUserUnsetup,
+ func() interface{} { return &arvados.GetOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserUnsetup(ctx, *opts.(*arvados.GetOptions))
+ },
+ },
+ {
+ arvados.EndpointUserGetCurrent,
+ func() interface{} { return &arvados.GetOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserGetCurrent(ctx, *opts.(*arvados.GetOptions))
+ },
+ },
+ {
+ arvados.EndpointUserGetSystem,
+ func() interface{} { return &arvados.GetOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserGetSystem(ctx, *opts.(*arvados.GetOptions))
+ },
+ },
+ {
+ arvados.EndpointUserGet,
+ func() interface{} { return &arvados.GetOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserGet(ctx, *opts.(*arvados.GetOptions))
+ },
+ },
+ {
+ arvados.EndpointUserUpdateUUID,
+ func() interface{} { return &arvados.UpdateUUIDOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserUpdateUUID(ctx, *opts.(*arvados.UpdateUUIDOptions))
+ },
+ },
+ {
+ arvados.EndpointUserUpdate,
+ func() interface{} { return &arvados.UpdateOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserUpdate(ctx, *opts.(*arvados.UpdateOptions))
+ },
+ },
+ {
+ arvados.EndpointUserList,
+ func() interface{} { return &arvados.ListOptions{Limit: -1} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserList(ctx, *opts.(*arvados.ListOptions))
+ },
+ },
+ {
+ arvados.EndpointUserBatchUpdate,
+ func() interface{} { return &arvados.UserBatchUpdateOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserBatchUpdate(ctx, *opts.(*arvados.UserBatchUpdateOptions))
+ },
+ },
+ {
+ arvados.EndpointUserDelete,
+ func() interface{} { return &arvados.DeleteOptions{} },
+ func(ctx context.Context, opts interface{}) (interface{}, error) {
+ return rtr.fed.UserDelete(ctx, *opts.(*arvados.DeleteOptions))
+ },
+ },
} {
rtr.addRoute(route.endpoint, route.defaultOpts, route.exec)
- if route.endpoint.Method == "PATCH" {
- // Accept PUT as a synonym for PATCH.
- endpointPUT := route.endpoint
- endpointPUT.Method = "PUT"
- rtr.addRoute(endpointPUT, route.defaultOpts, route.exec)
- }
}
- rtr.mux.NotFound = http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ rtr.mux.NotFoundHandler = http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
httpserver.Errors(w, []string{"API endpoint not found"}, http.StatusNotFound)
})
- rtr.mux.MethodNotAllowed = http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ rtr.mux.MethodNotAllowedHandler = http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
httpserver.Errors(w, []string{"API endpoint not found"}, http.StatusMethodNotAllowed)
})
}
+var altMethod = map[string]string{
+ "PATCH": "PUT", // Accept PUT as a synonym for PATCH
+ "GET": "HEAD", // Accept HEAD at any GET route
+}
+
func (rtr *router) addRoute(endpoint arvados.APIEndpoint, defaultOpts func() interface{}, exec routableFunc) {
- rtr.mux.HandlerFunc(endpoint.Method, "/"+endpoint.Path, func(w http.ResponseWriter, req *http.Request) {
+ methods := []string{endpoint.Method}
+ if alt, ok := altMethod[endpoint.Method]; ok {
+ methods = append(methods, alt)
+ }
+ rtr.mux.Methods(methods...).Path("/" + endpoint.Path).HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
logger := ctxlog.FromContext(req.Context())
params, err := rtr.loadRequestParams(req, endpoint.AttrsKey)
if err != nil {
rtr.sendError(w, err)
return
}
creds := auth.CredentialsFromRequest(req)
+ err = creds.LoadTokensFromHTTPRequestBody(req)
+ if err != nil {
+ rtr.sendError(w, fmt.Errorf("error loading tokens from request body: %s", err))
+ return
+ }
if rt, _ := params["reader_tokens"].([]interface{}); len(rt) > 0 {
for _, t := range rt {
if t, ok := t.(string); ok {
creds.Tokens = append(creds.Tokens, t)
}
}
}
case "login", "logout", "auth":
default:
w.Header().Set("Access-Control-Allow-Origin", "*")
- w.Header().Set("Access-Control-Allow-Methods", "GET, HEAD, PUT, POST, DELETE")
+ w.Header().Set("Access-Control-Allow-Methods", "GET, HEAD, PUT, POST, PATCH, DELETE")
w.Header().Set("Access-Control-Allow-Headers", "Authorization, Content-Type")
w.Header().Set("Access-Control-Max-Age", "86486400")
}
r2 := *r
r = &r2
r.Method = m
+ } else if m = r.Header.Get("X-Http-Method-Override"); m != "" {
+ r2 := *r
+ r = &r2
+ r.Method = m
}
rtr.mux.ServeHTTP(w, r)
}
"testing"
"time"
- "git.curoverse.com/arvados.git/lib/controller/rpc"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "github.com/julienschmidt/httprouter"
+ "git.arvados.org/arvados.git/lib/controller/rpc"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "github.com/gorilla/mux"
check "gopkg.in/check.v1"
)
func (s *RouterSuite) SetUpTest(c *check.C) {
s.stub = arvadostest.APIStub{}
s.rtr = &router{
- mux: httprouter.New(),
+ mux: mux.NewRouter(),
fed: &s.stub,
}
s.rtr.addRoutes()
shouldCall: "CollectionList",
withOptions: arvados.ListOptions{Limit: 123, Offset: 456, IncludeTrash: true, IncludeOldVersions: true},
},
+ {
+ method: "POST",
+ path: "/arvados/v1/collections?limit=123",
+ body: `{"offset":456,"include_trash":true,"include_old_versions":true}`,
+ header: http.Header{"X-Http-Method-Override": {"GET"}, "Content-Type": {"application/json"}},
+ shouldCall: "CollectionList",
+ withOptions: arvados.ListOptions{Limit: 123, Offset: 456, IncludeTrash: true, IncludeOldVersions: true},
+ },
{
method: "POST",
path: "/arvados/v1/collections?limit=123",
c.Check(rr.Code, check.Equals, http.StatusOK)
c.Check(jresp["items_available"], check.FitsTypeOf, float64(0))
c.Check(jresp["items_available"].(float64) > 2, check.Equals, true)
+ c.Check(jresp["items"], check.NotNil)
+ c.Check(jresp["items"], check.HasLen, 0)
+
+ _, rr, jresp = doRequest(c, s.rtr, token, "GET", `/arvados/v1/containers?filters=[["uuid","in",[]]]`, nil, nil)
+ c.Check(rr.Code, check.Equals, http.StatusOK)
+ c.Check(jresp["items_available"], check.Equals, float64(0))
+ c.Check(jresp["items"], check.NotNil)
c.Check(jresp["items"], check.HasLen, 0)
_, rr, jresp = doRequest(c, s.rtr, token, "GET", `/arvados/v1/containers?limit=2&select=["uuid","command"]`, nil, nil)
c.Check(jresp["uuid"], check.IsNil)
}
+func (s *RouterIntegrationSuite) TestWritableBy(c *check.C) {
+ _, rr, jresp := doRequest(c, s.rtr, arvadostest.ActiveTokenV2, "GET", `/arvados/v1/users/`+arvadostest.ActiveUserUUID, nil, nil)
+ c.Check(rr.Code, check.Equals, http.StatusOK)
+ c.Check(jresp["writable_by"], check.DeepEquals, []interface{}{"zzzzz-tpzed-000000000000000", "zzzzz-tpzed-xurymjxw79nv3jz", "zzzzz-j7d0g-48foin4vonvc2at"})
+}
+
func (s *RouterIntegrationSuite) TestFullTimestampsInResponse(c *check.C) {
uuid := arvadostest.CollectionReplicationDesired2Confirmed2UUID
token := arvadostest.ActiveTokenV2
c.Check(rr.Code, check.Equals, http.StatusOK)
c.Check(resp["kind"], check.Equals, "arvados#container")
+ c.Check(resp["etag"], check.FitsTypeOf, "")
+ c.Check(resp["etag"], check.Not(check.Equals), "")
c.Check(resp["uuid"], check.HasLen, 27)
c.Check(resp["command"], check.HasLen, 2)
c.Check(resp["mounts"], check.IsNil)
}
}
+func (s *RouterIntegrationSuite) TestHEAD(c *check.C) {
+ _, rr, _ := doRequest(c, s.rtr, arvadostest.ActiveTokenV2, "HEAD", "/arvados/v1/containers/"+arvadostest.QueuedContainerUUID, nil, nil)
+ c.Check(rr.Code, check.Equals, http.StatusOK)
+}
+
func (s *RouterIntegrationSuite) TestRouteNotFound(c *check.C) {
token := arvadostest.ActiveTokenV2
req := (&testReq{
for _, hdr := range []string{"Authorization", "Content-Type"} {
c.Check(rr.Result().Header.Get("Access-Control-Allow-Headers"), check.Matches, ".*"+hdr+".*")
}
- for _, method := range []string{"GET", "HEAD", "PUT", "POST", "DELETE"} {
+ for _, method := range []string{"GET", "HEAD", "PUT", "POST", "PATCH", "DELETE"} {
c.Check(rr.Result().Header.Get("Access-Control-Allow-Methods"), check.Matches, ".*"+method+".*")
}
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/auth"
)
type TokenProvider func(context.Context) ([]string, error)
params["reader_tokens"] = tokens[1:]
}
path := ep.Path
- if strings.Contains(ep.Path, "/:uuid") {
+ if strings.Contains(ep.Path, "/{uuid}") {
uuid, _ := params["uuid"].(string)
- path = strings.Replace(path, "/:uuid", "/"+uuid, 1)
+ path = strings.Replace(path, "/{uuid}", "/"+uuid, 1)
delete(params, "uuid")
}
return aClient.RequestAndDecodeContext(ctx, dst, ep.Method, path, body, params)
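The `:uuid` → `{uuid}` change above tracks the switch from httprouter to gorilla/mux path-template syntax. A hypothetical extraction of that substitution step (the real code does this inline in requestAndDecode) looks like:

```go
package main

import (
	"fmt"
	"strings"
)

// expandPath substitutes the "uuid" parameter into a gorilla/mux style
// path template and removes it from the remaining request parameters,
// as the rpc client above does before issuing the request.
func expandPath(path string, params map[string]interface{}) string {
	if strings.Contains(path, "/{uuid}") {
		uuid, _ := params["uuid"].(string)
		path = strings.Replace(path, "/{uuid}", "/"+uuid, 1)
		delete(params, "uuid")
	}
	return path
}

func main() {
	params := map[string]interface{}{"uuid": "zzzzz-4zz18-aaaaaaaaaaaaaaa"}
	fmt.Println(expandPath("arvados/v1/collections/{uuid}", params))
	fmt.Println(len(params)) // uuid was consumed by the path: 0
}
```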
return resp, err
}
+func (conn *Conn) Logout(ctx context.Context, options arvados.LogoutOptions) (arvados.LogoutResponse, error) {
+ ep := arvados.EndpointLogout
+ var resp arvados.LogoutResponse
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ resp.RedirectLocation = conn.relativeToBaseURL(resp.RedirectLocation)
+ return resp, err
+}
+
// If the given location is a valid URL and its origin is the same as
// conn.baseURL, return it as a relative URL. Otherwise, return it
// unmodified.
return resp, err
}
+func (conn *Conn) UserCreate(ctx context.Context, options arvados.CreateOptions) (arvados.User, error) {
+ ep := arvados.EndpointUserCreate
+ var resp arvados.User
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) UserUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.User, error) {
+ ep := arvados.EndpointUserUpdate
+ var resp arvados.User
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) UserUpdateUUID(ctx context.Context, options arvados.UpdateUUIDOptions) (arvados.User, error) {
+ ep := arvados.EndpointUserUpdateUUID
+ var resp arvados.User
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) UserMerge(ctx context.Context, options arvados.UserMergeOptions) (arvados.User, error) {
+ ep := arvados.EndpointUserMerge
+ var resp arvados.User
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) UserActivate(ctx context.Context, options arvados.UserActivateOptions) (arvados.User, error) {
+ ep := arvados.EndpointUserActivate
+ var resp arvados.User
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) UserSetup(ctx context.Context, options arvados.UserSetupOptions) (map[string]interface{}, error) {
+ ep := arvados.EndpointUserSetup
+ var resp map[string]interface{}
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) UserUnsetup(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ ep := arvados.EndpointUserUnsetup
+ var resp arvados.User
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) UserGet(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ ep := arvados.EndpointUserGet
+ var resp arvados.User
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) UserGetCurrent(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ ep := arvados.EndpointUserGetCurrent
+ var resp arvados.User
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) UserGetSystem(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ ep := arvados.EndpointUserGetSystem
+ var resp arvados.User
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) UserList(ctx context.Context, options arvados.ListOptions) (arvados.UserList, error) {
+ ep := arvados.EndpointUserList
+ var resp arvados.UserList
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+func (conn *Conn) UserDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.User, error) {
+ ep := arvados.EndpointUserDelete
+ var resp arvados.User
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
+
func (conn *Conn) APIClientAuthorizationCurrent(ctx context.Context, options arvados.GetOptions) (arvados.APIClientAuthorization, error) {
ep := arvados.EndpointAPIClientAuthorizationCurrent
var resp arvados.APIClientAuthorization
return resp, err
}
+type UserSessionAuthInfo struct {
+ Email string `json:"email"`
+ AlternateEmails []string `json:"alternate_emails"`
+ FirstName string `json:"first_name"`
+ LastName string `json:"last_name"`
+ Username string `json:"username"`
+}
+
type UserSessionCreateOptions struct {
- AuthInfo map[string]interface{} `json:"auth_info"`
- ReturnTo string `json:"return_to"`
+ AuthInfo UserSessionAuthInfo `json:"auth_info"`
+ ReturnTo string `json:"return_to"`
}
func (conn *Conn) UserSessionCreate(ctx context.Context, options UserSessionCreateOptions) (arvados.LoginResponse, error) {
err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
return resp, err
}
+
+func (conn *Conn) UserBatchUpdate(ctx context.Context, options arvados.UserBatchUpdateOptions) (arvados.UserList, error) {
+ ep := arvados.APIEndpoint{Method: "PATCH", Path: "arvados/v1/users/batch_update"}
+ var resp arvados.UserList
+ err := conn.requestAndDecode(ctx, &resp, ep, nil, options)
+ return resp, err
+}
"os"
"testing"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
)
ctx := ctxlog.Context(context.Background(), ctxlog.TestLogger(c))
s.ctx = context.WithValue(ctx, contextKeyTestTokens, []string{arvadostest.ActiveToken})
s.conn = NewConn("zzzzz", &url.URL{Scheme: "https", Host: os.Getenv("ARVADOS_TEST_API_HOST")}, true, func(ctx context.Context) ([]string, error) {
- return ctx.Value(contextKeyTestTokens).([]string), nil
+ tokens, _ := ctx.Value(contextKeyTestTokens).([]string)
+ return tokens, nil
})
}
+func (s *RPCSuite) TestLogin(c *check.C) {
+ s.ctx = context.Background()
+ opts := arvados.LoginOptions{
+ ReturnTo: "https://foo.example.com/bar",
+ }
+ resp, err := s.conn.Login(s.ctx, opts)
+ c.Check(err, check.IsNil)
+ c.Check(resp.RedirectLocation, check.Equals, "/auth/joshid?return_to="+url.QueryEscape(","+opts.ReturnTo))
+}
+
+func (s *RPCSuite) TestLogout(c *check.C) {
+ s.ctx = context.Background()
+ opts := arvados.LogoutOptions{
+ ReturnTo: "https://foo.example.com/bar",
+ }
+ resp, err := s.conn.Logout(s.ctx, opts)
+ c.Check(err, check.IsNil)
+ c.Check(resp.RedirectLocation, check.Equals, "http://localhost:3002/users/sign_out?redirect_uri="+url.QueryEscape(opts.ReturnTo))
+}
+
func (s *RPCSuite) TestCollectionCreate(c *check.C) {
coll, err := s.conn.CollectionCreate(s.ctx, arvados.CreateOptions{Attrs: map[string]interface{}{
"owner_uuid": arvadostest.ActiveUserUUID,
"os"
"path/filepath"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
check "gopkg.in/check.v1"
)
log := ctxlog.TestLogger(c)
handler := &Handler{Cluster: &arvados.Cluster{
- ClusterID: "zzzzz",
- PostgreSQL: integrationTestCluster().PostgreSQL,
-
- EnableBetaController14287: enableBetaController14287,
+ ClusterID: "zzzzz",
+ PostgreSQL: integrationTestCluster().PostgreSQL,
+ ForceLegacyAPI14: forceLegacyAPI14,
}}
handler.Cluster.TLS.Insecure = true
arvadostest.SetServiceURL(&handler.Cluster.Services.RailsAPI, "https://"+os.Getenv("ARVADOS_TEST_API_HOST"))
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package crunchrun
import (
"encoding/json"
//
// Stdout and stderr in the child process are sent to the systemd
// journal using the systemd-cat program.
-func Detach(uuid string, args []string, stdout, stderr io.Writer) int {
- return exitcode(stderr, detach(uuid, args, stdout, stderr))
+func Detach(uuid string, prog string, args []string, stdout, stderr io.Writer) int {
+ return exitcode(stderr, detach(uuid, prog, args, stdout, stderr))
}
-func detach(uuid string, args []string, stdout, stderr io.Writer) error {
+func detach(uuid string, prog string, args []string, stdout, stderr io.Writer) error {
lockfile, err := func() (*os.File, error) {
// We must hold the dir-level lock between
// opening/creating the lockfile and acquiring LOCK_EX
defer lockfile.Close()
lockfile.Truncate(0)
- cmd := exec.Command("systemd-cat", append([]string{"--identifier=crunch-run", args[0], "-no-detach"}, args[1:]...)...)
+ execargs := append([]string{"-no-detach"}, args...)
+ if strings.HasSuffix(prog, " crunch-run") {
+ // invoked as "/path/to/arvados-server crunch-run"
+ // (see arvados/lib/cmd.Multi)
+ execargs = append([]string{strings.TrimSuffix(prog, " crunch-run"), "crunch-run"}, execargs...)
+ } else {
+ // invoked as "/path/to/crunch-run"
+ execargs = append([]string{prog}, execargs...)
+ }
+ execargs = append([]string{
+ // Here, if the inner systemd-cat can't exec
+ // crunch-run, it writes an error message to stderr,
+ // and the outer systemd-cat writes it to the journal
+ // where the operator has a chance to discover it. (If
+ // we only used one systemd-cat command, it would be
+ // up to us to report the error -- but we are going to
+ // detach and exit, not wait for something to appear
+ // on stderr.) Note these systemd-cat calls don't
+ // result in additional processes -- they just connect
+ // stderr/stdout to sockets and call exec().
+ "systemd-cat", "--identifier=crunch-run",
+ "systemd-cat", "--identifier=crunch-run",
+ }, execargs...)
+
+ cmd := exec.Command(execargs[0], execargs[1:]...)
// Child inherits lockfile.
cmd.ExtraFiles = []*os.File{lockfile}
// Ensure child isn't interrupted even if we receive signals
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package crunchrun
import (
"bytes"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package crunchrun
import (
. "gopkg.in/check.v1"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package crunchrun
import (
"encoding/json"
"sort"
"strings"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
- "git.curoverse.com/arvados.git/sdk/go/manifest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/manifest"
)
type printfer interface {
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package crunchrun
import (
"io"
"io/ioutil"
"os"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
check "gopkg.in/check.v1"
)
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package crunchrun
import (
"bytes"
"syscall"
"time"
- "git.curoverse.com/arvados.git/lib/crunchstat"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
- "git.curoverse.com/arvados.git/sdk/go/manifest"
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/crunchstat"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/manifest"
"golang.org/x/net/context"
dockertypes "github.com/docker/docker/api/types"
dockerclient "github.com/docker/docker/client"
)
-var version = "dev"
+type command struct{}
+
+var Command = command{}
// IArvadosClient is the minimal Arvados API methods used by crunch-run.
type IArvadosClient interface {
// Run the full container lifecycle.
func (runner *ContainerRunner) Run() (err error) {
- runner.CrunchLog.Printf("crunch-run %s started", version)
+ runner.CrunchLog.Printf("crunch-run %s started", cmd.Version.String())
runner.CrunchLog.Printf("Executing container '%s'", runner.Container.UUID)
hostname, hosterr := os.Hostname()
return cr, nil
}
-func main() {
- statInterval := flag.Duration("crunchstat-interval", 10*time.Second, "sampling period for periodic resource usage reporting")
- cgroupRoot := flag.String("cgroup-root", "/sys/fs/cgroup", "path to sysfs cgroup tree")
- cgroupParent := flag.String("cgroup-parent", "docker", "name of container's parent cgroup (ignored if -cgroup-parent-subsystem is used)")
- cgroupParentSubsystem := flag.String("cgroup-parent-subsystem", "", "use current cgroup for given subsystem as parent cgroup for container")
- caCertsPath := flag.String("ca-certs", "", "Path to TLS root certificates")
- detach := flag.Bool("detach", false, "Detach from parent process and run in the background")
- stdinEnv := flag.Bool("stdin-env", false, "Load environment variables from JSON message on stdin")
- sleep := flag.Duration("sleep", 0, "Delay before starting (testing use only)")
- kill := flag.Int("kill", -1, "Send signal to an existing crunch-run process for given UUID")
- list := flag.Bool("list", false, "List UUIDs of existing crunch-run processes")
- enableNetwork := flag.String("container-enable-networking", "default",
+func (command) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
+ flags := flag.NewFlagSet(prog, flag.ContinueOnError)
+ statInterval := flags.Duration("crunchstat-interval", 10*time.Second, "sampling period for periodic resource usage reporting")
+ cgroupRoot := flags.String("cgroup-root", "/sys/fs/cgroup", "path to sysfs cgroup tree")
+ cgroupParent := flags.String("cgroup-parent", "docker", "name of container's parent cgroup (ignored if -cgroup-parent-subsystem is used)")
+ cgroupParentSubsystem := flags.String("cgroup-parent-subsystem", "", "use current cgroup for given subsystem as parent cgroup for container")
+ caCertsPath := flags.String("ca-certs", "", "Path to TLS root certificates")
+ detach := flags.Bool("detach", false, "Detach from parent process and run in the background")
+ stdinEnv := flags.Bool("stdin-env", false, "Load environment variables from JSON message on stdin")
+ sleep := flags.Duration("sleep", 0, "Delay before starting (testing use only)")
+ kill := flags.Int("kill", -1, "Send signal to an existing crunch-run process for given UUID")
+ list := flags.Bool("list", false, "List UUIDs of existing crunch-run processes")
+ enableNetwork := flags.String("container-enable-networking", "default",
`Specify if networking should be enabled for container. One of 'default', 'always':
default: only enable networking if container requests it.
always: containers always have networking enabled
`)
- networkMode := flag.String("container-network-mode", "default",
+ networkMode := flags.String("container-network-mode", "default",
`Set networking mode for container. Corresponds to Docker network mode (--net).
`)
- memprofile := flag.String("memprofile", "", "write memory profile to `file` after running container")
- getVersion := flag.Bool("version", false, "Print version information and exit.")
- flag.Duration("check-containerd", 0, "Ignored. Exists for compatibility with older versions.")
+ memprofile := flags.String("memprofile", "", "write memory profile to `file` after running container")
+ flags.Duration("check-containerd", 0, "Ignored. Exists for compatibility with older versions.")
ignoreDetachFlag := false
- if len(os.Args) > 1 && os.Args[1] == "-no-detach" {
+ if len(args) > 0 && args[0] == "-no-detach" {
// This process was invoked by a parent process, which
// has passed along its own arguments, including
// -detach, after the leading -no-detach flag. Strip
// the leading -no-detach flag (it's not recognized by
- // flag.Parse()) and ignore the -detach flag that
+ // flags.Parse()) and ignore the -detach flag that
// comes later.
- os.Args = append([]string{os.Args[0]}, os.Args[2:]...)
+ args = args[1:]
ignoreDetachFlag = true
}
- flag.Parse()
+ if err := flags.Parse(args); err == flag.ErrHelp {
+ return 0
+ } else if err != nil {
+ log.Print(err)
+ return 1
+ }
if *stdinEnv && !ignoreDetachFlag {
// Load env vars on stdin if asked (but not in a
// detached child process, in which case stdin is
// /dev/null).
- loadEnv(os.Stdin)
+ err := loadEnv(os.Stdin)
+ if err != nil {
+ log.Print(err)
+ return 1
+ }
}
+ containerId := flags.Arg(0)
+
switch {
case *detach && !ignoreDetachFlag:
- os.Exit(Detach(flag.Arg(0), os.Args, os.Stdout, os.Stderr))
+ return Detach(containerId, prog, args, os.Stdout, os.Stderr)
case *kill >= 0:
- os.Exit(KillProcess(flag.Arg(0), syscall.Signal(*kill), os.Stdout, os.Stderr))
+ return KillProcess(containerId, syscall.Signal(*kill), os.Stdout, os.Stderr)
case *list:
- os.Exit(ListProcesses(os.Stdout, os.Stderr))
+ return ListProcesses(os.Stdout, os.Stderr)
}
- // Print version information if requested
- if *getVersion {
- fmt.Printf("crunch-run %s\n", version)
- return
+ if containerId == "" {
+ log.Printf("usage: %s [options] UUID", prog)
+ return 1
}
- log.Printf("crunch-run %s started", version)
+ log.Printf("crunch-run %s started", cmd.Version.String())
time.Sleep(*sleep)
- containerId := flag.Arg(0)
-
if *caCertsPath != "" {
arvadosclient.CertFiles = []string{*caCertsPath}
}
api, err := arvadosclient.MakeArvadosClient()
if err != nil {
- log.Fatalf("%s: %v", containerId, err)
+ log.Printf("%s: %v", containerId, err)
+ return 1
}
api.Retries = 8
kc, kcerr := keepclient.MakeKeepClient(api)
if kcerr != nil {
- log.Fatalf("%s: %v", containerId, kcerr)
+ log.Printf("%s: %v", containerId, kcerr)
+ return 1
}
kc.BlockCache = &keepclient.BlockCache{MaxBlocks: 2}
kc.Retries = 4
cr, err := NewContainerRunner(arvados.NewClientFromEnv(), api, kc, docker, containerId)
if err != nil {
- log.Fatal(err)
+ log.Print(err)
+ return 1
}
if dockererr != nil {
cr.CrunchLog.Printf("%s: %v", containerId, dockererr)
cr.checkBrokenNode(dockererr)
cr.CrunchLog.Close()
- os.Exit(1)
+ return 1
}
parentTemp, tmperr := cr.MkTempDir("", "crunch-run."+containerId+".")
if tmperr != nil {
- log.Fatalf("%s: %v", containerId, tmperr)
+ log.Printf("%s: %v", containerId, tmperr)
+ return 1
}
cr.parentTemp = parentTemp
}
if runerr != nil {
- log.Fatalf("%s: %v", containerId, runerr)
+ log.Printf("%s: %v", containerId, runerr)
+ return 1
}
+ return 0
}
-func loadEnv(rdr io.Reader) {
+func loadEnv(rdr io.Reader) error {
buf, err := ioutil.ReadAll(rdr)
if err != nil {
- log.Fatalf("read stdin: %s", err)
+ return fmt.Errorf("read stdin: %s", err)
}
var env map[string]string
err = json.Unmarshal(buf, &env)
if err != nil {
- log.Fatalf("decode stdin: %s", err)
+ return fmt.Errorf("decode stdin: %s", err)
}
for k, v := range env {
err = os.Setenv(k, v)
if err != nil {
- log.Fatalf("setenv(%q): %s", k, err)
+ return fmt.Errorf("setenv(%q): %s", k, err)
}
}
+ return nil
}
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package crunchrun
import (
"bufio"
"testing"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/manifest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/manifest"
"golang.org/x/net/context"
dockertypes "github.com/docker/docker/api/types"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package crunchrun
import (
"fmt"
"path/filepath"
"regexp"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"gopkg.in/src-d/go-billy.v4/osfs"
git "gopkg.in/src-d/go-git.v4"
git_config "gopkg.in/src-d/go-git.v4/config"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package crunchrun
import (
"io/ioutil"
"os"
"path/filepath"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
check "gopkg.in/check.v1"
git_client "gopkg.in/src-d/go-git.v4/plumbing/transport/client"
git_http "gopkg.in/src-d/go-git.v4/plumbing/transport/http"
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package crunchrun
import (
"bufio"
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
)
// Timestamper is the signature for a function that takes a timestamp and
//
// SPDX-License-Identifier: AGPL-3.0
-package main
+package crunchrun
import (
"fmt"
"testing"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
. "gopkg.in/check.v1"
)
"context"
"fmt"
- "git.curoverse.com/arvados.git/lib/cmd"
- "git.curoverse.com/arvados.git/lib/service"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/service"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/prometheus/client_golang/prometheus"
)
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
)
// A QueueEnt is an entry in the queue, consisting of a container
// record and the instance type that should be used to run it.
type QueueEnt struct {
- // The container to run. Only the UUID, State, Priority, and
- // RuntimeConstraints fields are populated.
+ // The container to run. Only the UUID, State, Priority,
+ // RuntimeConstraints, Mounts, and ContainerImage fields are
+ // populated.
Container arvados.Container `json:"container"`
InstanceType arvados.InstanceType `json:"instance_type"`
}
cq.mtx.Lock()
defer cq.mtx.Unlock()
ctr := cq.current[uuid].Container
- if ctr.State == arvados.ContainerStateComplete || ctr.State == arvados.ContainerStateCancelled {
+ if ctr.State == arvados.ContainerStateComplete || ctr.State == arvados.ContainerStateCancelled || (ctr.State == arvados.ContainerStateQueued && ctr.Priority == 0) {
cq.delEnt(uuid, ctr.State)
}
}
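The widened cleanup condition above can be expressed as a predicate. The types here are local stand-ins for the arvados package, so this is only an illustration of the condition, not the real queue code:

```go
package main

import "fmt"

// Local stand-ins (assumed) for arvados.ContainerState and
// arvados.Container.
type ContainerState string

const (
	StateQueued    ContainerState = "Queued"
	StateComplete  ContainerState = "Complete"
	StateCancelled ContainerState = "Cancelled"
)

type Container struct {
	State    ContainerState
	Priority int64
}

// forgettable mirrors the updated delEnt condition: drop the queue entry
// when the container has finished, or when it is back in Queued state
// with priority zero (i.e., cancelled before it was ever locked).
func forgettable(ctr Container) bool {
	return ctr.State == StateComplete ||
		ctr.State == StateCancelled ||
		(ctr.State == StateQueued && ctr.Priority == 0)
}

func main() {
	fmt.Println(forgettable(Container{State: StateQueued, Priority: 0}))  // true
	fmt.Println(forgettable(Container{State: StateQueued, Priority: 10})) // false
}
```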
go func() {
if ctr.State == arvados.ContainerStateQueued {
// Can't set runtime error without
- // locking first. If Lock() is
- // successful, it will call addEnt()
- // again itself, and we'll fall
- // through to the
- // setRuntimeError/Cancel code below.
+ // locking first.
err := cq.Lock(ctr.UUID)
if err != nil {
logger.WithError(err).Warn("lock failed")
+ return
// ...and try again on the
// next Update, if the problem
// still exists.
}
- return
}
var err error
defer func() {
if cq.dontupdate != nil {
cq.dontupdate[uuid] = struct{}{}
}
- if ent, ok := cq.current[uuid]; !ok {
- cq.addEnt(uuid, resp)
- } else {
- ent.Container.State, ent.Container.Priority, ent.Container.LockedByUUID = resp.State, resp.Priority, resp.LockedByUUID
- cq.current[uuid] = ent
+ ent, ok := cq.current[uuid]
+ if !ok {
+ // Container is not in queue (e.g., it was not added
+ // because there is no suitable instance type, and
+ // we're just locking/updating it in order to set an
+ // error message). No need to add it, and we don't
+ // necessarily have enough information to add it here
+ // anyway because lock/unlock responses don't include
+ // runtime_constraints.
+ return
}
+ ent.Container.State, ent.Container.Priority, ent.Container.LockedByUUID = resp.State, resp.Priority, resp.LockedByUUID
+ cq.current[uuid] = ent
cq.notify()
}
*next[upd.UUID] = upd
}
}
- selectParam := []string{"uuid", "state", "priority", "runtime_constraints"}
+ selectParam := []string{"uuid", "state", "priority", "runtime_constraints", "container_image", "mounts"}
limitParam := 1000
mine, err := cq.fetchAll(arvados.ResourceListParams{
"testing"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
"github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
)
c.Check(ctr.UUID, check.Equals, uuid)
wg.Add(1)
- go func() {
+ go func(uuid string) {
defer wg.Done()
err := cq.Unlock(uuid)
c.Check(err, check.NotNil)
c.Check(ctr.State, check.Equals, arvados.ContainerStateCancelled)
err = cq.Lock(uuid)
c.Check(err, check.NotNil)
- }()
+ }(uuid)
}
wg.Wait()
}
func (suite *IntegrationSuite) TestCancelIfNoInstanceType(c *check.C) {
errorTypeChooser := func(ctr *arvados.Container) (arvados.InstanceType, error) {
+ // Make sure the relevant container fields are
+ // actually populated.
+ c.Check(ctr.ContainerImage, check.Equals, "test")
+ c.Check(ctr.RuntimeConstraints.VCPUs, check.Equals, 4)
+ c.Check(ctr.RuntimeConstraints.RAM, check.Equals, int64(12000000000))
+ c.Check(ctr.Mounts["/tmp"].Capacity, check.Equals, int64(24000000000))
+ c.Check(ctr.Mounts["/var/spool/cwl"].Capacity, check.Equals, int64(24000000000))
return arvados.InstanceType{}, errors.New("no suitable instance type")
}
client := arvados.NewClientFromEnv()
cq := NewQueue(logger(), nil, errorTypeChooser, client)
+ ch := cq.Subscribe()
+ go func() {
+ defer cq.Unsubscribe(ch)
+ for range ch {
+ // Container should never be added to
+				// queue. Note that polling the queue this
+				// way doesn't guarantee that a bug (container
+				// being incorrectly added to the queue) will
+				// cause a test failure.
+ _, ok := cq.Get(arvadostest.QueuedContainerUUID)
+ if !c.Check(ok, check.Equals, false) {
+ // Don't spam the log with more failures
+ break
+ }
+ }
+ }()
+
var ctr arvados.Container
err := client.RequestAndDecode(&ctr, "GET", "arvados/v1/containers/"+arvadostest.QueuedContainerUUID, nil, nil)
c.Check(err, check.IsNil)
c.Check(ctr.State, check.Equals, arvados.ContainerStateQueued)
- cq.Update()
+ go cq.Update()
// Wait for the cancel operation to take effect. Container
// will have state=Cancelled or just disappear from the queue.
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/container"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/scheduler"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/ssh_executor"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/worker"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/container"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/scheduler"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/ssh_executor"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/worker"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
"github.com/julienschmidt/httprouter"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
type pool interface {
scheduler.WorkerPool
+ CheckHealth() error
Instances() []worker.InstanceView
SetIdleBehavior(cloud.InstanceID, worker.IdleBehavior) error
KillInstance(id cloud.InstanceID, reason string) error
// CheckHealth implements service.Handler.
func (disp *dispatcher) CheckHealth() error {
disp.Start()
- return nil
+ return disp.pool.CheckHealth()
}
// Stop dispatching containers and release resources. Typically used
} else {
mux := httprouter.New()
mux.HandlerFunc("GET", "/arvados/v1/dispatch/containers", disp.apiContainers)
- mux.HandlerFunc("POST", "/arvados/v1/dispatch/containers/kill", disp.apiInstanceKill)
+ mux.HandlerFunc("POST", "/arvados/v1/dispatch/containers/kill", disp.apiContainerKill)
mux.HandlerFunc("GET", "/arvados/v1/dispatch/instances", disp.apiInstances)
mux.HandlerFunc("POST", "/arvados/v1/dispatch/instances/hold", disp.apiInstanceHold)
mux.HandlerFunc("POST", "/arvados/v1/dispatch/instances/drain", disp.apiInstanceDrain)
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/test"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/test"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
"golang.org/x/crypto/ssh"
check "gopkg.in/check.v1"
"fmt"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/lib/cloud/azure"
- "git.curoverse.com/arvados.git/lib/cloud/ec2"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/cloud/azure"
+ "git.arvados.org/arvados.git/lib/cloud/ec2"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
"golang.org/x/crypto/ssh"
"sort"
"strconv"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
var ErrInstanceTypesNotConfigured = errors.New("site configuration does not list any instance types")
package dispatchcloud
import (
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
check "gopkg.in/check.v1"
)
import (
"time"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/worker"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/worker"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// fixStaleLocks waits for any already-locked containers (i.e., locked
import (
"time"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/container"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/worker"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/container"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/worker"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// A ContainerQueue is a set of containers that need to be started or
"sort"
"time"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/container"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/container"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
)
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/test"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/worker"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/test"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/worker"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
check "gopkg.in/check.v1"
)
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/sirupsen/logrus"
)
import (
"fmt"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/container"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/container"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
)
sch.logger.WithFields(logrus.Fields{
"ContainerUUID": uuid,
"State": ent.Container.State,
- }).Info("container finished")
+ }).Info("container finished -- dropping from queue")
sch.queue.Forget(uuid)
}
case arvados.ContainerStateQueued:
"ContainerUUID": uuid,
"State": ent.Container.State,
"Priority": ent.Container.Priority,
- }).Info("container on hold")
+ }).Info("container on hold -- dropping from queue")
sch.queue.Forget(uuid)
}
case arvados.ContainerStateLocked:
"context"
"time"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/test"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/test"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
check "gopkg.in/check.v1"
)
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/cloud"
"golang.org/x/crypto/ssh"
)
"testing"
"time"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/test"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/test"
"golang.org/x/crypto/ssh"
check "gopkg.in/check.v1"
)
import (
"fmt"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// ContainerUUID returns a fake container UUID.
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/container"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/container"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// Queue is a test stub for container.Queue. The caller specifies the
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
"golang.org/x/crypto/ssh"
)
package worker
import (
+ "crypto/md5"
"crypto/rand"
"errors"
"fmt"
"io"
+ "io/ioutil"
"sort"
"strings"
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
"golang.org/x/crypto/ssh"
instanceSet: &throttledInstanceSet{InstanceSet: instanceSet},
newExecutor: newExecutor,
bootProbeCommand: cluster.Containers.CloudVMs.BootProbeCommand,
+ runnerSource: cluster.Containers.CloudVMs.DeployRunnerBinary,
imageID: cloud.ImageID(cluster.Containers.CloudVMs.ImageID),
instanceTypes: cluster.InstanceTypes,
maxProbesPerSecond: cluster.Containers.CloudVMs.MaxProbesPerSecond,
instanceSet *throttledInstanceSet
newExecutor func(cloud.Instance) Executor
bootProbeCommand string
+ runnerSource string
imageID cloud.ImageID
instanceTypes map[string]arvados.InstanceType
syncInterval time.Duration
stop chan bool
mtx sync.RWMutex
setupOnce sync.Once
+ runnerData []byte
+ runnerMD5 [md5.Size]byte
+ runnerCmd string
throttleCreate throttle
throttleInstances throttle
instanceType arvados.InstanceType
}
+func (wp *Pool) CheckHealth() error {
+ wp.setupOnce.Do(wp.setup)
+ if err := wp.loadRunnerData(); err != nil {
+ return fmt.Errorf("error loading runner binary: %s", err)
+ }
+ return nil
+}
+
// Subscribe returns a buffered channel that becomes ready after any
// change to the pool's state that could have scheduling implications:
// a worker's state changes, a new worker appears, the cloud
func (wp *Pool) Create(it arvados.InstanceType) bool {
logger := wp.logger.WithField("InstanceType", it.Name)
wp.setupOnce.Do(wp.setup)
+ if wp.loadRunnerData() != nil {
+ // Boot probe is certain to fail.
+ return false
+ }
wp.mtx.Lock()
defer wp.mtx.Unlock()
if time.Now().Before(wp.atQuotaUntil) || wp.throttleCreate.Error() != nil {
// time (Idle) or the earliest create time (Booting)
for _, wkr := range wp.workers {
if wkr.idleBehavior != IdleBehaviorHold && wkr.state == tryState && wkr.instType == it {
- logger.WithField("Instance", wkr.instance).Info("shutting down")
+ logger.WithField("Instance", wkr.instance.ID()).Info("shutting down")
wkr.shutdown()
return true
}
Subsystem: "dispatchcloud",
Name: "instances_total",
Help: "Number of cloud VMs.",
- }, []string{"category"})
+ }, []string{"category", "instance_type"})
reg.MustRegister(wp.mInstances)
wp.mInstancesPrice = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "arvados",
wp.mtx.RLock()
defer wp.mtx.RUnlock()
- instances := map[string]int64{}
+ type entKey struct {
+ cat string
+ instType string
+ }
+ instances := map[entKey]int64{}
price := map[string]float64{}
cpu := map[string]int64{}
mem := map[string]int64{}
default:
cat = "idle"
}
- instances[cat]++
+ instances[entKey{cat, wkr.instType.Name}]++
price[cat] += wkr.instType.Price
cpu[cat] += int64(wkr.instType.VCPUs)
mem[cat] += int64(wkr.instType.RAM)
running += int64(len(wkr.running) + len(wkr.starting))
}
for _, cat := range []string{"inuse", "hold", "booting", "unknown", "idle"} {
- wp.mInstances.WithLabelValues(cat).Set(float64(instances[cat]))
wp.mInstancesPrice.WithLabelValues(cat).Set(price[cat])
wp.mVCPUs.WithLabelValues(cat).Set(float64(cpu[cat]))
wp.mMemory.WithLabelValues(cat).Set(float64(mem[cat]))
+		// Reset gauges for category/instance type combinations that
+		// currently have no instances.
+ for _, it := range wp.instanceTypes {
+ if _, ok := instances[entKey{cat, it.Name}]; !ok {
+ wp.mInstances.WithLabelValues(cat, it.Name).Set(float64(0))
+ }
+ }
+ }
+ for k, v := range instances {
+ wp.mInstances.WithLabelValues(k.cat, k.instType).Set(float64(v))
}
wp.mContainersRunning.Set(float64(running))
}
wp.exited = map[string]time.Time{}
wp.workers = map[cloud.InstanceID]*worker{}
wp.subscribers = map[<-chan struct{}]chan<- struct{}{}
+ wp.loadRunnerData()
+}
+
+// Load the runner program to be deployed on worker nodes into
+// wp.runnerData, if necessary. Errors are logged.
+//
+// If auto-deploy is disabled, len(wp.runnerData) will be 0.
+//
+// Caller must not have lock.
+func (wp *Pool) loadRunnerData() error {
+ wp.mtx.Lock()
+ defer wp.mtx.Unlock()
+ if wp.runnerData != nil {
+ return nil
+ } else if wp.runnerSource == "" {
+ wp.runnerCmd = "crunch-run"
+ wp.runnerData = []byte{}
+ return nil
+ }
+ logger := wp.logger.WithField("source", wp.runnerSource)
+ logger.Debug("loading runner")
+ buf, err := ioutil.ReadFile(wp.runnerSource)
+ if err != nil {
+ logger.WithError(err).Error("failed to load runner program")
+ return err
+ }
+ wp.runnerData = buf
+ wp.runnerMD5 = md5.Sum(buf)
+ wp.runnerCmd = fmt.Sprintf("/var/lib/arvados/crunch-run~%x", wp.runnerMD5)
+ return nil
}
func (wp *Pool) notify() {
itTag := inst.Tags()[wp.tagKeyPrefix+tagKeyInstanceType]
it, ok := wp.instanceTypes[itTag]
if !ok {
- wp.logger.WithField("Instance", inst).Errorf("unknown InstanceType tag %q --- ignoring", itTag)
+ wp.logger.WithField("Instance", inst.ID()).Errorf("unknown InstanceType tag %q --- ignoring", itTag)
continue
}
if wkr, isNew := wp.updateWorker(inst, it); isNew {
notify = true
} else if wkr.state == StateShutdown && time.Since(wkr.destroyed) > wp.timeoutShutdown {
- wp.logger.WithField("Instance", inst).Info("worker still listed after shutdown; retrying")
+ wp.logger.WithField("Instance", inst.ID()).Info("worker still listed after shutdown; retrying")
wkr.shutdown()
}
}
"strings"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/test"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/test"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
check "gopkg.in/check.v1"
)
c.Assert(err, check.IsNil)
newExecutor := func(cloud.Instance) Executor {
- return stubExecutor{
- "crunch-run --list": stubResp{},
- "true": stubResp{},
+ return &stubExecutor{
+ response: map[string]stubResp{
+ "crunch-run --list": stubResp{},
+ "true": stubResp{},
+ },
}
}
type3 := arvados.InstanceType{Name: "a2l", ProviderType: "a2.large", VCPUs: 4, RAM: 4 * GiB, Price: .04}
pool := &Pool{
logger: logger,
- newExecutor: func(cloud.Instance) Executor { return stubExecutor{} },
+ newExecutor: func(cloud.Instance) Executor { return &stubExecutor{} },
instanceSet: &throttledInstanceSet{InstanceSet: instanceSet},
instanceTypes: arvados.InstanceTypeMap{
type1.Name: type1,
uuid string
executor Executor
envJSON json.RawMessage
+ runnerCmd string
remoteUser string
timeoutTERM time.Duration
timeoutSignal time.Duration
uuid: uuid,
executor: wkr.executor,
envJSON: envJSON,
+ runnerCmd: wkr.wp.runnerCmd,
remoteUser: wkr.instance.RemoteUser(),
timeoutTERM: wkr.wp.timeoutTERM,
timeoutSignal: wkr.wp.timeoutSignal,
// assume the remote process _might_ have started, at least until it
// probes the worker and finds otherwise.
func (rr *remoteRunner) Start() {
- cmd := "crunch-run --detach --stdin-env '" + rr.uuid + "'"
+ cmd := rr.runnerCmd + " --detach --stdin-env '" + rr.uuid + "'"
if rr.remoteUser != "root" {
cmd = "sudo " + cmd
}
func (rr *remoteRunner) kill(sig syscall.Signal) {
logger := rr.logger.WithField("Signal", int(sig))
logger.Info("sending signal")
- cmd := fmt.Sprintf("crunch-run --kill %d %s", sig, rr.uuid)
+ cmd := fmt.Sprintf(rr.runnerCmd+" --kill %d %s", sig, rr.uuid)
if rr.remoteUser != "root" {
cmd = "sudo " + cmd
}
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/cloud"
"github.com/sirupsen/logrus"
)
"errors"
"fmt"
- "git.curoverse.com/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/cloud"
"golang.org/x/crypto/ssh"
)
package worker
import (
+ "bytes"
"fmt"
+ "path/filepath"
"strings"
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/stats"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/stats"
"github.com/sirupsen/logrus"
)
}
func (wkr *worker) probeRunning() (running []string, reportsBroken, ok bool) {
- cmd := "crunch-run --list"
+ cmd := wkr.wp.runnerCmd + " --list"
if u := wkr.instance.RemoteUser(); u != "root" {
cmd = "sudo " + cmd
}
return false, stderr
}
logger.Info("boot probe succeeded")
+ if err = wkr.wp.loadRunnerData(); err != nil {
+ wkr.logger.WithError(err).Warn("cannot boot worker: error loading runner binary")
+ return false, stderr
+ } else if len(wkr.wp.runnerData) == 0 {
+ // Assume crunch-run is already installed
+ } else if _, stderr2, err := wkr.copyRunnerData(); err != nil {
+ wkr.logger.WithError(err).WithField("stderr", string(stderr2)).Warn("error copying runner binary")
+ return false, stderr2
+ } else {
+ stderr = append(stderr, stderr2...)
+ }
return true, stderr
}
+func (wkr *worker) copyRunnerData() (stdout, stderr []byte, err error) {
+ hash := fmt.Sprintf("%x", wkr.wp.runnerMD5)
+ dstdir, _ := filepath.Split(wkr.wp.runnerCmd)
+ logger := wkr.logger.WithFields(logrus.Fields{
+ "hash": hash,
+ "path": wkr.wp.runnerCmd,
+ })
+
+ stdout, stderr, err = wkr.executor.Execute(nil, `md5sum `+wkr.wp.runnerCmd, nil)
+	if err == nil && len(stderr) == 0 && bytes.Equal(stdout, []byte(hash+"  "+wkr.wp.runnerCmd+"\n")) {
+ logger.Info("runner binary already exists on worker, with correct hash")
+ return
+ }
+
+ // Note touch+chmod come before writing data, to avoid the
+ // possibility of md5 being correct while file mode is
+ // incorrect.
+ cmd := `set -e; dstdir="` + dstdir + `"; dstfile="` + wkr.wp.runnerCmd + `"; mkdir -p "$dstdir"; touch "$dstfile"; chmod 0755 "$dstdir" "$dstfile"; cat >"$dstfile"`
+ if wkr.instance.RemoteUser() != "root" {
+ cmd = `sudo sh -c '` + strings.Replace(cmd, "'", "'\\''", -1) + `'`
+ }
+ logger.WithField("cmd", cmd).Info("installing runner binary on worker")
+ stdout, stderr, err = wkr.executor.Execute(nil, cmd, bytes.NewReader(wkr.wp.runnerData))
+ return
+}
+
// caller must have lock.
func (wkr *worker) shutdownIfBroken(dur time.Duration) bool {
if wkr.idleBehavior == IdleBehaviorHold {
package worker
import (
+ "bytes"
+ "crypto/md5"
"errors"
+ "fmt"
"io"
+ "strings"
"time"
- "git.curoverse.com/arvados.git/lib/cloud"
- "git.curoverse.com/arvados.git/lib/dispatchcloud/test"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/cloud"
+ "git.arvados.org/arvados.git/lib/dispatchcloud/test"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
check "gopkg.in/check.v1"
)
running int
starting int
respBoot stubResp // zero value is success
+ respDeploy stubResp // zero value is success
respRun stubResp // zero value is success + nothing running
+ respRunDeployed stubResp
+ deployRunner []byte
+ expectStdin []byte
expectState State
expectRunning int
}
errFail := errors.New("failed")
respFail := stubResp{"", "command failed\n", errFail}
respContainerRunning := stubResp{"zzzzz-dz642-abcdefghijklmno\n", "", nil}
- for _, trial := range []trialT{
+ for idx, trial := range []trialT{
{
testCaseComment: "Unknown, probes fail",
state: StateUnknown,
starting: 1,
expectState: StateRunning,
},
+ {
+ testCaseComment: "Booting, boot probe succeeds, deployRunner succeeds, run probe succeeds",
+ state: StateBooting,
+ deployRunner: []byte("ELF"),
+ expectStdin: []byte("ELF"),
+ respRun: respFail,
+ respRunDeployed: respContainerRunning,
+ expectRunning: 1,
+ expectState: StateRunning,
+ },
+ {
+ testCaseComment: "Booting, boot probe succeeds, deployRunner fails",
+ state: StateBooting,
+ deployRunner: []byte("ELF"),
+ respDeploy: respFail,
+ expectStdin: []byte("ELF"),
+ expectState: StateBooting,
+ },
+ {
+ testCaseComment: "Booting, boot probe succeeds, deployRunner skipped, run probe succeeds",
+ state: StateBooting,
+ deployRunner: nil,
+ respDeploy: respFail,
+ expectState: StateIdle,
+ },
} {
- c.Logf("------- %#v", trial)
+ c.Logf("------- trial %d: %#v", idx, trial)
ctime := time.Now().Add(-trial.age)
- exr := stubExecutor{
- "bootprobe": trial.respBoot,
- "crunch-run --list": trial.respRun,
+ exr := &stubExecutor{
+ response: map[string]stubResp{
+ "bootprobe": trial.respBoot,
+ "crunch-run --list": trial.respRun,
+ "{deploy}": trial.respDeploy,
+ },
}
wp := &Pool{
arvClient: ac,
timeoutBooting: bootTimeout,
timeoutProbe: probeTimeout,
exited: map[string]time.Time{},
+ runnerCmd: "crunch-run",
+ runnerData: trial.deployRunner,
+ runnerMD5: md5.Sum(trial.deployRunner),
+ }
+ if trial.deployRunner != nil {
+ svHash := md5.Sum(trial.deployRunner)
+ wp.runnerCmd = fmt.Sprintf("/var/run/arvados/crunch-run~%x", svHash)
+ exr.response[wp.runnerCmd+" --list"] = trial.respRunDeployed
}
wkr := &worker{
logger: logger,
wkr.probeAndUpdate()
c.Check(wkr.state, check.Equals, trial.expectState)
c.Check(len(wkr.running), check.Equals, trial.expectRunning)
+ c.Check(exr.stdin.String(), check.Equals, string(trial.expectStdin))
}
}
stderr string
err error
}
-type stubExecutor map[string]stubResp
-func (se stubExecutor) SetTarget(cloud.ExecutorTarget) {}
-func (se stubExecutor) Close() {}
-func (se stubExecutor) Execute(env map[string]string, cmd string, stdin io.Reader) (stdout, stderr []byte, err error) {
- resp, ok := se[cmd]
+type stubExecutor struct {
+ response map[string]stubResp
+ stdin bytes.Buffer
+}
+
+func (se *stubExecutor) SetTarget(cloud.ExecutorTarget) {}
+func (se *stubExecutor) Close() {}
+func (se *stubExecutor) Execute(env map[string]string, cmd string, stdin io.Reader) (stdout, stderr []byte, err error) {
+ if stdin != nil {
+ _, err = io.Copy(&se.stdin, stdin)
+ if err != nil {
+ return nil, []byte(err.Error()), err
+ }
+ }
+ resp, ok := se.response[cmd]
+ if !ok && strings.Contains(cmd, `; cat >"$dstfile"`) {
+ resp, ok = se.response["{deploy}"]
+ }
if !ok {
- return nil, []byte("command not found\n"), errors.New("command not found")
+ return nil, []byte(fmt.Sprintf("%s: command not found\n", cmd)), errors.New("command not found")
}
return []byte(resp.stdout), []byte(resp.stderr), resp.err
}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package mount
+
+import (
+ "flag"
+ "io"
+ "log"
+ "net/http"
+ _ "net/http/pprof"
+ "os"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
+ "github.com/arvados/cgofuse/fuse"
+)
+
+var Command = &cmd{}
+
+type cmd struct {
+ // ready, if non-nil, will be closed when the mount is
+ // initialized. If ready is non-nil, it RunCommand() should
+ // not be called more than once, or when ready is already
+ // closed.
+ ready chan struct{}
+ // It is safe to call Unmount only after ready has been
+ // closed.
+ Unmount func() (ok bool)
+}
+
+// RunCommand implements the subcommand "mount <path> [fuse options]".
+//
+// The "-d" fuse option (and perhaps other features) ignores the
+// stderr argument and prints to os.Stderr instead.
+func (c *cmd) RunCommand(prog string, args []string, stdin io.Reader, stdout, stderr io.Writer) int {
+ logger := log.New(stderr, prog+" ", 0)
+ flags := flag.NewFlagSet(prog, flag.ContinueOnError)
+ ro := flags.Bool("ro", false, "read-only")
+ experimental := flags.Bool("experimental", false, "acknowledge this is an experimental command, and should not be used in production (required)")
+ blockCache := flags.Int("block-cache", 4, "read cache size (number of 64MiB blocks)")
+ pprof := flags.String("pprof", "", "serve Go profile data at `[addr]:port`")
+ err := flags.Parse(args)
+ if err != nil {
+ logger.Print(err)
+ return 2
+ }
+ if !*experimental {
+ logger.Printf("error: experimental command %q used without --experimental flag", prog)
+ return 2
+ }
+ if *pprof != "" {
+ go func() {
+ log.Println(http.ListenAndServe(*pprof, nil))
+ }()
+ }
+
+ client := arvados.NewClientFromEnv()
+ ac, err := arvadosclient.New(client)
+ if err != nil {
+ logger.Print(err)
+ return 1
+ }
+ kc, err := keepclient.MakeKeepClient(ac)
+ if err != nil {
+ logger.Print(err)
+ return 1
+ }
+ kc.BlockCache = &keepclient.BlockCache{MaxBlocks: *blockCache}
+ host := fuse.NewFileSystemHost(&keepFS{
+ Client: client,
+ KeepClient: kc,
+ ReadOnly: *ro,
+ Uid: os.Getuid(),
+ Gid: os.Getgid(),
+ ready: c.ready,
+ })
+ c.Unmount = host.Unmount
+ ok := host.Mount("", flags.Args())
+ if !ok {
+ return 1
+ }
+ return 0
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package mount
+
+import (
+ "bytes"
+ "encoding/json"
+ "io/ioutil"
+ "os"
+ "time"
+
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&CmdSuite{})
+
+type CmdSuite struct {
+ mnt string
+}
+
+func (s *CmdSuite) SetUpTest(c *check.C) {
+ tmpdir, err := ioutil.TempDir("", "")
+ c.Assert(err, check.IsNil)
+ s.mnt = tmpdir
+}
+
+func (s *CmdSuite) TearDownTest(c *check.C) {
+ c.Check(os.RemoveAll(s.mnt), check.IsNil)
+}
+
+func (s *CmdSuite) TestMount(c *check.C) {
+ exited := make(chan int)
+ stdin := bytes.NewBufferString("stdin")
+ stdout := bytes.NewBuffer(nil)
+ stderr := bytes.NewBuffer(nil)
+ mountCmd := cmd{ready: make(chan struct{})}
+ ready := false
+ go func() {
+ exited <- mountCmd.RunCommand("test mount", []string{"--experimental", s.mnt}, stdin, stdout, stderr)
+ }()
+ go func() {
+ <-mountCmd.ready
+ ready = true
+
+ f, err := os.Open(s.mnt + "/by_id/" + arvadostest.FooCollection)
+ if c.Check(err, check.IsNil) {
+ dirnames, err := f.Readdirnames(-1)
+ c.Check(err, check.IsNil)
+ c.Check(dirnames, check.DeepEquals, []string{"foo"})
+ f.Close()
+ }
+
+ buf, err := ioutil.ReadFile(s.mnt + "/by_id/" + arvadostest.FooCollection + "/.arvados#collection")
+ if c.Check(err, check.IsNil) {
+ var m map[string]interface{}
+ err = json.Unmarshal(buf, &m)
+ c.Check(err, check.IsNil)
+ c.Check(m["manifest_text"], check.Matches, `\. acbd.* 0:3:foo\n`)
+ }
+
+ _, err = os.Open(s.mnt + "/by_id/zzzzz-4zz18-does-not-exist")
+ c.Check(os.IsNotExist(err), check.Equals, true)
+
+ ok := mountCmd.Unmount()
+ c.Check(ok, check.Equals, true)
+ }()
+ select {
+ case <-time.After(5 * time.Second):
+ c.Fatal("timed out")
+ case errCode, ok := <-exited:
+ c.Check(ok, check.Equals, true)
+ c.Check(errCode, check.Equals, 0)
+ }
+ c.Check(ready, check.Equals, true)
+ c.Check(stdout.String(), check.Equals, "")
+ // stdin should not have been read
+ c.Check(stdin.String(), check.Equals, "stdin")
+}
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package mount
+
+import (
+ "io"
+ "log"
+ "os"
+ "runtime/debug"
+ "sync"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
+ "github.com/arvados/cgofuse/fuse"
+)
+
+// sharedFile wraps arvados.File with a sync.Mutex, so fuse can safely
+// use a single filehandle concurrently on behalf of multiple
+// threads/processes.
+type sharedFile struct {
+ arvados.File
+ sync.Mutex
+}
+
+// keepFS implements cgofuse's FileSystemInterface.
+type keepFS struct {
+ fuse.FileSystemBase
+ Client *arvados.Client
+ KeepClient *keepclient.KeepClient
+ ReadOnly bool
+ Uid int
+ Gid int
+
+ root arvados.CustomFileSystem
+ open map[uint64]*sharedFile
+ lastFH uint64
+ sync.RWMutex
+
+ // If non-nil, this channel will be closed by Init() to notify
+ // other goroutines that the mount is ready.
+ ready chan struct{}
+}
+
+var (
+ invalidFH = ^uint64(0)
+)
+
+// newFH wraps f in a sharedFile, adds it to fs's lookup table using a
+// new handle number, and returns the handle number.
+func (fs *keepFS) newFH(f arvados.File) uint64 {
+ fs.Lock()
+ defer fs.Unlock()
+ if fs.open == nil {
+ fs.open = make(map[uint64]*sharedFile)
+ }
+ fs.lastFH++
+ fh := fs.lastFH
+ fs.open[fh] = &sharedFile{File: f}
+ return fh
+}
+
+func (fs *keepFS) lookupFH(fh uint64) *sharedFile {
+ fs.RLock()
+ defer fs.RUnlock()
+ return fs.open[fh]
+}
+
+func (fs *keepFS) Init() {
+ defer fs.debugPanics()
+ fs.root = fs.Client.SiteFileSystem(fs.KeepClient)
+ fs.root.MountProject("home", "")
+ if fs.ready != nil {
+ close(fs.ready)
+ }
+}
+
+func (fs *keepFS) Create(path string, flags int, mode uint32) (errc int, fh uint64) {
+ defer fs.debugPanics()
+ if fs.ReadOnly {
+ return -fuse.EROFS, invalidFH
+ }
+ f, err := fs.root.OpenFile(path, flags|os.O_CREATE, os.FileMode(mode))
+ if err == os.ErrExist {
+ return -fuse.EEXIST, invalidFH
+ } else if err != nil {
+ return -fuse.EINVAL, invalidFH
+ }
+ return 0, fs.newFH(f)
+}
+
+func (fs *keepFS) Open(path string, flags int) (errc int, fh uint64) {
+ defer fs.debugPanics()
+ if fs.ReadOnly && flags&(os.O_RDWR|os.O_WRONLY|os.O_CREATE) != 0 {
+ return -fuse.EROFS, invalidFH
+ }
+ f, err := fs.root.OpenFile(path, flags, 0)
+ if err != nil {
+ return -fuse.ENOENT, invalidFH
+ } else if fi, err := f.Stat(); err != nil {
+ return -fuse.EIO, invalidFH
+ } else if fi.IsDir() {
+ f.Close()
+ return -fuse.EISDIR, invalidFH
+ }
+ return 0, fs.newFH(f)
+}
+
+func (fs *keepFS) Utimens(path string, tmsp []fuse.Timespec) int {
+ defer fs.debugPanics()
+ if fs.ReadOnly {
+ return -fuse.EROFS
+ }
+ f, err := fs.root.OpenFile(path, 0, 0)
+ if err != nil {
+ return fs.errCode(err)
+ }
+ f.Close()
+ return 0
+}
+
+func (fs *keepFS) errCode(err error) int {
+ if os.IsNotExist(err) {
+ return -fuse.ENOENT
+ }
+ switch err {
+ case os.ErrExist:
+ return -fuse.EEXIST
+ case arvados.ErrInvalidArgument:
+ return -fuse.EINVAL
+ case arvados.ErrInvalidOperation:
+ return -fuse.ENOSYS
+ case arvados.ErrDirectoryNotEmpty:
+ return -fuse.ENOTEMPTY
+ case nil:
+ return 0
+ default:
+ return -fuse.EIO
+ }
+}
+
+func (fs *keepFS) Mkdir(path string, mode uint32) int {
+ defer fs.debugPanics()
+ if fs.ReadOnly {
+ return -fuse.EROFS
+ }
+ f, err := fs.root.OpenFile(path, os.O_CREATE|os.O_EXCL, os.FileMode(mode)|os.ModeDir)
+ if err != nil {
+ return fs.errCode(err)
+ }
+ f.Close()
+ return 0
+}
+
+func (fs *keepFS) Opendir(path string) (errc int, fh uint64) {
+ defer fs.debugPanics()
+ f, err := fs.root.OpenFile(path, 0, 0)
+ if err != nil {
+ return fs.errCode(err), invalidFH
+ } else if fi, err := f.Stat(); err != nil {
+ return fs.errCode(err), invalidFH
+ } else if !fi.IsDir() {
+ f.Close()
+ return -fuse.ENOTDIR, invalidFH
+ }
+ return 0, fs.newFH(f)
+}
+
+func (fs *keepFS) Releasedir(path string, fh uint64) (errc int) {
+ defer fs.debugPanics()
+ return fs.Release(path, fh)
+}
+
+func (fs *keepFS) Rmdir(path string) int {
+ defer fs.debugPanics()
+ return fs.errCode(fs.root.Remove(path))
+}
+
+func (fs *keepFS) Release(path string, fh uint64) (errc int) {
+ defer fs.debugPanics()
+ fs.Lock()
+ defer fs.Unlock()
+ defer delete(fs.open, fh)
+ if f := fs.open[fh]; f != nil {
+ err := f.Close()
+ if err != nil {
+ return -fuse.EIO
+ }
+ }
+ return 0
+}
+
+func (fs *keepFS) Rename(oldname, newname string) (errc int) {
+ defer fs.debugPanics()
+ if fs.ReadOnly {
+ return -fuse.EROFS
+ }
+ return fs.errCode(fs.root.Rename(oldname, newname))
+}
+
+func (fs *keepFS) Unlink(path string) (errc int) {
+ defer fs.debugPanics()
+ if fs.ReadOnly {
+ return -fuse.EROFS
+ }
+ return fs.errCode(fs.root.Remove(path))
+}
+
+func (fs *keepFS) Truncate(path string, size int64, fh uint64) (errc int) {
+ defer fs.debugPanics()
+ if fs.ReadOnly {
+ return -fuse.EROFS
+ }
+
+ // Sometimes fh is a valid filehandle and we don't need to
+ // waste a name lookup.
+ if f := fs.lookupFH(fh); f != nil {
+ return fs.errCode(f.Truncate(size))
+ }
+
+ // Other times, fh is invalid and we need to look up the path.
+ f, err := fs.root.OpenFile(path, os.O_RDWR, 0)
+ if err != nil {
+ return fs.errCode(err)
+ }
+ defer f.Close()
+ return fs.errCode(f.Truncate(size))
+}
+
+func (fs *keepFS) Getattr(path string, stat *fuse.Stat_t, fh uint64) (errc int) {
+ defer fs.debugPanics()
+ var fi os.FileInfo
+ var err error
+ if f := fs.lookupFH(fh); f != nil {
+ // Valid filehandle -- ignore path.
+ fi, err = f.Stat()
+ } else {
+ // Invalid filehandle -- lookup path.
+ fi, err = fs.root.Stat(path)
+ }
+ if err != nil {
+ return fs.errCode(err)
+ }
+ fs.fillStat(stat, fi)
+ return 0
+}
+
+func (fs *keepFS) Chmod(path string, mode uint32) (errc int) {
+ if fs.ReadOnly {
+ return -fuse.EROFS
+ }
+ if fi, err := fs.root.Stat(path); err != nil {
+ return fs.errCode(err)
+ } else if mode & ^uint32(fuse.S_IFREG|fuse.S_IFDIR|0777) != 0 {
+ // Refuse to set mode bits other than
+ // regfile/dir/perms
+ return -fuse.ENOSYS
+ } else if (fi.Mode()&os.ModeDir != 0) != (mode&fuse.S_IFDIR != 0) {
+ // Refuse to transform a regular file to a dir, or
+ // vice versa
+ return -fuse.ENOSYS
+ }
+ // As long as the change isn't nonsense, chmod is a no-op,
+ // because we don't save permission bits.
+ return 0
+}
+
+func (fs *keepFS) fillStat(stat *fuse.Stat_t, fi os.FileInfo) {
+ defer fs.debugPanics()
+ var m uint32
+ if fi.IsDir() {
+ m = m | fuse.S_IFDIR
+ } else {
+ m = m | fuse.S_IFREG
+ }
+ m = m | uint32(fi.Mode()&os.ModePerm)
+ stat.Mode = m
+ stat.Nlink = 1
+ stat.Size = fi.Size()
+ t := fuse.NewTimespec(fi.ModTime())
+ stat.Mtim = t
+ stat.Ctim = t
+ stat.Atim = t
+ stat.Birthtim = t
+ stat.Blksize = 1024
+ stat.Blocks = (stat.Size + stat.Blksize - 1) / stat.Blksize
+ if fs.Uid > 0 && int64(fs.Uid) < 1<<31 {
+ stat.Uid = uint32(fs.Uid)
+ }
+ if fs.Gid > 0 && int64(fs.Gid) < 1<<31 {
+ stat.Gid = uint32(fs.Gid)
+ }
+}
+
+func (fs *keepFS) Write(path string, buf []byte, ofst int64, fh uint64) (n int) {
+ defer fs.debugPanics()
+ if fs.ReadOnly {
+ return -fuse.EROFS
+ }
+ f := fs.lookupFH(fh)
+ if f == nil {
+ return -fuse.EBADF
+ }
+ f.Lock()
+ defer f.Unlock()
+ if _, err := f.Seek(ofst, io.SeekStart); err != nil {
+ return fs.errCode(err)
+ }
+ n, err := f.Write(buf)
+ if err != nil {
+ log.Printf("error writing %q: %s", path, err)
+ return fs.errCode(err)
+ }
+ return n
+}
+
+func (fs *keepFS) Read(path string, buf []byte, ofst int64, fh uint64) (n int) {
+ defer fs.debugPanics()
+ f := fs.lookupFH(fh)
+ if f == nil {
+ return -fuse.EBADF
+ }
+ f.Lock()
+ defer f.Unlock()
+ if _, err := f.Seek(ofst, io.SeekStart); err != nil {
+ return fs.errCode(err)
+ }
+ n, err := f.Read(buf)
+ for err == nil && n < len(buf) {
+ // f is an io.Reader ("If some data is available but
+ // not len(p) bytes, Read conventionally returns what
+ // is available instead of waiting for more") -- but
+ // our caller requires us to either fill buf or reach
+ // EOF.
+ done := n
+ n, err = f.Read(buf[done:])
+ n += done
+ }
+ if err != nil && err != io.EOF {
+ log.Printf("error reading %q: %s", path, err)
+ return fs.errCode(err)
+ }
+ return n
+}
+
+func (fs *keepFS) Readdir(path string,
+ fill func(name string, stat *fuse.Stat_t, ofst int64) bool,
+ ofst int64,
+ fh uint64) (errc int) {
+ defer fs.debugPanics()
+ f := fs.lookupFH(fh)
+ if f == nil {
+ return -fuse.EBADF
+ }
+ fill(".", nil, 0)
+ fill("..", nil, 0)
+ var stat fuse.Stat_t
+ fis, err := f.Readdir(-1)
+ if err != nil {
+ return fs.errCode(err)
+ }
+ for _, fi := range fis {
+ fs.fillStat(&stat, fi)
+ fill(fi.Name(), &stat, 0)
+ }
+ return 0
+}
+
+func (fs *keepFS) Fsync(path string, datasync bool, fh uint64) int {
+ defer fs.debugPanics()
+ f := fs.lookupFH(fh)
+ if f == nil {
+ return -fuse.EBADF
+ }
+ return fs.errCode(f.Sync())
+}
+
+func (fs *keepFS) Fsyncdir(path string, datasync bool, fh uint64) int {
+ return fs.Fsync(path, datasync, fh)
+}
+
+// debugPanics (when deferred by keepFS handlers) prints an error and
+// stack trace on stderr when a handler crashes. (Without this,
+// cgofuse recovers from panics silently and returns EIO.)
+func (fs *keepFS) debugPanics() {
+ if err := recover(); err != nil {
+ log.Printf("(%T) %v", err, err)
+ debug.PrintStack()
+ panic(err)
+ }
+}
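The `debugPanics` handler above recovers only to log, then re-panics so the outer framework (cgofuse, in this patch) still observes the crash instead of converting it silently to EIO. A self-contained sketch of the same pattern; `handler` and its nil-map panic are hypothetical stand-ins for a FUSE callback:

```go
package main

import (
	"log"
	"runtime/debug"
)

// debugPanics logs the panic value and a stack trace, then re-panics
// so callers further up still see the original failure.
func debugPanics() {
	if err := recover(); err != nil {
		log.Printf("(%T) %v", err, err)
		debug.PrintStack()
		panic(err)
	}
}

// handler simulates a FUSE callback that crashes.
func handler() (errc int) {
	defer debugPanics()
	var m map[string]int
	m["boom"] = 1 // panics: assignment to entry in nil map
	return 0
}

func main() {
	defer func() {
		if err := recover(); err != nil {
			log.Printf("handler crashed, stack was already printed: %v", err)
		}
	}()
	handler()
}
```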
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package mount
+
+import (
+ "testing"
+
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
+ "github.com/arvados/cgofuse/fuse"
+ check "gopkg.in/check.v1"
+)
+
+// Gocheck boilerplate
+func Test(t *testing.T) {
+ check.TestingT(t)
+}
+
+var _ = check.Suite(&FSSuite{})
+
+type FSSuite struct{}
+
+func (*FSSuite) TestFuseInterface(c *check.C) {
+ var _ fuse.FileSystemInterface = &keepFS{}
+}
+
+func (*FSSuite) TestOpendir(c *check.C) {
+ client := arvados.NewClientFromEnv()
+ ac, err := arvadosclient.New(client)
+ c.Assert(err, check.IsNil)
+ kc, err := keepclient.MakeKeepClient(ac)
+ c.Assert(err, check.IsNil)
+
+ var fs fuse.FileSystemInterface = &keepFS{
+ Client: client,
+ KeepClient: kc,
+ }
+ fs.Init()
+ errc, fh := fs.Opendir("/by_id")
+ c.Check(errc, check.Equals, 0)
+ c.Check(fh, check.Not(check.Equals), uint64(0))
+ c.Check(fh, check.Not(check.Equals), invalidFH)
+ errc, fh = fs.Opendir("/bogus")
+ c.Check(errc, check.Equals, -fuse.ENOENT)
+ c.Check(fh, check.Equals, invalidFH)
+}
"io"
"net"
"net/http"
+ "net/url"
"os"
"strings"
- "git.curoverse.com/arvados.git/lib/cmd"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
"github.com/coreos/go-systemd/daemon"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
var err error
defer func() {
if err != nil {
- log.WithError(err).Info("exiting")
+ log.WithError(err).Error("exiting")
}
}()
if !ok {
return arvados.URL{}, fmt.Errorf("unknown service name %q", prog)
}
+
+ if want := os.Getenv("ARVADOS_SERVICE_INTERNAL_URL"); want == "" {
+ } else if url, err := url.Parse(want); err != nil {
+ return arvados.URL{}, fmt.Errorf("$ARVADOS_SERVICE_INTERNAL_URL (%q): %s", want, err)
+ } else {
+ return arvados.URL(*url), nil
+ }
+
errors := []string{}
for url := range svc.InternalURLs {
listener, err := net.Listen("tcp", url.Host)
"testing"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
check "gopkg.in/check.v1"
)
"context"
"net/http"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/sirupsen/logrus"
)
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package service
+
+import (
+ "bytes"
+ "io"
+)
+
+type LogPrefixer struct {
+ io.Writer
+ Prefix []byte
+ did bool
+}
+
+func (lp *LogPrefixer) Write(p []byte) (int, error) {
+ if len(p) == 0 {
+ return 0, nil
+ }
+ var out []byte
+ if !lp.did {
+ out = append(out, lp.Prefix...)
+ }
+ lp.did = p[len(p)-1] != '\n'
+ out = append(out, bytes.Replace(p[:len(p)-1], []byte("\n"), append([]byte("\n"), lp.Prefix...), -1)...)
+ out = append(out, p[len(p)-1])
+ _, err := lp.Writer.Write(out)
+ if err != nil {
+ return 0, err
+ }
+ return len(p), nil
+}
"strings"
"syscall"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
)
gemspec
gem 'minitest', '>= 5.0.0'
gem 'rake'
+gem 'signet', '<= 0.11'
exit
end
-git_latest_tag = `git tag -l |sort -V -r |head -n1`
-git_latest_tag = git_latest_tag.encode('utf-8').strip
-git_timestamp, git_hash = `git log -n1 --first-parent --format=%ct:%H .`.chomp.split(":")
-git_timestamp = Time.at(git_timestamp.to_i).utc
+git_dir = ENV["GIT_DIR"]
+git_work = ENV["GIT_WORK_TREE"]
+begin
+ ENV["GIT_DIR"] = File.expand_path "#{__dir__}/../../.git"
+ ENV["GIT_WORK_TREE"] = File.expand_path "#{__dir__}/../.."
+ git_timestamp, git_hash = `git log -n1 --first-parent --format=%ct:%H #{__dir__}`.chomp.split(":")
+ if ENV["ARVADOS_BUILDING_VERSION"]
+ version = ENV["ARVADOS_BUILDING_VERSION"]
+ else
+ version = `#{__dir__}/../../build/version-at-commit.sh #{git_hash}`.encode('utf-8').strip
+ end
+ git_timestamp = Time.at(git_timestamp.to_i).utc
+ensure
+ ENV["GIT_DIR"] = git_dir
+ ENV["GIT_WORK_TREE"] = git_work
+end
Gem::Specification.new do |s|
s.name = 'arvados-cli'
- s.version = "#{git_latest_tag}.#{git_timestamp.strftime('%Y%m%d%H%M%S')}"
+ s.version = version
s.date = git_timestamp.strftime("%Y-%m-%d")
s.summary = "Arvados CLI tools"
s.description = "Arvados command line tools, git commit #{git_hash}"
s.add_runtime_dependency 'andand', '~> 1.3', '>= 1.3.3'
s.add_runtime_dependency 'oj', '~> 3.0'
s.add_runtime_dependency 'curb', '~> 0.8'
+ s.add_runtime_dependency 'launchy', '< 2.5'
# arvados-google-api-client 0.8.7.2 is incompatible with faraday 0.16.2
s.add_dependency('faraday', '< 0.16')
s.homepage =
from .fsaccess import CollectionFetcher
from .pathmapper import NoFollowPathMapper, trim_listing
from .perf import Perf
+from ._version import __version__
logger = logging.getLogger('arvados.cwl-runner')
metrics = logging.getLogger('arvados.cwl-runner.metrics')
(docker_req, docker_is_req) = self.get_requirement("DockerRequirement")
if not docker_req:
- docker_req = {"dockerImageId": "arvados/jobs"}
+ docker_req = {"dockerImageId": "arvados/jobs:"+__version__}
container_request["container_image"] = arv_docker_get_image(self.arvrunner.api,
docker_req,
import logging
from schema_salad.sourceline import SourceLine, cmap
+import schema_salad.ref_resolver
from cwltool.pack import pack
-from cwltool.load_tool import fetch_document
+from cwltool.load_tool import fetch_document, resolve_and_validate_document
from cwltool.process import shortname
from cwltool.workflow import Workflow, WorkflowException, WorkflowStep
from cwltool.pathmapper import adjustFileObjs, adjustDirObjs, visit_class
with SourceLine(self.tool, None, WorkflowException, logger.isEnabledFor(logging.DEBUG)):
if "id" not in self.tool:
raise WorkflowException("%s object must have 'id'" % (self.tool["class"]))
- document_loader, workflowobj, uri = (self.doc_loader, self.doc_loader.fetch(self.tool["id"]), self.tool["id"])
discover_secondary_files(self.arvrunner.fs_access, builder,
self.tool["inputs"], joborder)
with Perf(metrics, "subworkflow upload_deps"):
upload_dependencies(self.arvrunner,
os.path.basename(joborder.get("id", "#")),
- document_loader,
+ self.doc_loader,
joborder,
joborder.get("id", "#"),
False)
if self.wf_pdh is None:
- workflowobj["requirements"] = dedup_reqs(self.requirements)
- workflowobj["hints"] = dedup_reqs(self.hints)
+ packed = pack(self.loadingContext, self.tool["id"], loader=self.doc_loader)
- packed = pack(document_loader, workflowobj, uri, self.metadata)
+ for p in packed["$graph"]:
+ if p["id"] == "#main":
+ p["requirements"] = dedup_reqs(self.requirements)
+ p["hints"] = dedup_reqs(self.hints)
def visit(item):
+ if "requirements" in item:
+ item["requirements"] = [i for i in item["requirements"] if i["class"] != "DockerRequirement"]
for t in ("hints", "requirements"):
if t not in item:
continue
raise WorkflowException("Non-top-level ResourceRequirement in single container cannot have expressions")
if not dyn:
self.static_resource_req.append(req)
- if req["class"] == "DockerRequirement":
- if "http://arvados.org/cwl#dockerCollectionPDH" in req:
- del req["http://arvados.org/cwl#dockerCollectionPDH"]
visit_class(packed["$graph"], ("Workflow", "CommandLineTool"), visit)
upload_dependencies(self.arvrunner,
runtimeContext.name,
- document_loader,
+ self.doc_loader,
packed,
- uri,
+ self.tool["id"],
False)
# Discover files/directories referenced by the
if job_res_reqs[0].get("ramMin", 1024) < 128:
job_res_reqs[0]["ramMin"] = 128
+ arguments = ["--no-container", "--move-outputs", "--preserve-entire-environment", "workflow.cwl", "cwl.input.yml"]
+ if runtimeContext.debug:
+ arguments.insert(0, '--debug')
+
wf_runner = cmap({
"class": "CommandLineTool",
"baseCommand": "cwltool",
}]
}],
"hints": self.hints,
- "arguments": ["--no-container", "--move-outputs", "--preserve-entire-environment", "workflow.cwl#main", "cwl.input.yml"],
+ "arguments": arguments,
"id": "#"
})
return ArvadosCommandTool(self.arvrunner, wf_runner, self.loadingContext).job(joborder_resolved, output_callback, runtimeContext)
for req in job_reqs:
tool.requirements.append(req)
- def arv_executor(self, tool, job_order, runtimeContext, logger=None):
+ def arv_executor(self, updated_tool, job_order, runtimeContext, logger=None):
self.debug = runtimeContext.debug
- tool.visit(self.check_features)
+ updated_tool.visit(self.check_features)
self.project_uuid = runtimeContext.project_uuid
self.pipeline = None
raise Exception("--submit-request-uuid requires containers API, but using '{}' api".format(self.work_api))
if not runtimeContext.name:
- runtimeContext.name = self.name = tool.tool.get("label") or tool.metadata.get("label") or os.path.basename(tool.tool["id"])
+ runtimeContext.name = self.name = updated_tool.tool.get("label") or updated_tool.metadata.get("label") or os.path.basename(updated_tool.tool["id"])
# Upload local file references in the job order.
job_order = upload_job_order(self, "%s input" % runtimeContext.name,
- tool, job_order)
+ updated_tool, job_order)
+
+ # The last clause means: if it is a CommandLineTool, and we are
+ # going to wait for the result, and always_submit_runner is false,
+ # then we don't submit a runner process.
submitting = (runtimeContext.update_workflow or
runtimeContext.create_workflow or
(runtimeContext.submit and not
- (tool.tool["class"] == "CommandLineTool" and
+ (updated_tool.tool["class"] == "CommandLineTool" and
runtimeContext.wait and
not runtimeContext.always_submit_runner)))
if submitting:
# Document may have been auto-updated. Reload the original
# document with updating disabled because we want to
- # submit the original document, not the auto-updated one.
- tool = load_tool(tool.tool["id"], loadingContext)
+ # submit the document with its original CWL version, not
+ # the auto-updated one.
+ tool = load_tool(updated_tool.tool["id"], loadingContext)
+ else:
+ tool = updated_tool
# Upload direct dependencies of workflow steps, get back mapping of files to keep references.
# Also uploads docker images.
if runtimeContext.submit:
# Submit a runner job to run the workflow for us.
if self.work_api == "containers":
- if tool.tool["class"] == "CommandLineTool" and runtimeContext.wait and (not runtimeContext.always_submit_runner):
- runtimeContext.runnerjob = tool.tool["id"]
+ if submitting:
+ tool = RunnerContainer(self, updated_tool,
+ tool, loadingContext, runtimeContext.enable_reuse,
+ self.output_name,
+ self.output_tags,
+ submit_runner_ram=runtimeContext.submit_runner_ram,
+ name=runtimeContext.name,
+ on_error=runtimeContext.on_error,
+ submit_runner_image=runtimeContext.submit_runner_image,
+ intermediate_output_ttl=runtimeContext.intermediate_output_ttl,
+ merged_map=merged_map,
+ priority=runtimeContext.priority,
+ secret_store=self.secret_store,
+ collection_cache_size=runtimeContext.collection_cache_size,
+ collection_cache_is_default=self.should_estimate_cache_size)
else:
- tool = RunnerContainer(self, tool, loadingContext, runtimeContext.enable_reuse,
- self.output_name,
- self.output_tags,
- submit_runner_ram=runtimeContext.submit_runner_ram,
- name=runtimeContext.name,
- on_error=runtimeContext.on_error,
- submit_runner_image=runtimeContext.submit_runner_image,
- intermediate_output_ttl=runtimeContext.intermediate_output_ttl,
- merged_map=merged_map,
- priority=runtimeContext.priority,
- secret_store=self.secret_store,
- collection_cache_size=runtimeContext.collection_cache_size,
- collection_cache_is_default=self.should_estimate_cache_size)
+ runtimeContext.runnerjob = tool.tool["id"]
if runtimeContext.cwl_runner_job is not None:
self.uuid = runtimeContext.cwl_runner_job.get('uuid')
return f.read()
if url.startswith("arvwf:"):
record = self.api_client.workflows().get(uuid=url[6:]).execute(num_retries=self.num_retries)
- definition = record["definition"] + ('\nlabel: "%s"\n' % record["name"].replace('"', '\\"'))
- return definition
+ definition = yaml.round_trip_load(record["definition"])
+ definition["label"] = record["name"]
+ return yaml.round_trip_dump(definition)
return super(CollectionFetcher, self).fetch_text(url)
def check_exists(self, url):
A "packed" workflow is one where all the components have been combined into a single document."""
rewrites = {}
- packed = pack(tool.doc_loader, tool.doc_loader.fetch(tool.tool["id"]),
- tool.tool["id"], tool.metadata, rewrite_out=rewrites)
+ packed = pack(arvrunner.loadingContext, tool.tool["id"],
+ rewrite_out=rewrites,
+ loader=tool.doc_loader)
rewrite_to_orig = {v: k for k,v in viewitems(rewrites)}
"""Base class for runner processes, which submit an instance of
arvados-cwl-runner and wait for the final result."""
- def __init__(self, runner, tool, loadingContext, enable_reuse,
+ def __init__(self, runner, updated_tool,
+ tool, loadingContext, enable_reuse,
output_name, output_tags, submit_runner_ram=0,
name=None, on_error=None, submit_runner_image=None,
intermediate_output_ttl=0, merged_map=None,
collection_cache_is_default=True):
loadingContext = loadingContext.copy()
- loadingContext.metadata = loadingContext.metadata.copy()
- loadingContext.metadata["cwlVersion"] = INTERNAL_VERSION
+ loadingContext.metadata = updated_tool.metadata.copy()
- super(Runner, self).__init__(tool.tool, loadingContext)
+ super(Runner, self).__init__(updated_tool.tool, loadingContext)
self.arvrunner = runner
self.embedded_tool = tool
import os
import re
-SETUP_DIR = os.path.dirname(__file__) or '.'
-
-def git_latest_tag():
- gittags = subprocess.check_output(['git', 'tag', '-l']).split()
- gittags.sort(key=lambda s: [int(u) for u in s.split(b'.')],reverse=True)
- return str(next(iter(gittags)).decode('utf-8'))
+SETUP_DIR = os.path.dirname(os.path.abspath(__file__))
def choose_version_from():
sdk_ts = subprocess.check_output(
getver = SETUP_DIR
return getver
-def git_timestamp_tag():
- gitinfo = subprocess.check_output(
- ['git', 'log', '--first-parent', '--max-count=1',
- '--format=format:%ct', choose_version_from()]).strip()
- return str(time.strftime('.%Y%m%d%H%M%S', time.gmtime(int(gitinfo))))
+def git_version_at_commit():
+ curdir = choose_version_from()
+ myhash = subprocess.check_output(['git', 'log', '-n1', '--first-parent',
+ '--format=%H', curdir]).strip()
+ myversion = subprocess.check_output([curdir+'/../../build/version-at-commit.sh', myhash]).strip().decode()
+ return myversion
def save_version(setup_dir, module, v):
- with open(os.path.join(setup_dir, module, "_version.py"), 'w') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'wt') as fp:
return fp.write("__version__ = '%s'\n" % v)
def read_version(setup_dir, module):
- with open(os.path.join(setup_dir, module, "_version.py"), 'r') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'rt') as fp:
return re.match("__version__ = '(.*)'$", fp.read()).groups()[0]
def get_version(setup_dir, module):
save_version(setup_dir, module, env_version)
else:
try:
- save_version(setup_dir, module, git_latest_tag() + git_timestamp_tag())
+ save_version(setup_dir, module, git_version_at_commit())
except (subprocess.CalledProcessError, OSError):
pass
author='Arvados',
author_email='info@arvados.org',
url="https://arvados.org",
- download_url="https://github.com/curoverse/arvados.git",
+ download_url="https://github.com/arvados/arvados.git",
license='Apache 2.0',
packages=find_packages(),
package_data={'arvados_cwl': ['arv-cwl-schema-v1.0.yml', 'arv-cwl-schema-v1.1.yml']},
'bin/arvados-cwl-runner',
],
# Note that arvados/build/run-build-packages.sh looks at this
- # file to determine what version of cwltool and schema-salad to build.
+ # file to determine what version of cwltool and schema-salad to
+ # build.
install_requires=[
- 'cwltool==1.0.20190831161204',
- 'schema-salad==4.5.20190815125611',
- 'typing >= 3.6.4',
- 'ruamel.yaml >=0.15.54, <=0.15.77',
+ 'cwltool==3.0.20200317203547',
+ 'schema-salad==5.0.20200302192450',
'arvados-python-client{}'.format(pysdk_dep),
'setuptools',
- 'ciso8601 >= 2.0.0',
- 'networkx < 2.3'
+ 'ciso8601 >= 2.0.0'
],
extras_require={
':os.name=="posix" and python_version<"3"': ['subprocess32 >= 3.5.1'],
data_files=[
('share/doc/arvados-cwl-runner', ['LICENSE-2.0.txt', 'README.rst']),
],
+ python_requires=">=3.5, <4",
classifiers=[
- 'Programming Language :: Python :: 2',
'Programming Language :: Python :: 3',
],
test_suite='tests',
tests_require=[
- 'mock>=1.0',
+ 'mock>=1.0,<4',
'subprocess32>=3.5.1',
],
- zip_safe=True
- )
+ zip_safe=True,
+)
reset_container=1
leave_running=0
config=dev
+devcwl=0
tag="latest"
-pythoncmd=python
+pythoncmd=python3
suite=conformance
runapi=containers
build=1
shift
;;
+ --devcwl)
+ devcwl=1
+ shift
+ ;;
--pythoncmd)
pythoncmd=$2
shift ; shift
shift ; shift
;;
-h|--help)
- echo "$0 [--no-reset-container] [--leave-running] [--config dev|localdemo] [--tag docker_tag] [--build] [--pythoncmd python(2|3)] [--suite (integration|conformance-v1.0|conformance-v1.1)]"
+ echo "$0 [--no-reset-container] [--leave-running] [--config dev|localdemo] [--tag docker_tag] [--build] [--pythoncmd python(2|3)] [--suite (integration|conformance-v1.0|conformance-*)]"
exit
;;
*)
git clone https://github.com/common-workflow-language/common-workflow-language.git
fi
cd common-workflow-language
-elif [[ "$suite" = "conformance-v1.1" ]] ; then
- if ! test -d cwl-v1.1 ; then
- git clone https://github.com/common-workflow-language/cwl-v1.1.git
+elif [[ "$suite" =~ conformance-(.*) ]] ; then
+ version=\${BASH_REMATCH[1]}
+ if ! test -d cwl-\${version} ; then
+ git clone https://github.com/common-workflow-language/cwl-\${version}.git
fi
- cd cwl-v1.1
+ cd cwl-\${version}
fi
if [[ "$suite" != "integration" ]] ; then
EOF2
chmod +x /tmp/cwltest/arv-cwl-containers
+EXTRA=--compute-checksum
+
+if [[ $devcwl == 1 ]] ; then
+ EXTRA="\$EXTRA --enable-dev"
+fi
+
env
if [[ "$suite" = "integration" ]] ; then
cd /usr/src/arvados/sdk/cwl/tests
exec ./arvados-tests.sh $@
else
- exec ./run_test.sh RUNNER=/tmp/cwltest/arv-cwl-${runapi} EXTRA=--compute-checksum $@
+ exec ./run_test.sh RUNNER=/tmp/cwltest/arv-cwl-${runapi} EXTRA="\$EXTRA" $@
fi
EOF
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+cwlVersion: v1.0
+class: Workflow
+$namespaces:
+ arv: "http://arvados.org/cwl#"
+ cwltool: "http://commonwl.org/cwltool#"
+requirements:
+ cwltool:LoadListingRequirement:
+ loadListing: no_listing
+inputs:
+ d: Directory
+steps:
+ step1:
+ in:
+ d: d
+ out: [out]
+ run: wf/16169-step.cwl
+outputs:
+ out:
+ type: File
+ outputSource: step1/out
should_fail: true
tool: 15295-bad-keep-ref.cwl
doc: Test checking for invalid keepref
+
+- job: listing-job.yml
+ output: {
+ "out": {
+ "class": "File",
+ "location": "output.txt",
+ "size": 5,
+ "checksum": "sha1$724ba28f4a9a1b472057ff99511ed393a45552e1"
+ }
+ }
+ tool: 16169-no-listing-hint.cwl
+ doc: "Test cwltool:LoadListingRequirement propagation"
default:
class: File
location: ../../../../tools/arvbox/bin/arvbox
+ branch:
+ type: string
+ default: master
+ logincluster:
+ type: boolean
+ default: false
outputs:
arvados_api_token:
type: string
container_name: containers
arvbox_data: mkdir/arvbox_data
arvbox_bin: arvbox
+ branch: branch
out: [cluster_id, container_host, arvbox_data_out, superuser_token]
scatter: [container_name, arvbox_data]
scatterMethod: dotproduct
cluster_hosts: start/container_host
arvbox_data: start/arvbox_data_out
arvbox_bin: arvbox
+ logincluster: logincluster
out: []
scatter: [container_name, this_cluster_id, arvbox_data]
scatterMethod: dotproduct
cluster_hosts: string[]
arvbox_data: Directory
arvbox_bin: File
+ logincluster:
+ type: boolean
+ default: false
outputs:
arvbox_data_out:
type: Directory
}
var r = {"Clusters": {}};
r["Clusters"][inputs.this_cluster_id] = {"RemoteClusters": remoteClusters};
+ if (r["Clusters"][inputs.this_cluster_id]) {
+ r["Clusters"][inputs.this_cluster_id]["Login"] = {"LoginCluster": inputs.cluster_ids[0]};
+ }
return JSON.stringify(r);
}
- entryname: application.yml.override
container_name: string
arvbox_data: Directory
arvbox_bin: File
+ branch:
+ type: string
+ default: master
outputs:
cluster_id:
type: string
- shellQuote: false
valueFrom: |
set -ex
- $(inputs.arvbox_bin.path) start dev
+ mkdir -p $ARVBOX_DATA
+ if ! test -d $ARVBOX_DATA/arvados ; then
+ cd $ARVBOX_DATA
+ git clone https://github.com/arvados/arvados.git
+ fi
+ cd $ARVBOX_DATA/arvados
+ gitver=`git rev-parse HEAD`
+ git fetch
+ git checkout -f $(inputs.branch)
+ git pull
+ pulled=`git rev-parse HEAD`
+ git --no-pager log -n1 $pulled
+
+ cd $(runtime.outdir)
+ if test "$gitver" = "$pulled" ; then
+ $(inputs.arvbox_bin.path) start dev
+ else
+ $(inputs.arvbox_bin.path) restart dev
+ fi
$(inputs.arvbox_bin.path) status > status.txt
$(inputs.arvbox_bin.path) cat /var/lib/arvados/superuser_token > superuser_token.txt
import functools
import cwltool.process
import cwltool.secrets
+from cwltool.update import INTERNAL_VERSION
from schema_salad.ref_resolver import Loader
from schema_salad.sourceline import cmap
cwltool.process._names = set()
def helper(self, runner, enable_reuse=True):
- document_loader, avsc_names, schema_metadata, metaschema_loader = cwltool.process.get_schema("v1.1")
+ document_loader, avsc_names, schema_metadata, metaschema_loader = cwltool.process.get_schema(INTERNAL_VERSION)
make_fs_access=functools.partial(arvados_cwl.CollectionFsAccess,
collection_cache=arvados_cwl.CollectionCache(runner.api, None, 0))
"basedir": "",
"make_fs_access": make_fs_access,
"loader": Loader({}),
- "metadata": {"cwlVersion": "v1.1", "http://commonwl.org/cwltool#original_cwlVersion": "v1.0"}})
+ "metadata": {"cwlVersion": INTERNAL_VERSION, "http://commonwl.org/cwltool#original_cwlVersion": "v1.0"}})
runtimeContext = arvados_cwl.context.ArvRuntimeContext(
{"work_api": "containers",
"basedir": "",
runner.api.collections().get().execute.return_value = {
"portable_data_hash": "99999999999999999999999999999993+99"}
- document_loader, avsc_names, schema_metadata, metaschema_loader = cwltool.process.get_schema("v1.1")
+ document_loader, avsc_names, schema_metadata, metaschema_loader = cwltool.process.get_schema(INTERNAL_VERSION)
tool = cmap({
"inputs": [],
cwltool.process._names = set()
def helper(self, runner, enable_reuse=True):
- document_loader, avsc_names, schema_metadata, metaschema_loader = cwltool.process.get_schema("v1.1")
+ document_loader, avsc_names, schema_metadata, metaschema_loader = cwltool.process.get_schema("v1.0")
make_fs_access=functools.partial(arvados_cwl.CollectionFsAccess,
collection_cache=arvados_cwl.CollectionCache(runner.api, None, 0))
"basedir": "",
"make_fs_access": make_fs_access,
"loader": document_loader,
- "metadata": {"cwlVersion": "v1.1", "http://commonwl.org/cwltool#original_cwlVersion": "v1.0"},
+ "metadata": {"cwlVersion": INTERNAL_VERSION, "http://commonwl.org/cwltool#original_cwlVersion": "v1.0"},
"construct_tool_object": runner.arv_make_tool})
runtimeContext = arvados_cwl.context.ArvRuntimeContext(
{"work_api": "containers",
"--no-container",
"--move-outputs",
"--preserve-entire-environment",
- "workflow.cwl#main",
+ "workflow.cwl",
"cwl.input.yml"
],
"container_image": "99999999999999999999999999999993+99",
u'--no-container',
u'--move-outputs',
u'--preserve-entire-environment',
- u'workflow.cwl#main',
+ u'workflow.cwl',
u'cwl.input.yml'
],
'use_existing': True,
import sys
import unittest
import cwltool.process
+import re
from io import BytesIO
stubs.expect_container_request_uuid + '\n')
self.assertEqual(exited, 0)
+
+ @stubs
+ def test_submit_container_tool(self, stubs):
+ # test for issue #16139
+ exited = arvados_cwl.main(
+ ["--submit", "--no-wait", "--api=containers", "--debug",
+ "tests/tool/tool_with_sf.cwl", "tests/tool/tool_with_sf.yml"],
+ stubs.capture_stdout, sys.stderr, api_client=stubs.api, keep_client=stubs.keep_client)
+
+ self.assertEqual(stubs.capture_stdout.getvalue(),
+ stubs.expect_container_request_uuid + '\n')
+ self.assertEqual(exited, 0)
+
@stubs
def test_submit_container_no_reuse(self, stubs):
exited = arvados_cwl.main(
self.assertEqual(exited, 1)
self.assertRegexpMatches(
- capture_stderr.getvalue(),
+ re.sub(r'[ \n]+', ' ', capture_stderr.getvalue()),
r"Expected collection uuid zzzzz-4zz18-zzzzzzzzzzzzzzz to be 99999999999999999999999999999998\+99 but API server reported 99999999999999999999999999999997\+99")
finally:
cwltool_logger.removeHandler(stderr_logger)
--- /dev/null
+clipper clupper
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+# Test case for arvados-cwl-runner
+#
+# Used to test whether scanning a tool file for dependencies (e.g. default
+# value blub.txt) and uploading to Keep works as intended.
+
+class: CommandLineTool
+cwlVersion: v1.0
+requirements:
+ - class: DockerRequirement
+ dockerPull: debian:8
+inputs:
+ - id: x
+ type: File
+ secondaryFiles:
+ - .cat
+ inputBinding:
+ valueFrom: $(self.path).cat
+ position: 1
+outputs: []
+baseCommand: cat
--- /dev/null
+x:
+ class: File
+ location: blub.txt
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: Apache-2.0
+
+class: CommandLineTool
+cwlVersion: v1.0
+requirements:
+ InlineJavascriptRequirement: {}
+ DockerRequirement:
+ dockerPull: debian:stretch-slim
+inputs:
+ d: Directory
+outputs:
+ out: stdout
+stdout: output.txt
+arguments:
+ [echo, "${if(inputs.d.listing === undefined) {return 'true';} else {return 'false';}}"]
{
"$graph": [
{
+ "$namespaces": {
+ "arv": "http://arvados.org/cwl#"
+ },
"class": "Workflow",
- "cwlVersion": "v1.1",
+ "cwlVersion": "v1.0",
"hints": [],
"id": "#main",
"inputs": [
"run": {
"baseCommand": "sleep",
"class": "CommandLineTool",
- "id": "#main/sleep1/run/subtool",
+ "id": "#main/sleep1/subtool",
"inputs": [
{
- "id": "#main/sleep1/run/subtool/sleeptime",
+ "id": "#main/sleep1/subtool/sleeptime",
"inputBinding": {
"position": 1
},
],
"outputs": [
{
- "id": "#main/sleep1/run/subtool/out",
+ "id": "#main/sleep1/subtool/out",
"outputBinding": {
"outputEval": "out"
},
]
}
],
- "cwlVersion": "v1.1"
-}
+ "cwlVersion": "v1.0"
+}
\ No newline at end of file
var (
EndpointConfigGet = APIEndpoint{"GET", "arvados/v1/config", ""}
EndpointLogin = APIEndpoint{"GET", "login", ""}
+ EndpointLogout = APIEndpoint{"GET", "logout", ""}
EndpointCollectionCreate = APIEndpoint{"POST", "arvados/v1/collections", "collection"}
- EndpointCollectionUpdate = APIEndpoint{"PATCH", "arvados/v1/collections/:uuid", "collection"}
- EndpointCollectionGet = APIEndpoint{"GET", "arvados/v1/collections/:uuid", ""}
+ EndpointCollectionUpdate = APIEndpoint{"PATCH", "arvados/v1/collections/{uuid}", "collection"}
+ EndpointCollectionGet = APIEndpoint{"GET", "arvados/v1/collections/{uuid}", ""}
EndpointCollectionList = APIEndpoint{"GET", "arvados/v1/collections", ""}
- EndpointCollectionProvenance = APIEndpoint{"GET", "arvados/v1/collections/:uuid/provenance", ""}
- EndpointCollectionUsedBy = APIEndpoint{"GET", "arvados/v1/collections/:uuid/used_by", ""}
- EndpointCollectionDelete = APIEndpoint{"DELETE", "arvados/v1/collections/:uuid", ""}
- EndpointCollectionTrash = APIEndpoint{"POST", "arvados/v1/collections/:uuid/trash", ""}
- EndpointCollectionUntrash = APIEndpoint{"POST", "arvados/v1/collections/:uuid/untrash", ""}
+ EndpointCollectionProvenance = APIEndpoint{"GET", "arvados/v1/collections/{uuid}/provenance", ""}
+ EndpointCollectionUsedBy = APIEndpoint{"GET", "arvados/v1/collections/{uuid}/used_by", ""}
+ EndpointCollectionDelete = APIEndpoint{"DELETE", "arvados/v1/collections/{uuid}", ""}
+ EndpointCollectionTrash = APIEndpoint{"POST", "arvados/v1/collections/{uuid}/trash", ""}
+ EndpointCollectionUntrash = APIEndpoint{"POST", "arvados/v1/collections/{uuid}/untrash", ""}
EndpointSpecimenCreate = APIEndpoint{"POST", "arvados/v1/specimens", "specimen"}
- EndpointSpecimenUpdate = APIEndpoint{"PATCH", "arvados/v1/specimens/:uuid", "specimen"}
- EndpointSpecimenGet = APIEndpoint{"GET", "arvados/v1/specimens/:uuid", ""}
+ EndpointSpecimenUpdate = APIEndpoint{"PATCH", "arvados/v1/specimens/{uuid}", "specimen"}
+ EndpointSpecimenGet = APIEndpoint{"GET", "arvados/v1/specimens/{uuid}", ""}
EndpointSpecimenList = APIEndpoint{"GET", "arvados/v1/specimens", ""}
- EndpointSpecimenDelete = APIEndpoint{"DELETE", "arvados/v1/specimens/:uuid", ""}
+ EndpointSpecimenDelete = APIEndpoint{"DELETE", "arvados/v1/specimens/{uuid}", ""}
EndpointContainerCreate = APIEndpoint{"POST", "arvados/v1/containers", "container"}
- EndpointContainerUpdate = APIEndpoint{"PATCH", "arvados/v1/containers/:uuid", "container"}
- EndpointContainerGet = APIEndpoint{"GET", "arvados/v1/containers/:uuid", ""}
+ EndpointContainerUpdate = APIEndpoint{"PATCH", "arvados/v1/containers/{uuid}", "container"}
+ EndpointContainerGet = APIEndpoint{"GET", "arvados/v1/containers/{uuid}", ""}
EndpointContainerList = APIEndpoint{"GET", "arvados/v1/containers", ""}
- EndpointContainerDelete = APIEndpoint{"DELETE", "arvados/v1/containers/:uuid", ""}
- EndpointContainerLock = APIEndpoint{"POST", "arvados/v1/containers/:uuid/lock", ""}
- EndpointContainerUnlock = APIEndpoint{"POST", "arvados/v1/containers/:uuid/unlock", ""}
+ EndpointContainerDelete = APIEndpoint{"DELETE", "arvados/v1/containers/{uuid}", ""}
+ EndpointContainerLock = APIEndpoint{"POST", "arvados/v1/containers/{uuid}/lock", ""}
+ EndpointContainerUnlock = APIEndpoint{"POST", "arvados/v1/containers/{uuid}/unlock", ""}
+ EndpointUserActivate = APIEndpoint{"POST", "arvados/v1/users/{uuid}/activate", ""}
+ EndpointUserCreate = APIEndpoint{"POST", "arvados/v1/users", "user"}
+ EndpointUserCurrent = APIEndpoint{"GET", "arvados/v1/users/current", ""}
+ EndpointUserDelete = APIEndpoint{"DELETE", "arvados/v1/users/{uuid}", ""}
+ EndpointUserGet = APIEndpoint{"GET", "arvados/v1/users/{uuid}", ""}
+ EndpointUserGetCurrent = APIEndpoint{"GET", "arvados/v1/users/current", ""}
+ EndpointUserGetSystem = APIEndpoint{"GET", "arvados/v1/users/system", ""}
+ EndpointUserList = APIEndpoint{"GET", "arvados/v1/users", ""}
+ EndpointUserMerge = APIEndpoint{"POST", "arvados/v1/users/merge", ""}
+ EndpointUserSetup = APIEndpoint{"POST", "arvados/v1/users/setup", "user"}
+ EndpointUserSystem = APIEndpoint{"GET", "arvados/v1/users/system", ""}
+ EndpointUserUnsetup = APIEndpoint{"POST", "arvados/v1/users/{uuid}/unsetup", ""}
+ EndpointUserUpdate = APIEndpoint{"PATCH", "arvados/v1/users/{uuid}", "user"}
+ EndpointUserUpdateUUID = APIEndpoint{"POST", "arvados/v1/users/{uuid}/update_uuid", ""}
+ EndpointUserBatchUpdate = APIEndpoint{"PATCH", "arvados/v1/users/batch", ""}
EndpointAPIClientAuthorizationCurrent = APIEndpoint{"GET", "arvados/v1/api_client_authorizations/current", ""}
)
UUID string `json:"uuid"`
Select []string `json:"select"`
IncludeTrash bool `json:"include_trash"`
+ ForwardedFor string `json:"forwarded_for"`
+ Remote string `json:"remote"`
}
type UntrashOptions struct {
Attrs map[string]interface{} `json:"attrs"`
}
+type UpdateUUIDOptions struct {
+ UUID string `json:"uuid"`
+ NewUUID string `json:"new_uuid"`
+}
+
+type UserActivateOptions struct {
+ UUID string `json:"uuid"`
+}
+
+type UserSetupOptions struct {
+ UUID string `json:"uuid,omitempty"`
+ Email string `json:"email,omitempty"`
+ OpenIDPrefix string `json:"openid_prefix,omitempty"`
+ RepoName string `json:"repo_name,omitempty"`
+ VMUUID string `json:"vm_uuid,omitempty"`
+ SendNotificationEmail bool `json:"send_notification_email,omitempty"`
+ Attrs map[string]interface{} `json:"attrs"`
+}
+
+type UserMergeOptions struct {
+ NewUserUUID string `json:"new_user_uuid,omitempty"`
+ OldUserUUID string `json:"old_user_uuid,omitempty"`
+ NewOwnerUUID string `json:"new_owner_uuid,omitempty"`
+ NewUserToken string `json:"new_user_token,omitempty"`
+ RedirectToNewUser bool `json:"redirect_to_new_user"`
+}
+
+type UserBatchUpdateOptions struct {
+ Updates map[string]map[string]interface{} `json:"updates"`
+}
+
+type UserBatchUpdateResponse struct{}
+
type DeleteOptions struct {
UUID string `json:"uuid"`
}
State string `json:"state,omitempty"` // OAuth2 callback state
}
+type LogoutOptions struct {
+ ReturnTo string `json:"return_to"` // Redirect to this URL after logging out
+}
+
type API interface {
ConfigGet(ctx context.Context) (json.RawMessage, error)
Login(ctx context.Context, options LoginOptions) (LoginResponse, error)
+ Logout(ctx context.Context, options LogoutOptions) (LogoutResponse, error)
CollectionCreate(ctx context.Context, options CreateOptions) (Collection, error)
CollectionUpdate(ctx context.Context, options UpdateOptions) (Collection, error)
CollectionGet(ctx context.Context, options GetOptions) (Collection, error)
SpecimenGet(ctx context.Context, options GetOptions) (Specimen, error)
SpecimenList(ctx context.Context, options ListOptions) (SpecimenList, error)
SpecimenDelete(ctx context.Context, options DeleteOptions) (Specimen, error)
+ UserCreate(ctx context.Context, options CreateOptions) (User, error)
+ UserUpdate(ctx context.Context, options UpdateOptions) (User, error)
+ UserUpdateUUID(ctx context.Context, options UpdateUUIDOptions) (User, error)
+ UserMerge(ctx context.Context, options UserMergeOptions) (User, error)
+ UserActivate(ctx context.Context, options UserActivateOptions) (User, error)
+ UserSetup(ctx context.Context, options UserSetupOptions) (map[string]interface{}, error)
+ UserUnsetup(ctx context.Context, options GetOptions) (User, error)
+ UserGet(ctx context.Context, options GetOptions) (User, error)
+ UserGetCurrent(ctx context.Context, options GetOptions) (User, error)
+ UserGetSystem(ctx context.Context, options GetOptions) (User, error)
+ UserList(ctx context.Context, options ListOptions) (UserList, error)
+ UserDelete(ctx context.Context, options DeleteOptions) (User, error)
+ UserBatchUpdate(context.Context, UserBatchUpdateOptions) (UserList, error)
APIClientAuthorizationCurrent(ctx context.Context, options GetOptions) (APIClientAuthorization, error)
}
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
)
// A Client is an HTTP client with an API endpoint and a set of
}
if urlValues == nil {
// Nothing to send
- } else if method == "GET" || method == "HEAD" || body != nil {
- // Must send params in query part of URL (FIXME: what
- // if resulting URL is too long?)
+ } else if body != nil || ((method == "GET" || method == "HEAD") && len(urlValues.Encode()) < 1000) {
+ // Send params in query part of URL
u, err := url.Parse(urlString)
if err != nil {
return err
if err != nil {
return err
}
+ if (method == "GET" || method == "HEAD") && body != nil {
+ req.Header.Set("X-Http-Method-Override", method)
+ req.Method = "POST"
+ }
req = req.WithContext(ctx)
req.Header.Set("Content-type", "application/x-www-form-urlencoded")
for k, v := range c.SendHeader {
import (
"bufio"
+ "crypto/md5"
"fmt"
+ "regexp"
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/blockdigest"
+ "git.arvados.org/arvados.git/sdk/go/blockdigest"
)
// Collection is an arvados#collection resource.
ManifestText string `json:"manifest_text"`
UnsignedManifestText string `json:"unsigned_manifest_text"`
Name string `json:"name"`
- CreatedAt *time.Time `json:"created_at"`
- ModifiedAt *time.Time `json:"modified_at"`
+ CreatedAt time.Time `json:"created_at"`
+ ModifiedAt time.Time `json:"modified_at"`
+ ModifiedByClientUUID string `json:"modified_by_client_uuid"`
+ ModifiedByUserUUID string `json:"modified_by_user_uuid"`
PortableDataHash string `json:"portable_data_hash"`
ReplicationConfirmed *int `json:"replication_confirmed"`
ReplicationConfirmedAt *time.Time `json:"replication_confirmed_at"`
DeleteAt *time.Time `json:"delete_at"`
IsTrashed bool `json:"is_trashed"`
Properties map[string]interface{} `json:"properties"`
+ WritableBy []string `json:"writable_by,omitempty"`
+ FileCount int `json:"file_count"`
+ FileSizeTotal int64 `json:"file_size_total"`
+ Version int `json:"version"`
+ PreserveVersion bool `json:"preserve_version"`
+ CurrentVersionUUID string `json:"current_version_uuid"`
+ Description string `json:"description"`
}
func (c Collection) resourceName() string {
Offset int `json:"offset"`
Limit int `json:"limit"`
}
+
+var (
+ blkRe = regexp.MustCompile(`^ [0-9a-f]{32}\+\d+`)
+ tokRe = regexp.MustCompile(` ?[^ ]*`)
+)
+
+// PortableDataHash computes the portable data hash of the given
+// manifest.
+func PortableDataHash(mt string) string {
+ h := md5.New()
+ size := 0
+ _ = tokRe.ReplaceAllFunc([]byte(mt), func(tok []byte) []byte {
+ if m := blkRe.Find(tok); m != nil {
+ // write hash+size, ignore remaining block hints
+ tok = m
+ }
+ n, err := h.Write(tok)
+ if err != nil {
+ panic(err)
+ }
+ size += n
+ return nil
+ })
+ return fmt.Sprintf("%x+%d", h.Sum(nil), size)
+}
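The new PortableDataHash helper md5-sums the manifest token by token, truncating each block locator to its hash+size so trailing hints (such as permission signatures) do not affect the result. A self-contained sketch of the same token-rewriting approach (the signature hint below is made up):

```go
package main

import (
	"crypto/md5"
	"fmt"
	"regexp"
)

var (
	blockRe = regexp.MustCompile(`^ [0-9a-f]{32}\+\d+`)
	tokenRe = regexp.MustCompile(` ?[^ ]*`)
)

// portableDataHash hashes the manifest one space-separated token at a
// time, keeping only hash+size from each block locator.
func portableDataHash(mt string) string {
	h := md5.New()
	size := 0
	tokenRe.ReplaceAllFunc([]byte(mt), func(tok []byte) []byte {
		if m := blockRe.Find(tok); m != nil {
			tok = m // keep hash+size, drop trailing block hints
		}
		n, _ := h.Write(tok) // md5 Write never returns an error
		size += n
		return nil
	})
	return fmt.Sprintf("%x+%d", h.Sum(nil), size)
}

func main() {
	// Signed and unsigned manifests yield the same portable data hash.
	signed := ". d41d8cd98f00b204e9800998ecf8427e+0+Adeadbeef@ffffffff 0:0:empty.txt\n"
	bare := ". d41d8cd98f00b204e9800998ecf8427e+0 0:0:empty.txt\n"
	fmt.Println(portableDataHash(signed) == portableDataHash(bare)) // true
}
```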
"net/url"
"os"
- "git.curoverse.com/arvados.git/sdk/go/config"
+ "git.arvados.org/arvados.git/sdk/go/config"
)
var DefaultConfigFile = func() string {
Function string
Protected bool
}
- PreserveVersionIfIdle Duration
- TrashSweepInterval Duration
- TrustAllContent bool
+ PreserveVersionIfIdle Duration
+ TrashSweepInterval Duration
+ TrustAllContent bool
+ ForwardSlashNameSubstitution string
BlobMissingReport string
BalancePeriod Duration
Repositories string
}
Login struct {
- GoogleClientID string
- GoogleClientSecret string
- ProviderAppID string
- ProviderAppSecret string
- LoginCluster string
- RemoteTokenRefresh Duration
+ GoogleClientID string
+ GoogleClientSecret string
+ GoogleAlternateEmailAddresses bool
+ ProviderAppID string
+ ProviderAppSecret string
+ LoginCluster string
+ RemoteTokenRefresh Duration
}
Mail struct {
MailchimpAPIKey string
NewUsersAreActive bool
UserNotifierEmailFrom string
UserProfileNotificationAddress string
+ PreferDomainForUsername string
}
Volumes map[string]Volume
Workbench struct {
VocabularyURL string
WelcomePageHTML string
InactivePageHTML string
+ SSHHelpPageHTML string
+ SSHHelpHostSuffix string
}
- EnableBetaController14287 bool
+ ForceLegacyAPI14 bool
}
type Volume struct {
Logging struct {
MaxAge Duration
LogBytesPerEvent int
- LogSecondsBetweenEvents int
+ LogSecondsBetweenEvents Duration
LogThrottlePeriod Duration
LogThrottleBytes int
LogThrottleLines int
Enable bool
BootProbeCommand string
+ DeployRunnerBinary string
ImageID string
MaxCloudOpsPerSecond int
MaxProbesPerSecond int
// Container is an arvados#container resource.
type Container struct {
UUID string `json:"uuid"`
+ Etag string `json:"etag"`
CreatedAt time.Time `json:"created_at"`
+ ModifiedByClientUUID string `json:"modified_by_client_uuid"`
+ ModifiedByUserUUID string `json:"modified_by_user_uuid"`
+ ModifiedAt time.Time `json:"modified_at"`
Command []string `json:"command"`
ContainerImage string `json:"container_image"`
Cwd string `json:"cwd"`
type ContainerRequestState string
const (
- ContainerRequestStateUncomitted = ContainerState("Uncommitted")
- ContainerRequestStateCommitted = ContainerState("Committed")
- ContainerRequestStateFinal = ContainerState("Final")
+ ContainerRequestStateUncomitted = ContainerRequestState("Uncommitted")
+ ContainerRequestStateCommitted = ContainerRequestState("Committed")
+ ContainerRequestStateFinal = ContainerRequestState("Final")
)
"strconv"
"strings"
"sync"
+ "sync/atomic"
"time"
)
// FileSystem returns a CollectionFileSystem for the collection.
func (c *Collection) FileSystem(client apiClient, kc keepClient) (CollectionFileSystem, error) {
- var modTime time.Time
- if c.ModifiedAt == nil {
+ modTime := c.ModifiedAt
+ if modTime.IsZero() {
modTime = time.Now()
- } else {
- modTime = *c.ModifiedAt
}
fs := &collectionFileSystem{
uuid: c.UUID,
// A new seg.buf has been allocated.
return
}
- seg.flushing = nil
if err != nil {
// TODO: stall (or return errors from)
// subsequent writes until flushing
offsets := make([]int, 0, len(refs)) // location of segment's data within block
for _, ref := range refs {
seg := ref.fn.segments[ref.idx].(*memSegment)
- if seg.flushing != nil && !sync {
+ if !sync && seg.flushingUnfinished() {
// Let the other flushing goroutine finish. If
// it fails, we'll try again next time.
+ close(done)
return nil
} else {
// In sync mode, we proceed regardless of
}
segs = append(segs, seg)
}
+ blocksize := len(block)
dn.fs.throttle().Acquire()
errs := make(chan error, 1)
go func() {
defer close(done)
defer close(errs)
- locked := map[*filenode]bool{}
locator, _, err := dn.fs.PutB(block)
dn.fs.throttle().Release()
- {
- if !sync {
- for _, name := range dn.sortedNames() {
- if fn, ok := dn.inodes[name].(*filenode); ok {
- fn.Lock()
- defer fn.Unlock()
- locked[fn] = true
- }
- }
- }
- defer func() {
- for _, seg := range segs {
- if seg.flushing == done {
- seg.flushing = nil
- }
- }
- }()
- }
if err != nil {
errs <- err
return
}
for idx, ref := range refs {
if !sync {
+ ref.fn.Lock()
// In async mode, fn's lock was
// released while we were waiting for
// PutB(); lots of things might have
// file segments have
// rearranged or changed in
// some way
+ ref.fn.Unlock()
continue
} else if seg, ok := ref.fn.segments[ref.idx].(*memSegment); !ok || seg != segs[idx] {
// segment has been replaced
+ ref.fn.Unlock()
continue
} else if seg.flushing != done {
// seg.buf has been replaced
- continue
- } else if !locked[ref.fn] {
- // file was renamed, moved, or
- // deleted since we called
- // PutB
+ ref.fn.Unlock()
continue
}
}
ref.fn.segments[ref.idx] = storedSegment{
kc: dn.fs,
locator: locator,
- size: len(block),
+ size: blocksize,
offset: offsets[idx],
length: len(data),
}
- ref.fn.memsize -= int64(len(data))
+ // atomic is needed here despite caller having
+ // lock: caller might be running concurrent
+ // commitBlock() goroutines using the same
+ // lock, writing different segments from the
+ // same file.
+ atomic.AddInt64(&ref.fn.memsize, -int64(len(data)))
+ if !sync {
+ ref.fn.Unlock()
+ }
}
}()
if sync {
type memSegment struct {
buf []byte
- // If flushing is not nil, then a) buf is being shared by a
- // pruneMemSegments goroutine, and must be copied on write;
- // and b) the flushing channel will close when the goroutine
- // finishes, whether it succeeds or not.
+ // If flushing is not nil and not ready/closed, then a) buf is
+ // being shared by a pruneMemSegments goroutine, and must be
+ // copied on write; and b) the flushing channel will close
+ // when the goroutine finishes, whether it succeeds or not.
flushing <-chan struct{}
}
+func (me *memSegment) flushingUnfinished() bool {
+ if me.flushing == nil {
+ return false
+ }
+ select {
+ case <-me.flushing:
+ me.flushing = nil
+ return false
+ default:
+ return true
+ }
+}
+
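flushingUnfinished relies on a non-blocking select: receiving from a closed channel succeeds immediately, while the default case fires when the flush goroutine is still running. A standalone sketch of that pattern, with a simplified stand-in type:

```go
package main

import "fmt"

// flusher models a memSegment-style background flush: a nil channel means
// no flush was started; a closed channel means the flush finished.
type flusher struct {
	flushing <-chan struct{}
}

// unfinished reports whether a flush is still in progress, without
// blocking, and clears the channel once the flush is done.
func (f *flusher) unfinished() bool {
	if f.flushing == nil {
		return false
	}
	select {
	case <-f.flushing:
		f.flushing = nil // flush done; buf may be written again
		return false
	default:
		return true
	}
}

func main() {
	done := make(chan struct{})
	f := &flusher{flushing: done}
	fmt.Println(f.unfinished()) // true: flush still running
	close(done)
	fmt.Println(f.unfinished()) // false: flush finished
}
```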
func (me *memSegment) Len() int {
return len(me.buf)
}
"os"
"regexp"
"runtime"
+ "runtime/pprof"
"strings"
"sync"
"sync/atomic"
fs.Flush("", true)
}
+func (s *CollectionFSSuite) TestFlushStress(c *check.C) {
+ done := false
+ defer func() { done = true }()
+ time.AfterFunc(10*time.Second, func() {
+ if !done {
+ pprof.Lookup("goroutine").WriteTo(os.Stderr, 1)
+ panic("timeout")
+ }
+ })
+
+ wrote := 0
+ s.kc.onPut = func(p []byte) {
+ s.kc.Lock()
+ s.kc.blocks = map[string][]byte{}
+ wrote++
+ defer c.Logf("wrote block %d, %d bytes", wrote, len(p))
+ s.kc.Unlock()
+ time.Sleep(20 * time.Millisecond)
+ }
+
+ fs, err := (&Collection{}).FileSystem(s.client, s.kc)
+ c.Assert(err, check.IsNil)
+
+ data := make([]byte, 1<<20)
+ for i := 0; i < 3; i++ {
+ dir := fmt.Sprintf("dir%d", i)
+ fs.Mkdir(dir, 0755)
+ for j := 0; j < 200; j++ {
+ data[0] = byte(j)
+ f, err := fs.OpenFile(fmt.Sprintf("%s/file%d", dir, j), os.O_WRONLY|os.O_CREATE, 0)
+ c.Assert(err, check.IsNil)
+ _, err = f.Write(data)
+ c.Assert(err, check.IsNil)
+ f.Close()
+ fs.Flush(dir, false)
+ }
+ _, err := fs.MarshalManifest(".")
+ c.Check(err, check.IsNil)
+ }
+}
+
+func (s *CollectionFSSuite) TestFlushShort(c *check.C) {
+ s.kc.onPut = func([]byte) {
+ s.kc.Lock()
+ s.kc.blocks = map[string][]byte{}
+ s.kc.Unlock()
+ }
+ fs, err := (&Collection{}).FileSystem(s.client, s.kc)
+ c.Assert(err, check.IsNil)
+ for _, blocksize := range []int{8, 1000000} {
+ dir := fmt.Sprintf("dir%d", blocksize)
+ err = fs.Mkdir(dir, 0755)
+ c.Assert(err, check.IsNil)
+ data := make([]byte, blocksize)
+ for i := 0; i < 100; i++ {
+ f, err := fs.OpenFile(fmt.Sprintf("%s/file%d", dir, i), os.O_WRONLY|os.O_CREATE, 0)
+ c.Assert(err, check.IsNil)
+ _, err = f.Write(data)
+ c.Assert(err, check.IsNil)
+ f.Close()
+ fs.Flush(dir, false)
+ }
+ fs.Flush(dir, true)
+ _, err := fs.MarshalManifest(".")
+ c.Check(err, check.IsNil)
+ }
+}
+
func (s *CollectionFSSuite) TestBrokenManifests(c *check.C) {
for _, txt := range []string{
"\n",
)
func deferredCollectionFS(fs FileSystem, parent inode, coll Collection) inode {
- var modTime time.Time
- if coll.ModifiedAt != nil {
- modTime = *coll.ModifiedAt
- } else {
+ modTime := coll.ModifiedAt
+ if modTime.IsZero() {
modTime = time.Now()
}
placeholder := &treenode{
}
var contents CollectionList
- err = fs.RequestAndDecode(&contents, "GET", "arvados/v1/groups/"+uuid+"/contents", nil, ResourceListParams{
- Count: "none",
- Filters: []Filter{
- {"name", "=", name},
- {"uuid", "is_a", []string{"arvados#collection", "arvados#group"}},
- {"groups.group_class", "=", "project"},
- },
- })
- if err != nil {
- return nil, err
+ for _, subst := range []string{"/", fs.forwardSlashNameSubstitution} {
+ contents = CollectionList{}
+ err = fs.RequestAndDecode(&contents, "GET", "arvados/v1/groups/"+uuid+"/contents", nil, ResourceListParams{
+ Count: "none",
+ Filters: []Filter{
+ {"name", "=", strings.Replace(name, subst, "/", -1)},
+ {"uuid", "is_a", []string{"arvados#collection", "arvados#group"}},
+ {"groups.group_class", "=", "project"},
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ if len(contents.Items) > 0 || fs.forwardSlashNameSubstitution == "/" || fs.forwardSlashNameSubstitution == "" || !strings.Contains(name, fs.forwardSlashNameSubstitution) {
+ break
+ }
+ // If the requested name contains the configured "/"
+ // replacement string and didn't match a
+ // project/collection exactly, we'll try again with
+ // "/" in its place, so a lookup of a munged name
+ // works regardless of whether the directory listing
+ // has been populated with escaped names.
+ //
+ // Note this doesn't handle items whose names contain
+ // both "/" and the substitution string.
}
if len(contents.Items) == 0 {
return nil, os.ErrNotExist
}
for _, i := range resp.Items {
coll := i
+ if fs.forwardSlashNameSubstitution != "" {
+ coll.Name = strings.Replace(coll.Name, "/", fs.forwardSlashNameSubstitution, -1)
+ }
if !permittedName(coll.Name) {
continue
}
break
}
for _, group := range resp.Items {
+ if fs.forwardSlashNameSubstitution != "" {
+ group.Name = strings.Replace(group.Name, "/", fs.forwardSlashNameSubstitution, -1)
+ }
if !permittedName(group.Name) {
continue
}
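The substitution above is a plain strings.Replace in both directions: collection names gain the replacement string when presented in a directory listing, and lookups restore "/" to match the stored name. A minimal round-trip sketch (the "___" value and helper names are just examples):

```go
package main

import (
	"fmt"
	"strings"
)

const subst = "___" // example ForwardSlashNameSubstitution value

// escapeName makes a name containing "/" safe for a directory listing.
func escapeName(name string) string {
	return strings.Replace(name, "/", subst, -1)
}

// unescapeName restores "/" for an exact-name API lookup.
func unescapeName(name string) string {
	return strings.Replace(name, subst, "/", -1)
}

func main() {
	fmt.Println(escapeName("bad/collection"))     // bad___collection
	fmt.Println(unescapeName("bad___collection")) // bad/collection
}
```

Note, as the comment in the hunk says, this round trip is ambiguous for names that contain both "/" and the substitution string.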
c.Logf("fi.Name() == %q", fi.Name())
c.Check(strings.Contains(fi.Name(), "/"), check.Equals, false)
}
+
+ // Make a new fs (otherwise content will still be cached from
+ // above) and enable "/" replacement string.
+ s.fs = s.client.SiteFileSystem(s.kc)
+ s.fs.ForwardSlashNameSubstitution("___")
+ dir, err = s.fs.Open("/users/active/A Project/bad___collection")
+ if c.Check(err, check.IsNil) {
+ _, err = dir.Readdir(-1)
+ c.Check(err, check.IsNil)
+ }
+ dir, err = s.fs.Open("/users/active/A Project/bad___project")
+ if c.Check(err, check.IsNil) {
+ _, err = dir.Readdir(-1)
+ c.Check(err, check.IsNil)
+ }
}
func (s *SiteFSSuite) TestProjectUpdatedByOther(c *check.C) {
MountByID(mount string)
MountProject(mount, uuid string)
MountUsers(mount string)
+ ForwardSlashNameSubstitution(string)
}
type customFileSystem struct {
staleThreshold time.Time
staleLock sync.Mutex
+
+ forwardSlashNameSubstitution string
}
func (c *Client) CustomFileSystem(kc keepClient) CustomFileSystem {
})
}
+func (fs *customFileSystem) ForwardSlashNameSubstitution(repl string) {
+ fs.forwardSlashNameSubstitution = repl
+}
+
// SiteFileSystem returns a FileSystem that maps collections and other
// Arvados objects onto a filesystem layout.
//
}
func (resp LoginResponse) ServeHTTP(w http.ResponseWriter, req *http.Request) {
+ w.Header().Set("Cache-Control", "no-store")
if resp.RedirectLocation != "" {
w.Header().Set("Location", resp.RedirectLocation)
w.WriteHeader(http.StatusFound)
} else {
+ w.Header().Set("Content-Type", "text/html")
w.Write(resp.HTML.Bytes())
}
}
+
+type LogoutResponse struct {
+ RedirectLocation string
+}
+
+func (resp LogoutResponse) ServeHTTP(w http.ResponseWriter, req *http.Request) {
+ w.Header().Set("Location", resp.RedirectLocation)
+ w.WriteHeader(http.StatusFound)
+}
import "time"
type Specimen struct {
- UUID string `json:"uuid"`
- OwnerUUID string `json:"owner_uuid"`
- CreatedAt time.Time `json:"created_at"`
- ModifiedAt time.Time `json:"modified_at"`
- UpdatedAt time.Time `json:"updated_at"`
- Properties map[string]interface{} `json:"properties"`
+ UUID string `json:"uuid"`
+ OwnerUUID string `json:"owner_uuid"`
+ CreatedAt time.Time `json:"created_at"`
+ ModifiedAt time.Time `json:"modified_at"`
+ ModifiedByClientUUID string `json:"modified_by_client_uuid"`
+ ModifiedByUserUUID string `json:"modified_by_user_uuid"`
+ Properties map[string]interface{} `json:"properties"`
}
type SpecimenList struct {
package arvados
+import "time"
+
// User is an arvados#user record
type User struct {
- UUID string `json:"uuid"`
- IsActive bool `json:"is_active"`
- IsAdmin bool `json:"is_admin"`
- Username string `json:"username"`
- Email string `json:"email"`
+ UUID string `json:"uuid"`
+ Etag string `json:"etag"`
+ IsActive bool `json:"is_active"`
+ IsAdmin bool `json:"is_admin"`
+ Username string `json:"username"`
+ Email string `json:"email"`
+ FullName string `json:"full_name"`
+ FirstName string `json:"first_name"`
+ LastName string `json:"last_name"`
+ IdentityURL string `json:"identity_url"`
+ IsInvited bool `json:"is_invited"`
+ OwnerUUID string `json:"owner_uuid"`
+ CreatedAt time.Time `json:"created_at"`
+ ModifiedAt time.Time `json:"modified_at"`
+ ModifiedByUserUUID string `json:"modified_by_user_uuid"`
+ ModifiedByClientUUID string `json:"modified_by_client_uuid"`
+ Prefs map[string]interface{} `json:"prefs"`
+ WritableBy []string `json:"writable_by,omitempty"`
}
// UserList is an arvados#userList resource.
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
type StringMatcher func(string) bool
"os"
"testing"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
. "gopkg.in/check.v1"
)
"runtime"
"sync"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
var ErrStubUnimplemented = errors.New("stub unimplemented")
as.appendCall(as.Login, ctx, options)
return arvados.LoginResponse{}, as.Error
}
+func (as *APIStub) Logout(ctx context.Context, options arvados.LogoutOptions) (arvados.LogoutResponse, error) {
+ as.appendCall(as.Logout, ctx, options)
+ return arvados.LogoutResponse{}, as.Error
+}
func (as *APIStub) CollectionCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Collection, error) {
as.appendCall(as.CollectionCreate, ctx, options)
return arvados.Collection{}, as.Error
as.appendCall(as.SpecimenDelete, ctx, options)
return arvados.Specimen{}, as.Error
}
+func (as *APIStub) UserCreate(ctx context.Context, options arvados.CreateOptions) (arvados.User, error) {
+ as.appendCall(as.UserCreate, ctx, options)
+ return arvados.User{}, as.Error
+}
+func (as *APIStub) UserUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.User, error) {
+ as.appendCall(as.UserUpdate, ctx, options)
+ return arvados.User{}, as.Error
+}
+func (as *APIStub) UserUpdateUUID(ctx context.Context, options arvados.UpdateUUIDOptions) (arvados.User, error) {
+ as.appendCall(as.UserUpdateUUID, ctx, options)
+ return arvados.User{}, as.Error
+}
+func (as *APIStub) UserActivate(ctx context.Context, options arvados.UserActivateOptions) (arvados.User, error) {
+ as.appendCall(as.UserActivate, ctx, options)
+ return arvados.User{}, as.Error
+}
+func (as *APIStub) UserSetup(ctx context.Context, options arvados.UserSetupOptions) (map[string]interface{}, error) {
+ as.appendCall(as.UserSetup, ctx, options)
+ return nil, as.Error
+}
+func (as *APIStub) UserUnsetup(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ as.appendCall(as.UserUnsetup, ctx, options)
+ return arvados.User{}, as.Error
+}
+func (as *APIStub) UserGet(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ as.appendCall(as.UserGet, ctx, options)
+ return arvados.User{}, as.Error
+}
+func (as *APIStub) UserGetCurrent(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ as.appendCall(as.UserGetCurrent, ctx, options)
+ return arvados.User{}, as.Error
+}
+func (as *APIStub) UserGetSystem(ctx context.Context, options arvados.GetOptions) (arvados.User, error) {
+ as.appendCall(as.UserGetSystem, ctx, options)
+ return arvados.User{}, as.Error
+}
+func (as *APIStub) UserList(ctx context.Context, options arvados.ListOptions) (arvados.UserList, error) {
+ as.appendCall(as.UserList, ctx, options)
+ return arvados.UserList{}, as.Error
+}
+func (as *APIStub) UserDelete(ctx context.Context, options arvados.DeleteOptions) (arvados.User, error) {
+ as.appendCall(as.UserDelete, ctx, options)
+ return arvados.User{}, as.Error
+}
+func (as *APIStub) UserMerge(ctx context.Context, options arvados.UserMergeOptions) (arvados.User, error) {
+ as.appendCall(as.UserMerge, ctx, options)
+ return arvados.User{}, as.Error
+}
+func (as *APIStub) UserBatchUpdate(ctx context.Context, options arvados.UserBatchUpdateOptions) (arvados.UserList, error) {
+ as.appendCall(as.UserBatchUpdate, ctx, options)
+ return arvados.UserList{}, as.Error
+}
func (as *APIStub) APIClientAuthorizationCurrent(ctx context.Context, options arvados.GetOptions) (arvados.APIClientAuthorization, error) {
as.appendCall(as.APIClientAuthorizationCurrent, ctx, options)
return arvados.APIClientAuthorization{}, as.Error
defer as.mtx.Unlock()
var calls []APIStubCall
for _, call := range as.calls {
-
if method == nil || (runtime.FuncForPC(reflect.ValueOf(call.Method).Pointer()).Name() ==
runtime.FuncForPC(reflect.ValueOf(method).Pointer()).Name()) {
calls = append(calls, call)
ActiveTokenUUID = "zzzzz-gj3su-077z32aux8dg2s1"
ActiveTokenV2 = "v2/zzzzz-gj3su-077z32aux8dg2s1/3kg6k6lzmp9kj5cpkcoxie963cmvjahbt2fod9zru30k1jqdmi"
AdminToken = "4axaw8zxe0qm22wa6urpp5nskcne8z88cvbupv653y1njyi05h"
+ AdminTokenUUID = "zzzzz-gj3su-027z32aux8dg2s1"
AnonymousToken = "4kg6k6lzmp9kj4cpkcoxie964cmvjahbt4fod9zru44k4jqdmi"
DataManagerToken = "320mkve8qkswstz7ff61glpk3mhgghmg67wmic7elw4z41pke1"
+ SystemRootToken = "systemusertesttoken1234567890aoeuidhtnsqjkxbmwvzpy"
ManagementToken = "jg3ajndnq63sywcd50gbs5dskdc9ckkysb0nsqmfz08nwf17nl"
ActiveUserUUID = "zzzzz-tpzed-xurymjxw79nv3jz"
FederatedActiveUserUUID = "zbbbb-tpzed-xurymjxw79nv3jz"
WorkflowWithDefinitionYAMLUUID = "zzzzz-7fd4e-validworkfloyml"
CollectionReplicationDesired2Confirmed2UUID = "zzzzz-4zz18-434zv1tnnf2rygp"
+
+ ActiveUserCanReadAllUsersLinkUUID = "zzzzz-o0j2j-ctbysaduejxfrs5"
+
+ TrustedWorkbenchAPIClientUUID = "zzzzz-ozdt8-teyxzyd8qllg11h"
+
+ AdminAuthorizedKeysUUID = "zzzzz-fngyi-12nc9ov4osp8nae"
+
+ CrunchstatForRunningJobLogUUID = "zzzzz-57u5n-tmymyrojrbtnxh1"
+
+ IdleNodeUUID = "zzzzz-7ekkf-2z3mc76g2q73aio"
+
+ TestVMUUID = "zzzzz-2x53u-382brsig8rp3064"
+
+ CollectionWithUniqueWordsUUID = "zzzzz-4zz18-mnt690klmb51aud"
)
// PathologicalManifest : A valid manifest designed to test
"net/url"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"gopkg.in/check.v1"
)
"net/http"
"net/url"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// StubResponse struct with response status and body
Tokens []string
}
-func NewCredentials() *Credentials {
- return &Credentials{Tokens: []string{}}
+func NewCredentials(tokens ...string) *Credentials {
+ return &Credentials{Tokens: tokens}
}
func NewContext(ctx context.Context, c *Credentials) context.Context {
return logger
}
+// LogWriter returns an io.Writer that writes to the given log func,
+// which is typically (*check.C).Log().
+func LogWriter(log func(...interface{})) io.Writer {
+ return &logWriter{log}
+}
+
// SetLevel sets the current logging level. See logrus for level
// names.
func SetLevel(level string) {
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
"github.com/sirupsen/logrus"
)
import (
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
. "gopkg.in/check.v1"
)
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/auth"
)
const defaultTimeout = arvados.Duration(2 * time.Second)
sendErr(http.StatusUnauthorized, errUnauthorized)
return
}
- if req.URL.Path != "/_health/all" {
+ if req.URL.Path == "/_health/all" {
+ json.NewEncoder(resp).Encode(agg.ClusterHealth())
+ } else if req.URL.Path == "/_health/ping" {
+ resp.Write(healthyBody)
+ } else {
sendErr(http.StatusNotFound, errNotFound)
return
}
- json.NewEncoder(resp).Encode(agg.ClusterHealth())
if agg.Log != nil {
agg.Log(req, nil)
}
}
func (agg *Aggregator) ClusterHealth() ClusterHealthResponse {
+ agg.setupOnce.Do(agg.setup)
resp := ClusterHealthResponse{
Health: "OK",
Checks: make(map[string]CheckResult),
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
"gopkg.in/check.v1"
)
"net/http"
"time"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/stats"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/stats"
"github.com/sirupsen/logrus"
)
func logResponse(w *responseTimer, req *http.Request, lgr *logrus.Entry) {
if tStart, ok := req.Context().Value(&requestTimeContextKey).(time.Time); ok {
tDone := time.Now()
+ writeTime := w.writeTime
+ if !w.wrote {
+ // Empty response body. Header was sent when
+ // handler exited.
+ writeTime = tDone
+ }
lgr = lgr.WithFields(logrus.Fields{
"timeTotal": stats.Duration(tDone.Sub(tStart)),
- "timeToStatus": stats.Duration(w.writeTime.Sub(tStart)),
- "timeWriteBody": stats.Duration(tDone.Sub(w.writeTime)),
+ "timeToStatus": stats.Duration(writeTime.Sub(tStart)),
+ "timeWriteBody": stats.Duration(tDone.Sub(writeTime)),
})
}
respCode := w.WroteStatus()
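The fix above handles responses with an empty body: the header is only sent when the handler exits, so without the fallback `writeTime` would predate the status write and skew both derived durations. A Python sketch of the corrected bookkeeping (field names follow the log entry above):

```python
def response_timings(t_start, t_done, write_time, wrote_body):
    # If nothing was written to the body, the header went out when the
    # handler exited, so the effective write time is t_done.
    if not wrote_body:
        write_time = t_done
    return {
        "timeTotal": t_done - t_start,
        "timeToStatus": write_time - t_start,
        "timeWriteBody": t_done - write_time,
    }
```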
"testing"
"time"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
)
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/stats"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/stats"
"github.com/gogo/protobuf/jsonpb"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"errors"
"os"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/manifest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/manifest"
)
// ErrNoManifest indicates the given collection has no manifest
"strconv"
"strings"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
check "gopkg.in/check.v1"
)
"syscall"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
)
// ClearCache clears the Keep service discovery cache.
"gopkg.in/check.v1"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
)
func (s *ServerRequiredSuite) TestOverrideDiscovery(c *check.C) {
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/asyncbuf"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/asyncbuf"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
)
// A Keep "block" is 64MB.
"testing"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
. "gopkg.in/check.v1"
)
"os"
"strings"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
)
// Function used to emit debug messages. The easiest way to enable
import (
"errors"
"fmt"
- "git.curoverse.com/arvados.git/sdk/go/blockdigest"
+ "git.arvados.org/arvados.git/sdk/go/blockdigest"
"path"
"regexp"
"sort"
import (
"fmt"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/blockdigest"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/blockdigest"
"io/ioutil"
"reflect"
"regexp"
##### About
The Arvados Java Client provides access to Arvados servers through two APIs:
* lower level [Keep Server API](https://doc.arvados.org/api/index.html)
-* higher level [Keep-Web API](https://godoc.org/github.com/curoverse/arvados/services/keep-web) (when needed)
+* higher level [Keep-Web API](https://godoc.org/github.com/arvados/arvados/services/keep-web) (when needed)
##### Required Java version
This SDK requires Java 8 or later.
packaging 'jar'
groupId 'org.arvados'
description 'Arvados Java SDK'
- url 'https://github.com/curoverse/arvados'
+ url 'https://github.com/arvados/arvados'
scm {
- url 'scm:git@https://github.com/curoverse/arvados.git'
- connection 'scm:git@https://github.com/curoverse/arvados.git'
- developerConnection 'scm:git@https://github.com/curoverse/arvados.git'
+ url 'scm:git@https://github.com/arvados/arvados.git'
+ connection 'scm:git@https://github.com/arvados/arvados.git'
+ developerConnection 'scm:git@https://github.com/arvados/arvados.git'
}
licenses {
BaseApiClient(ConfigProvider config) {
this.config = config;
- client = OkHttpClientFactory.builder()
- .build()
- .create(config.isApiHostInsecure());
+ this.client = OkHttpClientFactory.INSTANCE.create(config.isApiHostInsecure());
}
Request.Builder getRequestBuilder() {
--- /dev/null
+/*
+ * Copyright (C) The Arvados Authors. All rights reserved.
+ *
+ * SPDX-License-Identifier: AGPL-3.0 OR Apache-2.0
+ *
+ */
+
+package org.arvados.client.api.client;
+
+import org.arvados.client.api.model.Link;
+import org.arvados.client.api.model.LinkList;
+import org.arvados.client.config.ConfigProvider;
+
+public class LinksApiClient extends BaseStandardApiClient<Link, LinkList> {
+
+ private static final String RESOURCE = "links";
+
+ public LinksApiClient(ConfigProvider config) {
+ super(config);
+ }
+
+ @Override
+ String getResource() {
+ return RESOURCE;
+ }
+
+ @Override
+ Class<Link> getType() {
+ return Link.class;
+ }
+
+ @Override
+ Class<LinkList> getListType() {
+ return LinkList.class;
+ }
+}
package org.arvados.client.api.client.factory;
+import com.google.common.base.Suppliers;
import okhttp3.OkHttpClient;
import org.arvados.client.exception.ArvadosClientException;
import org.slf4j.Logger;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
+import java.util.function.Supplier;
-public class OkHttpClientFactory {
-
+/**
+ * {@link OkHttpClient} instance factory that builds and configures client instances
+ * sharing a common resource pool: this is the recommended approach to optimizing
+ * resource usage.
+ */
+public final class OkHttpClientFactory {
+ public static final OkHttpClientFactory INSTANCE = new OkHttpClientFactory();
private final Logger log = org.slf4j.LoggerFactory.getLogger(OkHttpClientFactory.class);
+ private final OkHttpClient clientSecure = new OkHttpClient();
+ private final Supplier<OkHttpClient> clientUnsecure =
+ Suppliers.memoize(this::getDefaultClientAcceptingAllCertificates);
+
+ private OkHttpClientFactory() { /* singleton */ }
- OkHttpClientFactory() {
+ public OkHttpClient create(boolean apiHostInsecure) {
+ return apiHostInsecure ? getDefaultUnsecureClient() : getDefaultClient();
}
- public static OkHttpClientFactoryBuilder builder() {
- return new OkHttpClientFactoryBuilder();
+ /**
+ * @return default secure {@link OkHttpClient} with shared resource pool.
+ */
+ public OkHttpClient getDefaultClient() {
+ return clientSecure;
}
- public OkHttpClient create(boolean apiHostInsecure) {
- OkHttpClient.Builder builder = new OkHttpClient.Builder();
- if (apiHostInsecure) {
- trustAllCertificates(builder);
- }
- return builder.build();
+ /**
+ * @return default {@link OkHttpClient} with shared resource pool
+ * that will accept all SSL certificates by default.
+ */
+ public OkHttpClient getDefaultUnsecureClient() {
+ return clientUnsecure.get();
+ }
+
+ /**
+ * @return default {@link OkHttpClient.Builder} with shared resource pool.
+ */
+ public OkHttpClient.Builder getDefaultClientBuilder() {
+ return clientSecure.newBuilder();
+ }
+
+ /**
+ * @return default {@link OkHttpClient.Builder} with shared resource pool
+ * that is preconfigured to accept all SSL certificates.
+ */
+ public OkHttpClient.Builder getDefaultUnsecureClientBuilder() {
+ return clientUnsecure.get().newBuilder();
}
- private void trustAllCertificates(OkHttpClient.Builder builder) {
+ private OkHttpClient getDefaultClientAcceptingAllCertificates() {
log.warn("Creating unsafe OkHttpClient. All SSL certificates will be accepted.");
try {
// Create a trust manager that does not validate certificate chains
- final TrustManager[] trustAllCerts = new TrustManager[] { createX509TrustManager() };
+ final TrustManager[] trustAllCerts = {createX509TrustManager()};
// Install the all-trusting trust manager
SSLContext sslContext = SSLContext.getInstance("SSL");
// Create an ssl socket factory with our all-trusting manager
final SSLSocketFactory sslSocketFactory = sslContext.getSocketFactory();
+ // Create the OkHttpClient.Builder with shared resource pool
+ final OkHttpClient.Builder builder = clientSecure.newBuilder();
builder.sslSocketFactory(sslSocketFactory, (X509TrustManager) trustAllCerts[0]);
builder.hostnameVerifier((hostname, session) -> true);
+ return builder.build();
} catch (NoSuchAlgorithmException | KeyManagementException e) {
throw new ArvadosClientException("Error establishing SSL context", e);
}
private static X509TrustManager createX509TrustManager() {
return new X509TrustManager() {
-
+
@Override
- public void checkClientTrusted(X509Certificate[] chain, String authType) {}
+ public void checkClientTrusted(X509Certificate[] chain, String authType) {
+ }
@Override
- public void checkServerTrusted(X509Certificate[] chain, String authType) {}
+ public void checkServerTrusted(X509Certificate[] chain, String authType) {
+ }
@Override
public X509Certificate[] getAcceptedIssuers() {
- return new X509Certificate[] {};
+ return new X509Certificate[]{};
}
};
}
-
- public static class OkHttpClientFactoryBuilder {
- OkHttpClientFactoryBuilder() {
- }
-
- public OkHttpClientFactory build() {
- return new OkHttpClientFactory();
- }
-
- public String toString() {
- return "OkHttpClientFactory.OkHttpClientFactoryBuilder()";
- }
- }
}
--- /dev/null
+/*
+ * Copyright (C) The Arvados Authors. All rights reserved.
+ *
+ * SPDX-License-Identifier: AGPL-3.0 OR Apache-2.0
+ *
+ */
+
+package org.arvados.client.api.model;
+
+import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+import com.fasterxml.jackson.annotation.JsonInclude;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonPropertyOrder;
+
+
+@JsonInclude(JsonInclude.Include.NON_NULL)
+@JsonIgnoreProperties(ignoreUnknown = true)
+@JsonPropertyOrder({ "name", "head_kind", "head_uuid", "link_class" })
+public class Link extends Item {
+
+ @JsonProperty("name")
+ private String name;
+ @JsonProperty("head_kind")
+ private String headKind;
+ @JsonProperty("head_uuid")
+ private String headUuid;
+ @JsonProperty("link_class")
+ private String linkClass;
+
+ public String getName() {
+ return name;
+ }
+
+ public String getHeadKind() {
+ return headKind;
+ }
+
+ public String getHeadUuid() {
+ return headUuid;
+ }
+
+ public String getLinkClass() {
+ return linkClass;
+ }
+
+ public void setName(String name) {
+ this.name = name;
+ }
+
+ public void setHeadKind(String headKind) {
+ this.headKind = headKind;
+ }
+
+ public void setHeadUuid(String headUuid) {
+ this.headUuid = headUuid;
+ }
+
+ public void setLinkClass(String linkClass) {
+ this.linkClass = linkClass;
+ }
+
+}
--- /dev/null
+/*
+ * Copyright (C) The Arvados Authors. All rights reserved.
+ *
+ * SPDX-License-Identifier: AGPL-3.0 OR Apache-2.0
+ *
+ */
+
+package org.arvados.client.api.model;
+
+import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
+import com.fasterxml.jackson.annotation.JsonInclude;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonPropertyOrder;
+
+import java.util.List;
+
+@JsonInclude(JsonInclude.Include.NON_NULL)
+@JsonIgnoreProperties(ignoreUnknown = true)
+@JsonPropertyOrder({ "items" })
+public class LinkList extends ItemList {
+
+ @JsonProperty("items")
+ private List<Link> items;
+
+ public List<Link> getItems() {
+ return this.items;
+ }
+
+ public void setItems(List<Link> items) {
+ this.items = items;
+ }
+}
NOT_IN,
@JsonProperty("is_a")
- IS_A
+ IS_A,
+
+ @JsonProperty("exists")
+ EXISTS
}
}
return collectionsApiClient.list(listArgument);
}
+ /**
+ * Gets project details by UUID.
+ *
+ * @param projectUuid UUID of the project
+ * @return Group object containing information about the project
+ */
+ public Group getProjectByUuid(String projectUuid) {
+ Group project = groupsApiClient.get(projectUuid);
+ log.debug("Retrieved " + project.getName() + " with UUID: " + project.getUuid());
+ return project;
+ }
+
/**
* Creates a new project that will be a subproject of "home" for the current user.
*
public class FileToken {
- private int filePosition;
+ private long filePosition;
private long fileSize;
private String fileName;
private String path;
private void splitFileTokenInfo(String fileTokenInfo) {
String[] tokenPieces = fileTokenInfo.split(":");
- this.filePosition = Integer.parseInt(tokenPieces[0]);
+ this.filePosition = Long.parseLong(tokenPieces[0]);
this.fileSize = Long.parseLong(tokenPieces[1]);
this.fileName = tokenPieces[2].replace(Characters.SPACE, " ");
}
return Strings.isNullOrEmpty(path) ? fileName : path + fileName;
}
- public int getFilePosition() {
+ public long getFilePosition() {
return this.filePosition;
}
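The widening above matters because a manifest file token has the form `position:size:filename`, and the position of a file inside a large stream can exceed 2^31. A Python sketch of the same parsing (assuming spaces in names are escaped as `\040`, as in the Keep manifest format):

```python
def parse_file_token(token):
    # position:size:filename, with spaces in the name escaped as \040
    position, size, name = token.split(":", 2)
    return int(position), int(size), name.replace("\\040", " ")
```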
public void secureOkHttpClientIsCreated() throws Exception {
// given
- OkHttpClientFactory factory = OkHttpClientFactory.builder().build();
+ OkHttpClientFactory factory = OkHttpClientFactory.INSTANCE;
// * configure HTTPS server
SSLSocketFactory sf = getSSLSocketFactoryWithSelfSignedCertificate();
server.useHttps(sf, false);
@Test
public void insecureOkHttpClientIsCreated() throws Exception {
// given
- OkHttpClientFactory factory = OkHttpClientFactory.builder().build();
+ OkHttpClientFactory factory = OkHttpClientFactory.INSTANCE;
// * configure HTTPS server
SSLSocketFactory sf = getSSLSocketFactoryWithSelfSignedCertificate();
server.useHttps(sf, false);
import os
import re
-def git_latest_tag():
- gittags = subprocess.check_output(['git', 'tag', '-l']).split()
- gittags.sort(key=lambda s: [int(u) for u in s.split(b'.')],reverse=True)
- return str(next(iter(gittags)).decode('utf-8'))
-
-def git_timestamp_tag():
- gitinfo = subprocess.check_output(
- ['git', 'log', '--first-parent', '--max-count=1',
- '--format=format:%ct', '.']).strip()
- return str(time.strftime('.%Y%m%d%H%M%S', time.gmtime(int(gitinfo))))
+def git_version_at_commit():
+ curdir = os.path.dirname(os.path.abspath(__file__))
+ myhash = subprocess.check_output(['git', 'log', '-n1', '--first-parent',
+ '--format=%H', curdir]).strip()
+ myversion = subprocess.check_output([curdir+'/../../build/version-at-commit.sh', myhash]).strip().decode()
+ return myversion
def save_version(setup_dir, module, v):
- with open(os.path.join(setup_dir, module, "_version.py"), 'w') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'wt') as fp:
return fp.write("__version__ = '%s'\n" % v)
def read_version(setup_dir, module):
- with open(os.path.join(setup_dir, module, "_version.py"), 'r') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'rt') as fp:
return re.match("__version__ = '(.*)'$", fp.read()).groups()[0]
def get_version(setup_dir, module):
save_version(setup_dir, module, env_version)
else:
try:
- save_version(setup_dir, module, git_latest_tag() + git_timestamp_tag())
- except subprocess.CalledProcessError:
+ save_version(setup_dir, module, git_version_at_commit())
+ except (subprocess.CalledProcessError, OSError):
pass
return read_version(setup_dir, module)
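The `save_version`/`read_version` pair above round-trips the version string through a one-line `_version.py`, so a package built without git access still knows its version. A self-contained sketch of that round trip:

```python
import os
import re
import tempfile

def save_version(path, v):
    # Write the version as a one-line Python module.
    with open(path, "wt") as fp:
        fp.write("__version__ = '%s'\n" % v)

def read_version(path):
    # Recover the version by matching the line written above; "$" still
    # matches just before the file's trailing newline.
    with open(path, "rt") as fp:
        return re.match("__version__ = '(.*)'$", fp.read()).groups()[0]
```

For example, saving `2.1.0.dev20200101` to a temporary `_version.py` and reading it back returns the same string.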
author='Arvados',
author_email='info@arvados.org',
url='https://arvados.org',
- download_url='https://github.com/curoverse/arvados.git',
+ download_url='https://github.com/arvados/arvados.git',
license='Apache 2.0',
packages=[
'arvados_pam',
],
test_suite='tests',
tests_require=['pbr<1.7.0', 'mock>=1.0', 'python-pam'],
- zip_safe=False
- )
+ zip_safe=False,
+)
svc.api_token = token
svc.insecure = insecure
svc.request_id = request_id
+ svc.config = lambda: util.get_config_once(svc)
kwargs['http'].max_request_size = svc._rootDesc.get('maxRequestSize', 0)
kwargs['http'].cache = None
kwargs['http']._request_id = lambda: svc.request_id or util.new_request_id()
#
# By default, arv-copy recursively copies any dependent objects
# necessary to make the object functional in the new instance
-# (e.g. for a pipeline instance, arv-copy copies the pipeline
-# template, input collection, docker images, git repositories). If
+# (e.g. for a workflow, arv-copy copies the workflow,
+# input collections, and docker images). If
# --no-recursive is given, arv-copy copies only the single record
# identified by object-uuid.
#
copy_opts.add_argument(
'-f', '--force', dest='force', action='store_true',
help='Perform copy even if the object appears to exist at the remote destination.')
- copy_opts.add_argument(
- '--force-filters', action='store_true', default=False,
- help="Copy pipeline template filters verbatim, even if they act differently on the destination cluster.")
copy_opts.add_argument(
'--src', dest='source_arvados', required=True,
help='The name of the source Arvados instance (required) - points at an Arvados config file. May be either a pathname to a config file, or (for example) "foo" as shorthand for $HOME/.config/arvados/foo.conf.')
copy_opts.add_argument(
'--no-recursive', dest='recursive', action='store_false',
help='Do not copy any dependencies. NOTE: if this option is given, the copied object will need to be updated manually in order to be functional.')
- copy_opts.add_argument(
- '--dst-git-repo', dest='dst_git_repo',
- help='The name of the destination git repository. Required when copying a pipeline recursively.')
copy_opts.add_argument(
'--project-uuid', dest='project_uuid',
- help='The UUID of the project at the destination to which the pipeline should be copied.')
- copy_opts.add_argument(
- '--allow-git-http-src', action="store_true",
- help='Allow cloning git repositories over insecure http')
- copy_opts.add_argument(
- '--allow-git-http-dst', action="store_true",
- help='Allow pushing git repositories over insecure http')
+ help='The UUID of the project at the destination to which the collection or workflow should be copied.')
copy_opts.add_argument(
'object_uuid',
copy_opts.set_defaults(recursive=True)
parser = argparse.ArgumentParser(
- description='Copy a pipeline instance, template, workflow, or collection from one Arvados instance to another.',
+ description='Copy a workflow or collection from one Arvados instance to another.',
parents=[copy_opts, arv_cmd.retry_opt])
args = parser.parse_args()
result = copy_collection(args.object_uuid,
src_arv, dst_arv,
args)
- elif t == 'PipelineInstance':
- set_src_owner_uuid(src_arv.pipeline_instances(), args.object_uuid, args)
- result = copy_pipeline_instance(args.object_uuid,
- src_arv, dst_arv,
- args)
- elif t == 'PipelineTemplate':
- set_src_owner_uuid(src_arv.pipeline_templates(), args.object_uuid, args)
- result = copy_pipeline_template(args.object_uuid,
- src_arv, dst_arv, args)
elif t == 'Workflow':
set_src_owner_uuid(src_arv.workflows(), args.object_uuid, args)
result = copy_workflow(args.object_uuid, src_arv, dst_arv, args)
except Exception:
abort('git command is not available. Please ensure git is installed.')
-# copy_pipeline_instance(pi_uuid, src, dst, args)
-#
-# Copies a pipeline instance identified by pi_uuid from src to dst.
-#
-# If the args.recursive option is set:
-# 1. Copies all input collections
-# * For each component in the pipeline, include all collections
-# listed as job dependencies for that component)
-# 2. Copy docker images
-# 3. Copy git repositories
-# 4. Copy the pipeline template
-#
-# The only changes made to the copied pipeline instance are:
-# 1. The original pipeline instance UUID is preserved in
-# the 'properties' hash as 'copied_from_pipeline_instance_uuid'.
-# 2. The pipeline_template_uuid is changed to the new template uuid.
-# 3. The owner_uuid of the instance is changed to the user who
-# copied it.
-#
-def copy_pipeline_instance(pi_uuid, src, dst, args):
- # Fetch the pipeline instance record.
- pi = src.pipeline_instances().get(uuid=pi_uuid).execute(num_retries=args.retries)
-
- if args.recursive:
- check_git_availability()
-
- if not args.dst_git_repo:
- abort('--dst-git-repo is required when copying a pipeline recursively.')
- # Copy the pipeline template and save the copied template.
- if pi.get('pipeline_template_uuid', None):
- pt = copy_pipeline_template(pi['pipeline_template_uuid'],
- src, dst, args)
-
- # Copy input collections, docker images and git repos.
- pi = copy_collections(pi, src, dst, args)
- copy_git_repos(pi, src, dst, args.dst_git_repo, args)
- copy_docker_images(pi, src, dst, args)
-
- # Update the fields of the pipeline instance with the copied
- # pipeline template.
- if pi.get('pipeline_template_uuid', None):
- pi['pipeline_template_uuid'] = pt['uuid']
-
- else:
- # not recursive
- logger.info("Copying only pipeline instance %s.", pi_uuid)
- logger.info("You are responsible for making sure all pipeline dependencies have been updated.")
-
- # Update the pipeline instance properties, and create the new
- # instance at dst.
- pi['properties']['copied_from_pipeline_instance_uuid'] = pi_uuid
- pi['description'] = "Pipeline copied from {}\n\n{}".format(
- pi_uuid,
- pi['description'] if pi.get('description', None) else '')
-
- pi['owner_uuid'] = args.project_uuid
-
- del pi['uuid']
-
- new_pi = dst.pipeline_instances().create(body=pi, ensure_unique_name=True).execute(num_retries=args.retries)
- return new_pi
def filter_iter(arg):
"""Iterate a filter string-or-list.
except exc_types as error:
handler(error)
-def migrate_components_filters(template_components, dst_git_repo):
- """Update template component filters in-place for the destination.
-
- template_components is a dictionary of components in a pipeline template.
- This method walks over each component's filters, and updates them to have
- identical semantics on the destination cluster. It returns a list of
- error strings that describe what filters could not be updated safely.
-
- dst_git_repo is the name of the destination Git repository, which can
- be None if that is not known.
- """
- errors = []
- for cname, cspec in template_components.items():
- def add_error(errmsg):
- errors.append("{}: {}".format(cname, errmsg))
- if not isinstance(cspec, dict):
- add_error("value is not a component definition")
- continue
- src_repository = cspec.get('repository')
- filters = cspec.get('filters', [])
- if not isinstance(filters, list):
- add_error("filters are not a list")
- continue
- for cfilter in filters:
- if not (isinstance(cfilter, list) and (len(cfilter) == 3)):
- add_error("malformed filter {!r}".format(cfilter))
- continue
- if attr_filtered(cfilter, 'repository'):
- with exception_handler(add_error, ValueError):
- migrate_repository_filter(cfilter, src_repository, dst_git_repo)
- if attr_filtered(cfilter, 'script_version'):
- with exception_handler(add_error, ValueError):
- migrate_script_version_filter(cfilter)
- return errors
-
-# copy_pipeline_template(pt_uuid, src, dst, args)
-#
-# Copies a pipeline template identified by pt_uuid from src to dst.
-#
-# If args.recursive is True, also copy any collections, docker
-# images and git repositories that this template references.
-#
-# The owner_uuid of the new template is changed to that of the user
-# who copied the template.
-#
-# Returns the copied pipeline template object.
-#
-def copy_pipeline_template(pt_uuid, src, dst, args):
- # fetch the pipeline template from the source instance
- pt = src.pipeline_templates().get(uuid=pt_uuid).execute(num_retries=args.retries)
-
- if not args.force_filters:
- filter_errors = migrate_components_filters(pt['components'], args.dst_git_repo)
- if filter_errors:
- abort("Template filters cannot be copied safely. Use --force-filters to copy anyway.\n" +
- "\n".join(filter_errors))
-
- if args.recursive:
- check_git_availability()
-
- if not args.dst_git_repo:
- abort('--dst-git-repo is required when copying a pipeline recursively.')
- # Copy input collections, docker images and git repos.
- pt = copy_collections(pt, src, dst, args)
- copy_git_repos(pt, src, dst, args.dst_git_repo, args)
- copy_docker_images(pt, src, dst, args)
-
- pt['description'] = "Pipeline template copied from {}\n\n{}".format(
- pt_uuid,
- pt['description'] if pt.get('description', None) else '')
- pt['name'] = "{} copied from {}".format(pt.get('name', ''), pt_uuid)
- del pt['uuid']
-
- pt['owner_uuid'] = args.project_uuid
-
- return dst.pipeline_templates().create(body=pt, ensure_unique_name=True).execute(num_retries=args.retries)
# copy_workflow(wf_uuid, src, dst, args)
#
return type(obj)(copy_collections(v, src, dst, args) for v in obj)
return obj
-def migrate_jobspec(jobspec, src, dst, dst_repo, args):
- """Copy a job's script to the destination repository, and update its record.
-
- Given a jobspec dictionary, this function finds the referenced script from
- src and copies it to dst and dst_repo. It also updates jobspec in place to
- refer to names on the destination.
- """
- repo = jobspec.get('repository')
- if repo is None:
- return
- # script_version is the "script_version" parameter from the source
- # component or job. If no script_version was supplied in the
- # component or job, it is a mistake in the pipeline, but for the
- # purposes of copying the repository, default to "master".
- script_version = jobspec.get('script_version') or 'master'
- script_key = (repo, script_version)
- if script_key not in scripts_copied:
- copy_git_repo(repo, src, dst, dst_repo, script_version, args)
- scripts_copied.add(script_key)
- jobspec['repository'] = dst_repo
- repo_dir = local_repo_dir[repo]
- for version_key in ['script_version', 'supplied_script_version']:
- if version_key in jobspec:
- jobspec[version_key] = git_rev_parse(jobspec[version_key], repo_dir)
-
-# copy_git_repos(p, src, dst, dst_repo, args)
-#
-# Copies all git repositories referenced by pipeline instance or
-# template 'p' from src to dst.
-#
-# For each component c in the pipeline:
-# * Copy git repositories named in c['repository'] and c['job']['repository'] if present
-# * Rename script versions:
-# * c['script_version']
-# * c['job']['script_version']
-# * c['job']['supplied_script_version']
-# to the commit hashes they resolve to, since any symbolic
-# names (tags, branches) are not preserved in the destination repo.
-#
-# The pipeline object is updated in place with the new repository
-# names. The return value is undefined.
-#
-def copy_git_repos(p, src, dst, dst_repo, args):
- for component in p['components'].values():
- migrate_jobspec(component, src, dst, dst_repo, args)
- if 'job' in component:
- migrate_jobspec(component['job'], src, dst, dst_repo, args)
def total_collection_size(manifest_text):
"""Return the total number of bytes in this collection (excluding
available."""
collection_uuid = c['uuid']
- del c['uuid']
-
- if not c["name"]:
- c['name'] = "copied from " + collection_uuid
+ body = {}
+ for d in ('description', 'manifest_text', 'name', 'portable_data_hash', 'properties'):
+ body[d] = c[d]
- if 'properties' in c:
- del c['properties']
+ if not body["name"]:
+ body['name'] = "copied from " + collection_uuid
- c['owner_uuid'] = args.project_uuid
+ body['owner_uuid'] = args.project_uuid
- dst_collection = dst.collections().create(body=c, ensure_unique_name=True).execute(num_retries=args.retries)
+ dst_collection = dst.collections().create(body=body, ensure_unique_name=True).execute(num_retries=args.retries)
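The change above stops mutating the fetched record and instead builds the create request from an explicit field whitelist, so server-managed attributes such as `uuid` can never leak into the new collection. A sketch of the pattern:

```python
def whitelisted_body(record, fields=("description", "manifest_text", "name",
                                     "portable_data_hash", "properties")):
    # Copy only the whitelisted fields into a fresh request body.
    return {f: record[f] for f in fields}
```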
# Create docker_image_repo+tag and docker_image_hash links
# at the destination.
c = items[0]
if not c:
# See if there is a collection that's in the same project
- # as the root item (usually a pipeline) being copied.
+ # as the root item (usually a workflow) being copied.
for i in items:
if i.get("owner_uuid") == src_owner_uuid and i.get("name"):
c = i
return (git_url, git_config)
-# copy_git_repo(src_git_repo, src, dst, dst_git_repo, script_version, args)
-#
-# Copies commits from git repository 'src_git_repo' on Arvados
-# instance 'src' to 'dst_git_repo' on 'dst'. Both src_git_repo
-# and dst_git_repo are repository names, not UUIDs (i.e. "arvados"
-# or "jsmith")
-#
-# All commits will be copied to a destination branch named for the
-# source repository URL.
-#
-# The destination repository must already exist.
-#
-# The user running this command must be authenticated
-# to both repositories.
-#
-def copy_git_repo(src_git_repo, src, dst, dst_git_repo, script_version, args):
- # Identify the fetch and push URLs for the git repositories.
-
- (src_git_url, src_git_config) = select_git_url(src, src_git_repo, args.retries, args.allow_git_http_src, "--allow-git-http-src")
- (dst_git_url, dst_git_config) = select_git_url(dst, dst_git_repo, args.retries, args.allow_git_http_dst, "--allow-git-http-dst")
-
- logger.debug('src_git_url: {}'.format(src_git_url))
- logger.debug('dst_git_url: {}'.format(dst_git_url))
-
- dst_branch = re.sub(r'\W+', '_', "{}_{}".format(src_git_url, script_version))
-
- # Copy git commits from src repo to dst repo.
- if src_git_repo not in local_repo_dir:
- local_repo_dir[src_git_repo] = tempfile.mkdtemp()
- arvados.util.run_command(
- ["git"] + src_git_config + ["clone", "--bare", src_git_url,
- local_repo_dir[src_git_repo]],
- cwd=os.path.dirname(local_repo_dir[src_git_repo]),
- env={"HOME": os.environ["HOME"],
- "ARVADOS_API_TOKEN": src.api_token,
- "GIT_ASKPASS": "/bin/false"})
- arvados.util.run_command(
- ["git", "remote", "add", "dst", dst_git_url],
- cwd=local_repo_dir[src_git_repo])
- arvados.util.run_command(
- ["git", "branch", dst_branch, script_version],
- cwd=local_repo_dir[src_git_repo])
- arvados.util.run_command(["git"] + dst_git_config + ["push", "dst", dst_branch],
- cwd=local_repo_dir[src_git_repo],
- env={"HOME": os.environ["HOME"],
- "ARVADOS_API_TOKEN": dst.api_token,
- "GIT_ASKPASS": "/bin/false"})
-
-def copy_docker_images(pipeline, src, dst, args):
- """Copy any docker images named in the pipeline components'
- runtime_constraints field from src to dst."""
-
- logger.debug('copy_docker_images: {}'.format(pipeline['uuid']))
- for c_name, c_info in pipeline['components'].items():
- if ('runtime_constraints' in c_info and
- 'docker_image' in c_info['runtime_constraints']):
- copy_docker_image(
- c_info['runtime_constraints']['docker_image'],
- c_info['runtime_constraints'].get('docker_image_tag', 'latest'),
- src, dst, args)
-
-
def copy_docker_image(docker_image, docker_image_tag, src, dst, args):
"""Copy the docker image identified by docker_image and
docker_image_tag from src to dst. Create appropriate
# the second field of the uuid. This function consults the api's
# schema to identify the object class.
#
-# It returns a string such as 'Collection', 'PipelineInstance', etc.
+# It returns a string such as 'Collection', 'Workflow', etc.
#
# Special case: if handed a Keep locator hash, return 'Collection'.
#
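As the comment says, the real implementation consults the API discovery schema; a Python sketch of the Keep-locator special case and the second-field dispatch, with a hypothetical, partial infix-to-class mapping:

```python
import re

# Hypothetical subset of the schema-derived mapping from the UUID infix
# (second dash-separated field) to the object class.
INFIX_TO_CLASS = {"4zz18": "Collection", "7fd4e": "Workflow"}

def uuid_type(identifier):
    # Special case: a bare Keep locator (32-hex digest plus size hint)
    # always denotes a Collection.
    if re.match(r"^[a-f0-9]{32}\+\d+", identifier):
        return "Collection"
    fields = identifier.split("-")
    if len(fields) == 3:
        return INFIX_TO_CLASS.get(fields[1])
    return None
```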
return None
print("(%s) No user listed with same email to migrate %s to %s, will create new user with username '%s'" % (email, old_user_uuid, userhome, username))
if not args.dry_run:
+ oldhomecluster = old_user_uuid[0:5]
+ oldhomearv = clusters[oldhomecluster]
newhomecluster = userhome[0:5]
homearv = clusters[userhome]
user = None
try:
+ olduser = oldhomearv.users().get(uuid=old_user_uuid).execute()
conflicts = homearv.users().list(filters=[["username", "=", username]]).execute()
if conflicts["items"]:
homearv.users().update(uuid=conflicts["items"][0]["uuid"], body={"user": {"username": username+"migrate"}}).execute()
- user = homearv.users().create(body={"user": {"email": email, "username": username}}).execute()
+ user = homearv.users().create(body={"user": {"email": email, "username": username, "is_active": olduser["is_active"]}}).execute()
except arvados.errors.ApiError as e:
print("(%s) Could not create user: %s" % (email, str(e)))
return None
cmd = popen_docker(['inspect', '--format={{.Id}}', image_hash],
stdout=subprocess.PIPE)
try:
- image_id = next(cmd.stdout).decode().strip()
+ image_id = next(cmd.stdout).decode('utf-8').strip()
if image_id.startswith('sha256:'):
return 'v2'
elif ':' not in image_id:
next(list_output) # Ignore the header line
for line in list_output:
words = line.split()
- words = [word.decode() for word in words]
+ words = [word.decode('utf-8') for word in words]
size_index = len(words) - 2
repo, tag, imageid = words[:3]
ctime = ' '.join(words[3:size_index])
try:
image_hash = find_one_image_hash(args.image, args.tag)
except DockerError as error:
- logger.error(error.message)
+ logger.error(str(error))
sys.exit(1)
if not docker_image_compatible(api, image_hash):
if args.name is None:
if image_repo_tag:
- collection_name = 'Docker image {} {}'.format(image_repo_tag, image_hash[0:12])
+ collection_name = 'Docker image {} {}'.format(image_repo_tag.replace("/", " "), image_hash[0:12])
else:
collection_name = 'Docker image {}'.format(image_hash[0:12])
else:
coll_uuid = api.collections().create(
body={"manifest_text": collections[0]['manifest_text'],
"name": collection_name,
- "owner_uuid": parent_project_uuid},
+ "owner_uuid": parent_project_uuid,
+ "properties": {"docker-image-repo-tag": image_repo_tag}},
ensure_unique_name=True
).execute(num_retries=args.retries)['uuid']
put_args + ['--filename', outfile_name, image_file.name], stdout=stdout,
install_sig_handlers=install_sig_handlers).strip()
+ api.collections().update(uuid=coll_uuid, body={"properties": {"docker-image-repo-tag": image_repo_tag}}).execute(num_retries=args.retries)
+
# Read the image metadata and make Arvados links from it.
image_file.seek(0)
image_tar = tarfile.open(fileobj=image_file)
else:
json_filename = raw_image_hash + '/json'
json_file = image_tar.extractfile(image_tar.getmember(json_filename))
- image_metadata = json.loads(json_file.read().decode())
+ image_metadata = json.loads(json_file.read().decode('utf-8'))
json_file.close()
image_tar.close()
link_base = {'head_uuid': coll_uuid, 'properties': {}}
rid += chr(c+ord('a')-10)
n = n // 36
return rid
+
+def get_config_once(svc):
+ if not svc._rootDesc.get('resources').get('configs', False):
+ # Old API server version, no config export endpoint
+ return {}
+ if not hasattr(svc, '_cached_config'):
+ svc._cached_config = svc.configs().get().execute()
+ return svc._cached_config
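The `get_config_once` helper added above memoizes the exported cluster config on the API client object, so repeated callers hit the config endpoint only once. The same memoization pattern in isolation (a standalone sketch with a hypothetical `FakeService` stand-in, not the Arvados SDK client):

```python
class FakeService:
    """Stand-in for an API client object; not the real Arvados SDK class."""
    def __init__(self):
        self.fetches = 0

    def fetch_config(self):
        self.fetches += 1
        return {"ClusterID": "zzzzz"}

def get_config_once(svc):
    # Cache the config on the service object itself, mirroring the
    # hasattr-based memoization in the diff above.
    if not hasattr(svc, '_cached_config'):
        svc._cached_config = svc.fetch_config()
    return svc._cached_config

svc = FakeService()
first = get_config_once(svc)
second = get_config_once(svc)
assert first == second
assert svc.fetches == 1  # second call served from the cache
```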
import os
import re
-def git_latest_tag():
- gittags = subprocess.check_output(['git', 'tag', '-l']).split()
- gittags.sort(key=lambda s: [int(u) for u in s.split(b'.')],reverse=True)
- return str(next(iter(gittags)).decode('utf-8'))
-
-def git_timestamp_tag():
- gitinfo = subprocess.check_output(
- ['git', 'log', '--first-parent', '--max-count=1',
- '--format=format:%ct', '.']).strip()
- return str(time.strftime('.%Y%m%d%H%M%S', time.gmtime(int(gitinfo))))
+def git_version_at_commit():
+ curdir = os.path.dirname(os.path.abspath(__file__))
+ myhash = subprocess.check_output(['git', 'log', '-n1', '--first-parent',
+ '--format=%H', curdir]).strip()
+ myversion = subprocess.check_output([curdir+'/../../build/version-at-commit.sh', myhash]).strip().decode()
+ return myversion
def save_version(setup_dir, module, v):
- with open(os.path.join(setup_dir, module, "_version.py"), 'w') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'wt') as fp:
return fp.write("__version__ = '%s'\n" % v)
def read_version(setup_dir, module):
- with open(os.path.join(setup_dir, module, "_version.py"), 'r') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'rt') as fp:
return re.match("__version__ = '(.*)'$", fp.read()).groups()[0]
def get_version(setup_dir, module):
save_version(setup_dir, module, env_version)
else:
try:
- save_version(setup_dir, module, git_latest_tag() + git_timestamp_tag())
+ save_version(setup_dir, module, git_version_at_commit())
except (subprocess.CalledProcessError, OSError):
pass
author='Arvados',
author_email='info@arvados.org',
url="https://arvados.org",
- download_url="https://github.com/curoverse/arvados.git",
+ download_url="https://github.com/arvados/arvados.git",
license='Apache 2.0',
packages=find_packages(),
scripts=[
'google-api-python-client >=1.6.2, <1.7',
'httplib2 >=0.9.2',
'pycurl >=7.19.5.1',
- 'ruamel.yaml >=0.15.54, <=0.15.77',
+ 'ruamel.yaml >=0.15.54, <=0.16.5',
'setuptools',
'ws4py >=0.4.2',
],
'Programming Language :: Python :: 3',
],
test_suite='tests',
- tests_require=['pbr<1.7.0', 'mock>=1.0', 'PyYAML'],
+ tests_require=['pbr<1.7.0', 'mock>=1.0,<4', 'PyYAML'],
zip_safe=False
)
# Python 2 writes version info on stderr.
self.assertEqual(out.getvalue(), '')
v = err.getvalue()
- self.assertRegex(v, r"[0-9]+\.[0-9]+\.[0-9]+$\n")
+ self.assertRegex(v, r"[0-9]+\.[0-9]+\.[0-9]+(\.dev[0-9]+)?$\n")
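The loosened assertion above accepts both plain release versions and the `.devNNN` suffix produced by `version-at-commit.sh`. A quick standalone check of what the widened pattern does and does not accept (illustrative strings, not project output):

```python
import re

pattern = r"[0-9]+\.[0-9]+\.[0-9]+(\.dev[0-9]+)?$"

# Release builds report e.g. "1.5.0"; development builds append ".devNNN".
assert re.search(pattern, "1.5.0")
assert re.search(pattern, "1.5.0.dev20200114202620")
# Other suffixes still fail the check.
assert not re.search(pattern, "1.5.0-rc1")
```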
class FakeCurl(object):
--- /dev/null
+cwlVersion: v1.0
+class: Workflow
+requirements:
+ ScatterFeatureRequirement: {}
+inputs:
+ exfiles:
+ type: string[]
+ default:
+ - fed-migrate.cwlex
+ - run-test.cwlex
+ dir:
+ type: Directory
+ default:
+ class: Directory
+ location: .
+outputs:
+ out:
+ type: File[]
+ outputSource: step1/converted
+
+steps:
+ step1:
+ in:
+ inpdir: dir
+ inpfile: exfiles
+ out: [converted]
+ scatter: inpfile
+ run: cwlex.cwl
$namespaces:
arv: "http://arvados.org/cwl#"
cwltool: "http://commonwl.org/cwltool#"
+
inputs:
arvbox_base: Directory
+ branch:
+ type: string
+ default: master
outputs:
arvados_api_hosts:
type: string[]
arvbox_bin:
type: File
outputSource: start/arvbox_bin
+ refspec:
+ type: string
+ outputSource: branch
requirements:
SubworkflowFeatureRequirement: {}
+ ScatterFeatureRequirement: {}
+ StepInputExpressionRequirement: {}
cwltool:LoadListingRequirement:
loadListing: no_listing
steps:
start:
in:
arvbox_base: arvbox_base
+ branch: branch
+ logincluster:
+ default: true
out: [arvados_api_hosts, arvados_cluster_ids, arvado_api_host_insecure, superuser_tokens, arvbox_containers, arvbox_bin]
run: ../../../cwl/tests/federation/arvbox-make-federation.cwl
apiB = arvados.api(host=j["arvados_api_hosts"][1], token=j["superuser_tokens"][1], insecure=True)
apiC = arvados.api(host=j["arvados_api_hosts"][2], token=j["superuser_tokens"][2], insecure=True)
+###
+### Check users on API server "A" (the LoginCluster) ###
+###
+
users = apiA.users().list().execute()
assert len(users["items"]) == 11
by_username[u["username"]] = u["uuid"]
assert found
+# Should be active
+for i in (1, 2, 3, 4, 5, 6, 7, 8):
+ found = False
+ for u in users["items"]:
+ if u["username"] == ("case%d" % i) and u["email"] == ("case%d@test" % i) and u["is_active"] is True:
+ found = True
+ assert found, "Not found case%i" % i
+
+# case9 should not be active
found = False
for u in users["items"]:
if (u["username"] == "case9" and u["email"] == "case9@test" and
found = True
assert found
+
+###
+### Check users on API server "B" (federation member) ###
+###
users = apiB.users().list().execute()
assert len(users["items"]) == 11
-for i in range(2, 10):
+for i in range(2, 9):
found = False
for u in users["items"]:
- if u["username"] == ("case%d" % i) and u["email"] == ("case%d@test" % i) and u["uuid"] == by_username[u["username"]]:
+ if (u["username"] == ("case%d" % i) and u["email"] == ("case%d@test" % i) and
+ u["uuid"] == by_username[u["username"]] and u["is_active"] is True):
found = True
- assert found
+ assert found, "Not found case%i" % i
+
+found = False
+for u in users["items"]:
+ if (u["username"] == "case9" and u["email"] == "case9@test" and
+ u["uuid"] == by_username[u["username"]] and u["is_active"] is False):
+ found = True
+assert found
+
+###
+### Check users on API server "C" (federation member) ###
+###
users = apiC.users().list().execute()
assert len(users["items"]) == 8
for i in (2, 4, 6, 7, 8):
found = False
for u in users["items"]:
- if u["username"] == ("case%d" % i) and u["email"] == ("case%d@test" % i) and u["uuid"] == by_username[u["username"]]:
+ if (u["username"] == ("case%d" % i) and u["email"] == ("case%d@test" % i) and
+ u["uuid"] == by_username[u["username"]] and u["is_active"] is True):
found = True
assert found
for i in (3, 5, 9):
found = False
for u in users["items"]:
- if u["username"] == ("case%d" % i) and u["email"] == ("case%d@test" % i) and u["uuid"] == by_username[u["username"]]:
+ if (u["username"] == ("case%d" % i) and u["email"] == ("case%d@test" % i) and
+ u["uuid"] == by_username[u["username"]] and u["is_active"] is True):
found = True
assert not found
--- /dev/null
+#!/usr/bin/env cwl-runner
+arguments:
+ - cwlex
+ - '$(inputs.inp ? inputs.inp.path : inputs.inpdir.path+''/''+inputs.inpfile)'
+class: CommandLineTool
+cwlVersion: v1.0
+id: '#main'
+inputs:
+ - id: inp
+ type:
+ - 'null'
+ - File
+ - id: inpdir
+ type:
+ - 'null'
+ - Directory
+ - id: inpfile
+ type:
+ - 'null'
+ - string
+ - id: outname
+ type:
+ - 'null'
+ - string
+outputs:
+ - id: converted
+ outputBinding:
+ glob: $(outname(inputs))
+ type: File
+requirements:
+ - class: DockerRequirement
+ dockerPull: commonworkflowlanguage/cwlex
+ - class: InlineJavascriptRequirement
+ expressionLib:
+ - |
+
+ function outname(inputs) {
+ return inputs.outname ? inputs.outname : (inputs.inp ? inputs.inp.nameroot+'.cwl' : inputs.inpfile.replace(/(.*).cwlex/, '$1.cwl'));
+ }
+stdout: $(outname(inputs))
+
type: string
- id: arvbox_bin
type: File
- - default: 15531-logincluster-migrate
+ - default: master
id: refspec
type: string
outputs:
type: string
outputs:
- id: supertok
- outputSource: superuser_tok_3/superuser_token
+ outputSource: superuser_tok_2/superuser_token
type: string
requirements:
- - class: EnvVarRequirement
- envDef:
- ARVBOX_CONTAINER: $(inputs.container)
+ InlineJavascriptRequirement: {}
steps:
- id: main_2_embed_1
- in:
- cluster_id:
- source: cluster_id
- container:
- source: container
- logincluster:
- source: logincluster
- set_login:
- default:
- class: File
- location: set_login.py
- out:
- - c
- run:
- arguments:
- - sh
- - _script
- class: CommandLineTool
- id: main_2_embed_1_embed
- inputs:
- - id: container
- type: string
- - id: cluster_id
- type: string
- - id: logincluster
- type: string
- - id: set_login
- type: File
- outputs:
- - id: c
- outputBinding:
- outputEval: $(inputs.container)
- type: string
- requirements:
- InitialWorkDirRequirement:
- listing:
- - entry: >
- set -x
-
- docker cp
- $(inputs.container):/var/lib/arvados/cluster_config.yml.override
- .
-
- chmod +w cluster_config.yml.override
-
- python $(inputs.set_login.path)
- cluster_config.yml.override $(inputs.cluster_id)
- $(inputs.logincluster)
-
- docker cp cluster_config.yml.override
- $(inputs.container):/var/lib/arvados
- entryname: _script
- InlineJavascriptRequirement: {}
- - id: main_2_embed_2
in:
arvbox_bin:
source: arvbox_bin
- c:
- source: main_2_embed_1/c
container:
source: container
host:
- sh
- _script
class: CommandLineTool
- id: main_2_embed_2_embed
+ id: main_2_embed_1_embed
inputs:
- id: container
type: string
type: string
- id: arvbox_bin
type: File
- - id: c
- type: string
- id: refspec
type: string
outputs:
- id: d
outputBinding:
- outputEval: $(inputs.c)
+ outputEval: $(inputs.container)
type: string
requirements:
InitialWorkDirRequirement:
listing:
- - entry: >
+ - entry: >+
set -xe
+ export ARVBOX_CONTAINER="$(inputs.container)"
+
$(inputs.arvbox_bin.path) pipe <<EOF
cd /usr/src/arvados
https://$(inputs.host)/discovery/v1/apis/arvados/v1/rest
>/dev/null ; do sleep 3 ; done
- export ARVADOS_API_HOST=$(inputs.host)
-
- export ARVADOS_API_TOKEN=\$($(inputs.arvbox_bin.path)
- cat /var/lib/arvados/superuser_token)
-
- export ARVADOS_API_HOST_INSECURE=1
ARVADOS_VIRTUAL_MACHINE_UUID=\$($(inputs.arvbox_bin.path)
cat /var/lib/arvados/vm-uuid)
- while ! python -c "import arvados ;
- arvados.api().virtual_machines().get(uuid='$ARVADOS_VIRTUAL_MACHINE_UUID').execute()"
- 2>/dev/null ; do sleep 3; done
+ ARVADOS_API_TOKEN=\$($(inputs.arvbox_bin.path) cat
+ /var/lib/arvados/superuser_token)
+
+ while ! curl --fail --insecure --silent -H
+ "Authorization: Bearer $ARVADOS_API_TOKEN"
+ https://$(inputs.host)/arvados/v1/virtual_machines/$ARVADOS_VIRTUAL_MACHINE_UUID
+ >/dev/null ; do sleep 3 ; done
+
entryname: _script
InlineJavascriptRequirement: {}
- - id: superuser_tok_3
+ - id: superuser_tok_2
in:
container:
source: container
d:
- source: main_2_embed_2/d
+ source: main_2_embed_1/d
out:
- superuser_token
run: '#superuser_tok'
arvbox_containers string[],
fed_migrate="arv-federation-migrate",
arvbox_bin File,
- refspec="15531-logincluster-migrate"
+ refspec="master"
) {
logincluster = run expr (arvados_cluster_ids) string (inputs.arvados_cluster_ids[0])
arvados_api_hosts as host
do run workflow(logincluster, arvbox_bin, refspec)
{
- requirements {
- EnvVarRequirement {
- envDef: {
- ARVBOX_CONTAINER: "$(inputs.container)"
- }
- }
- }
-
- run tool(container, cluster_id, logincluster, set_login = File("set_login.py")) {
-sh <<<
-set -x
-docker cp $(inputs.container):/var/lib/arvados/cluster_config.yml.override .
-chmod +w cluster_config.yml.override
-python $(inputs.set_login.path) cluster_config.yml.override $(inputs.cluster_id) $(inputs.logincluster)
-docker cp cluster_config.yml.override $(inputs.container):/var/lib/arvados
->>>
- return container as c
- }
- run tool(container, host, arvbox_bin, c, refspec) {
+ run tool(container, host, arvbox_bin, refspec) {
sh <<<
set -xe
+export ARVBOX_CONTAINER="$(inputs.container)"
$(inputs.arvbox_bin.path) pipe <<EOF
cd /usr/src/arvados
git fetch
$(inputs.arvbox_bin.path) hotreset
while ! curl --fail --insecure --silent https://$(inputs.host)/discovery/v1/apis/arvados/v1/rest >/dev/null ; do sleep 3 ; done
-export ARVADOS_API_HOST=$(inputs.host)
-export ARVADOS_API_TOKEN=\$($(inputs.arvbox_bin.path) cat /var/lib/arvados/superuser_token)
-export ARVADOS_API_HOST_INSECURE=1
+
ARVADOS_VIRTUAL_MACHINE_UUID=\$($(inputs.arvbox_bin.path) cat /var/lib/arvados/vm-uuid)
-while ! python -c "import arvados ; arvados.api().virtual_machines().get(uuid='$ARVADOS_VIRTUAL_MACHINE_UUID').execute()" 2>/dev/null ; do sleep 3; done
+ARVADOS_API_TOKEN=\$($(inputs.arvbox_bin.path) cat /var/lib/arvados/superuser_token)
+while ! curl --fail --insecure --silent -H "Authorization: Bearer $ARVADOS_API_TOKEN" https://$(inputs.host)/arvados/v1/virtual_machines/$ARVADOS_VIRTUAL_MACHINE_UUID >/dev/null ; do sleep 3 ; done
+
>>>
- return c as d
+ return container as d
}
supertok = superuser_tok(container, d)
return supertok
+++ /dev/null
-import json
-import sys
-
-f = open(sys.argv[1], "r+")
-j = json.load(f)
-j["Clusters"][sys.argv[2]]["Login"] = {"LoginCluster": sys.argv[3]}
-for r in j["Clusters"][sys.argv[2]]["RemoteClusters"]:
- j["Clusters"][sys.argv[2]]["RemoteClusters"][r]["Insecure"] = True
-f.seek(0)
-json.dump(j, f)
uwsgi_temp_path "{{TMPDIR}}";
scgi_temp_path "{{TMPDIR}}";
upstream arv-git-http {
- server localhost:{{GITPORT}};
+ server {{LISTENHOST}}:{{GITPORT}};
}
server {
- listen *:{{GITSSLPORT}} ssl default_server;
+ listen {{LISTENHOST}}:{{GITSSLPORT}} ssl default_server;
server_name arv-git-http;
ssl_certificate "{{SSLCERT}}";
ssl_certificate_key "{{SSLKEY}}";
}
}
upstream keepproxy {
- server localhost:{{KEEPPROXYPORT}};
+ server {{LISTENHOST}}:{{KEEPPROXYPORT}};
}
server {
- listen *:{{KEEPPROXYSSLPORT}} ssl default_server;
+ listen {{LISTENHOST}}:{{KEEPPROXYSSLPORT}} ssl default_server;
server_name keepproxy;
ssl_certificate "{{SSLCERT}}";
ssl_certificate_key "{{SSLKEY}}";
}
}
upstream keep-web {
- server localhost:{{KEEPWEBPORT}};
+ server {{LISTENHOST}}:{{KEEPWEBPORT}};
}
server {
- listen *:{{KEEPWEBSSLPORT}} ssl default_server;
+ listen {{LISTENHOST}}:{{KEEPWEBSSLPORT}} ssl default_server;
server_name keep-web;
ssl_certificate "{{SSLCERT}}";
ssl_certificate_key "{{SSLKEY}}";
}
}
server {
- listen *:{{KEEPWEBDLSSLPORT}} ssl default_server;
+ listen {{LISTENHOST}}:{{KEEPWEBDLSSLPORT}} ssl default_server;
server_name keep-web-dl ~.*;
ssl_certificate "{{SSLCERT}}";
ssl_certificate_key "{{SSLKEY}}";
}
}
upstream ws {
- server localhost:{{WSPORT}};
+ server {{LISTENHOST}}:{{WSPORT}};
}
server {
- listen *:{{WSSPORT}} ssl default_server;
+ listen {{LISTENHOST}}:{{WSSSLPORT}} ssl default_server;
server_name websocket;
ssl_certificate "{{SSLCERT}}";
ssl_certificate_key "{{SSLKEY}}";
proxy_redirect off;
}
}
+ upstream workbench1 {
+ server {{LISTENHOST}}:{{WORKBENCH1PORT}};
+ }
+ server {
+ listen {{LISTENHOST}}:{{WORKBENCH1SSLPORT}} ssl default_server;
+ server_name workbench1;
+ ssl_certificate "{{SSLCERT}}";
+ ssl_certificate_key "{{SSLKEY}}";
+ location / {
+ proxy_pass http://workbench1;
+ proxy_set_header Host $http_host;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto https;
+ proxy_redirect off;
+ }
+ }
upstream controller {
- server localhost:{{CONTROLLERPORT}};
+ server {{LISTENHOST}}:{{CONTROLLERPORT}};
}
server {
- listen *:{{CONTROLLERSSLPORT}} ssl default_server;
+ listen {{LISTENHOST}}:{{CONTROLLERSSLPORT}} ssl default_server;
server_name controller;
ssl_certificate "{{SSLCERT}}";
ssl_certificate_key "{{SSLKEY}}";
port = internal_port_from_config("RailsAPI")
env = os.environ.copy()
env['RAILS_ENV'] = 'test'
+ env['ARVADOS_RAILS_LOG_TO_STDOUT'] = '1'
env.pop('ARVADOS_WEBSOCKETS', None)
env.pop('ARVADOS_TEST_API_HOST', None)
env.pop('ARVADOS_API_HOST', None)
env.pop('ARVADOS_API_HOST_INSECURE', None)
env.pop('ARVADOS_API_TOKEN', None)
- start_msg = subprocess.check_output(
+ logf = open(_logfilename('railsapi'), 'a')
+ railsapi = subprocess.Popen(
['bundle', 'exec',
- 'passenger', 'start', '-d', '-p{}'.format(port),
+ 'passenger', 'start', '-p{}'.format(port),
'--pid-file', pid_file,
- '--log-file', os.path.join(os.getcwd(), 'log/test.log'),
+ '--log-file', '/dev/stdout',
'--ssl',
'--ssl-certificate', 'tmp/self-signed.pem',
'--ssl-certificate-key', 'tmp/self-signed.key'],
- env=env)
+ env=env, stdin=open('/dev/null'), stdout=logf, stderr=logf)
if not leave_running_atexit:
atexit.register(kill_server_pid, pid_file, passenger_root=api_src_dir)
- match = re.search(r'Accessible via: https://(.*?)/', start_msg)
- if not match:
- raise Exception(
- "Passenger did not report endpoint: {}".format(start_msg))
- my_api_host = match.group(1)
+ my_api_host = "127.0.0.1:"+str(port)
os.environ['ARVADOS_API_HOST'] = my_api_host
# Make sure the server has written its pid file and started
# listening on its TCP port
- find_server_pid(pid_file)
_wait_until_port_listens(port)
+ find_server_pid(pid_file)
reset()
os.chdir(restore_cwd)
return
stop_nginx()
nginxconf = {}
+ nginxconf['LISTENHOST'] = 'localhost'
nginxconf['CONTROLLERPORT'] = internal_port_from_config("Controller")
nginxconf['CONTROLLERSSLPORT'] = external_port_from_config("Controller")
nginxconf['KEEPWEBPORT'] = internal_port_from_config("WebDAV")
nginxconf['GITPORT'] = internal_port_from_config("GitHTTP")
nginxconf['GITSSLPORT'] = external_port_from_config("GitHTTP")
nginxconf['WSPORT'] = internal_port_from_config("Websocket")
- nginxconf['WSSPORT'] = external_port_from_config("Websocket")
+ nginxconf['WSSSLPORT'] = external_port_from_config("Websocket")
+ nginxconf['WORKBENCH1PORT'] = internal_port_from_config("Workbench1")
+ nginxconf['WORKBENCH1SSLPORT'] = external_port_from_config("Workbench1")
nginxconf['SSLCERT'] = os.path.join(SERVICES_SRC_DIR, 'api', 'tmp', 'self-signed.pem')
nginxconf['SSLKEY'] = os.path.join(SERVICES_SRC_DIR, 'api', 'tmp', 'self-signed.key')
nginxconf['ACCESSLOG'] = _logfilename('nginx_access')
conffile = os.path.join(TEST_TMPDIR, 'nginx.conf')
with open(conffile, 'w') as f:
f.write(re.sub(
- r'{{([A-Z]+)}}',
+ r'{{([A-Z]+[A-Z0-9]+)}}',
lambda match: str(nginxconf.get(match.group(1))),
open(conftemplatefile).read()))
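The placeholder regex is widened here because names like `WORKBENCH1PORT` contain a digit, which the old `[A-Z]+` pattern cannot match. A standalone demonstration of the difference (template string invented for illustration):

```python
import re

template = "server localhost:{{WORKBENCH1PORT}};"
values = {"WORKBENCH1PORT": "3000"}

old = re.sub(r'{{([A-Z]+)}}',
             lambda m: values.get(m.group(1), '?'), template)
new = re.sub(r'{{([A-Z]+[A-Z0-9]+)}}',
             lambda m: values.get(m.group(1), '?'), template)

# The old pattern cannot match the digit in WORKBENCH1PORT, so the
# placeholder is left untouched; the widened pattern substitutes it.
assert old == "server localhost:{{WORKBENCH1PORT}};"
assert new == "server localhost:3000;"
```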
controller_external_port = find_available_port()
websocket_port = find_available_port()
websocket_external_port = find_available_port()
+ workbench1_port = find_available_port()
+ workbench1_external_port = find_available_port()
git_httpd_port = find_available_port()
git_httpd_external_port = find_available_port()
keepproxy_port = find_available_port()
"http://%s:%s"%(localhost, websocket_port): {},
},
},
+ "Workbench1": {
+ "ExternalURL": "https://%s:%s/" % (localhost, workbench1_external_port),
+ "InternalURLs": {
+ "http://%s:%s"%(localhost, workbench1_port): {},
+ },
+ },
"GitHTTP": {
"ExternalURL": "https://%s:%s" % (localhost, git_httpd_external_port),
"InternalURLs": {
"http://%s:%s"%(localhost, keep_web_dl_port): {},
},
},
+ "SSO": {
+ "ExternalURL": "http://localhost:3002",
+ },
}
config = {
"Clusters": {
"zzzzz": {
- "EnableBetaController14287": ('14287' in os.environ.get('ARVADOS_EXPERIMENTAL', '')),
"ManagementToken": "e687950a23c3a9bceec28c6223a06c79",
- "SystemRootToken": auth_token('data_manager'),
+ "SystemRootToken": auth_token('system_user'),
"API": {
"RequestTimeout": "30s",
+ "RailsSessionSecretToken": "e24205c490ac07e028fd5f8a692dcb398bcd654eff1aef5f9fe6891994b18483",
+ },
+ "Login": {
+ "ProviderAppID": "arvados-server",
+ "ProviderAppSecret": "608dbf356a327e2d0d4932b60161e212c2d8d8f5e25690d7b622f850a990cd33",
},
"SystemLogs": {
"LogLevel": ('info' if os.environ.get('ARVADOS_DEBUG', '') in ['','0'] else 'debug'),
"Services": services,
"Users": {
"AnonymousUserToken": auth_token('anonymous'),
+ "UserProfileNotificationAddress": "arvados@example.com",
},
"Collections": {
"BlobSigningKey": "zfhgfenhffzltr9dixws36j1yhksjoll2grmku38mi7yxd66h5j4q9w4jzanezacp8s6q0ro3hxakfye02152hncy6zml2ed0uc",
- "TrustAllContent": True,
+ "TrustAllContent": False,
+ "ForwardSlashNameSubstitution": "/",
+ "TrashSweepInterval": "-1s",
},
"Git": {
- "Repositories": "%s/test" % os.path.join(SERVICES_SRC_DIR, 'api', 'tmp', 'git'),
+ "Repositories": os.path.join(SERVICES_SRC_DIR, 'api', 'tmp', 'git', 'test'),
+ },
+ "Containers": {
+ "JobsAPI": {
+ "GitInternalDir": os.path.join(SERVICES_SRC_DIR, 'api', 'tmp', 'internal.git'),
+ },
+ "SupportedDockerImageFormats": {"v1": {}},
},
"Volumes": {
"zzzzz-nyw5e-%015d"%n: {
gemspec
gem 'rake'
gem 'minitest', '>= 5.0.0'
+gem 'signet', '<= 0.11'
exit
end
-git_latest_tag = `git tag -l |sort -V -r |head -n1`
-git_latest_tag = git_latest_tag.encode('utf-8').strip
-git_timestamp, git_hash = `git log -n1 --first-parent --format=%ct:%H .`.chomp.split(":")
-git_timestamp = Time.at(git_timestamp.to_i).utc
+git_dir = ENV["GIT_DIR"]
+git_work = ENV["GIT_WORK_TREE"]
+begin
+ ENV["GIT_DIR"] = File.expand_path "#{__dir__}/../../.git"
+ ENV["GIT_WORK_TREE"] = File.expand_path "#{__dir__}/../.."
+ git_timestamp, git_hash = `git log -n1 --first-parent --format=%ct:%H #{__dir__}`.chomp.split(":")
+ if ENV["ARVADOS_BUILDING_VERSION"]
+ version = ENV["ARVADOS_BUILDING_VERSION"]
+ else
+ version = `#{__dir__}/../../build/version-at-commit.sh #{git_hash}`.encode('utf-8').strip
+ end
+ git_timestamp = Time.at(git_timestamp.to_i).utc
+ensure
+ ENV["GIT_DIR"] = git_dir
+ ENV["GIT_WORK_TREE"] = git_work
+end
Gem::Specification.new do |s|
s.name = 'arvados'
- s.version = "#{git_latest_tag}.#{git_timestamp.strftime('%Y%m%d%H%M%S')}"
+ s.version = version
s.date = git_timestamp.strftime("%Y-%m-%d")
s.summary = "Arvados client library"
s.description = "Arvados client library, git commit #{git_hash}"
gem 'andand'
gem 'optimist'
-gem 'faye-websocket'
-gem 'themes_for_rails', git: 'https://github.com/curoverse/themes_for_rails'
+gem 'themes_for_rails', git: 'https://github.com/arvados/themes_for_rails'
-# We need arvados-cli because of crunchv1. Note: bundler can't handle
-# two gems with the same "git" url but different "glob" values, hence
-# the use of a wildcard here instead of literal paths
-# (sdk/cli/arvados-cli.gem and sdk/ruby/arvados.gem).
-gem 'arvados-cli', git: 'https://github.com/curoverse/arvados.git', glob: 'sdk/*/*.gemspec'
-gem 'arvados', git: 'https://github.com/curoverse/arvados.git', glob: 'sdk/*/*.gemspec'
+# Import arvados gem. Note: actual git commit is pinned via Gemfile.lock
+gem 'arvados', git: 'https://github.com/arvados/arvados.git', glob: 'sdk/ruby/arvados.gemspec'
gem 'httpclient'
gem 'sshkey'
GIT
- remote: https://github.com/curoverse/arvados.git
- revision: dd9f2403f43bcb93da5908ddde57d8c0491bb4c2
- glob: sdk/*/*.gemspec
+ remote: https://github.com/arvados/arvados.git
+ revision: 81725af5d5d2e6cd18ba7099ba5fb1fc520f4f8c
+ glob: sdk/ruby/arvados.gemspec
specs:
- arvados (1.4.1.20191019025325)
+ arvados (1.5.0.pre20200114202620)
activesupport (>= 3)
andand (~> 1.3, >= 1.3.3)
arvados-google-api-client (>= 0.7, < 0.8.9)
i18n (~> 0)
json (>= 1.7.7, < 3)
jwt (>= 0.1.5, < 2)
- arvados-cli (1.4.1.20191017145711)
- activesupport (>= 3.2.13, < 5.1)
- andand (~> 1.3, >= 1.3.3)
- arvados (>= 1.4.1.20190320201707)
- arvados-google-api-client (~> 0.6, >= 0.6.3, < 0.8.9)
- curb (~> 0.8)
- faraday (< 0.16)
- json (>= 1.7.7, < 3)
- oj (~> 3.0)
- optimist (~> 3.0)
GIT
- remote: https://github.com/curoverse/themes_for_rails
+ remote: https://github.com/arvados/themes_for_rails
revision: ddf6e592b3b6493ea0c2de7b5d3faa120ed35be0
specs:
themes_for_rails (0.5.1)
net-ssh-gateway (>= 1.1.0)
concurrent-ruby (1.1.5)
crass (1.0.4)
- curb (0.9.10)
database_cleaner (1.7.0)
erubis (2.7.0)
- eventmachine (1.2.7)
execjs (2.7.0)
extlib (0.9.16)
factory_bot (5.0.2)
railties (>= 4.2.0)
faraday (0.15.4)
multipart-post (>= 1.2, < 3)
- faye-websocket (0.10.7)
- eventmachine (>= 0.12.0)
- websocket-driver (>= 0.5.1)
ffi (1.9.25)
globalid (0.4.2)
activesupport (>= 4.2.0)
rails-dom-testing (>= 1, < 3)
railties (>= 4.2.0)
thor (>= 0.14, < 2.0)
- json (2.2.0)
+ json (2.3.0)
jwt (1.5.6)
launchy (2.4.3)
addressable (~> 2.3)
nokogiri (>= 1.5.9)
mail (2.7.1)
mini_mime (>= 0.1.1)
- memoist (0.16.0)
+ memoist (0.16.2)
metaclass (0.0.4)
method_source (0.9.2)
mini_mime (1.0.1)
rake (>= 0.8.1)
pg (1.1.4)
power_assert (1.1.4)
- public_suffix (4.0.1)
+ public_suffix (4.0.3)
rack (2.0.7)
rack-test (0.6.3)
rack (>= 1.0)
method_source
rake (>= 0.8.7)
thor (>= 0.18.1, < 2.0)
- rake (12.3.2)
+ rake (13.0.1)
rb-fsevent (0.10.3)
rb-inotify (0.9.10)
ffi (>= 0.5.0, < 2)
thor (0.20.3)
thread_safe (0.3.6)
tilt (2.0.8)
- tzinfo (1.2.5)
+ tzinfo (1.2.6)
thread_safe (~> 0.1)
uglifier (2.7.2)
execjs (>= 0.3.0)
acts_as_api
andand
arvados!
- arvados-cli!
byebug
database_cleaner
factory_bot_rails
- faye-websocket
httpclient
jquery-rails
lograge
uglifier (~> 2.0)
BUNDLED WITH
- 1.17.3
+ 1.11
# format is YYYYMMDD, must be fixed with (needs to be linearly
# sortable), updated manually, may be used by clients to
# determine availability of API server features.
- revision: "20190926",
+ revision: "20200212",
source_version: AppVersion.hash,
sourceVersion: AppVersion.hash, # source_version should be deprecated in the future
packageVersion: AppVersion.package_version,
class Arvados::V1::UsersController < ApplicationController
accept_attribute_as_json :prefs, Hash
+ accept_param_as_json :updates
skip_before_action :find_object_by_uuid, only:
- [:activate, :current, :system, :setup, :merge]
+ [:activate, :current, :system, :setup, :merge, :batch_update]
skip_before_action :render_404_if_no_object, only:
- [:activate, :current, :system, :setup, :merge]
- before_action :admin_required, only: [:setup, :unsetup, :update_uuid]
+ [:activate, :current, :system, :setup, :merge, :batch_update]
+ before_action :admin_required, only: [:setup, :unsetup, :update_uuid, :batch_update]
+
+ # Internal API used by controller to update local cache of user
+ # records from LoginCluster.
+ def batch_update
+ @objects = []
+ params[:updates].andand.each do |uuid, attrs|
+ begin
+ u = User.find_or_create_by(uuid: uuid)
+ rescue ActiveRecord::RecordNotUnique
+ retry
+ end
+ u.update_attributes!(attrs)
+ @objects << u
+ end
+ @offset = 0
+ @limit = -1
+ render_list
+ end
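The new `batch_update` action iterates a JSON-encoded `updates` hash of uuid → attributes (declared via `accept_param_as_json :updates`). A minimal sketch of how a client such as arv-federation-migrate might assemble that parameter — the UUIDs below are hypothetical, and this is not the actual migration script:

```python
import json

# Attribute map keyed by user UUID; these records are invented for
# illustration only.
updates = {
    "zzzzz-tpzed-000000000000001": {"email": "case1@test", "is_active": True},
    "zzzzz-tpzed-000000000000002": {"email": "case2@test", "is_active": False},
}

# The controller parses the `updates` param as JSON, so the client
# serializes the hash before posting it.
payload = {"updates": json.dumps(updates)}

# Round-trip: the server-side parse recovers the same structure.
decoded = json.loads(payload["updates"])
assert decoded["zzzzz-tpzed-000000000000001"]["is_active"] is True
```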
def current
if current_user
end
def activate
+ if params[:id] and params[:id].match(/\D/)
+ params[:uuid] = params.delete :id
+ end
if current_user.andand.is_admin && params[:uuid]
- @object = User.find params[:uuid]
+ @object = User.find_by_uuid params[:uuid]
else
@object = current_user
end
raise ArgumentError.new "Required uuid or user"
elsif !params[:user]['email']
raise ArgumentError.new "Require user email"
- elsif !params[:openid_prefix]
- raise ArgumentError.new "Required openid_prefix parameter is missing."
else
@object = model_class.create! resource_attrs
end
end
@response = @object.setup(repo_name: full_repo_name,
- vm_uuid: params[:vm_uuid],
- openid_prefix: params[:openid_prefix])
+ vm_uuid: params[:vm_uuid])
# setup succeeded. send email to user
if params[:send_notification_email]
- UserNotifier.account_is_setup(@object).deliver_now
+ begin
+ UserNotifier.account_is_setup(@object).deliver_now
+ rescue => e
+ logger.warn "Failed to send email to #{@object.email}: #{e}"
+ end
end
send_json kind: "arvados#HashList", items: @response.as_api_response(nil)
def self._setup_requires_parameters
{
+ uuid: {
+ type: 'string', required: false
+ },
user: {
type: 'object', required: false
},
- openid_prefix: {
- type: 'string', required: false
- },
repo_name: {
type: 'string', required: false
},
# use @example.com email addresses when creating user records, so
# we can tell they're not valuable.
user_uuids = User.
- where('email is null or email not like ?', '%@example.com').
+ where('email is null or (email not like ? and email not like ?)', '%@example.com', '%.example.com').
collect(&:uuid)
fixture_uuids =
YAML::load_file(File.expand_path('../../../test/fixtures/users.yml',
begin
user = User.register(authinfo)
rescue => e
- Rails.logger.warn e
+ Rails.logger.warn "User.register error #{e}"
+ Rails.logger.warn "authinfo was #{authinfo.inspect}"
return redirect_to login_failure_url
end
def account_is_setup(user)
@user = user
- mail(to: user.email, subject: 'Welcome to Arvados - shell account enabled')
+ mail(to: user.email, subject: 'Welcome to Arvados - account enabled')
end
end
+++ /dev/null
-# Copyright (C) The Arvados Authors. All rights reserved.
-#
-# SPDX-License-Identifier: AGPL-3.0
-
-require 'rack'
-require 'faye/websocket'
-require 'eventmachine'
-
-# A Rack middleware to handle inbound websocket connection requests and hand
-# them over to the faye websocket library.
-class RackSocket
-
- DEFAULT_ENDPOINT = '/websocket'
-
- # Stop EventMachine on signal, this should give it a chance to to unwind any
- # open connections.
- def die_gracefully_on_signal
- Signal.trap("INT") { EM.stop }
- Signal.trap("TERM") { EM.stop }
- end
-
- # Create a new RackSocket handler
- # +app+ The next layer of the Rack stack.
- #
- # Accepts options:
- # +:handler+ (Required) A class to handle new connections. #initialize will
- # call handler.new to create the actual handler instance object. When a new
- # websocket connection is established, #on_connect on the handler instance
- # object will be called with the new connection.
- #
- # +:mount+ The HTTP request path that will be recognized for websocket
- # connect requests, defaults to '/websocket'.
- #
- # +:websocket_only+ If true, the server will only handle websocket requests,
- # and all other requests will result in an error. If false, unhandled
- # non-websocket requests will be passed along on to 'app' in the usual Rack
- # way.
- def initialize(app = nil, options = nil)
- @app = app if app.respond_to?(:call)
- @options = [app, options].grep(Hash).first || {}
- @endpoint = @options[:mount] || DEFAULT_ENDPOINT
- @websocket_only = @options[:websocket_only] || false
-
- # from https://gist.github.com/eatenbyagrue/1338545#file-eventmachine-rb
- if defined?(PhusionPassenger)
- PhusionPassenger.on_event(:starting_worker_process) do |forked|
- # for passenger, we need to avoid orphaned threads
- if forked && EM.reactor_running?
- EM.stop
- end
- Thread.new do
- begin
- EM.run
- ensure
- ActiveRecord::Base.connection.close
- end
- end
- die_gracefully_on_signal
- end
- else
- # faciliates debugging
- Thread.abort_on_exception = true
- # just spawn a thread and start it up
- Thread.new do
- begin
- EM.run
- ensure
- ActiveRecord::Base.connection.close
- end
- end
- end
-
- # Create actual handler instance object from handler class.
- @handler = @options[:handler].new
- end
-
- # Handle websocket connection request, or pass on to the next middleware
- # supplied in +app+ initialize (unless +:websocket_only+ option is true, in
- # which case return an error response.)
- # +env+ the Rack environment with information about the request.
- def call env
- request = Rack::Request.new(env)
- if request.path_info == @endpoint and Faye::WebSocket.websocket?(env)
- if @handler.overloaded?
- return [503, {"Content-Type" => "text/plain"}, ["Too many connections, try again later."]]
- end
-
- ws = Faye::WebSocket.new(env, nil, :ping => 30)
-
- # Notify handler about new connection
- @handler.on_connect ws
-
- # Return async Rack response
- ws.rack_response
- elsif not @websocket_only
- @app.call env
- else
- [406, {"Content-Type" => "text/plain"}, ["Only websocket connections are permitted on this port."]]
- end
- end
-
-end
t.add :url_prefix
t.add :is_trusted
end
+
+ def is_trusted
+ norm(self.url_prefix) == norm(Rails.configuration.Services.Workbench1.ExternalURL) ||
+ norm(self.url_prefix) == norm(Rails.configuration.Services.Workbench2.ExternalURL) ||
+ super
+ end
+
+ protected
+
+ def norm url
+ # normalize URL for comparison
+ url = URI(url)
+ if url.scheme == "https"
+ url.port = 443
+ elsif url.scheme == "http"
+ url.port = 80
+ end
+ url.path = "/"
+ url
+ end
end
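The normalization above can be exercised on its own; a minimal sketch using only Ruby's stdlib URI (the helper name `norm_url` is illustrative, not part of the patch):

```ruby
require 'uri'

# Normalize a URL for comparison: pin the scheme's default port and a
# root path so different spellings of the same endpoint compare equal.
def norm_url(url)
  url = URI(url.to_s)
  url.port = 443 if url.scheme == "https"
  url.port = 80 if url.scheme == "http"
  url.path = "/"
  url
end
```

With this, `norm_url("https://wb.example:443/x") == norm_url("https://wb.example")` holds, while two URLs that differ only in scheme still compare unequal.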
clnt
end
+ def self.check_system_root_token token
+ if token == Rails.configuration.SystemRootToken
+ return ApiClientAuthorization.new(user: User.find_by_uuid(system_user_uuid),
+ uuid: Rails.configuration.ClusterID+"-gj3su-000000000000000",
+ api_token: token,
+ api_client: ApiClient.new(is_trusted: true, url_prefix: ""))
+ else
+ return nil
+ end
+ end
+
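The fast path added here can be pictured without Rails; a sketch with hypothetical stand-ins (`SYSTEM_ROOT_TOKEN` as an example value for `Rails.configuration.SystemRootToken`, a plain Hash in place of the ApiClientAuthorization record):

```ruby
# Assumption: example value standing in for Rails.configuration.SystemRootToken.
SYSTEM_ROOT_TOKEN = "xyzzy-systemroottoken"

# Return a synthesized "authorization" only when the presented token is
# the system root token; any other token falls through (nil) to normal
# database-backed token lookup.
def check_system_root_token(token)
  return nil unless token == SYSTEM_ROOT_TOKEN
  {user: :system_user, api_token: token, trusted: true}
end
```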
def self.validate(token:, remote: nil)
- return nil if !token
+ return nil if token.nil? || token.empty?
remote ||= Rails.configuration.ClusterID
+ auth = self.check_system_root_token(token)
+ if !auth.nil?
+ return auth
+ end
+
case token[0..2]
when 'v2/'
_, token_uuid, secret, optional = token.split('/')
# Sync user record.
if remote_user_prefix == Rails.configuration.Login.LoginCluster
- # Remote cluster controls our user database, copy both
- # 'is_active' and 'is_admin'
- user.is_active = remote_user['is_active']
+ # Remote cluster controls our user database, set is_active if
+ # remote is active. If remote is not active, user will be
+ # unsetup (see below).
+ user.is_active = true if remote_user['is_active']
user.is_admin = remote_user['is_admin']
else
if Rails.configuration.Users.NewUsersAreActive ||
Rails.configuration.RemoteClusters[remote_user_prefix].andand["ActivateUsers"]
- # Default policy is to activate users, so match activate
- # with the remote record.
- user.is_active = remote_user['is_active']
- elsif !remote_user['is_active']
- # Deactivate user if the remote is inactive, otherwise don't
- # change 'is_active'.
- user.is_active = false
+ # Default policy is to activate users
+ user.is_active = true if remote_user['is_active']
end
end
end
act_as_system_user do
+ if user.is_active && !remote_user['is_active']
+ user.unsetup
+ end
+
user.save!
# We will accept this token (and avoid reloading the user
if not ft[:cond_out].any?
return query
end
+ ft[:joins].each do |t|
+ query = query.joins(t)
+ end
query.where('(' + ft[:cond_out].join(') AND (') + ')',
*ft[:param_out])
end
end
end
+ def ensure_filesystem_compatible_name
+ if name == "." || name == ".."
+ errors.add(:name, "cannot be '.' or '..'")
+ elsif Rails.configuration.Collections.ForwardSlashNameSubstitution == "" && !name.nil? && name.index('/')
+ errors.add(:name, "cannot contain a '/' character")
+ end
+ end
+
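The validation above reduces to a small predicate; a sketch returning a boolean instead of adding ActiveModel errors, where the `slash_substitution` keyword mirrors `Collections.ForwardSlashNameSubstitution`:

```ruby
# True when the name is safe to expose as a filesystem entry: never
# "." or "..", and no '/' unless a substitution string is configured.
def filesystem_compatible_name?(name, slash_substitution: "")
  return false if name == "." || name == ".."
  return false if slash_substitution == "" && !name.nil? && name.index('/')
  true
end
```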
class Email
def self.kind
"email"
before_validation :check_signatures
before_validation :strip_signatures_and_update_replication_confirmed
before_validation :name_null_if_empty
+ validate :ensure_filesystem_compatible_name
validate :ensure_pdh_matches_manifest_text
validate :ensure_storage_classes_desired_is_not_empty
validate :ensure_storage_classes_contain_non_empty_strings
return
end
(managed_props.keys - self.properties.keys).each do |key|
- if managed_props[key].has_key?('Value')
- self.properties[key] = managed_props[key]['Value']
- elsif managed_props[key]['Function'].andand == 'original_owner'
+ if managed_props[key]['Function'] == 'original_owner'
self.properties[key] = self.user_owner_uuid
+ elsif managed_props[key]['Value']
+ self.properties[key] = managed_props[key]['Value']
else
logger.warn "Unidentified default property definition '#{key}': #{managed_props[key].inspect}"
end
# (same order as Container#handle_completed). Locking always
# reloads the Container and ContainerRequest records.
c = Container.find_by_uuid(container_uuid)
- c.lock!
+ c.lock! if !c.nil?
self.lock!
- if container_uuid != c.uuid
+ if !c.nil? && container_uuid != c.uuid
# After locking, we've noticed a race, the container_uuid is
# different than the container record we just loaded. This
# can happen if Container#handle_completed scheduled a new
redo
end
- if state == Committed && c.final?
- # The current container is
- act_as_system_user do
- leave_modified_by_user_alone do
- finalize!
+ if !c.nil?
+ if state == Committed && c.final?
+ # The current container is
+ act_as_system_user do
+ leave_modified_by_user_alone do
+ finalize!
+ end
end
end
+ elsif state == Committed
+ # Behave as if the container is cancelled
+ update_attributes!(state: Final)
end
return true
end
# finished/cancelled.
def finalize!
container = Container.find_by_uuid(container_uuid)
- update_collections(container: container)
-
- if container.state == Container::Complete
- log_col = Collection.where(portable_data_hash: container.log).first
- if log_col
- # Need to save collection
- completed_coll = Collection.new(
- owner_uuid: self.owner_uuid,
- name: "Container log for container #{container_uuid}",
- properties: {
- 'type' => 'log',
- 'container_request' => self.uuid,
- 'container_uuid' => container_uuid,
- },
- portable_data_hash: log_col.portable_data_hash,
- manifest_text: log_col.manifest_text)
- completed_coll.save_with_unique_name!
+ if !container.nil?
+ update_collections(container: container)
+
+ if container.state == Container::Complete
+ log_col = Collection.where(portable_data_hash: container.log).first
+ if log_col
+ # Need to save collection
+ completed_coll = Collection.new(
+ owner_uuid: self.owner_uuid,
+ name: "Container log for container #{container_uuid}",
+ properties: {
+ 'type' => 'log',
+ 'container_request' => self.uuid,
+ 'container_uuid' => container_uuid,
+ },
+ portable_data_hash: log_col.portable_data_hash,
+ manifest_text: log_col.manifest_text)
+ completed_coll.save_with_unique_name!
+ end
end
end
-
update_attributes!(state: Final)
end
collections.each do |out_type|
pdh = container.send(out_type)
next if pdh.nil?
+ c = Collection.where(portable_data_hash: pdh).first
+ next if c.nil?
+ manifest = c.manifest_text
+
coll_name = "Container #{out_type} for request #{uuid}"
trash_at = nil
if out_type == 'output'
trash_at = db_current_time + self.output_ttl
end
end
- manifest = Collection.where(portable_data_hash: pdh).first.manifest_text
coll_uuid = self.send(out_type + '_uuid')
coll = coll_uuid.nil? ? nil : Collection.where(uuid: coll_uuid).first
if self.container_count_changed?
errors.add :container_count, "cannot be updated directly."
return false
- else
- self.container_count += 1
- if self.container_uuid_was
- old_container = Container.find_by_uuid(self.container_uuid_was)
- old_logs = Collection.where(portable_data_hash: old_container.log).first
- if old_logs
- log_coll = self.log_uuid.nil? ? nil : Collection.where(uuid: self.log_uuid).first
- if self.log_uuid.nil?
- log_coll = Collection.new(
- owner_uuid: self.owner_uuid,
- name: coll_name = "Container log for request #{uuid}",
- manifest_text: "")
- end
+ end
- # copy logs from old container into CR's log collection
- src = Arv::Collection.new(old_logs.manifest_text)
- dst = Arv::Collection.new(log_coll.manifest_text)
- dst.cp_r("./", "log for container #{old_container.uuid}", src)
- manifest = dst.manifest_text
-
- log_coll.assign_attributes(
- portable_data_hash: Digest::MD5.hexdigest(manifest) + '+' + manifest.bytesize.to_s,
- manifest_text: manifest)
- log_coll.save_with_unique_name!
- self.log_uuid = log_coll.uuid
- end
- end
+ self.container_count += 1
+ return if self.container_uuid_was.nil?
+
+ old_container = Container.find_by_uuid(self.container_uuid_was)
+ return if old_container.nil?
+
+ old_logs = Collection.where(portable_data_hash: old_container.log).first
+ return if old_logs.nil?
+
+ log_coll = self.log_uuid.nil? ? nil : Collection.where(uuid: self.log_uuid).first
+ if self.log_uuid.nil?
+ log_coll = Collection.new(
+ owner_uuid: self.owner_uuid,
+ name: "Container log for request #{uuid}",
+ manifest_text: "")
end
+
+ # copy logs from old container into CR's log collection
+ src = Arv::Collection.new(old_logs.manifest_text)
+ dst = Arv::Collection.new(log_coll.manifest_text)
+ dst.cp_r("./", "log for container #{old_container.uuid}", src)
+ manifest = dst.manifest_text
+
+ log_coll.assign_attributes(
+ portable_data_hash: Digest::MD5.hexdigest(manifest) + '+' + manifest.bytesize.to_s,
+ manifest_text: manifest)
+ log_coll.save_with_unique_name!
+ self.log_uuid = log_coll.uuid
end
end
# already know how to properly treat them.
attribute :properties, :jsonbHash, default: {}
+ validate :ensure_filesystem_compatible_name
after_create :invalidate_permissions_cache
after_update :maybe_invalidate_permissions_cache
before_create :assign_name
t.add :properties
end
+ def ensure_filesystem_compatible_name
+ # project groups need filesystem-compatible names, but others
+ # don't.
+ super if group_class == 'project'
+ end
+
def maybe_invalidate_permissions_cache
if uuid_changed? or owner_uuid_changed? or is_trashed_changed?
# This can change users' permissions on other groups as well as
},
uniqueness: true,
allow_nil: true)
+ validate :must_unsetup_to_deactivate
before_update :prevent_privilege_escalation
before_update :prevent_inactive_admin
before_update :verify_repositories_empty, :if => Proc.new { |user|
end
# create links
- def setup(openid_prefix:, repo_name: nil, vm_uuid: nil)
- oid_login_perm = create_oid_login_perm openid_prefix
+ def setup(repo_name: nil, vm_uuid: nil)
repo_perm = create_user_repo_link repo_name
vm_login_perm = create_vm_login_permission_link(vm_uuid, username) if vm_uuid
group_perm = create_user_group_link
- return [oid_login_perm, repo_perm, vm_login_perm, group_perm, self].compact
+ return [repo_perm, vm_login_perm, group_perm, self].compact
end
# delete user signatures, login, repo, and vm perms, and mark as inactive
def unsetup
# delete oid_login_perms for this user
+ #
+ # Note: these permission links are obsolete; they have no effect
+ # and are not created for new users.
Link.where(tail_uuid: self.email,
link_class: 'permission',
name: 'can_login').destroy_all
self.save!
end
+ def must_unsetup_to_deactivate
+ if self.is_active_changed? &&
+ self.is_active_was == true &&
+ !self.is_active
+
+ group = Group.where(name: 'All users').select do |g|
+ g[:uuid].match(/-f+$/)
+ end.first
+
+ # When a user is set up, they are added to the "All users"
+ # group. A user that is part of the "All users" group is
+ # allowed to self-activate.
+ #
+ # It doesn't make sense to deactivate a user (set is_active =
+ # false) without first removing them from the "All users" group,
+ # because they would be able to immediately reactivate
+ # themselves.
+ #
+ # The 'unsetup' method removes the user from the "All users"
+ # group (and also sets is_active = false), so we return an error
+ # explaining the correct way to deactivate a user.
+ #
+ if Link.where(tail_uuid: self.uuid,
+ head_uuid: group[:uuid],
+ link_class: 'permission',
+ name: 'can_read').any?
+ errors.add :is_active, "cannot be set to false directly, use the 'Deactivate' button on Workbench, or the 'unsetup' API call"
+ end
+ end
+ end
+
def set_initial_username(requested: false)
if !requested.is_a?(String) || requested.empty?
email_parts = email.partition("@")
user = self
redirects = 0
while (uuid = user.redirect_to_user_uuid)
- user = User.unscoped.find_by_uuid(uuid)
- if !user
- raise Exception.new("user uuid #{user.uuid} redirects to nonexistent uuid #{uuid}")
+ break if uuid.empty?
+ nextuser = User.unscoped.find_by_uuid(uuid)
+ if !nextuser
+ raise Exception.new("user uuid #{user.uuid} redirects to nonexistent uuid '#{uuid}'")
end
+ user = nextuser
redirects += 1
if redirects > 15
raise "Starting from #{self.uuid} redirect_to_user_uuid exceeded maximum number of redirects"
:is_admin => false,
:is_active => Rails.configuration.Users.NewUsersAreActive)
- primary_user.set_initial_username(requested: info['username']) if info['username']
+ primary_user.set_initial_username(requested: info['username']) if !info['username'].blank?
primary_user.identity_url = info['identity_url'] if identity_url
end
merged
end
- def create_oid_login_perm(openid_prefix)
- # Check oid_login_perm
- oid_login_perms = Link.where(tail_uuid: self.email,
- head_uuid: self.uuid,
- link_class: 'permission',
- name: 'can_login')
-
- if !oid_login_perms.any?
- # create openid login permission
- oid_login_perm = Link.create!(link_class: 'permission',
- name: 'can_login',
- tail_uuid: self.email,
- head_uuid: self.uuid,
- properties: {
- "identity_url_prefix" => openid_prefix,
- })
- logger.info { "openid login permission: " + oid_login_perm[:uuid] }
- else
- oid_login_perm = oid_login_perms.first
- end
-
- return oid_login_perm
- end
-
def create_user_repo_link(repo_name)
# repo_name is optional
if not repo_name
def setup_on_activate
return if [system_user_uuid, anonymous_user_uuid].include?(self.uuid)
if is_active && (new_record? || is_active_changed?)
- setup(openid_prefix: Rails.configuration.default_openid_prefix)
+ setup
end
end
# Automatically setup new user during creation
def auto_setup_new_user
- setup(openid_prefix: Rails.configuration.default_openid_prefix)
+ setup
if username
create_vm_login_permission_link(Rails.configuration.Users.AutoSetupNewUsersWithVmUUID,
username)
action_controller.allow_forgery_protection: false
action_mailer.delivery_method: :test
active_support.deprecation: :stderr
- uuid_prefix: zzzzz
- sso_app_id: arvados-server
- sso_app_secret: <%= rand(2**512).to_s(36) %>
- sso_provider_url: http://localhost:3002
- secret_token: <%= rand(2**512).to_s(36) %>
- blob_signing_key: zfhgfenhffzltr9dixws36j1yhksjoll2grmku38mi7yxd66h5j4q9w4jzanezacp8s6q0ro3hxakfye02152hncy6zml2ed0uc
- user_profile_notification_address: arvados@example.com
- workbench_address: https://localhost:3001/
- git_repositories_dir: <%= Rails.root.join 'tmp', 'git', 'test' %>
- git_internal_dir: <%= Rails.root.join 'tmp', 'internal.git' %>
- trash_sweep_interval: -1
- docker_image_formats: ["v1"]
end
end
+if ENV["ARVADOS_RAILS_LOG_TO_STDOUT"]
+ Rails.logger = ActiveSupport::TaggedLogging.new(Logger.new(STDOUT))
+end
+
module Server
class Application < Rails::Application
# The following is to avoid SafeYAML's warning message
config.action_dispatch.perform_deep_munge = false
+ # force_ssl's redirect-to-https feature doesn't work when the
+ # client supplies a port number, and prevents arvados-controller
+ # from connecting to Rails internally via plain http.
+ config.ssl_options = {redirect: false}
+
I18n.enforce_available_locales = false
# Before using the filesystem backend for Rails.cache, check
arvcfg.declare_config "Login.LoginCluster", String
arvcfg.declare_config "Login.RemoteTokenRefresh", ActiveSupport::Duration
arvcfg.declare_config "TLS.Insecure", Boolean, :sso_insecure
-arvcfg.declare_config "Services.SSO.ExternalURL", NonemptyString, :sso_provider_url
+arvcfg.declare_config "Services.SSO.ExternalURL", String, :sso_provider_url
arvcfg.declare_config "AuditLogs.MaxAge", ActiveSupport::Duration, :max_audit_log_age
arvcfg.declare_config "AuditLogs.MaxDeleteBatch", Integer, :max_audit_log_delete_batch
arvcfg.declare_config "AuditLogs.UnloggedAttributes", Hash, :unlogged_attributes, ->(cfg, k, v) { arrayToHash cfg, "AuditLogs.UnloggedAttributes", v }
arvcfg.declare_config "Collections.BlobSigningKey", NonemptyString, :blob_signing_key
arvcfg.declare_config "Collections.BlobSigningTTL", ActiveSupport::Duration, :blob_signature_ttl
arvcfg.declare_config "Collections.BlobSigning", Boolean, :permit_create_collection_with_unsigned_manifest, ->(cfg, k, v) { ConfigLoader.set_cfg cfg, "Collections.BlobSigning", !v }
+arvcfg.declare_config "Collections.ForwardSlashNameSubstitution", String
arvcfg.declare_config "Containers.SupportedDockerImageFormats", Hash, :docker_image_formats, ->(cfg, k, v) { arrayToHash cfg, "Containers.SupportedDockerImageFormats", v }
arvcfg.declare_config "Containers.LogReuseDecisions", Boolean, :log_reuse_decisions
arvcfg.declare_config "Containers.DefaultKeepCacheRAM", Integer, :container_default_keep_cache_ram
db_config = {}
path = "#{::Rails.root.to_s}/config/database.yml"
-if File.exist? path
+if !ENV['ARVADOS_CONFIG_NOLEGACY'] && File.exist?(path)
db_config = ConfigLoader.load(path, erb: true)
end
post 'unsetup', on: :member
post 'update_uuid', on: :member
post 'merge', on: :collection
+ patch 'batch_update', on: :collection
end
resources :virtual_machines do
get 'logins', on: :member
case "$TARGET" in
centos*)
- fpm_depends+=(libcurl-devel postgresql-devel)
+ fpm_depends+=(libcurl-devel postgresql-devel bison make automake gcc gcc-c++)
;;
debian* | ubuntu*)
- fpm_depends+=(libcurl-ssl-dev libpq-dev g++)
+ fpm_depends+=(libcurl-ssl-dev libpq-dev g++ bison zlib1g-dev make)
;;
esac
end
end
- def self.parse_duration durstr, cfgkey:
- duration_re = /-?(\d+(\.\d+)?)(s|m|h)/
+ def self.parse_duration(durstr, cfgkey:)
+ sign = 1
+ if durstr[0] == '-'
+ durstr = durstr[1..-1]
+ sign = -1
+ end
+ duration_re = /\A(\d+(\.\d+)?)(s|m|h)/
dursec = 0
while durstr != ""
mt = duration_re.match durstr
raise "#{cfgkey} not a valid duration: '#{durstr}', accepted suffixes are s, m, h"
end
multiplier = {s: 1, m: 60, h: 3600}
- dursec += (Float(mt[1]) * multiplier[mt[3].to_sym])
+ dursec += (Float(mt[1]) * multiplier[mt[3].to_sym] * sign)
durstr = durstr[mt[0].length..-1]
end
return dursec.seconds
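The signed parsing above can be sketched standalone (assumption: returning plain Float seconds rather than ActiveSupport's `.seconds` Duration, and raising ArgumentError rather than the config-key message):

```ruby
# Parse "1h30m"-style durations. The leading sign is stripped once up
# front and applied to every component as it is accumulated.
def parse_duration(durstr)
  sign = 1
  if durstr[0] == '-'
    durstr = durstr[1..-1]
    sign = -1
  end
  duration_re = /\A(\d+(\.\d+)?)(s|m|h)/
  multiplier = {s: 1, m: 60, h: 3600}
  dursec = 0
  while durstr != ""
    mt = duration_re.match(durstr)
    raise ArgumentError, "not a valid duration: '#{durstr}'" if mt.nil?
    dursec += Float(mt[1]) * multiplier[mt[3].to_sym] * sign
    durstr = durstr[mt[0].length..-1]
  end
  dursec
end
```

For example, `parse_duration("1h30m")` yields 5400.0 and `parse_duration("-90s")` yields -90.0.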
# +model_class+ subclass of ActiveRecord being filtered
#
# Output:
- # Hash with two keys:
+ # Hash with the following keys:
# :cond_out array of SQL fragments for each filter expression
- # :param_out array of values for parameter substitution in cond_out
+ # :param_out array of values for parameter substitution in cond_out
+ # :joins array of joins: either [] or ["JOIN containers ON ..."]
def record_filters filters, model_class
conds_out = []
param_out = []
+ joins = []
- ar_table_name = model_class.table_name
+ model_table_name = model_class.table_name
filters.each do |filter|
attrs_in, operator, operand = filter
if attrs_in == 'any' && operator != '@@'
attrs.each do |attr|
subproperty = attr.split(".", 2)
- col = model_class.columns.select { |c| c.name == subproperty[0] }.first
+ if subproperty.length == 2 && subproperty[0] == 'container' && model_table_name == "container_requests"
+ # attr is "tablename.colname" -- e.g., ["container.state", "=", "Complete"]
+ joins = ["JOIN containers ON container_requests.container_uuid = containers.uuid"]
+ attr_model_class = Container
+ attr_table_name = "containers"
+ subproperty = subproperty[1].split(".", 2)
+ else
+ attr_model_class = model_class
+ attr_table_name = model_table_name
+ end
+
+ attr = subproperty[0]
+ proppath = subproperty[1]
+ col = attr_model_class.columns.select { |c| c.name == attr }.first
- if subproperty.length == 2
+ if proppath
if col.nil? or col.type != :jsonb
- raise ArgumentError.new("Invalid attribute '#{subproperty[0]}' for subproperty filter")
+ raise ArgumentError.new("Invalid attribute '#{attr}' for subproperty filter")
end
- if subproperty[1][0] == "<" and subproperty[1][-1] == ">"
- subproperty[1] = subproperty[1][1..-2]
+ if proppath[0] == "<" and proppath[-1] == ">"
+ proppath = proppath[1..-2]
end
# jsonb search
case operator.downcase
when '=', '!='
not_in = if operator.downcase == "!=" then "NOT " else "" end
- cond_out << "#{not_in}(#{ar_table_name}.#{subproperty[0]} @> ?::jsonb)"
- param_out << SafeJSON.dump({subproperty[1] => operand})
+ cond_out << "#{not_in}(#{attr_table_name}.#{attr} @> ?::jsonb)"
+ param_out << SafeJSON.dump({proppath => operand})
when 'in'
if operand.is_a? Array
operand.each do |opr|
- cond_out << "#{ar_table_name}.#{subproperty[0]} @> ?::jsonb"
- param_out << SafeJSON.dump({subproperty[1] => opr})
+ cond_out << "#{attr_table_name}.#{attr} @> ?::jsonb"
+ param_out << SafeJSON.dump({proppath => opr})
end
else
raise ArgumentError.new("Invalid operand type '#{operand.class}' "\
"for '#{operator}' operator in filters")
end
when '<', '<=', '>', '>='
- cond_out << "#{ar_table_name}.#{subproperty[0]}->? #{operator} ?::jsonb"
- param_out << subproperty[1]
+ cond_out << "#{attr_table_name}.#{attr}->? #{operator} ?::jsonb"
+ param_out << proppath
param_out << SafeJSON.dump(operand)
when 'like', 'ilike'
- cond_out << "#{ar_table_name}.#{subproperty[0]}->>? #{operator} ?"
- param_out << subproperty[1]
+ cond_out << "#{attr_table_name}.#{attr}->>? #{operator} ?"
+ param_out << proppath
param_out << operand
when 'not in'
if operand.is_a? Array
- cond_out << "#{ar_table_name}.#{subproperty[0]}->>? NOT IN (?) OR #{ar_table_name}.#{subproperty[0]}->>? IS NULL"
- param_out << subproperty[1]
+ cond_out << "#{attr_table_name}.#{attr}->>? NOT IN (?) OR #{attr_table_name}.#{attr}->>? IS NULL"
+ param_out << proppath
param_out << operand
- param_out << subproperty[1]
+ param_out << proppath
else
raise ArgumentError.new("Invalid operand type '#{operand.class}' "\
"for '#{operator}' operator in filters")
end
when 'exists'
if operand == true
- cond_out << "jsonb_exists(#{ar_table_name}.#{subproperty[0]}, ?)"
+ cond_out << "jsonb_exists(#{attr_table_name}.#{attr}, ?)"
elsif operand == false
- cond_out << "(NOT jsonb_exists(#{ar_table_name}.#{subproperty[0]}, ?)) OR #{ar_table_name}.#{subproperty[0]} is NULL"
+ cond_out << "(NOT jsonb_exists(#{attr_table_name}.#{attr}, ?)) OR #{attr_table_name}.#{attr} is NULL"
else
raise ArgumentError.new("Invalid operand '#{operand}' for '#{operator}' must be true or false")
end
- param_out << subproperty[1]
+ param_out << proppath
+ when 'contains'
+ cond_out << "#{attr_table_name}.#{attr} @> ?::jsonb OR #{attr_table_name}.#{attr} @> ?::jsonb"
+ param_out << SafeJSON.dump({proppath => operand})
+ param_out << SafeJSON.dump({proppath => [operand]})
else
raise ArgumentError.new("Invalid operator for subproperty search '#{operator}'")
end
elsif operator.downcase == "exists"
if col.type != :jsonb
- raise ArgumentError.new("Invalid attribute '#{subproperty[0]}' for operator '#{operator}' in filter")
+ raise ArgumentError.new("Invalid attribute '#{attr}' for operator '#{operator}' in filter")
end
- cond_out << "jsonb_exists(#{ar_table_name}.#{subproperty[0]}, ?)"
+ cond_out << "jsonb_exists(#{attr_table_name}.#{attr}, ?)"
param_out << operand
else
- if !model_class.searchable_columns(operator).index subproperty[0]
- raise ArgumentError.new("Invalid attribute '#{subproperty[0]}' in filter")
+ if !attr_model_class.searchable_columns(operator).index attr
+ raise ArgumentError.new("Invalid attribute '#{attr}' in filter")
end
case operator.downcase
when '=', '<', '<=', '>', '>=', '!=', 'like', 'ilike'
- attr_type = model_class.attribute_column(attr).type
+ attr_type = attr_model_class.attribute_column(attr).type
operator = '<>' if operator == '!='
if operand.is_a? String
if attr_type == :boolean
end
if operator == '<>'
# explicitly allow NULL
- cond_out << "#{ar_table_name}.#{attr} #{operator} ? OR #{ar_table_name}.#{attr} IS NULL"
+ cond_out << "#{attr_table_name}.#{attr} #{operator} ? OR #{attr_table_name}.#{attr} IS NULL"
else
- cond_out << "#{ar_table_name}.#{attr} #{operator} ?"
+ cond_out << "#{attr_table_name}.#{attr} #{operator} ?"
end
if (# any operator that operates on value rather than
# representation:
end
param_out << operand
elsif operand.nil? and operator == '='
- cond_out << "#{ar_table_name}.#{attr} is null"
+ cond_out << "#{attr_table_name}.#{attr} is null"
elsif operand.nil? and operator == '<>'
- cond_out << "#{ar_table_name}.#{attr} is not null"
+ cond_out << "#{attr_table_name}.#{attr} is not null"
elsif (attr_type == :boolean) and ['=', '<>'].include?(operator) and
[true, false].include?(operand)
- cond_out << "#{ar_table_name}.#{attr} #{operator} ?"
+ cond_out << "#{attr_table_name}.#{attr} #{operator} ?"
param_out << operand
elsif (attr_type == :integer)
- cond_out << "#{ar_table_name}.#{attr} #{operator} ?"
+ cond_out << "#{attr_table_name}.#{attr} #{operator} ?"
param_out << operand
else
raise ArgumentError.new("Invalid operand type '#{operand.class}' "\
end
when 'in', 'not in'
if operand.is_a? Array
- cond_out << "#{ar_table_name}.#{attr} #{operator} (?)"
+ cond_out << "#{attr_table_name}.#{attr} #{operator} (?)"
param_out << operand
if operator == 'not in' and not operand.include?(nil)
# explicitly allow NULL
- cond_out[-1] = "(#{cond_out[-1]} OR #{ar_table_name}.#{attr} IS NULL)"
+ cond_out[-1] = "(#{cond_out[-1]} OR #{attr_table_name}.#{attr} IS NULL)"
end
else
raise ArgumentError.new("Invalid operand type '#{operand.class}' "\
cl = ArvadosModel::kind_class op
if cl
if attr == 'uuid'
- if model_class.uuid_prefix == cl.uuid_prefix
+ if attr_model_class.uuid_prefix == cl.uuid_prefix
cond << "1=1"
else
cond << "1=0"
end
else
# Use a substring query to support remote uuids
- cond << "substring(#{ar_table_name}.#{attr}, 7, 5) = ?"
+ cond << "substring(#{attr_table_name}.#{attr}, 7, 5) = ?"
param_out << cl.uuid_prefix
end
else
conds_out << cond_out.join(' OR ') if cond_out.any?
end
- {:cond_out => conds_out, :param_out => param_out}
+ {:cond_out => conds_out, :param_out => param_out, :joins => joins}
end
end
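The new 'contains' operator emits two jsonb containment clauses, so one filter matches both scalar-valued and list-valued properties. A hypothetical helper (not part of the patch; `JSON.generate` stands in for SafeJSON.dump) showing the SQL fragment and parameters it builds:

```ruby
require 'json'

# Build the condition for ['properties.PATH', 'contains', OPERAND]:
# match either {"PATH": OPERAND} (scalar property) or
# {"PATH": [OPERAND]} (OPERAND is an element of a list property).
def contains_condition(table, column, proppath, operand)
  cond = "#{table}.#{column} @> ?::jsonb OR #{table}.#{column} @> ?::jsonb"
  params = [JSON.generate({proppath => operand}),
            JSON.generate({proppath => [operand]})]
  [cond, params]
end
```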
modified_at: 2015-02-13T17:22:54Z
updated_at: 2015-02-13T17:22:54Z
manifest_text: ". 37b51d194a7513e45b56f6524f2d51f2+3 0:3:bar\n"
- name: collection with prop1 1
+ name: collection with prop2 1
properties:
prop2: 1
modified_at: 2015-02-13T17:22:54Z
updated_at: 2015-02-13T17:22:54Z
manifest_text: ". 37b51d194a7513e45b56f6524f2d51f2+3 0:3:bar\n"
- name: collection with prop1 5
+ name: collection with prop2 5
properties:
prop2: 5
+collection_with_list_prop_odd:
+ uuid: zzzzz-4zz18-listpropertyodd
+ current_version_uuid: zzzzz-4zz18-listpropertyodd
+ portable_data_hash: fa7aeb5140e2848d39b416daeef4ffc5+45
+ owner_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ created_at: 2015-02-13T17:22:54Z
+ modified_by_client_uuid: zzzzz-ozdt8-brczlopd8u8d0jr
+ modified_by_user_uuid: zzzzz-tpzed-d9tiejq69daie8f
+ modified_at: 2015-02-13T17:22:54Z
+ updated_at: 2015-02-13T17:22:54Z
+ manifest_text: ". 37b51d194a7513e45b56f6524f2d51f2+3 0:3:bar\n"
+ name: collection with list property with odd values
+ properties:
+ listprop: [elem1, elem3, 5]
+
+collection_with_list_prop_even:
+ uuid: zzzzz-4zz18-listpropertyeven
+ current_version_uuid: zzzzz-4zz18-listpropertyeven
+ portable_data_hash: fa7aeb5140e2848d39b416daeef4ffc5+45
+ owner_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ created_at: 2015-02-13T17:22:54Z
+ modified_by_client_uuid: zzzzz-ozdt8-brczlopd8u8d0jr
+ modified_by_user_uuid: zzzzz-tpzed-d9tiejq69daie8f
+ modified_at: 2015-02-13T17:22:54Z
+ updated_at: 2015-02-13T17:22:54Z
+ manifest_text: ". 37b51d194a7513e45b56f6524f2d51f2+3 0:3:bar\n"
+ name: collection with list property with even values
+ properties:
+ listprop: [elem2, 4, elem6, ELEM8]
+
+collection_with_listprop_elem1:
+ uuid: zzzzz-4zz18-listpropelem1
+ current_version_uuid: zzzzz-4zz18-listpropelem1
+ portable_data_hash: fa7aeb5140e2848d39b416daeef4ffc5+45
+ owner_uuid: zzzzz-tpzed-xurymjxw79nv3jz
+ created_at: 2015-02-13T17:22:54Z
+ modified_by_client_uuid: zzzzz-ozdt8-brczlopd8u8d0jr
+ modified_by_user_uuid: zzzzz-tpzed-d9tiejq69daie8f
+ modified_at: 2015-02-13T17:22:54Z
+ updated_at: 2015-02-13T17:22:54Z
+ manifest_text: ". 37b51d194a7513e45b56f6524f2d51f2+3 0:3:bar\n"
+ name: collection with list property with string value
+ properties:
+ listprop: elem1
+
collection_with_uri_prop:
uuid: zzzzz-4zz18-withuripropval1
current_version_uuid: zzzzz-4zz18-withuripropval1
owner_uuid: zzzzz-tpzed-000000000000000
uuid: zzzzz-tpzed-xurymjxw79nv3jz
email: active-user@arvados.local
+ modified_by_client_uuid: zzzzz-ozdt8-teyxzyd8qllg11h
+ modified_by_user_uuid: zzzzz-tpzed-xurymjxw79nv3jz
first_name: Active
last_name: User
identity_url: https://active-user.openid.local
is_active: true
is_admin: false
+ modified_at: 2015-03-26 12:34:56.789000000 Z
username: active
prefs:
profile:
api_client_authorizations(:active).api_token)
end
+ test "get current token using SystemRootToken" do
+ Rails.configuration.SystemRootToken = "xyzzy-systemroottoken"
+ authorize_with_token Rails.configuration.SystemRootToken
+ get :current
+ assert_response :success
+ assert_equal(Rails.configuration.SystemRootToken, json_response['api_token'])
+ assert_not_empty(json_response['uuid'])
+ end
+
test "get current token, no auth" do
get :current
assert_response 401
assert_equal api_client_authorizations(:spectator).token, req.runtime_token
end
+ %w(Running Complete).each do |state|
+ test "filter on container.state = #{state}" do
+ authorize_with :active
+ get :index, params: {
+ filters: [['container.state', '=', state]],
+ }
+ assert_response :success
+ assert_operator json_response['items'].length, :>, 0
+ json_response['items'].each do |cr|
+ assert_equal state, Container.find_by_uuid(cr['container_uuid']).state
+ end
+ end
+ end
+
+ test "filter on container success" do
+ authorize_with :active
+ get :index, params: {
+ filters: [
+ ['container.state', '=', 'Complete'],
+ ['container.exit_code', '=', '0'],
+ ],
+ }
+ assert_response :success
+ assert_operator json_response['items'].length, :>, 0
+ json_response['items'].each do |cr|
+ assert_equal 'Complete', Container.find_by_uuid(cr['container_uuid']).state
+ assert_equal 0, Container.find_by_uuid(cr['container_uuid']).exit_code
+ end
+ end
+
+ test "filter on container subproperty runtime_status[foo] = bar" do
+ ctr = containers(:running)
+ act_as_system_user do
+ ctr.update_attributes!(runtime_status: {foo: 'bar'})
+ end
+ authorize_with :active
+ get :index, params: {
+ filters: [
+ ['container.runtime_status.foo', '=', 'bar'],
+ ],
+ }
+ assert_response :success
+ assert_equal [ctr.uuid], json_response['items'].collect { |cr| cr['container_uuid'] }.uniq
+ end
end
['prop2', '<=', 5, [:collection_with_prop2_1, :collection_with_prop2_5], []],
['prop2', '>=', 1, [:collection_with_prop2_1, :collection_with_prop2_5], []],
['<http://schema.org/example>', '=', "value1", [:collection_with_uri_prop], []],
+ ['listprop', 'contains', 'elem1', [:collection_with_list_prop_odd, :collection_with_listprop_elem1], [:collection_with_list_prop_even]],
+ ['listprop', '=', 'elem1', [:collection_with_listprop_elem1], [:collection_with_list_prop_odd]],
+ ['listprop', 'contains', 5, [:collection_with_list_prop_odd], [:collection_with_list_prop_even, :collection_with_listprop_elem1]],
+ ['listprop', 'contains', 'elem2', [:collection_with_list_prop_even], [:collection_with_list_prop_odd, :collection_with_listprop_elem1]],
+ ['listprop', 'contains', 'ELEM2', [], [:collection_with_list_prop_even]],
+ ['listprop', 'contains', 'elem8', [], [:collection_with_list_prop_even]],
+ ['listprop', 'contains', 4, [:collection_with_list_prop_even], [:collection_with_list_prop_odd, :collection_with_listprop_elem1]],
].each do |prop, op, opr, inc, ex|
test "jsonb filter properties.#{prop} #{op} #{opr})" do
@controller = Arvados::V1::CollectionsController.new
end
act_as_system_user do
u = users(:active)
- u.is_active = false
+ u.unsetup
u.save!
end
authorize_with :admin
post :setup, params: {
repo_name: repo_name,
- openid_prefix: 'https://www.google.com/accounts/o8/id',
user: {
uuid: 'zzzzz-tpzed-abcdefghijklmno',
first_name: "in_create_test_first_name",
assert_not_nil created['email'], 'expected non-nil email'
assert_nil created['identity_url'], 'expected no identity_url'
- # arvados#user, repo link and link add user to 'All users' group
- verify_links_added 4
-
- verify_link response_items, 'arvados#user', true, 'permission', 'can_login',
- created['uuid'], created['email'], 'arvados#user', false, 'User'
+ # repo link and a link adding the user to the 'All users' group
+ verify_links_added 3
verify_link response_items, 'arvados#repository', true, 'permission', 'can_manage',
"foo/#{repo_name}", created['uuid'], 'arvados#repository', true, 'Repository'
user: {uuid: 'bogus_uuid'},
repo_name: 'usertestrepo',
vm_uuid: @vm_uuid,
- openid_prefix: 'https://www.google.com/accounts/o8/id'
}
response_body = JSON.parse(@response.body)
response_errors = response_body['errors']
post :setup, params: {
repo_name: 'usertestrepo',
vm_uuid: @vm_uuid,
- openid_prefix: 'https://www.google.com/accounts/o8/id'
}
response_body = JSON.parse(@response.body)
response_errors = response_body['errors']
user: {},
repo_name: 'usertestrepo',
vm_uuid: @vm_uuid,
- openid_prefix: 'https://www.google.com/accounts/o8/id'
}
response_body = JSON.parse(@response.body)
response_errors = response_body['errors']
post :setup, params: {
repo_name: 'usertestrepo',
user: {email: 'foo@example.com'},
- openid_prefix: 'https://www.google.com/accounts/o8/id'
}
assert_response :success
assert_not_nil response_object['uuid'], 'expected uuid for the new user'
assert_equal response_object['email'], 'foo@example.com', 'expected given email'
- # four extra links; system_group, login, group and repo perms
- verify_links_added 4
+ # three extra links; system_group, group and repo perms
+ verify_links_added 3
end
test "setup user with fake vm and expect error" do
repo_name: 'usertestrepo',
vm_uuid: 'no_such_vm',
user: {email: 'foo@example.com'},
- openid_prefix: 'https://www.google.com/accounts/o8/id'
}
response_body = JSON.parse(@response.body)
post :setup, params: {
repo_name: 'usertestrepo',
- openid_prefix: 'https://www.google.com/accounts/o8/id',
vm_uuid: @vm_uuid,
user: {email: 'foo@example.com'}
}
assert_not_nil response_object['uuid'], 'expected uuid for the new user'
assert_equal response_object['email'], 'foo@example.com', 'expected given email'
- # five extra links; system_group, login, group, vm, repo
- verify_links_added 5
+ # four extra links; system_group, group, vm, repo
+ verify_links_added 4
end
test "setup user with valid email, no vm and no repo as input" do
post :setup, params: {
user: {email: 'foo@example.com'},
- openid_prefix: 'https://www.google.com/accounts/o8/id'
}
assert_response :success
assert_not_nil response_object['uuid'], 'expected uuid for new user'
assert_equal response_object['email'], 'foo@example.com', 'expected given email'
- # three extra links; system_group, login, and group
- verify_links_added 3
-
- verify_link response_items, 'arvados#user', true, 'permission', 'can_login',
- response_object['uuid'], response_object['email'], 'arvados#user', false, 'User'
+ # two extra links; system_group and group
+ verify_links_added 2
verify_link response_items, 'arvados#group', true, 'permission', 'can_read',
'All users', response_object['uuid'], 'arvados#group', true, 'Group'
authorize_with :admin
post :setup, params: {
- openid_prefix: 'https://www.google.com/accounts/o8/id',
repo_name: 'usertestrepo',
vm_uuid: @vm_uuid,
user: {
assert_equal 'test_first_name', response_object['first_name'],
'expecting first name'
- # five extra links; system_group, login, group, repo and vm
- verify_links_added 5
+ # four extra links; system_group, group, repo and vm
+ verify_links_added 4
end
test "setup user with an existing user email and check different object is created" do
inactive_user = users(:inactive)
post :setup, params: {
- openid_prefix: 'https://www.google.com/accounts/o8/id',
repo_name: 'usertestrepo',
user: {
email: inactive_user['email']
assert_not_equal response_object['uuid'], inactive_user['uuid'],
'expected different uuid after create operation'
assert_equal inactive_user['email'], response_object['email'], 'expected given email'
- # system_group, openid, group, and repo. No vm link.
- verify_links_added 4
+ # system_group, group, and repo. No vm link.
+ verify_links_added 3
end
test "setup user with openid prefix" do
post :setup, params: {
repo_name: 'usertestrepo',
- openid_prefix: 'http://www.example.com/account',
user: {
first_name: "in_create_test_first_name",
last_name: "test_last_name",
assert_nil created['identity_url'], 'expected no identity_url'
# verify links
- # four new links: system_group, arvados#user, repo, and 'All users' group.
- verify_links_added 4
-
- verify_link response_items, 'arvados#user', true, 'permission', 'can_login',
- created['uuid'], created['email'], 'arvados#user', false, 'User'
+ # three new links: system_group, repo, and 'All users' group.
+ verify_links_added 3
verify_link response_items, 'arvados#repository', true, 'permission', 'can_manage',
'foo/usertestrepo', created['uuid'], 'arvados#repository', true, 'Repository'
nil, created['uuid'], 'arvados#virtualMachine', false, 'VirtualMachine'
end
- test "invoke setup with no openid prefix, expect error" do
- authorize_with :admin
-
- post :setup, params: {
- repo_name: 'usertestrepo',
- user: {
- first_name: "in_create_test_first_name",
- last_name: "test_last_name",
- email: "foo@example.com"
- }
- }
-
- response_body = JSON.parse(@response.body)
- response_errors = response_body['errors']
- assert_not_nil response_errors, 'Expected error in response'
- assert (response_errors.first.include? 'openid_prefix parameter is missing'),
- 'Expected ArgumentError'
- end
-
test "setup user with user, vm and repo and verify links" do
authorize_with :admin
},
vm_uuid: @vm_uuid,
repo_name: 'usertestrepo',
- openid_prefix: 'https://www.google.com/accounts/o8/id'
}
assert_response :success
assert_not_nil created['email'], 'expected non-nil email'
assert_nil created['identity_url'], 'expected no identity_url'
- # five new links: system_group, arvados#user, repo, vm and 'All
- # users' group link
- verify_links_added 5
+ # four new links: system_group, repo, vm and 'All users' group link
+ verify_links_added 4
- verify_link response_items, 'arvados#user', true, 'permission', 'can_login',
- created['uuid'], created['email'], 'arvados#user', false, 'User'
+ # system_group isn't part of the response. See User#add_system_group_permission_link
verify_link response_items, 'arvados#repository', true, 'permission', 'can_manage',
'foo/usertestrepo', created['uuid'], 'arvados#repository', true, 'Repository'
authorize_with :active
post :setup, params: {
- openid_prefix: 'https://www.google.com/accounts/o8/id',
user: {email: 'foo@example.com'}
}
authorize_with :admin
post :setup, params: {
- openid_prefix: 'http://www.example.com/account',
send_notification_email: 'false',
user: {
email: "foo@example.com"
authorize_with :admin
post :setup, params: {
- openid_prefix: 'http://www.example.com/account',
send_notification_email: 'true',
user: {
email: "foo@example.com"
assert_equal Rails.configuration.Users.UserNotifierEmailFrom, setup_email.from[0]
assert_equal 'foo@example.com', setup_email.to[0]
- assert_equal 'Welcome to Arvados - shell account enabled', setup_email.subject
+ assert_equal 'Welcome to Arvados - account enabled', setup_email.subject
assert (setup_email.body.to_s.include? 'Your Arvados shell account has been set up'),
'Expected Your Arvados shell account has been set up in email body'
assert (setup_email.body.to_s.include? "#{Rails.configuration.Services.Workbench1.ExternalURL}users/#{created['uuid']}/virtual_machines"), 'Expected virtual machines url in email body'
assert_nil(users(:project_viewer).redirect_to_user_uuid)
end
+ test "batch update fails for non-admin" do
+ authorize_with(:active)
+ patch(:batch_update, params: {updates: {}})
+ assert_response(403)
+ end
+
+ test "batch update" do
+ existinguuid = 'remot-tpzed-foobarbazwazqux'
+ newuuid = 'remot-tpzed-newnarnazwazqux'
+ act_as_system_user do
+ User.create!(uuid: existinguuid, email: 'root@existing.example.com')
+ end
+
+ authorize_with(:admin)
+ patch(:batch_update,
+ params: {
+ updates: {
+ existinguuid => {
+ 'first_name' => 'root',
+ 'email' => 'root@remot.example.com',
+ 'is_active' => true,
+ 'is_admin' => true,
+ 'prefs' => {'foo' => 'bar'},
+ },
+ newuuid => {
+ 'first_name' => 'noot',
+ 'email' => 'root@remot.example.com',
+ },
+ }})
+ assert_response(:success)
+
+ assert_equal('root', User.find_by_uuid(existinguuid).first_name)
+ assert_equal('root@remot.example.com', User.find_by_uuid(existinguuid).email)
+ assert_equal(true, User.find_by_uuid(existinguuid).is_active)
+ assert_equal(true, User.find_by_uuid(existinguuid).is_admin)
+ assert_equal({'foo' => 'bar'}, User.find_by_uuid(existinguuid).prefs)
+
+ assert_equal('noot', User.find_by_uuid(newuuid).first_name)
+ assert_equal('root@remot.example.com', User.find_by_uuid(newuuid).email)
+ end
+
NON_ADMIN_USER_DATA = ["uuid", "kind", "is_active", "email", "first_name",
"last_name", "username"].sort
oid_login_perms = Link.where(tail_uuid: email,
link_class: 'permission',
name: 'can_login').where("head_uuid like ?", User.uuid_like_pattern)
- if expect_oid_login_perms
- assert oid_login_perms.any?, "expected oid_login_perms"
- else
- assert !oid_login_perms.any?, "expected all oid_login_perms deleted"
- end
+
+ # oid login permission links are no longer created; they should never appear
+ assert !oid_login_perms.any?, "expected all oid_login_perms deleted"
repo_perms = Link.where(tail_uuid: uuid,
link_class: 'permission',
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+require 'test_helper'
+
+class ContainerDispatchTest < ActionDispatch::IntegrationTest
+ test "lock container with SystemRootToken" do
+ Rails.configuration.SystemRootToken = "xyzzy-SystemRootToken"
+ authheaders = {'HTTP_AUTHORIZATION' => "Bearer "+Rails.configuration.SystemRootToken}
+ get("/arvados/v1/api_client_authorizations/current",
+ headers: authheaders)
+ assert_response 200
+ assert_not_empty json_response['uuid']
+
+ system_auth_uuid = json_response['uuid']
+ post("/arvados/v1/containers/#{containers(:queued).uuid}/lock",
+ headers: authheaders)
+ assert_response 200
+ assert_equal system_auth_uuid, Container.find_by_uuid(containers(:queued).uuid).locked_by_uuid
+
+ get("/arvados/v1/containers",
+ params: {filters: SafeJSON.dump([['locked_by_uuid', '=', system_auth_uuid]])},
+ headers: authheaders)
+ assert_response 200
+ assert_equal containers(:queued).uuid, json_response['items'][0]['uuid']
+ assert_equal system_auth_uuid, json_response['items'][0]['locked_by_uuid']
+
+ post("/arvados/v1/containers/#{containers(:queued).uuid}/unlock",
+ headers: authheaders)
+ assert_response 200
+ end
+end
assert_equal 'blarney@example.com', json_response['email']
end
+ test 'remote user is deactivated' do
+ Rails.configuration.RemoteClusters['zbbbb'].ActivateUsers = true
+ get '/arvados/v1/users/current',
+ params: {format: 'json'},
+ headers: auth(remote: 'zbbbb')
+ assert_response :success
+ assert_equal true, json_response['is_active']
+
+ # simulate the remote cluster deactivating the user
+ @stub_content[:is_active] = false
+
+ # simulate cache expiry
+ ApiClientAuthorization.where(
+ uuid: salted_active_token(remote: 'zbbbb').split('/')[1]).
+ update_all(expires_at: db_current_time - 1.minute)
+
+ # re-authorize after cache expires
+ get '/arvados/v1/users/current',
+ params: {format: 'json'},
+ headers: auth(remote: 'zbbbb')
+ assert_equal false, json_response['is_active']
+ end
+
test 'authenticate with remote token, remote username conflicts with local' do
@stub_content[:username] = 'active'
get '/arvados/v1/users/current',
refute_includes(group_uuids, groups(:testusergroup_admins).uuid)
end
+ test 'do not auto-activate user from untrusted cluster' do
+ Rails.configuration.RemoteClusters['zbbbb'].AutoSetupNewUsers = false
+ Rails.configuration.RemoteClusters['zbbbb'].ActivateUsers = false
+ get '/arvados/v1/users/current',
+ params: {format: 'json'},
+ headers: auth(remote: 'zbbbb')
+ assert_response :success
+ assert_equal 'zbbbb-tpzed-000000000000000', json_response['uuid']
+ assert_equal false, json_response['is_admin']
+ assert_equal false, json_response['is_active']
+ assert_equal 'foo@example.com', json_response['email']
+ assert_equal 'barney', json_response['username']
+ post '/arvados/v1/users/zbbbb-tpzed-000000000000000/activate',
+ params: {format: 'json'},
+ headers: auth(remote: 'zbbbb')
+ assert_response 422
+ end
+
test 'auto-activate user from trusted cluster' do
Rails.configuration.RemoteClusters['zbbbb'].ActivateUsers = true
get '/arvados/v1/users/current',
].each do |testcase|
test "user auto-activate #{testcase.inspect}" do
# Configure auto_setup behavior according to testcase[:cfg]
+ Rails.configuration.Users.NewUsersAreActive = false
Rails.configuration.Users.AutoSetupNewUsers = testcase[:cfg][:auto]
Rails.configuration.Users.AutoSetupNewUsersWithVmUUID =
(testcase[:cfg][:vm] ? virtual_machines(:testvm).uuid : "")
post "/arvados/v1/users/setup",
params: {
repo_name: repo_name,
- openid_prefix: 'https://www.google.com/accounts/o8/id',
user: {
uuid: 'zzzzz-tpzed-abcdefghijklmno',
first_name: "in_create_test_first_name",
assert_not_nil created['email'], 'expected non-nil email'
assert_nil created['identity_url'], 'expected no identity_url'
- # arvados#user, repo link and link add user to 'All users' group
- verify_link response_items, 'arvados#user', true, 'permission', 'can_login',
- created['uuid'], created['email'], 'arvados#user', false, 'arvados#user'
+ # repo link and a link adding the user to the 'All users' group
verify_link response_items, 'arvados#repository', true, 'permission', 'can_manage',
'foo/usertestrepo', created['uuid'], 'arvados#repository', true, 'Repository'
params: {
repo_name: repo_name,
vm_uuid: virtual_machines(:testvm).uuid,
- openid_prefix: 'https://www.google.com/accounts/o8/id',
user: {
uuid: 'zzzzz-tpzed-abcdefghijklmno',
first_name: "in_create_test_first_name",
params: {
repo_name: repo_name,
vm_uuid: virtual_machines(:testvm).uuid,
- openid_prefix: 'https://www.google.com/accounts/o8/id',
uuid: 'zzzzz-tpzed-abcdefghijklmno',
},
headers: auth(:admin)
test "setup user in multiple steps and verify response" do
post "/arvados/v1/users/setup",
params: {
- openid_prefix: 'http://www.example.com/account',
user: {
email: "foo@example.com"
}
assert_not_nil created['email'], 'expected non-nil email'
assert_equal created['email'], 'foo@example.com', 'expected input email'
- # three new links: system_group, arvados#user, and 'All users' group.
- verify_link response_items, 'arvados#user', true, 'permission', 'can_login',
- created['uuid'], created['email'], 'arvados#user', false, 'arvados#user'
+ # two new links: system_group and 'All users' group.
verify_link response_items, 'arvados#group', true, 'permission', 'can_read',
'All users', created['uuid'], 'arvados#group', true, 'Group'
# invoke setup with a repository
post "/arvados/v1/users/setup",
params: {
- openid_prefix: 'http://www.example.com/account',
repo_name: 'newusertestrepo',
uuid: created['uuid']
},
post "/arvados/v1/users/setup",
params: {
vm_uuid: virtual_machines(:testvm).uuid,
- openid_prefix: 'http://www.example.com/account',
user: {
email: 'junk_email'
},
repo_name: 'newusertestrepo',
vm_uuid: virtual_machines(:testvm).uuid,
user: {email: 'foo@example.com'},
- openid_prefix: 'https://www.google.com/accounts/o8/id'
},
headers: auth(:admin)
assert_not_nil created['uuid'], 'expected uuid for the new user'
assert_equal created['email'], 'foo@example.com', 'expected given email'
- # five extra links: system_group, login, group, repo and vm
- verify_link response_items, 'arvados#user', true, 'permission', 'can_login',
- created['uuid'], created['email'], 'arvados#user', false, 'arvados#user'
+ # four extra links: system_group, group, repo and vm
verify_link response_items, 'arvados#group', true, 'permission', 'can_read',
'All users', created['uuid'], 'arvados#group', true, 'Group'
end
+ test "cannot set is_active to false directly" do
+ post('/arvados/v1/users',
+ params: {
+ user: {
+ email: "bob@example.com",
+ username: "bobby"
+ },
+ },
+ headers: auth(:admin))
+ assert_response(:success)
+ user = json_response
+ assert_equal false, user['is_active']
+
+ token = act_as_system_user do
+ ApiClientAuthorization.create!(user: User.find_by_uuid(user['uuid']), api_client: ApiClient.all.first).api_token
+ end
+ post("/arvados/v1/user_agreements/sign",
+ params: {uuid: 'zzzzz-4zz18-t68oksiu9m80s4y'},
+ headers: {"HTTP_AUTHORIZATION" => "Bearer #{token}"})
+ assert_response :success
+
+ post("/arvados/v1/users/#{user['uuid']}/activate",
+ params: {},
+ headers: auth(:admin))
+ assert_response(:success)
+ user = json_response
+ assert_equal true, user['is_active']
+
+ put("/arvados/v1/users/#{user['uuid']}",
+ params: {
+ user: {is_active: false}
+ },
+ headers: auth(:admin))
+ assert_response 422
+ end
+
+ test "cannot self activate when AutoSetupNewUsers is false" do
+ Rails.configuration.Users.NewUsersAreActive = false
+ Rails.configuration.Users.AutoSetupNewUsers = false
+
+ user = nil
+ token = nil
+ act_as_system_user do
+ user = User.create!(email: "bob@example.com", username: "bobby")
+ ap = ApiClientAuthorization.create!(user: user, api_client: ApiClient.all.first)
+ token = ap.api_token
+ end
+
+ get("/arvados/v1/users/#{user['uuid']}",
+ params: {},
+ headers: {"HTTP_AUTHORIZATION" => "Bearer #{token}"})
+ assert_response(:success)
+ user = json_response
+ assert_equal false, user['is_active']
+
+ post("/arvados/v1/users/#{user['uuid']}/activate",
+ params: {},
+ headers: {"HTTP_AUTHORIZATION" => "Bearer #{token}"})
+ assert_response 422
+ assert_match(/Cannot activate without being invited/, json_response['errors'][0])
+ end
+
+ test "cannot self activate after unsetup" do
+ Rails.configuration.Users.NewUsersAreActive = false
+ Rails.configuration.Users.AutoSetupNewUsers = false
+
+ user = nil
+ token = nil
+ act_as_system_user do
+ user = User.create!(email: "bob@example.com", username: "bobby")
+ ap = ApiClientAuthorization.create!(user: user, api_client_id: 0)
+ token = ap.api_token
+ end
+
+ post("/arvados/v1/users/setup",
+ params: {uuid: user['uuid']},
+ headers: auth(:admin))
+ assert_response :success
+
+ post("/arvados/v1/users/#{user['uuid']}/activate",
+ params: {},
+ headers: {"HTTP_AUTHORIZATION" => "Bearer #{token}"})
+ assert_response 403
+ assert_match(/Cannot activate without user agreements/, json_response['errors'][0])
+
+ post("/arvados/v1/user_agreements/sign",
+ params: {uuid: 'zzzzz-4zz18-t68oksiu9m80s4y'},
+ headers: {"HTTP_AUTHORIZATION" => "Bearer #{token}"})
+ assert_response :success
+
+ post("/arvados/v1/users/#{user['uuid']}/activate",
+ params: {},
+ headers: {"HTTP_AUTHORIZATION" => "Bearer #{token}"})
+ assert_response :success
+
+ get("/arvados/v1/users/#{user['uuid']}",
+ params: {},
+ headers: {"HTTP_AUTHORIZATION" => "Bearer #{token}"})
+ assert_response(:success)
+ user = json_response
+ assert_equal true, user['is_active']
+
+ post("/arvados/v1/users/#{user['uuid']}/unsetup",
+ params: {},
+ headers: auth(:admin))
+ assert_response :success
+
+ get("/arvados/v1/users/#{user['uuid']}",
+ params: {},
+ headers: {"HTTP_AUTHORIZATION" => "Bearer #{token}"})
+ assert_response(:success)
+ user = json_response
+ assert_equal false, user['is_active']
+
+ post("/arvados/v1/users/#{user['uuid']}/activate",
+ params: {},
+ headers: {"HTTP_AUTHORIZATION" => "Bearer #{token}"})
+ assert_response 422
+ assert_match(/Cannot activate without being invited/, json_response['errors'][0])
+ end
+
end
assert_empty ApiClientAuthorization.where(uuid: api_client_authorizations(:expired).uuid)
end
+ test "accepts SystemRootToken" do
+ assert_nil ApiClientAuthorization.validate(token: "xxxSystemRootTokenxxx")
+
+ # will create a new ApiClientAuthorization record
+ Rails.configuration.SystemRootToken = "xxxSystemRootTokenxxx"
+
+ auth = ApiClientAuthorization.validate(token: "xxxSystemRootTokenxxx")
+ assert_equal "xxxSystemRootTokenxxx", auth.api_token
+ assert_equal User.find_by_uuid(system_user_uuid).id, auth.user_id
+ assert auth.api_client.is_trusted
+
+ # now change the token and try to use the old one first
+ Rails.configuration.SystemRootToken = "newxxxSystemRootTokenxxx"
+
+ # old token will fail
+ assert_nil ApiClientAuthorization.validate(token: "xxxSystemRootTokenxxx")
+ # new token will work
+ auth = ApiClientAuthorization.validate(token: "newxxxSystemRootTokenxxx")
+ assert_equal "newxxxSystemRootTokenxxx", auth.api_token
+ assert_equal User.find_by_uuid(system_user_uuid).id, auth.user_id
+
+ # now change the token again and use the new one first
+ Rails.configuration.SystemRootToken = "new2xxxSystemRootTokenxxx"
+
+ # new token will work
+ auth = ApiClientAuthorization.validate(token: "new2xxxSystemRootTokenxxx")
+ assert_equal "new2xxxSystemRootTokenxxx", auth.api_token
+ assert_equal User.find_by_uuid(system_user_uuid).id, auth.user_id
+ # old token will fail
+ assert_nil ApiClientAuthorization.validate(token: "newxxxSystemRootTokenxxx")
+ end
+
end
require 'test_helper'
class ApiClientTest < ActiveSupport::TestCase
- # test "the truth" do
- # assert true
- # end
+ include CurrentApiClient
+
+ test "configured workbench is trusted" do
+ Rails.configuration.Services.Workbench1.ExternalURL = URI("http://wb1.example.com")
+ Rails.configuration.Services.Workbench2.ExternalURL = URI("https://wb2.example.com:443")
+
+ act_as_system_user do
+ [["http://wb0.example.com", false],
+ ["http://wb1.example.com", true],
+ ["http://wb2.example.com", false],
+ ["https://wb2.example.com", true],
+ ["https://wb2.example.com/", true],
+ ].each do |pfx, result|
+ a = ApiClient.create(url_prefix: pfx, is_trusted: false)
+ assert_equal result, a.is_trusted
+ end
+
+ a = ApiClient.create(url_prefix: "http://example.com", is_trusted: true)
+ a.save!
+ a.reload
+ assert a.is_trusted
+ end
+ end
end
end
end
end
+
+ test "collection names must be displayable in a filesystem" do
+ set_user_from_auth :active
+ ["", "{SOLIDUS}"].each do |subst|
+ Rails.configuration.Collections.ForwardSlashNameSubstitution = subst
+ c = Collection.create
+ [[nil, true],
+ ["", true],
+ [".", false],
+ ["..", false],
+ ["...", true],
+ ["..z..", true],
+ ["foo/bar", subst != ""],
+ ["../..", subst != ""],
+ ["/", subst != ""],
+ ].each do |name, valid|
+ c.name = name
+ assert_equal valid, c.valid?, "#{name.inspect} should be #{valid ? "valid" : "invalid"}"
+ end
+ end
+ end
end
end
[
- 'https://github.com/curoverse/arvados.git',
- 'http://github.com/curoverse/arvados.git',
- 'git://github.com/curoverse/arvados.git',
+ 'https://github.com/arvados/arvados.git',
+ 'http://github.com/arvados/arvados.git',
+ 'git://github.com/arvados/arvados.git',
].each do |url|
test "find_commit_range uses fetch_remote_repository to get #{url}" do
fake_gitdir = repositories(:foo).server_path
'/bogus/repo',
'/not/allowed/.git',
'file:///not/allowed.git',
- 'git.curoverse.com/arvados.git',
- 'github.com/curoverse/arvados.git',
+ 'git.arvados.org/arvados.git',
+ 'github.com/arvados/arvados.git',
].each do |url|
test "find_commit_range skips fetch_remote_repository for #{url}" do
CommitsHelper::expects(:fetch_remote_repository).never
assert_equal log.owner_uuid, project.uuid, "Container log should be copied to #{project.uuid}"
end
+ # This tests bug report #16144
+ test "Request is finalized when its container is completed even when log & output don't exist" do
+ set_user_from_auth :active
+ project = groups(:private)
+ cr = create_minimal_req!(owner_uuid: project.uuid,
+ priority: 1,
+ state: "Committed")
+ assert_equal users(:active).uuid, cr.modified_by_user_uuid
+
+ output_pdh = '1f4b0bc7583c2a7f9102c395f4ffc5e3+45'
+ log_pdh = 'fa7aeb5140e2848d39b416daeef4ffc5+45'
+
+ c = act_as_system_user do
+ c = Container.find_by_uuid(cr.container_uuid)
+ c.update_attributes!(state: Container::Locked)
+ c.update_attributes!(state: Container::Running,
+ output: output_pdh,
+ log: log_pdh)
+ c
+ end
+
+ cr.reload
+ assert_equal "Committed", cr.state
+
+ act_as_system_user do
+ Collection.where(portable_data_hash: output_pdh).delete_all
+ Collection.where(portable_data_hash: log_pdh).delete_all
+ c.update_attributes!(state: Container::Complete)
+ end
+
+ cr.reload
+ assert_equal "Final", cr.state
+ end
+
+ # This tests bug report #16144
+ test "Can destroy CR even if its container doesn't exist" do
+ set_user_from_auth :active
+ project = groups(:private)
+ cr = create_minimal_req!(owner_uuid: project.uuid,
+ priority: 1,
+ state: "Committed")
+ assert_equal users(:active).uuid, cr.modified_by_user_uuid
+
+ c = act_as_system_user do
+ c = Container.find_by_uuid(cr.container_uuid)
+ c.update_attributes!(state: Container::Locked)
+ c.update_attributes!(state: Container::Running)
+ c
+ end
+
+ cr.reload
+ assert_equal "Committed", cr.state
+
+ cr_uuid = cr.uuid
+ act_as_system_user do
+ Container.find_by_uuid(cr.container_uuid).destroy
+ cr.destroy
+ end
+ assert_nil ContainerRequest.find_by_uuid(cr_uuid)
+ end
+
test "Container makes container request, then is cancelled" do
set_user_from_auth :active
cr = create_minimal_req!(priority: 5, state: "Committed", container_count_max: 1)
assert_equal cr_nr_was-1, ContainerRequest.all.length
assert_equal job_nr_was-1, Job.all.length
end
+
+ test "project names must be displayable in a filesystem" do
+ set_user_from_auth :active
+ ["", "{SOLIDUS}"].each do |subst|
+ Rails.configuration.Collections.ForwardSlashNameSubstitution = subst
+ g = Group.create
+ [[nil, true],
+ ["", true],
+ [".", false],
+ ["..", false],
+ ["...", true],
+ ["..z..", true],
+ ["foo/bar", subst != ""],
+ ["../..", subst != ""],
+ ["/", subst != ""],
+ ].each do |name, valid|
+ g.name = name
+ g.group_class = "role"
+ assert_equal true, g.valid?
+ g.group_class = "project"
+ assert_equal valid, g.valid?, "#{name.inspect} should be #{valid ? "valid" : "invalid"}"
+ end
+ end
+ end
end
a = create :active_user, first_name: "A"
b = create :active_user, first_name: "B"
other = create :active_user, first_name: "OTHER"
+
+ assert_empty(User.readable_by(b).where(uuid: a.uuid),
+ "#{b.first_name} should not be able to see 'a' in the user list")
+ assert_empty(User.readable_by(a).where(uuid: b.uuid),
+ "#{a.first_name} should not be able to see 'b' in the user list")
+
act_as_system_user do
g = create :group
[a,b].each do |u|
name: 'can_read', head_uuid: u.uuid, tail_uuid: g.uuid)
end
end
+
+ assert_not_empty(User.readable_by(b).where(uuid: a.uuid),
+ "#{b.first_name} should be able to see 'a' in the user list")
+ assert_not_empty(User.readable_by(a).where(uuid: b.uuid),
+ "#{a.first_name} should be able to see 'b' in the user list")
+
a_specimen = act_as_user a do
Specimen.create!
end
# Test the body of the sent email contains what we expect it to
assert_equal Rails.configuration.Users.UserNotifierEmailFrom, email.from.first
assert_equal user.email, email.to.first
- assert_equal 'Welcome to Arvados - shell account enabled', email.subject
+ assert_equal 'Welcome to Arvados - account enabled', email.subject
assert (email.body.to_s.include? 'Your Arvados shell account has been set up'),
'Expected Your Arvados shell account has been set up in email body'
assert (email.body.to_s.include? Rails.configuration.Services.Workbench1.ExternalURL.to_s),
set_user_from_auth :admin
email = 'foo@example.com'
- openid_prefix = 'http://openid/prefix'
user = User.create ({uuid: 'zzzzz-tpzed-abcdefghijklmno', email: email})
vm = VirtualMachine.create
- response = user.setup(openid_prefix: openid_prefix,
- repo_name: 'foo/testrepo',
+ response = user.setup(repo_name: 'foo/testrepo',
vm_uuid: vm.uuid)
resp_user = find_obj_in_resp response, 'User'
verify_user resp_user, email
- oid_login_perm = find_obj_in_resp response, 'Link', 'arvados#user'
-
- verify_link oid_login_perm, 'permission', 'can_login', resp_user[:email],
- resp_user[:uuid]
-
- assert_equal openid_prefix, oid_login_perm[:properties]['identity_url_prefix'],
- 'expected identity_url_prefix not found for oid_login_perm'
-
group_perm = find_obj_in_resp response, 'Link', 'arvados#group'
verify_link group_perm, 'permission', 'can_read', resp_user[:uuid], nil
set_user_from_auth :admin
email = 'foo@example.com'
- openid_prefix = 'http://openid/prefix'
user = User.create ({uuid: 'zzzzz-tpzed-abcdefghijklmno', email: email})
verify_link resp_link, 'permission', 'can_login', email, bad_uuid
- response = user.setup(openid_prefix: openid_prefix,
- repo_name: 'foo/testrepo',
+ response = user.setup(repo_name: 'foo/testrepo',
vm_uuid: vm.uuid)
resp_user = find_obj_in_resp response, 'User'
verify_user resp_user, email
- oid_login_perm = find_obj_in_resp response, 'Link', 'arvados#user'
-
- verify_link oid_login_perm, 'permission', 'can_login', resp_user[:email],
- resp_user[:uuid]
-
- assert_equal openid_prefix, oid_login_perm[:properties]['identity_url_prefix'],
- 'expected identity_url_prefix not found for oid_login_perm'
-
group_perm = find_obj_in_resp response, 'Link', 'arvados#group'
verify_link group_perm, 'permission', 'can_read', resp_user[:uuid], nil
set_user_from_auth :admin
email = 'foo@example.com'
- openid_prefix = 'http://openid/prefix'
user = User.create ({uuid: 'zzzzz-tpzed-abcdefghijklmno', email: email})
- response = user.setup(openid_prefix: openid_prefix)
+ response = user.setup()
resp_user = find_obj_in_resp response, 'User'
verify_user resp_user, email
- oid_login_perm = find_obj_in_resp response, 'Link', 'arvados#user'
- verify_link oid_login_perm, 'permission', 'can_login', resp_user[:email],
- resp_user[:uuid]
- assert_equal openid_prefix, oid_login_perm[:properties]['identity_url_prefix'],
- 'expected identity_url_prefix not found for oid_login_perm'
-
group_perm = find_obj_in_resp response, 'Link', 'arvados#group'
verify_link group_perm, 'permission', 'can_read', resp_user[:uuid], nil
# invoke setup again with repo_name
- response = user.setup(openid_prefix: openid_prefix,
- repo_name: 'foo/testrepo')
+ response = user.setup(repo_name: 'foo/testrepo')
resp_user = find_obj_in_resp response, 'User', nil
verify_user resp_user, email
assert_equal user.uuid, resp_user[:uuid], 'expected uuid not found'
# invoke setup again with a vm_uuid
vm = VirtualMachine.create
- response = user.setup(openid_prefix: openid_prefix,
- repo_name: 'foo/testrepo',
+ response = user.setup(repo_name: 'foo/testrepo',
vm_uuid: vm.uuid)
resp_user = find_obj_in_resp response, 'User', nil
verify_link_exists(Rails.configuration.Users.AutoSetupNewUsers || active,
groups(:all_users).uuid, user.uuid,
"permission", "can_read")
- # Check for OID login link.
- verify_link_exists(Rails.configuration.Users.AutoSetupNewUsers || active,
- user.uuid, user.email, "permission", "can_login")
+
# Check for repository.
if named_repo = (prior_repo or
Repository.where(name: expect_repo_name).first)
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
)
type authHandler struct {
"path/filepath"
"strings"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
check "gopkg.in/check.v1"
)
"net/http/cgi"
"os"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// gitHandler is an http.Handler that invokes git-http-backend (or
"net/url"
"regexp"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
check "gopkg.in/check.v1"
)
"os/exec"
"strings"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
check "gopkg.in/check.v1"
)
"strings"
"testing"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
check "gopkg.in/check.v1"
)
"fmt"
"os"
- "git.curoverse.com/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/config"
"github.com/coreos/go-systemd/daemon"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
import (
"net/http"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/health"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/health"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
)
type server struct {
"os"
"os/exec"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
check "gopkg.in/check.v1"
)
"syscall"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/dispatch"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/dispatch"
"github.com/sirupsen/logrus"
)
"testing"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/dispatch"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/dispatch"
"github.com/sirupsen/logrus"
. "gopkg.in/check.v1"
)
"strings"
"time"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/lib/dispatchcloud"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/dispatch"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/dispatchcloud"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/dispatch"
"github.com/coreos/go-systemd/daemon"
"github.com/ghodss/yaml"
"github.com/sirupsen/logrus"
[Service]
Type=notify
ExecStart=/usr/bin/crunch-dispatch-slurm
+# Set a reasonable default for the open file limit
+LimitNOFILE=65536
Restart=always
RestartSec=1
LimitNOFILE=1000000
"testing"
"time"
- "git.curoverse.com/arvados.git/lib/dispatchcloud"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/dispatch"
+ "git.arvados.org/arvados.git/lib/dispatchcloud"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/dispatch"
"github.com/sirupsen/logrus"
. "gopkg.in/check.v1"
)
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// SlurmNodeTypeFeatureKludge ensures SLURM accepts every instance
"os"
)
-var exampleConfigFile = []byte(`
- {
- "Client": {
- "APIHost": "zzzzz.arvadosapi.com",
- "AuthToken": "xyzzy",
- "Insecure": false
- "KeepServiceURIs": [],
- },
- "CrunchRunCommand": ["crunch-run"],
- "PollPeriod": "10s",
- "SbatchArguments": ["--partition=foo", "--exclude=node13"],
- "ReserveExtraRAM": 268435456,
- "BatchSize": 10000
- }`)
-
func usage(fs *flag.FlagSet) {
fmt.Fprintf(os.Stderr, `
crunch-dispatch-slurm runs queued Arvados containers by submitting
`)
fs.PrintDefaults()
fmt.Fprintf(os.Stderr, `
-Example config file:
-%s
-`, exampleConfigFile)
+
+For configuration instructions see https://doc.arvados.org/install/crunch2-slurm/install-dispatch.html
+`)
}
"syscall"
"time"
- "git.curoverse.com/arvados.git/lib/crunchstat"
+ "git.arvados.org/arvados.git/lib/crunchstat"
)
const MaxLogLine = 1 << 14 // Child stderr lines >16KiB will be split
Description=Arvados Docker Image Cleaner
Documentation=https://doc.arvados.org/
After=network.target
-#AssertPathExists=/etc/arvados/docker-cleaner/docker-cleaner.json
# systemd==229 (ubuntu:xenial) obeys StartLimitInterval in the [Unit] section
StartLimitInterval=0
import os
import re
-def git_latest_tag():
- gittags = subprocess.check_output(['git', 'tag', '-l']).split()
- gittags.sort(key=lambda s: [int(u) for u in s.split(b'.')],reverse=True)
- return str(next(iter(gittags)).decode('utf-8'))
-
-def git_timestamp_tag():
- gitinfo = subprocess.check_output(
- ['git', 'log', '--first-parent', '--max-count=1',
- '--format=format:%ct', '.']).strip()
- return str(time.strftime('.%Y%m%d%H%M%S', time.gmtime(int(gitinfo))))
+def git_version_at_commit():
+ curdir = os.path.dirname(os.path.abspath(__file__))
+ myhash = subprocess.check_output(['git', 'log', '-n1', '--first-parent',
+ '--format=%H', curdir]).strip()
+ myversion = subprocess.check_output([curdir+'/../../build/version-at-commit.sh', myhash]).strip().decode()
+ return myversion
def save_version(setup_dir, module, v):
- with open(os.path.join(setup_dir, module, "_version.py"), 'w') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'wt') as fp:
return fp.write("__version__ = '%s'\n" % v)
def read_version(setup_dir, module):
- with open(os.path.join(setup_dir, module, "_version.py"), 'r') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'rt') as fp:
return re.match("__version__ = '(.*)'$", fp.read()).groups()[0]
def get_version(setup_dir, module):
save_version(setup_dir, module, env_version)
else:
try:
- save_version(setup_dir, module, git_latest_tag() + git_timestamp_tag())
- except subprocess.CalledProcessError:
+ save_version(setup_dir, module, git_version_at_commit())
+ except (subprocess.CalledProcessError, OSError):
pass
return read_version(setup_dir, module)
author="Arvados",
author_email="info@arvados.org",
url="https://arvados.org",
- download_url="https://github.com/curoverse/arvados.git",
+ download_url="https://github.com/arvados/arvados.git",
license="GNU Affero General Public License version 3.0",
packages=find_packages(),
entry_points={
LLFUSE_VERSION_0 = llfuse.__version__.startswith('0')
-from .fusedir import sanitize_filename, Directory, CollectionDirectory, TmpCollectionDirectory, MagicDirectory, TagsDirectory, ProjectDirectory, SharedDirectory, CollectionDirectoryBase
+from .fusedir import Directory, CollectionDirectory, TmpCollectionDirectory, MagicDirectory, TagsDirectory, ProjectDirectory, SharedDirectory, CollectionDirectoryBase
from .fusefile import StringFile, FuseArvadosFile
_logger = logging.getLogger('arvados.arvados_fuse')
return
e = self.operations.inodes.add_entry(Directory(
- llfuse.ROOT_INODE, self.operations.inodes))
+ llfuse.ROOT_INODE, self.operations.inodes, self.api.config))
dir_args[0] = e.inode
for name in self.args.mount_by_id:
# appear as underscores in the fuse mount.)
_disallowed_filename_characters = re.compile('[\x00/]')
-# '.' and '..' are not reachable if API server is newer than #6277
-def sanitize_filename(dirty):
- """Replace disallowed filename characters with harmless "_"."""
- if dirty is None:
- return None
- elif dirty == '':
- return '_'
- elif dirty == '.':
- return '_'
- elif dirty == '..':
- return '__'
- else:
- return _disallowed_filename_characters.sub('_', dirty)
-
class Directory(FreshBase):
"""Generic directory object, backed by a dict.
and the value referencing a File or Directory object.
"""
- def __init__(self, parent_inode, inodes):
+ def __init__(self, parent_inode, inodes, apiconfig):
"""parent_inode is the integer inode number"""
super(Directory, self).__init__()
raise Exception("parent_inode should be an int")
self.parent_inode = parent_inode
self.inodes = inodes
+ self.apiconfig = apiconfig
self._entries = {}
self._mtime = time.time()
- # Overriden by subclasses to implement logic to update the entries dict
- # when the directory is stale
+ def forward_slash_subst(self):
+ if not hasattr(self, '_fsns'):
+ self._fsns = None
+ config = self.apiconfig()
+ try:
+ self._fsns = config["Collections"]["ForwardSlashNameSubstitution"]
+ except KeyError:
+ # old API server with no FSNS config
+ self._fsns = '_'
+ else:
+ if self._fsns == '' or self._fsns == '/':
+ self._fsns = None
+ return self._fsns
+
+ def unsanitize_filename(self, incoming):
+ """Replace ForwardSlashNameSubstitution value with /"""
+ fsns = self.forward_slash_subst()
+ if isinstance(fsns, str):
+ return incoming.replace(fsns, '/')
+ else:
+ return incoming
+
+ def sanitize_filename(self, dirty):
+ """Replace disallowed filename characters according to
+        ForwardSlashNameSubstitution in self.apiconfig."""
+ # '.' and '..' are not reachable if API server is newer than #6277
+ if dirty is None:
+ return None
+ elif dirty == '':
+ return '_'
+ elif dirty == '.':
+ return '_'
+ elif dirty == '..':
+ return '__'
+ else:
+ fsns = self.forward_slash_subst()
+ if isinstance(fsns, str):
+ dirty = dirty.replace('/', fsns)
+ return _disallowed_filename_characters.sub('_', dirty)
+
+
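The sanitize/unsanitize pair added above can be illustrated with a standalone sketch. This is a simplified model, not the real methods: the actual code reads the substitution string from the API server's `Collections.ForwardSlashNameSubstitution` config and caches it, whereas here `fsns` is just a parameter with an assumed `'[SLASH]'` value.

```python
import re

# Same character class the FUSE driver disallows in filenames.
_disallowed = re.compile('[\x00/]')

def sanitize(name, fsns='[SLASH]'):
    # Map special names, substitute '/' with the configured string,
    # then replace any remaining disallowed characters with '_'.
    if name is None:
        return None
    if name in ('', '.'):
        return '_'
    if name == '..':
        return '__'
    if isinstance(fsns, str):
        name = name.replace('/', fsns)
    return _disallowed.sub('_', name)

def unsanitize(name, fsns='[SLASH]'):
    # Reverse the substitution so FUSE names map back to API names.
    if isinstance(fsns, str):
        return name.replace(fsns, '/')
    return name
```

Note the asymmetry: sanitizing is lossy for names that already contain the substitution string, which is why the project-directory lookup has to try both candidates.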
+ # Overridden by subclasses to implement logic to update the
+ # entries dict when the directory is stale
@use_counter
def update(self):
pass
self._entries = {}
changed = False
for i in items:
- name = sanitize_filename(fn(i))
+ name = self.sanitize_filename(fn(i))
if name:
if name in oldentries and same(oldentries[name], i):
# move existing directory entry over
"""
- def __init__(self, parent_inode, inodes, collection):
- super(CollectionDirectoryBase, self).__init__(parent_inode, inodes)
+ def __init__(self, parent_inode, inodes, apiconfig, collection):
+ super(CollectionDirectoryBase, self).__init__(parent_inode, inodes, apiconfig)
+ self.apiconfig = apiconfig
self.collection = collection
def new_entry(self, name, item, mtime):
- name = sanitize_filename(name)
+ name = self.sanitize_filename(name)
if hasattr(item, "fuse_entry") and item.fuse_entry is not None:
if item.fuse_entry.dead is not True:
raise Exception("Can only reparent dead inode entry")
item.fuse_entry.dead = False
self._entries[name] = item.fuse_entry
elif isinstance(item, arvados.collection.RichCollectionBase):
- self._entries[name] = self.inodes.add_entry(CollectionDirectoryBase(self.inode, self.inodes, item))
+ self._entries[name] = self.inodes.add_entry(CollectionDirectoryBase(self.inode, self.inodes, self.apiconfig, item))
self._entries[name].populate(mtime)
else:
self._entries[name] = self.inodes.add_entry(FuseArvadosFile(self.inode, item, mtime))
def on_event(self, event, collection, name, item):
if collection == self.collection:
- name = sanitize_filename(name)
+ name = self.sanitize_filename(name)
_logger.debug("collection notify %s %s %s %s", event, collection, name, item)
with llfuse.lock:
if event == arvados.collection.ADD:
"""Represents the root of a directory tree representing a collection."""
def __init__(self, parent_inode, inodes, api, num_retries, collection_record=None, explicit_collection=None):
- super(CollectionDirectory, self).__init__(parent_inode, inodes, None)
+ super(CollectionDirectory, self).__init__(parent_inode, inodes, api.config, None)
self.api = api
self.num_retries = num_retries
self.collection_record_file = None
keep_client=api_client.keep,
num_retries=num_retries)
super(TmpCollectionDirectory, self).__init__(
- parent_inode, inodes, collection)
+ parent_inode, inodes, api_client.config, collection)
self.collection_record_file = None
self.populate(self.mtime())
""".lstrip()
def __init__(self, parent_inode, inodes, api, num_retries, pdh_only=False):
- super(MagicDirectory, self).__init__(parent_inode, inodes)
+ super(MagicDirectory, self).__init__(parent_inode, inodes, api.config)
self.api = api
self.num_retries = num_retries
self.pdh_only = pdh_only
e = self.inodes.add_entry(ProjectDirectory(
self.inode, self.inodes, self.api, self.num_retries, project[u'items'][0]))
else:
e = self.inodes.add_entry(CollectionDirectory(
self.inode, self.inodes, self.api, self.num_retries, k))
"""A special directory that contains as subdirectories all tags visible to the user."""
def __init__(self, parent_inode, inodes, api, num_retries, poll_time=60):
- super(TagsDirectory, self).__init__(parent_inode, inodes)
+ super(TagsDirectory, self).__init__(parent_inode, inodes, api.config)
self.api = api
self.num_retries = num_retries
self._poll = True
def __init__(self, parent_inode, inodes, api, num_retries, tag,
poll=False, poll_time=60):
- super(TagDirectory, self).__init__(parent_inode, inodes)
+ super(TagDirectory, self).__init__(parent_inode, inodes, api.config)
self.api = api
self.num_retries = num_retries
self.tag = tag
def __init__(self, parent_inode, inodes, api, num_retries, project_object,
poll=False, poll_time=60):
- super(ProjectDirectory, self).__init__(parent_inode, inodes)
+ super(ProjectDirectory, self).__init__(parent_inode, inodes, api.config)
self.api = api
self.num_retries = num_retries
self.project_object = project_object
elif self._full_listing or super(ProjectDirectory, self).__contains__(k):
return super(ProjectDirectory, self).__getitem__(k)
with llfuse.lock_released:
+ k2 = self.unsanitize_filename(k)
+ if k2 == k:
+ namefilter = ["name", "=", k]
+ else:
+ namefilter = ["name", "in", [k, k2]]
contents = self.api.groups().list(filters=[["owner_uuid", "=", self.project_uuid],
["group_class", "=", "project"],
- ["name", "=", k]],
- limit=1).execute(num_retries=self.num_retries)["items"]
+ namefilter],
+ limit=2).execute(num_retries=self.num_retries)["items"]
if not contents:
contents = self.api.collections().list(filters=[["owner_uuid", "=", self.project_uuid],
- ["name", "=", k]],
- limit=1).execute(num_retries=self.num_retries)["items"]
+ namefilter],
+ limit=2).execute(num_retries=self.num_retries)["items"]
if contents:
- name = sanitize_filename(self.namefn(contents[0]))
+ if len(contents) > 1 and contents[1]['name'] == k:
+ # If "foo/bar" and "foo[SUBST]bar" both exist, use
+ # "foo[SUBST]bar".
+ contents = [contents[1]]
+ name = self.sanitize_filename(self.namefn(contents[0]))
if name != k:
raise KeyError(k)
return self._add_entry(contents[0], name)
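The two-candidate lookup above (query for both the literal FUSE name and its unsanitized form, `limit=2`, and prefer the exact match on conflict) can be sketched in isolation. This is an illustrative model only: the `records` list stands in for the API response, and `resolve_name` is a hypothetical helper, not part of the arvados_fuse API.

```python
def resolve_name(k, unsanitize, records):
    # Build the filter the way __getitem__ does: if the FUSE name
    # changes after unsanitizing, search for both candidates.
    k2 = unsanitize(k)
    wanted = [k] if k2 == k else [k, k2]
    hits = [r for r in records if r['name'] in wanted][:2]  # limit=2
    if not hits:
        return None
    if len(hits) > 1 and hits[1]['name'] == k:
        # If "foo/bar" and "foo[SUBST]bar" both exist, the literal
        # name wins, matching the conflict rule in the real code.
        hits = [hits[1]]
    return hits[0]
```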
new_attrs = properties.get("new_attributes") or {}
old_attrs["uuid"] = ev["object_uuid"]
new_attrs["uuid"] = ev["object_uuid"]
- old_name = sanitize_filename(self.namefn(old_attrs))
- new_name = sanitize_filename(self.namefn(new_attrs))
+ old_name = self.sanitize_filename(self.namefn(old_attrs))
+ new_name = self.sanitize_filename(self.namefn(new_attrs))
# create events will have a new name, but not an old name
# delete events will have an old name, but not a new name
def __init__(self, parent_inode, inodes, api, num_retries, exclude,
poll=False, poll_time=60):
- super(SharedDirectory, self).__init__(parent_inode, inodes)
+ super(SharedDirectory, self).__init__(parent_inode, inodes, api.config)
self.api = api
self.num_retries = num_retries
self.current_user = api.users().current().execute(num_retries=num_retries)
was_mounted = False
attempted = False
+ fusermount_output = b''
if timeout is None:
deadline = None
else:
return was_mounted
if attempted:
+ # Report buffered stderr from previous call to fusermount,
+ # now that we know it didn't succeed.
+            sys.stderr.buffer.write(fusermount_output)
+
delay = 1
if deadline:
delay = min(delay, deadline - time.time())
attempted = True
try:
- subprocess.check_call(["fusermount", "-u", "-z", path])
- except subprocess.CalledProcessError:
- pass
+ subprocess.check_output(
+ ["fusermount", "-u", "-z", path],
+ stderr=subprocess.STDOUT)
+ except subprocess.CalledProcessError as e:
+ fusermount_output = e.output
+ else:
+ fusermount_output = b''
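The unmount change above follows a general pattern: capture each attempt's stderr, but only surface it once no attempt succeeded, so transient fusermount errors stay silent. A generic sketch, assuming any unmount-like `cmd` (this is not the arvados_fuse code itself, which also handles deadlines and lazy-unmount flags):

```python
import subprocess
import sys

def retry_quietly(cmd, attempts=3):
    # Buffer the most recent stderr; report it only after the final
    # failed attempt, mirroring the fusermount retry loop.
    output = b''
    for _ in range(attempts):
        try:
            subprocess.check_output(cmd, stderr=subprocess.STDOUT)
            return True
        except subprocess.CalledProcessError as e:
            output = e.output
    sys.stderr.buffer.write(output)
    return False
```

`check_output(..., stderr=subprocess.STDOUT)` is what lets the failure text land in `CalledProcessError.output` instead of leaking to the terminal immediately.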
import os
import re
-def git_latest_tag():
- gittags = subprocess.check_output(['git', 'tag', '-l']).split()
- gittags.sort(key=lambda s: [int(u) for u in s.split(b'.')],reverse=True)
- return str(next(iter(gittags)).decode('utf-8'))
+SETUP_DIR = os.path.dirname(os.path.abspath(__file__))
-def git_timestamp_tag():
- gitinfo = subprocess.check_output(
+def choose_version_from():
+ sdk_ts = subprocess.check_output(
['git', 'log', '--first-parent', '--max-count=1',
- '--format=format:%ct', '.']).strip()
- return str(time.strftime('.%Y%m%d%H%M%S', time.gmtime(int(gitinfo))))
+ '--format=format:%ct', os.path.join(SETUP_DIR, "../../sdk/python")]).strip()
+ cwl_ts = subprocess.check_output(
+ ['git', 'log', '--first-parent', '--max-count=1',
+ '--format=format:%ct', SETUP_DIR]).strip()
+ if int(sdk_ts) > int(cwl_ts):
+ getver = os.path.join(SETUP_DIR, "../../sdk/python")
+ else:
+ getver = SETUP_DIR
+ return getver
+
+def git_version_at_commit():
+ curdir = choose_version_from()
+ myhash = subprocess.check_output(['git', 'log', '-n1', '--first-parent',
+ '--format=%H', curdir]).strip()
+ myversion = subprocess.check_output([curdir+'/../../build/version-at-commit.sh', myhash]).strip().decode()
+ return myversion
def save_version(setup_dir, module, v):
- with open(os.path.join(setup_dir, module, "_version.py"), 'w') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'wt') as fp:
return fp.write("__version__ = '%s'\n" % v)
def read_version(setup_dir, module):
- with open(os.path.join(setup_dir, module, "_version.py"), 'r') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'rt') as fp:
return re.match("__version__ = '(.*)'$", fp.read()).groups()[0]
def get_version(setup_dir, module):
save_version(setup_dir, module, env_version)
else:
try:
- save_version(setup_dir, module, git_latest_tag() + git_timestamp_tag())
- except subprocess.CalledProcessError:
+ save_version(setup_dir, module, git_version_at_commit())
+ except (subprocess.CalledProcessError, OSError):
pass
return read_version(setup_dir, module)
author='Arvados',
author_email='info@arvados.org',
url="https://arvados.org",
- download_url="https://github.com/curoverse/arvados.git",
+ download_url="https://github.com/arvados/arvados.git",
license='GNU Affero General Public License, version 3.0',
packages=['arvados_fuse'],
scripts=[
import arvados_fuse as fuse
from . import run_test_server
+from .integration_test import IntegrationTest
from .mount_test_base import MountTestBase
logger = logging.getLogger('arvados.arv-mount')
llfuse.listdir(os.path.join(self.mounttmp, self.testcollection))
-class FuseUnitTest(unittest.TestCase):
+class SanitizeFilenameTest(MountTestBase):
def test_sanitize_filename(self):
+ pdir = fuse.ProjectDirectory(1, {}, self.api, 0, project_object=self.api.users().current().execute())
acceptable = [
"foo.txt",
".foo",
"//",
]
for f in acceptable:
- self.assertEqual(f, fuse.sanitize_filename(f))
+ self.assertEqual(f, pdir.sanitize_filename(f))
for f in unacceptable:
- self.assertNotEqual(f, fuse.sanitize_filename(f))
+ self.assertNotEqual(f, pdir.sanitize_filename(f))
# The sanitized filename should be the same length, though.
- self.assertEqual(len(f), len(fuse.sanitize_filename(f)))
+ self.assertEqual(len(f), len(pdir.sanitize_filename(f)))
# Special cases
- self.assertEqual("_", fuse.sanitize_filename(""))
- self.assertEqual("_", fuse.sanitize_filename("."))
- self.assertEqual("__", fuse.sanitize_filename(".."))
+ self.assertEqual("_", pdir.sanitize_filename(""))
+ self.assertEqual("_", pdir.sanitize_filename("."))
+ self.assertEqual("__", pdir.sanitize_filename(".."))
class FuseMagicTestPDHOnly(MountTestBase):
def test_with_default_by_id(self):
self.verify_pdh_only(skip_pdh_only=True)
+
+
+class SlashSubstitutionTest(IntegrationTest):
+ mnt_args = [
+ '--read-write',
+ '--mount-home', 'zzz',
+ ]
+
+ def setUp(self):
+ super(SlashSubstitutionTest, self).setUp()
+ self.api = arvados.safeapi.ThreadSafeApiCache(arvados.config.settings())
+ self.api.config = lambda: {"Collections": {"ForwardSlashNameSubstitution": "[SLASH]"}}
+ self.testcoll = self.api.collections().create(body={"name": "foo/bar/baz"}).execute()
+ self.testcolleasy = self.api.collections().create(body={"name": "foo-bar-baz"}).execute()
+ self.fusename = 'foo[SLASH]bar[SLASH]baz'
+
+ @IntegrationTest.mount(argv=mnt_args)
+ @mock.patch('arvados.util.get_config_once')
+ def test_slash_substitution_before_listing(self, get_config_once):
+ get_config_once.return_value = {"Collections": {"ForwardSlashNameSubstitution": "[SLASH]"}}
+ self.pool_test(os.path.join(self.mnt, 'zzz'), self.fusename)
+ self.checkContents()
+ @staticmethod
+ def _test_slash_substitution_before_listing(self, tmpdir, fusename):
+ with open(os.path.join(tmpdir, 'foo-bar-baz', 'waz'), 'w') as f:
+ f.write('xxx')
+ with open(os.path.join(tmpdir, fusename, 'waz'), 'w') as f:
+ f.write('foo')
+
+ @IntegrationTest.mount(argv=mnt_args)
+ @mock.patch('arvados.util.get_config_once')
+ def test_slash_substitution_after_listing(self, get_config_once):
+ get_config_once.return_value = {"Collections": {"ForwardSlashNameSubstitution": "[SLASH]"}}
+ self.pool_test(os.path.join(self.mnt, 'zzz'), self.fusename)
+ self.checkContents()
+ @staticmethod
+ def _test_slash_substitution_after_listing(self, tmpdir, fusename):
+ with open(os.path.join(tmpdir, 'foo-bar-baz', 'waz'), 'w') as f:
+ f.write('xxx')
+ os.listdir(tmpdir)
+ with open(os.path.join(tmpdir, fusename, 'waz'), 'w') as f:
+ f.write('foo')
+
+ def checkContents(self):
+ self.assertRegexpMatches(self.api.collections().get(uuid=self.testcoll['uuid']).execute()['manifest_text'], ' acbd18db') # md5(foo)
+ self.assertRegexpMatches(self.api.collections().get(uuid=self.testcolleasy['uuid']).execute()['manifest_text'], ' f561aaf6') # md5(xxx)
+
+ @IntegrationTest.mount(argv=mnt_args)
+ @mock.patch('arvados.util.get_config_once')
+ def test_slash_substitution_conflict(self, get_config_once):
+ self.testcollconflict = self.api.collections().create(body={"name": self.fusename}).execute()
+ get_config_once.return_value = {"Collections": {"ForwardSlashNameSubstitution": "[SLASH]"}}
+ self.pool_test(os.path.join(self.mnt, 'zzz'), self.fusename)
+ self.assertRegexpMatches(self.api.collections().get(uuid=self.testcollconflict['uuid']).execute()['manifest_text'], ' acbd18db') # md5(foo)
+ # foo/bar/baz collection unchanged, because it is masked by foo[SLASH]bar[SLASH]baz
+ self.assertEqual(self.api.collections().get(uuid=self.testcoll['uuid']).execute()['manifest_text'], '')
+ @staticmethod
+ def _test_slash_substitution_conflict(self, tmpdir, fusename):
+ with open(os.path.join(tmpdir, fusename, 'waz'), 'w') as f:
+ f.write('foo')
[Service]
Type=simple
ExecStart=/usr/bin/arvados-health
+# Set a reasonable default for the open file limit
+LimitNOFILE=65536
Restart=always
RestartSec=1
"context"
"os"
- "git.curoverse.com/arvados.git/lib/cmd"
- "git.curoverse.com/arvados.git/lib/service"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/health"
+ "git.arvados.org/arvados.git/lib/cmd"
+ "git.arvados.org/arvados.git/lib/service"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/health"
"github.com/prometheus/client_golang/prometheus"
)
"syscall"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/sirupsen/logrus"
)
changeNone: "none",
}
+type balancedBlockState struct {
+ needed int
+ unneeded int
+ pulling int
+ unachievable bool
+}
+
type balanceResult struct {
blk *BlockState
blkid arvados.SizedDigest
- have int
- want int
+ lost bool
+ blockState balancedBlockState
classState map[string]balancedBlockState
}
+type slot struct {
+ mnt *KeepMount // never nil
+ repl *Replica // replica already stored here (or nil)
+ want bool // we should pull/leave a replica here
+}
+
// balanceBlock compares current state to desired state for a single
// block, and makes the appropriate ChangeSet calls.
func (bal *Balancer) balanceBlock(blkid arvados.SizedDigest, blk *BlockState) balanceResult {
bal.Logger.Debugf("balanceBlock: %v %+v", blkid, blk)
- type slot struct {
- mnt *KeepMount // never nil
- repl *Replica // replica already stored here (or nil)
- want bool // we should pull/leave a replica here
- }
-
// Build a list of all slots (one per mounted volume).
slots := make([]slot, 0, bal.mounts)
for _, srv := range bal.KeepServices {
// won't want to trash any replicas.
underreplicated := false
- classState := make(map[string]balancedBlockState, len(bal.classes))
unsafeToDelete := make(map[int64]bool, len(slots))
for _, class := range bal.classes {
desired := blk.Desired[class]
-
- countedDev := map[string]bool{}
- have := 0
- for _, slot := range slots {
- if slot.repl != nil && bal.mountsByClass[class][slot.mnt] && !countedDev[slot.mnt.DeviceID] {
- have += slot.mnt.Replication
- if slot.mnt.DeviceID != "" {
- countedDev[slot.mnt.DeviceID] = true
- }
- }
- }
- classState[class] = balancedBlockState{
- desired: desired,
- surplus: have - desired,
- }
-
if desired == 0 {
continue
}
underreplicated = safe < desired
}
- // set the unachievable flag if there aren't enough
- // slots offering the relevant storage class. (This is
- // as easy as checking slots[desired] because we
- // already sorted the qualifying slots to the front.)
- if desired >= len(slots) || !bal.mountsByClass[class][slots[desired].mnt] {
- cs := classState[class]
- cs.unachievable = true
- classState[class] = cs
- }
-
// Avoid deleting wanted replicas from devices that
// are mounted on multiple servers -- even if they
// haven't already been added to unsafeToDelete
// replica that doesn't have a timestamp collision with
// others.
- countedDev := map[string]bool{}
- var have, want int
- for _, slot := range slots {
- if countedDev[slot.mnt.DeviceID] {
- continue
- }
- if slot.want {
- want += slot.mnt.Replication
- }
- if slot.repl != nil {
- have += slot.mnt.Replication
- }
- if slot.mnt.DeviceID != "" {
- countedDev[slot.mnt.DeviceID] = true
+ for i, slot := range slots {
+ // Don't trash (1) any replicas of an underreplicated
+ // block, even if they're in the wrong positions, or
+ // (2) any replicas whose Mtimes are identical to
+ // needed replicas (in case we're really seeing the
+ // same copy via different mounts).
+ if slot.repl != nil && (underreplicated || unsafeToDelete[slot.repl.Mtime]) {
+ slots[i].want = true
}
}
+ classState := make(map[string]balancedBlockState, len(bal.classes))
+ for _, class := range bal.classes {
+ classState[class] = computeBlockState(slots, bal.mountsByClass[class], len(blk.Replicas), blk.Desired[class])
+ }
+ blockState := computeBlockState(slots, nil, len(blk.Replicas), 0)
+
+ var lost bool
var changes []string
for _, slot := range slots {
// TODO: request a Touch if Mtime is duplicated.
var change int
switch {
- case !underreplicated && !slot.want && slot.repl != nil && slot.repl.Mtime < bal.MinMtime && !unsafeToDelete[slot.repl.Mtime]:
+ case !slot.want && slot.repl != nil && slot.repl.Mtime < bal.MinMtime:
slot.mnt.KeepService.AddTrash(Trash{
SizedDigest: blkid,
Mtime: slot.repl.Mtime,
From: slot.mnt,
})
change = changeTrash
- case len(blk.Replicas) > 0 && slot.repl == nil && slot.want && !slot.mnt.ReadOnly:
+ case slot.repl == nil && slot.want && len(blk.Replicas) == 0:
+ lost = true
+ change = changeNone
+ case slot.repl == nil && slot.want && !slot.mnt.ReadOnly:
slot.mnt.KeepService.AddPull(Pull{
SizedDigest: blkid,
From: blk.Replicas[0].KeepMount.KeepService,
}
}
if bal.Dumper != nil {
- bal.Dumper.Printf("%s refs=%d have=%d want=%v %v %v", blkid, blk.RefCount, have, want, blk.Desired, changes)
+ bal.Dumper.Printf("%s refs=%d needed=%d unneeded=%d pulling=%v %v %v", blkid, blk.RefCount, blockState.needed, blockState.unneeded, blockState.pulling, blk.Desired, changes)
}
return balanceResult{
blk: blk,
blkid: blkid,
- have: have,
- want: want,
+ lost: lost,
+ blockState: blockState,
classState: classState,
}
}
+func computeBlockState(slots []slot, onlyCount map[*KeepMount]bool, have, needRepl int) (bbs balancedBlockState) {
+ repl := 0
+ countedDev := map[string]bool{}
+ for _, slot := range slots {
+ if onlyCount != nil && !onlyCount[slot.mnt] {
+ continue
+ }
+ if countedDev[slot.mnt.DeviceID] {
+ continue
+ }
+ switch {
+ case slot.repl != nil && slot.want:
+ bbs.needed++
+ repl += slot.mnt.Replication
+ case slot.repl != nil && !slot.want:
+ bbs.unneeded++
+ repl += slot.mnt.Replication
+ case slot.repl == nil && slot.want && have > 0:
+ bbs.pulling++
+ repl += slot.mnt.Replication
+ }
+ if slot.mnt.DeviceID != "" {
+ countedDev[slot.mnt.DeviceID] = true
+ }
+ }
+ if repl < needRepl {
+ bbs.unachievable = true
+ }
+ return
+}
+
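The accounting in `computeBlockState` can be modelled in a few lines of Python. This is a simplified sketch under stated assumptions: each slot is a `(mount, has_replica, want)` tuple, mounts are dicts with `replication` and optional `device_id` keys mirroring the Go fields, and the `onlyCount` per-class mount filter is omitted.

```python
def compute_block_state(slots, have, need_repl):
    # Count each physical device once, then classify every slot as
    # needed (stored and wanted), unneeded (stored, not wanted), or
    # pulling (wanted, not stored, and some copy exists to pull from).
    needed = unneeded = pulling = repl = 0
    counted = set()
    for mnt, has_replica, want in slots:
        dev = mnt.get('device_id')
        if dev and dev in counted:
            continue  # same device seen via another mount
        if has_replica and want:
            needed += 1
            repl += mnt['replication']
        elif has_replica:
            unneeded += 1
            repl += mnt['replication']
        elif want and have > 0:
            pulling += 1
            repl += mnt['replication']
        if dev:
            counted.add(dev)
    # Unachievable: even counting planned pulls, the qualifying
    # mounts cannot reach the desired replication.
    return {'needed': needed, 'unneeded': unneeded,
            'pulling': pulling, 'unachievable': repl < need_repl}
```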
type blocksNBytes struct {
replicas int
blocks int
return fmt.Sprintf("%d replicas (%d blocks, %d bytes)", bb.replicas, bb.blocks, bb.bytes)
}
+type replicationStats struct {
+ needed blocksNBytes
+ unneeded blocksNBytes
+ pulling blocksNBytes
+ unachievable blocksNBytes
+}
+
type balancerStats struct {
lost blocksNBytes
overrep blocksNBytes
return float64(s.collectionBlockRefs) / float64(s.collectionBlocks)
}
-type replicationStats struct {
- desired blocksNBytes
- surplus blocksNBytes
- short blocksNBytes
- unachievable blocksNBytes
-}
-
-type balancedBlockState struct {
- desired int
- surplus int
- unachievable bool
-}
-
func (bal *Balancer) collectStatistics(results <-chan balanceResult) {
var s balancerStats
s.replHistogram = make([]int, 2)
s.classStats = make(map[string]replicationStats, len(bal.classes))
for result := range results {
- surplus := result.have - result.want
bytes := result.blkid.Size()
if rc := int64(result.blk.RefCount); rc > 0 {
for class, state := range result.classState {
cs := s.classStats[class]
if state.unachievable {
+ cs.unachievable.replicas++
cs.unachievable.blocks++
cs.unachievable.bytes += bytes
}
- if state.desired > 0 {
- cs.desired.replicas += state.desired
- cs.desired.blocks++
- cs.desired.bytes += bytes * int64(state.desired)
+ if state.needed > 0 {
+ cs.needed.replicas += state.needed
+ cs.needed.blocks++
+ cs.needed.bytes += bytes * int64(state.needed)
}
- if state.surplus > 0 {
- cs.surplus.replicas += state.surplus
- cs.surplus.blocks++
- cs.surplus.bytes += bytes * int64(state.surplus)
- } else if state.surplus < 0 {
- cs.short.replicas += -state.surplus
- cs.short.blocks++
- cs.short.bytes += bytes * int64(-state.surplus)
+ if state.unneeded > 0 {
+ cs.unneeded.replicas += state.unneeded
+ cs.unneeded.blocks++
+ cs.unneeded.bytes += bytes * int64(state.unneeded)
+ }
+ if state.pulling > 0 {
+ cs.pulling.replicas += state.pulling
+ cs.pulling.blocks++
+ cs.pulling.bytes += bytes * int64(state.pulling)
}
s.classStats[class] = cs
}
+ bs := result.blockState
switch {
- case result.have == 0 && result.want > 0:
- s.lost.replicas -= surplus
+ case result.lost:
+ s.lost.replicas++
s.lost.blocks++
- s.lost.bytes += bytes * int64(-surplus)
+ s.lost.bytes += bytes
fmt.Fprintf(bal.lostBlocks, "%s", strings.SplitN(string(result.blkid), "+", 2)[0])
for pdh := range result.blk.Refs {
fmt.Fprintf(bal.lostBlocks, " %s", pdh)
}
fmt.Fprint(bal.lostBlocks, "\n")
- case surplus < 0:
- s.underrep.replicas -= surplus
+ case bs.pulling > 0:
+ s.underrep.replicas += bs.pulling
+ s.underrep.blocks++
+ s.underrep.bytes += bytes * int64(bs.pulling)
+ case bs.unachievable:
+ s.underrep.replicas++
s.underrep.blocks++
- s.underrep.bytes += bytes * int64(-surplus)
- case surplus > 0 && result.want == 0:
+ s.underrep.bytes += bytes
+ case bs.unneeded > 0 && bs.needed == 0:
+ // Count as "garbage" if all replicas are old
+ // enough to trash, otherwise count as
+ // "unref".
counter := &s.garbage
			for _, r := range result.blk.Replicas {
				if r.Mtime >= bal.MinMtime {
					counter = &s.unref
					break
				}
			}
- counter.replicas += surplus
+ counter.replicas += bs.unneeded
counter.blocks++
- counter.bytes += bytes * int64(surplus)
- case surplus > 0:
- s.overrep.replicas += surplus
+ counter.bytes += bytes * int64(bs.unneeded)
+ case bs.unneeded > 0:
+ s.overrep.replicas += bs.unneeded
s.overrep.blocks++
- s.overrep.bytes += bytes * int64(result.have-result.want)
+ s.overrep.bytes += bytes * int64(bs.unneeded)
default:
- s.justright.replicas += result.want
+ s.justright.replicas += bs.needed
s.justright.blocks++
- s.justright.bytes += bytes * int64(result.want)
+ s.justright.bytes += bytes * int64(bs.needed)
}
- if result.want > 0 {
- s.desired.replicas += result.want
+ if bs.needed > 0 {
+ s.desired.replicas += bs.needed
s.desired.blocks++
- s.desired.bytes += bytes * int64(result.want)
+ s.desired.bytes += bytes * int64(bs.needed)
}
- if result.have > 0 {
- s.current.replicas += result.have
+ if bs.needed+bs.unneeded > 0 {
+ s.current.replicas += bs.needed + bs.unneeded
s.current.blocks++
- s.current.bytes += bytes * int64(result.have)
+ s.current.bytes += bytes * int64(bs.needed+bs.unneeded)
}
- for len(s.replHistogram) <= result.have {
+ for len(s.replHistogram) <= bs.needed+bs.unneeded {
s.replHistogram = append(s.replHistogram, 0)
}
- s.replHistogram[result.have]++
+ s.replHistogram[bs.needed+bs.unneeded]++
}
for _, srv := range bal.KeepServices {
s.pulls += len(srv.ChangeSet.Pulls)
for _, class := range bal.classes {
cs := bal.stats.classStats[class]
bal.logf("===")
- bal.logf("storage class %q: %s desired", class, cs.desired)
- bal.logf("storage class %q: %s short", class, cs.short)
- bal.logf("storage class %q: %s surplus", class, cs.surplus)
+ bal.logf("storage class %q: %s needed", class, cs.needed)
+ bal.logf("storage class %q: %s unneeded", class, cs.unneeded)
+ bal.logf("storage class %q: %s pulling", class, cs.pulling)
bal.logf("storage class %q: %s unachievable", class, cs.unachievable)
}
bal.logf("===")
}
func (bal *Balancer) printHistogram(hashColumns int) {
- bal.logf("Replication level distribution (counting N replicas on a single server as N):")
+ bal.logf("Replication level distribution:")
maxCount := 0
for _, count := range bal.stats.replHistogram {
if maxCount < count {
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/expfmt"
check "gopkg.in/check.v1"
"testing"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
check "gopkg.in/check.v1"
)
shouldPullMounts []string
shouldTrashMounts []string
- expectResult balanceResult
+ expectBlockState *balancedBlockState
+ expectClassState map[string]balancedBlockState
}
func (bal *balancerSuite) SetUpSuite(c *check.C) {
desired: map[string]int{"default": 2},
current: slots{0, 1},
shouldPull: nil,
- shouldTrash: nil})
+ shouldTrash: nil,
+ expectBlockState: &balancedBlockState{
+ needed: 2,
+ }})
}
func (bal *balancerSuite) TestDecreaseRepl(c *check.C) {
bal.try(c, tester{
desired: map[string]int{"default": 2},
current: slots{0, 2, 1},
- shouldTrash: slots{2}})
+ shouldTrash: slots{2},
+ expectBlockState: &balancedBlockState{
+ needed: 2,
+ unneeded: 1,
+ }})
}
func (bal *balancerSuite) TestDecreaseReplToZero(c *check.C) {
bal.try(c, tester{
desired: map[string]int{"default": 0},
current: slots{0, 1, 3},
- shouldTrash: slots{0, 1, 3}})
+ shouldTrash: slots{0, 1, 3},
+ expectBlockState: &balancedBlockState{
+ unneeded: 3,
+ }})
}
func (bal *balancerSuite) TestIncreaseRepl(c *check.C) {
bal.try(c, tester{
desired: map[string]int{"default": 4},
current: slots{0, 1},
- shouldPull: slots{2, 3}})
+ shouldPull: slots{2, 3},
+ expectBlockState: &balancedBlockState{
+ needed: 2,
+ pulling: 2,
+ }})
}
func (bal *balancerSuite) TestSkipReadonly(c *check.C) {
bal.try(c, tester{
desired: map[string]int{"default": 4},
current: slots{0, 1},
- shouldPull: slots{2, 4}})
+ shouldPull: slots{2, 4},
+ expectBlockState: &balancedBlockState{
+ needed: 2,
+ pulling: 2,
+ }})
}
func (bal *balancerSuite) TestMultipleViewsReadOnly(c *check.C) {
desired: map[string]int{"default": 2},
current: slots{0, 1, 2},
timestamps: []int64{oldTime, newTime, newTime + 1},
- expectResult: balanceResult{
- have: 3,
- want: 2,
- classState: map[string]balancedBlockState{"default": {
- desired: 2,
- surplus: 1,
- unachievable: false}}}})
+ expectBlockState: &balancedBlockState{
+ needed: 2,
+ unneeded: 1,
+ }})
// The best replicas are too new to delete, but the excess
// replica is old enough.
bal.try(c, tester{
known: 0,
desired: map[string]int{"default": 2},
current: slots{1},
- shouldPull: slots{0}})
+ shouldPull: slots{0},
+ expectBlockState: &balancedBlockState{
+ needed: 1,
+ pulling: 1,
+ }})
bal.try(c, tester{
known: 0,
desired: map[string]int{"default": 2},
current: slots{0, 1},
- shouldPull: nil})
+ shouldPull: nil,
+ expectBlockState: &balancedBlockState{
+ needed: 2,
+ }})
bal.try(c, tester{
known: 0,
desired: map[string]int{"default": 2},
current: slots{0, 1, 2},
- shouldTrash: slots{2}})
+ shouldTrash: slots{2},
+ expectBlockState: &balancedBlockState{
+ needed: 2,
+ unneeded: 1,
+ }})
bal.try(c, tester{
known: 0,
desired: map[string]int{"default": 3},
current: slots{0, 2, 3, 4},
shouldPull: slots{1},
shouldTrash: slots{4},
- expectResult: balanceResult{
- have: 4,
- want: 3,
- classState: map[string]balancedBlockState{"default": {
- desired: 3,
- surplus: 1,
- unachievable: false}}}})
+ expectBlockState: &balancedBlockState{
+ needed: 3,
+ unneeded: 1,
+ pulling: 1,
+ }})
bal.try(c, tester{
known: 0,
desired: map[string]int{"default": 3},
current: slots{0, 1, 2, 3, 4},
- shouldTrash: slots{2, 3, 4}})
+ shouldTrash: slots{2, 3, 4},
+ expectBlockState: &balancedBlockState{
+ needed: 2,
+ unneeded: 3,
+ }})
bal.try(c, tester{
known: 0,
desired: map[string]int{"default": 4},
current: slots{0, 1, 2, 3, 4},
shouldTrash: slots{3, 4},
- expectResult: balanceResult{
- have: 6,
- want: 4,
- classState: map[string]balancedBlockState{"default": {
- desired: 4,
- surplus: 2,
- unachievable: false}}}})
+ expectBlockState: &balancedBlockState{
+ needed: 3,
+ unneeded: 2,
+ }})
// block 1 rendezvous is 0,9,7 -- so slot 0 has repl=2
bal.try(c, tester{
known: 1,
desired: map[string]int{"default": 2},
current: slots{0},
- expectResult: balanceResult{
- have: 2,
- want: 2,
- classState: map[string]balancedBlockState{"default": {
- desired: 2,
- surplus: 0,
- unachievable: false}}}})
+ expectBlockState: &balancedBlockState{
+ needed: 1,
+ }})
bal.try(c, tester{
known: 1,
desired: map[string]int{"default": 3},
current: slots{0},
- shouldPull: slots{1}})
+ shouldPull: slots{1},
+ expectBlockState: &balancedBlockState{
+ needed: 1,
+ pulling: 1,
+ }})
bal.try(c, tester{
known: 1,
desired: map[string]int{"default": 4},
current: slots{0},
- shouldPull: slots{1, 2}})
+ shouldPull: slots{1, 2},
+ expectBlockState: &balancedBlockState{
+ needed: 1,
+ pulling: 2,
+ }})
bal.try(c, tester{
known: 1,
desired: map[string]int{"default": 4},
current: slots{2},
- shouldPull: slots{0, 1}})
+ shouldPull: slots{0, 1},
+ expectBlockState: &balancedBlockState{
+ needed: 1,
+ pulling: 2,
+ }})
bal.try(c, tester{
known: 1,
desired: map[string]int{"default": 4},
current: slots{7},
shouldPull: slots{0, 1, 2},
- expectResult: balanceResult{
- have: 1,
- want: 4,
- classState: map[string]balancedBlockState{"default": {
- desired: 4,
- surplus: -3,
- unachievable: false}}}})
+ expectBlockState: &balancedBlockState{
+ needed: 1,
+ pulling: 3,
+ }})
bal.try(c, tester{
known: 1,
desired: map[string]int{"default": 2},
current: slots{1, 2, 3, 4},
shouldPull: slots{0},
- shouldTrash: slots{3, 4}})
+ shouldTrash: slots{3, 4},
+ expectBlockState: &balancedBlockState{
+ needed: 2,
+ unneeded: 2,
+ pulling: 1,
+ }})
bal.try(c, tester{
known: 1,
desired: map[string]int{"default": 2},
current: slots{0, 1, 2},
shouldTrash: slots{1, 2},
- expectResult: balanceResult{
- have: 4,
- want: 2,
- classState: map[string]balancedBlockState{"default": {
- desired: 2,
- surplus: 2,
- unachievable: false}}}})
+ expectBlockState: &balancedBlockState{
+ needed: 1,
+ unneeded: 2,
+ }})
}
func (bal *balancerSuite) TestDeviceRWMountedByMultipleServers(c *check.C) {
desired: map[string]int{"default": 2},
current: slots{1, 9},
shouldPull: slots{0},
- expectResult: balanceResult{
- have: 1,
- classState: map[string]balancedBlockState{"default": {
- desired: 2,
- surplus: -1,
- unachievable: false}}}})
+ expectBlockState: &balancedBlockState{
+ needed: 1,
+ pulling: 1,
+ }})
// block 0 is overreplicated, but the second and third
// replicas are the same replica according to DeviceID
// (despite different Mtimes). Don't trash the third replica.
known: 0,
desired: map[string]int{"default": 2},
current: slots{0, 1, 9},
- expectResult: balanceResult{
- have: 2,
- classState: map[string]balancedBlockState{"default": {
- desired: 2,
- surplus: 0,
- unachievable: false}}}})
+ expectBlockState: &balancedBlockState{
+ needed: 2,
+ }})
// block 0 is overreplicated; the third and fifth replicas are
// extra, but the fourth is another view of the second and
// shouldn't be trashed.
desired: map[string]int{"default": 2},
current: slots{0, 1, 5, 9, 12},
shouldTrash: slots{5, 12},
- expectResult: balanceResult{
- have: 4,
- classState: map[string]balancedBlockState{"default": {
- desired: 2,
- surplus: 2,
- unachievable: false}}}})
+ expectBlockState: &balancedBlockState{
+ needed: 2,
+ unneeded: 2,
+ }})
}
func (bal *balancerSuite) TestChangeStorageClasses(c *check.C) {
sort.Strings(didTrashMounts)
c.Check(didTrashMounts, check.DeepEquals, t.shouldTrashMounts)
}
- if t.expectResult.have > 0 {
- c.Check(result.have, check.Equals, t.expectResult.have)
- }
- if t.expectResult.want > 0 {
- c.Check(result.want, check.Equals, t.expectResult.want)
+ if t.expectBlockState != nil {
+ c.Check(result.blockState, check.Equals, *t.expectBlockState)
}
- if t.expectResult.classState != nil {
- c.Check(result.classState, check.DeepEquals, t.expectResult.classState)
+ if t.expectClassState != nil {
+ c.Check(result.classState, check.DeepEquals, t.expectClassState)
}
}
import (
"sync"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// Replica is a file on disk (or object in an S3 bucket, or blob in an
"fmt"
"sync"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// Pull is a request to retrieve a block from a remote server, and
import (
"encoding/json"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
check "gopkg.in/check.v1"
)
"fmt"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
func countCollections(c *arvados.Client, params arvados.ResourceListParams) (int, error) {
return err
}
for _, coll := range page.Items {
- if last.ModifiedAt != nil && *last.ModifiedAt == *coll.ModifiedAt && last.UUID >= coll.UUID {
+ if last.ModifiedAt == coll.ModifiedAt && last.UUID >= coll.UUID {
continue
}
callCount++
}
if len(page.Items) == 0 && !gettingExactTimestamp {
break
- } else if last.ModifiedAt == nil {
+ } else if last.ModifiedAt.IsZero() {
return fmt.Errorf("BUG: Last collection on the page (%s) has no modified_at timestamp; cannot make progress", last.UUID)
- } else if len(page.Items) > 0 && *last.ModifiedAt == filterTime {
+ } else if len(page.Items) > 0 && last.ModifiedAt == filterTime {
// If we requested time>=X and never got a
// time>X then we might not have received all
// items with time==X yet. Switch to
// avoiding that would add overhead in the
// overwhelmingly common cases, so we don't
// bother.
- filterTime = *last.ModifiedAt
+ filterTime = last.ModifiedAt
params.Filters = []arvados.Filter{{
Attr: "modified_at",
Operator: ">=",
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
check "gopkg.in/check.v1"
)
var lastMod time.Time
sawUUID := make(map[string]bool)
err := EachCollection(s.client, pageSize, func(c arvados.Collection) error {
- if c.ModifiedAt == nil {
+ if c.ModifiedAt.IsZero() {
return nil
}
if sawUUID[c.UUID] {
}
got[trial] = append(got[trial], c.UUID)
sawUUID[c.UUID] = true
- if lastMod == *c.ModifiedAt {
+ if lastMod == c.ModifiedAt {
streak++
if streak > longestStreak {
longestStreak = streak
}
} else {
streak = 0
- lastMod = *c.ModifiedAt
+ lastMod = c.ModifiedAt
}
return nil
}, nil)
"testing"
"time"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
arvadostest.StartKeep(4, true)
arv, err := arvadosclient.MakeArvadosClient()
- arv.ApiToken = arvadostest.DataManagerToken
+ arv.ApiToken = arvadostest.SystemRootToken
c.Assert(err, check.IsNil)
s.keepClient, err = keepclient.MakeKeepClient(arv)
s.client = &arvados.Client{
APIHost: os.Getenv("ARVADOS_API_HOST"),
- AuthToken: arvadostest.DataManagerToken,
+ AuthToken: arvadostest.SystemRootToken,
Insecure: true,
}
}
[Service]
Type=simple
ExecStart=/usr/bin/keep-balance -commit-pulls -commit-trash
+# Set a reasonable default for the open file limit
+LimitNOFILE=65536
Restart=always
RestartSec=10s
Nice=19
"io/ioutil"
"net/http"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// KeepService represents a keepstore server that is being rebalanced.
"flag"
"fmt"
"io"
+ "net/http"
"os"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/lib/service"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/service"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
)
options.Dumper = dumper
}
- // Only pass along the version flag, which gets handled in RunCommand
+ // Drop our custom args that would be rejected by the generic
+ // service.Command
args = nil
+ dropFlag := map[string]bool{
+ "once": true,
+ "commit-pulls": true,
+ "commit-trash": true,
+ "dump": true,
+ }
flags.Visit(func(f *flag.Flag) {
- if f.Name == "version" {
+ if !dropFlag[f.Name] {
args = append(args, "-"+f.Name, f.Value.String())
}
})
}
srv := &Server{
+ Handler: http.NotFoundHandler(),
Cluster: cluster,
ArvClient: ac,
RunOptions: options,
--- /dev/null
+// Copyright (C) The Arvados Authors. All rights reserved.
+//
+// SPDX-License-Identifier: AGPL-3.0
+
+package main
+
+import (
+ "bytes"
+ "io/ioutil"
+ "net"
+ "net/http"
+ "time"
+
+ check "gopkg.in/check.v1"
+)
+
+var _ = check.Suite(&mainSuite{})
+
+type mainSuite struct{}
+
+func (s *mainSuite) TestVersionFlag(c *check.C) {
+ var stdout, stderr bytes.Buffer
+ runCommand("keep-balance", []string{"-version"}, nil, &stdout, &stderr)
+ c.Check(stderr.String(), check.Equals, "")
+ c.Log(stdout.String())
+}
+
+func (s *mainSuite) TestHTTPServer(c *check.C) {
+ ln, err := net.Listen("tcp", ":0")
+ if err != nil {
+ c.Fatal(err)
+ }
+ _, p, err := net.SplitHostPort(ln.Addr().String())
+ ln.Close()
+ config := "Clusters:\n zzzzz:\n ManagementToken: abcdefg\n Services: {Keepbalance: {InternalURLs: {'http://localhost:" + p + "/': {}}}}\n"
+
+ var stdout bytes.Buffer
+ go runCommand("keep-balance", []string{"-config", "-"}, bytes.NewBufferString(config), &stdout, &stdout)
+ done := make(chan struct{})
+ go func() {
+ defer close(done)
+ for {
+ time.Sleep(time.Second / 10)
+ req, err := http.NewRequest(http.MethodGet, "http://:"+p+"/metrics", nil)
+ if err != nil {
+ c.Fatal(err)
+ return
+ }
+ req.Header.Set("Authorization", "Bearer abcdefg")
+ resp, err := http.DefaultClient.Do(req)
+ if err != nil {
+ c.Logf("error %s", err)
+ continue
+ }
+ defer resp.Body.Close()
+ if resp.StatusCode != http.StatusOK {
+ c.Logf("http status %d", resp.StatusCode)
+ continue
+ }
+ buf, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ c.Logf("read body: %s", err)
+ continue
+ }
+ c.Check(string(buf), check.Matches, `(?ms).*arvados_keepbalance_sweep_seconds_sum.*`)
+ return
+ }
+ }()
+ select {
+ case <-done:
+ case <-time.After(time.Second):
+ c.Log(stdout.String())
+ c.Fatal("timeout")
+ }
+
+ // Check non-metrics URL that gets passed through to us from
+ // service.Command
+ req, err := http.NewRequest(http.MethodGet, "http://:"+p+"/not-metrics", nil)
+ c.Assert(err, check.IsNil)
+ resp, err := http.DefaultClient.Do(req)
+ c.Check(err, check.IsNil)
+ defer resp.Body.Close()
+ c.Check(resp.StatusCode, check.Equals, http.StatusNotFound)
+}
"syscall"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
)
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
"github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
)
import (
"bytes"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/expfmt"
"gopkg.in/check.v1"
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
check "gopkg.in/check.v1"
)
--- /dev/null
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: AGPL-3.0
+
+case "$TARGET" in
+ centos*)
+ fpm_depends+=(mailcap)
+ ;;
+ debian* | ubuntu*)
+ fpm_depends+=(mime-support)
+ ;;
+esac
"strings"
"sync"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/health"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/health"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/sirupsen/logrus"
"golang.org/x/net/webdav"
)
Insecure: arv.ApiInsecure,
}).WithRequestID(r.Header.Get("X-Request-Id"))
fs := client.SiteFileSystem(kc)
+ fs.ForwardSlashNameSubstitution(h.Config.cluster.Collections.ForwardSlashNameSubstitution)
f, err := fs.Open(r.URL.Path)
if os.IsNotExist(err) {
http.Error(w, err.Error(), http.StatusNotFound)
"regexp"
"strings"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
check "gopkg.in/check.v1"
)
s.Config = cfg
}
-func (s *UnitSuite) TestKeepClientBlockCache(c *check.C) {
- cfg := newConfig(s.Config)
- cfg.cluster.Collections.WebDAVCache.MaxBlockEntries = 42
- h := handler{Config: cfg}
- c.Check(keepclient.DefaultBlockCache.MaxBlocks, check.Not(check.Equals), cfg.cluster.Collections.WebDAVCache.MaxBlockEntries)
- u := mustParseURL("http://keep-web.example/c=" + arvadostest.FooCollection + "/t=" + arvadostest.ActiveToken + "/foo")
- req := &http.Request{
- Method: "GET",
- Host: u.Host,
- URL: u,
- RequestURI: u.RequestURI(),
- }
- resp := httptest.NewRecorder()
- h.ServeHTTP(resp, req)
- c.Check(resp.Code, check.Equals, http.StatusOK)
- c.Check(keepclient.DefaultBlockCache.MaxBlocks, check.Equals, cfg.cluster.Collections.WebDAVCache.MaxBlockEntries)
-}
-
func (s *UnitSuite) TestCORSPreflight(c *check.C) {
h := handler{Config: newConfig(s.Config)}
u := mustParseURL("http://keep-web.example/c=" + arvadostest.FooCollection + "/foo")
c.Check(resp.Body.String(), check.Matches, `(?ms).*href="./https:%5c%22odd%27%20path%20chars"\S+https:\\"odd' path chars.*`)
}
+func (s *IntegrationSuite) TestForwardSlashSubstitution(c *check.C) {
+ arv := arvados.NewClientFromEnv()
+ s.testServer.Config.cluster.Services.WebDAVDownload.ExternalURL.Host = "download.example.com"
+ s.testServer.Config.cluster.Collections.ForwardSlashNameSubstitution = "{SOLIDUS}"
+ name := "foo/bar/baz"
+ nameShown := strings.Replace(name, "/", "{SOLIDUS}", -1)
+ nameShownEscaped := strings.Replace(name, "/", "%7bSOLIDUS%7d", -1)
+
+ client := s.testServer.Config.Client
+ client.AuthToken = arvadostest.ActiveToken
+ fs, err := (&arvados.Collection{}).FileSystem(&client, nil)
+ c.Assert(err, check.IsNil)
+ f, err := fs.OpenFile("filename", os.O_CREATE, 0777)
+ c.Assert(err, check.IsNil)
+ f.Close()
+ mtxt, err := fs.MarshalManifest(".")
+ c.Assert(err, check.IsNil)
+ var coll arvados.Collection
+ err = client.RequestAndDecode(&coll, "POST", "arvados/v1/collections", nil, map[string]interface{}{
+ "collection": map[string]string{
+ "manifest_text": mtxt,
+ "name": name,
+ "owner_uuid": arvadostest.AProjectUUID,
+ },
+ })
+ c.Assert(err, check.IsNil)
+ defer arv.RequestAndDecode(&coll, "DELETE", "arvados/v1/collections/"+coll.UUID, nil, nil)
+
+ base := "http://download.example.com/by_id/" + coll.OwnerUUID + "/"
+ for tryURL, expectRegexp := range map[string]string{
+ base: `(?ms).*href="./` + nameShownEscaped + `/"\S+` + nameShown + `.*`,
+ base + nameShownEscaped + "/": `(?ms).*href="./filename"\S+filename.*`,
+ } {
+ u, _ := url.Parse(tryURL)
+ req := &http.Request{
+ Method: "GET",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ Header: http.Header{
+ "Authorization": {"Bearer " + client.AuthToken},
+ },
+ }
+ resp := httptest.NewRecorder()
+ s.testServer.Handler.ServeHTTP(resp, req)
+ c.Check(resp.Code, check.Equals, http.StatusOK)
+ c.Check(resp.Body.String(), check.Matches, expectRegexp)
+ }
+}
+
// XHRs can't follow redirect-with-cookie so they rely on method=POST
// and disposition=attachment (telling us it's acceptable to respond
// with content instead of a redirect) and an Origin header that gets
} else {
c.Check(resp.Code, check.Equals, http.StatusMultiStatus, comment)
for _, e := range trial.expect {
- c.Check(resp.Body.String(), check.Matches, `(?ms).*<D:href>`+filepath.Join(u.Path, e)+`</D:href>.*`, comment)
+ if strings.HasSuffix(e, "/") {
+ e = filepath.Join(u.Path, e) + "/"
+ } else {
+ e = filepath.Join(u.Path, e)
+ }
+ c.Check(resp.Body.String(), check.Matches, `(?ms).*<D:href>`+e+`</D:href>.*`, comment)
}
}
}
c.Check(resp.Body.String(), check.Matches, `{"health":"OK"}\n`)
}
+func (s *IntegrationSuite) TestFileContentType(c *check.C) {
+ s.testServer.Config.cluster.Services.WebDAVDownload.ExternalURL.Host = "download.example.com"
+
+ client := s.testServer.Config.Client
+ client.AuthToken = arvadostest.ActiveToken
+ arv, err := arvadosclient.New(&client)
+ c.Assert(err, check.Equals, nil)
+ kc, err := keepclient.MakeKeepClient(arv)
+ c.Assert(err, check.Equals, nil)
+
+ fs, err := (&arvados.Collection{}).FileSystem(&client, kc)
+ c.Assert(err, check.IsNil)
+
+ trials := []struct {
+ filename string
+ content string
+ contentType string
+ }{
+ {"picture.txt", "BMX bikes are small this year\n", "text/plain; charset=utf-8"},
+ {"picture.bmp", "BMX bikes are small this year\n", "image/x-ms-bmp"},
+ {"picture.jpg", "BMX bikes are small this year\n", "image/jpeg"},
+ {"picture1", "BMX bikes are small this year\n", "image/bmp"}, // content sniff; "BM" is the magic signature for .bmp
+ {"picture2", "Cars are small this year\n", "text/plain; charset=utf-8"}, // content sniff
+ }
+ for _, trial := range trials {
+ f, err := fs.OpenFile(trial.filename, os.O_CREATE|os.O_WRONLY, 0777)
+ c.Assert(err, check.IsNil)
+ _, err = f.Write([]byte(trial.content))
+ c.Assert(err, check.IsNil)
+ c.Assert(f.Close(), check.IsNil)
+ }
+ mtxt, err := fs.MarshalManifest(".")
+ c.Assert(err, check.IsNil)
+ var coll arvados.Collection
+ err = client.RequestAndDecode(&coll, "POST", "arvados/v1/collections", nil, map[string]interface{}{
+ "collection": map[string]string{
+ "manifest_text": mtxt,
+ },
+ })
+ c.Assert(err, check.IsNil)
+
+ for _, trial := range trials {
+ u, _ := url.Parse("http://download.example.com/by_id/" + coll.UUID + "/" + trial.filename)
+ req := &http.Request{
+ Method: "GET",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ Header: http.Header{
+ "Authorization": {"Bearer " + client.AuthToken},
+ },
+ }
+ resp := httptest.NewRecorder()
+ s.testServer.Handler.ServeHTTP(resp, req)
+ c.Check(resp.Code, check.Equals, http.StatusOK)
+ c.Check(resp.Header().Get("Content-Type"), check.Equals, trial.contentType)
+ c.Check(resp.Body.String(), check.Equals, trial.content)
+ }
+}
+
+func (s *IntegrationSuite) TestKeepClientBlockCache(c *check.C) {
+ s.testServer.Config.cluster.Collections.WebDAVCache.MaxBlockEntries = 42
+ c.Check(keepclient.DefaultBlockCache.MaxBlocks, check.Not(check.Equals), 42)
+ u := mustParseURL("http://keep-web.example/c=" + arvadostest.FooCollection + "/t=" + arvadostest.ActiveToken + "/foo")
+ req := &http.Request{
+ Method: "GET",
+ Host: u.Host,
+ URL: u,
+ RequestURI: u.RequestURI(),
+ }
+ resp := httptest.NewRecorder()
+ s.testServer.Handler.ServeHTTP(resp, req)
+ c.Check(resp.Code, check.Equals, http.StatusOK)
+ c.Check(keepclient.DefaultBlockCache.MaxBlocks, check.Equals, 42)
+}
+
func copyHeader(h http.Header) http.Header {
hc := http.Header{}
for k, v := range h {
[Service]
Type=notify
ExecStart=/usr/bin/keep-web
+# Set a reasonable default for the open file limit
+LimitNOFILE=65536
Restart=always
RestartSec=1
import (
"flag"
"fmt"
+ "mime"
"os"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/coreos/go-systemd/daemon"
"github.com/ghodss/yaml"
log "github.com/sirupsen/logrus"
log.Printf("keep-web %s started", version)
+ if ext := ".txt"; mime.TypeByExtension(ext) == "" {
+ log.Warnf("cannot look up MIME type for %q -- this probably means /etc/mime.types is missing -- clients will see incorrect content types", ext)
+ }
+
os.Setenv("ARVADOS_API_HOST", cfg.cluster.Services.Controller.ExternalURL.Host)
srv := &server{Config: cfg}
if err := srv.Start(); err != nil {
"net/http"
"net/http/httptest"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
check "gopkg.in/check.v1"
)
"context"
"net/http"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
)
"strings"
"testing"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
check "gopkg.in/check.v1"
)
c.Assert(err, check.IsNil)
c.Check(resp.StatusCode, check.Equals, http.StatusOK)
type summary struct {
- SampleCount string `json:"sample_count"`
- SampleSum float64 `json:"sample_sum"`
- Quantile []struct {
- Quantile float64
- Value float64
- }
+ SampleCount string
+ SampleSum float64
}
type counter struct {
Value int64
"net/http/httptest"
"net/url"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
"gopkg.in/check.v1"
)
"sync/atomic"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"golang.org/x/net/context"
"golang.org/x/net/webdav"
"syscall"
"time"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/health"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/health"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/coreos/go-systemd/daemon"
"github.com/ghodss/yaml"
"github.com/gorilla/mux"
signal.Notify(term, syscall.SIGINT)
// Start serving requests.
- router = MakeRESTRouter(kc, time.Duration(cluster.API.KeepServiceRequestTimeout), cluster.SystemRootToken)
+ router = MakeRESTRouter(kc, time.Duration(cluster.API.KeepServiceRequestTimeout), cluster.ManagementToken)
return http.Serve(listener, httpserver.AddRequestIDs(httpserver.LogRequests(router)))
}
[Service]
Type=notify
ExecStart=/usr/bin/keepproxy
+# Set a reasonable default for the open file limit
+LimitNOFILE=65536
Restart=always
RestartSec=1
"testing"
"time"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
log "github.com/sirupsen/logrus"
. "gopkg.in/check.v1"
resp, err := (&http.Client{}).Do(req)
c.Assert(err, Equals, nil)
c.Check(resp.Header.Get("Via"), Equals, "HTTP/1.1 keepproxy")
+ c.Assert(resp.StatusCode, Equals, http.StatusOK)
locator, err := ioutil.ReadAll(resp.Body)
c.Assert(err, Equals, nil)
resp.Body.Close()
_, _, err = kc.PutB([]byte("some-more-index-data"))
c.Check(err, IsNil)
- kc.Arvados.ApiToken = arvadostest.DataManagerToken
+ kc.Arvados.ApiToken = arvadostest.SystemRootToken
// Invoke GetIndex
for _, spec := range []struct {
import (
"net/http"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
)
var viaAlias = "keepproxy"
"sync/atomic"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/Azure/azure-sdk-for-go/storage"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/Azure/azure-sdk-for-go/storage"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
"context"
"time"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
. "gopkg.in/check.v1"
)
"os"
"sync"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/lib/service"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/lib/service"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
)
"strings"
"time"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
check "gopkg.in/check.v1"
)
if err != nil {
t.Fatal(err)
}
- cluster.SystemRootToken = arvadostest.DataManagerToken
+ cluster.SystemRootToken = arvadostest.SystemRootToken
cluster.ManagementToken = arvadostest.ManagementToken
cluster.Collections.BlobSigning = false
return cluster
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/health"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/health"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
"github.com/gorilla/mux"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
Environment=GOGC=10
Type=notify
ExecStart=/usr/bin/keepstore
+# Set a reasonable default for the open file limit
+LimitNOFILE=65536
Restart=always
RestartSec=1
"net/http"
"net/http/httptest"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/httpserver"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/httpserver"
"github.com/prometheus/client_golang/prometheus"
check "gopkg.in/check.v1"
)
c.Check(resp.Body.String(), check.Equals, "Unauthorized\n")
}
- tok := arvadostest.DataManagerToken
+ tok := arvadostest.SystemRootToken
// Nonexistent mount UUID
resp = s.call("GET", "/mounts/X/blocks", tok, nil)
Value string
}
Summary struct {
- SampleCount string `json:"sample_count"`
- SampleSum float64 `json:"sample_sum"`
- Quantile []struct {
- Quantile float64
- Value float64
- }
+ SampleCount string `json:"sample_count"`
+ SampleSum float64 `json:"sample_sum"`
}
}
}
for _, m := range g.Metric {
if len(m.Label) == 2 && m.Label[0].Name == "code" && m.Label[0].Value == "200" && m.Label[1].Name == "method" && m.Label[1].Value == "put" {
c.Check(m.Summary.SampleCount, check.Equals, "2")
- c.Check(len(m.Summary.Quantile), check.Not(check.Equals), 0)
- c.Check(m.Summary.Quantile[0].Value, check.Not(check.Equals), float64(0))
found[g.Name] = true
}
}
import (
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
)
// SignLocator takes a blobLocator, an apiToken and an expiry time, and
"strconv"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
check "gopkg.in/check.v1"
)
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
)
type remoteProxy struct {
"sync/atomic"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/auth"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/auth"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/prometheus/client_golang/prometheus"
check "gopkg.in/check.v1"
)
s.remoteAPI.StartTLS()
s.cluster = testCluster(c)
s.cluster.Collections.BlobSigningKey = knownKey
- s.cluster.SystemRootToken = arvadostest.DataManagerToken
+ s.cluster.SystemRootToken = arvadostest.SystemRootToken
s.cluster.RemoteClusters = map[string]arvados.RemoteCluster{
s.remoteClusterID: arvados.RemoteCluster{
Host: strings.Split(s.remoteAPI.URL, "//")[1],
"io/ioutil"
"time"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
)
// RunPullWorker receives PullRequests from pullq, invokes
"io/ioutil"
"strings"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/prometheus/client_golang/prometheus"
check "gopkg.in/check.v1"
)
"net/http"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
"github.com/prometheus/client_golang/prometheus"
. "gopkg.in/check.v1"
check "gopkg.in/check.v1"
"sync/atomic"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/AdRoll/goamz/aws"
"github.com/AdRoll/goamz/s3"
"github.com/prometheus/client_golang/prometheus"
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/AdRoll/goamz/s3"
"github.com/AdRoll/goamz/s3/s3test"
"github.com/prometheus/client_golang/prometheus"
"errors"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
)
"context"
"time"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
check "gopkg.in/check.v1"
)
"syscall"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
)
"syscall"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
"github.com/sirupsen/logrus"
check "gopkg.in/check.v1"
"sync/atomic"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
)
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/prometheus/client_golang/prometheus"
dto "github.com/prometheus/client_model/go"
"github.com/sirupsen/logrus"
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
)
*.gem
+Gemfile.lock
\ No newline at end of file
+++ /dev/null
-PATH
- remote: .
- specs:
- arvados-login-sync (1.4.1.20190930204434)
- arvados (~> 1.3.0, >= 1.3.0)
- faraday (< 0.16)
-
-GEM
- remote: https://rubygems.org/
- specs:
- activesupport (5.0.7.2)
- concurrent-ruby (~> 1.0, >= 1.0.2)
- i18n (>= 0.7, < 2)
- minitest (~> 5.1)
- tzinfo (~> 1.1)
- addressable (2.7.0)
- public_suffix (>= 2.0.2, < 5.0)
- andand (1.3.3)
- arvados (1.3.3.20190320201707)
- activesupport (>= 3)
- andand (~> 1.3, >= 1.3.3)
- arvados-google-api-client (>= 0.7, < 0.8.9)
- i18n (~> 0)
- json (>= 1.7.7, < 3)
- jwt (>= 0.1.5, < 2)
- arvados-google-api-client (0.8.7.2)
- activesupport (>= 3.2, < 5.1)
- addressable (~> 2.3)
- autoparse (~> 0.3)
- extlib (~> 0.9)
- faraday (~> 0.9)
- googleauth (~> 0.3)
- launchy (~> 2.4)
- multi_json (~> 1.10)
- retriable (~> 1.4)
- signet (~> 0.6)
- autoparse (0.3.3)
- addressable (>= 2.3.1)
- extlib (>= 0.9.15)
- multi_json (>= 1.0.0)
- concurrent-ruby (1.1.5)
- extlib (0.9.16)
- faraday (0.15.4)
- multipart-post (>= 1.2, < 3)
- googleauth (0.9.0)
- faraday (~> 0.12)
- jwt (>= 1.4, < 3.0)
- memoist (~> 0.16)
- multi_json (~> 1.11)
- os (>= 0.9, < 2.0)
- signet (~> 0.7)
- i18n (0.9.5)
- concurrent-ruby (~> 1.0)
- json (2.2.0)
- jwt (1.5.6)
- launchy (2.4.3)
- addressable (~> 2.3)
- memoist (0.16.0)
- metaclass (0.0.4)
- minitest (5.11.3)
- mocha (1.8.0)
- metaclass (~> 0.0.1)
- multi_json (1.13.1)
- multipart-post (2.1.1)
- os (1.0.1)
- public_suffix (4.0.1)
- rake (12.3.2)
- retriable (1.4.1)
- signet (0.11.0)
- addressable (~> 2.3)
- faraday (~> 0.9)
- jwt (>= 1.5, < 3.0)
- multi_json (~> 1.10)
- thread_safe (0.3.6)
- tzinfo (1.2.5)
- thread_safe (~> 0.1)
-
-PLATFORMS
- ruby
-
-DEPENDENCIES
- arvados-login-sync!
- minitest (>= 5.0.0)
- mocha (>= 1.5.0)
- rake
-
-BUNDLED WITH
- 1.17.3
exit
end
-git_latest_tag = `git tag -l |sort -V -r |head -n1`
-git_latest_tag = git_latest_tag.encode('utf-8').strip
-git_timestamp, git_hash = `git log -n1 --first-parent --format=%ct:%H .`.chomp.split(":")
-git_timestamp = Time.at(git_timestamp.to_i).utc
+git_dir = ENV["GIT_DIR"]
+git_work = ENV["GIT_WORK_TREE"]
+begin
+ ENV["GIT_DIR"] = File.expand_path "#{__dir__}/../../.git"
+ ENV["GIT_WORK_TREE"] = File.expand_path "#{__dir__}/../.."
+ git_timestamp, git_hash = `git log -n1 --first-parent --format=%ct:%H #{__dir__}`.chomp.split(":")
+ if ENV["ARVADOS_BUILDING_VERSION"]
+ version = ENV["ARVADOS_BUILDING_VERSION"]
+ else
+ version = `#{__dir__}/../../build/version-at-commit.sh #{git_hash}`.encode('utf-8').strip
+ end
+ git_timestamp = Time.at(git_timestamp.to_i).utc
+ensure
+ ENV["GIT_DIR"] = git_dir
+ ENV["GIT_WORK_TREE"] = git_work
+end
Gem::Specification.new do |s|
s.name = 'arvados-login-sync'
- s.version = "#{git_latest_tag}.#{git_timestamp.strftime('%Y%m%d%H%M%S')}"
+ s.version = version
s.date = git_timestamp.strftime("%Y-%m-%d")
s.summary = "Set up local login accounts for Arvados users"
s.description = "Creates and updates local login accounts for Arvados users. Built from git commit #{git_hash}"
s.authors = ["Arvados Authors"]
s.email = 'gem-dev@curoverse.com'
- s.licenses = ['GNU Affero General Public License, version 3.0']
+ s.licenses = ['AGPL-3.0']
s.files = ["bin/arvados-login-sync", "agpl-3.0.txt"]
s.executables << "arvados-login-sync"
s.required_ruby_version = '>= 2.1.0'
s.add_runtime_dependency 'arvados', '~> 1.3.0', '>= 1.3.0'
+ s.add_runtime_dependency 'launchy', '< 2.5'
# arvados-google-api-client 0.8.7.2 is incompatible with faraday 0.16.2
s.add_dependency('faraday', '< 0.16')
+ # arvados-google-api-client (and thus arvados) gems
+ # depend on signet, but signet 0.12 is incompatible with ruby 2.3.
+ s.add_dependency('signet', '< 0.12')
s.homepage =
'https://arvados.org'
end
import os
import re
-def git_latest_tag():
- gittags = subprocess.check_output(['git', 'tag', '-l']).split()
- gittags.sort(key=lambda s: [int(u) for u in s.split(b'.')],reverse=True)
- return str(next(iter(gittags)).decode('utf-8'))
+SETUP_DIR = os.path.dirname(os.path.abspath(__file__))
-def git_timestamp_tag():
- gitinfo = subprocess.check_output(
+def choose_version_from():
+ sdk_ts = subprocess.check_output(
['git', 'log', '--first-parent', '--max-count=1',
- '--format=format:%ct', '.']).strip()
- return str(time.strftime('.%Y%m%d%H%M%S', time.gmtime(int(gitinfo))))
+ '--format=format:%ct', os.path.join(SETUP_DIR, "../../sdk/python")]).strip()
+ cwl_ts = subprocess.check_output(
+ ['git', 'log', '--first-parent', '--max-count=1',
+ '--format=format:%ct', SETUP_DIR]).strip()
+ if int(sdk_ts) > int(cwl_ts):
+ getver = os.path.join(SETUP_DIR, "../../sdk/python")
+ else:
+ getver = SETUP_DIR
+ return getver
+
+def git_version_at_commit():
+ curdir = choose_version_from()
+ myhash = subprocess.check_output(['git', 'log', '-n1', '--first-parent',
+ '--format=%H', curdir]).strip()
+ myversion = subprocess.check_output([curdir+'/../../build/version-at-commit.sh', myhash]).strip().decode()
+ return myversion
def save_version(setup_dir, module, v):
- with open(os.path.join(setup_dir, module, "_version.py"), 'w') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'wt') as fp:
return fp.write("__version__ = '%s'\n" % v)
def read_version(setup_dir, module):
- with open(os.path.join(setup_dir, module, "_version.py"), 'r') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'rt') as fp:
return re.match("__version__ = '(.*)'$", fp.read()).groups()[0]
def get_version(setup_dir, module):
save_version(setup_dir, module, env_version)
else:
try:
- save_version(setup_dir, module, git_latest_tag() + git_timestamp_tag())
- except subprocess.CalledProcessError:
+ save_version(setup_dir, module, git_version_at_commit())
+ except (subprocess.CalledProcessError, OSError):
pass
return read_version(setup_dir, module)
'apache-libcloud==2.5.0',
'subprocess32>=3.5.1',
],
- zip_safe=False
- )
+ zip_safe=False,
+)
[Service]
Type=notify
ExecStart=/usr/bin/arvados-ws
+# Set a reasonable default for the open file limit
+LimitNOFILE=65536
Restart=always
RestartSec=1
// cache-invalidation event feed at "ws://.../websocket") to
// websocket clients.
//
-// Installation
+// Installation and configuration
//
// See https://doc.arvados.org/install/install-ws.html.
//
//
// Usage
//
-// arvados-ws [-config /etc/arvados/ws/ws.yml] [-dump-config]
-//
-// Minimal configuration
-//
-// Client:
-// APIHost: localhost:443
-// Listen: ":1234"
-// Postgres:
-// dbname: arvados_production
-// host: localhost
-// password: xyzzy
-// user: arvados
+// arvados-ws [-legacy-ws-config /etc/arvados/ws/ws.yml] [-dump-config]
//
// Options
//
-// -config path
+// -legacy-ws-config path
//
-// Load configuration from the given file instead of the default
-// /etc/arvados/ws/ws.yml
+// Load legacy configuration from the given file instead of the default
+// /etc/arvados/ws/ws.yml. The legacy config overrides the clusterwide config.yml.
//
// -dump-config
//
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/ghodss/yaml"
)
import (
"context"
"database/sql"
+ "errors"
+ "fmt"
"strconv"
"sync"
"sync/atomic"
"time"
- "git.curoverse.com/arvados.git/sdk/go/stats"
+ "git.arvados.org/arvados.git/sdk/go/stats"
"github.com/lib/pq"
)
case <-ticker.C:
logger(nil).Debug("listener ping")
- ps.pqListener.Ping()
+ err := ps.pqListener.Ping()
+ if err != nil {
+ ps.listenerProblem(-1, fmt.Errorf("pqListener ping failed: %s", err))
+ continue
+ }
case pqEvent, ok := <-ps.pqListener.Notify:
if !ok {
- logger(nil).Debug("pqListener Notify chan closed")
+ logger(nil).Error("pqListener Notify chan closed")
return
}
if pqEvent == nil {
// itself in addition to sending us a
// nil event, so this might be
// superfluous:
- ps.listenerProblem(-1, nil)
+ ps.listenerProblem(-1, errors.New("pqListener Notify chan received nil event"))
continue
}
if pqEvent.Channel != "logs" {
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
check "gopkg.in/check.v1"
)
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/stats"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/stats"
)
type handler struct {
"fmt"
"os"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"github.com/ghodss/yaml"
"github.com/sirupsen/logrus"
)
"net/url"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
const (
package main
import (
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
check "gopkg.in/check.v1"
)
"sync/atomic"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
- "git.curoverse.com/arvados.git/sdk/go/health"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/health"
"github.com/sirupsen/logrus"
"golang.org/x/net/websocket"
)
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/coreos/go-systemd/daemon"
)
func (srv *server) Close() {
srv.WaitReady()
srv.eventSource.Close()
+ srv.httpServer.Close()
srv.listener.Close()
}
"sync"
"time"
- "git.curoverse.com/arvados.git/lib/config"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/lib/config"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
check "gopkg.in/check.v1"
)
import (
"database/sql"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
type session interface {
"sync/atomic"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
"github.com/sirupsen/logrus"
)
"sync"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/ctxlog"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/ctxlog"
"golang.org/x/net/websocket"
check "gopkg.in/check.v1"
)
"database/sql"
"errors"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
// newSessionV1 returns a v1 session -- see
defaultdev=$(/sbin/ip route|awk '/default/ { print $5 }')
localip=$(ip addr show $defaultdev | grep 'inet ' | sed 's/ *inet \(.*\)\/.*/\1/')
fi
+ echo "Public arvbox will use address $localip"
iptemp=$(tempfile)
echo $localip > $iptemp
chmod og+r $iptemp
--publish=3001:3001
--publish=8000:8000
--publish=8900:8900
- --publish=9001:9001
+ --publish=9000:9000
--publish=9002:9002
- --publish=25100:25100
- --publish=25107:25107
- --publish=25108:25108
+ --publish=25101:25101
--publish=8001:8001
--publish=8002:8002"
else
mkdir -p "$PG_DATA" "$VAR_DATA" "$PASSENGER" "$GEMS" "$PIPCACHE" "$NPMCACHE" "$GOSTUFF" "$RLIBS"
if ! test -d "$ARVADOS_ROOT" ; then
- git clone https://github.com/curoverse/arvados.git "$ARVADOS_ROOT"
+ git clone https://github.com/arvados/arvados.git "$ARVADOS_ROOT"
fi
if ! test -d "$SSO_ROOT" ; then
- git clone https://github.com/curoverse/sso-devise-omniauth-provider.git "$SSO_ROOT"
+ git clone https://github.com/arvados/sso-devise-omniauth-provider.git "$SSO_ROOT"
fi
if ! test -d "$COMPOSER_ROOT" ; then
- git clone https://github.com/curoverse/composer.git "$COMPOSER_ROOT"
+ git clone https://github.com/arvados/composer.git "$COMPOSER_ROOT"
git -C "$COMPOSER_ROOT" checkout arvados-fork
git -C "$COMPOSER_ROOT" pull
fi
if ! test -d "$WORKBENCH2_ROOT" ; then
- git clone https://github.com/curoverse/arvados-workbench2.git "$WORKBENCH2_ROOT"
+ git clone https://github.com/arvados/arvados-workbench2.git "$WORKBENCH2_ROOT"
fi
if [[ "$CONFIG" = test ]] ; then
+++ /dev/null
------BEGIN PGP PUBLIC KEY BLOCK-----
-
-mQINBFWln24BEADrBl5p99uKh8+rpvqJ48u4eTtjeXAWbslJotmC/CakbNSqOb9o
-ddfzRvGVeJVERt/Q/mlvEqgnyTQy+e6oEYN2Y2kqXceUhXagThnqCoxcEJ3+KM4R
-mYdoe/BJ/J/6rHOjq7Omk24z2qB3RU1uAv57iY5VGw5p45uZB4C4pNNsBJXoCvPn
-TGAs/7IrekFZDDgVraPx/hdiwopQ8NltSfZCyu/jPpWFK28TR8yfVlzYFwibj5WK
-dHM7ZTqlA1tHIG+agyPf3Rae0jPMsHR6q+arXVwMccyOi+ULU0z8mHUJ3iEMIrpT
-X+80KaN/ZjibfsBOCjcfiJSB/acn4nxQQgNZigna32velafhQivsNREFeJpzENiG
-HOoyC6qVeOgKrRiKxzymj0FIMLru/iFF5pSWcBQB7PYlt8J0G80lAcPr6VCiN+4c
-NKv03SdvA69dCOj79PuO9IIvQsJXsSq96HB+TeEmmL+xSdpGtGdCJHHM1fDeCqkZ
-hT+RtBGQL2SEdWjxbF43oQopocT8cHvyX6Zaltn0svoGs+wX3Z/H6/8P5anog43U
-65c0A+64Jj00rNDr8j31izhtQMRo892kGeQAaaxg4Pz6HnS7hRC+cOMHUU4HA7iM
-zHrouAdYeTZeZEQOA7SxtCME9ZnGwe2grxPXh/U/80WJGkzLFNcTKdv+rwARAQAB
-tDdEb2NrZXIgUmVsZWFzZSBUb29sIChyZWxlYXNlZG9ja2VyKSA8ZG9ja2VyQGRv
-Y2tlci5jb20+iQGcBBABCgAGBQJaJYMKAAoJENNu5NUL+WcWfQML/RjicnhN0G28
-+Hj3icn/SHYXg8VTHMX7aAuuClZh7GoXlvVlyN0cfRHTcFPkhv1LJ5/zFVwJxlIc
-xX0DlWbv5zlPQQQfNYH7mGCt3OS0QJGDpCM9Q6iw1EqC0CdtBDIZMGn7s9pnuq5C
-3kzer097BltvuXWI+BRMvVad2dhzuOQi76jyxhprTUL6Xwm7ytNSja5Xyigfc8HF
-rXhlQxnMEpWpTttY+En1SaTgGg7/4yB9jG7UqtdaVuAvWI69V+qzJcvgW6do5XwH
-b/5waezxOU033stXcRCYkhEenm+mXzcJYXt2avg1BYIQsZuubCBlpPtZkgWWLOf+
-eQR1Qcy9IdWQsfpH8DX6cEbeiC0xMImcuufI5KDHZQk7E7q8SDbDbk5Dam+2tRef
-eTB2A+MybVQnpsgCvEBNQ2TfcWsZ6uLHMBhesx/+rmyOnpJDTvvCLlkOMTUNPISf
-GJI0IHZFHUJ/+/uRfgIzG6dSqxQ0zHXOwGg4GbhjpQ5I+5Eg2BNRkYkCHAQQAQoA
-BgUCVsO73QAKCRBcs2HlUvsNEB8rD/4t+5uEsqDglXJ8m5dfL88ARHKeFQkW17x7
-zl7ctYHHFSFfP2iajSoAVfe5WN766TsoiHgfBE0HoLK8RRO7fxs9K7Czm6nyxB3Z
-p+YgSUZIS3wqc43jp8gd2dCCQelKIDv5rEFWHuQlyZersK9AJqIggS61ZQwJLcVY
-fUVnIdJdCmUV9haR7vIfrjNP88kqiInZWHy2t8uaB7HFPpxlNYuiJsA0w98rGQuY
-6fWlX71JnBEsgG+L73XAB0fm14QP0VvEB3njBZYlsO2do2B8rh5g51htslK5wqgC
-U61lfjnykSM8yRQbOHvPK7uYdmSF3UXqcP/gjmI9+C8s8UdnMa9rv8b8cFwpEjHu
-xeCmQKYQ/tcLOtRYZ1DIvzxETGH0xbrz6wpKuIMgY7d3xaWdjUf3ylvO0DnlXJ9Y
-r15fYndzDLPSlybIO0GrE+5grHntlSBbMa5BUNozaQ/iQBEUZ/RY+AKxy+U28JJB
-W2Wb0oun6+YdhmwgFyBoSFyp446Kz2P2A1+l/AGhzltc25Vsvwha+lRZfet464yY
-GoNBurTbQWS63JWYFoTkKXmWeS2789mQOQqka3wFXMDzVtXzmxSEbaler7lZbhTj
-wjAAJzp6kdNsPbde4lUIzt6FTdJm0Ivb47hMV4dWKEnFXrYjui0ppUH1RFUU6hyz
-IF8kfxDKO4kCHAQQAQoABgUCV0lgZQAKCRBcs2HlUvsNEHh9EACOm7QH2MGD7gI3
-0VMvapZz4Wfsbda58LFM7G5qPCt10zYfpf0dPJ7tHbHM8N9ENcI7tvH4dTfGsttt
-/uvX9PsiAml6kdfAGxoBRil+76NIHxFWsXSLVDd3hzcnRhc5njimwJa8SDBAp0kx
-v05BVWDvTbZb/b0jdgbqZk2oE0RK8S2Sp1bFkc6fl3pcJYFOQQmelOmXvPmyHOhd
-W2bLX9e1/IulzVf6zgi8dsj9IZ9aLKJY6Cz6VvJ85ML6mLGGwgNvJTLdWqntFFr0
-QqkdM8ZSp9ezWUKo28XGoxDAmo6ENNTLIZjuRlnj1Yr9mmwmf4mgucyqlU93XjCR
-y6u5bpuqoQONRPYCR/UKKk/qoGnYXnhX6AtUD+3JHvrV5mINkd/ad5eR5pviUGz+
-H/VeZqVhMbxxgkm3Gra9+bZ2pCCWboKtqIM7JtXYwks/dttkV5fTqBarJtWzcwO/
-Pv3DreTdnMoVNGzNk/84IeNmGww/iQ1Px0psVCKVPsKxr2RjNhVP7qdA0cTguFNX
-y+hx5Y/JYjSVnxIN74aLoDoeuoBhfYpOY+HiJTaM+pbLfoJr5WUPf/YUQ3qBvgG4
-WXiJUOAgsPmNY//n1MSMyhz1SvmhSXfqCVTb26IyVv0oA3UjLRcKjr18mHB5d9Fr
-NIGVHg8gJjRmXid5BZJZwKQ5niivjokCIgQQAQoADAUCV3uc0wWDB4YfgAAKCRAx
-uBWjAQZ0qe2DEACaq16AaJ2QKtOweqlGk92gQoJ2OCbIW15hW/1660u+X+2CQz8d
-nySXaq22AyBx4Do88b6d54D6TqScyObGJpGroHqAjvyh7v/t/V6oEwe34Ls2qUX2
-77lqfqsz3B0nW/aKZ2oH8ygM3tw0J5y4sAj5bMrxqcwuCs14Fds3v+K2mjsntZCu
-ztHB8mqZp/6v00d0vGGqcl6uVaS04cCQMNUkQ7tGMXlyAEIiH2ksU+/RJLaIqFtg
-klfP3Y7foAY15ymCSQPD9c81+xjbf0XNmBtDreL+rQVtesahU4Pp+Sc23iuXGdY2
-yF13wnGmScojNjM2BoUiffhFeyWBdOTgCFhOEhk0Y1zKrkNqDC0sDAj0B5vhQg/T
-10NLR2MerSk9+MJLHZqFrHXo5f59zUvte/JhtViP5TdO/Yd4ptoEcDspDKLv0FrN
-7xsP8Q6DmBz1doCe06PQS1Z1Sv4UToHRS2RXskUnDc8Cpuex5mDBQO+LV+tNToh4
-ZNcpj9lFHNuaA1qS15X3EVCySZaPyn2WRd6ZisCKtwopRmshVItTTcLmrxu+hHAF
-bVRVFRRSCE8JIZLkWwRyMrcxB2KLBYA+f2nCtD2rqiZ8K8Cr9J1qt2iu5yogCwA/
-ombzzYxWWrt/wD6ixJr5kZwBJZroHB7FkRBcTDIzDFYGBYmClACTvLuOnokCIgQS
-AQoADAUCWKy8/gWDB4YfgAAKCRAkW0txwCm5FmrGD/9lL31LQtn5wxwoZvfEKuMh
-KRw0FDUq59lQpqyMxp7lrZozFUqlH4MLTeEWbFle+R+UbUoVkBnZ/cSvVGwtRVaH
-wUeP9NAqBLtIqt4S0T2T0MW6Ug0DVH7V7uYuFktpv1xmIzcC4gV+LHhp95SPYbWr
-uVMi6ENIMZoEqW9uHOy6n2/nh76dR2NVJiZHt5LbG8YXM/Y+z3XsIenwKQ97YO7x
-yEaM7UdsQSqKVB0isTQXT2wxoA/pDvSyu7jpElD5dOtPPz3r0fQpcQKrq0IMjgcB
-u5X5tQ5uktmmdaAvIwLibUB9A+htFiFP4irSx//Lkn66RLjrSqwtMCsv7wbPvTfc
-fdpcmkR767t1VvjQWj9DBfOMjGJk9eiLkUSHYyQst6ELyVdutAIHRV2GQqfEKJzc
-cD3wKdbaOoABqRVr/ok5Oj0YKSrvk0lW3l8vS/TZXvQppSMdJuaTR8JDy6dGuoKt
-uyFDb0fKf1JU3+Gj3Yy2YEfqX0MjNQsck9pDV647UXXdzF9uh3cYVfPbl+xBYOU9
-d9qRcqMut50AVIxpUepGa4Iw7yOSRPCnPAMNAPSmAdJTaQcRWcUd9LOaZH+ZFLJZ
-mpbvS//jQpoBt++Ir8wl9ZJXICRJcvrQuhCjOSNLFzsNr/wyVLnGwmTjLWoJEA0p
-c0cYtLW6fSGknkvNA7e8LYkCMwQQAQgAHRYhBFI9KC2HD6c70cN9svEo88fgKodF
-BQJZ76NPAAoJEPEo88fgKodFYXwP+wW6F7UpNmKXaddu+aamLTe3uv8OSKUHQbRh
-By1oxfINI7iC+BZl9ycJip0S08JH0F+RZsi1H24+GcP9vGTDgu3z0NcOOD4mPpzM
-jSi2/hbGzh9C84pxRJVLAKrbqCz7YQ6JdNG4RUHW/r0QgKTnTlvikVx7n9QaPrVl
-PsVFU3xv5oQxUHpwNWyvpPGTDiycuaGKekodYhZ0vKzJzfyyaUTgfxvTVVj10jyi
-f+mSfY8YBHhDesgYF1d2CUEPth9z5KC/eDgY7KoWs8ZK6sVL3+tGrnqK/s6jqcsk
-J7Kt4c3k0jU56rUo8+jnu9yUHcBXAjtr1Vz/nwVfqmPzukIF1ZkMqdQqIRtvDyEC
-16yGngMpWEVM3/vIsi2/uUMuGvjEkEmqs2oLK1hf+Y0W6Avq+9fZUQUEk0e4wbpu
-RCqX5OjeQTEEXmAzoMsdAiwFvr1ul+eI/BPy+29OQ77hz3/dotdYYfs1JVkiFUhf
-PJwvpoUOXiA5V56wl3i5tkbRSLRSkLmiLTlCEfClHEK/wwLU4ZKuD5UpW8xL438l
-/Ycnsl7aumnofWoaEREBc1Xbnx9SZbrTT8VctW8XpMVIPxCwJCp/LqHtyEbnptnD
-7QoHtdWexFmQFUIlGaDiaL7nv0BD6RA/HwhVSxU3b3deKDYNpG9QnAzte8KXA9/s
-ejP18gCKiQI4BBMBAgAiBQJVpZ9uAhsvBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIX
-gAAKCRD3YiFXLFJgnbRfEAC9Uai7Rv20QIDlDogRzd+Vebg4ahyoUdj0CH+nAk40
-RIoq6G26u1e+sdgjpCa8jF6vrx+smpgd1HeJdmpahUX0XN3X9f9qU9oj9A4I1WDa
-lRWJh+tP5WNv2ySy6AwcP9QnjuBMRTnTK27pk1sEMg9oJHK5p+ts8hlSC4SluyMK
-H5NMVy9c+A9yqq9NF6M6d6/ehKfBFFLG9BX+XLBATvf1ZemGVHQusCQebTGv0C0V
-9yqtdPdRWVIEhHxyNHATaVYOafTj/EF0lDxLl6zDT6trRV5n9F1VCEh4Aal8L5Mx
-VPcIZVO7NHT2EkQgn8CvWjV3oKl2GopZF8V4XdJRl90U/WDv/6cmfI08GkzDYBHh
-S8ULWRFwGKobsSTyIvnbk4NtKdnTGyTJCQ8+6i52s+C54PiNgfj2ieNn6oOR7d+b
-NCcG1CdOYY+ZXVOcsjl73UYvtJrO0Rl/NpYERkZ5d/tzw4jZ6FCXgggA/Zxcjk6Y
-1ZvIm8Mt8wLRFH9Nww+FVsCtaCXJLP8DlJLASMD9rl5QS9Ku3u7ZNrr5HWXPHXIT
-X660jglyshch6CWeiUATqjIAzkEQom/kEnOrvJAtkypRJ59vYQOedZ1sFVELMXg2
-UCkD/FwojfnVtjzYaTCeGwFQeqzHmM241iuOmBYPeyTY5veF49aBJA1gEJOQTvBR
-8YkCOQQRAQgAIxYhBDlHZ/sRadXUayJzU3Es9wyw8WURBQJaajQrBYMHhh+AAAoJ
-EHEs9wyw8WURDyEP/iD903EcaiZP68IqUBsdHMxOaxnKZD9H2RTBaTjR6r9UjCOf
-bomXpVzL0dMZw1nHIE7u2VT++5wk+QvcN7epBgOWUb6tNcv3nI3vqMGRR+fKW15V
-J1sUwMOKGC4vlbLRVRWd2bb+oPZWeteOxNIqu/8DHDFHg3LtoYxWbrMYHhvd0ben
-B9GvwoqeBaqAeERKYCEoPZRB5O6ZHccX2HacjwFs4uYvIoRg4WI+ODXVHXCgOVZq
-yRuVAuQUjwkLbKL1vxJ01EWzWwRI6cY9mngFXNTHEkoxNyjzlfpn/YWheRiwpwg+
-ymDL4oj1KHNq06zNl38dZCd0rde3OFNuF904H6D+reYL50YA9lkL9mRtlaiYyo1J
-SOOjdr+qxuelfbLgDSeM75YVSiYiZZO8DWr2Cq/SNp47z4T4Il/yhQ6eAstZOIkF
-KQlBjr+ZtLdUu67sPdgPoT842IwSrRTrirEUd6cyADbRggPHrOoYEooBCrCgDYCM
-K1xxG9f6Q42yvL1zWKollibsvJF8MVwgkWfJJyhLYylmJ8osvX9LNdCJZErVrRTz
-wAM00crp/KIiIDCREEgE+5BiuGdM70gSuy3JXSs78JHA4l2tu1mDBrMxNR+C8lpj
-1pnLFHTfGYwHQSwKm42/JZqbePh6LKblUdS5Np1dl0tk5DDHBluRzhx16H7E
-=lwu7
------END PGP PUBLIC KEY BLOCK-----
--- /dev/null
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth
+lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh
+38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq
+L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7
+UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N
+cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht
+ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo
+vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD
+G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ
+XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj
+q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB
+tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3
+BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO
+v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd
+tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk
+jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m
+6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P
+XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc
+FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8
+g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm
+ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh
+9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5
+G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW
+FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB
+EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF
+M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx
+Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu
+w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk
+z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8
+eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb
+VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa
+1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X
+zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ
+pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7
+ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ
+BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY
+1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp
+YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI
+mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES
+KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7
+JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ
+cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0
+6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5
+U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z
+VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f
+irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk
+SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz
+QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W
+9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw
+24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe
+dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y
+Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR
+H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh
+/nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ
+M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S
+xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O
+jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG
+YT90qFF93M3v01BbxP+EIY2/9tiIPbrd
+=0YYh
+-----END PGP PUBLIC KEY BLOCK-----
linkchecker python3-virtualenv python-virtualenv xvfb iceweasel \
libgnutls28-dev python3-dev vim cadaver cython gnupg dirmngr \
libsecret-1-dev r-base r-cran-testthat libxml2-dev pandoc \
- python3-setuptools python3-pip openjdk-8-jdk bsdmainutils net-tools&& \
+ python3-setuptools python3-pip openjdk-8-jdk bsdmainutils net-tools \
+ ruby2.3 ruby-dev bundler && \
apt-get clean
ENV RUBYVERSION_MINOR 2.3
ENV RUBYVERSION 2.3.5
# Install Ruby from source
-RUN cd /tmp && \
- curl -f http://cache.ruby-lang.org/pub/ruby/${RUBYVERSION_MINOR}/ruby-${RUBYVERSION}.tar.gz | tar -xzf - && \
- cd ruby-${RUBYVERSION} && \
- ./configure --disable-install-doc && \
- make && \
- make install && \
- cd /tmp && \
- rm -rf ruby-${RUBYVERSION}
+# RUN cd /tmp && \
+# curl -f http://cache.ruby-lang.org/pub/ruby/${RUBYVERSION_MINOR}/ruby-${RUBYVERSION}.tar.gz | tar -xzf - && \
+# cd ruby-${RUBYVERSION} && \
+# ./configure --disable-install-doc && \
+# make && \
+# make install && \
+# cd /tmp && \
+# rm -rf ruby-${RUBYVERSION}
ENV GEM_HOME /var/lib/gems
ENV GEM_PATH /var/lib/gems
ENV PATH $PATH:/var/lib/gems/bin
-ENV GOVERSION 1.12.7
+ENV GOVERSION 1.13.6
# Install golang binary
RUN curl -f http://storage.googleapis.com/golang/go${GOVERSION}.linux-amd64.tar.gz | \
VOLUME /var/log/nginx
VOLUME /etc/ssl/private
-ADD 58118E89F3A912897C070ADBF76221572C52609D.asc /tmp/
-RUN apt-key add --no-tty /tmp/58118E89F3A912897C070ADBF76221572C52609D.asc && \
- rm -f /tmp/58118E89F3A912897C070ADBF76221572C52609D.asc
+ADD 8D81803C0EBFCD88.asc /tmp/
+RUN apt-key add --no-tty /tmp/8D81803C0EBFCD88.asc && \
+ rm -f /tmp/8D81803C0EBFCD88.asc
RUN mkdir -p /etc/apt/sources.list.d && \
- echo deb https://apt.dockerproject.org/repo debian-stretch main > /etc/apt/sources.list.d/docker.list && \
+ echo deb https://download.docker.com/linux/debian/ stretch stable > /etc/apt/sources.list.d/docker.list && \
apt-get update && \
- apt-get -yq --no-install-recommends install docker-engine=17.05.0~ce-0~debian-stretch && \
+ apt-get -yq --no-install-recommends install docker-ce=17.06.0~ce-0~debian && \
apt-get clean
RUN rm -rf /var/lib/postgresql && mkdir -p /var/lib/postgresql
ARG workbench2_version=master
RUN cd /usr/src && \
- git clone --no-checkout https://github.com/curoverse/arvados.git && \
+ git clone --no-checkout https://github.com/arvados/arvados.git && \
git -C arvados checkout ${arvados_version} && \
git -C arvados pull && \
- git clone --no-checkout https://github.com/curoverse/sso-devise-omniauth-provider.git sso && \
+ git clone --no-checkout https://github.com/arvados/sso-devise-omniauth-provider.git sso && \
git -C sso checkout ${sso_version} && \
git -C sso pull && \
- git clone --no-checkout https://github.com/curoverse/composer.git && \
+ git clone --no-checkout https://github.com/arvados/composer.git && \
git -C composer checkout ${composer_version} && \
git -C composer pull && \
- git clone --no-checkout https://github.com/curoverse/arvados-workbench2.git workbench2 && \
+ git clone --no-checkout https://github.com/arvados/arvados-workbench2.git workbench2 && \
git -C workbench2 checkout ${workbench2_version} && \
- git -C workbench2 pull
+ git -C workbench2 pull && \
+ chown -R 1000:1000 /usr/src
ADD service/ /var/lib/arvbox/service
RUN ln -sf /var/lib/arvbox/service /etc
RUN echo "production" > /var/lib/arvados/sso_rails_env
RUN echo "production" > /var/lib/arvados/workbench_rails_env
-RUN chown -R 1000:1000 /usr/src && /usr/local/lib/arvbox/createusers.sh
+RUN /usr/local/lib/arvbox/createusers.sh
+RUN sudo -u arvbox /var/lib/arvbox/service/api/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/composer/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/workbench2/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/keep-web/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/sso/run-service --only-deps
-RUN sudo -u arvbox /var/lib/arvbox/service/api/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/workbench/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/doc/run-service --only-deps
RUN sudo -u arvbox /var/lib/arvbox/service/vm/run-service --only-deps
AutoSetupNewUsers: true
AutoSetupNewUsersWithVmUUID: $vm_uuid
AutoSetupNewUsersWithRepository: true
- AnonymousUserToken: $(cat /var/lib/arvados/superuser_token)
Workbench:
SecretKeyBase: $workbench_secret_key_base
ArvadosDocsite: http://$localip:${services[doc]}/
export HOME=$(getent passwd arvbox | cut -d: -f6)
defaultdev=$(/sbin/ip route|awk '/default/ { print $5 }')
+dockerip=$(/sbin/ip route | grep default | awk '{ print $3 }')
containerip=$(ip addr show $defaultdev | grep 'inet ' | sed 's/ *inet \(.*\)\/.*/\1/')
if test -s /var/run/localip_override ; then
localip=$(cat /var/run/localip_override)
else
frozen=""
fi
- if ! test -x /var/lib/gems/bin/bundler ; then
- bundlergem=$(ls -r $GEM_HOME/cache/bundler-*.gem 2>/dev/null | head -n1 || true)
- if test -n "$bundlergem" ; then
- flock /var/lib/gems/gems.lock gem install --local --no-document $bundlergem
- else
- flock /var/lib/gems/gems.lock gem install --no-document bundler
- fi
- fi
- if ! flock /var/lib/gems/gems.lock bundler install --path $GEM_HOME --local --no-deployment $frozen "$@" ; then
- flock /var/lib/gems/gems.lock bundler install --path $GEM_HOME --no-deployment $frozen "$@"
+ # if ! test -x /var/lib/gems/bin/bundler ; then
+ # bundleversion=2.0.2
+ # bundlergem=$(ls -r $GEM_HOME/cache/bundler-${bundleversion}.gem 2>/dev/null | head -n1 || true)
+ # if test -n "$bundlergem" ; then
+ # flock /var/lib/gems/gems.lock gem install --verbose --local --no-document $bundlergem
+ # else
+ # flock /var/lib/gems/gems.lock gem install --verbose --no-document bundler --version ${bundleversion}
+ # fi
+ # fi
+ if ! flock /var/lib/gems/gems.lock bundler install --verbose --path $GEM_HOME --local --no-deployment $frozen "$@" ; then
+ flock /var/lib/gems/gems.lock bundler install --verbose --path $GEM_HOME --no-deployment $frozen "$@"
fi
}
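As an aside, `run_bundler` above (and `pip_install` below) share one pattern: attempt an offline install from the local package cache first, and only fall back to the network if that fails. A minimal, standalone sketch of that fallback logic in Python (the command lists are placeholders, not the actual arvbox commands):

```python
import subprocess
import sys

def install_with_fallback(commands):
    """Run each candidate command in order and return the first one that
    exits successfully; raise if every attempt fails. Mirrors the
    offline-cache-then-network retry used by run_bundler/pip_install."""
    for cmd in commands:
        if subprocess.call(cmd) == 0:
            return cmd
    raise RuntimeError("all install attempts failed")

# Illustrative stand-ins: the first "install" fails, the second succeeds.
offline = [sys.executable, '-c', 'raise SystemExit(1)']
network = [sys.executable, '-c', 'raise SystemExit(0)']
install_with_fallback([offline, network])
```

The point of ordering the attempts this way is that a warm cache makes container rebuilds fast and deterministic, while the network fallback keeps a cold build from failing outright.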
popd
if [ "$PYCMD" = "python3" ]; then
- if ! pip3 install --no-index --find-links /var/lib/pip $1 ; then
- pip3 install $1
+ if ! pip3 install --prefix /usr/local --no-index --find-links /var/lib/pip $1 ; then
+ pip3 install --prefix /usr/local $1
fi
else
if ! pip install --no-index --find-links /var/lib/pip $1 ; then
#
# SPDX-License-Identifier: AGPL-3.0
-mkdir -p /var/lib/gopath
-cd /var/lib/gopath
+export GOPATH=/var/lib/gopath
+mkdir -p $GOPATH
-export GOPATH=$PWD
-mkdir -p "$GOPATH/src/git.curoverse.com"
-ln -sfn "/usr/src/arvados" "$GOPATH/src/git.curoverse.com/arvados.git"
-
-flock /var/lib/gopath/gopath.lock go get -t github.com/kardianos/govendor
-cd "$GOPATH/src/git.curoverse.com/arvados.git"
-flock /var/lib/gopath/gopath.lock go get -v -d ...
-flock /var/lib/gopath/gopath.lock "$GOPATH/bin/govendor" sync
-
-flock /var/lib/gopath/gopath.lock go get -t "git.curoverse.com/arvados.git/cmd/arvados-server"
+cd /usr/src/arvados
+if [[ $UID = 0 ]] ; then
+ /usr/local/lib/arvbox/runsu.sh flock /var/lib/gopath/gopath.lock go mod download
+ /usr/local/lib/arvbox/runsu.sh flock /var/lib/gopath/gopath.lock go install git.arvados.org/arvados.git/cmd/arvados-server
+else
+ flock /var/lib/gopath/gopath.lock go mod download
+ flock /var/lib/gopath/gopath.lock go install git.arvados.org/arvados.git/cmd/arvados-server
+fi
install $GOPATH/bin/arvados-server /usr/local/bin
. /usr/local/lib/arvbox/common.sh
. /usr/local/lib/arvbox/go-setup.sh
-flock /var/lib/gopath/gopath.lock go get -t "git.curoverse.com/arvados.git/services/keepstore"
+flock /var/lib/gopath/gopath.lock go install "git.arvados.org/arvados.git/services/keepstore"
install $GOPATH/bin/keepstore /usr/local/bin
if test "$1" = "--only-deps" ; then
. /usr/local/lib/arvbox/common.sh
. /usr/local/lib/arvbox/go-setup.sh
-flock /var/lib/gopath/gopath.lock go get -t "git.curoverse.com/arvados.git/services/arv-git-httpd"
+flock /var/lib/gopath/gopath.lock go install "git.arvados.org/arvados.git/services/arv-git-httpd"
install $GOPATH/bin/arv-git-httpd /usr/local/bin
if test "$1" = "--only-deps" ; then
. /usr/local/lib/arvbox/common.sh
. /usr/local/lib/arvbox/go-setup.sh
-flock /var/lib/gopath/gopath.lock go get -t "git.curoverse.com/arvados.git/services/crunch-run"
-flock /var/lib/gopath/gopath.lock go get -t "git.curoverse.com/arvados.git/services/crunch-dispatch-local"
-install $GOPATH/bin/crunch-run $GOPATH/bin/crunch-dispatch-local /usr/local/bin
+flock /var/lib/gopath/gopath.lock go install "git.arvados.org/arvados.git/services/crunch-dispatch-local"
+install $GOPATH/bin/crunch-dispatch-local /usr/local/bin
+ln -sf arvados-server /usr/local/bin/crunch-run
if test "$1" = "--only-deps" ; then
exit
. /usr/local/lib/arvbox/common.sh
+
cd /usr/src/arvados/doc
run_bundler --without=development
-cd /usr/src/arvados/sdk/R
-R --quiet --vanilla --file=install_deps.R
+# Generating the R docs is expensive, so during development skip the R SDK
+# build when the file "no-sdk" exists.
+if [[ ! -f /usr/src/arvados/doc/no-sdk ]] ; then
+ cd /usr/src/arvados/sdk/R
+ R --quiet --vanilla --file=install_deps.R
+fi
if test "$1" = "--only-deps" ; then
exit
. /usr/local/lib/arvbox/common.sh
. /usr/local/lib/arvbox/go-setup.sh
-flock /var/lib/gopath/gopath.lock go get -t "git.curoverse.com/arvados.git/services/keep-web"
+flock /var/lib/gopath/gopath.lock go install "git.arvados.org/arvados.git/services/keep-web"
install $GOPATH/bin/keep-web /usr/local/bin
if test "$1" = "--only-deps" ; then
. /usr/local/lib/arvbox/common.sh
. /usr/local/lib/arvbox/go-setup.sh
-flock /var/lib/gopath/gopath.lock go get -t "git.curoverse.com/arvados.git/services/keepproxy"
+flock /var/lib/gopath/gopath.lock go install "git.arvados.org/arvados.git/services/keepproxy"
install $GOPATH/bin/keepproxy /usr/local/bin
if test "$1" = "--only-deps" ; then
fi
fi
+geo_dockerip=
+if [[ -f /var/run/localip_override ]] ; then
+ geo_dockerip="$dockerip/32 0;"
+fi
+
openssl verify -CAfile $root_cert $server_cert
cat <<EOF >/var/lib/arvados/nginx.conf
}
http {
- access_log off;
- include /etc/nginx/mime.types;
- default_type application/octet-stream;
- client_max_body_size 128M;
-
- geo \$external_client {
- default 1;
- 127.0.0.0/8 0;
- $containerip/32 0;
- }
-
- server {
- listen ${services[doc]} default_server;
- listen [::]:${services[doc]} default_server;
- root /usr/src/arvados/doc/.site;
- index index.html;
- server_name _;
- }
+ access_log off;
+ include /etc/nginx/mime.types;
+ default_type application/octet-stream;
+ client_max_body_size 128M;
+
+ geo \$external_client {
+ default 1;
+ 127.0.0.0/8 0;
+ $containerip/32 0;
+ $geo_dockerip
+ }
+
+ server {
+ listen ${services[doc]} default_server;
+ listen [::]:${services[doc]} default_server;
+ root /usr/src/arvados/doc/.site;
+ index index.html;
+ server_name _;
+ }
server {
listen 80 default_server;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-External-Client \$external_client;
proxy_redirect off;
+ # This turns off response caching
+ proxy_buffering off;
}
}
-upstream arvados-ws {
- server localhost:${services[websockets]};
-}
-server {
- listen *:${services[websockets-ssl]} ssl default_server;
- server_name websockets;
-
- proxy_connect_timeout 90s;
- proxy_read_timeout 300s;
-
- ssl on;
- ssl_certificate "${server_cert}";
- ssl_certificate_key "${server_cert_key}";
-
- location / {
- proxy_pass http://arvados-ws;
- proxy_set_header Upgrade \$http_upgrade;
- proxy_set_header Connection "upgrade";
- proxy_set_header Host \$http_host;
- proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
+ upstream arvados-ws {
+ server localhost:${services[websockets]};
+ }
+ server {
+ listen *:${services[websockets-ssl]} ssl default_server;
+ server_name websockets;
+
+ proxy_connect_timeout 90s;
+ proxy_read_timeout 300s;
+
+ ssl on;
+ ssl_certificate "${server_cert}";
+ ssl_certificate_key "${server_cert_key}";
+
+ location / {
+ proxy_pass http://arvados-ws;
+ proxy_set_header Upgrade \$http_upgrade;
+ proxy_set_header Connection "upgrade";
+ proxy_set_header Host \$http_host;
+ proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
+ }
}
-}
upstream workbench2 {
server localhost:${services[workbench2]};
}
}
-
upstream keepproxy {
server localhost:${services[keepproxy]};
}
run_bundler --binstubs=$PWD/binstubs
ln -sf /usr/src/arvados/sdk/cli/binstubs/arv /usr/local/bin/arv
+export PYCMD=python3
+
# Need to install the upstream version of pip because the python-pip package
# shipped with Debian 9 is patched to change behavior in a way that breaks our
# use case.
# multiple packages, because it will blindly install the latest version of each
# dependency requested by each package, even if a compatible package version is
# already installed.
-pip_install pip==9.0.3
+if ! pip3 install --no-index --find-links /var/lib/pip pip==9.0.3 ; then
+ pip3 install pip==9.0.3
+fi
pip_install wheel
export ARVADOS_VIRTUAL_MACHINE_UUID=$(cat /var/lib/arvados/vm-uuid)
while true ; do
- bundle exec arvados-login-sync
+ arvados-login-sync
sleep 120
done
. /usr/local/lib/arvbox/common.sh
cd /usr/src/arvados/services/login-sync
-run_bundler
+run_bundler --binstubs=$PWD/binstubs
+ln -sf /usr/src/arvados/services/login-sync/binstubs/arvados-login-sync /usr/local/bin/arvados-login-sync
if test "$1" = "--only-deps" ; then
exit
. /usr/local/lib/arvbox/go-setup.sh
-flock /var/lib/gopath/gopath.lock go get -t "git.curoverse.com/arvados.git/services/ws"
+flock /var/lib/gopath/gopath.lock go install "git.arvados.org/arvados.git/services/ws"
install $GOPATH/bin/ws /usr/local/bin/arvados-ws
if test "$1" = "--only-deps" ; then
cat <<EOF > /usr/src/workbench2/public/config.json
{
"API_HOST": "${localip}:${services[controller-ssl]}",
- "VOCABULARY_URL": "vocabulary-example.json",
- "FILE_VIEWERS_CONFIG_URL": "file-viewers-example.json"
+ "VOCABULARY_URL": "/vocabulary-example.json",
+ "FILE_VIEWERS_CONFIG_URL": "/file-viewers-example.json"
}
EOF
# Can't use "yarn start", need to run the dev server script
# directly so that the TERM signal from "sv restart" gets to the
# right process.
+export VERSION=$(./version-at-commit.sh)
exec node node_modules/react-scripts-ts/scripts/start.js
import os
import re
-def git_latest_tag():
- gittags = subprocess.check_output(['git', 'tag', '-l']).split()
- gittags.sort(key=lambda s: [int(u) for u in s.split(b'.')],reverse=True)
- return str(next(iter(gittags)).decode('utf-8'))
+SETUP_DIR = os.path.dirname(os.path.abspath(__file__))
-def git_timestamp_tag():
- gitinfo = subprocess.check_output(
+def choose_version_from():
+ sdk_ts = subprocess.check_output(
['git', 'log', '--first-parent', '--max-count=1',
- '--format=format:%ct', '.']).strip()
- return str(time.strftime('.%Y%m%d%H%M%S', time.gmtime(int(gitinfo))))
+ '--format=format:%ct', os.path.join(SETUP_DIR, "../../sdk/python")]).strip()
+ cwl_ts = subprocess.check_output(
+ ['git', 'log', '--first-parent', '--max-count=1',
+ '--format=format:%ct', SETUP_DIR]).strip()
+ if int(sdk_ts) > int(cwl_ts):
+ getver = os.path.join(SETUP_DIR, "../../sdk/python")
+ else:
+ getver = SETUP_DIR
+ return getver
+
+def git_version_at_commit():
+ curdir = choose_version_from()
+ myhash = subprocess.check_output(['git', 'log', '-n1', '--first-parent',
+ '--format=%H', curdir]).strip()
+ myversion = subprocess.check_output([curdir+'/../../build/version-at-commit.sh', myhash]).strip().decode()
+ return myversion
def save_version(setup_dir, module, v):
- with open(os.path.join(setup_dir, module, "_version.py"), 'w') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'wt') as fp:
return fp.write("__version__ = '%s'\n" % v)
def read_version(setup_dir, module):
- with open(os.path.join(setup_dir, module, "_version.py"), 'r') as fp:
+ with open(os.path.join(setup_dir, module, "_version.py"), 'rt') as fp:
return re.match("__version__ = '(.*)'$", fp.read()).groups()[0]
def get_version(setup_dir, module):
save_version(setup_dir, module, env_version)
else:
try:
- save_version(setup_dir, module, git_latest_tag() + git_timestamp_tag())
- except subprocess.CalledProcessError:
+ save_version(setup_dir, module, git_version_at_commit())
+ except (subprocess.CalledProcessError, OSError):
pass
return read_version(setup_dir, module)
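The core decision in the new `choose_version_from()` above reduces to comparing the last first-parent commit timestamps of the two source trees and versioning from whichever changed more recently, with ties going to the tool's own directory. A minimal sketch of just that comparison (directory names are illustrative):

```python
def choose_version_dir(sdk_ts, tool_ts, sdk_dir, tool_dir):
    """Pick the directory to derive the package version from: the SDK
    wins only if its last commit is strictly newer, matching the
    comparison in choose_version_from()."""
    return sdk_dir if int(sdk_ts) > int(tool_ts) else tool_dir

choose_version_dir(1580000000, 1570000000, 'sdk/python', 'sdk/cwl')
```

In the real code the timestamps come from `git log --first-parent --max-count=1 --format=format:%ct <dir>`, so a commit touching either tree bumps the derived version for the dependent package.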
author='Arvados',
author_email='info@arvados.org',
url="https://arvados.org",
- download_url="https://github.com/curoverse/arvados.git",
+ download_url="https://github.com/arvados/arvados.git",
license='GNU Affero General Public License, version 3.0',
packages=['crunchstat_summary'],
include_package_data=True,
],
test_suite='tests',
tests_require=['pbr<1.7.0', 'mock>=1.0'],
- zip_safe=False
- )
+ zip_safe=False,
+)
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
)
var version = "dev"
"testing"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
. "gopkg.in/check.v1"
)
"os"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
)
var version = "dev"
"strings"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
)
var version = "dev"
"testing"
"time"
- "git.curoverse.com/arvados.git/sdk/go/arvadosclient"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
- "git.curoverse.com/arvados.git/sdk/go/keepclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadosclient"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/keepclient"
. "gopkg.in/check.v1"
)
// srcConfig
var srcConfig apiConfig
srcConfig.APIHost = os.Getenv("ARVADOS_API_HOST")
- srcConfig.APIToken = arvadostest.DataManagerToken
+ srcConfig.APIToken = arvadostest.SystemRootToken
srcConfig.APIHostInsecure = arvadosclient.StringBool(os.Getenv("ARVADOS_API_HOST_INSECURE"))
// dstConfig
var dstConfig apiConfig
dstConfig.APIHost = os.Getenv("ARVADOS_API_HOST")
- dstConfig.APIToken = arvadostest.DataManagerToken
+ dstConfig.APIToken = arvadostest.SystemRootToken
dstConfig.APIHostInsecure = arvadosclient.StringBool(os.Getenv("ARVADOS_API_HOST_INSECURE"))
if enforcePermissions {
c.Check(err, IsNil)
c.Assert(srcConfig.APIHost, Equals, os.Getenv("ARVADOS_API_HOST"))
- c.Assert(srcConfig.APIToken, Equals, arvadostest.DataManagerToken)
+ c.Assert(srcConfig.APIToken, Equals, arvadostest.SystemRootToken)
c.Assert(srcConfig.APIHostInsecure, Equals, arvadosclient.StringBool(os.Getenv("ARVADOS_API_HOST_INSECURE")))
c.Assert(srcConfig.ExternalClient, Equals, false)
c.Check(err, IsNil)
c.Assert(dstConfig.APIHost, Equals, os.Getenv("ARVADOS_API_HOST"))
- c.Assert(dstConfig.APIToken, Equals, arvadostest.DataManagerToken)
+ c.Assert(dstConfig.APIToken, Equals, arvadostest.SystemRootToken)
c.Assert(dstConfig.APIHostInsecure, Equals, arvadosclient.StringBool(os.Getenv("ARVADOS_API_HOST_INSECURE")))
c.Assert(dstConfig.ExternalClient, Equals, false)
func (s *ServerNotRequiredSuite) TestSetupKeepClient_NoBlobSignatureTTL(c *C) {
var srcConfig apiConfig
srcConfig.APIHost = os.Getenv("ARVADOS_API_HOST")
- srcConfig.APIToken = arvadostest.DataManagerToken
+ srcConfig.APIToken = arvadostest.SystemRootToken
srcConfig.APIHostInsecure = arvadosclient.StringBool(os.Getenv("ARVADOS_API_HOST_INSECURE"))
_, ttl, err := setupKeepClient(srcConfig, srcKeepServicesJSON, false, 0, 0)
c.Check(err, IsNil)
fileContent := "ARVADOS_API_HOST=" + os.Getenv("ARVADOS_API_HOST") + "\n"
- fileContent += "ARVADOS_API_TOKEN=" + arvadostest.DataManagerToken + "\n"
+ fileContent += "ARVADOS_API_TOKEN=" + arvadostest.SystemRootToken + "\n"
fileContent += "ARVADOS_API_HOST_INSECURE=" + os.Getenv("ARVADOS_API_HOST_INSECURE") + "\n"
fileContent += "ARVADOS_EXTERNAL_CLIENT=false\n"
fileContent += "ARVADOS_BLOB_SIGNING_KEY=abcdefg"
"os"
"strings"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
)
var version = "dev"
"strings"
"testing"
- "git.curoverse.com/arvados.git/sdk/go/arvados"
- "git.curoverse.com/arvados.git/sdk/go/arvadostest"
+ "git.arvados.org/arvados.git/sdk/go/arvados"
+ "git.arvados.org/arvados.git/sdk/go/arvadostest"
. "gopkg.in/check.v1"
)
--- /dev/null
+#!/usr/bin/env python
+#
+# Copyright (C) The Arvados Authors. All rights reserved.
+#
+# SPDX-License-Identifier: CC-BY-SA-3.0
+
+import argparse
+import copy
+import json
+import logging
+import os
+import sys
+
+import arvados
+import arvados.util
+
+logger = logging.getLogger('arvados.vocabulary_migrate')
+logger.setLevel(logging.INFO)
+
+class VocabularyError(Exception):
+ pass
+
+opts = argparse.ArgumentParser(add_help=False)
+opts.add_argument('--vocabulary-file', type=str, metavar='PATH', required=True,
+ help="""
+Use vocabulary definition file at PATH for migration decisions.
+""")
+opts.add_argument('--dry-run', action='store_true', default=False,
+ help="""
+Don't actually migrate properties, but only check if any collection/project
+should be migrated.
+""")
+opts.add_argument('--debug', action='store_true', default=False,
+ help="""
+Sets logging level to DEBUG.
+""")
+arg_parser = argparse.ArgumentParser(
+    description='Migrate collection and project properties to the new vocabulary format.',
+ parents=[opts])
+
+def parse_arguments(arguments):
+ args = arg_parser.parse_args(arguments)
+ if args.debug:
+ logger.setLevel(logging.DEBUG)
+ if not os.path.isfile(args.vocabulary_file):
+ arg_parser.error("{} doesn't exist or isn't a file.".format(args.vocabulary_file))
+ return args
+
+def _label_to_id_mappings(data, obj_name):
+ result = {}
+ for obj_id, obj_data in data.items():
+ for lbl in obj_data['labels']:
+ obj_lbl = lbl['label']
+ if obj_lbl not in result:
+ result[obj_lbl] = obj_id
+ else:
+                raise VocabularyError(
+                    '{} label "{}" for {} ID "{}" already seen at {} ID "{}".'.format(
+                        obj_name, obj_lbl, obj_name, obj_id, obj_name, result[obj_lbl]))
+ return result
+
+def key_labels_to_ids(vocab):
+ return _label_to_id_mappings(vocab['tags'], 'key')
+
+def value_labels_to_ids(vocab, key_id):
+ if key_id in vocab['tags'] and 'values' in vocab['tags'][key_id]:
+ return _label_to_id_mappings(vocab['tags'][key_id]['values'], 'value')
+ return {}
+
+def migrate_properties(properties, key_map, vocab):
+ result = {}
+ for k, v in properties.items():
+ key = key_map.get(k, k)
+ value = value_labels_to_ids(vocab, key).get(v, v)
+ result[key] = value
+ return result
+
+def main(arguments=None):
+ args = parse_arguments(arguments)
+ vocab = None
+ with open(args.vocabulary_file, 'r') as f:
+ vocab = json.load(f)
+ arv = arvados.api('v1')
+ if 'tags' not in vocab or vocab['tags'] == {}:
+ logger.warning('Empty vocabulary file, exiting.')
+ return 1
+ if not arv.users().current().execute()['is_admin']:
+ logger.error('Admin privileges required.')
+ return 1
+ key_label_to_id_map = key_labels_to_ids(vocab)
+ migrated_counter = 0
+
+ for key_label in key_label_to_id_map:
+ logger.debug('Querying objects with property key "{}"'.format(key_label))
+ for resource in [arv.collections(), arv.groups()]:
+ objs = arvados.util.list_all(
+ resource.list,
+ order=['created_at'],
+ select=['uuid', 'properties'],
+ filters=[['properties', 'exists', key_label]]
+ )
+ for o in objs:
+ props = copy.copy(o['properties'])
+ migrated_props = migrate_properties(props, key_label_to_id_map, vocab)
+ if not args.dry_run:
+ logger.debug('Migrating {}: {} -> {}'.format(o['uuid'], props, migrated_props))
+                    # Update via the same resource (collections or groups)
+                    # that the object was retrieved from.
+                    resource.update(uuid=o['uuid'], body={
+                        'properties': migrated_props
+                    }).execute()
+ else:
+ logger.info('Should migrate {}: {} -> {}'.format(o['uuid'], props, migrated_props))
+ migrated_counter += 1
+ if not args.dry_run and migrated_counter % 100 == 0:
+ logger.info('Migrating {} objects...'.format(migrated_counter))
+
+ if args.dry_run and migrated_counter == 0:
+ logger.info('Nothing to do.')
+ elif not args.dry_run:
+ logger.info('Done, total objects migrated: {}.'.format(migrated_counter))
+ return 0
+
+if __name__ == "__main__":
+ sys.exit(main())
\ No newline at end of file
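The migration script above maps human-readable property labels to vocabulary IDs, leaving unrecognized keys and values untouched. A self-contained sketch of that mapping, assuming a vocabulary shaped like the one the script expects (the `IDTAG...`/`IDVAL...` identifiers here are illustrative, not taken from a real vocabulary file):

```python
def label_to_id(vocab_section):
    """Invert {id: {'labels': [{'label': ...}, ...]}} into {label: id}."""
    out = {}
    for obj_id, data in vocab_section.items():
        for lbl in data['labels']:
            out[lbl['label']] = obj_id
    return out

def migrate(properties, vocab):
    """Rewrite property keys and values from labels to vocabulary IDs,
    passing through anything the vocabulary doesn't define."""
    key_map = label_to_id(vocab['tags'])
    result = {}
    for k, v in properties.items():
        key = key_map.get(k, k)
        values = vocab['tags'].get(key, {}).get('values', {})
        result[key] = label_to_id(values).get(v, v)
    return result

VOCAB = {'tags': {'IDTAGANIMALS': {
    'labels': [{'label': 'Animal'}],
    'values': {'IDVALANIMAL1': {'labels': [{'label': 'Human'}]}},
}}}
migrate({'Animal': 'Human'}, VOCAB)  # -> {'IDTAGANIMALS': 'IDVALANIMAL1'}
```

Because unmapped keys and values pass through unchanged, re-running the migration on already-migrated objects is harmless, which is why the script can safely iterate over every object that has a matching property key.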
+++ /dev/null
-*
-!vendor.json
-!.gitignore
+++ /dev/null
-{
- "comment": "",
- "ignore": "test appengine",
- "package": [
- {
- "checksumSHA1": "jfYWZyRWLMfG0J5K7G2K8a9AKfs=",
- "origin": "github.com/curoverse/goamz/aws",
- "path": "github.com/AdRoll/goamz/aws",
- "revision": "1bba09f407ef1d02c90bc37eff7e91e2231fa587",
- "revisionTime": "2019-09-05T14:15:25Z"
- },
- {
- "checksumSHA1": "lqoARtBgwnvhEhLyIjR3GLnR5/c=",
- "origin": "github.com/curoverse/goamz/s3",
- "path": "github.com/AdRoll/goamz/s3",
- "revision": "1bba09f407ef1d02c90bc37eff7e91e2231fa587",
- "revisionTime": "2019-09-05T14:15:25Z"
- },
- {
- "checksumSHA1": "tvxbsTkdjB0C/uxEglqD6JfVnMg=",
- "origin": "github.com/curoverse/goamz/s3/s3test",
- "path": "github.com/AdRoll/goamz/s3/s3test",
- "revision": "1bba09f407ef1d02c90bc37eff7e91e2231fa587",
- "revisionTime": "2019-09-05T14:15:25Z"
- },
- {
- "checksumSHA1": "KF4DsRUpZ+h+qRQ/umRAQZfVvw0=",
- "path": "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-06-01/compute",
- "revision": "4e8cbbfb1aeab140cd0fa97fd16b64ee18c3ca6a",
- "revisionTime": "2018-07-27T22:05:59Z"
- },
- {
- "checksumSHA1": "IZNzp1cYx+xYHd4gzosKpG6Jr/k=",
- "path": "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-06-01/network",
- "revision": "4e8cbbfb1aeab140cd0fa97fd16b64ee18c3ca6a",
- "revisionTime": "2018-07-27T22:05:59Z"
- },
- {
- "checksumSHA1": "W4c2uTDJlwhfryWg9esshmJANo0=",
- "path": "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2018-02-01/storage",
- "revision": "4e8cbbfb1aeab140cd0fa97fd16b64ee18c3ca6a",
- "revisionTime": "2018-07-27T22:05:59Z"
- },
- {
- "checksumSHA1": "xHZe/h/tyrqmS9qiR03bLfRv5FI=",
- "path": "github.com/Azure/azure-sdk-for-go/storage",
- "revision": "f8eeb65a1a1f969696b49aada9d24073f2c2acd1",
- "revisionTime": "2018-02-15T19:19:13Z"
- },
- {
- "checksumSHA1": "PfyfOXsPbGEWmdh54cguqzdwloY=",
- "path": "github.com/Azure/azure-sdk-for-go/version",
- "revision": "471256ff7c6c93b96131845cef5309d20edd313d",
- "revisionTime": "2018-02-14T01:17:07Z"
- },
- {
- "checksumSHA1": "1Y2+bSzYrdPHQqRjR1OrBMHAvxY=",
- "path": "github.com/Azure/go-autorest/autorest",
- "revision": "39013ecb48eaf6ced3f4e3e1d95515140ce6b3cf",
- "revisionTime": "2018-08-09T20:19:59Z"
- },
- {
- "checksumSHA1": "GxL0HHpZDj2milPhR3SPV6MWLPc=",
- "path": "github.com/Azure/go-autorest/autorest/adal",
- "revision": "39013ecb48eaf6ced3f4e3e1d95515140ce6b3cf",
- "revisionTime": "2018-08-09T20:19:59Z"
- },
- {
- "checksumSHA1": "ZNgwJOdHZmm4k/HJIbT1L5giO6M=",
- "path": "github.com/Azure/go-autorest/autorest/azure",
- "revision": "39013ecb48eaf6ced3f4e3e1d95515140ce6b3cf",
- "revisionTime": "2018-08-09T20:19:59Z"
- },
- {
- "checksumSHA1": "6i7kwcXGTn55WqfubQs21swgr34=",
- "path": "github.com/Azure/go-autorest/autorest/azure/auth",
- "revision": "39013ecb48eaf6ced3f4e3e1d95515140ce6b3cf",
- "revisionTime": "2018-08-09T20:19:59Z"
- },
- {
- "checksumSHA1": "9nXCi9qQsYjxCeajJKWttxgEt0I=",
- "path": "github.com/Azure/go-autorest/autorest/date",
- "revision": "39013ecb48eaf6ced3f4e3e1d95515140ce6b3cf",
- "revisionTime": "2018-08-09T20:19:59Z"
- },
- {
- "checksumSHA1": "SbBb2GcJNm5GjuPKGL2777QywR4=",
- "path": "github.com/Azure/go-autorest/autorest/to",
- "revision": "39013ecb48eaf6ced3f4e3e1d95515140ce6b3cf",
- "revisionTime": "2018-08-09T20:19:59Z"
- },
- {
- "checksumSHA1": "HjdLfAF3oA2In8F3FKh/Y+BPyXk=",
- "path": "github.com/Azure/go-autorest/autorest/validation",
- "revision": "39013ecb48eaf6ced3f4e3e1d95515140ce6b3cf",
- "revisionTime": "2018-08-09T20:19:59Z"
- },
- {
- "checksumSHA1": "b2lrPJRxf+MEfmMafN40wepi5WM=",
- "path": "github.com/Azure/go-autorest/logger",
- "revision": "39013ecb48eaf6ced3f4e3e1d95515140ce6b3cf",
- "revisionTime": "2018-08-09T20:19:59Z"
- },
- {
- "checksumSHA1": "UtAIMAsMWLBJ6yO1qZ0soFnb0sI=",
- "path": "github.com/Azure/go-autorest/version",
- "revision": "39013ecb48eaf6ced3f4e3e1d95515140ce6b3cf",
- "revisionTime": "2018-08-09T20:19:59Z"
- },
- {
- "checksumSHA1": "o/3cn04KAiwC7NqNVvmfVTD+hgA=",
- "path": "github.com/Microsoft/go-winio",
- "revision": "78439966b38d69bf38227fbf57ac8a6fee70f69a",
- "revisionTime": "2017-08-04T20:09:54Z"
- },
- {
- "checksumSHA1": "k59wLJfyqGB04o238WhKSAzSz9M=",
- "path": "github.com/aws/aws-sdk-go/aws",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "Y9W+4GimK4Fuxq+vyIskVYFRnX4=",
- "path": "github.com/aws/aws-sdk-go/aws/awserr",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "PEDqMAEPxlh9Y8/dIbHlE6A7LEA=",
- "path": "github.com/aws/aws-sdk-go/aws/awsutil",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "KpW2B6W3J1yB/7QJWjjtsKz1Xbc=",
- "path": "github.com/aws/aws-sdk-go/aws/client",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "uEJU4I6dTKaraQKvrljlYKUZwoc=",
- "path": "github.com/aws/aws-sdk-go/aws/client/metadata",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "GvmthjOyNZGOKmXK4XVrbT5+K9I=",
- "path": "github.com/aws/aws-sdk-go/aws/corehandlers",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "QHizt8XKUpuslIZv6EH6ENiGpGA=",
- "path": "github.com/aws/aws-sdk-go/aws/credentials",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "JTilCBYWVAfhbKSnrxCNhE8IFns=",
- "path": "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "1pENtl2K9hG7qoB7R6J7dAHa82g=",
- "path": "github.com/aws/aws-sdk-go/aws/credentials/endpointcreds",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "sPtOSV32SZr2xN7vZlF4FXo43/o=",
- "path": "github.com/aws/aws-sdk-go/aws/credentials/processcreds",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "JEYqmF83O5n5bHkupAzA6STm0no=",
- "path": "github.com/aws/aws-sdk-go/aws/credentials/stscreds",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "3pJft1H34eTYK6s6p3ijj3mGtc4=",
- "path": "github.com/aws/aws-sdk-go/aws/csm",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "7AmyyJXVkMdmy8dphC3Nalx5XkI=",
- "path": "github.com/aws/aws-sdk-go/aws/defaults",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "47hnR1KYqZDBT3xmHuS7cNtqHP8=",
- "path": "github.com/aws/aws-sdk-go/aws/ec2metadata",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "pcWH1AkR7sUs84cN/XTD9Jexf2Q=",
- "path": "github.com/aws/aws-sdk-go/aws/endpoints",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "nhavXPspOdqm5iAvIGgmZmXk4aI=",
- "path": "github.com/aws/aws-sdk-go/aws/request",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "w4tSwNFNJ4cGgjYEdAgsDnikqec=",
- "path": "github.com/aws/aws-sdk-go/aws/session",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "C9uAu9gsLIpJGIX6/5P+n3s9wQo=",
- "path": "github.com/aws/aws-sdk-go/aws/signer/v4",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "Fe2TPw9X2UvlkRaOS7LPJlpkuTo=",
- "path": "github.com/aws/aws-sdk-go/internal/ini",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "wjxQlU1PYxrDRFoL1Vek8Wch7jk=",
- "path": "github.com/aws/aws-sdk-go/internal/sdkio",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "MYLldFRnsZh21TfCkgkXCT3maPU=",
- "path": "github.com/aws/aws-sdk-go/internal/sdkrand",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "tQVg7Sz2zv+KkhbiXxPH0mh9spg=",
- "path": "github.com/aws/aws-sdk-go/internal/sdkuri",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "sXiZ5x6j2FvlIO57pboVnRTm7QA=",
- "path": "github.com/aws/aws-sdk-go/internal/shareddefaults",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "NtXXi501Kou3laVAsJfcbKSkNI8=",
- "path": "github.com/aws/aws-sdk-go/private/protocol",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "0cZnOaE1EcFUuiu4bdHV2k7slQg=",
- "path": "github.com/aws/aws-sdk-go/private/protocol/ec2query",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "lj56XJFI2OSp+hEOrFZ+eiEi/yM=",
- "path": "github.com/aws/aws-sdk-go/private/protocol/query",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "+O6A945eTP9plLpkEMZB0lwBAcg=",
- "path": "github.com/aws/aws-sdk-go/private/protocol/query/queryutil",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "RDOk9se2S83/HAYmWnpoW3bgQfQ=",
- "path": "github.com/aws/aws-sdk-go/private/protocol/rest",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "B8unEuOlpQfnig4cMyZtXLZVVOs=",
- "path": "github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "uvEbLM/ZodhtEUVTEoC+Lbc9PHg=",
- "path": "github.com/aws/aws-sdk-go/service/ec2",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "HMY+b4YBLVvWoKm5vB+H7tpKiTI=",
- "path": "github.com/aws/aws-sdk-go/service/sts",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "spyv5/YFBjYyZLZa1U2LBfDR8PM=",
- "path": "github.com/beorn7/perks/quantile",
- "revision": "4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9",
- "revisionTime": "2016-08-04T10:47:26Z"
- },
- {
- "checksumSHA1": "bNT5FFLDUXSamYK3jGHSwsTJqqo=",
- "path": "github.com/coreos/go-oidc",
- "revision": "2be1c5b8a260760503f66dc0996e102b683b3ac3",
- "revisionTime": "2019-08-15T17:57:29Z"
- },
- {
- "checksumSHA1": "+Zz+leZHHC9C0rx8DoRuffSRPso=",
- "path": "github.com/coreos/go-systemd/daemon",
- "revision": "cc4f39464dc797b91c8025330de585294c2a6950",
- "revisionTime": "2018-01-08T08:51:32Z"
- },
- {
- "checksumSHA1": "+TKtBzv23ywvmmqRiGEjUba4YmI=",
- "path": "github.com/dgrijalva/jwt-go",
- "revision": "dbeaa9332f19a944acb5736b4456cfcc02140e29",
- "revisionTime": "2017-10-19T21:57:19Z"
- },
- {
- "checksumSHA1": "7EjxkAUND/QY/sN+2fNKJ52v1Rc=",
- "path": "github.com/dimchansky/utfbom",
- "revision": "5448fe645cb1964ba70ac8f9f2ffe975e61a536c",
- "revisionTime": "2018-07-13T13:37:17Z"
- },
- {
- "checksumSHA1": "Gj+xR1VgFKKmFXYOJMnAczC3Znk=",
- "path": "github.com/docker/distribution/digestset",
- "revision": "277ed486c948042cab91ad367c379524f3b25e18",
- "revisionTime": "2018-01-05T23:27:52Z"
- },
- {
- "checksumSHA1": "2Fe4D6PGaVE2he4fUeenLmhC1lE=",
- "path": "github.com/docker/distribution/reference",
- "revision": "277ed486c948042cab91ad367c379524f3b25e18",
- "revisionTime": "2018-01-05T23:27:52Z"
- },
- {
- "checksumSHA1": "QKCQfrTv4wTL0KBDMHpWM/jHl9I=",
- "path": "github.com/docker/docker/api",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "b91BIyJbqy05pXpEh1eGCJkdjYc=",
- "path": "github.com/docker/docker/api/types",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "jVJDbe0IcyjoKc2xbohwzQr+FF0=",
- "path": "github.com/docker/docker/api/types/blkiodev",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "DuOqFTQ95vKSuSE/Va88yRN/wb8=",
- "path": "github.com/docker/docker/api/types/container",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "XDP7i6sMYGnUKeFzgt+mFBJwjjw=",
- "path": "github.com/docker/docker/api/types/events",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "S4SWOa0XduRd8ene8Alwih2Nwcw=",
- "path": "github.com/docker/docker/api/types/filters",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "KuC0C6jo1t7tlvIqb7G3u1FIaZU=",
- "path": "github.com/docker/docker/api/types/image",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "uJeLBKpHZXP+bWhXP4HhpyUTWYI=",
- "path": "github.com/docker/docker/api/types/mount",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "Gskp+nvbVe8Gk1xPLHylZvNmqTg=",
- "path": "github.com/docker/docker/api/types/network",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "r2vWq7Uc3ExKzMqYgH0b4AKjLKY=",
- "path": "github.com/docker/docker/api/types/registry",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "VTxWyFud/RedrpllGdQonVtGM/A=",
- "path": "github.com/docker/docker/api/types/strslice",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "Q0U3queMsCw+rPPztXnRHwAxQEc=",
- "path": "github.com/docker/docker/api/types/swarm",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "kVfD1e4Gak7k6tqDX5nrgQ57EYY=",
- "path": "github.com/docker/docker/api/types/swarm/runtime",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "77axKFOjRx1nGrzIggGXfTxUYVQ=",
- "path": "github.com/docker/docker/api/types/time",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "uDPQ3nHsrvGQc9tg/J9OSC4N5dQ=",
- "path": "github.com/docker/docker/api/types/versions",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "IBJy2zPEnYmcFJ3lM1eiRWnCxTA=",
- "path": "github.com/docker/docker/api/types/volume",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "zQvx3WYTAwbPZEaVPjAsrmW7V00=",
- "path": "github.com/docker/docker/client",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "JbiWTzH699Sqz25XmDlsARpMN9w=",
- "path": "github.com/docker/go-connections/nat",
- "revision": "3ede32e2033de7505e6500d6c868c2b9ed9f169d",
- "revisionTime": "2017-06-23T20:36:43Z"
- },
- {
- "checksumSHA1": "jUfDG3VQsA2UZHvvIXncgiddpYA=",
- "path": "github.com/docker/go-connections/sockets",
- "revision": "3ede32e2033de7505e6500d6c868c2b9ed9f169d",
- "revisionTime": "2017-06-23T20:36:43Z"
- },
- {
- "checksumSHA1": "c6lDGNwTm5mYq18IHP+lqYpk8xU=",
- "path": "github.com/docker/go-connections/tlsconfig",
- "revision": "3ede32e2033de7505e6500d6c868c2b9ed9f169d",
- "revisionTime": "2017-06-23T20:36:43Z"
- },
- {
- "checksumSHA1": "kP4hqQGUNNXhgYxgB4AMWfNvmnA=",
- "path": "github.com/docker/go-units",
- "revision": "d59758554a3d3911fa25c0269de1ebe2f1912c39",
- "revisionTime": "2017-12-21T20:03:56Z"
- },
- {
- "checksumSHA1": "ImX1uv6O09ggFeBPUJJ2nu7MPSA=",
- "path": "github.com/ghodss/yaml",
- "revision": "0ca9ea5df5451ffdf184b4428c902747c2c11cd7",
- "revisionTime": "2017-03-27T23:54:44Z"
- },
- {
- "checksumSHA1": "8UEp6v0Dczw/SlasE0DivB0mAHA=",
- "path": "github.com/gogo/protobuf/jsonpb",
- "revision": "30cf7ac33676b5786e78c746683f0d4cd64fa75b",
- "revisionTime": "2018-05-09T16:24:41Z"
- },
- {
- "checksumSHA1": "wn2shNJMwRZpvuvkf1s7h0wvqHI=",
- "path": "github.com/gogo/protobuf/proto",
- "revision": "160de10b2537169b5ae3e7e221d28269ef40d311",
- "revisionTime": "2018-01-04T10:21:28Z"
- },
- {
- "checksumSHA1": "HPVQZu059/Rfw2bAWM538bVTcUc=",
- "path": "github.com/gogo/protobuf/sortkeys",
- "revision": "30cf7ac33676b5786e78c746683f0d4cd64fa75b",
- "revisionTime": "2018-05-09T16:24:41Z"
- },
- {
- "checksumSHA1": "SkxU1+wPGUJyLyQENrZtr2/OUBs=",
- "path": "github.com/gogo/protobuf/types",
- "revision": "30cf7ac33676b5786e78c746683f0d4cd64fa75b",
- "revisionTime": "2018-05-09T16:24:41Z"
- },
- {
- "checksumSHA1": "yqF125xVSkmfLpIVGrLlfE05IUk=",
- "path": "github.com/golang/protobuf/proto",
- "revision": "1e59b77b52bf8e4b449a57e6f79f21226d571845",
- "revisionTime": "2017-11-13T18:07:20Z"
- },
- {
- "checksumSHA1": "iIUYZyoanCQQTUaWsu8b+iOSPt4=",
- "origin": "github.com/docker/docker/vendor/github.com/gorilla/context",
- "path": "github.com/gorilla/context",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "fSs1WcPh2F5JJtxqYC+Jt8yCkYc=",
- "path": "github.com/gorilla/mux",
- "revision": "5bbbb5b2b5729b132181cc7f4aa3b3c973e9a0ed",
- "revisionTime": "2018-01-07T15:57:08Z"
- },
- {
- "checksumSHA1": "d9PxF1XQGLMJZRct2R8qVM/eYlE=",
- "path": "github.com/hashicorp/golang-lru",
- "revision": "0a025b7e63adc15a622f29b0b2c4c3848243bbf6",
- "revisionTime": "2016-08-13T22:13:03Z"
- },
- {
- "checksumSHA1": "9hffs0bAIU6CquiRhKQdzjHnKt0=",
- "path": "github.com/hashicorp/golang-lru/simplelru",
- "revision": "0a025b7e63adc15a622f29b0b2c4c3848243bbf6",
- "revisionTime": "2016-08-13T22:13:03Z"
- },
- {
- "checksumSHA1": "x7IEwuVYTztOJItr3jtePGyFDWA=",
- "path": "github.com/imdario/mergo",
- "revision": "5ef87b449ca75fbed1bc3765b749ca8f73f1fa69",
- "revisionTime": "2019-04-15T13:31:43Z"
- },
- {
- "checksumSHA1": "iCsyavJDnXC9OY//p52IWJWy7PY=",
- "path": "github.com/jbenet/go-context/io",
- "revision": "d14ea06fba99483203c19d92cfcd13ebe73135f4",
- "revisionTime": "2015-07-11T00:45:18Z"
- },
- {
- "checksumSHA1": "khL6oKjx81rAZKW+36050b7f5As=",
- "path": "github.com/jmcvetta/randutil",
- "revision": "2bb1b664bcff821e02b2a0644cd29c7e824d54f8",
- "revisionTime": "2015-08-17T12:26:01Z"
- },
- {
- "checksumSHA1": "blwbl9vPvRLtL5QlZgfpLvsFiZ4=",
- "origin": "github.com/aws/aws-sdk-go/vendor/github.com/jmespath/go-jmespath",
- "path": "github.com/jmespath/go-jmespath",
- "revision": "d496c5aab9b8ba36936e457a488e971b4f9fd891",
- "revisionTime": "2019-03-06T20:18:39Z"
- },
- {
- "checksumSHA1": "X7g98YfLr+zM7aN76AZvAfpZyfk=",
- "path": "github.com/julienschmidt/httprouter",
- "revision": "adbc77eec0d91467376ca515bc3a14b8434d0f18",
- "revisionTime": "2018-04-11T15:45:01Z"
- },
- {
- "checksumSHA1": "oX6jFQD74oOApvDIhOzW2dXpg5Q=",
- "path": "github.com/kevinburke/ssh_config",
- "revision": "802051befeb51da415c46972b5caf36e7c33c53d",
- "revisionTime": "2017-10-13T21:14:58Z"
- },
- {
- "checksumSHA1": "IfZcD4U1dtllJKlPNeD2aU4Jn98=",
- "path": "github.com/lib/pq",
- "revision": "83612a56d3dd153a94a629cd64925371c9adad78",
- "revisionTime": "2017-11-26T05:04:59Z"
- },
- {
- "checksumSHA1": "AU3fA8Sm33Vj9PBoRPSeYfxLRuE=",
- "path": "github.com/lib/pq/oid",
- "revision": "83612a56d3dd153a94a629cd64925371c9adad78",
- "revisionTime": "2017-11-26T05:04:59Z"
- },
- {
- "checksumSHA1": "T9E+5mKBQ/BX4wlNxgaPfetxdeI=",
- "path": "github.com/marstr/guid",
- "revision": "8bdf7d1a087ccc975cf37dd6507da50698fd19ca",
- "revisionTime": "2017-04-27T23:51:15Z"
- },
- {
- "checksumSHA1": "bKMZjd2wPw13VwoE7mBeSv5djFA=",
- "path": "github.com/matttproud/golang_protobuf_extensions/pbutil",
- "revision": "c12348ce28de40eed0136aa2b644d0ee0650e56c",
- "revisionTime": "2016-04-24T11:30:07Z"
- },
- {
- "checksumSHA1": "V/quM7+em2ByJbWBLOsEwnY3j/Q=",
- "path": "github.com/mitchellh/go-homedir",
- "revision": "b8bc1bf767474819792c23f32d8286a45736f1c6",
- "revisionTime": "2016-12-03T19:45:07Z"
- },
- {
- "checksumSHA1": "OFNit1Qx2DdWhotfREKodDNUwCM=",
- "path": "github.com/opencontainers/go-digest",
- "revision": "279bed98673dd5bef374d3b6e4b09e2af76183bf",
- "revisionTime": "2017-06-07T19:53:33Z"
- },
- {
- "checksumSHA1": "ZGlIwSRjdLYCUII7JLE++N4w7Xc=",
- "path": "github.com/opencontainers/image-spec/specs-go",
- "revision": "577479e4dc273d3779f00c223c7e0dba4cd6b8b0",
- "revisionTime": "2017-11-25T02:40:18Z"
- },
- {
- "checksumSHA1": "jdbXRRzeu0njLE9/nCEZG+Yg/Jk=",
- "path": "github.com/opencontainers/image-spec/specs-go/v1",
- "revision": "577479e4dc273d3779f00c223c7e0dba4cd6b8b0",
- "revisionTime": "2017-11-25T02:40:18Z"
- },
- {
- "checksumSHA1": "F1IYMLBLAZaTOWnmXsgaxTGvrWI=",
- "path": "github.com/pelletier/go-buffruneio",
- "revision": "c37440a7cf42ac63b919c752ca73a85067e05992",
- "revisionTime": "2017-02-27T22:03:11Z"
- },
- {
- "checksumSHA1": "xCv4GBFyw07vZkVtKF/XrUnkHRk=",
- "path": "github.com/pkg/errors",
- "revision": "e881fd58d78e04cf6d0de1217f8707c8cc2249bc",
- "revisionTime": "2017-12-16T07:03:16Z"
- },
- {
- "checksumSHA1": "KxkAlLxQkuSGHH46Dxu6wpAybO4=",
- "path": "github.com/pquerna/cachecontrol",
- "revision": "1555304b9b35fdd2b425bccf1a5613677705e7d0",
- "revisionTime": "2018-05-17T16:36:45Z"
- },
- {
- "checksumSHA1": "wwaht1P9i8vQu6DqNvMEy24IMgY=",
- "path": "github.com/pquerna/cachecontrol/cacheobject",
- "revision": "1555304b9b35fdd2b425bccf1a5613677705e7d0",
- "revisionTime": "2018-05-17T16:36:45Z"
- },
- {
- "checksumSHA1": "Ajt29IHVbX99PUvzn8Gc/lMCXBY=",
- "path": "github.com/prometheus/client_golang/prometheus",
- "revision": "9bb6ab929dcbe1c8393cd9ef70387cb69811bd1c",
- "revisionTime": "2018-02-03T14:28:15Z"
- },
- {
- "checksumSHA1": "c3Ui7nnLiJ4CAGWZ8dGuEgqHd8s=",
- "path": "github.com/prometheus/client_golang/prometheus/promhttp",
- "revision": "9bb6ab929dcbe1c8393cd9ef70387cb69811bd1c",
- "revisionTime": "2018-02-03T14:28:15Z"
- },
- {
- "checksumSHA1": "DvwvOlPNAgRntBzt3b3OSRMS2N4=",
- "path": "github.com/prometheus/client_model/go",
- "revision": "99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c",
- "revisionTime": "2017-11-17T10:05:41Z"
- },
- {
- "checksumSHA1": "xfnn0THnqNwjwimeTClsxahYrIo=",
- "path": "github.com/prometheus/common/expfmt",
- "revision": "89604d197083d4781071d3c65855d24ecfb0a563",
- "revisionTime": "2018-01-10T21:49:58Z"
- },
- {
- "checksumSHA1": "GWlM3d2vPYyNATtTFgftS10/A9w=",
- "path": "github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg",
- "revision": "89604d197083d4781071d3c65855d24ecfb0a563",
- "revisionTime": "2018-01-10T21:49:58Z"
- },
- {
- "checksumSHA1": "YU+/K48IMawQnToO4ETE6a+hhj4=",
- "path": "github.com/prometheus/common/model",
- "revision": "89604d197083d4781071d3c65855d24ecfb0a563",
- "revisionTime": "2018-01-10T21:49:58Z"
- },
- {
- "checksumSHA1": "lolK0h7LSVERIX8zLyVQ/+7wEyA=",
- "path": "github.com/prometheus/procfs",
- "revision": "cb4147076ac75738c9a7d279075a253c0cc5acbd",
- "revisionTime": "2018-01-25T13:30:57Z"
- },
- {
- "checksumSHA1": "lv9rIcjbVEGo8AT1UCUZXhXrfQc=",
- "path": "github.com/prometheus/procfs/internal/util",
- "revision": "cb4147076ac75738c9a7d279075a253c0cc5acbd",
- "revisionTime": "2018-01-25T13:30:57Z"
- },
- {
- "checksumSHA1": "BXJH5h2ri8SU5qC6kkDvTIGCky4=",
- "path": "github.com/prometheus/procfs/nfs",
- "revision": "cb4147076ac75738c9a7d279075a253c0cc5acbd",
- "revisionTime": "2018-01-25T13:30:57Z"
- },
- {
- "checksumSHA1": "yItvTQLUVqm/ArLEbvEhqG0T5a0=",
- "path": "github.com/prometheus/procfs/xfs",
- "revision": "cb4147076ac75738c9a7d279075a253c0cc5acbd",
- "revisionTime": "2018-01-25T13:30:57Z"
- },
- {
- "checksumSHA1": "eDQ6f1EsNf+frcRO/9XukSEchm8=",
- "path": "github.com/satori/go.uuid",
- "revision": "36e9d2ebbde5e3f13ab2e25625fd453271d6522e",
- "revisionTime": "2018-01-03T17:44:51Z"
- },
- {
- "checksumSHA1": "UwtyqB7CaUWPlw0DVJQvw0IFQZs=",
- "path": "github.com/sergi/go-diff/diffmatchpatch",
- "revision": "1744e2970ca51c86172c8190fadad617561ed6e7",
- "revisionTime": "2017-11-10T11:01:46Z"
- },
- {
- "checksumSHA1": "umeXHK5iK/3th4PtrTkZllezgWo=",
- "path": "github.com/sirupsen/logrus",
- "revision": "d682213848ed68c0a260ca37d6dd5ace8423f5ba",
- "revisionTime": "2017-12-05T20:32:29Z"
- },
- {
- "checksumSHA1": "8QeSG127zQqbA+YfkO1WkKx/iUI=",
- "path": "github.com/src-d/gcfg",
- "revision": "f187355171c936ac84a82793659ebb4936bc1c23",
- "revisionTime": "2016-10-26T10:01:55Z"
- },
- {
- "checksumSHA1": "yf5NBT8BofPfGYCXoLnj7BIA1wo=",
- "path": "github.com/src-d/gcfg/scanner",
- "revision": "f187355171c936ac84a82793659ebb4936bc1c23",
- "revisionTime": "2016-10-26T10:01:55Z"
- },
- {
- "checksumSHA1": "C5Z8YVyNTuvupM9AUr9KbPlps4Q=",
- "path": "github.com/src-d/gcfg/token",
- "revision": "f187355171c936ac84a82793659ebb4936bc1c23",
- "revisionTime": "2016-10-26T10:01:55Z"
- },
- {
- "checksumSHA1": "mDkN3UpR7auuFbwUuIwExz4DZgY=",
- "path": "github.com/src-d/gcfg/types",
- "revision": "f187355171c936ac84a82793659ebb4936bc1c23",
- "revisionTime": "2016-10-26T10:01:55Z"
- },
- {
- "checksumSHA1": "iHiMTBffQvWYlOLu3130JXuQpgQ=",
- "path": "github.com/xanzy/ssh-agent",
- "revision": "ba9c9e33906f58169366275e3450db66139a31a9",
- "revisionTime": "2015-12-15T15:34:51Z"
- },
- {
- "checksumSHA1": "TT1rac6kpQp2vz24m5yDGUNQ/QQ=",
- "path": "golang.org/x/crypto/cast5",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "IQkUIOnvlf0tYloFx9mLaXSvXWQ=",
- "path": "golang.org/x/crypto/curve25519",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "1hwn8cgg4EVXhCpJIqmMbzqnUo0=",
- "path": "golang.org/x/crypto/ed25519",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "LXFcVx8I587SnWmKycSDEq9yvK8=",
- "path": "golang.org/x/crypto/ed25519/internal/edwards25519",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "ooU7jaiYSUKlg5BVllI8lsq+5Qk=",
- "path": "golang.org/x/crypto/openpgp",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "olOKkhrdkYQHZ0lf1orrFQPQrv4=",
- "path": "golang.org/x/crypto/openpgp/armor",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "eo/KtdjieJQXH7Qy+faXFcF70ME=",
- "path": "golang.org/x/crypto/openpgp/elgamal",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "rlxVSaGgqdAgwblsErxTxIfuGfg=",
- "path": "golang.org/x/crypto/openpgp/errors",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "Pq88+Dgh04UdXWZN6P+bLgYnbRc=",
- "path": "golang.org/x/crypto/openpgp/packet",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "s2qT4UwvzBSkzXuiuMkowif1Olw=",
- "path": "golang.org/x/crypto/openpgp/s2k",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "1MGpGDQqnUoRpv7VEcQrXOBydXE=",
- "path": "golang.org/x/crypto/pbkdf2",
- "revision": "ae8bce0030810cf999bb2b9868ae5c7c58e6343b",
- "revisionTime": "2018-04-30T17:54:52Z"
- },
- {
- "checksumSHA1": "PJY7uCr3UnX4/Mf/RoWnbieSZ8o=",
- "path": "golang.org/x/crypto/pkcs12",
- "revision": "614d502a4dac94afa3a6ce146bd1736da82514c6",
- "revisionTime": "2018-07-28T08:01:47Z"
- },
- {
- "checksumSHA1": "p0GC51McIdA7JygoP223twJ1s0E=",
- "path": "golang.org/x/crypto/pkcs12/internal/rc2",
- "revision": "614d502a4dac94afa3a6ce146bd1736da82514c6",
- "revisionTime": "2018-07-28T08:01:47Z"
- },
- {
- "checksumSHA1": "NHjGg73p5iGZ+7tflJ4cVABNmKE=",
- "path": "golang.org/x/crypto/ssh",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "NMRX0onGReaL9IfLr0XQ3kl5Id0=",
- "path": "golang.org/x/crypto/ssh/agent",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "zBHtHvMj+MXa1qa4aglBt46uUck=",
- "path": "golang.org/x/crypto/ssh/knownhosts",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "X1NTlfcau2XcV6WtAHF6b/DECOA=",
- "path": "golang.org/x/crypto/ssh/terminal",
- "revision": "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8",
- "revisionTime": "2017-11-25T19:00:56Z"
- },
- {
- "checksumSHA1": "Y+HGqEkYM15ir+J93MEaHdyFy0c=",
- "origin": "github.com/docker/docker/vendor/golang.org/x/net/context",
- "path": "golang.org/x/net/context",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "WHc3uByvGaMcnSoI21fhzYgbOgg=",
- "path": "golang.org/x/net/context/ctxhttp",
- "revision": "434ec0c7fe3742c984919a691b2018a6e9694425",
- "revisionTime": "2017-09-25T09:26:47Z"
- },
- {
- "checksumSHA1": "r9l4r3H6FOLQ0c2JaoXpopFjpnw=",
- "path": "golang.org/x/net/proxy",
- "revision": "434ec0c7fe3742c984919a691b2018a6e9694425",
- "revisionTime": "2017-09-25T09:26:47Z"
- },
- {
- "checksumSHA1": "TBlnCuZUOzJHLu5DNY7XEj8TvbU=",
- "path": "golang.org/x/net/webdav",
- "revision": "434ec0c7fe3742c984919a691b2018a6e9694425",
- "revisionTime": "2017-09-25T09:26:47Z"
- },
- {
- "checksumSHA1": "XgtZlzd39qIkBHs6XYrq9dhTCog=",
- "path": "golang.org/x/net/webdav/internal/xml",
- "revision": "434ec0c7fe3742c984919a691b2018a6e9694425",
- "revisionTime": "2017-09-25T09:26:47Z"
- },
- {
- "checksumSHA1": "7EZyXN0EmZLgGxZxK01IJua4c8o=",
- "path": "golang.org/x/net/websocket",
- "revision": "434ec0c7fe3742c984919a691b2018a6e9694425",
- "revisionTime": "2017-09-25T09:26:47Z"
- },
- {
- "checksumSHA1": "+33kONpAOtjMyyw0uD4AygLvIXg=",
- "path": "golang.org/x/oauth2",
- "revision": "ec22f46f877b4505e0117eeaab541714644fdd28",
- "revisionTime": "2018-05-28T20:23:04Z"
- },
- {
- "checksumSHA1": "fddd1btmbXxnlMKHUZewlVlSaEQ=",
- "path": "golang.org/x/oauth2/internal",
- "revision": "ec22f46f877b4505e0117eeaab541714644fdd28",
- "revisionTime": "2018-05-28T20:23:04Z"
- },
- {
- "checksumSHA1": "znPq37/LZ4pJh7B4Lbu0ZuoMhNk=",
- "origin": "github.com/docker/docker/vendor/golang.org/x/sys/unix",
- "path": "golang.org/x/sys/unix",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "8BcMOi8XTSigDtV2npDc8vMrS60=",
- "origin": "github.com/docker/docker/vendor/golang.org/x/sys/windows",
- "path": "golang.org/x/sys/windows",
- "revision": "94b8a116fbf1cd90e68d8f5361b520d326a66f9b",
- "revisionTime": "2018-01-09T01:38:17Z"
- },
- {
- "checksumSHA1": "ziMb9+ANGRJSSIuxYdRbA+cDRBQ=",
- "path": "golang.org/x/text/transform",
- "revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
- "revisionTime": "2017-12-24T20:31:28Z"
- },
- {
- "checksumSHA1": "BCNYmf4Ek93G4lk5x3ucNi/lTwA=",
- "path": "golang.org/x/text/unicode/norm",
- "revision": "e19ae1496984b1c655b8044a65c0300a3c878dd3",
- "revisionTime": "2017-12-24T20:31:28Z"
- },
- {
- "checksumSHA1": "CEFTYXtWmgSh+3Ik1NmDaJcz4E0=",
- "path": "gopkg.in/check.v1",
- "revision": "20d25e2804050c1cd24a7eea1e7a6447dd0e74ec",
- "revisionTime": "2016-12-08T18:13:25Z"
- },
- {
- "checksumSHA1": "oRfTuL23MIBG2nCwjweTJz4Eiqg=",
- "path": "gopkg.in/square/go-jose.v2",
- "revision": "730df5f748271903322feb182be83b43ebbbe27d",
- "revisionTime": "2019-04-10T21:58:30Z"
- },
- {
- "checksumSHA1": "Ho5sr2GbiR8S35IRni7vC54d5Js=",
- "path": "gopkg.in/square/go-jose.v2/cipher",
- "revision": "730df5f748271903322feb182be83b43ebbbe27d",
- "revisionTime": "2019-04-10T21:58:30Z"
- },
- {
- "checksumSHA1": "JFun0lWY9eqd80Js2iWsehu1gc4=",
- "path": "gopkg.in/square/go-jose.v2/json",
- "revision": "730df5f748271903322feb182be83b43ebbbe27d",
- "revisionTime": "2019-04-10T21:58:30Z"
- },
- {
- "checksumSHA1": "GdsHg+yOsZtdMvD9HJFovPsqKec=",
- "path": "gopkg.in/src-d/go-billy.v4",
- "revision": "053dbd006f81a230434f712314aacfb540b52cc5",
- "revisionTime": "2017-11-27T19:20:57Z"
- },
- {
- "checksumSHA1": "yscejfasrttJfPq91pn7gArFb5o=",
- "path": "gopkg.in/src-d/go-billy.v4/helper/chroot",
- "revision": "053dbd006f81a230434f712314aacfb540b52cc5",
- "revisionTime": "2017-11-27T19:20:57Z"
- },
- {
- "checksumSHA1": "B7HAyGfl+ONIAvlHzbvSsLisx9o=",
- "path": "gopkg.in/src-d/go-billy.v4/helper/polyfill",
- "revision": "053dbd006f81a230434f712314aacfb540b52cc5",
- "revisionTime": "2017-11-27T19:20:57Z"
- },
- {
- "checksumSHA1": "1CnG3JdmIQoa6mE0O98BfymLmuM=",
- "path": "gopkg.in/src-d/go-billy.v4/osfs",
- "revision": "053dbd006f81a230434f712314aacfb540b52cc5",
- "revisionTime": "2017-11-27T19:20:57Z"
- },
- {
- "checksumSHA1": "lo42NuhQJppy2ne/uwPR2T9BSPY=",
- "path": "gopkg.in/src-d/go-billy.v4/util",
- "revision": "053dbd006f81a230434f712314aacfb540b52cc5",
- "revisionTime": "2017-11-27T19:20:57Z"
- },
- {
- "checksumSHA1": "ydjzL2seh3M8h9svrSDV5y/KQJU=",
- "path": "gopkg.in/src-d/go-git.v4",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "TSoIlaADKlw3Zx0ysCCBn6kyXNE=",
- "path": "gopkg.in/src-d/go-git.v4/config",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "B2OLPJ4wnJIM2TMjTyzusYluUeI=",
- "path": "gopkg.in/src-d/go-git.v4/internal/revision",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "o9YH41kQMefVGUS7d3WWSLLhIRk=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "BrsKLhmB0BtaMY+ol1oglnHhvrs=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/cache",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "pHPMiAzXG/TJqTLEKj2SHjxX4zs=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/filemode",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "UGIM9BX7w3MhiadsuN6f8Bx0VZU=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/format/config",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "L1H7nPf65//6nQGt3Lzq16vLD8w=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/format/diff",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "87WhYdropmGA4peZOembY5hEgq8=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/format/gitignore",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "G0TX3efLdk7noo/n1Dt9Tzempig=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/format/idxfile",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "q7HtzrSzVE9qN5N3QOxkLFcZI1U=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/format/index",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "0IxJpGMfdnr3cuuVE59u+1B5n9o=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/format/objfile",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "LJnyldAM69WmMXW5avaEeSScKTU=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/format/packfile",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "T8efjPxCKp23RvSBI51qugHzgxw=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/format/pktline",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "97LEL3gxgDWPP/UlRHMfKb5I0RA=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/object",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "PQmY1mHiPdNBNrh3lESZe3QH36c=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/protocol/packp",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "JjHHYoWDYf0H//nP2FIS05ZLgj8=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/protocol/packp/capability",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "wVfbzV5BNhjW/HFFJuTCjkPSJ5M=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/protocol/packp/sideband",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "m8nTTRFD7kmX9nT5Yfr9lqabR4s=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/revlist",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "Xito+BwVCMpKrhcvgz5wU+MRmEo=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/storer",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "AVSX04sTj3cBv1muAmIbPE9D9FY=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/transport",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "cmOntUALmiRvvblEXAQXNO4Oous=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/transport/client",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "gaKy+c/OjPQFLhENnSAFEZUngok=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/transport/file",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "chcAwbm6J5uXXn6IV58+G6RKCjU=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/transport/git",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "m9TNeIIGUBdZ0qdSl5Xa/0TIvfo=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/transport/http",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "6asrmcjb98FpRr83ICCODXdGWdE=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/transport/internal/common",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "MGiWWrsy8iQ5ZdCXEN2Oc4oprCk=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/transport/server",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "vat8YhxXGXNcg8HvCDfHAR6BcL0=",
- "path": "gopkg.in/src-d/go-git.v4/plumbing/transport/ssh",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "FlVLBdu4cjlXj9zjRRNDurRLABU=",
- "path": "gopkg.in/src-d/go-git.v4/storage",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "IpSxC31PynwJBajOaHR7gtnVc7I=",
- "path": "gopkg.in/src-d/go-git.v4/storage/filesystem",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "OaZO6dgvn6PMvezw0bYQUGLSrF0=",
- "path": "gopkg.in/src-d/go-git.v4/storage/filesystem/internal/dotgit",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "jPRm9YqpcJzx4oasd6PBdD33Dgo=",
- "path": "gopkg.in/src-d/go-git.v4/storage/memory",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "AzdUpuGqSNnNK6DgdNjWrn99i3o=",
- "path": "gopkg.in/src-d/go-git.v4/utils/binary",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "vniUxB6bbDYazl21cOfmhdZZiY8=",
- "path": "gopkg.in/src-d/go-git.v4/utils/diff",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "cspCXRxvzvoNOEUB7wRgOKYrVjQ=",
- "path": "gopkg.in/src-d/go-git.v4/utils/ioutil",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "shsY2I1OFbnjopNWF21Tkfx+tac=",
- "path": "gopkg.in/src-d/go-git.v4/utils/merkletrie",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "QiHHx1Qb/Vv4W6uQb+mJU2zMqLo=",
- "path": "gopkg.in/src-d/go-git.v4/utils/merkletrie/filesystem",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "M+6y9mdBFksksEGBceBh9Se3W7Y=",
- "path": "gopkg.in/src-d/go-git.v4/utils/merkletrie/index",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "7eEw/xsSrFLfSppRf/JIt9u7lbU=",
- "path": "gopkg.in/src-d/go-git.v4/utils/merkletrie/internal/frame",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "qCb9d3cwnPHVLqS/U9NAzK+1Ptg=",
- "path": "gopkg.in/src-d/go-git.v4/utils/merkletrie/noder",
- "revision": "bf3b1f1fb9e0a04d0f87511a7ded2562b48a19d8",
- "revisionTime": "2018-01-08T13:05:52Z"
- },
- {
- "checksumSHA1": "I4c3qsEX8KAUTeB9+2pwVX/2ojU=",
- "path": "gopkg.in/warnings.v0",
- "revision": "ec4a0fea49c7b46c2aeb0b51aac55779c607e52b",
- "revisionTime": "2017-11-15T19:30:34Z"
- },
- {
- "checksumSHA1": "qOmvuDm+F+2nQQecUZBVkZrTn6Y=",
- "path": "gopkg.in/yaml.v2",
- "revision": "d670f9405373e636a5a2765eea47fac0c9bc91a4",
- "revisionTime": "2018-01-09T11:43:31Z"
- },
- {
- "checksumSHA1": "rBIcwbUjE9w1aV0qh7lAL1hcxCQ=",
- "path": "rsc.io/getopt",
- "revision": "20be20937449f18bb9967c10d732849fb4401e63",
- "revisionTime": "2017-08-11T00:05:52Z"
- }
- ],
- "rootPath": "git.curoverse.com/arvados.git"
-}